Re: DIH Blob data

2014-11-12 Thread stockii
I had a similar problem and didn't find any solution to use the fields in a JSON
blob for a filter ... not with DIH.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/DIH-Blob-data-tp4168896p4168925.html
Sent from the Solr - User mailing list archive at Nabble.com.


Problems after upgrade 4.10.1 -> 4.10.2

2014-11-12 Thread Thomas Lamy

Hi there!

As we got bitten by https://issues.apache.org/jira/browse/SOLR-6530 on a 
regular basis, we started upgrading our 7-node cloud from 4.10.1 to 4.10.2.

The first node upgrade worked like a charm.
After upgrading the second node, two cores no longer come up and we get 
the following error:


ERROR - 2014-11-12 15:17:34.226; org.apache.solr.cloud.RecoveryStrategy; 
Recovery failed - trying again... (16) core=cams_shard1_replica4
ERROR - 2014-11-12 15:17:34.230; org.apache.solr.common.SolrException; 
Error while trying to recover. 
core=onlinelist_shard1_replica7:org.noggit.JSONParser$ParseException: JSON 
Parse Error: char=d,position=0 BEFORE='d' AFTER='own'

at org.noggit.JSONParser.err(JSONParser.java:223)
at org.noggit.JSONParser.next(JSONParser.java:622)
at org.noggit.JSONParser.nextEvent(JSONParser.java:663)
at org.noggit.ObjectBuilder.<init>(ObjectBuilder.java:44)
at org.noggit.ObjectBuilder.getVal(ObjectBuilder.java:37)
at 
org.apache.solr.common.cloud.ZkStateReader.fromJSON(ZkStateReader.java:129)
at 
org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryStateObject(ZkController.java:1925)
at 
org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryState(ZkController.java:1890)

at org.apache.solr.cloud.ZkController.publish(ZkController.java:1071)
at org.apache.solr.cloud.ZkController.publish(ZkController.java:1041)
at org.apache.solr.cloud.ZkController.publish(ZkController.java:1037)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:355)
at 
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)


Any hint on how to solve this? Google didn't reveal anything useful...


Kind regards
Thomas

--
Thomas Lamy
Cytainment AG & Co KG
Nordkanalstrasse 52
20097 Hamburg

Tel.: +49 (40) 23 706-747
Fax: +49 (40) 23 706-139

Sitz und Registergericht Hamburg
HRA 98121
HRB 86068
Ust-ID: DE213009476



Re: Problems after upgrade 4.10.1 -> 4.10.2

2014-11-12 Thread Thomas Lamy

On 12.11.2014 15:29, Thomas Lamy wrote:

[...]


Just switched to INFO loglevel:

INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.RecoveryStrategy; 
Publishing state of core onlinelist_shard1_replica7 as recovering, 
leader is http://solr-bc1-blade2:8080/solr/onlinelist_shard1_replica2/ 
and I am http://solr-bc1-blade3:8080/solr/onlinelist_shard1_replica7/
INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.RecoveryStrategy; 
Publishing state of core cams_shard1_replica4 as recovering, leader is 
http://solr-bc1-blade2:8080/solr/cams_shard1_replica2/ and I am 
http://solr-bc1-blade3:8080/solr/cams_shard1_replica4/
INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.ZkController; 
publishing core=onlinelist_shard1_replica7 state=recovering 
collection=onlinelist
INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.ZkController; 
publishing core=cams_shard1_replica4 state=recovering collection=cams
ERROR - 2014-11-12 15:30:31.564; org.apache.solr.common.SolrException; 
Error while trying to recover. 
core=cams_shard1_replica4:org.noggit.JSONParser$ParseException: JSON Parse 
Error: char=d,position=0 BEFORE='d' AFTER='own'
ERROR - 2014-11-12 15:30:31.564; org.apache.solr.common.SolrException; 
Error while trying to recover. 
core=onlinelist_shard1_replica7:org.noggit.JSONParser$ParseException: JSON 
Parse Error: char=d,position=0 BEFORE='d' AFTER='own'
ERROR - 2014-11-12 15:30:31.564; org.apache.solr.cloud.RecoveryStrategy; 
Recovery failed - trying again... (5) core=cams_shard1_replica4
ERROR - 2014-11-12 15:30:31.564; org.apache.solr.cloud.RecoveryStrategy; 
Recovery failed - trying again... (5) core=onlinelist_shard1_replica7
INFO  - 2014-11-12 15:30:31.564; org.apache.solr.cloud.RecoveryStrategy; 
Wait 60.0 seconds before trying to recover again (6)
INFO  - 2014-11-12 15:30:31.564; org.apache.solr.cloud.RecoveryStrategy; 
Wait 60.0 seconds before trying to recover again (6)


The leader for both collections (solr-bc1-blade2) is still on 4.10.1.
As no special instructions were given in the release notes and it's a 
minor upgrade, we assumed there would be no backward-compatibility issues 
and planned to upgrade one node after the other.


Did that provide more insight?

--
Thomas Lamy
Cytainment AG & Co KG
Nordkanalstrasse 52
20097 Hamburg

Tel.: +49 (40) 23 706-747
Fax: +49 (40) 23 706-139

Sitz und Registergericht Hamburg
HRA 98121
HRB 86068
Ust-ID: DE213009476



Re: Problems after upgrade 4.10.1 -> 4.10.2

2014-11-12 Thread Shalin Shekhar Mangar
Hi Thomas,

You're right, there's a back-compat break here. I'll open an issue.

On Wed, Nov 12, 2014 at 9:37 AM, Thomas Lamy t.l...@cytainment.de wrote:

 [...]



-- 
Regards,
Shalin Shekhar Mangar.


Re: Problems after upgrade 4.10.1 -> 4.10.2

2014-11-12 Thread Shalin Shekhar Mangar
I opened https://issues.apache.org/jira/browse/SOLR-6732

On Wed, Nov 12, 2014 at 12:29 PM, Shalin Shekhar Mangar 
shalinman...@gmail.com wrote:

 [...]



-- 
Regards,
Shalin Shekhar Mangar.


Re: SOLRJ Atomic updates of String field

2014-11-12 Thread Anurag Sharma
I understand the query now.
Atomic updates and optimistic concurrency are independent in Solr 5.
I'm not sure about version 4.2; if they are combined in that version, a
_version_ field needs to be passed with every update. The atomic/partial
update will succeed if the version in the request matches the indexed doc;
otherwise the response will have HTTP error code 409.

You can try passing the _version_ of the indexed doc during the update.

It would also be good to add a unit test in Solr for partial updates, which
currently seems to be missing.
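As a sketch of the request shape this implies (plain Java maps standing in for the JSON/SolrJ payload; the field name "description" and the helper class are illustrative, not an existing API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the document shape for an atomic update guarded by optimistic
// concurrency. Plain maps stand in for the JSON body a client would send.
// Solr rejects the update with HTTP 409 if _version_ does not match the
// currently indexed document.
public class AtomicUpdateSketch {
    static Map<String, Object> buildUpdate(String id, long version, String newText) {
        Map<String, Object> setOp = new HashMap<>();
        setOp.put("set", newText);            // the atomic "set" operation

        Map<String, Object> doc = new HashMap<>();
        doc.put("id", id);
        doc.put("_version_", version);        // version read from the indexed doc
        doc.put("description", setOp);
        return doc;
    }

    public static void main(String[] args) {
        System.out.println(buildUpdate("doc1", 1484575334864257024L, "updated text"));
    }
}
```

With SolrJ the same nested-map value would be set on a SolrInputDocument field; the key point is that the operation map ({"set": ...}) wraps the new value rather than replacing it directly.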

On Wed, Nov 12, 2014 at 1:00 PM, Ahmet Arslan iori...@yahoo.com.invalid
wrote:

 Hi Bbarani,

 Partial update solrJ example can be found in :
 http://find.searchhub.org/document/5b1187abfcfad33f

 Ahmet



 On Tuesday, November 11, 2014 8:51 PM, bbarani bbar...@gmail.com wrote:
 I am using the below code to do partial update (in SOLR 4.2)

 Map<String, Object> partialUpdate = new HashMap<String, Object>();
 partialUpdate.put("set", value);
 doc.setField("description", partialUpdate);
 server.add(doc);
 server.commit();

 I am seeing the below description value with {set=...}; any idea why this
 is getting added?

 <str name="description">
   {set=The iPhone 6 Plus features a 5.5-inch retina HD display, the A8 chip
   for faster processing and longer battery life, the M8 motion coprocessor to
   track speed, distance and elevation, and with an 8MP iSight camera, you can
   record 1080p HD Video at 60 FPS!}
 </str>



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/SOLRJ-Atomic-updates-of-String-field-tp4168809.html
 Sent from the Solr - User mailing list archive at Nabble.com.




Re: DIH Blob data

2014-11-12 Thread Anurag Sharma
A BLOB is a non-searchable field, so there is no benefit in storing it in
Solr. Any external key-value store can be used to store the blob, and a
reference to the blob can be stored as a string field in Solr.
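A minimal sketch of that pattern, with a HashMap standing in for the external key-value store (the field name "blobRef" and the helper are illustrative assumptions, not an existing API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// The blob lives in an external key-value store (simulated by a HashMap);
// only a string reference to it goes into the Solr document.
public class BlobReferenceSketch {
    static final Map<String, byte[]> blobStore = new HashMap<>();

    static Map<String, Object> docWithBlobRef(String docId, byte[] blob) {
        String key = UUID.randomUUID().toString();
        blobStore.put(key, blob);              // blob goes to the external store

        Map<String, Object> solrDoc = new HashMap<>();
        solrDoc.put("id", docId);
        solrDoc.put("blobRef", key);           // only the reference is indexed
        return solrDoc;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = docWithBlobRef("doc1", new byte[] {1, 2, 3});
        // At retrieval time, fetch the blob back by the stored reference.
        System.out.println(blobStore.get(doc.get("blobRef")).length);  // prints 3
    }
}
```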

On Wed, Nov 12, 2014 at 5:56 PM, stockii stock.jo...@googlemail.com wrote:

 [...]




Re: DIH Blob data

2014-11-12 Thread Michael Sokolov
We routinely store images and PDFs in Solr. There *is* a benefit: you 
don't need to manage another storage system, you don't have to worry 
about Solr getting out of sync with the other system, you can use Solr 
replication for all your assets, etc.


I don't use DIH, so personally I don't care whether it handles blobs, 
but it does seem like a natural extension for a system that indexes data 
from SQL into Solr.


-Mike


On 11/12/2014 01:31 PM, Anurag Sharma wrote:

[...]





Re: DIH Blob data

2014-11-12 Thread Jeon Woosung
How about this?

First, define a field for filter query. It should be multivalued.

Second, implement a transformer to extract the JSON dynamic fields, and put
the dynamic fields into the Solr field.

For example,

<fieldType name="terms" class="string" multiValued="true"/>

Data : {a:1,b:2,c:3}

You can split the data into "a:1", "b:2", "c:3", and put them into terms.

And then you can use filter query like fq=terms:a:1
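A rough sketch of the flattening step such a transformer would perform (naive string handling for illustration only; a real DIH transformer would implement the Transformer interface and use a proper JSON parser, and the colon inside a term would likely need escaping in the fq, e.g. fq=terms:a\:1):

```java
import java.util.Arrays;
import java.util.List;

// Flatten a JSON blob like {a:1,b:2,c:3} into one "key:value" term per
// entry, suitable for a multivalued string field that can be filtered on.
public class JsonTermsSketch {
    static List<String> flatten(String jsonBlob) {
        String body = jsonBlob.replaceAll("[{}\"]", "");  // strip braces and quotes
        return Arrays.asList(body.split(","));            // one term per key:value pair
    }

    public static void main(String[] args) {
        System.out.println(flatten("{a:1,b:2,c:3}"));     // prints [a:1, b:2, c:3]
    }
}
```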
On Nov 13, 2014, 3:59 AM, Michael Sokolov msoko...@safaribooksonline.com wrote:

 [...]






Re: Problems after upgrade 4.10.1 -> 4.10.2

2014-11-12 Thread Anshum Gupta
Considering the impact, I think we should put this out as an announcement
on the 'news' section of the website warning people about this.

On Wed, Nov 12, 2014 at 12:33 PM, Shalin Shekhar Mangar 
shalinman...@gmail.com wrote:

 [...]





-- 
Anshum Gupta
http://about.me/anshumgupta


Re: Different ids for the same document in different replicas.

2014-11-12 Thread S.L
Thanks.

So the issue here is I already have a <uniqueKey>doctorId</uniqueKey>
defined in my schema.xml.

If along with that I also want the id field to be automatically
generated for each document, do I have to declare it as a uniqueKey as
well? I just tried the following setting without the uniqueKey for
id, and it's only generating blank ids for me.

*schema.xml*

<field name="id" type="string" indexed="true" stored="true"
       required="true" multiValued="false" />

*solrconfig.xml*

  <updateRequestProcessorChain name="uuid">
    <processor class="solr.UUIDUpdateProcessorFactory">
      <str name="fieldName">id</str>
    </processor>
    <processor class="solr.RunUpdateProcessorFactory" />
  </updateRequestProcessorChain>
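One thing worth checking with a chain like the one above: a named updateRequestProcessorChain only runs when it is selected for the request, so if it is never referenced, the UUID processor never fires and the id stays empty. Two common ways to activate it, sketched under the assumption that the chain is named "uuid" as above (whether this explains the blank ids reported here is untested):

```xml
<!-- Option 1: make the chain the default for all updates -->
<updateRequestProcessorChain name="uuid" default="true">
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

<!-- Option 2: select the chain per handler via update.chain -->
<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">uuid</str>
  </lst>
</requestHandler>
```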


On Tue, Nov 11, 2014 at 7:47 PM, Garth Grimm 
garthgr...@averyranchconsulting.com wrote:

 Looking a little deeper, I did find this about UUIDField


 http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/schema/UUIDField.html

 "NOTE: Configuring a UUIDField instance with a default value of "NEW" is
 not advisable for most users when using SolrCloud (and not possible if the
 UUID value is configured as the unique key field) since the result will be
 that each replica of each document will get a unique UUID value. Using
 UUIDUpdateProcessorFactory
 http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/update/processor/UUIDUpdateProcessorFactory.html
 to generate UUID values when documents are added is recommended instead."

 That might describe the behavior you saw.  And the use of
 UUIDUpdateProcessorFactory to auto generate ID’s seems to be covered well
 here:


 http://solr.pl/en/2013/07/08/automatically-generate-document-identifiers-solr-4-x/

 Though I’ve not actually tried that process before.

 On Nov 11, 2014, at 7:39 PM, Garth Grimm 
 garthgr...@averyranchconsulting.com wrote:

 “uuid” isn’t an out of the box field type that I’m familiar with.

 Generally, I’d stick with the out of the box advice of the schema.xml
 file, which includes things like….

   <!-- Only remove the id field if you have a very good reason to. While
        not strictly required, it is highly recommended. A uniqueKey is
        present in almost all Solr installations. See the uniqueKey
        declaration below where uniqueKey is set to "id".
   -->
   <field name="id" type="string" indexed="true" stored="true"
          required="true" multiValued="false" />

 and…

 <!-- Field to use to determine and enforce document uniqueness.
      Unless this field is marked with required="false", it will be a
      required field
 -->
 <uniqueKey>id</uniqueKey>

 If you’re creating some key/value pair with uuid as the key as you feed
 documents in, and you know that the uuid values you’re creating are unique,
 just change the field name and unique key name from ‘id’ to ‘uuid’.  Or
 change the key name you send in from ‘uuid’ to ‘id’.

 On Nov 11, 2014, at 7:18 PM, S.L simpleliving...@gmail.com wrote:

 Hi All,

 I am seeing interesting behavior on the replicas. I have a single
 shard and 6 replicas on SolrCloud 4.10.1, and only a small number of
 documents (~375) that are replicated across the six replicas.

 The interesting thing is that the same  document has a different id in
 each one of those replicas .

 This is causing the fq(id:xyz) type queries to fail, depending on
 which replica the query goes to.

 I have specified the id field in the following manner in schema.xml;
 is it the right way to specify an auto-generated id in SolrCloud?

   <field name="id" type="uuid" indexed="true" stored="true"
          required="true" multiValued="false" />


 Thanks.





Re: Different ids for the same document in different replicas.

2014-11-12 Thread S.L
Just tried adding <uniqueKey>id</uniqueKey> while keeping the id type as
"string"; only blank ids are being generated. It looks like the id is
auto-generated only if the id is of type "uuid", but in SolrCloud that
id will be unique per replica.

Is there a way to generate a unique id in SolrCloud without using the
uuid type, and without ending up with a per-replica unique id?

The uuid in question is of this type:

<fieldType name="uuid" class="solr.UUIDField" indexed="true" />


On Wed, Nov 12, 2014 at 6:20 PM, S.L simpleliving...@gmail.com wrote:

 Thanks.

 So the issue here is I already have a uniqueKeydoctorIduniquekey
 defined in my schema.xml.

 If along with that I also want the id/id field to be automatically
 generated for each document do I have to declare it as a uniquekey as
 well , because I just tried the following setting without the uniqueKey for
 id and its only generating blank ids for me.

 *schema.xml*

 field name=id type=string indexed=true stored=true
 required=true multiValued=false /

 *solrconfig.xml*

   updateRequestProcessorChain name=uuid

 processor class=solr.UUIDUpdateProcessorFactory
 str name=fieldNameid/str
 /processor
 processor class=solr.RunUpdateProcessorFactory /
 /updateRequestProcessorChain


 On Tue, Nov 11, 2014 at 7:47 PM, Garth Grimm 
 garthgr...@averyranchconsulting.com wrote:

 Looking a little deeper, I did find this about UUIDField


 http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/schema/UUIDField.html

 NOTE: Configuring a UUIDField instance with a default value of NEW is
 not advisable for most users when using SolrCloud (and not possible if the
 UUID value is configured as the unique key field) since the result will be
 that each replica of each document will get a unique UUID value. Using
 UUIDUpdateProcessorFactory
 http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/update/processor/UUIDUpdateProcessorFactory.html
 to generate UUID values when documents are added is recomended instead.”

 That might describe the behavior you saw.  And the use of
 UUIDUpdateProcessorFactory to auto generate ID’s seems to be covered well
 here:


 http://solr.pl/en/2013/07/08/automatically-generate-document-identifiers-solr-4-x/

 Though I’ve not actually tried that process before.

 On Nov 11, 2014, at 7:39 PM, Garth Grimm 
 garthgr...@averyranchconsulting.commailto:
 garthgr...@averyranchconsulting.com wrote:

 “uuid” isn’t an out of the box field type that I’m familiar with.

 Generally, I’d stick with the out of the box advice of the schema.xml
 file, which includes things like….

   <!-- Only remove the id field if you have a very good reason to.
        While not strictly required, it is highly recommended. A uniqueKey
        is present in almost all Solr installations. See the uniqueKey
        declaration below where uniqueKey is set to id.
     -->
   <field name="id" type="string" indexed="true" stored="true"
          required="true" multiValued="false" />

 and…

 <!-- Field to use to determine and enforce document uniqueness.
      Unless this field is marked with required="false", it will be a
      required field
   -->
 <uniqueKey>id</uniqueKey>

 If you’re creating some key/value pair with uuid as the key as you feed
 documents in, and you know that the uuid values you’re creating are unique,
 just change the field name and unique key name from ‘id’ to ‘uuid’.  Or
 change the key name you send in from ‘uuid’ to ‘id’.

 On Nov 11, 2014, at 7:18 PM, S.L simpleliving...@gmail.com wrote:

 Hi All,

 I am seeing interesting behavior on the replicas , I have a single
 shard and 6 replicas and on SolrCloud 4.10.1 . I  only have a small
 number of documents ~375 that are replicated across the six replicas .

 The interesting thing is that the same  document has a different id in
 each one of those replicas .

 This is causing the fq(id:xyz) type queries to fail, depending on
 which replica the query goes to.

 I have specified the id field in the following manner in schema.xml;
 is it the right way to specify an auto-generated id in SolrCloud?

   <field name="id" type="uuid" indexed="true" stored="true"
          required="true" multiValued="false" />


 Thanks.






Re: Different ids for the same document in different replicas.

2014-11-12 Thread Garth Grimm
You mention you already have a unique Key identified for the data you’re 
storing in Solr:

 <uniqueKey>doctorId</uniqueKey>

If that’s the field you’re using to uniquely identify each thing you’re storing 
in the solr index, why do you want to have an id field that is populated with 
some random value?  You’ll be using the doctorId field as the key, and the id 
field will have no real meaning in your Data Model.

If doctorId actually isn’t unique to each item you plan on storing in Solr, is 
there any other field that is?  If so, use that field as your unique key.

Remember, these uniqueKeys are usually used for routing documents to shards in 
SolrCloud, and are used to ensure that later updates of the same “thing” 
overwrite the old one, rather than generating multiple copies.  So the keys 
really should be something derived from the data you're storing.  I'm not sure 
I understand why you would want to have the key randomly generated.
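As a toy illustration of that routing point (this is not Solr's actual algorithm — SolrCloud's compositeId router hashes the uniqueKey with MurmurHash3 over hash ranges — just the principle): the target shard is a deterministic function of the key, so a key that differs per replica would break both routing and overwrite semantics.

```python
def shard_for(key: str, num_shards: int) -> int:
    """Toy router: map a document key to a shard deterministically.

    Simplified stand-in for Solr's MurmurHash3-range routing --
    illustrative only.
    """
    # Simple stable string hash (not Python's built-in hash(), which is
    # randomized between interpreter runs).
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h % num_shards

# The same key always routes to the same shard, so a later update of the
# same document lands on (and overwrites) the earlier copy.
assert shard_for("doctor-42", 4) == shard_for("doctor-42", 4)
```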

 On Nov 12, 2014, at 6:39 PM, S.L simpleliving...@gmail.com wrote:
 
 Just tried adding <uniqueKey>id</uniqueKey> while keeping the id type as
 string; only blank ids are being generated. It looks like the id is being
 auto-generated only if the id is set to type uuid, but in the case of
 SolrCloud this id will be unique per replica.
 
 Is there a way to generate a unique id in SolrCloud without
 using the uuid type, or without having a per-replica unique id?
 
 The uuid in question is of this type:
 
 <fieldType name="uuid" class="solr.UUIDField" indexed="true" />
 
 
 On Wed, Nov 12, 2014 at 6:20 PM, S.L simpleliving...@gmail.com wrote:
 
  Thanks.
 
  So the issue here is I already have <uniqueKey>doctorId</uniqueKey>
  defined in my schema.xml.
 
  If, along with that, I also want the id field to be automatically
  generated for each document, do I have to declare it as a uniqueKey as
  well? Because I just tried the following setting without the uniqueKey for
  id, and it's only generating blank ids for me.
 

Re: Different ids for the same document in different replicas.

2014-11-12 Thread Meraj A. Khan
Sorry, it's actually doctorUrl, so I don't want to use doctorUrl as a lookup
mechanism, because URLs can have special characters that can cause issues
with Solr lookups.

I guess I should rephrase my question to: how do I auto-generate the unique
keys in the id field when using SolrCloud?
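One pattern sometimes used in this situation (an illustrative sketch, not something proposed in the thread): keep doctorUrl as the source of identity, but store a hash of it in the id field. URL special characters then never reach the key, and every shard and replica derives the same id for the same document.

```python
import hashlib

def doc_id_from_url(url: str) -> str:
    """Derive a stable, URL-safe document id by hashing the source URL.

    Hypothetical helper: the same URL always yields the same id, so
    re-indexing a document overwrites the previous copy instead of
    creating a duplicate.
    """
    return hashlib.sha1(url.encode("utf-8")).hexdigest()

# Build the id client-side before sending the document to Solr.
doc = {"doctorUrl": "http://example.com/doctors?id=42&city=NY"}
doc["id"] = doc_id_from_url(doc["doctorUrl"])
```

Because the id is computed before indexing rather than generated server-side, it is identical across replicas, unlike a UUIDField default of NEW.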

Re: Problems after upgrade 4.10.1 - 4.10.2

2014-11-12 Thread Jeon Woosung
You can migrate the ZooKeeper data manually:

1. Connect to ZooKeeper:
   - zkCli.sh -server <host>:<port>
2. Check the old data:
   - get /collections/<your collection name>/leader_initiated_recovery/<your shard name>


[zk: localhost:3181(CONNECTED) 25] get
/collections/collection1/leader_initiated_recovery/shard1
*down*
cZxid = 0xe4
ctime = Thu Nov 13 13:38:53 KST 2014
mZxid = 0xe4
mtime = Thu Nov 13 13:38:53 KST 2014
pZxid = 0xe4
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0


I guess that there is only a single word there, which is down.

3. Delete the old data:
   - delete /collections/<your collection name>/leader_initiated_recovery/<your shard name>

4. Create the new data in JSON form:
   - create /collections/<your collection name>/leader_initiated_recovery/<your shard name> {"state":"down"}

5. Restart the server.
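Putting the steps above together as one zkCli session (a sketch only; the collection and shard names are placeholders, and the znode contents should be verified on your own cluster before deleting anything):

```shell
zkCli.sh -server localhost:2181
# inside the zkCli shell:
get    /collections/collection1/leader_initiated_recovery/shard1   # old format: the bare word "down"
delete /collections/collection1/leader_initiated_recovery/shard1
create /collections/collection1/leader_initiated_recovery/shard1 {"state":"down"}
quit
```

Then restart the affected Solr node so it re-reads the znode in the format 4.10.2 expects.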



On Thu, Nov 13, 2014 at 7:42 AM, Anshum Gupta ans...@anshumgupta.net
wrote:

 Considering the impact, I think we should put this out as an announcement
 on the 'news' section of the website warning people about this.

 On Wed, Nov 12, 2014 at 12:33 PM, Shalin Shekhar Mangar 
 shalinman...@gmail.com wrote:

  I opened https://issues.apache.org/jira/browse/SOLR-6732
 
  On Wed, Nov 12, 2014 at 12:29 PM, Shalin Shekhar Mangar 
  shalinman...@gmail.com wrote:
 
   Hi Thomas,
  
   You're right, there's a back-compat break here. I'll open an issue.
  
   On Wed, Nov 12, 2014 at 9:37 AM, Thomas Lamy t.l...@cytainment.de
  wrote:
  
   Am 12.11.2014 um 15:29 schrieb Thomas Lamy:
  
   Hi there!
  
   As we got bitten by https://issues.apache.org/jira/browse/SOLR-6530
 on
   a regular basis, we started upgrading our 7 node cloud from 4.10.1 to
   4.10.2.
   The first node upgrade worked like a charm.
   After upgrading the second node, two cores no longer come up and we
 get
   the following error:
  
   ERROR - 2014-11-12 15:17:34.226;
  org.apache.solr.cloud.RecoveryStrategy;
   Recovery failed - trying again... (16) core=cams_shard1_replica4
   ERROR - 2014-11-12 15:17:34.230;
 org.apache.solr.common.SolrException;
    Error while trying to recover. core=onlinelist_shard1_replica7
    org.noggit.JSONParser$ParseException: JSON Parse Error:
    char=d,position=0 BEFORE='d' AFTER='own'
   at org.noggit.JSONParser.err(JSONParser.java:223)
   at org.noggit.JSONParser.next(JSONParser.java:622)
   at org.noggit.JSONParser.nextEvent(JSONParser.java:663)
   at org.noggit.ObjectBuilder.init(ObjectBuilder.java:44)
   at org.noggit.ObjectBuilder.getVal(ObjectBuilder.java:37)
   at org.apache.solr.common.cloud.ZkStateReader.fromJSON(
   ZkStateReader.java:129)
   at
  org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryStat
   eObject(ZkController.java:1925)
   at
  org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryStat
   e(ZkController.java:1890)
   at org.apache.solr.cloud.ZkController.publish(
   ZkController.java:1071)
   at org.apache.solr.cloud.ZkController.publish(
   ZkController.java:1041)
   at org.apache.solr.cloud.ZkController.publish(
   ZkController.java:1037)
   at org.apache.solr.cloud.RecoveryStrategy.doRecovery(
   RecoveryStrategy.java:355)
   at org.apache.solr.cloud.RecoveryStrategy.run(
   RecoveryStrategy.java:235)
  
   Any hint on how to solve this? Google didn't reveal anything
 useful...
  
  
   Kind regards
   Thomas
  
Just switched to INFO loglevel:
  
   INFO  - 2014-11-12 15:30:31.563;
 org.apache.solr.cloud.RecoveryStrategy;
   Publishing state of core onlinelist_shard1_replica7 as recovering,
  leader
   is http://solr-bc1-blade2:8080/solr/onlinelist_shard1_replica2/ and I
  am
   http://solr-bc1-blade3:8080/solr/onlinelist_shard1_replica7/
   INFO  - 2014-11-12 15:30:31.563;
 org.apache.solr.cloud.RecoveryStrategy;
   Publishing state of core cams_shard1_replica4 as recovering, leader is
   http://solr-bc1-blade2:8080/solr/cams_shard1_replica2/ and I am
   http://solr-bc1-blade3:8080/solr/cams_shard1_replica4/
   INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.ZkController;
   publishing core=onlinelist_shard1_replica7 state=recovering
   collection=onlinelist
   INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.ZkController;
   publishing core=cams_shard1_replica4 state=recovering collection=cams
   ERROR - 2014-11-12 15:30:31.564; org.apache.solr.common.SolrException;
    Error while trying to recover. core=cams_shard1_replica4
    org.noggit.JSONParser$ParseException: JSON Parse Error: char=d,position=0
    BEFORE='d' AFTER='own'
   ERROR - 2014-11-12 15:30:31.564; org.apache.solr.common.SolrException;
    Error while trying to recover. core=onlinelist_shard1_replica7
    org.noggit.JSONParser$ParseException: JSON Parse Error:
    char=d,position=0 BEFORE='d' AFTER='own'
   ERROR - 2014-11-12 15:30:31.564;
 

Can we query on _version_field ?

2014-11-12 Thread S.L
Hi All,

We know that the _version_ field is a mandatory field in a SolrCloud schema.xml;
it is expected to be of type long, and it also seems to have a unique value in a
collection.

However, a query of the form
http://server1.mydomain.com:7344/solr/collection1/select/?q=*:*&fq=%28_version_:148463254894438%29&wt=json
does not seem to return any records. Can we query on the _version_ field in
the schema.xml?

Thank you.
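For reference, the filter-query form itself is syntactically fine once the parameter separators are in place; a sketch (host, collection, and the long value are placeholders) that first fetches current values, since each document's _version_ changes on every update and a stale value will match nothing:

```shell
# List current ids with their _version_ values first...
curl 'http://localhost:8983/solr/collection1/select?q=*:*&fl=id,_version_&wt=json'
# ...then filter on one of the values actually returned:
curl 'http://localhost:8983/solr/collection1/select?q=*:*&fq=_version_:148463254894438&wt=json'
```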