[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-03-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923605#comment-15923605
 ] 

Mark Miller commented on SOLR-10032:


I've recorded a talk on this topic that also briefly demos creating a test 
report: https://youtu.be/A2VXU-JVoGY

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results 
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 
> iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults 
> 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 
> iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults 
> 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 
> iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults 
> 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 
> iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults 
> 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running 
> 100 iterations, 12 at a time, 8 cores.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time answering some basic questions:
> Which tests are flakey right now? Which test failures actually affect devs 
> most? Did I break it? Was that test already flakey? Is that test still 
> flakey? What are our worst tests right now? Is that test getting better or 
> worse?
> We really need a way to see exactly which tests are the problem, not because 
> of OS or environmental issues but because of basic test quality issues: 
> which tests are flakey, and how flakey they are at any point in time.
> Reports:
> https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing
> 01/24/2017 - 
> https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
> 02/01/2017 - 
> https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
> 02/08/2017 - 
> https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
> 02/14/2017 - 
> https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing
> 02/17/2017 - 
> https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing
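The per-test flakiness measure the reports above are built around can be sketched as a failure rate over beasting iterations. This is an illustrative sketch only, not the actual report tooling; the class, test names, and data are made up.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: given pass/fail outcomes from N beasting iterations,
// compute each test's failure rate ("how flakey is it right now?").
public class FlakinessReport {
    static Map<String, Double> failureRates(Map<String, boolean[]> runs) {
        Map<String, Double> rates = new TreeMap<>(); // sorted for stable output
        for (var e : runs.entrySet()) {
            int fails = 0;
            for (boolean passed : e.getValue()) {
                if (!passed) fails++;
            }
            rates.put(e.getKey(), (double) fails / e.getValue().length);
        }
        return rates;
    }

    public static void main(String[] args) {
        var runs = Map.of(
            "TestA", new boolean[]{true, true, false, true}, // fails 1 of 4
            "TestB", new boolean[]{true, true, true, true});
        System.out.println(failureRates(runs)); // {TestA=0.25, TestB=0.0}
    }
}
```

Comparing such rates across commit points (the attached PDFs above) is what lets you answer "is that test getting better or worse?".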



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9835) Create another replication mode for SolrCloud

2017-03-13 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9835:
---
Attachment: SOLR-9835.patch

Latest patch for this ticket. Will commit it soon.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is state machine replication: 
> replicas start in the same initial state, and each input is distributed 
> across the replicas so that all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its down 
> time, it has to download the entire index from its leader.
> So we will create another replication mode for SolrCloud called state 
> transfer, which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IW; the other replicas just store the 
> update in their UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> From a CAP point of view, this ticket aims to promise end users a 
> distributed system with:
> - Partition tolerance.
> - Weak consistency for normal queries: the cluster can serve stale data. 
> This happens when the leader finishes a commit while a slave is still 
> fetching the latest segments. This period is at most {{pollInterval + time 
> to fetch the latest segments}}.
> - Consistency for RTG: if we *do not use DBQs*, replicas will be consistent 
> with the master, just like the original SolrCloud mode.
> - Weak availability: just like the original SolrCloud mode. If a leader goes 
> down, clients must wait until a new leader is elected.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}
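The core rule of the proposed mode can be sketched in a few lines. This is an illustrative sketch with made-up classes, not Solr's real UpdateLog/IndexWriter code: every replica buffers the raw update, but only the leader applies it to the index; followers later pull finished segments instead.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the state-transfer rule described above:
// all replicas record the update, only the leader indexes it.
public class StateTransferSketch {
    final boolean isLeader;
    final List<String> updateLog = new ArrayList<>(); // stands in for UpdateLog
    final List<String> index = new ArrayList<>();     // stands in for the IndexWriter

    StateTransferSketch(boolean isLeader) {
        this.isLeader = isLeader;
    }

    void receiveUpdate(String doc) {
        updateLog.add(doc);     // every replica buffers the raw update
        if (isLeader) {
            index.add(doc);     // only the leader applies it to the index
        }
    }

    public static void main(String[] args) {
        StateTransferSketch leader = new StateTransferSketch(true);
        StateTransferSketch replica = new StateTransferSketch(false);
        leader.receiveUpdate("doc1");
        replica.receiveUpdate("doc1");
        // leader indexed it; the replica only logged it and will poll segments later
        System.out.println(leader.index.size() + " " + replica.index.size()); // 1 0
    }
}
```

Recovery then reduces to fetching the segments the replica is missing, which is where the "very fast recovery" claim comes from.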






[jira] [Commented] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-03-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923517#comment-15923517
 ] 

Yonik Seeley commented on SOLR-10273:
-

Is there a way to check this while building the Document from the 
SolrInputDocument instead? (It may be cheaper.)
For multi-valued fields, perhaps we should use the sum of the multiple 
field values?
As a generalization, we could also consider sorting by size, not just picking 
out the single largest field.

> Re-order largest field values last in Lucene Document
> -
>
> Key: SOLR-10273
> URL: https://issues.apache.org/jira/browse/SOLR-10273
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.5
>
> Attachments: SOLR_10273_DocumentBuilder_move_longest_to_last.patch
>
>
> (part of umbrella issue SOLR-10117)
> In Solr's {{DocumentBuilder}}, at the very end, we should move the field 
> value(s) associated with the largest field (assuming "stored") to be last.  
> Lucene's default stored value codec can avoid reading and decompressing  the 
> last field value when it's not requested.  (As of LUCENE-6898).
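The reordering described above can be sketched independently of Solr's actual DocumentBuilder internals. The helper below is illustrative only (the class and method names are not Solr's): it moves the single largest value to the end of a field list, gated on a minimum length, roughly mirroring the patch's behavior.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToIntFunction;

// Hypothetical sketch: move the largest element of a field list to the end,
// but only if it is at least minLength long. This is not Solr's real
// DocumentBuilder API, just the reordering idea in isolation.
public class MoveLargestLast {
    static <T> void moveLargestLast(List<T> fields, ToIntFunction<T> size, int minLength) {
        int maxIdx = -1;
        int maxSize = minLength - 1; // only fields of at least minLength qualify
        for (int i = 0; i < fields.size(); i++) {
            int s = size.applyAsInt(fields.get(i));
            if (s > maxSize) {
                maxSize = s;
                maxIdx = i;
            }
        }
        if (maxIdx >= 0) {
            fields.add(fields.remove(maxIdx)); // rotate the largest value to the end
        }
    }

    public static void main(String[] args) {
        List<String> values = new ArrayList<>(
            List.of("id:42", "title:short", "body:" + "x".repeat(2048)));
        moveLargestLast(values, String::length, 1024);
        System.out.println(values.get(values.size() - 1).startsWith("body:")); // true
    }
}
```

With this ordering, a codec that decompresses stored fields lazily can skip the large trailing value whenever it isn't requested.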






[jira] [Commented] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-03-13 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923512#comment-15923512
 ] 

David Smiley commented on SOLR-10273:
-

True; it's debatable... I nearly added a comment about being inclined to raise 
this minimum length to something higher, so I'm glad you brought it up.  That 
Lucene-side value might change in the future or based on a user-chosen codec; 
we needn't track it exactly. Also, just because the longest field is 1024 
doesn't mean the document overall is "small": theoretically there could be a 
ton of stored values instead of one particularly large one.  Perhaps change 
the default to 4KB?  Shrug.







[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6448 - Unstable!

2017-03-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6448/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"80000000-7fffffff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:59201/solr",   
"node_name":"127.0.0.1:59201_solr",   "state":"active",   
"leader":"true"}, "core_node2":{   
"core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:59196/solr",   
"node_name":"127.0.0.1:59196_solr",   "state":"down"}}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"MissingSegmentRecoveryTest_shard1_replica2",
          "base_url":"http://127.0.0.1:59201/solr",
          "node_name":"127.0.0.1:59201_solr",
          "state":"active",
          "leader":"true"},
        "core_node2":{
          "core":"MissingSegmentRecoveryTest_shard1_replica1",
          "base_url":"http://127.0.0.1:59196/solr",
          "node_name":"127.0.0.1:59196_solr",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([2EC0729D2256154C:7E95EA9E7B77A351]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+159) - Build # 19164 - Still Unstable!

2017-03-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19164/
Java: 32bit/jdk-9-ea+159 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
document count mismatch.  control=507 sum(shards)=506 cloudClient=506

Stack Trace:
java.lang.AssertionError: document count mismatch.  control=507 sum(shards)=506 
cloudClient=506
at 
__randomizedtesting.SeedInfo.seed([1619D3F3522B699E:9E4DEC29FCD70466]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1332)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:229)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-03-13 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923508#comment-15923508
 ] 

Michael Braun commented on SOLR-10273:
--

In LUCENE-6898 a comment says it doesn't have an impact if the last stored 
value is under 16K - should the value be higher than 1024 by default?







[jira] [Updated] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-03-13 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10273:

Attachment: SOLR_10273_DocumentBuilder_move_longest_to_last.patch

Here's a patch with a test.  It also incorporates a minimum length of 1024, 
overrideable via {{solr.docBuilder.minLengthToMoveLast}} (an internal, 
unsupported setting).  If no field is at least that long, it won't bother 
moving anything.  In tests I temporarily set this to 0 and no existing test 
broke, which is a good sign.  Anyone observing the ordering in which fields 
come back from Solr with a '*' might notice a change in ordering after this.  
Of course, people shouldn't depend on inter-field ordering.
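The overridable-threshold pattern mentioned above can be illustrated with a plain system-property read with a default. The property name comes from the comment; the class name and default here are only a sketch of the idea, not the patch's actual code.

```java
// Minimal illustration: read an internal, unsupported system property,
// falling back to 1024 when it is unset or unparsable.
public class MinLengthConfig {
    static int minLengthToMoveLast() {
        // Integer.getInteger returns the default when the property is absent
        return Integer.getInteger("solr.docBuilder.minLengthToMoveLast", 1024);
    }

    public static void main(String[] args) {
        System.setProperty("solr.docBuilder.minLengthToMoveLast", "4096");
        System.out.println(minLengthToMoveLast()); // 4096
    }
}
```

Setting the property to 0, as the tests did temporarily, makes every document eligible for reordering.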







Re: Moving the Ref Guide: Progress Update & Next Steps

2017-03-13 Thread Jan Høydahl
+1

I’m happy we can finally move on with this.
And I agree with Hoss that docs must be committed with code, or else the 
released git version will not contain the correct ref guide. We cannot rely on 
releasing the ref guide weeks after the code anymore, and we can’t hold up the 
release process and do tons of re-spins for simple adoc changes.

But that also makes it important that every committer is given good tools to 
make sure their edits look good. I hope Asciidoc is more standardised than 
Markdown; otherwise your choice of tooling may ultimately decide whether your 
edits look good or bad.

Would it be possible to add a JIRA bot that tries to apply the latest 
SOLR-.patch (like the Hadoop QA bot does, see 
https://issues.apache.org/jira/secure/ViewProfile.jspa?name=hadoopqa 
) and 
also, if the patch contains .adoc changes, verify and provide a preview of 
those changes right there in JIRA?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 14. mar. 2017 kl. 00.50 skrev Chris Hostetter :
> 
> 
> : *) should there be LICENCE on the Github repo if you want people to
> : play/experiment/contribute ideas?
> 
> I'm not sure that's really necessary at this point -- so far all of the 
> contributions have been from existing committers, and we (probably?) don't 
> want to take on any (new) significant contributions from general users 
> until we get it imported into the lucene-solr.git repo?
> 
> : *) This part is future, right? "Custom Java tooling will be used to
> : process the .adoc file metadata to build up navigation data files" I
> : can't find it in the repo, but maybe I am confused.
> 
> no -- it already exists.  See solr-ref-guide/build.xml and 
> solr-ref-guide/tools.  
> 
> That "tools" code is "long life" code that will exist for managing the 
> ref-guide even after the migration ... possibly somewhere in dev-tools I 
> would imagine? 
> 
> (stuff under "confluence-export/conversion-tools" will be thrown away)
> 
> 
> : *) Because "How will we provide search? Recommend probably indexing
> : generated HTML pages. Could use bin/post from Solr to recurse over the
> : HTML files and index them. In this case, we will need to figure out
> : where to host Solr." - this is slightly embarrassing
> 
> that would be a nice to have, but since we already rely on third parties 
> (search-lucene.com & find.searchhub.org and ) to provide solr indexes of 
> our website, relying on them to search the ref guide shouldn't be a deal 
> breaker / blocker)
> 
> : > 2) We need to decide on our policy for branches. I recall there was
> : > valid concern about the process around this when I first proposed the
> : > change. I'd like to iron that out as soon as we can since that will be
> : > a key part of our new process.
> : >
> : > From our discussion last summer, there are 2 potential approaches:
> : >
> : > a) Make all changes in 'master' (trunk) and backport to branches for
> : > releasing the content. We'd need to merge "backward" into upcoming
> : > release branch.
> : > b) Make all changes in branch_6x (or branch_7x, etc.) and only move
> : > things to master when they are only applicable to unreleased next
> : > major version. We'd merge 6x "forward" when it's time for next major
> : > version.
> 
> I personally think "#A" is the only sane way to manage the ref guide.
> 
> I think we should do everything we can to move towards ref-guide edits
> being committed & managed exactly the same as source code edits -- ideally 
> in the exact same commits, to the exact same repo. So that if you are 
> adding/fixing a Foo feature, you have a single commit to master that edits 
> Foo.java and Foo.adoc (just in diff directories).  When you want to 
> backport that feature to branch 6x, you backport the whole commit.
> 
> (we would never consider committing fixes/improvements to code, and then 
> leaving javadoc corrections about those code changes until just before 
> release weeks later -- we shouldn't approach writing user docs that way 
> either.)
> 
> 
> Having this branching model, and getting used to this model of 
> committing/backporting doc changes at the exact same time we 
> commit/backport code, is the only way we can ever hope to move forward 
> with any of the really powerful things using adoc files (and a command 
> line ref-guide build system) can support:
> 
> * building the ref guide & checking broken links as part of our 
> precommit/smoketest build targets.
> * writing automated "tests" of our documentation (ex: assert every 
> collections API 'command' has a corresponding page/section) that can be run 
> by jenkins.
> * etc...
> 
> 
> : > I appreciate in advance your feedback.  As a reminder, you can see the
> : > demo site/PDF and the project repo at:
> : >
> : > http://people.apache.org/~ctargett/RefGuidePOC/
> : > https://github.com/ctargett/refguide-asciidoc-poc
> 
> 
> -Hoss
> 

[jira] [Commented] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-03-13 Thread Joshua Humphries (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923390#comment-15923390
 ] 

Joshua Humphries commented on SOLR-10277:
-

It's only changing properties for replicas that correspond to this node. 
However, if you look at the whole method, especially 10 lines below, you'll 
find that it's always adding a ZkWriteCommand for each collection, regardless 
of whether any replica properties were touched. So it generates the necessary 
changes *and* a bunch of no-op updates for every other collection.

> On 'downnode', lots of wasteful mutations are done to ZK
> 
>
> Key: SOLR-10277
> URL: https://issues.apache.org/jira/browse/SOLR-10277
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.3
>Reporter: Joshua Humphries
>  Labels: leader, zookeeper
>
> When a node restarts, it submits a single 'downnode' message to the 
> overseer's state update queue.
> When the overseer processes the message, it does way more writes to ZK than 
> necessary. In our cluster of 48 hosts, the majority of collections have only 
> 1 shard and 1 replica. So a single node restarting should only result in 
> ~1/40th of the collections being updated with new replica states (to indicate 
> the node that is no longer active).
> However, the current logic in NodeMutator#downNode always updates *every* 
> collection. So we end up having to do rolling restarts very slowly to avoid 
> having a severe outage due to the overseer having to do way too much work for 
> each host that is restarted. And subsequent leader elections can't be 
> processed until the `downnode` message is fully processed. So a fast rolling 
> restart can result in the overseer queue growing incredibly large and nearly 
> all shards winding up in a leader-less state until that backlog is processed.
> The fix is a trivial logic change to only add a ZkWriteCommand for 
> collections that actually have an impacted replica.
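The "trivial logic change" described above amounts to emitting a write command only when some replica actually changed. The sketch below is illustrative: the class, record, and method names are made up stand-ins, not Solr's real NodeMutator/ZkWriteCommand API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the fix: when a node goes down, mark its replicas
// as "down" but skip collections where nothing changed, instead of writing
// a no-op update for every collection.
public class DownNodeSketch {
    record Replica(String nodeName, String state) {}

    static Map<String, List<Replica>> markNodeDown(Map<String, List<Replica>> collections,
                                                   String downNode) {
        Map<String, List<Replica>> writes = new LinkedHashMap<>();
        for (var e : collections.entrySet()) {
            boolean changed = false;
            List<Replica> updated = new ArrayList<>();
            for (Replica r : e.getValue()) {
                if (r.nodeName().equals(downNode) && !r.state().equals("down")) {
                    updated.add(new Replica(r.nodeName(), "down"));
                    changed = true;
                } else {
                    updated.add(r);
                }
            }
            if (changed) {
                writes.put(e.getKey(), updated); // only touched collections get a write
            }
        }
        return writes;
    }

    public static void main(String[] args) {
        var state = Map.of(
            "c1", List.of(new Replica("n1", "active")),
            "c2", List.of(new Replica("n2", "active")));
        // only c1 hosts a replica on the downed node, so only c1 is written
        System.out.println(markNodeDown(state, "n1").keySet()); // [c1]
    }
}
```

With ~1/40th of collections touched per node, this turns the overseer's work per restart from O(all collections) into O(affected collections).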






[jira] [Commented] (SOLR-8045) Deploy V2 API at /v2 instead of /solr/v2

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923383#comment-15923383
 ] 

ASF subversion and git services commented on SOLR-8045:
---

Commit 464722a0a8ca1811d922e346d219d08676a12e65 in lucene-solr's branch 
refs/heads/branch_6x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=464722a ]

SOLR-8045: Fix smokeTestRelease.py from precommit


> Deploy V2 API at /v2 instead of /solr/v2
> 
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: 6.5
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






[jira] [Commented] (SOLR-8045) Deploy V2 API at /v2 instead of /solr/v2

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923377#comment-15923377
 ] 

ASF subversion and git services commented on SOLR-8045:
---

Commit faeb1fe8c16f9e02aa5c3bba295bc24325b94a07 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=faeb1fe ]

SOLR-8045: Fix smokeTestRelease.py from precommit








[jira] [Commented] (SOLR-8045) Deploy V2 API at /v2 instead of /solr/v2

2017-03-13 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923358#comment-15923358
 ] 

Noble Paul commented on SOLR-8045:
--

bq.i'm concerned about this change being backported to 6x, since (IIUC) it 
means users have to change any existing requestHandler declarations they have 
that already use a registerPath attribute.

Can you make it clearer? The v2 API is not yet released, so where is the 
backcompat problem coming from?

> Deploy V2 API at /v2 instead of /solr/v2
> 
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: 6.5
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+159) - Build # 19163 - Still Unstable!

2017-03-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19163/
Java: 32bit/jdk-9-ea+159 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.core.ConfigureRecoveryStrategyTest.testAlmostAllMethodsAreFinal

Error Message:
private static void 
org.apache.solr.cloud.RecoveryStrategy.$closeResource(java.lang.Throwable,java.lang.AutoCloseable)

Stack Trace:
java.lang.AssertionError: private static void 
org.apache.solr.cloud.RecoveryStrategy.$closeResource(java.lang.Throwable,java.lang.AutoCloseable)
at 
__randomizedtesting.SeedInfo.seed([4783962C6818C9A4:4D892B216624D5FE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.ConfigureRecoveryStrategyTest.testAlmostAllMethodsAreFinal(ConfigureRecoveryStrategyTest.java:73)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12561 lines...]
   [junit4] Suite: 

[jira] [Resolved] (SOLR-10231) Cursor value always different for last page with sorting by a date based function using NOW

2017-03-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10231.
-
   Resolution: Information Provided
 Assignee: Hoss Man
Fix Version/s: 6.5

marking as resolved since I think documentation is really the only correct fix 
for this situation.

> Cursor value always different for last page with sorting by a date based 
> function using NOW
> ---
>
> Key: SOLR-10231
> URL: https://issues.apache.org/jira/browse/SOLR-10231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 4.10.2
>Reporter: Dmitry Kan
>Assignee: Hoss Man
> Fix For: 6.5
>
>
> Cursor based results fetching is a deal breaker for search performance.
> It works extremely well when paging using sort by field(s).
> Example, that works (Id is unique field in the schema):
> Query:
> {code}
> http://solr-host:8983/solr/documents/select?q=*:*=DocumentId:76581059=AoIGAC5TU1ItNzY1ODEwNTktMQ===DocumentId=UserId+asc%2CId+desc=1
> {code}
> Response:
> {code}
> 
> 
> 0
> 4
> 
> *:*
> DocumentId
> AoIGAC5TU1ItNzY1ODEwNTktMQ==
> DocumentId:76581059
> UserId asc,Id desc
> 1
> 
> 
> 
> AoIGAC5TU1ItNzY1ODEwNTktMQ==
> 
> {code}
> nextCursorMark equals cursorMark, and so we know this is the last page.
> However, sorting by function behaves differently:
> Query:
> {code}
> http://solr-host:8983/solr/documents/select?rows=1=*:*=DocumentId:76581059=AoIFQf9yCCAuU1NSLTc2NTgxMDU5LTE==DocumentId=min(ms(NOW,DynamicDateField_1),ms(NOW,DynamicDateField_12),ms(NOW,DynamicDateField_3),ms(NOW,DynamicDateField_5))%20asc,Id%20desc
> {code}
> Response:
> {code}
> 
> 
> 0
> 6
> 
> *:*
> DocumentId
> AoIFQf9yCCAuU1NSLTc2NTgxMDU5LTE=
> DocumentId:76581059
> 
> min(ms(NOW,DynamicDateField_1),ms(NOW,DynamicDateField_12),ms(NOW,DynamicDateField_3),ms(NOW,DynamicDateField_5))
>  asc,Id desc
> 
> 1
> 
> 
> 
> 
> 76581059
> 
> 
> AoIFQf9yFyAuU1NSLTc2NTgxMDU5LTE=
> 
> {code}
> nextCursorMark does not equal cursorMark, which suggests there are more 
> results. This is not true (numFound=1), so the client goes into an infinite 
> loop.






[jira] [Commented] (SOLR-10231) Cursor value always different for last page with sorting by a date based function using NOW

2017-03-13 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923312#comment-15923312
 ] 

Hoss Man commented on SOLR-10231:
-

bq. Btw, would the same issue exist in 6.x?

Yeah, there's nothing version-specific happening here -- it's just the nature 
of the way cursors work.  Sorting by a NOW-relative function like this, where 
the result for each doc changes every time you send a request, is just like 
sorting on a field that you constantly update for every doc in between every 
request.

bq. (Perhaps the NOW value should also be encoded into the cursor values so 
this happens automatically under the covers? ... not sure if that's a good idea 
in general, would need to think about it more)

The more I think about it, the more convinced I am this wouldn't be a good idea 
-- because it would complicate use cases where people want filter queries that 
involve "NOW" that they *do* want/expect to change in subsequent requests as 
they walk the cursor -- ie: an {{fq=expiresAt:\[NOW TO *\]}} that should use a 
NOW that represents the actual moment the request is made, even if they've been 
tailing a cursor (with a sort that might not even involve {{expiresAt}}) 
continuously.

I've added a note about sorts (implicitly) involving NOW to the docs on 
cursors...

https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=38572235=28=29
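For clients that must sort by a NOW-relative function anyway, one workaround (a 
sketch on my part, not something from this issue) is to pin Solr's NOW request 
parameter to a fixed epoch-millis value for the whole cursor walk, and to stop 
exactly when the mark comes back unchanged. The helper below is hypothetical; 
the query and sort strings are placeholders:

```python
import urllib.parse

def cursor_page_params(q, sort, cursor_mark, fixed_now_ms=None, rows=10):
    """Build the query string for one page of a cursorMark walk.

    Pinning NOW (epoch millis) keeps NOW-relative sort values identical
    across pages, so the cursor can converge on a final mark.
    """
    params = {"q": q, "sort": sort, "cursorMark": cursor_mark, "rows": rows}
    if fixed_now_ms is not None:
        params["NOW"] = str(fixed_now_ms)  # Solr request param overriding NOW
    return urllib.parse.urlencode(params)

def is_last_page(cursor_mark, next_cursor_mark):
    """Solr signals the end of the walk by echoing the mark unchanged."""
    return cursor_mark == next_cursor_mark

qs = cursor_page_params("*:*", "Id desc", "*", fixed_now_ms=1489400000000)
print("NOW=1489400000000" in qs)          # True
print(is_last_page("AoIGAC5T", "AoIGAC5T"))  # True: same mark, no more pages
```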

> Cursor value always different for last page with sorting by a date based 
> function using NOW
> ---
>
> Key: SOLR-10231
> URL: https://issues.apache.org/jira/browse/SOLR-10231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 4.10.2
>Reporter: Dmitry Kan
>
> Cursor based results fetching is a deal breaker for search performance.
> It works extremely well when paging using sort by field(s).
> Example, that works (Id is unique field in the schema):
> Query:
> {code}
> http://solr-host:8983/solr/documents/select?q=*:*=DocumentId:76581059=AoIGAC5TU1ItNzY1ODEwNTktMQ===DocumentId=UserId+asc%2CId+desc=1
> {code}
> Response:
> {code}
> 
> 
> 0
> 4
> 
> *:*
> DocumentId
> AoIGAC5TU1ItNzY1ODEwNTktMQ==
> DocumentId:76581059
> UserId asc,Id desc
> 1
> 
> 
> 
> AoIGAC5TU1ItNzY1ODEwNTktMQ==
> 
> {code}
> nextCursorMark equals cursorMark, and so we know this is the last page.
> However, sorting by function behaves differently:
> Query:
> {code}
> http://solr-host:8983/solr/documents/select?rows=1=*:*=DocumentId:76581059=AoIFQf9yCCAuU1NSLTc2NTgxMDU5LTE==DocumentId=min(ms(NOW,DynamicDateField_1),ms(NOW,DynamicDateField_12),ms(NOW,DynamicDateField_3),ms(NOW,DynamicDateField_5))%20asc,Id%20desc
> {code}
> Response:
> {code}
> 
> 
> 0
> 6
> 
> *:*
> DocumentId
> AoIFQf9yCCAuU1NSLTc2NTgxMDU5LTE=
> DocumentId:76581059
> 
> min(ms(NOW,DynamicDateField_1),ms(NOW,DynamicDateField_12),ms(NOW,DynamicDateField_3),ms(NOW,DynamicDateField_5))
>  asc,Id desc
> 
> 1
> 
> 
> 
> 
> 76581059
> 
> 
> AoIFQf9yFyAuU1NSLTc2NTgxMDU5LTE=
> 
> {code}
> nextCursorMark does not equal cursorMark, which suggests there are more 
> results. This is not true (numFound=1), so the client goes into an infinite 
> loop.






Re: How Zookeeper (and Puppet) brought down our Solr Cluster

2017-03-13 Thread Jan Høydahl
Hi

Thanks for reporting.
As it may take some time before we get ZK 3.5.x out there, it would be nice to 
have a fix in the meantime.
Do you plan to make our zkClient somehow explicitly validate that all given zk 
nodes are “good”?

Or is there some way we could fix this with documentation?
I imagine, if we always propose to use a chroot, e.g. 
ZK_HOST=zoo1,zoo2,zoo3/solr, then it would be a requirement to do a mkroot 
before being able to use ZK. And I assume that in that case, if one of the ZK 
nodes got restarted without or with the wrong configuration, it would start up 
with some other data folder(?) and refuse to serve any data whatsoever, since 
the /solr root would not exist?
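The chroot idea can be sketched as a couple of commands; the hostnames and 
ports are placeholders, and this assumes the bin/solr zk mkroot command 
available in recent Solr 6.x releases:

```
# Create the chroot once, before pointing any Solr node at it (hosts are examples)
bin/solr zk mkroot /solr -z zoo1:2181,zoo2:2181,zoo3:2181

# Then, in solr.in.sh, include the chroot in the connect string
ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
```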

I’d say, even if this is not a Solr bug per se, it is still worthy of a JIRA 
issue.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 14. mar. 2017 kl. 00.11 skrev Ben DeMott :
> 
> So wanted to throw this out there, and get any feedback.
> 
> We had a persistent issue with our Solr clusters doing crazy things, from 
> running out of file-descriptors, to having replication issues, to filling up 
> the /overseer/queue  Just some of the log Exceptions:
> 
> o.e.j.s.ServerConnector java.io.IOException: Too many open files
> 
> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error trying 
> to proxy request for url: 
> http://10.50.64.4:8983/solr/efc-jobsearch-col/select 
> 
> 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ClusterState 
> says we are the leader 
> (http://10.50.64.4:8983/solr/efc-jobsearch-col_shard1_replica2 
> ), but locally 
> we don't think so. Request came from null
> 
> o.a.s.c.Overseer Bad version writing to ZK using compare-and-set, will force 
> refresh cluster state: KeeperErrorCode = BadVersion for 
> /collections/efc-jobsearch-col/state.json
> 
> IndexFetcher File _5oz.nvd did not match. expected checksum is 3661731988 and 
> actual is checksum 840593658. expected length is 271091 and actual length is 
> 271091
> 
> 
> ...
> 
> I'll get to the point quickly.  This was all caused by a Zookeeper 
> configuration on a particular node getting reset, for a period of seconds, 
> and the service being restarted automatically.  When this happened, Solr's 
> connection to Zookeeper would be reset, Solr would reconnect, to the 
> Zookeeper node, which had a blank configuration and was in "STANDALONE" mode. 
>  The changes to ZK that were registered by the Solr connection wouldn't be 
> registered with the rest of the cluster.
> 
> As a result the cversion of /live_nodes would be ahead of the other servers 
> by a version or two, but the zxids would all be in sync.  The nodes would 
> never re-synchronize; as far as Zookeeper is concerned everything is synced 
> up properly.  Also /live_nodes would be a mis-matched mess, empty, or 
> inconsistent, depending on where Solr's ZK connections were pointed, 
> resulting in client connections returning some, wrong, or no "live nodes".
> 
> Now, it specifically tells you to never connect to an inconsistent group of 
> servers as it will play havoc with Zookeeper, and it did exactly this.
> 
> As of Zookeeper 3.5 there is an option to NEVER ALLOW IT TO RUN IN STANDALONE 
> which we will be using when a stable version is released.
> 
> It caused absolute havoc within our cluster.
> 
> So to summarize, if a Zookeeper ensemble host ever goes into "Standalone" 
> mode even temporarily, Solr will be disconnected, and then (may) reconnect 
> (depending on which ZK node it picks) and its updates will never be 
> synchronized. It also won't be able to coordinate any of its Cloud operations.
> 
> So in the interest of being a good internet citizen I'm writing this up: is 
> there any desire for a patch that would provide a configuration or JVM option 
> to refuse to connect to nodes in standalone operation?   Obviously the 
> built-in ZK server that comes with Solr runs in standalone mode, so this 
> would only be an option for solr.in.sh. But it would prevent Solr from 
> bringing the entire cluster down in the event a single ZK server was 
> temporarily misconfigured, or lost its configuration for some reason.
> 
> Maybe this isn't worth addressing.  Thoughts?
> 



Re: Moving the Ref Guide: Progress Update & Next Steps

2017-03-13 Thread Chris Hostetter

: *) should there be LICENCE on the Github repo if you want people to
: play/experiment/contribute ideas?

I'm not sure that's really necessary at this point -- so far all of the 
contributions have been from existing committers, and we (probably?) don't 
want to take on any (new) significant contributions from general users 
until we get it imported into the lucene-solr.git repo?

: *) This part is future, right? "Custom Java tooling will be used to
: process the .adoc file metadata to build up navigation data files" I
: can't find it in the repo, but maybe I am confused.

no -- it already exists.  See solr-ref-guide/build.xml and 
solr-ref-guide/tools.  

That "tools" code is "long life" code that will exist for managing the 
ref-guide even after the migration ... possibly somewhere in dev-tools, I 
would imagine?

(stuff under "confluence-export/conversion-tools" will be thrown away)


: *) Because "How will we provide search? Recommend probably indexing
: generated HTML pages. Could use bin/post from Solr to recurse over the
: HTML files and index them. In this case, we will need to figure out
: where to host Solr." - this is slightly embarrassing

That would be a nice-to-have, but since we already rely on third parties 
(search-lucene.com & find.searchhub.org) to provide Solr indexes of 
our website, relying on them to search the ref guide shouldn't be a deal 
breaker / blocker.

: > 2) We need to decide on our policy for branches. I recall there was
: > valid concern about the process around this when I first proposed the
: > change. I'd like to iron that out as soon as we can since that will be
: > a key part of our new process.
: >
: > From our discussion last summer, there are 2 potential approaches:
: >
: > a) Make all changes in 'master' (trunk) and backport to branches for
: > releasing the content. We'd need to merge "backward" into upcoming
: > release branch.
: > b) Make all changes in branch_6x (or branch_7x, etc.) and only move
: > things to master when they are only applicable to unreleased next
: > major version. We'd merge 6x "forward" when it's time for next major
: > version.

I personally think "#A" is the only sane way to manage the ref guide.

I think we should do everything we can to move towards ref-guide edits
being committed & managed exactly the same as source code edits -- ideally 
in the exact same commits, to the exact same repo. So that if you are 
adding/fixing a Foo feature, you have a single commit to master that edits 
Foo.java and Foo.adoc (just in diff directories).  When you want to 
backport that feature to branch 6x, you backport the whole commit.

(we would never consider committing fixes/improvements to code, and then 
leaving javadoc corrections about those code changes until just before 
release weeks later -- we shouldn't approach writing user docs that way 
either.)


Having this branching model, and getting used to this model of 
committing/backporting doc changes at the exact same time we 
commit/backport code, is the only way we can ever hope to move forward 
with any of the really powerful things that using adoc files (and a command 
line ref-guide build system) can support:

 * building the ref guide & checking broken links as part of our 
precommit/smoketest build targets.
 * writing automated "tests" of our documentation (ex: assert every 
collections API 'command' has a corresponding page/section) that can be run 
by jenkins.
 * etc...
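A toy version of that documentation-coverage idea might look like the 
following; the command list, file names, and heading convention are all 
invented for illustration and are not the actual ref-guide tooling:

```python
import re
import tempfile
import pathlib

# Hypothetical command list, purely for illustration.
COMMANDS = ["CREATE", "DELETE", "RELOAD"]

def documented_commands(adoc_dir):
    """Collect section titles (lines starting with '==') across .adoc files."""
    found = set()
    for path in pathlib.Path(adoc_dir).glob("*.adoc"):
        for line in path.read_text().splitlines():
            m = re.match(r"=+\s+(\S+)", line)
            if m:
                found.add(m.group(1))
    return found

def missing_docs(adoc_dir, commands=COMMANDS):
    """Return the commands that have no section anywhere in the guide."""
    return sorted(set(commands) - documented_commands(adoc_dir))

# Demo with a throwaway directory documenting every command except DELETE.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "collections.adoc").write_text("== CREATE\n...\n== RELOAD\n")
    print(missing_docs(d))  # ['DELETE']
```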


: > I appreciate in advance your feedback.  As a reminder, you can see the
: > demo site/PDF and the project repo at:
: >
: > http://people.apache.org/~ctargett/RefGuidePOC/
: > https://github.com/ctargett/refguide-asciidoc-poc


-Hoss
http://www.lucidworks.com/




[jira] [Commented] (SOLR-8045) Deploy V2 API at /v2 instead of /solr/v2

2017-03-13 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923176#comment-15923176
 ] 

Hoss Man commented on SOLR-8045:


in addition to the smoketester failures -- i'm concerned about this change 
being backported to 6x, since (IIUC) it means users have to change any existing 
{{requestHandler}} declarations they have that already use a {{registerPath}} 
attribute.  

(This is an assumption on my part, based on the fact that the commit seems to 
have included a change to every existing {{registerPath}} declaration -- either 
in a sample (or test) config or in a test that called 
{{'add-requesthandler'}}.)

that type of change may be fine for 7.0, with an adequate upgrade instruction, 
but I worry about this breaking stuff for people who upgrade from 6.x to 6.5 
w/o changing their configs.



> Deploy V2 API at /v2 instead of /solr/v2
> 
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 6.5
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






[jira] [Reopened] (SOLR-8045) Deploy V2 API at /v2 instead of /solr/v2

2017-03-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-8045:


re-opening and marking as blocker so we ensure we do something about the 
smoketest failures and backcompat questions before a 6.5 release

> Deploy V2 API at /v2 instead of /solr/v2
> 
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 6.5
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






[jira] [Updated] (SOLR-8045) Deploy V2 API at /v2 instead of /solr/v2

2017-03-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8045:
---
Priority: Blocker  (was: Major)

> Deploy V2 API at /v2 instead of /solr/v2
> 
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: 6.5
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






How Zookeeper (and Puppet) brought down our Solr Cluster

2017-03-13 Thread Ben DeMott
So wanted to throw this out there, and get any feedback.

We had a persistent issue with our Solr clusters doing crazy things, from
running out of file-descriptors, to having replication issues, to filling
up the /overseer/queue  Just some of the log Exceptions:

*o.e.j.s.ServerConnector java.io.IOException: Too many open files*

*o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error
trying to proxy request for
url: http://10.50.64.4:8983/solr/efc-jobsearch-col/select
*

*o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
ClusterState says we are the leader
(http://10.50.64.4:8983/solr/efc-jobsearch-col_shard1_replica2
), but
locally we don't think so. Request came from null*

*o.a.s.c.Overseer Bad version writing to ZK using compare-and-set, will
force refresh cluster state: KeeperErrorCode = BadVersion for
/collections/efc-jobsearch-col/state.json*

*IndexFetcher File _5oz.nvd did not match. expected checksum is 3661731988
and actual is checksum 840593658. expected length is 271091 and actual
length is 271091*


*...*

I'll get to the point quickly.  This was all caused by a Zookeeper
configuration on a particular node getting reset, for a period of seconds,
and the service being restarted automatically.  When this happened, Solr's
connection to Zookeeper would be reset, Solr would reconnect, to the
Zookeeper node, which had a blank configuration and was in "STANDALONE"
mode.  The changes to ZK that were registered by the Solr connection
wouldn't be registered with the rest of the cluster.

As a result the *cversion* of */live_nodes* would be ahead of the other
servers by a version or two, but the zxids would all be in sync.  The
nodes would never re-synchronize; as far as Zookeeper is concerned
everything is synced up properly.  Also */live_nodes* would be a
mis-matched mess, empty, or inconsistent, depending on where Solr's ZK
connections were pointed, resulting in client connections returning some,
wrong, or no "live nodes".

Now, it specifically tells you to never connect to an inconsistent group of
servers as it will play havoc with Zookeeper, and it did exactly this.

As of Zookeeper 3.5 there is an option to NEVER ALLOW IT TO RUN IN
STANDALONE which we will be using when a stable version is released.

It caused absolute havoc within our cluster.

So to summarize, if a Zookeeper ensemble host ever goes into "Standalone"
mode even temporarily, Solr will be disconnected, and then (may) reconnect
(depending on which ZK node it picks) and its updates will never be
synchronized. It also won't be able to coordinate any of its Cloud
operations.

So in the interest of being a good internet citizen I'm writing this up: is
there any desire for a patch that would provide a configuration or JVM
option to refuse to connect to nodes in standalone operation?   Obviously
the built-in ZK server that comes with Solr runs in standalone mode, so
this would only be an option for solr.in.sh. But it would prevent Solr
from bringing the entire cluster down in the event a single ZK server was
temporarily misconfigured, or lost its configuration for some reason.

Maybe this isn't worth addressing.  Thoughts?
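For anyone landing here later: the ZooKeeper 3.5 option referred to above is a 
single zoo.cfg setting. The hostnames and ports below are placeholders, and a 
3.5.x ensemble is assumed:

```
# zoo.cfg on each ensemble member (ZooKeeper 3.5+)
standaloneEnabled=false
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
```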


[jira] [Commented] (SOLR-10275) Failures for intranode communication when blockUnknown is set to true

2017-03-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923163#comment-15923163
 ] 

Shawn Heisey commented on SOLR-10275:
-

It's theoretically possible that in extreme situations SOLR-10130 MIGHT cause 
issues like this, but if that were the case, I wouldn't expect to see 
"connection refused."  I am not 100 percent confident about what Java and 
HttpClient do when a TCP connection timeout is exceeded, and I am not sure what 
SolrCloud's inter-node TCP connection timeout is set to by default.
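As a side note, the difference is easy to reproduce outside of Solr. This small 
Python sketch (nothing to do with Solr's actual HttpClient stack) shows that 
probing a port with no listener fails as "refused" immediately, rather than as 
an authentication-style error or a timeout:

```python
import socket

def probe(host, port, timeout=2.0):
    """Classify a TCP endpoint as 'open', 'refused', or 'timeout'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "timeout"

# Find a local port that is almost certainly closed: bind an ephemeral
# port to learn its number, then release it before probing.
s = socket.socket()
s.bind(("127.0.0.1", 0))
closed_port = s.getsockname()[1]
s.close()
print(probe("127.0.0.1", closed_port))  # typically "refused": nothing listening
```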


> Failures for intranode communication when blockUnknown is set to true
> -
>
> Key: SOLR-10275
> URL: https://issues.apache.org/jira/browse/SOLR-10275
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.4.1
>Reporter: Shawn Feldman
>
> {code} 
> forwarding update to https://{server}:{port}/solr/{shard}/ failed - retrying 
> ... retries: 11 add{,id={value}} params:update.distrib=TOLEADER= 
> https://{server}:{port}/solr/{shard}/  rsp:-1:java.net.ConnectException: 
> Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
>   at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:532)
>   at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>   at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>   at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:311)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:184)
>   at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-10275) Failures for intranode communication when blockUnknown is set to true

2017-03-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923154#comment-15923154
 ] 

Shawn Heisey commented on SOLR-10275:
-

This is saying "connection refused" which would typically mean that either Solr 
isn't running or that there's something blocking the traffic like a firewall.  
If there was a problem with authentication, I would expect the exception to say 
something very different.


> Failures for intranode communication when blockUnknown is set to true
> -
>
> Key: SOLR-10275
> URL: https://issues.apache.org/jira/browse/SOLR-10275
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.4.1
>Reporter: Shawn Feldman
>
> {code} 
> forwarding update to https://{server}:{port}/solr/{shard}/ failed - retrying 
> ... retries: 11 add{,id={value}} params:update.distrib=TOLEADER= 
> https://{server}:{port}/solr/{shard}/  rsp:-1:java.net.ConnectException: 
> Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
>   at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:532)
>   at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>   at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>   at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:311)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:184)
>   at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Resolved] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-10236.
--
   Resolution: Done
 Assignee: Tomás Fernández Löbbe
Fix Version/s: master (7.0)

> Remove FieldType.getNumericType() from master
> -
>
> Key: SOLR-10236
> URL: https://issues.apache.org/jira/browse/SOLR-10236
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: SOLR-10236.patch, SOLR-10236.patch
>
>
> {{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
> SOLR-10011, and it was deprecated (replaced by {{NumberType 
> getNumberType()}}). We can remove it from master (7.0).






[jira] [Commented] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923144#comment-15923144
 ] 

ASF subversion and git services commented on SOLR-10236:


Commit abec54bd5722bc818fe46e111cf652cd7671db86 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=abec54b ]

SOLR-10236: Remove FieldType.getNumericType() from master


> Remove FieldType.getNumericType() from master
> -
>
> Key: SOLR-10236
> URL: https://issues.apache.org/jira/browse/SOLR-10236
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10236.patch, SOLR-10236.patch
>
>
> {{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
> SOLR-10011, and it was deprecated (replaced by {{NumberType 
> getNumberType()}}). We can remove it from master (7.0).






[jira] [Assigned] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-03-13 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey reassigned SOLR-10130:
---

Assignee: Shawn Heisey  (was: Andrzej Bialecki )

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.4.0
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Assignee: Shawn Heisey
>Priority: Blocker
>  Labels: performance
> Fix For: master (7.0), 6.4.2
>
> Attachments: SOLR-10130.patch, SOLR-10130.patch, 
> solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using all CPU capacity (600% on a 
> 6-core machine) under a load where we would normally see an average of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, which 
> contradicts collecting metrics on every single byte read.
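The cost difference between per-byte and coarse-grained accounting is easy to see in a small sketch. CountingMeter below is a hypothetical stand-in, not the Dropwizard Meter: the point is only that per-byte instrumentation turns one bulk read into thousands of meter events, each of which (in the real Meter) updates contended rate-tracking state.

```java
import java.util.concurrent.atomic.LongAdder;

public class MetricsGranularitySketch {
    // Hypothetical stand-in for a metrics meter: counts events and bytes.
    static class CountingMeter {
        final LongAdder events = new LongAdder();
        final LongAdder bytes = new LongAdder();
        void mark(long n) { events.increment(); bytes.add(n); }
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8192];
        CountingMeter perByte = new CountingMeter();
        CountingMeter perCall = new CountingMeter();

        // Per-byte accounting: one meter event for every byte read
        // (the getByte-level instrumentation described above).
        for (byte ignored : buf) perByte.mark(1);
        // Coarse-grained accounting: one meter event per bulk read.
        perCall.mark(buf.length);

        System.out.println(perByte.events.sum() + " vs " + perCall.events.sum()
                + " meter events for the same " + perCall.bytes.sum() + " bytes");
    }
}
```

Both meters record the same byte total, but the per-byte variant does 8192 updates where one would do.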






[jira] [Updated] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10236:
-
Attachment: SOLR-10236.patch

Same patch, updated to master and with CHANGES upgrade note

> Remove FieldType.getNumericType() from master
> -
>
> Key: SOLR-10236
> URL: https://issues.apache.org/jira/browse/SOLR-10236
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10236.patch, SOLR-10236.patch
>
>
> {{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
> SOLR-10011, and it was deprecated (replaced by {{NumberType 
> getNumberType()}}). We can remove it from master (7.0).






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 758 - Still Unstable!

2017-03-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/758/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([26A4D4858693DF8D:73F43C172A6A107D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1394)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1087)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-03-13 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923092#comment-15923092
 ] 

Varun Thacker commented on SOLR-10277:
--

Hi Joshua,

> However, the current logic in NodeMutator#downNode always updates *every* 
> collection.

I am checking against Solr 5.5.3 since you have listed that as the 'Affected 
Versions'. Looking at 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/5.5.3/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java#L65
it looks to me like it only updates those replicas of a collection which belong 
to the node. Am I missing something here?

> On 'downnode', lots of wasteful mutations are done to ZK
> 
>
> Key: SOLR-10277
> URL: https://issues.apache.org/jira/browse/SOLR-10277
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.3
>Reporter: Joshua Humphries
>  Labels: leader, zookeeper
>
> When a node restarts, it submits a single 'downnode' message to the 
> overseer's state update queue.
> When the overseer processes the message, it does way more writes to ZK than 
> necessary. In our cluster of 48 hosts, the majority of collections have only 
> 1 shard and 1 replica. So a single node restarting should only result in 
> ~1/40th of the collections being updated with new replica states (to indicate 
> the node that is no longer active).
> However, the current logic in NodeMutator#downNode always updates *every* 
> collection. So we end up having to do rolling restarts very slowly to avoid 
> having a severe outage due to the overseer having to do way too much work for 
> each host that is restarted. And subsequent shards becoming leader can't get 
> processed until the `downnode` message is fully processed. So a fast rolling 
> restart can result in the overseer queue growing incredibly large and nearly 
> all shards winding up in a leader-less state until that backlog is processed.
> The fix is a trivial logic change to only add a ZkWriteCommand for 
> collections that actually have an impacted replica.






[jira] [Comment Edited] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-03-13 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923092#comment-15923092
 ] 

Varun Thacker edited comment on SOLR-10277 at 3/13/17 10:25 PM:


Hi Joshua,

bq. However, the current logic in NodeMutator#downNode always updates *every* 
collection.

I am checking against Solr 5.5.3 since you have listed that as the 'Affected 
Versions'. Looking at 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/5.5.3/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java#L65
it looks to me like it only updates those replicas of a collection which belong 
to the node. Am I missing something here?


was (Author: varunthacker):
Hi Joshua,

> However, the current logic in NodeMutator#downNode always updates *every* 
> collection.

I am checking against Solr 5.5.3 since you have listed that as the 'Affected 
Versions'. Looking at 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/5.5.3/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java#L65
it looks to me like it only updates those replicas of a collection which belong 
to the node. Am I missing something here?

> On 'downnode', lots of wasteful mutations are done to ZK
> 
>
> Key: SOLR-10277
> URL: https://issues.apache.org/jira/browse/SOLR-10277
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.3
>Reporter: Joshua Humphries
>  Labels: leader, zookeeper
>
> When a node restarts, it submits a single 'downnode' message to the 
> overseer's state update queue.
> When the overseer processes the message, it does way more writes to ZK than 
> necessary. In our cluster of 48 hosts, the majority of collections have only 
> 1 shard and 1 replica. So a single node restarting should only result in 
> ~1/40th of the collections being updated with new replica states (to indicate 
> the node that is no longer active).
> However, the current logic in NodeMutator#downNode always updates *every* 
> collection. So we end up having to do rolling restarts very slowly to avoid 
> having a severe outage due to the overseer having to do way too much work for 
> each host that is restarted. And subsequent shards becoming leader can't get 
> processed until the `downnode` message is fully processed. So a fast rolling 
> restart can result in the overseer queue growing incredibly large and nearly 
> all shards winding up in a leader-less state until that backlog is processed.
> The fix is a trivial logic change to only add a ZkWriteCommand for 
> collections that actually have an impacted replica.






[jira] [Created] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-03-13 Thread Joshua Humphries (JIRA)
Joshua Humphries created SOLR-10277:
---

 Summary: On 'downnode', lots of wasteful mutations are done to ZK
 Key: SOLR-10277
 URL: https://issues.apache.org/jira/browse/SOLR-10277
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 5.5.3
Reporter: Joshua Humphries


When a node restarts, it submits a single 'downnode' message to the overseer's 
state update queue.

When the overseer processes the message, it does way more writes to ZK than 
necessary. In our cluster of 48 hosts, the majority of collections have only 1 
shard and 1 replica. So a single node restarting should only result in ~1/40th 
of the collections being updated with new replica states (to indicate the node 
that is no longer active).

However, the current logic in NodeMutator#downNode always updates *every* 
collection. So we end up having to do rolling restarts very slowly to avoid 
having a severe outage due to the overseer having to do way too much work for 
each host that is restarted. And subsequent shards becoming leader can't get 
processed until the `downnode` message is fully processed. So a fast rolling 
restart can result in the overseer queue growing incredibly large and nearly 
all shards winding up in a leader-less state until that backlog is processed.

The fix is a trivial logic change to only add a ZkWriteCommand for collections 
that actually have an impacted replica.
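The proposed fix lends itself to a small sketch. This is an illustrative, simplified model, not the actual NodeMutator code: the cluster state is reduced to a map from collection name to the nodes hosting its replicas, and only collections with a replica on the down node produce a state update (a ZkWriteCommand in the real code).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DownNodeSketch {
    // Hypothetical simplified model: collection name -> nodes hosting its replicas.
    static List<String> collectionsToUpdate(Map<String, List<String>> replicaNodes,
                                            String downNode) {
        List<String> impacted = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : replicaNodes.entrySet()) {
            // Proposed fix: emit a write only for collections that actually
            // host a replica on the node that went down.
            if (e.getValue().contains(downNode)) {
                impacted.add(e.getKey());
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        Map<String, List<String>> cluster = new LinkedHashMap<>();
        cluster.put("coll1", Arrays.asList("node1"));
        cluster.put("coll2", Arrays.asList("node2"));
        cluster.put("coll3", Arrays.asList("node1", "node3"));
        // Only coll1 and coll3 have replicas on node1, so only they need a write.
        System.out.println(collectionsToUpdate(cluster, "node1"));
    }
}
```

With 48 hosts and mostly single-replica collections, this filter is what reduces the ZK writes per 'downnode' from every collection to roughly 1/40th of them.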






[jira] [Created] (SOLR-10276) Update ZK leader election so that leader notices if its leadership is revoked

2017-03-13 Thread Joshua Humphries (JIRA)
Joshua Humphries created SOLR-10276:
---

 Summary: Update ZK leader election so that leader notices if its 
leadership is revoked
 Key: SOLR-10276
 URL: https://issues.apache.org/jira/browse/SOLR-10276
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 5.5.3
Reporter: Joshua Humphries
Priority: Minor


When we have an issue with a Solr node, it would be nice to revoke its 
leadership of one or more shards, or to revoke its role as overseer, without 
actually restarting the node. (Restarting the node tends to spam the overseer 
queue since we have a very large number of cores per node.)

Operationally, it would be nice if one could just delete the leader's election 
node (e.g. its ephemeral sequential node that indicates it as current leader) 
and to have it notice the change and stop behaving as leader.

Currently, once a node becomes leader, it isn't watching ZK for any changes 
that could revoke its leadership. I am proposing that, upon being elected 
leader, it use a ZK watch to monitor its own election node. If its own election 
node is deleted, it then relinquishes leadership (e.g. calls 
ElectionContext#cancelElection() and then re-joins the election).

I have a patch with tests that I can contribute.
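The watch-on-own-election-node idea can be illustrated with a minimal sketch. Everything here is a hypothetical in-memory stand-in, not the real ZooKeeper or ElectionContext code: a real implementation would register a watcher (e.g. via ZooKeeper#exists) on the leader's own ephemeral election node, and the znode path shown is only an example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SelfWatchSketch {
    // In-memory stand-in for ZK: path -> exists.
    static final Map<String, Boolean> znodes = new ConcurrentHashMap<>();

    static class Leader {
        final String electionNode;
        volatile boolean isLeader = true;

        Leader(String electionNode) {
            this.electionNode = electionNode;
            znodes.put(electionNode, true);
        }

        // Fired by the (simulated) watch when the election node changes.
        void onElectionNodeEvent() {
            if (!znodes.getOrDefault(electionNode, false)) {
                // The node was deleted out from under us: relinquish leadership,
                // mirroring the proposal's call to ElectionContext#cancelElection()
                // followed by re-joining the election.
                isLeader = false;
            }
        }
    }

    public static void main(String[] args) {
        Leader leader = new Leader("/collections/c1/leader_elect/shard1/election/n_0000000001");
        // An operator deletes the election node to revoke leadership without a restart.
        znodes.remove(leader.electionNode);
        leader.onElectionNodeEvent();
        System.out.println("leader after delete: " + leader.isLeader);
    }
}
```

The key behavioral change is that leadership loss is now driven by an event on the leader's own znode, rather than only being discoverable by other participants in the election.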







[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+159) - Build # 19162 - Unstable!

2017-03-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19162/
Java: 64bit/jdk-9-ea+159 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=8292, 
name=updateExecutor-1813-thread-1, state=RUNNABLE, 
group=TGRP-ChaosMonkeySafeLeaderTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8292, name=updateExecutor-1813-thread-1, 
state=RUNNABLE, group=TGRP-ChaosMonkeySafeLeaderTest]
at 
__randomizedtesting.SeedInfo.seed([8C23D7626F58E8DA:477E8B8C1A48522]:0)
Caused by: org.apache.solr.common.SolrException: Replica: 
https://127.0.0.1:39556/ah/g/collection1/ should have been marked under leader 
initiated recovery in ZkController but wasn't.
at __randomizedtesting.SeedInfo.seed([8C23D7626F58E8DA]:0)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryThread.run(LeaderInitiatedRecoveryThread.java:88)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.core.ConfigureRecoveryStrategyTest.testAlmostAllMethodsAreFinal

Error Message:
private static void 
org.apache.solr.cloud.RecoveryStrategy.$closeResource(java.lang.Throwable,java.lang.AutoCloseable)

Stack Trace:
java.lang.AssertionError: private static void 
org.apache.solr.cloud.RecoveryStrategy.$closeResource(java.lang.Throwable,java.lang.AutoCloseable)
at 
__randomizedtesting.SeedInfo.seed([8C23D7626F58E8DA:86296A6F6164F480]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.ConfigureRecoveryStrategyTest.testAlmostAllMethodsAreFinal(ConfigureRecoveryStrategyTest.java:73)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Closed] (SOLR-10275) Failures for intranode communication when blockUnknown is set to true

2017-03-13 Thread Shawn Feldman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Feldman closed SOLR-10275.

Resolution: Later

> Failures for intranode communication when blockUnknown is set to true
> -
>
> Key: SOLR-10275
> URL: https://issues.apache.org/jira/browse/SOLR-10275
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.4.1
>Reporter: Shawn Feldman
>
> {code} 
> forwarding update to https://{server}:{port}/solr/{shard}/ failed - retrying 
> ... retries: 11 add{,id={value}} params:update.distrib=TOLEADER= 
> https://{server}:{port}/solr/{shard}/  rsp:-1:java.net.ConnectException: 
> Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
>   at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:532)
>   at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>   at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>   at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:311)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:184)
>   at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Created] (SOLR-10275) Failures for intranode communication when blockUnknown is set to true

2017-03-13 Thread Shawn Feldman (JIRA)
Shawn Feldman created SOLR-10275:


 Summary: Failures for intranode communication when blockUnknown is 
set to true
 Key: SOLR-10275
 URL: https://issues.apache.org/jira/browse/SOLR-10275
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.4.1
Reporter: Shawn Feldman


{code} 
forwarding update to https://{server}:{port}/solr/{shard}/ failed - retrying 
... retries: 11 add{,id={value}} params:update.distrib=TOLEADER= 
https://{server}:{port}/solr/{shard}/  rsp:-1:java.net.ConnectException: 
Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at 
org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:532)
at 
org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:311)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:184)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
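For readers unfamiliar with the setting: blockUnknown is a property of Solr's 
BasicAuthPlugin in security.json; when true, requests without credentials are 
rejected, and intra-node traffic is expected to be admitted via PKI 
authentication instead. As an illustration only (not taken from this issue), a 
minimal security.json of the shape involved, using the stock solr/SolrRocks 
credentials example from the Solr Ref Guide:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": true,
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [{ "name": "security-edit", "role": "admin" }],
    "user-role": { "solr": "admin" }
  }
}
```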



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Resolved] (SOLR-10269) MetricsHandler JSON output incorrect

2017-03-13 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-10269.
--
Resolution: Fixed

> MetricsHandler JSON output incorrect
> 
>
> Key: SOLR-10269
> URL: https://issues.apache.org/jira/browse/SOLR-10269
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.5, master (7.0), 6.4.2
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10269.patch
>
>
> Default XML output for {{/admin/metrics}} looks correct, but when 
> {{wt=json}} is used the output looks wrong:
> {code}
> ...
>   "metrics": [
> "solr.jetty",
> [
>   "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses",
>   [
> "count",
> 0,
> "meanRate",
> 0,
> "1minRate",
> 0,
> "5minRate",
> 0,
> "15minRate",
> 0
>   ],
>   "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses",
>   [
> "count",
> 6,
> "meanRate",
> 0.668669400584,
> "1minRate",
> 1.2,
> "5minRate",
> 1.2,
> "15minRate",
> 1.2
>   ],
> ...
> {code}
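For comparison, the flat form above is Solr's NamedList rendered as alternating 
[key, value, key, value, ...] arrays. A small helper (hypothetical, not part of 
Solr) showing how that shape maps back to nested objects, assuming every 
even-indexed entry is a string key:

```python
def unflatten(nl):
    """Rebuild a dict from an alternating [k1, v1, k2, v2, ...] list,
    recursing into values that are themselves alternating lists.
    Heuristic: only treat a list as a NamedList if every even-indexed
    entry is a string (a real list value could be misparsed this way)."""
    if (isinstance(nl, list) and len(nl) % 2 == 0
            and all(isinstance(nl[i], str) for i in range(0, len(nl), 2))):
        return {nl[i]: unflatten(nl[i + 1]) for i in range(0, len(nl), 2)}
    return nl

flat = ["solr.jetty",
        ["org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses",
         ["count", 0, "meanRate", 0, "1minRate", 0,
          "5minRate", 0, "15minRate", 0]]]
nested = unflatten(flat)
# each metric is now addressable by key instead of by array position
```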






[jira] [Commented] (SOLR-10269) MetricsHandler JSON output incorrect

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922992#comment-15922992
 ] 

ASF subversion and git services commented on SOLR-10269:


Commit dda17616a45219fca65dcebb997782211645571a in lucene-solr's branch 
refs/heads/branch_6x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dda1761 ]

SOLR-10269 MetricsHandler JSON output was incorrect. (ab)


> MetricsHandler JSON output incorrect
> 
>
> Key: SOLR-10269
> URL: https://issues.apache.org/jira/browse/SOLR-10269
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.5, master (7.0), 6.4.2
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10269.patch
>
>
> Default XML output for {{/admin/metrics}} looks correct, but when 
> {{wt=json}} is used the output looks wrong:
> {code}
> ...
>   "metrics": [
> "solr.jetty",
> [
>   "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses",
>   [
> "count",
> 0,
> "meanRate",
> 0,
> "1minRate",
> 0,
> "5minRate",
> 0,
> "15minRate",
> 0
>   ],
>   "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses",
>   [
> "count",
> 6,
> "meanRate",
> 0.668669400584,
> "1minRate",
> 1.2,
> "5minRate",
> 1.2,
> "15minRate",
> 1.2
>   ],
> ...
> {code}






Re: Moving the Ref Guide: Progress Update & Next Steps

2017-03-13 Thread Alexandre Rafalovitch
Silly things:

*) Should there be a LICENSE file on the GitHub repo if you want people to
play/experiment/contribute ideas?
*) This part is future, right? "Custom Java tooling will be used to
process the .adoc file metadata to build up navigation data files" - I
can't find it in the repo, but maybe I am confused.

*) Blue-sky idea: we should generate a Solr index from the Asciidoc files
as well and offer an HTML version of the manual bundled with a custom
Solr index as a demo collection. Not as part of Solr, but somewhere else
(where?).
*) Because "How will we provide search? Recommend probably indexing
generated HTML pages. Could use bin/post from Solr to recurse over the
HTML files and index them. In this case, we will need to figure out
where to host Solr." - this is slightly embarrassing.

Regards,
   Alex.


http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 13 March 2017 at 16:41, Cassandra Targett  wrote:
> It's been a while, but I (with Hoss' assistance) have been doing some
> work on the proposal I made last summer to move the Ref Guide off of
> Confluence. In my opinion, we're really close to being ready to make
> this move, and I'd like to make it before Solr 7, and maybe even by
> 6.5 (partially, at least; see below).
>
> Here's where I think we are at, and next steps I'd like to take:
>
> 1) We have reorganized the project to reflect how it can look when
> integrated with Lucene/Solr source tree (link below).
>
> The project now has 2 top-level directories, "confluence-export",
> which contains tools for conversion out of Confluence, and
> "solr-ref-guide" for source content, tools and build output of the
> current Ref Guide.
>
> Feedback on this structure is welcome.
>
> 2) We need to decide on our policy for branches. I recall there was
> valid concern about the process around this when I first proposed the
> change. I'd like to iron that out as soon as we can since that will be
> a key part of our new process.
>
> From our discussion last summer, there are 2 potential approaches:
>
> a) Make all changes in 'master' (trunk) and backport to branches for
> releasing the content. We'd need to merge "backward" into upcoming
> release branch.
> b) Make all changes in branch_6x (or branch_7x, etc.) and only move
> things to master when they are only applicable to unreleased next
> major version. We'd merge 6x "forward" when it's time for next major
> version.
>
> There might be other ideas also - we should explore them and come to a
> consensus.
>
> * To move forward on #1 and #2 here, I'll create a JIRA issue for this
> effort (finally), and then create a branch in the lucene-solr repo
> named after the JIRA issue and move the project from the current
> location to the new branch. Then I'll file sub-tasks for the remaining
> work and decisions (such as branching, conversion, publication
> processes, where it lives, etc).
>
> 3) I'd like to see if we can publish the 6.5 Solr Ref Guide PDF with
> this new approach. This would require converting all the content out
> of Confluence, but it would only require that we get the PDF-relevant
> parts of the process finalized in the next couple of weeks. The entire
> publication process would remain the same; however, the editing
> experience for new content would be radically different so that may be
> too substantial an obstacle.
>
> I know it's an ambitious idea, and could leave us in a half-way state
> with the PDF published from one source and online docs in another, but
> it may be worth trying in order to get moving on this front and iron
> out remaining issues before 7.0.
>
> I appreciate in advance your feedback.  As a reminder, you can see the
> demo site/PDF and the project repo at:
>
> http://people.apache.org/~ctargett/RefGuidePOC/
> https://github.com/ctargett/refguide-asciidoc-poc
>
>
> Thanks,
> Cassandra
>



[jira] [Created] (SOLR-10274) The search Streaming Expression should work in non-SolrCloud mode

2017-03-13 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10274:
-

 Summary: The search Streaming Expression should work in 
non-SolrCloud mode
 Key: SOLR-10274
 URL: https://issues.apache.org/jira/browse/SOLR-10274
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The *search* Streaming expression powers Solr's MapReduce queries, a large part 
of the SQL interface and graph expressions. So it would be great if it could 
work in non-SolrCloud mode as well.






Moving the Ref Guide: Progress Update & Next Steps

2017-03-13 Thread Cassandra Targett
It's been a while, but I (with Hoss' assistance) have been doing some
work on the proposal I made last summer to move the Ref Guide off of
Confluence. In my opinion, we're really close to being ready to make
this move, and I'd like to make it before Solr 7, and maybe even by
6.5 (partially, at least; see below).

Here's where I think we are at, and next steps I'd like to take:

1) We have reorganized the project to reflect how it can look when
integrated with Lucene/Solr source tree (link below).

The project now has 2 top-level directories, "confluence-export",
which contains tools for conversion out of Confluence, and
"solr-ref-guide" for source content, tools and build output of the
current Ref Guide.

Feedback on this structure is welcome.

2) We need to decide on our policy for branches. I recall there was
valid concern about the process around this when I first proposed the
change. I'd like to iron that out as soon as we can since that will be
a key part of our new process.

From our discussion last summer, there are 2 potential approaches:

a) Make all changes in 'master' (trunk) and backport to branches for
releasing the content. We'd need to merge "backward" into upcoming
release branch.
b) Make all changes in branch_6x (or branch_7x, etc.) and only move
things to master when they are only applicable to unreleased next
major version. We'd merge 6x "forward" when it's time for next major
version.

There might be other ideas also - we should explore them and come to a
consensus.

* To move forward on #1 and #2 here, I'll create a JIRA issue for this
effort (finally), and then create a branch in the lucene-solr repo
named after the JIRA issue and move the project from the current
location to the new branch. Then I'll file sub-tasks for the remaining
work and decisions (such as branching, conversion, publication
processes, where it lives, etc).

3) I'd like to see if we can publish the 6.5 Solr Ref Guide PDF with
this new approach. This would require converting all the content out
of Confluence, but it would only require that we get the PDF-relevant
parts of the process finalized in the next couple of weeks. The entire
publication process would remain the same; however, the editing
experience for new content would be radically different so that may be
too substantial an obstacle.

I know it's an ambitious idea, and could leave us in a half-way state
with the PDF published from one source and online docs in another, but
it may be worth trying in order to get moving on this front and iron
out remaining issues before 7.0.

I appreciate in advance your feedback.  As a reminder, you can see the
demo site/PDF and the project repo at:

http://people.apache.org/~ctargett/RefGuidePOC/
https://github.com/ctargett/refguide-asciidoc-poc


Thanks,
Cassandra




[jira] [Created] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-03-13 Thread David Smiley (JIRA)
David Smiley created SOLR-10273:
---

 Summary: Re-order largest field values last in Lucene Document
 Key: SOLR-10273
 URL: https://issues.apache.org/jira/browse/SOLR-10273
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 6.5


(part of umbrella issue SOLR-10117)
In Solr's {{DocumentBuilder}}, at the very end, we should move the field 
value(s) associated with the largest field (assuming it is stored) to be last. 
Lucene's default stored-fields codec can avoid reading and decompressing the 
last field value when it's not requested (as of LUCENE-6898).
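A sketch of the reordering idea (illustrative only; not the actual 
DocumentBuilder code, and using string length as a stand-in for serialized 
size):

```python
def reorder_largest_last(fields):
    """fields: list of (name, value) stored-field pairs.
    Return a new list with the pair whose value is largest moved to the
    end, preserving the relative order of the rest, so a codec that stops
    reading once all requested fields are found can skip the big one."""
    if not fields:
        return []
    biggest = max(range(len(fields)), key=lambda i: len(str(fields[i][1])))
    return [p for i, p in enumerate(fields) if i != biggest] + [fields[biggest]]

doc = [("id", "1"), ("body", "x" * 10000), ("title", "short")]
reordered = reorder_largest_last(doc)
# "body" now comes last; "id" and "title" keep their original order
```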






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+159) - Build # 3051 - Unstable!

2017-03-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3051/
Java: 32bit/jdk-9-ea+159 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info
at 
__randomizedtesting.SeedInfo.seed([3CB1719F7C30D0:88688EAB31805D28]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1176)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1117)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:977)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1018)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 725 - Still Failing

2017-03-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/725/

No tests ran.

Build Log:
[...truncated 39749 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (15.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.6 MB in 0.03 sec (1146.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 65.1 MB in 0.06 sec (1158.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.5 MB in 0.06 sec (1168.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (65.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 40.4 MB in 0.04 sec (1042.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 141.8 MB in 0.13 sec (1077.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 143.1 MB in 0.13 sec (1109.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=20892). Happy searching!
   

[jira] [Commented] (SOLR-10269) MetricsHandler JSON output incorrect

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922802#comment-15922802
 ] 

ASF subversion and git services commented on SOLR-10269:


Commit e3a0b428fd7dd8747a6b48ef165300ebb23b3198 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3a0b42 ]

SOLR-10269 MetricHandler JSON output was incorrect.


> MetricsHandler JSON output incorrect
> 
>
> Key: SOLR-10269
> URL: https://issues.apache.org/jira/browse/SOLR-10269
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.5, master (7.0), 6.4.2
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10269.patch
>
>
> Default XML output for {{/admin/metrics}} looks correct, but when 
> {{wt=json}} is used the output looks wrong:
> {code}
> ...
>   "metrics": [
> "solr.jetty",
> [
>   "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses",
>   [
> "count",
> 0,
> "meanRate",
> 0,
> "1minRate",
> 0,
> "5minRate",
> 0,
> "15minRate",
> 0
>   ],
>   "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses",
>   [
> "count",
> 6,
> "meanRate",
> 0.668669400584,
> "1minRate",
> 1.2,
> "5minRate",
> 1.2,
> "15minRate",
> 1.2
>   ],
> ...
> {code}






[jira] [Commented] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-03-13 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922798#comment-15922798
 ] 

Varun Thacker commented on SOLR-10272:
--

Hi Erick,

Okay, I should have been more descriptive here. Today the start script works 
like this when we create a collection:


{code}
~/solr-6.4.2$ ./bin/solr create -c test

Connecting to ZooKeeper at localhost:9983 ...
INFO  - 2017-03-13 12:28:04.069; 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
localhost:9983 ready
Uploading 
/Users/varunthacker/solr-6.4.2/server/solr/configsets/data_driven_schema_configs/conf
 for config test to ZooKeeper at localhost:9983

Creating new collection 'test' using command:
http://localhost:7574/solr/admin/collections?action=CREATE&name=test&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=test

{
  "responseHeader":{
"status":0,
"QTime":2555},
  "success":{"192.168.0.4:7574_solr":{
  "responseHeader":{
"status":0,
"QTime":1425},
  "core":"test_shard1_replica1"}}}
{code}

Given that this Jira adds a default configset and uses it when no 
"collection.configName" is present, we can remove this logic from the create 
command, since it will be done automatically:

{code}
Connecting to ZooKeeper at localhost:9983 ...
INFO  - 2017-03-13 12:28:04.069; 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
localhost:9983 ready
Uploading 
/Users/varunthacker/solr-6.4.2/server/solr/configsets/data_driven_schema_configs/conf
 for config test to ZooKeeper at localhost:9983

{code}

> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> This Jira's motivation is to make the collection-creation experience 
> better for users.
> To create a collection, we need to specify a configName that must be 
> present in ZK. When a new user is starting Solr, why should they have to 
> know about configsets before they can create a collection?
> When you create a collection using "bin/solr create" the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is the rough outline of what I think we can do here:
> 1. When you start solr , the bin script checks to see if 
> "/configs/_baseConfigSet" znode is present . If not it uploads the 
> "basic_configs". 
> We can discuss if its the "basic_configs" or something other default config 
> set. 
> Also we can discuss the name for "/_baseConfigSet". Moving on though
> 2. When a user creates a collection from the API  
> {{admin/collections?action=CREATE&name=gettingstarted}} here is what we do:
> Use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> over the default config set to a configset with the name of the collection 
> specified.
> collection.configName can truly be an optional parameter. If it's specified, we 
> don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.
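The fallback in step 2 can be sketched as follows (a sketch under assumptions: 
"_baseConfigSet" is the name proposed above and not final, and the helper is 
hypothetical, not an existing Solr API):

```python
from urllib.parse import urlencode

DEFAULT_CONFIGSET = "_baseConfigSet"  # proposed default name; subject to discussion

def create_collection_url(base, name, config_name=None):
    """Build a Collections API CREATE request, defaulting
    collection.configName to the base configset when none is given."""
    params = {"action": "CREATE", "name": name,
              "collection.configName": config_name or DEFAULT_CONFIGSET}
    return f"{base}/admin/collections?{urlencode(params)}"

url = create_collection_url("http://localhost:8983/solr", "gettingstarted")
```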






[jira] [Commented] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-03-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922779#comment-15922779
 ] 

Erick Erickson commented on SOLR-10272:
---

Not quite sure what you mean by point 3. Change _which_ parts of that script?
> zk upconfig|downconfig?
> create_collection?
> example startup?

I really don't want to remove any of those. I often use the example startup to 
specify a custom configset 'cause it's easy, I don't have to do the separate 
step of uploading a configset.

The first two seem to need to be kept for advanced users.

> Use a default configset and make the configName parameter optional.
> ---
>
> Key: SOLR-10272
> URL: https://issues.apache.org/jira/browse/SOLR-10272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> This Jira's motivation is to make the collection-creation experience 
> better for users.
> To create a collection, we need to specify a configName that must be 
> present in ZK. When a new user is starting Solr, why should they have to 
> know about configsets before they can create a collection?
> When you create a collection using "bin/solr create" the script uploads a 
> configset and references it. This is great. We should extend this idea to API 
> users as well.
> So here is the rough outline of what I think we can do here:
> 1. When you start solr , the bin script checks to see if 
> "/configs/_baseConfigSet" znode is present . If not it uploads the 
> "basic_configs". 
> We can discuss if its the "basic_configs" or something other default config 
> set. 
> Also we can discuss the name for "/_baseConfigSet". Moving on though
> 2. When a user creates a collection from the API  
> {{admin/collections?action=CREATE&name=gettingstarted}} here is what we do:
> Use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
> over the default config set to a configset with the name of the collection 
> specified.
> collection.configName can truly be an optional parameter. If it's specified, we 
> don't need to do this step.
> 3. Have the bin scripts use this and remove the logic built in there to do 
> the same thing.






[jira] [Comment Edited] (SOLR-10085) SQL result-set fields not in order

2017-03-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922745#comment-15922745
 ] 

Joel Bernstein edited comment on SOLR-10085 at 3/13/17 7:05 PM:


Updated the ticket to reflect the focus on SQL output only. The work done in 
this ticket will make it possible though to maintain order in streaming as 
well. We can open another ticket to discuss the streaming implementation.


was (Author: joel.bernstein):
Updated the ticket to reflect the focus on SQL output only. The work done in 
this ticket will make it possible though to maintain order in streaming as 
well, but we can open another ticket to discuss the implementation.

> SQL result-set fields not in order
> --
>
> Key: SOLR-10085
> URL: https://issues.apache.org/jira/browse/SOLR-10085
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.3
> Environment: Windows 8.1, Java 8
>Reporter: Yeo Zheng Lin
>Assignee: Joel Bernstein
>  Labels: json, streaming
>
> I'm trying out the Streaming Expressions in Solr 6.3.0. 
> Currently, I'm facing the issue of not being able to get the fields in the 
> result-set to be displayed in the same order as what I put in the query.
> For example, when I execute this query:
>  http://localhost:8983/solr/collection1/stream?expr=facet(collection1,
>   q="*:*",
>   buckets="id,cost,quantity",
>   bucketSorts="cost desc",
>   bucketSizeLimit=100,
>   sum(cost), 
>   sum(quantity),
>   min(cost), 
>   min(quantity),
>   max(cost), 
>   max(quantity),
>   avg(cost), 
>   avg(quantity),
>   count(*))=true
> I get the following in the result-set.
>{
>   "result-set":{"docs":[
>   {
> "min(quantity)":12.21,
> "avg(quantity)":12.21,
> "sum(cost)":256.33,
> "max(cost)":256.33,
> "count(*)":1,
> "min(cost)":256.33,
> "cost":256.33,
> "avg(cost)":256.33,
> "quantity":12.21,
> "id":"01",
> "sum(quantity)":12.21,
> "max(quantity)":12.21},
>   {
> "EOF":true,
> "RESPONSE_TIME":359}]}}
> The fields are displayed randomly all over the place, instead of the order 
> sum, min, max, avg given in the query. This may confuse users who look at 
> the output. A possible improvement is to display the fields in the 
> result-set in the same order as the query.
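Field scrambling like this is characteristic of tuples backed by an unordered hash map; backing them with an insertion-ordered map is the usual remedy. A standalone sketch of the difference (illustrative only, not Solr's actual Tuple class):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldOrderDemo {
    // Fill a tuple-like map with fields in the order the query asked for them.
    static Map<String, Object> tuple(Map<String, Object> backing) {
        backing.put("id", "01");
        backing.put("sum(cost)", 256.33);
        backing.put("min(cost)", 256.33);
        backing.put("max(cost)", 256.33);
        backing.put("avg(cost)", 256.33);
        backing.put("count(*)", 1);
        return backing;
    }

    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order, so serialization would
        // emit fields in query order:
        System.out.println(tuple(new LinkedHashMap<>()).keySet());
        // prints [id, sum(cost), min(cost), max(cost), avg(cost), count(*)]

        // HashMap iterates in hash order, which is why the JSON above comes
        // back scrambled:
        System.out.println(tuple(new HashMap<>()).keySet());
    }
}
```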






[jira] [Commented] (SOLR-10085) SQL result-set fields not in order

2017-03-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922745#comment-15922745
 ] 

Joel Bernstein commented on SOLR-10085:
---

Updated the ticket to reflect the focus on SQL output only. The work done in 
this ticket will make it possible though to maintain order in streaming as 
well, but we can open another ticket to discuss the implementation.

> SQL result-set fields not in order
> --
>
> Key: SOLR-10085
> URL: https://issues.apache.org/jira/browse/SOLR-10085
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.3
> Environment: Windows 8.1, Java 8
>Reporter: Yeo Zheng Lin
>Assignee: Joel Bernstein
>  Labels: json, streaming
>
> I'm trying out the Streaming Expressions in Solr 6.3.0. 
> Currently, I'm facing the issue of not being able to get the fields in the 
> result-set to be displayed in the same order as what I put in the query.
> For example, when I execute this query:
>  http://localhost:8983/solr/collection1/stream?expr=facet(collection1,
>   q="*:*",
>   buckets="id,cost,quantity",
>   bucketSorts="cost desc",
>   bucketSizeLimit=100,
>   sum(cost), 
>   sum(quantity),
>   min(cost), 
>   min(quantity),
>   max(cost), 
>   max(quantity),
>   avg(cost), 
>   avg(quantity),
>   count(*))=true
> I get the following in the result-set.
>{
>   "result-set":{"docs":[
>   {
> "min(quantity)":12.21,
> "avg(quantity)":12.21,
> "sum(cost)":256.33,
> "max(cost)":256.33,
> "count(*)":1,
> "min(cost)":256.33,
> "cost":256.33,
> "avg(cost)":256.33,
> "quantity":12.21,
> "id":"01",
> "sum(quantity)":12.21,
> "max(quantity)":12.21},
>   {
> "EOF":true,
> "RESPONSE_TIME":359}]}}
> The fields are displayed randomly all over the place, instead of the order 
> sum, min, max, avg given in the query. This may confuse users who look at 
> the output. A possible improvement is to display the fields in the 
> result-set in the same order as the query.






[jira] [Assigned] (SOLR-10085) SQL result-set fields not in order

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10085:
-

Assignee: Joel Bernstein

> SQL result-set fields not in order
> --
>
> Key: SOLR-10085
> URL: https://issues.apache.org/jira/browse/SOLR-10085
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.3
> Environment: Windows 8.1, Java 8
>Reporter: Yeo Zheng Lin
>Assignee: Joel Bernstein
>  Labels: json, streaming
>
> I'm trying out the Streaming Expressions in Solr 6.3.0. 
> Currently, I'm facing the issue of not being able to get the fields in the 
> result-set to be displayed in the same order as what I put in the query.
> For example, when I execute this query:
>  http://localhost:8983/solr/collection1/stream?expr=facet(collection1,
>   q="*:*",
>   buckets="id,cost,quantity",
>   bucketSorts="cost desc",
>   bucketSizeLimit=100,
>   sum(cost), 
>   sum(quantity),
>   min(cost), 
>   min(quantity),
>   max(cost), 
>   max(quantity),
>   avg(cost), 
>   avg(quantity),
>   count(*))=true
> I get the following in the result-set.
>{
>   "result-set":{"docs":[
>   {
> "min(quantity)":12.21,
> "avg(quantity)":12.21,
> "sum(cost)":256.33,
> "max(cost)":256.33,
> "count(*)":1,
> "min(cost)":256.33,
> "cost":256.33,
> "avg(cost)":256.33,
> "quantity":12.21,
> "id":"01",
> "sum(quantity)":12.21,
> "max(quantity)":12.21},
>   {
> "EOF":true,
> "RESPONSE_TIME":359}]}}
> The fields are displayed randomly all over the place, instead of the order 
> sum, min, max, avg given in the query. This may confuse users who look at 
> the output. A possible improvement is to display the fields in the 
> result-set in the same order as the query.






[jira] [Updated] (SOLR-10085) SQL result-set fields not in order

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10085:
--
Summary: SQL result-set fields not in order  (was: Streaming Expressions 
result-set fields not in order)

> SQL result-set fields not in order
> --
>
> Key: SOLR-10085
> URL: https://issues.apache.org/jira/browse/SOLR-10085
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.3
> Environment: Windows 8.1, Java 8
>Reporter: Yeo Zheng Lin
>  Labels: json, streaming
>
> I'm trying out the Streaming Expressions in Solr 6.3.0. 
> Currently, I'm facing the issue of not being able to get the fields in the 
> result-set to be displayed in the same order as what I put in the query.
> For example, when I execute this query:
>  http://localhost:8983/solr/collection1/stream?expr=facet(collection1,
>   q="*:*",
>   buckets="id,cost,quantity",
>   bucketSorts="cost desc",
>   bucketSizeLimit=100,
>   sum(cost), 
>   sum(quantity),
>   min(cost), 
>   min(quantity),
>   max(cost), 
>   max(quantity),
>   avg(cost), 
>   avg(quantity),
>   count(*))=true
> I get the following in the result-set.
>{
>   "result-set":{"docs":[
>   {
> "min(quantity)":12.21,
> "avg(quantity)":12.21,
> "sum(cost)":256.33,
> "max(cost)":256.33,
> "count(*)":1,
> "min(cost)":256.33,
> "cost":256.33,
> "avg(cost)":256.33,
> "quantity":12.21,
> "id":"01",
> "sum(quantity)":12.21,
> "max(quantity)":12.21},
>   {
> "EOF":true,
> "RESPONSE_TIME":359}]}}
> The fields are displayed randomly all over the place, instead of the order 
> sum, min, max, avg given in the query. This may confuse users who look at 
> the output. A possible improvement is to display the fields in the 
> result-set in the same order as the query.






[jira] [Created] (SOLR-10272) Use a default configset and make the configName parameter optional.

2017-03-13 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-10272:


 Summary: Use a default configset and make the configName parameter 
optional.
 Key: SOLR-10272
 URL: https://issues.apache.org/jira/browse/SOLR-10272
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


This Jira's motivation is to improve the collection-creation experience for 
users.

To create a collection we need to specify a configName that must already be 
present in ZK. Why should a new user starting out with Solr have to know about 
configsets before they can create a collection?

When you create a collection using "bin/solr create" the script uploads a 
configset and references it. This is great. We should extend this idea to API 
users as well.

So here is the rough outline of what I think we can do here:

1. When you start Solr, the bin script checks whether the 
"/configs/_baseConfigSet" znode is present. If not, it uploads 
"basic_configs". 

We can discuss whether it's "basic_configs" or some other default config 
set. 

Also we can discuss the name for "/_baseConfigSet". Moving on though

2. When a user creates a collection from the API  
{{admin/collections?action=CREATE&name=gettingstarted}} here is what we do :


Use https://cwiki.apache.org/confluence/display/solr/ConfigSets+API to copy 
over the default config set to a configset with the name of the collection 
specified.

collection.configName can truly be an optional parameter. If it's specified we 
don't need to do this step.

3. Have the bin scripts use this and remove the logic built in there to do the 
same thing.
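The three steps above could be sketched roughly as follows (all names, helpers, and the in-memory map standing in for ZK are hypothetical, not Solr APIs; the real copy would go through the ConfigSets API):

```java
import java.util.HashMap;
import java.util.Map;

public class CreateCollectionSketch {
    // Stand-in for the configsets stored in ZK; all names are hypothetical.
    static final Map<String, String> configSets = new HashMap<>();

    // Step 1: on startup, make sure the default configset exists.
    static void ensureBaseConfigSet() {
        configSets.putIfAbsent("_baseConfigSet", "basic_configs");
    }

    static { ensureBaseConfigSet(); }

    // Step 2: resolve the configset for a new collection. When the caller
    // omits collection.configName, copy the default configset to one named
    // after the collection.
    static String resolveConfigSet(String collection, String configName) {
        if (configName != null) {
            return configName; // explicit configName: skip the copy step
        }
        configSets.put(collection, configSets.get("_baseConfigSet"));
        return collection;
    }

    public static void main(String[] args) {
        System.out.println(resolveConfigSet("gettingstarted", null)); // prints "gettingstarted"
        System.out.println(resolveConfigSet("other", "myconf"));      // prints "myconf"
    }
}
```

Step 3 would then simply be deleting the equivalent logic from the bin scripts and calling this path instead.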








[jira] [Commented] (SOLR-9045) make RecoveryStrategy settings configurable

2017-03-13 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922720#comment-15922720
 ] 

Christine Poerschke commented on SOLR-9045:
---

Above commit is for master branch, intending to backport (tomorrow) to 
branch_6x for the upcoming 6.5 release.

> make RecoveryStrategy settings configurable
> ---
>
> Key: SOLR-9045
> URL: https://issues.apache.org/jira/browse/SOLR-9045
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9045.patch
>
>
> objectives:
>  * to allow users to change RecoveryStrategy settings such as maxRetries and 
> startingRecoveryDelay
>  * to support configuration of a custom recovery strategy e.g. SOLR-9044
> patch summary:
>  * support for optional <recoveryStrategy> solrconfig.xml element added (if 
> the element is present then its class attribute is optional)
>  * RecoveryStrategy settings now have getters/setters
>  * RecoveryStrategy.Builder added (and RecoveryStrategy constructor made 
> non-public in favour of RecoveryStrategy.Builder.create)
>  * protected RecoveryStrategy.getReplicateLeaderUrl method factored out 
> (ConfigureRecoveryStrategyTest$CustomRecoveryStrategyBuilder test illustrates 
> how SOLR-9044 might override the method)
>  * ConfigureRecoveryStrategyTest.java using 
> solrconfig-configurerecoverystrategy.xml or 
> solrconfig-customrecoverystrategy.xml
> illustrative solrconfig.xml snippets:
>  * change a RecoveryStrategy setting
> {code}
> <recoveryStrategy>
>   <int name="maxRetries">250</int>
> </recoveryStrategy>
> {code}
> * configure a custom class
> {code}
> <recoveryStrategy 
>   class="org.apache.solr.core.ConfigureRecoveryStrategyTest$CustomRecoveryStrategyBuilder">
>   recovery_base_url
> </recoveryStrategy>
> {code}
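The Builder arrangement in the patch summary (non-public constructor, construction via a Builder, an overridable hook for SOLR-9044-style customization) follows the standard builder pattern; a simplified, self-contained sketch with illustrative field names and defaults:

```java
public class RecoveryStrategySketch {
    // Simplified stand-in for RecoveryStrategy: settings have setters, and
    // construction goes through a Builder so solrconfig.xml can name a
    // custom Builder subclass. Names and defaults here are illustrative.
    static class Recovery {
        private int maxRetries = 500; // illustrative default
        Recovery() {}                 // non-public in the real class
        void setMaxRetries(int n) { maxRetries = n; }
        int getMaxRetries() { return maxRetries; }
        // Factored-out hook a SOLR-9044-style subclass could override.
        protected String getReplicateLeaderUrl(String base) { return base + "/replication"; }
    }

    static class Builder {
        private int maxRetries = 500;
        Builder withMaxRetries(int n) { this.maxRetries = n; return this; }
        // Subclasses override newRecovery() to supply a custom strategy.
        protected Recovery newRecovery() { return new Recovery(); }
        Recovery create() {
            Recovery r = newRecovery();
            r.setMaxRetries(maxRetries);
            return r;
        }
    }

    public static void main(String[] args) {
        Recovery r = new Builder().withMaxRetries(250).create();
        System.out.println(r.getMaxRetries()); // prints 250
    }
}
```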






[jira] [Commented] (SOLR-9045) make RecoveryStrategy settings configurable

2017-03-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922711#comment-15922711
 ] 

ASF subversion and git services commented on SOLR-9045:
---

Commit c8bad8c10ac52d89318932636b1e1401c314b5e4 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c8bad8c ]

SOLR-9045: Make RecoveryStrategy settings configurable.


> make RecoveryStrategy settings configurable
> ---
>
> Key: SOLR-9045
> URL: https://issues.apache.org/jira/browse/SOLR-9045
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9045.patch
>
>
> objectives:
>  * to allow users to change RecoveryStrategy settings such as maxRetries and 
> startingRecoveryDelay
>  * to support configuration of a custom recovery strategy e.g. SOLR-9044
> patch summary:
>  * support for optional <recoveryStrategy> solrconfig.xml element added (if 
> the element is present then its class attribute is optional)
>  * RecoveryStrategy settings now have getters/setters
>  * RecoveryStrategy.Builder added (and RecoveryStrategy constructor made 
> non-public in favour of RecoveryStrategy.Builder.create)
>  * protected RecoveryStrategy.getReplicateLeaderUrl method factored out 
> (ConfigureRecoveryStrategyTest$CustomRecoveryStrategyBuilder test illustrates 
> how SOLR-9044 might override the method)
>  * ConfigureRecoveryStrategyTest.java using 
> solrconfig-configurerecoverystrategy.xml or 
> solrconfig-customrecoverystrategy.xml
> illustrative solrconfig.xml snippets:
>  * change a RecoveryStrategy setting
> {code}
> <recoveryStrategy>
>   <int name="maxRetries">250</int>
> </recoveryStrategy>
> {code}
> * configure a custom class
> {code}
> <recoveryStrategy 
>   class="org.apache.solr.core.ConfigureRecoveryStrategyTest$CustomRecoveryStrategyBuilder">
>   recovery_base_url
> </recoveryStrategy>
> {code}






[jira] [Commented] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922704#comment-15922704
 ] 

Erick Erickson commented on SOLR-10229:
---

bq: Actually, I was not suggesting using kitchen sink as the actual live 
schema, just as a source to copy from

Understood, I wasn't suggesting that either. The "mother schema" would be 
loaded by the test framework exactly once before running _any_ tests. 
Individual tests could choose to add definitions from that schema as necessary.

There would be "basic schema(s)" available for use as-is, but not very many.

Tests that needed one-off additions would have the "mother schema" as a 
resource to add any custom stuff.

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straightforward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.
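A schema-construction utility along the lines proposed might look like this sketch: a "mother" catalog of canned definitions parsed once, from which each test copies only what it needs. All names and field definitions here are hypothetical; the real thing would go through the managed-schema API rather than emit raw XML.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaSketch {
    // Loaded once per test run: a catalog of canned field definitions
    // (the "mother schema"). Entries here are purely illustrative.
    static final Map<String, String> MOTHER = new LinkedHashMap<>();
    static {
        MOTHER.put("id", "<field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\"/>");
        MOTHER.put("cost", "<field name=\"cost\" type=\"pfloat\" indexed=\"true\" stored=\"true\"/>");
    }

    // Each test assembles its own small schema from the catalog, so no
    // shared static schema file needs to grow.
    static String buildSchema(String... fields) {
        StringBuilder sb = new StringBuilder("<schema name=\"test\" version=\"1.6\">\n");
        for (String f : fields) {
            String def = MOTHER.get(f);
            if (def == null) throw new IllegalArgumentException("unknown field: " + f);
            sb.append("  ").append(def).append('\n');
        }
        return sb.append("</schema>").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildSchema("id", "cost"));
    }
}
```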






[jira] [Updated] (SOLR-9779) Streaming Expressions should have better support for basic auth

2017-03-13 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9779:
---
Summary: Streaming Expressions should have better support for basic auth   
(was: Basic auth in not supported in Streaming Expressions)

> Streaming Expressions should have better support for basic auth 
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>Assignee: Kevin Risden
>  Labels: features, security
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory, there is no way to set the 
> CloudSolrClient object that could be used to set Basic Auth headers.
> The StreamContext object has a way to set the SolrClientCache object, which 
> keeps references to all the CloudSolrClient instances, where I could set a 
> reference to an HttpClient that sets the Basic Auth header. The problem, 
> however, is that there is no way to give the SolrClientCache your own 
> CloudSolrClient with BasicAuth enabled. 
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth enabled CloudSolrClient to use.
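Whatever hook ends up on StreamContext, the Basic Auth header itself is just the Base64 encoding of user:password; a standalone sketch of the value such a client would attach to each request (the credentials are the stock documentation example, purely illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Build the value of the HTTP "Authorization" header for Basic Auth.
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // A client wired into SolrClientCache would attach this header to
        // every request made on behalf of a streaming expression.
        System.out.println(basicAuth("solr", "SolrRocks")); // prints "Basic c29scjpTb2xyUm9ja3M="
    }
}
```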






[jira] [Updated] (SOLR-9779) Basic auth in not supported in Streaming Expressions

2017-03-13 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9779:
---
Fix Version/s: (was: 6.5)

> Basic auth in not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>Assignee: Kevin Risden
>  Labels: features, security
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory, there is no way to set the 
> CloudSolrClient object that could be used to set Basic Auth headers.
> The StreamContext object has a way to set the SolrClientCache object, which 
> keeps references to all the CloudSolrClient instances, where I could set a 
> reference to an HttpClient that sets the Basic Auth header. The problem, 
> however, is that there is no way to give the SolrClientCache your own 
> CloudSolrClient with BasicAuth enabled. 
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth enabled CloudSolrClient to use.






[jira] [Commented] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-13 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15922677#comment-15922677
 ] 

Alexandre Rafalovitch commented on SOLR-10229:
--

Actually, I was not suggesting using the kitchen sink as the actual live schema, 
just as a source to copy from. Running the tests on the mother schema may make 
tests too complicated to review. Having an explicit list of definitions, even if 
copied, would be a good idea in my mind.

And to simplify, it may be useful to say something like 
"copyFieldAndDefinition", so if you pull in the "id" field, it also pulls in 
the necessary definitions. Which probably still means some bridging/assistance 
code after all.

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straightforward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.






[jira] [Updated] (SOLR-10269) MetricsHandler JSON output incorrect

2017-03-13 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-10269:
-
Summary: MetricsHandler JSON output incorrect  (was: MetricsHandler JSON 
output looks weird)

> MetricsHandler JSON output incorrect
> 
>
> Key: SOLR-10269
> URL: https://issues.apache.org/jira/browse/SOLR-10269
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.5, master (7.0), 6.4.2
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10269.patch
>
>
> Default XML output for {{/admin/metrics}} looks correct, but when 
> {{wt=json}} is used the output looks wrong:
> {code}
> ...
>   "metrics": [
> "solr.jetty",
> [
>   "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses",
>   [
> "count",
> 0,
> "meanRate",
> 0,
> "1minRate",
> 0,
> "5minRate",
> 0,
> "15minRate",
> 0
>   ],
>   "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses",
>   [
> "count",
> 6,
> "meanRate",
> 0.668669400584,
> "1minRate",
> 1.2,
> "5minRate",
> 1.2,
> "15minRate",
> 1.2
>   ],
> ...
> {code}
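The output above is a NamedList flattened into alternating name/value arrays rather than JSON objects; folding such a pair list back into the map shape clients expect looks like this sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NamedListFold {
    // Fold a flattened [name, value, name, value, ...] list into a map,
    // which is the shape the /admin/metrics JSON output should have had.
    static Map<String, Object> fold(Object... flat) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (int i = 0; i + 1 < flat.length; i += 2) {
            out.put((String) flat[i], flat[i + 1]);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> m = fold("count", 6, "meanRate", 0.668669400584,
                                     "1minRate", 1.2, "5minRate", 1.2, "15minRate", 1.2);
        System.out.println(m.get("count")); // prints 6
    }
}
```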






[jira] [Commented] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907881#comment-15907881
 ] 

Erick Erickson commented on SOLR-10229:
---

Alexandre:

This is why I throw things out for discussion, because other people almost 
always have a better idea ;) 

What I like about this is that we load/parse this monster file once when the 
test run starts rather than for each test. Each test (suite?) then makes any 
changes necessary in line. I don't particularly care if that file is a "kitchen 
sink" in this scenario.

That said, I don't want to force every test to define the schema in the test 
code. So there'd still be a handful of pre-defined schema files that people 
could use as-is (the 80-20 rule): if your test is happy with the (relatively) 
small schema, just use that. 

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straightforward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.






[jira] [Updated] (SOLR-10271) SQL aggregations run in map_reduce mode should use javabin transport

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10271:
--
Attachment: SOLR-10271.patch

> SQL aggregations run in map_reduce mode should use javabin transport
> 
>
> Key: SOLR-10271
> URL: https://issues.apache.org/jira/browse/SOLR-10271
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10271.patch
>
>
> Currently the SQL interface is using json when shuffling tuples in map_reduce 
> mode. Switching to javabin will improve performance.
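The expected win is typical of text-versus-binary transports; a rough standalone comparison of one numeric field as JSON text versus a fixed-width binary double (illustrative only, not the actual javabin format):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class TransportSizeSketch {
    // Size of one numeric field serialized as JSON text: every digit and
    // every character of the field name costs a byte on the wire.
    static int jsonBytes(String name, double value) {
        return ("\"" + name + "\":" + value).getBytes(StandardCharsets.UTF_8).length;
    }

    // Size of the same value in a simple binary encoding: a fixed 8-byte
    // IEEE-754 double regardless of how many digits it prints as.
    static int binaryBytes(double value) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeDouble(value);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for in-memory streams
        }
        return buf.size();
    }

    public static void main(String[] args) {
        double meanRate = 0.6686694000000001;
        System.out.println("json:   " + jsonBytes("meanRate", meanRate) + " bytes");
        System.out.println("binary: " + binaryBytes(meanRate) + " bytes"); // prints "binary: 8 bytes"
    }
}
```

Javabin also avoids re-sending repeated field names per tuple, which is where much of the shuffle-time saving would come from.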






[jira] [Assigned] (SOLR-10271) SQL aggregations run in map_reduce mode should use javabin transport

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10271:
-

Assignee: Joel Bernstein

> SQL aggregations run in map_reduce mode should use javabin transport
> 
>
> Key: SOLR-10271
> URL: https://issues.apache.org/jira/browse/SOLR-10271
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10271.patch
>
>
> Currently the SQL interface is using json when shuffling tuples in map_reduce 
> mode. Switching to javabin will improve performance.






[jira] [Updated] (SOLR-10271) SQL aggregations in map_reduce mode should use javabin transport

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10271:
--
Summary: SQL aggregations in map_reduce mode should use javabin transport  
(was: SQL aggregations run in map_reduce mode should use javabin transport)

> SQL aggregations in map_reduce mode should use javabin transport
> 
>
> Key: SOLR-10271
> URL: https://issues.apache.org/jira/browse/SOLR-10271
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10271.patch
>
>
> Currently the SQL interface is using json when shuffling tuples in map_reduce 
> mode. Switching to javabin will improve performance.






[jira] [Comment Edited] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907830#comment-15907830
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9710 at 3/13/17 5:01 PM:
-

-Seems like this was released with 6.4. Closing out the issue.-
Edit: Ah, the commit was reverted. This is still open, with fix version as 6.5.


was (Author: ichattopadhyaya):
Seems like this was released with 6.4. Closing out the issue.

> SpellCheckComponentTest (still) occasionally fails
> --
>
> Key: SOLR-9710
> URL: https://issues.apache.org/jira/browse/SOLR-9710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 6.2.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9710.patch
>
>
> In December 2015, I addressed occasional, non-reproducible failures with the 
> Spellcheck Component tests.  These were failing with this warning:
> bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> ...and the test itself would run before the test data was committed, 
> resulting in failure.
> This problem is recurring and needs a better fix.






[jira] [Created] (SOLR-10271) SQL aggregations run in map_reduce mode should use javabin transport

2017-03-13 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10271:
-

 Summary: SQL aggregations run in map_reduce mode should use 
javabin transport
 Key: SOLR-10271
 URL: https://issues.apache.org/jira/browse/SOLR-10271
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


Currently the SQL interface is using json when shuffling tuples in map_reduce 
mode. Switching to javabin will improve performance.
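For intuition on why a binary transport helps here, a minimal sketch (plain JDK I/O, not Solr's actual JavaBinCodec) comparing the wire size of a tuple of doubles encoded as JSON-style text versus fixed-width binary:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class TransportSizeSketch {

    // Text encoding: each double rendered as characters, JSON-array style.
    static int textBytes(double[] tuple) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < tuple.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(tuple[i]);
        }
        sb.append(']');
        return sb.toString().getBytes(StandardCharsets.UTF_8).length;
    }

    // Binary encoding: a fixed 8 bytes per double, no string parsing to decode.
    static int binaryBytes(double[] tuple) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        try {
            for (double d : tuple) {
                out.writeDouble(d);
            }
        } catch (IOException e) {  // cannot happen for an in-memory stream
            throw new UncheckedIOException(e);
        }
        return buf.size();
    }

    public static void main(String[] args) {
        double[] tuple = {Math.PI, Math.E, Math.sqrt(2)};
        System.out.println("text=" + textBytes(tuple)
                + " binary=" + binaryBytes(tuple));
    }
}
```

Binary stays at 8 bytes per double and needs no string parsing on the receiving end, which is the gist of the json-to-javabin switch.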






[jira] [Updated] (SOLR-9909) Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9909:
---
Fix Version/s: (was: 6.4)
   6.5

> Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory
> 
>
> Key: SOLR-9909
> URL: https://issues.apache.org/jira/browse/SOLR-9909
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 6.5, master (7.0)
>
>
> DefaultSolrThreadFactory and SolrjNamedThreadFactory have exactly the same 
> code. Let's remove one of them.
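Both classes implement the same well-known pattern, so either can serve as the survivor. A hedged sketch of what the consolidated factory looks like (the class name is illustrative, not necessarily the one the project settled on):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the shared pattern behind both factories: threads named
// "<prefix>-<pool>-thread-<n>" so stack traces identify their pool.
public class SolrNamedThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String prefix;

    public SolrNamedThreadFactory(String namePrefix) {
        this.prefix = namePrefix + "-" + poolNumber.getAndIncrement() + "-thread-";
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + threadNumber.getAndIncrement());
        t.setDaemon(false);
        return t;
    }
}
```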






[jira] [Commented] (SOLR-10249) Allow index fetching to return a detailed result instead of a true/false value

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907838#comment-15907838
 ] 

Ishan Chattopadhyaya commented on SOLR-10249:
-

Moving to 6.5, since 6.4 has already been released.


> Allow index fetching to return a detailed result instead of a true/false value
> --
>
> Key: SOLR-10249
> URL: https://issues.apache.org/jira/browse/SOLR-10249
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 6.4.1
> Environment: Any
>Reporter: Jeff Miller
>Priority: Trivial
>  Labels: easyfix, newbie
> Fix For: 6.5
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> This gives us the ability to see into why a replication might have failed and 
> act on it if we need to.  We use this enhancement for logging conditions so 
> we can quantify what is happening with replication, get success rates, etc.
> The idea is to create a public static class IndexFetchResult as an inner 
> class to IndexFetcher that has strings that hold statuses that could occur 
> while fetching an index.
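A hedged sketch of the proposed shape (class, field, and status names here are illustrative, not Solr's actual API):

```java
// Hypothetical sketch of the IndexFetchResult idea described above.
public class IndexFetcherSketch {

    public static class IndexFetchResult {
        public static final String INDEX_FETCH_SUCCESS =
            "Fetching latest index is successful";
        public static final String PEER_INDEX_COMMIT_DELETED =
            "No files to download because index commit deleted on peer";

        private final String message;
        private final boolean successful;

        public IndexFetchResult(String message, boolean successful) {
            this.message = message;
            this.successful = successful;
        }

        public String getMessage() { return message; }
        public boolean getSuccessful() { return successful; }
    }

    // Callers can now log *why* a fetch failed instead of just seeing "false".
    static IndexFetchResult fetchLatestIndex(boolean peerCommitExists) {
        if (!peerCommitExists) {
            return new IndexFetchResult(
                IndexFetchResult.PEER_INDEX_COMMIT_DELETED, false);
        }
        return new IndexFetchResult(IndexFetchResult.INDEX_FETCH_SUCCESS, true);
    }
}
```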






[jira] [Updated] (SOLR-10249) Allow index fetching to return a detailed result instead of a true/false value

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10249:

Fix Version/s: (was: 6.4)
   6.5

> Allow index fetching to return a detailed result instead of a true/false value
> --
>
> Key: SOLR-10249
> URL: https://issues.apache.org/jira/browse/SOLR-10249
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 6.4.1
> Environment: Any
>Reporter: Jeff Miller
>Priority: Trivial
>  Labels: easyfix, newbie
> Fix For: 6.5
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> This gives us the ability to see into why a replication might have failed and 
> act on it if we need to.  We use this enhancement for logging conditions so 
> we can quantify what is happening with replication, get success rates, etc.
> The idea is to create a public static class IndexFetchResult as an inner 
> class to IndexFetcher that has strings that hold statuses that could occur 
> while fetching an index.






[jira] [Updated] (SOLR-9962) need to extend classes in org.apache.solr.client.solrj.io.stream.metrics package

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9962:
---
Fix Version/s: (was: 6.4)
   6.5

> need to extend classes in org.apache.solr.client.solrj.io.stream.metrics 
> package
> 
>
> Key: SOLR-9962
> URL: https://issues.apache.org/jira/browse/SOLR-9962
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3
>Reporter: radhakrishnan devarajan
>Priority: Trivial
> Fix For: 6.5
>
>
> I want to extend the update(Tuple tuple) method in the MaxMetric, MinMetric, 
> SumMetric, and MeanMetric classes.
> Can you please make the variables and methods mentioned below, in the 
> above-mentioned classes, protected so that they will be easy to extend?
> variables
> ---
> longMax
> doubleMax
> columnName
> and 
> methods
> ---
> init
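A hedged sketch of the request (simplified stand-ins, not Solr's real metric classes): once fields like longMax are protected rather than private, a subclass can reuse them from an overridden update method.

```java
// Simplified stand-in for a metric class; with protected members a subclass
// can build on them instead of copying the whole class.
class MaxMetricSketch {
    protected long longMax = Long.MIN_VALUE;
    protected String columnName;

    public MaxMetricSketch(String columnName) { this.columnName = columnName; }

    public void update(long value) {
        if (value > longMax) longMax = value;
    }

    public long getLongMax() { return longMax; }
}

// The kind of extension the reporter wants: same accumulator, extra behavior.
class CountingMaxMetric extends MaxMetricSketch {
    protected long updates = 0;

    public CountingMaxMetric(String columnName) { super(columnName); }

    @Override
    public void update(long value) {
        updates++;
        super.update(value);  // reuses the protected longMax directly
    }

    public long getUpdates() { return updates; }
}
```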






[jira] [Commented] (SOLR-9962) need to extend classes in org.apache.solr.client.solrj.io.stream.metrics package

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907835#comment-15907835
 ] 

Ishan Chattopadhyaya commented on SOLR-9962:


Moving to 6.5, since 6.4 has already been released.


> need to extend classes in org.apache.solr.client.solrj.io.stream.metrics 
> package
> 
>
> Key: SOLR-9962
> URL: https://issues.apache.org/jira/browse/SOLR-9962
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3
>Reporter: radhakrishnan devarajan
>Priority: Trivial
> Fix For: 6.5
>
>
> I want to extend the update(Tuple tuple) method in the MaxMetric, MinMetric, 
> SumMetric, and MeanMetric classes.
> Can you please make the variables and methods mentioned below, in the 
> above-mentioned classes, protected so that they will be easy to extend?
> variables
> ---
> longMax
> doubleMax
> columnName
> and 
> methods
> ---
> init






[jira] [Commented] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907830#comment-15907830
 ] 

Ishan Chattopadhyaya commented on SOLR-9710:


Seems like this was released with 6.4. Closing out the issue.

> SpellCheckComponentTest (still) occasionally fails
> --
>
> Key: SOLR-9710
> URL: https://issues.apache.org/jira/browse/SOLR-9710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 6.2.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9710.patch
>
>
> In December 2015, I addressed occasional, non-reproducible failures with the 
> Spellcheck Component tests.  These were failing with this warning:
> bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> ...and the test itself would run before the test data was committed, 
> resulting in failure.
> This problem is recurring and needs a better fix.






[jira] [Commented] (SOLR-9909) Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907832#comment-15907832
 ] 

Ishan Chattopadhyaya commented on SOLR-9909:


Moving to 6.5, since 6.4 has already been released.


> Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory
> 
>
> Key: SOLR-9909
> URL: https://issues.apache.org/jira/browse/SOLR-9909
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 6.4, master (7.0)
>
>
> DefaultSolrThreadFactory and SolrjNamedThreadFactory have exactly the same 
> code. Let's remove one of them.






[jira] [Updated] (SOLR-9710) SpellCheckComponentTest (still) occasionally fails

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9710:
---
Fix Version/s: (was: 6.4)
   6.5

> SpellCheckComponentTest (still) occasionally fails
> --
>
> Key: SOLR-9710
> URL: https://issues.apache.org/jira/browse/SOLR-9710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 6.2.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9710.patch
>
>
> In December 2015, I addressed occasional, non-reproducible failures with the 
> Spellcheck Component tests.  These were failing with this warning:
> bq. PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> ...and the test itself would run before the test data was committed, 
> resulting in failure.
> This problem is recurring and needs a better fix.






[jira] [Updated] (SOLR-9612) Stored field access should be avoided when it's not needed

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9612:
---
Fix Version/s: (was: 6.4)
   6.5

> Stored field access should be avoided when it's not needed
> --
>
> Key: SOLR-9612
> URL: https://issues.apache.org/jira/browse/SOLR-9612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers, search
>Affects Versions: 6.0, 6.1, 6.2
>Reporter: Takahiro Ishikawa
>Priority: Minor
>  Labels: performance
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9612.patch
>
>
> This is a small enhancement. (Unneeded stored-field access accounts for ~5% in 
> my profile results.)
> All fields named in the fl parameter (some of which are docValues-only, not 
> stored) are iterated over from stored fields, which is inefficient. 
> Furthermore, when the fl parameter names only docValues fields, we should 
> avoid accessing stored fields at all.
> I'm going to upload a conservative patch.
> This patch excludes nonStoredDocValues fields from the stored field list, and 
> if we don't need stored-field access, we skip it.
> 'Conservative' means that when the schema is dynamically changed, this patch 
> does not change behavior. (e.g. if stored field 'a' is removed from the schema 
> and a user searches with fl=a, then a is still returned from DocStreamer.)
> But I'm not sure how Solr should behave when the schema is dynamically changed.
> I think a better approach is to classify each field into one of three types 
> from the schema and process it accordingly:
>  - stored   -> fetch from stored
>  - nonStoredDocValues   -> fetch from docValues
>  - unknown  -> error or lazy field (distinguishable?)
> But this might break backward compatibility (as mentioned above).
> Any comments are welcome.
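The three-way classification proposed above can be sketched as follows (the Set-based schema lookup is a toy stand-in for Solr's IndexSchema):

```java
import java.util.Set;

public class FieldSourceSketch {
    enum FieldSource { STORED, NON_STORED_DOC_VALUES, UNKNOWN }

    // Toy stand-ins for schema lookups; real code would ask the IndexSchema.
    static FieldSource classify(String field, Set<String> stored,
                                Set<String> docValuesOnly) {
        if (stored.contains(field)) return FieldSource.STORED;
        if (docValuesOnly.contains(field)) return FieldSource.NON_STORED_DOC_VALUES;
        return FieldSource.UNKNOWN;  // e.g. removed from schema, or a lazy field
    }
}
```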






[jira] [Updated] (SOLR-9545) DataImportHandler throws NPE to logs when pk attribute is not present when delta query is used

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9545:
---
Fix Version/s: (was: 6.4)
   6.5

> DataImportHandler throws NPE to logs when pk attribute is not present when 
> delta query is used
> --
>
> Key: SOLR-9545
> URL: https://issues.apache.org/jira/browse/SOLR-9545
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 6.2.1
>Reporter: Rafał Kuć
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9545.patch
>
>
> Hi, 
> Currently, when running a delta query from the Data Import Handler and the pk 
> parameter is not specified, Solr just logs a NullPointerException, not 
> providing any information on what was expected. 
> Patch coming soon. 
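The fix presumably amounts to a guard along these lines (hypothetical method and message, not the actual patch):

```java
// Hypothetical guard: fail fast with an actionable message instead of
// letting a null pk surface later as a bare NullPointerException.
public class DeltaImportGuard {
    static String requirePk(String pk, String entityName) {
        if (pk == null) {
            throw new IllegalArgumentException(
                "Delta import requires a 'pk' attribute on entity '"
                    + entityName + "'");
        }
        return pk;
    }
}
```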






[jira] [Commented] (SOLR-9612) Stored field access should be avoided when it's not needed

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907823#comment-15907823
 ] 

Ishan Chattopadhyaya commented on SOLR-9612:


Moving to 6.5, since 6.4 has already been released.

> Stored field access should be avoided when it's not needed
> --
>
> Key: SOLR-9612
> URL: https://issues.apache.org/jira/browse/SOLR-9612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers, search
>Affects Versions: 6.0, 6.1, 6.2
>Reporter: Takahiro Ishikawa
>Priority: Minor
>  Labels: performance
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9612.patch
>
>
> This is a small enhancement. (Unneeded stored-field access accounts for ~5% in 
> my profile results.)
> All fields named in the fl parameter (some of which are docValues-only, not 
> stored) are iterated over from stored fields, which is inefficient. 
> Furthermore, when the fl parameter names only docValues fields, we should 
> avoid accessing stored fields at all.
> I'm going to upload a conservative patch.
> This patch excludes nonStoredDocValues fields from the stored field list, and 
> if we don't need stored-field access, we skip it.
> 'Conservative' means that when the schema is dynamically changed, this patch 
> does not change behavior. (e.g. if stored field 'a' is removed from the schema 
> and a user searches with fl=a, then a is still returned from DocStreamer.)
> But I'm not sure how Solr should behave when the schema is dynamically changed.
> I think a better approach is to classify each field into one of three types 
> from the schema and process it accordingly:
>  - stored   -> fetch from stored
>  - nonStoredDocValues   -> fetch from docValues
>  - unknown  -> error or lazy field (distinguishable?)
> But this might break backward compatibility (as mentioned above).
> Any comments are welcome.






[jira] [Comment Edited] (LUCENE-7540) Upgrade ICU to 58.1

2017-03-13 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907816#comment-15907816
 ] 

Uwe Schindler edited comment on LUCENE-7540 at 3/13/17 4:49 PM:


Don't forget to also "regenerate" in analyzers-common: {{ant unicode-data}} 
There we have some data extracted from the ICU file (for some tokenizers):
https://github.com/apache/lucene-solr/blob/master/lucene/analysis/common/src/java/org/apache/lucene/analysis/util/UnicodeProps.java


was (Author: thetaphi):
Don't forget to also "regenerate" in analyzers-common. There we have some data 
extracted from the ICU file (for some tokenizers):
https://github.com/apache/lucene-solr/blob/master/lucene/analysis/common/src/java/org/apache/lucene/analysis/util/UnicodeProps.java

> Upgrade ICU to 58.1
> ---
>
> Key: LUCENE-7540
> URL: https://issues.apache.org/jira/browse/LUCENE-7540
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7540.patch
>
>
> ICU is up to 58.1, but our ICU analysis components currently use 56.1, which 
> is ~1 year old by now.






[jira] [Updated] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9867:
---
Fix Version/s: (was: 6.4)
   6.5

> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9867.patch, SOLR-9867.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool gets an 
> error when it tries to create the SolrCore (again; it already exists) and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence, and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.






[jira] [Commented] (SOLR-9489) Admin UI does not show port number for collection that has only 1 shard and 1 replica

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907817#comment-15907817
 ] 

Ishan Chattopadhyaya commented on SOLR-9489:


Moving to 6.5, since 6.4 has already been released.


> Admin UI does not show port number for collection that has only 1 shard and 1 
> replica
> -
>
> Key: SOLR-9489
> URL: https://issues.apache.org/jira/browse/SOLR-9489
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9489.png
>
>
> In the Graph view, port numbers are only shown when a collection has more 
> than 1 shard or more than 1 replica (or both). But if you create a collection 
> with just 1 shard, 1 replica then only the IP/hostname is shown without the 
> port. The link for that replica is correct i.e. it points to the right 
> ip:port combination.






[jira] [Assigned] (SOLR-10270) Stop exporting _version_ during GROUP BY aggregations in map_reduce mode

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10270:
-

Assignee: Joel Bernstein

> Stop exporting _version_ during GROUP BY aggregations in map_reduce mode
> 
>
> Key: SOLR-10270
> URL: https://issues.apache.org/jira/browse/SOLR-10270
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10270.patch
>
>
> Currently the SolrTable implementation is exporting _version_ when doing 
> GROUP BY aggregations in map_reduce mode. There is no reason for this and it 
> slows things down. This ticket will stop this from occurring.






[jira] [Commented] (SOLR-9545) DataImportHandler throws NPE to logs when pk attribute is not present when delta query is used

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907819#comment-15907819
 ] 

Ishan Chattopadhyaya commented on SOLR-9545:


Moving to 6.5, since 6.4 has already been released.


> DataImportHandler throws NPE to logs when pk attribute is not present when 
> delta query is used
> --
>
> Key: SOLR-9545
> URL: https://issues.apache.org/jira/browse/SOLR-9545
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 6.2.1
>Reporter: Rafał Kuć
>Priority: Minor
> Fix For: 6.4, master (7.0)
>
> Attachments: SOLR-9545.patch
>
>
> Hi, 
> Currently, when running a delta query from the Data Import Handler and the pk 
> parameter is not specified, Solr just logs a NullPointerException, not 
> providing any information on what was expected. 
> Patch coming soon. 






[jira] [Updated] (SOLR-10270) Stop exporting _version_ during GROUP BY aggregations in map_reduce mode

2017-03-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10270:
--
Attachment: SOLR-10270.patch

> Stop exporting _version_ during GROUP BY aggregations in map_reduce mode
> 
>
> Key: SOLR-10270
> URL: https://issues.apache.org/jira/browse/SOLR-10270
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10270.patch
>
>
> Currently the SolrTable implementation is exporting _version_ when doing 
> GROUP BY aggregations in map_reduce mode. There is no reason for this and it 
> slows things down. This ticket will stop this from occurring.






[jira] [Updated] (SOLR-9489) Admin UI does not show port number for collection that has only 1 shard and 1 replica

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9489:
---
Fix Version/s: (was: 6.4)
   6.5

> Admin UI does not show port number for collection that has only 1 shard and 1 
> replica
> -
>
> Key: SOLR-9489
> URL: https://issues.apache.org/jira/browse/SOLR-9489
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9489.png
>
>
> In the Graph view, port numbers are only shown when a collection has more 
> than 1 shard or more than 1 replica (or both). But if you create a collection 
> with just 1 shard, 1 replica then only the IP/hostname is shown without the 
> port. The link for that replica is correct i.e. it points to the right 
> ip:port combination.






[jira] [Commented] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907814#comment-15907814
 ] 

Ishan Chattopadhyaya commented on SOLR-9779:


Moving to 6.5, since 6.4 has already been released.


> Basic auth is not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>Assignee: Kevin Risden
>  Labels: features, security
> Fix For: 6.5
>
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory, there is no way provided to set the 
> CloudSolrClient object, which could be used to set Basic Auth headers.
> In the StreamContext object there is a way to set the SolrClientCache object, 
> which keeps references to all the CloudSolrClient instances, where I can set a 
> reference to the HttpClient that sets the Basic Auth header. However, the 
> problem is that inside the SolrClientCache there is no way to set your own 
> version of CloudSolrClient with BasicAuth enabled. 
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth-enabled CloudSolrClient to use.






[jira] [Commented] (SOLR-9867) The Solr examples can not always be started after being stopped due to race with loading core.

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907815#comment-15907815
 ] 

Ishan Chattopadhyaya commented on SOLR-9867:


Moving to 6.5, since 6.4 has already been released.


> The Solr examples can not always be started after being stopped due to race 
> with loading core.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Fix For: 6.4, master (7.0)
>
> Attachments: SOLR-9867.patch, SOLR-9867.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool gets an 
> error when it tries to create the SolrCore (again; it already exists) and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence, and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.






[jira] [Commented] (LUCENE-7540) Upgrade ICU to 58.1

2017-03-13 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907816#comment-15907816
 ] 

Uwe Schindler commented on LUCENE-7540:
---

Don't forget to also "regenerate" in analyzers-common. There we have some data 
extracted from the ICU file (for some tokenizers):
https://github.com/apache/lucene-solr/blob/master/lucene/analysis/common/src/java/org/apache/lucene/analysis/util/UnicodeProps.java

> Upgrade ICU to 58.1
> ---
>
> Key: LUCENE-7540
> URL: https://issues.apache.org/jira/browse/LUCENE-7540
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7540.patch
>
>
> ICU is up to 58.1, but our ICU analysis components currently use 56.1, which 
> is ~1 year old by now.






[jira] [Updated] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9779:
---
Fix Version/s: (was: 6.4)
   6.5

> Basic auth is not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>Assignee: Kevin Risden
>  Labels: features, security
> Fix For: 6.5
>
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory there is no way to set the 
> CloudSolrClient object, which could be used to set Basic Auth headers.
> The StreamContext object lets me set the SolrClientCache object, which keeps 
> references to all the CloudSolrClient instances; there I could set a 
> reference to an HttpClient that adds the Basic Auth header. The problem is 
> that the SolrClientCache provides no way to supply your own CloudSolrClient 
> with Basic Auth enabled.
> I think we should expose a method in StreamContext where I can specify a 
> Basic-Auth-enabled CloudSolrClient to use.
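The requested injection point can be mocked up as follows. All class and method names here are illustrative stand-ins, not the real SolrJ types; the idea is that the caller hands the context a factory that builds clients with auth already applied, rather than the cache constructing unauthenticated clients internally:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical mock-up of the requested API (names illustrative).
class AuthAwareClientCache {
    private final Function<String, Object> clientFactory;  // e.g. builds an auth-enabled client per zkHost
    private final Map<String, Object> clients = new ConcurrentHashMap<>();

    AuthAwareClientCache(Function<String, Object> clientFactory) {
        this.clientFactory = clientFactory;
    }

    Object getClient(String zkHost) {
        return clients.computeIfAbsent(zkHost, clientFactory);
    }
}

class StreamContextSketch {
    private AuthAwareClientCache cache;

    // The kind of setter the reporter asks for: inject a cache that yields
    // Basic-Auth-enabled clients for all streams sharing this context.
    void setClientCache(AuthAwareClientCache cache) { this.cache = cache; }
    AuthAwareClientCache getClientCache() { return cache; }
}
```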






[jira] [Resolved] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-9644.

Resolution: Fixed

> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>Assignee: Anshum Gupta
>  Labels: patch
> Fix For: 6.4, master (7.0)
>
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[jira] [Created] (SOLR-10270) Stop exporting _version_ during GROUP BY aggregations in map_reduce mode

2017-03-13 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10270:
-

 Summary: Stop exporting _version_ during GROUP BY aggregations in 
map_reduce mode
 Key: SOLR-10270
 URL: https://issues.apache.org/jira/browse/SOLR-10270
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


Currently the SolrTable implementation is exporting _version_ when doing GROUP 
BY aggregations in map_reduce mode. There is no reason for this and it slows 
things down. This ticket will stop this from occurring.






[jira] [Commented] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907812#comment-15907812
 ] 

Ishan Chattopadhyaya commented on SOLR-9644:


Seems to have been released as part of 6.4. Closing it out.

> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>Assignee: Anshum Gupta
>  Labels: patch
> Fix For: 6.4, master (7.0)
>
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[jira] [Commented] (SOLR-9594) Query requests to one shard collections can switch to two-phase distributed search if they hit a node in recovery

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907808#comment-15907808
 ] 

Ishan Chattopadhyaya commented on SOLR-9594:


Moving to 6.5, since 6.4 has already been released.


> Query requests to one shard collections can switch to two-phase distributed 
> search if they hit a node in recovery
> -
>
> Key: SOLR-9594
> URL: https://issues.apache.org/jira/browse/SOLR-9594
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud
>Affects Versions: 4.10.4, 5.5.3, 6.2.1
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-medium
> Fix For: 6.5, master (7.0)
>
>
> All search requests in SolrCloud are distributed two-phase requests by 
> default, but Solr short-circuits them to the local replica/core if the 
> collection has numShards=1 and the local replica/core is active.
> If the request happens to land on a replica which isn't active, the 
> short-circuiting doesn't happen and the inactive local replica/core 
> becomes the aggregator for a full two-phase distributed request. If the 
> search components involved in the request do not support distributed search, 
> you can get weird results in such cases. This behavior is very surprising 
> because most of the time queries are short-circuited and behave as if they 
> were non-distrib queries.
> We could either:
> # Forward the request to some other node entirely, or
> # Make a call with distrib=false to another node
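The short-circuit condition described above boils down to a two-part check. A minimal sketch (illustrative, not Solr's actual code):

```java
// A request may be served purely locally only when the collection has a
// single shard AND the local replica is active; otherwise it falls back to
// a two-phase distributed request -- the surprising case in this report.
class ShortCircuitCheck {
    static boolean canServeLocally(int numShards, boolean localReplicaActive) {
        return numShards == 1 && localReplicaActive;
    }
}
```

The bug is precisely the `numShards == 1 && !localReplicaActive` combination: one shard, but a recovering replica, so the query silently goes distributed.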






[jira] [Updated] (SOLR-9593) ConcurrentDeleteAndCreateCollectionTest failures in nightly builds

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9593:
---
Fix Version/s: (was: 6.4)
   6.5

> ConcurrentDeleteAndCreateCollectionTest failures in nightly builds
> --
>
> Key: SOLR-9593
> URL: https://issues.apache.org/jira/browse/SOLR-9593
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.5, master (7.0)
>
>
> Easily reproducible failures using:
> {code}
> ant test  -Dtestcase=ConcurrentDeleteAndCreateCollectionTest 
> -Dtests.seed=DE20B06605EB2E47 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.locale=es-BO -Dtests.timezone=Pacific/Wallis -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
> [10:31:19.992] ERROR   0.00s J1 | ConcurrentDeleteAndCreateCollectionTest 
> (suite) <<<
>> Throwable #1: java.lang.AssertionError: ObjectTracker found 10 object(s) 
> that were not released!!! [InternalHttpClient, InternalHttpClient, 
> InternalHttpClient, InternalHttpClient, InternalHttpClient, 
> InternalHttpClient, InternalHttpClient, InternalHttpClient, 
> InternalHttpClient, InternalHttpClient]
> {code}
> I have only seen this fail on nightly.






[jira] [Updated] (SOLR-10269) MetricsHandler JSON output looks weird

2017-03-13 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-10269:
-
Attachment: SOLR-10269.patch

Patch. The output now looks like this:
{code}
...
  "metrics": [
"solr.jetty",
{
  "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses": {
"count": 0,
"meanRate": 0,
"1minRate": 0,
"5minRate": 0,
"15minRate": 0
  },
  "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses": {
"count": 6,
"meanRate": 0.6334427226239667,
"1minRate": 1.2,
"5minRate": 1.2,
"15minRate": 1.2
  },
...
{code}

> MetricsHandler JSON output looks weird
> --
>
> Key: SOLR-10269
> URL: https://issues.apache.org/jira/browse/SOLR-10269
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.5, master (7.0), 6.4.2
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10269.patch
>
>
> Default XML output for {{/admin/metrics}} looks correct, but when 
> {{wt=json}} is used, the output looks wrong:
> {code}
> ...
>   "metrics": [
> "solr.jetty",
> [
>   "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses",
>   [
> "count",
> 0,
> "meanRate",
> 0,
> "1minRate",
> 0,
> "5minRate",
> 0,
> "15minRate",
> 0
>   ],
>   "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses",
>   [
> "count",
> 6,
> "meanRate",
> 0.668669400584,
> "1minRate",
> 1.2,
> "5minRate",
> 1.2,
> "15minRate",
> 1.2
>   ],
> ...
> {code}
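The shape change between the buggy and fixed output can be illustrated with a small conversion (this is an illustration of the data shape only, not the actual patch code): the buggy JSON serialized each metric as a flat list of alternating names and values, whereas the fix renders it as a proper map.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Converts the buggy flat [name, value, name, value, ...] representation
// into the map representation shown in the patched output above.
class MetricShape {
    static Map<String, Object> pairsToMap(List<Object> flat) {
        Map<String, Object> m = new LinkedHashMap<>();
        for (int i = 0; i + 1 < flat.size(); i += 2) {
            m.put((String) flat.get(i), flat.get(i + 1));  // name, then value
        }
        return m;
    }
}
```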






[jira] [Updated] (SOLR-9594) Query requests to one shard collections can switch to two-phase distributed search if they hit a node in recovery

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9594:
---
Fix Version/s: (was: 6.4)
   6.5

> Query requests to one shard collections can switch to two-phase distributed 
> search if they hit a node in recovery
> -
>
> Key: SOLR-9594
> URL: https://issues.apache.org/jira/browse/SOLR-9594
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud
>Affects Versions: 4.10.4, 5.5.3, 6.2.1
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-medium
> Fix For: 6.5, master (7.0)
>
>
> All search requests in SolrCloud are distributed two-phase requests by 
> default, but Solr short-circuits them to the local replica/core if the 
> collection has numShards=1 and the local replica/core is active.
> If the request happens to land on a replica which isn't active, the 
> short-circuiting doesn't happen and the inactive local replica/core 
> becomes the aggregator for a full two-phase distributed request. If the 
> search components involved in the request do not support distributed search, 
> you can get weird results in such cases. This behavior is very surprising 
> because most of the time queries are short-circuited and behave as if they 
> were non-distrib queries.
> We could either:
> # Forward the request to some other node entirely, or
> # Make a call with distrib=false to another node






[jira] [Commented] (SOLR-9593) ConcurrentDeleteAndCreateCollectionTest failures in nightly builds

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907807#comment-15907807
 ] 

Ishan Chattopadhyaya commented on SOLR-9593:


Moving to 6.5, since 6.4 has already been released.


> ConcurrentDeleteAndCreateCollectionTest failures in nightly builds
> --
>
> Key: SOLR-9593
> URL: https://issues.apache.org/jira/browse/SOLR-9593
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.4, master (7.0)
>
>
> Easily reproducible failures using:
> {code}
> ant test  -Dtestcase=ConcurrentDeleteAndCreateCollectionTest 
> -Dtests.seed=DE20B06605EB2E47 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.locale=es-BO -Dtests.timezone=Pacific/Wallis -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
> [10:31:19.992] ERROR   0.00s J1 | ConcurrentDeleteAndCreateCollectionTest 
> (suite) <<<
>> Throwable #1: java.lang.AssertionError: ObjectTracker found 10 object(s) 
> that were not released!!! [InternalHttpClient, InternalHttpClient, 
> InternalHttpClient, InternalHttpClient, InternalHttpClient, 
> InternalHttpClient, InternalHttpClient, InternalHttpClient, 
> InternalHttpClient, InternalHttpClient]
> {code}
> I have only seen this fail on nightly.






[jira] [Updated] (SOLR-9492) Request status API returns a completed status even if the collection API call failed

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9492:
---
Fix Version/s: (was: 6.4)
   6.5

> Request status API returns a completed status even if the collection API call 
> failed
> 
>
> Key: SOLR-9492
> URL: https://issues.apache.org/jira/browse/SOLR-9492
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.2, 6.2
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-high
> Fix For: 6.5, master (7.0)
>
>
> A failed split shard response is:
> {code}
> {success={127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=2}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:50948_hfnp%2Fbq={responseHeader={status=0,QTime=0}}},c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044=/admin/cores=conf1=collection1_shard1_0_replica1=CREATE=collection1=shard1_0=javabin=2}
>  status=0 
> QTime=2},c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004=/admin/cores=conf1=collection1_shard1_1_replica1=CREATE=collection1=shard1_1=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904 webapp=null 
> path=/admin/cores 
> params={nodeName=127.0.0.1:43245_hfnp%252Fbq=collection1_shard1_1_replica1=c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904=/admin/cores=core_node6=PREPRECOVERY=true=active=true=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003 webapp=null 
> path=/admin/cores 
> params={core=collection1=c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003=/admin/cores=SPLIT=collection1_shard1_0_replica1=collection1_shard1_1_replica1=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632=/admin/cores=collection1_shard1_1_replica1=REQUESTAPPLYUPDATES=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900=/admin/cores=conf1=collection1_shard1_0_replica0=CREATE=collection1=shard1_0=javabin=2}
>  status=0 
> QTime=0},failure={127.0.0.1:43245_hfnp%2Fbq=org.apache.solr.client.solrj.SolrServerException:IOException
>  occured when talking to server at: http://127.0.0.1:43245/hfnp/bq},Operation 
> splitshard caused exception:=org.apache.solr.common.SolrException: ADDREPLICA 
> failed to create replica,exception={msg=ADDREPLICA failed to create 
> replica,rspCode=500}}
> {code}
> Note the "failure" bit. The split shard couldn't add a replica. But when you 
> use the request status API, it returns a "completed" status.
> Apparently, completed doesn't mean it was successful! In any case, it is very 
> misleading and makes it very hard to properly use the Collection APIs. We 
> need more investigation to figure out what other Collection APIs might be 
> affected.
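Until the API is fixed, a client can guard against the misleading status by inspecting the response for a failure section. A minimal sketch of that stricter interpretation (keys taken from the response shown above; this is a client-side workaround, not the proposed fix):

```java
import java.util.Map;

// Treat an async operation as failed whenever a "failure" section is present,
// regardless of the reported STATUS value.
class AsyncStatusCheck {
    static boolean trulySucceeded(Map<String, Object> response) {
        return "completed".equals(response.get("STATUS"))
                && !response.containsKey("failure");
    }
}
```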






[jira] [Commented] (SOLR-9492) Request status API returns a completed status even if the collection API call failed

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907801#comment-15907801
 ] 

Ishan Chattopadhyaya commented on SOLR-9492:


Moving to 6.5, since 6.4 has already been released.


> Request status API returns a completed status even if the collection API call 
> failed
> 
>
> Key: SOLR-9492
> URL: https://issues.apache.org/jira/browse/SOLR-9492
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.2, 6.2
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-high
> Fix For: 6.5, master (7.0)
>
>
> A failed split shard response is:
> {code}
> {success={127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=2}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:50948_hfnp%2Fbq={responseHeader={status=0,QTime=0}}},c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044=/admin/cores=conf1=collection1_shard1_0_replica1=CREATE=collection1=shard1_0=javabin=2}
>  status=0 
> QTime=2},c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004=/admin/cores=conf1=collection1_shard1_1_replica1=CREATE=collection1=shard1_1=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904 webapp=null 
> path=/admin/cores 
> params={nodeName=127.0.0.1:43245_hfnp%252Fbq=collection1_shard1_1_replica1=c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904=/admin/cores=core_node6=PREPRECOVERY=true=active=true=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003 webapp=null 
> path=/admin/cores 
> params={core=collection1=c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003=/admin/cores=SPLIT=collection1_shard1_0_replica1=collection1_shard1_1_replica1=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632=/admin/cores=collection1_shard1_1_replica1=REQUESTAPPLYUPDATES=javabin=2}
>  status=0 
> QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId:
>  c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900 webapp=null 
> path=/admin/cores 
> params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900=/admin/cores=conf1=collection1_shard1_0_replica0=CREATE=collection1=shard1_0=javabin=2}
>  status=0 
> QTime=0},failure={127.0.0.1:43245_hfnp%2Fbq=org.apache.solr.client.solrj.SolrServerException:IOException
>  occured when talking to server at: http://127.0.0.1:43245/hfnp/bq},Operation 
> splitshard caused exception:=org.apache.solr.common.SolrException: ADDREPLICA 
> failed to create replica,exception={msg=ADDREPLICA failed to create 
> replica,rspCode=500}}
> {code}
> Note the "failure" bit. The split shard couldn't add a replica. But when you 
> use the request status API, it returns a "completed" status.
> Apparently, completed doesn't mean it was successful! In any case, it is very 
> misleading and makes it very hard to properly use the Collection APIs. We 
> need more investigation to figure out what other Collection APIs might be 
> affected.






[jira] [Updated] (SOLR-9560) Solr should check max open files and other ulimits and refuse to start if they are set too low

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9560:
---
Fix Version/s: (was: 6.4)
   6.5

> Solr should check max open files and other ulimits and refuse to start if 
> they are set too low
> --
>
> Key: SOLR-9560
> URL: https://issues.apache.org/jira/browse/SOLR-9560
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>  Labels: newdev
> Fix For: 6.5, master (7.0)
>
>
> Solr should check max open files and other ulimits and refuse to start if 
> they are set too low. Specifically:
> # max open files should be at least 32768
> # max memory size and virtual memory should both be unlimited
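The requested check can be sketched as a simple validation step at startup. Thresholds come from the issue text; the method and class names are illustrative, and -1 stands in for the conventional ulimit value for "unlimited":

```java
import java.util.ArrayList;
import java.util.List;

// Collect complaints about resource limits; a non-empty result means
// Solr should refuse to start (or at least warn loudly).
class UlimitCheck {
    static final long MIN_OPEN_FILES = 32768;
    static final long UNLIMITED = -1;

    static List<String> problems(long maxOpenFiles, long maxMemory, long virtualMemory) {
        List<String> out = new ArrayList<>();
        if (maxOpenFiles != UNLIMITED && maxOpenFiles < MIN_OPEN_FILES)
            out.add("max open files is " + maxOpenFiles + ", need at least " + MIN_OPEN_FILES);
        if (maxMemory != UNLIMITED)
            out.add("max memory size should be unlimited");
        if (virtualMemory != UNLIMITED)
            out.add("virtual memory should be unlimited");
        return out;
    }
}
```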






[jira] [Commented] (SOLR-9560) Solr should check max open files and other ulimits and refuse to start if they are set too low

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907802#comment-15907802
 ] 

Ishan Chattopadhyaya commented on SOLR-9560:


Moving to 6.5, since 6.4 has already been released.


> Solr should check max open files and other ulimits and refuse to start if 
> they are set too low
> --
>
> Key: SOLR-9560
> URL: https://issues.apache.org/jira/browse/SOLR-9560
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>  Labels: newdev
> Fix For: 6.5, master (7.0)
>
>
> Solr should check max open files and other ulimits and refuse to start if 
> they are set too low. Specifically:
> # max open files should be at least 32768
> # max memory size and virtual memory should both be unlimited






[jira] [Updated] (SOLR-9483) Add SolrJ support for the modify collection API

2017-03-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9483:
---
Fix Version/s: (was: 6.4)
   6.5

> Add SolrJ support for the modify collection API
> ---
>
> Key: SOLR-9483
> URL: https://issues.apache.org/jira/browse/SOLR-9483
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrCloud, SolrJ
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, newdev
> Fix For: 6.5, master (7.0)
>
>
> SolrJ currently does not have a method corresponding to the modify collection 
> API. There should be a Modify class inside CollectionAdminRequest and a 
> simple method to change all parameters supported by the modify API.
> Link to modify API documentation: 
> https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-modifycoll





