[jira] [Commented] (SOLR-8461) CloudSolrStream and ParallelStream can choose replicas that are not active

2015-12-29 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074750#comment-15074750
 ] 

Varun Thacker commented on SOLR-8461:
-

Hi [~caomanhdat],

Patch looks great! A couple of comments -
1. With the patch we check whether the replica is active; we should also check 
that the replica's nodeName is in the live nodes list. This is required for a 
scenario like this - someone kills the node with "kill -9" or it crashes with 
an OOM; in the cluster state that replica will still show as "active". 
HttpSolrCall#getCoreByCollection does the same check.
2. Maybe move the code into a function - say "getActiveReplicasForCollection"? 
Then both CloudSolrStream and ParallelStream can reuse it - a rough sketch is 
below.
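
Something like this is what I have in mind. A sketch only, written against the 
SolrJ cloud classes (ClusterState, Slice, Replica); the method name comes from 
point 2 above, and the exact signature is an assumption, not the patch itself:

{code:title=getActiveReplicasForCollection - sketch, not the committed patch}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

public static List<Replica> getActiveReplicasForCollection(ClusterState clusterState,
                                                           String collection) {
  List<Replica> activeReplicas = new ArrayList<>();
  Set<String> liveNodes = clusterState.getLiveNodes();
  for (Slice slice : clusterState.getCollection(collection).getSlices()) {
    for (Replica replica : slice.getReplicas()) {
      // A replica that died from "kill -9" or an OOM crash can still be
      // marked ACTIVE in the cluster state, so also require its node to be
      // in the live nodes list (as HttpSolrCall#getCoreByCollection does).
      if (replica.getState() == Replica.State.ACTIVE
          && liveNodes.contains(replica.getNodeName())) {
        activeReplicas.add(replica);
      }
    }
  }
  return activeReplicas;
}
{code}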

> CloudSolrStream and ParallelStream can choose replicas that are not active
> --
>
> Key: SOLR-8461
> URL: https://issues.apache.org/jira/browse/SOLR-8461
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8461.patch, SOLR-8461.patch, SOLR-8461.patch
>
>
> Currently CloudSolrStream and ParallelStream don't check the state of the 
> replicas they route requests to. This can result in replicas that are not 
> active receiving requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15375 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15375/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
6 threads leaked from SUITE scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest:
   1) Thread[id=9406, name=TEST-CollectionsAPIDistributedZkTest.test-seed#[A0FFB43F75647302]-EventThread, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   2) Thread[id=9407, name=zkCallback-1770-thread-1, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   3) Thread[id=9662, name=zkCallback-1770-thread-4, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=9405, name=TEST-CollectionsAPIDistributedZkTest.test-seed#[A0FFB43F75647302]-SendThread(127.0.0.1:39012), state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
        at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
   5) Thread[id=9661, name=zkCallback-1770-thread-3, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   6) Thread[id=9660, name=zkCallback-1770-thread-2, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
co

[jira] [Updated] (SOLR-8461) CloudSolrStream and ParallelStream can choose replicas that are not active

2015-12-29 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8461:
---
Attachment: SOLR-8461.patch

Added unit test.

> CloudSolrStream and ParallelStream can choose replicas that are not active
> --
>
> Key: SOLR-8461
> URL: https://issues.apache.org/jira/browse/SOLR-8461
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8461.patch, SOLR-8461.patch, SOLR-8461.patch
>
>
> Currently CloudSolrStream and ParallelStream don't check the state of the 
> replicas they route requests to. This can result in replicas that are not 
> active receiving requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074676#comment-15074676
 ] 

David Smiley commented on LUCENE-6933:
--

Excellent; this is great!  I tried with another old source file too and git 
followed it.  Thanks again Dawid.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8467) CloudSolrStream and FacetStream should take a SolrParams object rather than a Map to allow more complex Solr queries to be specified

2015-12-29 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-8467:
-
Attachment: SOLR-8647.patch

Same patch with FacetStream changes, still some nocommits.


> CloudSolrStream and FacetStream should take a SolrParams object rather than a 
> Map to allow more complex Solr queries to be specified
> 
>
> Key: SOLR-8467
> URL: https://issues.apache.org/jira/browse/SOLR-8467
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8647.patch, SOLR-8647.patch
>
>
> Currently, it's impossible to, say, specify multiple "fq" clauses when using 
> Streaming Aggregation because the c'tors take a Map of params.
> Opening to discuss whether we should
> 1> deprecate the current c'tor
> and/or
> 2> add a c'tor that takes a SolrParams object instead.
> and/or
> 3> ???
> I don't see a clean way to go from a Map to a 
> (Modifiable)SolrParams, so existing code would need a significant change. I 
> hacked together a PoC, just to see if I could make CloudSolrStream take a 
> ModifiableSolrParams object instead; it passes tests, but it's so bad that 
> I'm not going to even post it. There's _got_ to be a better way to do this, 
> but at least it's possible.
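
For reference, the mechanical Map-to-params conversion is simple; the sticking 
point is the c'tor surface. A minimal sketch, assuming the current c'tors take 
a Map with one String value per key (the helper name here is illustrative, not 
part of the attached patch):

{code:title=Map to ModifiableSolrParams - sketch, not the attached patch}
import java.util.Map;

import org.apache.solr.common.params.ModifiableSolrParams;

// Bridge a legacy Map-based parameter set onto SolrParams.
public static ModifiableSolrParams toSolrParams(Map<String, String> props) {
  ModifiableSolrParams params = new ModifiableSolrParams();
  for (Map.Entry<String, String> entry : props.entrySet()) {
    // A Map carries at most one value per key - exactly why multiple "fq"
    // clauses can't be expressed through the current c'tors.
    params.set(entry.getKey(), entry.getValue());
  }
  return params;
}
{code}

With a SolrParams-based c'tor the multi-valued case becomes direct, e.g. 
params.add("fq", "field1:foo"); params.add("fq", "field2:bar");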



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-29 Thread Erick Erickson
Kudos Dawid!

On Tue, Dec 29, 2015 at 9:10 PM, Yonik Seeley (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074636#comment-15074636
>  ]
>
> Yonik Seeley commented on LUCENE-6933:
> --
>
> Thanks Dawid, awesome job!  That missing history in git made some things 
> painful for me in the past... so glad it's fixed!
>
>> Create a (cleaned up) SVN history in git
>> 
>>
>> Key: LUCENE-6933
>> URL: https://issues.apache.org/jira/browse/LUCENE-6933
>> Project: Lucene - Core
>>  Issue Type: Task
>>Reporter: Dawid Weiss
>>Assignee: Dawid Weiss
>> Attachments: migration.txt, multibranch-commits.log, tools.zip
>>
>>
>> Goals:
>> * selectively drop projects and core-irrelevant stuff:
>>   ** {{lucene/site}}
>>   ** {{lucene/nutch}}
>>   ** {{lucene/lucy}}
>>   ** {{lucene/tika}}
>>   ** {{lucene/hadoop}}
>>   ** {{lucene/mahout}}
>>   ** {{lucene/pylucene}}
>>   ** {{lucene/lucene.net}}
>>   ** {{lucene/old_versioned_docs}}
>>   ** {{lucene/openrelevance}}
>>   ** {{lucene/board-reports}}
>>   ** {{lucene/java/site}}
>>   ** {{lucene/java/nightly}}
>>   ** {{lucene/dev/nightly}}
>>   ** {{lucene/dev/lucene2878}}
>>   ** {{lucene/sandbox/luke}}
>>   ** {{lucene/solr/nightly}}
>> * preserve the history of all changes to core sources (Solr and Lucene).
>>   ** {{lucene/java}}
>>   ** {{lucene/solr}}
>>   ** {{lucene/dev/trunk}}
>>   ** {{lucene/dev/branches/branch_3x}}
>>   ** {{lucene/dev/branches/branch_4x}}
>>   ** {{lucene/dev/branches/branch_5x}}
>> * provide a way to link git commits and history with svn revisions (amend 
>> the log message).
>> * annotate release tags
>> * deal with large binary blobs (JARs): keep empty files instead for their 
>> historical reference only.
>> Non goals:
>> * no need to preserve "exact" merge history from SVN (see "impossible" 
>> below).
>> * Ability to build ancient versions is not an issue.
>> Impossible:
>> * It is not possible to preserve SVN "merge history" because of the 
>> following reasons:
>>   ** Each commit in SVN operates on individual files. So one commit can 
>> "copy" (and record a merge) files from anywhere in the object tree, even 
>> modifying them along the way. There simply is no equivalent for this in git.
>>   ** There are historical commits in SVN that apply changes to multiple 
>> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
>> one commit ({{r940806}}).
>> * Because exact merge tracking is impossible, it follows that the exact 
>> "linearized" history of a given file is also impossible to record. Let's say 
>> changes X, Y and Z have been applied to a branch of a file A and then merged 
>> back. In git, this would be reflected as a single commit flattening X, Y and 
>> Z (on the target branch) and three independent commits on the branch. The 
>> "copy-from" link from one branch to another cannot be represented because, 
>> as mentioned, merges are done on entire branches in git, not on individual 
>> files. Yes, there are commits in SVN history that have selective file merges 
>> (not entire branches).
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-29 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074636#comment-15074636
 ] 

Yonik Seeley commented on LUCENE-6933:
--

Thanks Dawid, awesome job!  That missing history in git made some things 
painful for me in the past... so glad it's fixed!

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5507 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5507/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:61347/mm_g/collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:61347/mm_g/collection1: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Scheme 'http' not registered.
        at __randomizedtesting.SeedInfo.seed([96802E86DEB800C6:1ED4115C70446D3E]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
        at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
        at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
        at org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
        at org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:85)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.

[GitHub] lucene pull request: lucence_study

2015-12-29 Thread rainforc
Github user rainforc closed the pull request at:

https://github.com/apache/lucene/pull/2


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene pull request: lucence_study

2015-12-29 Thread rainforc
GitHub user rainforc reopened a pull request:

https://github.com/apache/lucene/pull/2

lucence_study

open source study

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/lucene trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene/pull/2.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2


commit 23ce4ea43d83e3a39ab637037bdc3b533e107df4
Author: Michael McCandless 
Date:   2009-12-05T18:27:27Z

LUCENE-2037: switch to junit 4.7

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887569 
13f79535-47bb-0310-9956-ffa450edef68

commit c7791d336627ee2480c1f1ed80414303af3c103f
Author: Robert Muir 
Date:   2009-12-05T22:37:36Z

fix enwiki (and a few others) task, DocMaker was removed in 3.0

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887602 
13f79535-47bb-0310-9956-ffa450edef68

commit e805f0756d18ad39d26dbd5be8acbf3ba3a24bde
Author: Uwe Schindler 
Date:   2009-12-06T00:28:04Z

Use better format for md5sum/sha1 sum on package build (binary files should 
have * before file name). The format attribute does that automatically.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887617 
13f79535-47bb-0310-9956-ffa450edef68

commit 24161cf5ab4c855862d92f991abdbb3bc6008dbe
Author: Michael McCandless 
Date:   2009-12-06T11:41:26Z

LUCENE-2119: behave better if you pass Integer.MAX_VALUE as nDcos to search 
methods

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887670 
13f79535-47bb-0310-9956-ffa450edef68

commit 401904e98649eef78da6fae8ffe0e5ede210cc20
Author: Michael McCandless 
Date:   2009-12-06T23:56:31Z

LUCENE-2119: use numDocs() not maxDoc() as max nDocs we pass to PQ

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887803 
13f79535-47bb-0310-9956-ffa450edef68

commit 5485782bd67588e41c2e40e638e8358b8e098b73
Author: Michael McCandless 
Date:   2009-12-07T09:57:38Z

LUCENE-2119: add some more comments in PQ around the +1

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887872 
13f79535-47bb-0310-9956-ffa450edef68

commit af7aaa6868aba709503e05c040fdb568668e5451
Author: Mark Robert Miller 
Date:   2009-12-07T12:17:54Z

LUCENE-2106: ReadTask does not close its Reader when OpenReader/CloseReader 
are not used.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887899 
13f79535-47bb-0310-9956-ffa450edef68

commit 01e441fa762b56b906ab9fbd60409648ec726751
Author: Uwe Schindler 
Date:   2009-12-07T16:49:21Z

LUCENE-2103: NoLockFactory should have a private constructor

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887995 
13f79535-47bb-0310-9956-ffa450edef68

commit 82ccc82d20f78879ed726ce374f2ad027c345cbb
Author: Michael McCandless 
Date:   2009-12-07T17:55:06Z

LUCENE-1844: speed up back-compat unit tests too

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888052 
13f79535-47bb-0310-9956-ffa450edef68

commit 5b021991a4a8f31f9f3ff1de044b44f03d4d35f9
Author: Robert Muir 
Date:   2009-12-08T04:03:40Z

LUCENE-2132: Fix the demo result.jsp

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888247 
13f79535-47bb-0310-9956-ffa450edef68

commit 4d3265012baa9a25fc4455a3445a69fd184d63bc
Author: Michael McCandless 
Date:   2009-12-08T10:19:08Z

fix typo in javadocs

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888308 
13f79535-47bb-0310-9956-ffa450edef68

commit 1a63be4921abca2586c49dc68fb99ecd9ffc5e8c
Author: Michael McCandless 
Date:   2009-12-08T10:37:30Z

LUCENE-2134: give 512M to JVM generating our javadocs

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888314 
13f79535-47bb-0310-9956-ffa450edef68

commit 6d3eaa719cf31153f66ac193e422f489378cb0fa
Author: Michael McCandless 
Date:   2009-12-08T13:47:20Z

LUCENE-2136: optimization: if Multi/DirectoryReader only has a single 
reader, delegate enums to it

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888398 
13f79535-47bb-0310-9956-ffa450edef68

commit 5223c896f2d73b3ed800a37de27ee94c6461faff
Author: Uwe Schindler 
Date:   2009-12-08T15:19:21Z

LUCENE-2136: This can be reverted.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888437 
13f79535-47bb-0310-9956-ffa450edef68

commit deaf3ea2bc2339a021427117f7fa9844f1e3e3b4
Author: Uwe Schindler 
Date:   2009-12-08T22:14:32Z

LUCENE-2128: Further parallelization of ParallelMultiSearcher

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888595 
13f79535-47bb-0310-9956-ffa450edef68

commit 40a97e5782ecd56d1a89fe866f5288ec76c6f218
Author: Michael McCandless 
Date:   2009-12-08T23:05:29Z

LUCENE-2

[GitHub] lucene pull request: lucence_study

2015-12-29 Thread rainforc
Github user rainforc closed the pull request at:

https://github.com/apache/lucene/pull/2


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene pull request: lucence_study

2015-12-29 Thread rainforc
GitHub user rainforc opened a pull request:

https://github.com/apache/lucene/pull/2

lucence_study

open source study

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/lucene trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene/pull/2.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2


commit 23ce4ea43d83e3a39ab637037bdc3b533e107df4
Author: Michael McCandless 
Date:   2009-12-05T18:27:27Z

LUCENE-2037: switch to junit 4.7

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887569 
13f79535-47bb-0310-9956-ffa450edef68

commit c7791d336627ee2480c1f1ed80414303af3c103f
Author: Robert Muir 
Date:   2009-12-05T22:37:36Z

fix enwiki (and a few others) task, DocMaker was removed in 3.0

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887602 
13f79535-47bb-0310-9956-ffa450edef68

commit e805f0756d18ad39d26dbd5be8acbf3ba3a24bde
Author: Uwe Schindler 
Date:   2009-12-06T00:28:04Z

Use better format for md5sum/sha1 sum on package build (binary files should 
have * before file name). The format attribute does that automatically.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887617 
13f79535-47bb-0310-9956-ffa450edef68

commit 24161cf5ab4c855862d92f991abdbb3bc6008dbe
Author: Michael McCandless 
Date:   2009-12-06T11:41:26Z

LUCENE-2119: behave better if you pass Integer.MAX_VALUE as nDcos to search 
methods

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887670 
13f79535-47bb-0310-9956-ffa450edef68

commit 401904e98649eef78da6fae8ffe0e5ede210cc20
Author: Michael McCandless 
Date:   2009-12-06T23:56:31Z

LUCENE-2119: use numDocs() not maxDoc() as max nDocs we pass to PQ

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887803 
13f79535-47bb-0310-9956-ffa450edef68

commit 5485782bd67588e41c2e40e638e8358b8e098b73
Author: Michael McCandless 
Date:   2009-12-07T09:57:38Z

LUCENE-2119: add some more comments in PQ around the +1

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887872 
13f79535-47bb-0310-9956-ffa450edef68

commit af7aaa6868aba709503e05c040fdb568668e5451
Author: Mark Robert Miller 
Date:   2009-12-07T12:17:54Z

LUCENE-2106: ReadTask does not close its Reader when OpenReader/CloseReader 
are not used.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887899 
13f79535-47bb-0310-9956-ffa450edef68

commit 01e441fa762b56b906ab9fbd60409648ec726751
Author: Uwe Schindler 
Date:   2009-12-07T16:49:21Z

LUCENE-2103: NoLockFactory should have a private constructor

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@887995 
13f79535-47bb-0310-9956-ffa450edef68

commit 82ccc82d20f78879ed726ce374f2ad027c345cbb
Author: Michael McCandless 
Date:   2009-12-07T17:55:06Z

LUCENE-1844: speed up back-compat unit tests too

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888052 
13f79535-47bb-0310-9956-ffa450edef68

commit 5b021991a4a8f31f9f3ff1de044b44f03d4d35f9
Author: Robert Muir 
Date:   2009-12-08T04:03:40Z

LUCENE-2132: Fix the demo result.jsp

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888247 
13f79535-47bb-0310-9956-ffa450edef68

commit 4d3265012baa9a25fc4455a3445a69fd184d63bc
Author: Michael McCandless 
Date:   2009-12-08T10:19:08Z

fix typo in javadocs

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888308 
13f79535-47bb-0310-9956-ffa450edef68

commit 1a63be4921abca2586c49dc68fb99ecd9ffc5e8c
Author: Michael McCandless 
Date:   2009-12-08T10:37:30Z

LUCENE-2134: give 512M to JVM generating our javadocs

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888314 
13f79535-47bb-0310-9956-ffa450edef68

commit 6d3eaa719cf31153f66ac193e422f489378cb0fa
Author: Michael McCandless 
Date:   2009-12-08T13:47:20Z

LUCENE-2136: optimization: if Multi/DirectoryReader only has a single 
reader, delegate enums to it

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888398 
13f79535-47bb-0310-9956-ffa450edef68

commit 5223c896f2d73b3ed800a37de27ee94c6461faff
Author: Uwe Schindler 
Date:   2009-12-08T15:19:21Z

LUCENE-2136: This can be reverted.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888437 
13f79535-47bb-0310-9956-ffa450edef68

commit deaf3ea2bc2339a021427117f7fa9844f1e3e3b4
Author: Uwe Schindler 
Date:   2009-12-08T22:14:32Z

LUCENE-2128: Further parallelization of ParallelMultiSearcher

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@888595 
13f79535-47bb-0310-9956-ffa450edef68

commit 40a97e5782ecd56d1a89fe866f5288ec76c6f218
Author: Michael McCandless 
Date:   2009-12-08T23:05:29Z

LUCENE-213

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1061 - Still Failing

2015-12-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1061/

3 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties were stopped - this is worth investigating!
        at __randomizedtesting.SeedInfo.seed([EA3B02E14E63D573:626F3D3BE09FB88B]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:560)
        at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$Statement

[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15371 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15371/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseSerialGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {
  "collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"8000-",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:46756",
            "node_name":"127.0.0.1:46756_",
            "state":"active",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:46791",
            "node_name":"127.0.0.1:46791_",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:40346",
            "node_name":"127.0.0.1:40346_",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "control_collection":{
    "replicationFactor":"1",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:41466",
            "node_name":"127.0.0.1:41466_",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "c8n_1x2":{
    "replicationFactor":"2",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"c8n_1x2_shard1_replica2",
            "base_url":"http://127.0.0.1:46791",
            "node_name":"127.0.0.1:46791_",
            "state":"active",
            "leader":"true"},
          "core_node2":{
            "core":"c8n_1x2_shard1_replica1",
            "base_url":"http://127.0.0.1:40346",
            "node_name":"127.0.0.1:40346_",
            "state":"recovering"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"},
  "collMinRf_1x3":{
    "replicationFactor":"3",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collMinRf_1x3_shard1_replica2",
            "base_url":"http://127.0.0.1:46791",
            "node_name":"127.0.0.1:46791_",
            "state":"active"},
          "core_node2":{
            "core":"collMinRf_1x3_shard1_replica3",
            "base_url":"http://127.0.0.1:40346",
            "node_name":"127.0.0.1:40346_",
            "state":"active"},
          "core_node3":{
            "core":"collMinRf_1x3_shard1_replica1",
            "base_url":"http://127.0.0.1:41466",
            "node_name":"127.0.0.1:41466_",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:46756";,
"node_name":"127.0.0.1:46756_",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:46791";,
"node_name":"127.0.0.1:46791_",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:40346";,
"node_name":"127.0.0.1:40346_",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:41466";,
"node_name":"127.0.0.1:41466_",
"state":"active",
"leader":"true",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"aut

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 294 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/294/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 seconds
        at __randomizedtesting.SeedInfo.seed([803EE5263009886B:CE9D90F521D2997B]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:837)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
        at org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:62)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 15074 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15074/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Doc Counts do not add up expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Doc Counts do not add up expected:<1> but was:<0>
        at __randomizedtesting.SeedInfo.seed([8B9FD36758F420BC:3CBECBDF6084D44]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:472)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.assertDocCounts(AbstractFullDistribZkTestBase.java:1368)
        at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.addUpdateDelete(FullSolrCloudDistribCmdsTest.java:487)
        at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:82)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures.j

[jira] [Resolved] (LUCENE-2229) SimpleSpanFragmenter fails to start a new fragment

2015-12-29 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-2229.
--
Resolution: Fixed

Thanks Elmer & Lukhnos!

> SimpleSpanFragmenter fails to start a new fragment
> --
>
> Key: LUCENE-2229
> URL: https://issues.apache.org/jira/browse/LUCENE-2229
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Elmer Garduno
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-2229.patch, LUCENE-2229.patch, LUCENE-2229.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> SimpleSpanFragmenter fails to identify a new fragment when there is more than 
> one stop word after a span is detected. This problem can be observed when the 
> Query contains a PhraseQuery.
> The problem is that the span extends toward the end of the TokenGroup. This 
> is because {{waitForPos = positionSpans.get(i).end + 1;}} together with {{position += 
> posIncAtt.getPositionIncrement();}} can generate a value of {{position}} 
> greater than the value of {{waitForPos}}, so {{(waitForPos == position)}} 
> never matches.
> {code:title=SimpleSpanFragmenter.java}
>   public boolean isNewFragment() {
> position += posIncAtt.getPositionIncrement();
> if (waitForPos == position) {
>   waitForPos = -1;
> } else if (waitForPos != -1) {
>   return false;
> }
> WeightedSpanTerm wSpanTerm = 
> queryScorer.getWeightedSpanTerm(termAtt.term());
> if (wSpanTerm != null) {
>   List positionSpans = wSpanTerm.getPositionSpans();
>   for (int i = 0; i < positionSpans.size(); i++) {
> if (positionSpans.get(i).start == position) {
>   waitForPos = positionSpans.get(i).end + 1;
>   break;
> }
>   }
> }
>...
> {code}
> An example is provided in the test case for the following Document and the 
> query *"all tokens"* followed by the words _of a_.
> {panel:title=Document}
> "Attribute instances are reused for *all tokens* _of a_ document. Thus, a 
> TokenStream/-Filter needs to update the appropriate Attribute(s) in 
> incrementToken(). The consumer, commonly the Lucene indexer, consumes the 
> data in the Attributes and then calls incrementToken() again until it returns 
> false, which indicates that the end of the stream was reached. This means 
> that in each call of incrementToken() a TokenStream/-Filter can safely 
> overwrite the data in the Attribute instances."
> {panel}
> {code:title=HighlighterTest.java}
>  public void testSimpleSpanFragmenter() throws Exception {
> ...
> doSearching("\"all tokens\"");
> maxNumFragmentsRequired = 2;
> 
> scorer = new QueryScorer(query, FIELD_NAME);
> highlighter = new Highlighter(this, scorer);
> for (int i = 0; i < hits.totalHits; i++) {
>   String text = searcher.doc(hits.scoreDocs[i].doc).get(FIELD_NAME);
>   TokenStream tokenStream = analyzer.tokenStream(FIELD_NAME, new 
> StringReader(text));
>   highlighter.setTextFragmenter(new SimpleSpanFragmenter(scorer, 20));
>   String result = highlighter.getBestFragments(tokenStream, text,
>   maxNumFragmentsRequired, "...");
>   System.out.println("\t" + result);
> }
>   }
> {code}
> {panel:title=Result}
> are reused for all tokens of a document. Thus, a 
> TokenStream/-Filter needs to update the appropriate Attribute(s) in 
> incrementToken(). The consumer, commonly the Lucene indexer, consumes the 
> data in the Attributes and then calls incrementToken() again until it returns 
> false, which indicates that the end of the stream was reached. This means 
> that in each call of incrementToken() a TokenStream/-Filter can safely 
> overwrite the data in the Attribute instances.
> {panel}
> {panel:title=Expected Result}
> for all tokens of a document
> {panel}
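
Since the analyzer removes stop words, the position increment of the token
following them is greater than one, so {{position}} can jump from below
{{waitForPos}} to beyond it without ever being equal to it, and the equality
check above never fires. A minimal sketch of one way to repair the check,
treating reaching or passing {{waitForPos}} as the end of the wait (an
illustration of the idea, not necessarily the committed patch):

{code:title=Sketch of a possible fix}
  public boolean isNewFragment() {
    position += posIncAtt.getPositionIncrement();
    // Changed '==' to '<=': a position increment that jumps past waitForPos
    // (e.g. over adjacent stop words) still ends the wait for the span.
    if (waitForPos <= position) {
      waitForPos = -1;
    } else if (waitForPos != -1) {
      return false;
    }
    ...
  }
{code}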



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2229) SimpleSpanFragmenter fails to start a new fragment

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074347#comment-15074347
 ] 

ASF subversion and git services commented on LUCENE-2229:
-

Commit 1722242 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722242 ]

LUCENE-2229: Fix SimpleSpanFragmenter bug with adjacent stop-words

> SimpleSpanFragmenter fails to start a new fragment
> --
>
> Key: LUCENE-2229
> URL: https://issues.apache.org/jira/browse/LUCENE-2229
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Elmer Garduno
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-2229.patch, LUCENE-2229.patch, LUCENE-2229.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> SimpleSpanFragmenter fails to identify a new fragment when there is more than 
> one stop word after a span is detected. This problem can be observed when the 
> Query contains a PhraseQuery.
> The problem is that the span extends toward the end of the TokenGroup. This 
> is because {{waitForPos = positionSpans.get(i).end + 1;}} and {{position += 
> posIncAtt.getPositionIncrement();}} can leave {{position}} greater than 
> {{waitForPos}}, so the check {{(waitForPos == position)}} never matches.
> {code:title=SimpleSpanFragmenter.java}
>   public boolean isNewFragment() {
>     position += posIncAtt.getPositionIncrement();
>     if (waitForPos == position) {
>       waitForPos = -1;
>     } else if (waitForPos != -1) {
>       return false;
>     }
>     WeightedSpanTerm wSpanTerm = queryScorer.getWeightedSpanTerm(termAtt.term());
>     if (wSpanTerm != null) {
>       List<PositionSpan> positionSpans = wSpanTerm.getPositionSpans();
>       for (int i = 0; i < positionSpans.size(); i++) {
>         if (positionSpans.get(i).start == position) {
>           waitForPos = positionSpans.get(i).end + 1;
>           break;
>         }
>       }
>     }
>     ...
> {code}
> An example is provided in the test case for the following Document and the 
> query *"all tokens"* followed by the words _of a_.
> {panel:title=Document}
> "Attribute instances are reused for *all tokens* _of a_ document. Thus, a 
> TokenStream/-Filter needs to update the appropriate Attribute(s) in 
> incrementToken(). The consumer, commonly the Lucene indexer, consumes the 
> data in the Attributes and then calls incrementToken() again until it returns 
> false, which indicates that the end of the stream was reached. This means 
> that in each call of incrementToken() a TokenStream/-Filter can safely 
> overwrite the data in the Attribute instances."
> {panel}
> {code:title=HighlighterTest.java}
>   public void testSimpleSpanFragmenter() throws Exception {
>     ...
>     doSearching("\"all tokens\"");
>     maxNumFragmentsRequired = 2;
>
>     scorer = new QueryScorer(query, FIELD_NAME);
>     highlighter = new Highlighter(this, scorer);
>     for (int i = 0; i < hits.totalHits; i++) {
>       String text = searcher.doc(hits.scoreDocs[i].doc).get(FIELD_NAME);
>       TokenStream tokenStream = analyzer.tokenStream(FIELD_NAME, new StringReader(text));
>       highlighter.setTextFragmenter(new SimpleSpanFragmenter(scorer, 20));
>       String result = highlighter.getBestFragments(tokenStream, text,
>           maxNumFragmentsRequired, "...");
>       System.out.println("\t" + result);
>     }
>   }
> {code}
> {panel:title=Result}
> are reused for all tokens of a document. Thus, a 
> TokenStream/-Filter needs to update the appropriate Attribute(s) in 
> incrementToken(). The consumer, commonly the Lucene indexer, consumes the 
> data in the Attributes and then calls incrementToken() again until it returns 
> false, which indicates that the end of the stream was reached. This means 
> that in each call of incrementToken() a TokenStream/-Filter can safely 
> overwrite the data in the Attribute instances.
> {panel}
> {panel:title=Expected Result}
> for all tokens of a document
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2229) SimpleSpanFragmenter fails to start a new fragment

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074345#comment-15074345
 ] 

ASF subversion and git services commented on LUCENE-2229:
-

Commit 1722241 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1722241 ]

LUCENE-2229: Fix SimpleSpanFragmenter bug with adjacent stop-words

> SimpleSpanFragmenter fails to start a new fragment
> --
>
> Key: LUCENE-2229
> URL: https://issues.apache.org/jira/browse/LUCENE-2229
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Elmer Garduno
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-2229.patch, LUCENE-2229.patch, LUCENE-2229.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> SimpleSpanFragmenter fails to identify a new fragment when there is more than 
> one stop word after a span is detected. This problem can be observed when the 
> Query contains a PhraseQuery.
> The problem is that the span extends toward the end of the TokenGroup. This 
> is because {{waitForPos = positionSpans.get(i).end + 1;}} and {{position += 
> posIncAtt.getPositionIncrement();}} can leave {{position}} greater than 
> {{waitForPos}}, so the check {{(waitForPos == position)}} never matches.
> {code:title=SimpleSpanFragmenter.java}
>   public boolean isNewFragment() {
>     position += posIncAtt.getPositionIncrement();
>     if (waitForPos == position) {
>       waitForPos = -1;
>     } else if (waitForPos != -1) {
>       return false;
>     }
>     WeightedSpanTerm wSpanTerm = queryScorer.getWeightedSpanTerm(termAtt.term());
>     if (wSpanTerm != null) {
>       List<PositionSpan> positionSpans = wSpanTerm.getPositionSpans();
>       for (int i = 0; i < positionSpans.size(); i++) {
>         if (positionSpans.get(i).start == position) {
>           waitForPos = positionSpans.get(i).end + 1;
>           break;
>         }
>       }
>     }
>     ...
> {code}
> An example is provided in the test case for the following Document and the 
> query *"all tokens"* followed by the words _of a_.
> {panel:title=Document}
> "Attribute instances are reused for *all tokens* _of a_ document. Thus, a 
> TokenStream/-Filter needs to update the appropriate Attribute(s) in 
> incrementToken(). The consumer, commonly the Lucene indexer, consumes the 
> data in the Attributes and then calls incrementToken() again until it returns 
> false, which indicates that the end of the stream was reached. This means 
> that in each call of incrementToken() a TokenStream/-Filter can safely 
> overwrite the data in the Attribute instances."
> {panel}
> {code:title=HighlighterTest.java}
>   public void testSimpleSpanFragmenter() throws Exception {
>     ...
>     doSearching("\"all tokens\"");
>     maxNumFragmentsRequired = 2;
>
>     scorer = new QueryScorer(query, FIELD_NAME);
>     highlighter = new Highlighter(this, scorer);
>     for (int i = 0; i < hits.totalHits; i++) {
>       String text = searcher.doc(hits.scoreDocs[i].doc).get(FIELD_NAME);
>       TokenStream tokenStream = analyzer.tokenStream(FIELD_NAME, new StringReader(text));
>       highlighter.setTextFragmenter(new SimpleSpanFragmenter(scorer, 20));
>       String result = highlighter.getBestFragments(tokenStream, text,
>           maxNumFragmentsRequired, "...");
>       System.out.println("\t" + result);
>     }
>   }
> {code}
> {panel:title=Result}
> are reused for all tokens of a document. Thus, a 
> TokenStream/-Filter needs to update the appropriate Attribute(s) in 
> incrementToken(). The consumer, commonly the Lucene indexer, consumes the 
> data in the Attributes and then calls incrementToken() again until it returns 
> false, which indicates that the end of the stream was reached. This means 
> that in each call of incrementToken() a TokenStream/-Filter can safely 
> overwrite the data in the Attribute instances.
> {panel}
> {panel:title=Expected Result}
> for all tokens of a document
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2922 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2922/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62544 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:794: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:674: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:661: Source checkout 
is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 92 minutes 21 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-29 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6933.
-
Resolution: Fixed

Ready. Whenever we decide to switch, it's there.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history", for the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-29 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074305#comment-15074305
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Thanks for pointing out the problem, David. The cause of the issue was Steve's 
rename-and-merge a long time ago... very complex, not worth mentioning. I fixed 
it with some manual tweaks and updated the repo (your local clone will be 
invalid and will contain stale refs, fetch a fresh one).

https://github.com/dweiss/lucene-solr-svn2git

The migration procedure is 100% repeatable and I can roll out an up-to-date 
copy any time. It looks super good to me. I did not size-optimize anything 
except JAR files so that releases and diffs between commits are true. I don't 
think it's worth the trouble; a clone from github on my machine slurps a few 
mb/s.

I think this issue is ready and I'm closing it.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history", for the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15370 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15370/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

All tests passed

Build Log:
[...truncated 53415 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:784: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:664: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:651: Source checkout 
is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 63 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Javascript testing in the Lucene/Solr build process

2015-12-29 Thread Upayavira
Please see SOLR-8473 [1] and SOLR-8474 [2], even if you are a Lucene dev
rather than a Solr one.

I am proposing to add unit/functional tests to the Solr Admin User
Interface. 

Whether you use the admin UI or not, if you want to run the full suite
of tests (e.g. before a release) you will need to have node/npm
installed, along with the Chrome browser.

If you run automated tests on a build server (and you want to run these
tests) you will require npm/node as above, plus either Chrome/X, or
(assuming I manage to get it working) PhantomJS, which will remove the
dependency upon a user interface.

Please comment/object/praise/etc. on the tickets, whether you are a Solr
dev or not, as I want to try to do this in a way that suits us all.

Upayavira
[1] https://issues.apache.org/jira/browse/SOLR-8473
[2] https://issues.apache.org/jira/browse/SOLR-8474

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 754 - Still Failing

2015-12-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/754/

All tests passed

Build Log:
[...truncated 62107 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/build.xml:784:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/build.xml:664:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/build.xml:651:
 Source checkout is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 79 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (SOLR-8474) Test Framework for functional testing Angular UI

2015-12-29 Thread Upayavira (JIRA)
Upayavira created SOLR-8474:
---

 Summary: Test Framework for functional testing Angular UI
 Key: SOLR-8474
 URL: https://issues.apache.org/jira/browse/SOLR-8474
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 5.4
Reporter: Upayavira
Assignee: Upayavira


The Solr UI has no tests. This is less than ideal. This ticket is aimed at 
facilitating discussion around a test framework for functional/end-to-end 
testing of components within the Angular UI.

Having a unit testing framework will encourage developers of the UI to make 
more modular, and thus hopefully cleaner, code, as well as providing a means to 
identify regressions.

For functional testing, I am proposing a Karma/Protractor/Jasmine combination.

Karma runs the tests, as with the unit testing framework; Protractor interacts 
with the pages via a programmable browser (click here, enter there, confirm 
that); and Jasmine provides a BDD-style syntax for constructing the tests 
themselves.

My proposal is that, for functional tests, we will fire up a full Solr server 
via the existing test framework, then invoke Karma/Protractor within that 
context. That will mean that the functional tests will be interacting with a 
real Solr instance, presumably with real data in it.

Karma/Protractor/Jasmine can be installed via npm, which would become a 
dependency for the Lucene/Solr build process, as for SOLR-8473.

As with SOLR-8473, there will be a dependency on either Chrome (and a UI such 
as X) or a UI-less browser such as PhantomJS.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15369 - Still Failing!

2015-12-29 Thread Robert Muir
I committed a fix.

On Tue, Dec 29, 2015 at 12:33 PM, Policeman Jenkins Server wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15369/
> Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseParallelGC 
> -XX:-CompactStrings
>
> All tests passed
>
> Build Log:
> [...truncated 53293 lines...]
> BUILD FAILED
> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:784: The following 
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:664: The following 
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:651: Source 
> checkout is dirty after running tests!!! Offending files:
> * ./lucene/licenses/junit4-ant-2.3.2.jar.sha1
>
> Total time: 56 minutes 34 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6953) clean up lucene-test-framework dependencies

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074239#comment-15074239
 ] 

ASF subversion and git services commented on LUCENE-6953:
-

Commit 1722233 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722233 ]

LUCENE-6953: remove bogus sha1/LICENSE, nothing depends on this anymore

> clean up lucene-test-framework dependencies
> ---
>
> Key: LUCENE-6953
> URL: https://issues.apache.org/jira/browse/LUCENE-6953
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6953.patch
>
>
> The current ivy configuration leads to the (wrong) belief that 
> lucene-test-framework depends on junit4-ant and ant itself.
> It confuses e.g. 'ant eclipse' (look and you will see those jars in 
> classpath), and lists these as dependencies in published maven poms, etc.
> But it really does not depend on junit4-ant at all; it works fine with other 
> test runners (e.g. IDE runners). What depends on junit4-ant is our build 
> itself, and to taskdef the task the build can just use an inline ivy cachepath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6953) clean up lucene-test-framework dependencies

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074238#comment-15074238
 ] 

ASF subversion and git services commented on LUCENE-6953:
-

Commit 1722232 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1722232 ]

LUCENE-6953: remove bogus sha1/LICENSE, nothing depends on this anymore

> clean up lucene-test-framework dependencies
> ---
>
> Key: LUCENE-6953
> URL: https://issues.apache.org/jira/browse/LUCENE-6953
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6953.patch
>
>
> The current ivy configuration leads to the (wrong) belief that 
> lucene-test-framework depends on junit4-ant and ant itself.
> It confuses e.g. 'ant eclipse' (look and you will see those jars in 
> classpath), and lists these as dependencies in published maven poms, etc.
> But it really does not depend on junit4-ant at all; it works fine with other 
> test runners (e.g. IDE runners). What depends on junit4-ant is our build 
> itself, and to taskdef the task the build can just use an inline ivy cachepath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8473) Test Framework for Unit Testing Angular UI

2015-12-29 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-8473:

Description: 
The Solr UI has no tests. This is less than ideal. This ticket is aimed at 
facilitating discussion around a test framework for unit testing 
components within the Angular UI.

Having a unit testing framework will encourage developers of the UI to make 
more modular, and thus hopefully cleaner, code, as well as providing a means to 
identify regressions.

The test framework I am proposing is a Karma/Jasmine combination: Karma runs 
the tests; Jasmine provides a BDD-style framework for expressing them.

Karma and Jasmine can be installed via npm. This would add npm as a dependency 
for the Lucene/Solr build process, at least at release time.

Karma runs its tests within a browser. I will use, by default, Chrome. This is 
a bigger deal, as it will make our tests dependent upon a UI layer, such as X 
on Unix. 

I have looked into PhantomJS, which is essentially the JavaScript portion of a 
browser without the UI dependency; this would appear a much better solution 
for the headless scenario. However, I have yet to get it to work (on MacOS). 
My next task is to try it in a Linux VM.

> Test Framework for Unit Testing Angular UI
> --
>
> Key: SOLR-8473
> URL: https://issues.apache.org/jira/browse/SOLR-8473
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 5.4
>Reporter: Upayavira
>Assignee: Upayavira
>
> The Solr UI has no tests. This is less than ideal. This ticket is aimed at 
> facilitating discussion around a test framework for unit testing 
> components within the Angular UI.
> Having a unit testing framework will encourage developers of the UI to make 
> more modular, and thus hopefully cleaner, code, as well as providing a means 
> to identify regressions.
> The test framework I am proposing is a Karma/Jasmine combination: Karma runs 
> the tests; Jasmine provides a BDD-style framework for expressing them.
> Karma and Jasmine can be installed via npm. This would add npm as a 
> dependency for the Lucene/Solr build process, at least at release time.
> Karma runs its tests within a browser. I will use, by default, Chrome. This 
> is a bigger deal, as it will make our tests dependent upon a UI layer, such 
> as X on Unix. 
> I have looked into PhantomJS, which is essentially the JavaScript portion of 
> a browser without the UI dependency; this would appear a much better 
> solution for the headless scenario. However, I have yet to get it to work 
> (on MacOS). My next task is to try it in a Linux VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8473) Test Framework for Unit Testing Angular UI

2015-12-29 Thread Upayavira (JIRA)
Upayavira created SOLR-8473:
---

 Summary: Test Framework for Unit Testing Angular UI
 Key: SOLR-8473
 URL: https://issues.apache.org/jira/browse/SOLR-8473
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 5.4
Reporter: Upayavira
Assignee: Upayavira






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2976 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2976/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62172 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:784: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:664: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:651: Source 
checkout is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 91 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3893 - Still Failing

2015-12-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3893/

All tests passed

Build Log:
[...truncated 62550 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:794: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:674: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:661: 
Source checkout is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 87 minutes 55 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074167#comment-15074167
 ] 

ASF subversion and git services commented on SOLR-7462:
---

Commit 176 from [~noble.paul] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r176 ]

SOLR-7462: AIOOBE in RecordingJSONParser

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }
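
The boundary condition is easy to check with concrete numbers (hypothetical
values, not taken from the report): when {{getStart() + size()}} equals the
backing array's length, the {{>=}} guard passes but the peek at that index is
one past the end. A minimal, self-contained sketch of the bug and the
corrected {{>}} guard:

{code:title=BoundaryDemo.java}
public class BoundaryDemo {
  public static void main(String[] args) {
    char[] array = new char[8]; // backing buffer
    int start = 4, size = 4;    // start + size == array.length == 8

    // Buggy guard: passes when start + size == array.length,
    // yet array[start + size] would throw ArrayIndexOutOfBoundsException.
    boolean buggy = array.length >= start + size;   // true
    // Corrected guard: only peek when an element really exists at that index.
    boolean fixed = array.length > start + size;    // false

    System.out.println("buggy guard lets us peek: " + buggy);
    System.out.println("fixed guard lets us peek: " + fixed);
  }
}
{code}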



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7462.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074160#comment-15074160
 ] 

ASF subversion and git services commented on SOLR-7462:
---

Commit 171 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r171 ]

SOLR-7462: AIOOBE in RecordingJSONParser

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Fix For: 5.3.2
>
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074159#comment-15074159
 ] 

ASF subversion and git services commented on SOLR-7462:
---

Commit 1722218 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1722218 ]

SOLR-7462: AIOOBE in RecordingJSONParser

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Fix For: 5.3.2
>
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_66) - Build # 5376 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5376/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseSerialGC

7 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([94904188D9E2CCE4:1CC47E52771EA11C]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at org.apache.solr.cloud.SyncSliceTest.test(SyncSliceTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15073 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15073/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 62670 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:794: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:674: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:661: Source checkout is 
dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 66 minutes 14 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-8472) Improve debuggability of chaos tests

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074128#comment-15074128
 ] 

ASF subversion and git services commented on SOLR-8472:
---

Commit 1722199 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722199 ]

SOLR-8472: tests - fix NPE in chaos tests debug hook

> Improve debuggability of chaos tests
> 
>
> Key: SOLR-8472
> URL: https://issues.apache.org/jira/browse/SOLR-8472
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
>
> This is sort of a temporary catch-all for improving the tests themselves and 
> improving logging to enable easier debugging of chaos-type failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8472) Improve debuggability of chaos tests

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074115#comment-15074115
 ] 

ASF subversion and git services commented on SOLR-8472:
---

Commit 1722197 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1722197 ]

SOLR-8472: tests - fix NPE in chaos tests debug hook

> Improve debuggability of chaos tests
> 
>
> Key: SOLR-8472
> URL: https://issues.apache.org/jira/browse/SOLR-8472
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
>
> This is sort of a temporary catch-all for improving the tests themselves and 
> improving logging to enable easier debugging of chaos-type failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6950) DimensionalRangeQuery not working with UninvertingReader

2015-12-29 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6950.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

Thanks Ishan!

> DimensionalRangeQuery not working with UninvertingReader
> 
>
> Key: LUCENE-6950
> URL: https://issues.apache.org/jira/browse/LUCENE-6950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6950.patch, LUCENE-6950.patch, LUCENE-6950.patch, 
> LUCENE-6950.patch
>
>
> As I was trying out dimensional fields for SOLR-8396, I realized that 
> DimensionalRangeQuery is not working with UninvertingReader. 
> In Solr, all directory readers are wrapped by an UninvertingReader and an 
> ExitableDirectoryReader. 
> Here's the error:
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: field="rating" 
> was indexed with numDims=0 but this query has numDims=1
>   at 
> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:186)
>   at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:593)
>   at 
> org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:451)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:462)
>   at 
> DimensionalRangeQueryExample.query(DimensionalRangeQueryExample.java:66)
> {code}
> Here's an example program to trigger this failure:
> {code}
> import java.io.IOException;
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Random;
> import org.apache.lucene.analysis.standard.StandardAnalyzer;
> import org.apache.lucene.document.DimensionalIntField;
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.Field;
> import org.apache.lucene.document.Field.Store;
> import org.apache.lucene.document.LegacyIntField;
> import org.apache.lucene.document.StringField;
> import org.apache.lucene.document.TextField;
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.index.IndexWriter;
> import org.apache.lucene.index.IndexWriterConfig;
> import org.apache.lucene.index.StoredDocument;
> import org.apache.lucene.queryparser.classic.ParseException;
> import org.apache.lucene.search.DimensionalRangeQuery;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.LegacyNumericRangeQuery;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.ScoreDoc;
> import org.apache.lucene.search.TopDocs;
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.RAMDirectory;
> import org.apache.lucene.uninverting.UninvertingReader;
> import org.apache.lucene.util.BytesRef;
> public class DimensionalRangeQueryExample {
>   public static void main(String[] args) throws IOException, 
> ParseException {
>   StandardAnalyzer analyzer = new StandardAnalyzer();
>   Directory index = new RAMDirectory();
>   IndexWriterConfig config = new IndexWriterConfig(analyzer);
>   IndexWriter w = new IndexWriter(index, config);
>   addDoc(w, "Lucene in Action", 1);
>   addDoc(w, "Lucene for Dummies", 2);
>   addDoc(w, "Managing Gigabytes", 3);
>   addDoc(w, "The Art of Computer Science", 4);
>   w.commit();
>   w.close();
>   DirectoryReader reader = (DirectoryReader.open(index));
>   Map<String, UninvertingReader.Type> uninvertingMap = new HashMap<>();
>   uninvertingMap.put("id", UninvertingReader.Type.BINARY);
>   uninvertingMap.put("rating", UninvertingReader.Type.INTEGER);
>   reader = UninvertingReader.wrap(reader, uninvertingMap);
>   IndexSearcher searcher = new IndexSearcher(reader);
>   Query legacyQuery = 
> LegacyNumericRangeQuery.newIntRange("rating_legacy", 1, 4, true, true);
>   Query dimensionalQuery = 
> DimensionalRangeQuery.new1DIntRange("rating", 1, true, 4, true);
>   System.out.println("Legacy query: ");
>   query(legacyQuery, searcher); // works
>   System.out.println("Dimensional query: ");
>   query(dimensionalQuery, searcher); // fails
>   
>   reader.close();
>   }
>   private static void query(Query q, IndexSearcher searcher) throws 
> IOException {
>   int hitsPerPage = 10;
>   TopDocs docs = searcher.search(q, hitsPerPage);
>   ScoreDoc[] hits = doc
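
The quoted example is cut off above, mid-way through the {{query}} helper. A
hypothetical reconstruction of the two missing helpers, inferred only from the
imports and calls in the example (names, signatures and bodies here are
assumptions, not the reporter's code):

{code:title=Hypothetical completion of the example}
  private static void query(Query q, IndexSearcher searcher) throws IOException {
    int hitsPerPage = 10;
    TopDocs docs = searcher.search(q, hitsPerPage);
    ScoreDoc[] hits = docs.scoreDocs;
    System.out.println("Found " + hits.length + " hits.");
    for (ScoreDoc hit : hits) {
      StoredDocument d = searcher.doc(hit.doc); // trunk-era stored-document API
      System.out.println(d.get("title"));
    }
  }

  private static void addDoc(IndexWriter w, String title, int rating) throws IOException {
    Document doc = new Document();
    doc.add(new TextField("title", title, Store.YES));
    doc.add(new StringField("id", "id-" + rating, Store.YES));
    // Index the rating both ways so the legacy and dimensional queries can be compared:
    doc.add(new LegacyIntField("rating_legacy", rating, Store.YES));
    doc.add(new DimensionalIntField("rating", rating));
    w.addDocument(doc);
  }
{code}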

[jira] [Commented] (LUCENE-6950) DimensionalRangeQuery not working with UninvertingReader

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074111#comment-15074111
 ] 

ASF subversion and git services commented on LUCENE-6950:
-

Commit 1722196 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722196 ]

LUCENE-6950: Fix FieldInfos handling of UninvertingReader

> DimensionalRangeQuery not working with UninvertingReader
> 
>
> Key: LUCENE-6950
> URL: https://issues.apache.org/jira/browse/LUCENE-6950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-6950.patch, LUCENE-6950.patch, LUCENE-6950.patch, 
> LUCENE-6950.patch
>
>
> As I was trying out dimensional fields for SOLR-8396, I realized that 
> DimensionalRangeQuery is not working with UninvertingReader. 
> In Solr, all directory readers are wrapped by an UninvertingReader and an 
> ExitableDirectoryReader. 
> Here's the error:
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: field="rating" 
> was indexed with numDims=0 but this query has numDims=1
>   at 
> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:186)
>   at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:593)
>   at 
> org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:451)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:462)
>   at 
> DimensionalRangeQueryExample.query(DimensionalRangeQueryExample.java:66)
> {code}
> Here's an example program to trigger this failure:
> {code}
> import java.io.IOException;
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Random;
> import org.apache.lucene.analysis.standard.StandardAnalyzer;
> import org.apache.lucene.document.DimensionalIntField;
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.Field;
> import org.apache.lucene.document.Field.Store;
> import org.apache.lucene.document.LegacyIntField;
> import org.apache.lucene.document.StringField;
> import org.apache.lucene.document.TextField;
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.index.IndexWriter;
> import org.apache.lucene.index.IndexWriterConfig;
> import org.apache.lucene.index.StoredDocument;
> import org.apache.lucene.queryparser.classic.ParseException;
> import org.apache.lucene.search.DimensionalRangeQuery;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.LegacyNumericRangeQuery;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.ScoreDoc;
> import org.apache.lucene.search.TopDocs;
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.RAMDirectory;
> import org.apache.lucene.uninverting.UninvertingReader;
> import org.apache.lucene.util.BytesRef;
> public class DimensionalRangeQueryExample {
>   public static void main(String[] args) throws IOException, 
> ParseException {
>   StandardAnalyzer analyzer = new StandardAnalyzer();
>   Directory index = new RAMDirectory();
>   IndexWriterConfig config = new IndexWriterConfig(analyzer);
>   IndexWriter w = new IndexWriter(index, config);
>   addDoc(w, "Lucene in Action", 1);
>   addDoc(w, "Lucene for Dummies", 2);
>   addDoc(w, "Managing Gigabytes", 3);
>   addDoc(w, "The Art of Computer Science", 4);
>   w.commit();
>   w.close();
>   DirectoryReader reader = (DirectoryReader.open(index));
>   Map<String, UninvertingReader.Type> uninvertingMap = new HashMap<>();
>   uninvertingMap.put("id", UninvertingReader.Type.BINARY);
>   uninvertingMap.put("rating", UninvertingReader.Type.INTEGER);
>   reader = UninvertingReader.wrap(reader, uninvertingMap);
>   IndexSearcher searcher = new IndexSearcher(reader);
>   Query legacyQuery = 
> LegacyNumericRangeQuery.newIntRange("rating_legacy", 1, 4, true, true);
>   Query dimensionalQuery = 
> DimensionalRangeQuery.new1DIntRange("rating", 1, true, 4, true);
>   System.out.println("Legacy query: ");
>   query(legacyQuery, searcher); // works
>   System.out.println("Dimensional query: ");
>   query(dimensionalQuery, searcher); // fails
>   
>   reader.close();
>   }
>   private static void query(Query q, IndexSearcher searcher) throws 
> IOException {
>  

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15369 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15369/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

All tests passed

Build Log:
[...truncated 53293 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:784: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:664: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:651: Source checkout 
is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 56 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2015-12-29 Thread David de Kleer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074100#comment-15074100
 ] 

David de Kleer commented on SOLR-7739:
--

Hi Alessandro,
Thank you for your quick response, that's nice! :D
With kind regards,
David

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5506 - Still Failing!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5506/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, TransactionLog]
at __randomizedtesting.SeedInfo.seed([6191B1D8E0BEEE05]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:229)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B1D8E0BEEE05-001\tempDir-002\node1\testCloudExamplePrompt_shard1_replica1\data\tlog\tlog.000:
 java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B1D8E0BEEE05-001\tempDir-002\node1\testCloudExamplePrompt_shard1_replica1\data\tlog\tlog.000:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B1D8E0BEEE05-001\tempDir-002\node1\testCloudExamplePrompt_shard1_replica1\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B1D8E0BEEE05-001\tempDir-002\node1\testCloudExamplePrompt_shard1_replica1\data\tlog

C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B1D8E0BEEE05-001\tempDir-002\node1\testCloudExamplePrompt_shard1_replica1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B1D8E0BEEE05-001\tempDir-002\node1\testCloudExamplePrompt_shard1_replica1\data

C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.util.TestSolrCLIRunExample_6191B

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 753 - Failure

2015-12-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/753/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:42289/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42289/awholynewcollection_0: non ok status: 
500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([EB0D45CF48E17032:63597A15E61D1DCA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(S

[jira] [Commented] (SOLR-8471) Different responses between solr java client and solr web, when request for range subfacets

2015-12-29 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074074#comment-15074074
 ] 

Yonik Seeley commented on SOLR-8471:


Is this just a display issue?
It looks like the values are parsed into Date objects in the Java client, and 
then whatever you are using to print them out is applying the local timezone?
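
If so, here is a minimal standalone sketch (not from the ticket) showing the 
difference between Date.toString() and formatting the same Date in UTC:
{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateDisplayCheck {
  public static void main(String[] args) {
    // 2015-01-01T00:00:00Z as epoch millis
    Date val = new Date(1420070400000L);
    SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
    utc.setTimeZone(TimeZone.getTimeZone("UTC"));
    // Date.toString() uses the JVM default timezone; in ART (UTC-3) this
    // prints "Wed Dec 31 21:00:00 ART 2014", matching the "shifted" output.
    System.out.println(val);
    // Formatting in UTC reproduces the web response: 2015-01-01T00:00:00Z
    System.out.println(utc.format(val));
  }
}
{code}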

> Different responses between solr java client and solr web, when request for 
> range subfacets
> ---
>
> Key: SOLR-8471
> URL: https://issues.apache.org/jira/browse/SOLR-8471
> Project: Solr
>  Issue Type: Bug
> Environment: Solr 5.2.1
>Reporter: Pablo Anzorena
>
> I make a request using json.facet via the Solr Java client and the Solr web 
> UI, and the responses are different.
> This is the request:
> {
>   date_fulll: {
> type: range,
> field: date_full,
> start: "2015-01-01T00:00:00Z",
> end: "2015-12-31T00:00:00Z",
> gap: "+3MONTHS",
> mincount: 1,
> hardend: true,
> other: none,
> facet: {
>   Teus: "sum(mostrar_cant_teus)"
> }
>   },
>   Teus: "sum(mostrar_cant_teus)"
> }
> The response from the web is OK
> "buckets": [
> {
> "val": "2015-01-01T00:00:00Z",
> "count": 10225817,
> "Teus": 14606647.969191335
> },
> {
> "val": "2015-04-01T00:00:00Z",
> "count": 11107807,
> "Teus": 16075736.60429075
> },
> {
> "val": "2015-07-01T00:00:00Z",
> "count": 11450051,
> "Teus": 16654022.338799914
> },
> {
> "val": "2015-10-01T00:00:00Z",
> "count": 9232776,
> "Teus": 13341092.767605131
> }
> ]
> But the response from Java subtracts 3 hours from each bucket. Here is the 
> Java response:
> {count=42016451,Teus=6.0677499681092925E7,date_full={buckets=[{val=Wed Dec 31 
> 21:00:00 ART 2014,count=10225817,Teus=1.4606647969191335E7}, {val=Tue Mar 31 
> 21:00:00 ART 2015,count=11107807,Teus=1.607573660429075E7}, {val=Tue Jun 30 
> 21:00:00 ART 2015,count=11450051,Teus=1.6654022338799914E7}, {val=Wed Sep 30 
> 21:00:00 ART 2015,count=9232776,Teus=1.3341092767605131E7}]}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8472) Improve debuggability of chaos tests

2015-12-29 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074060#comment-15074060
 ] 

Yonik Seeley commented on SOLR-8472:


First thing to fix is what looks like a test bug in the diagnostics callback:
{code}
  2> 75068 ERROR (qtp216026101-203) [n:127.0.0.1:41127_bxls%2Fs c:collection1 
s:shard2 r:core_node1 x:collection1] o.a.s.c.Diagnostics TEST HOOK EXCEPTION
  2> java.lang.NullPointerException
  2>at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest$1.call(ChaosMonkeyNothingIsSafeTest.java:66)
  2>at org.apache.solr.core.Diagnostics.call(Diagnostics.java:35)
  2>at 
org.apache.solr.update.SolrCmdDistributor.doRetriesIfNeeded(SolrCmdDistributor.java:112)
 {code}

> Improve debuggability of chaos tests
> 
>
> Key: SOLR-8472
> URL: https://issues.apache.org/jira/browse/SOLR-8472
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
>
> This is sort of a temporary catch-all for improving the tests themselves and 
> improving logging to enable easier debugging of chaos-type fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8472) Improve debuggability of chaos tests

2015-12-29 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8472:
--

 Summary: Improve debuggability of chaos tests
 Key: SOLR-8472
 URL: https://issues.apache.org/jira/browse/SOLR-8472
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley


This is sort of a temporary catch-all for improving the tests themselves and 
improving logging to enable easier debugging of chaos-type fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8471) Different responses between solr java client and solr web, when request for range subfacets

2015-12-29 Thread Pablo Anzorena (JIRA)
Pablo Anzorena created SOLR-8471:


 Summary: Different responses between solr java client and solr 
web, when request for range subfacets
 Key: SOLR-8471
 URL: https://issues.apache.org/jira/browse/SOLR-8471
 Project: Solr
  Issue Type: Bug
 Environment: Solr 5.2.1
Reporter: Pablo Anzorena


I make a request using json.facet via the Solr Java client and the Solr web 
UI, and the responses are different.
This is the request:

{
  date_fulll: {
type: range,
field: date_full,
start: "2015-01-01T00:00:00Z",
end: "2015-12-31T00:00:00Z",
gap: "+3MONTHS",
mincount: 1,
hardend: true,
other: none,
facet: {
  Teus: "sum(mostrar_cant_teus)"
}
  },
  Teus: "sum(mostrar_cant_teus)"
}

The response from the web is OK
"buckets": [
{
"val": "2015-01-01T00:00:00Z",
"count": 10225817,
"Teus": 14606647.969191335
},
{
"val": "2015-04-01T00:00:00Z",
"count": 11107807,
"Teus": 16075736.60429075
},
{
"val": "2015-07-01T00:00:00Z",
"count": 11450051,
"Teus": 16654022.338799914
},
{
"val": "2015-10-01T00:00:00Z",
"count": 9232776,
"Teus": 13341092.767605131
}
]

But the response from Java subtracts 3 hours from each bucket. Here is the Java 
response:
{count=42016451,Teus=6.0677499681092925E7,date_full={buckets=[{val=Wed Dec 31 
21:00:00 ART 2014,count=10225817,Teus=1.4606647969191335E7}, {val=Tue Mar 31 
21:00:00 ART 2015,count=11107807,Teus=1.607573660429075E7}, {val=Tue Jun 30 
21:00:00 ART 2015,count=11450051,Teus=1.6654022338799914E7}, {val=Wed Sep 30 
21:00:00 ART 2015,count=9232776,Teus=1.3341092767605131E7}]}}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3892 - Failure

2015-12-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3892/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [RawDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [RawDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([159D601B22B44E33]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11168 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_159D601B22B44E33-001/init-core-data-001
   [junit4]   2> 1590230 INFO  
(SUITE-HttpPartitionTest-seed#[159D601B22B44E33]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1590234 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1590241 INFO  (Thread-5580) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1590241 INFO  (Thread-5580) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1590339 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.ZkTestServer start zk server on port:56108
   [junit4]   2> 1590339 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1590340 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1590342 INFO  (zkCallback-1253-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@266c8031 
name:ZooKeeperConnection Watcher:127.0.0.1:56108 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1590342 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1590343 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1590343 INFO  
(TEST-HttpPartitionTest.test-seed#[159D601B22B44E33]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 1590345 INFO  
(TEST-Htt

[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 290 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/290/
Java: multiarch/jdk1.7.0 -d64 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 55628 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/build.xml:794: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/build.xml:674: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/build.xml:661: Source 
checkout is dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 107 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_66) - Build # 15072 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15072/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 62786 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:794: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:674: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:661: Source checkout is 
dirty after running tests!!! Offending files:
* ./lucene/licenses/junit4-ant-2.3.2.jar.sha1

Total time: 65 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2015-12-29 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074019#comment-15074019
 ] 

Alessandro Benedetti commented on SOLR-7739:


Hi David,
tomorrow I will update this patch and ping the committers for feedback!
The Lucene side has already been officially integrated!

Cheers

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6950) DimensionalRangeQuery not working with UninvertingReader

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074016#comment-15074016
 ] 

ASF subversion and git services commented on LUCENE-6950:
-

Commit 1722165 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1722165 ]

LUCENE-6950: Fix FieldInfos handling of UninvertingReader

> DimensionalRangeQuery not working with UninvertingReader
> 
>
> Key: LUCENE-6950
> URL: https://issues.apache.org/jira/browse/LUCENE-6950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-6950.patch, LUCENE-6950.patch, LUCENE-6950.patch, 
> LUCENE-6950.patch
>
>
> As I was trying out dimensional fields for SOLR-8396, I realized that 
> DimensionalRangeQuery is not working with UninvertingReader. 
> In Solr, all directory readers are wrapped by an UninvertingReader and an 
> ExitableDirectoryReader. 
> Here's the error:
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: field="rating" 
> was indexed with numDims=0 but this query has numDims=1
>   at 
> org.apache.lucene.search.DimensionalRangeQuery$1.scorer(DimensionalRangeQuery.java:186)
>   at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:667)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:474)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:593)
>   at 
> org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:451)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:462)
>   at 
> DimensionalRangeQueryExample.query(DimensionalRangeQueryExample.java:66)
> {code}
> Here's an example program to trigger this failure:
> {code}
> import java.io.IOException;
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Random;
> import org.apache.lucene.analysis.standard.StandardAnalyzer;
> import org.apache.lucene.document.DimensionalIntField;
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.Field;
> import org.apache.lucene.document.Field.Store;
> import org.apache.lucene.document.LegacyIntField;
> import org.apache.lucene.document.StringField;
> import org.apache.lucene.document.TextField;
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.index.IndexWriter;
> import org.apache.lucene.index.IndexWriterConfig;
> import org.apache.lucene.index.StoredDocument;
> import org.apache.lucene.queryparser.classic.ParseException;
> import org.apache.lucene.search.DimensionalRangeQuery;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.LegacyNumericRangeQuery;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.ScoreDoc;
> import org.apache.lucene.search.TopDocs;
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.RAMDirectory;
> import org.apache.lucene.uninverting.UninvertingReader;
> import org.apache.lucene.util.BytesRef;
> public class DimensionalRangeQueryExample {
>   public static void main(String[] args) throws IOException, 
> ParseException {
>   StandardAnalyzer analyzer = new StandardAnalyzer();
>   Directory index = new RAMDirectory();
>   IndexWriterConfig config = new IndexWriterConfig(analyzer);
>   IndexWriter w = new IndexWriter(index, config);
>   addDoc(w, "Lucene in Action", 1);
>   addDoc(w, "Lucene for Dummies", 2);
>   addDoc(w, "Managing Gigabytes", 3);
>   addDoc(w, "The Art of Computer Science", 4);
>   w.commit();
>   w.close();
>   DirectoryReader reader = (DirectoryReader.open(index));
>   Map<String,UninvertingReader.Type> uninvertingMap = new 
> HashMap<>();
>   uninvertingMap.put("id", UninvertingReader.Type.BINARY);
>   uninvertingMap.put("rating", UninvertingReader.Type.INTEGER);
>   reader = UninvertingReader.wrap(reader, uninvertingMap);
>   IndexSearcher searcher = new IndexSearcher(reader);
>   Query legacyQuery = 
> LegacyNumericRangeQuery.newIntRange("rating_legacy", 1, 4, true, true);
>   Query dimensionalQuery = 
> DimensionalRangeQuery.new1DIntRange("rating", 1, true, 4, true);
>   System.out.println("Legacy query: ");
>   query(legacyQuery, searcher); // works
>   System.out.println("Dimensional query: ");
>   query(dimensionalQuery, searcher); // fails
>   
>   reader.close();
>   }
>   private static void query(Query q, IndexSearcher searcher) throws 
> IOException {
>   int 
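
The digest truncates the quoted example at this point; based on its imports, 
the missing addDoc helper was presumably along these lines (a hypothetical 
reconstruction, not the original code):
{code}
// Hypothetical reconstruction of the truncated addDoc helper, inferred from
// the imports in the quoted example above. Constructor signatures are
// assumptions based on the trunk API of this era.
private static void addDoc(IndexWriter w, String title, int rating) throws IOException {
  Document doc = new Document();
  doc.add(new TextField("title", title, Store.YES));
  doc.add(new StringField("id", Integer.toString(rating), Store.YES));
  // Dimensional (BKD-tree) field queried by DimensionalRangeQuery:
  doc.add(new DimensionalIntField("rating", rating));
  // Legacy trie field queried by LegacyNumericRangeQuery:
  doc.add(new LegacyIntField("rating_legacy", rating, Store.NO));
  w.addDocument(doc);
}
{code}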

[jira] [Commented] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2015-12-29 Thread David de Kleer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074008#comment-15074008
 ] 

David de Kleer commented on SOLR-7739:
--

An easy way to classify text from within Solr would be a very nice feature! 
Could you please take another look at this? I tried to apply this patch but 
didn't succeed, because the patch it builds on (6631) was changed/updated in 
the meantime.

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8458) Add Streaming Expressions tests for parameter substitution

2015-12-29 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8458:
---
Attachment: SOLR-8458.patch

New patch based on the suggestion from [~dpgove]
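
For reference, here is a concrete (hypothetical) expansion of the sample 
syntax quoted below; the collection, fields, and queries are invented:
{code}
# Hypothetical request, spelling out the macro expansion:
http://localhost:8983/solr/col/stream?expr=merge(${left}, ${right}, on="id asc")
    &left=search(col, q="year:2015", fl="id,year", sort="id asc")
    &right=search(col, q="year:2014", fl="id,year", sort="id asc")

# After substitution, the expression the server actually parses is:
merge(search(col, q="year:2015", fl="id,year", sort="id asc"),
      search(col, q="year:2014", fl="id,year", sort="id asc"),
      on="id asc")
{code}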

> Add Streaming Expressions tests for parameter substitution
> --
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8458.patch, SOLR-8458.patch, SOLR-8458.patch
>
>
> This ticket is to add Streaming Expression tests that exercise the existing 
> macro expansion feature described here:  
> http://yonik.com/solr-query-parameter-substitution/
> Sample syntax below:
> {code}
> http://localhost:8983/col/stream?expr=merge(${left}, ${right}, 
> ...)&left=search(...)&right=search(...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7462:
-
Fix Version/s: 5.3.2

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Fix For: 5.3.2
>
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }
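
The proposed one-character fix from the description would make the check 
strict (a sketch of the corrected block):
{code}
// Strict '>' guarantees getStart() + size() is a valid index into the array,
// so the peek at the next character can no longer go out of bounds.
if (chars.getArray().length > chars.getStart() + chars.size()) {
  char next = chars.getArray()[chars.getStart() + chars.size()];
  if (next == '"' || next == '\'') {
    recordChar(next);
  }
}
{code}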



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 5.3.2 bug fix release

2015-12-29 Thread Anshum Gupta
Thanks for committing those David.

I just brought the 5.3.2 change log section in sync on all the branches.

I have also committed everything but I want to also commit SOLR-8470 before
cutting the RC (unless someone objects). In case someone has something else
to back port, kindly do so today.


On Mon, Dec 28, 2015 at 8:38 PM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> Yikes; I’m sure that was painful.
>
> So I just back-ported a couple issues, SOLR-8340 & SOLR-8059.  I was about
> to manually keep the 5.3.2 section in branch_5x & trunk in sync but then
> thought better of it.  Might as well wait until 5.3.2 is voted and then do
> it, since there are bound to be others who want to do the same, so why
> bother with the intermediate bookkeeping.
>
> On Mon, Dec 28, 2015 at 8:13 AM Anshum Gupta 
> wrote:
>
>> Sorry for the long delay but I burnt my hand and so have been MIA. It's
>> better now so I'll port the issues and cut an RC on Wednesday.
>>
>> On Wed, Dec 23, 2015 at 9:22 PM, Anshum Gupta 
>> wrote:
>>
>>> I've added the section for 5.3.2 in all the branches. Kindly back-port
>>> stuff that you think makes sense to go into a 'bug-fix' release for 5.3.1
>>> only.
>>>
>>> I think it'd make sense to duplicate entries for JIRAs we back port.
>>>
>>> On Mon, Dec 21, 2015 at 11:38 AM, Anshum Gupta 
>>> wrote:
>>>
 Seems like Noble ran addVersion.py for 5.3.2 on the lucene_solr_5_3
 branch during the 5.3.1 release.
 I can now run it for branch_5x and trunk with the old change id but
 there are a ton of property changes to multiple files. Can someone confirm
 that it'd be fine? The addVersion on 5.3.2, that I'm trying to merge onto
 branch_5x and trunk was done before 5.4 was released.

 Also, the change log entry for 5.3.2 is right above 5.3.1 and not
 chronological i.e. at the top. I think that is how it should be unless
 someone has some different ideas.

 On Thu, Dec 17, 2015 at 2:42 AM, Shawn Heisey 
 wrote:

> On 12/16/2015 1:08 PM, Anshum Gupta wrote:
> > There are a bunch of important bug fixes that call for a 5.3.2 in my
> > opinion. I'm specifically talking about security plugins related
> fixes
> > but I'm sure there are others too.
> >
> > Unless someone else wants to do it, I'd volunteer to do the release
> > and cut an RC next Tuesday.
>
> Sounds like a reasonable idea to me.  I assume these must be fixes that
> are not yet backported.
>
> I happen to have the 5.3 branch on my dev system, with SOLR-6188
> applied.  It is already up to date.  There's nothing in the 5.3.2
> section of either CHANGES.txt file.  The svn log indicates that nothing
> has been backported since the 5.3.1 release was cut.
>
> Perhaps SOLR-6188 could be added to the list of fixes to backport.  I
> believe it's a benign change.
>
> Thinking about CHANGES.txt, this might work for the 5.3 branch:
>
> 
> === Lucene 5.3.2 ===
> All changes were backported from 5.4.0.
>
> Bug Fixes
>
> * LUCENE-: A description (Committer Name)
> 
>
> If we decide it's a good idea to mention the release in trunk and
> branch_5x, something like the following might work, because that file
> should already contain the full change descriptions:
>
> 
> === Lucene 5.3.2 ===
> The following issues were backported from 5.4.0:
> LUCENE-
> LUCENE-
> 
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


 --
 Anshum Gupta

>>>
>>>
>>>
>>> --
>>> Anshum Gupta
>>>
>>
>>
>>
>> --
>> Anshum Gupta
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>



-- 
Anshum Gupta


[jira] [Commented] (SOLR-8470) Make PKIAuthPlugin's token's TTL configurable

2015-12-29 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073971#comment-15073971
 ] 

Anshum Gupta commented on SOLR-8470:


Considering this isn't really a bug or even a new feature, I am debating 
whether we should add this to 5.3.2. I'm inclined towards putting it in 5.3.2 
unless someone has a problem with it, as it would help users who're hitting 
the timeouts.

> Make PKIAuthPlugin's token's TTL configurable
> --
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable
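
A plausible shape for such a change (an assumption for illustration, not the 
committed patch; the property name is invented):
{code}
// Read the token TTL from a system property, keeping the old hardcoded
// 5000ms as the default (property name "pkiauth.ttl" assumed here).
private final int maxValidity = Integer.getInteger("pkiauth.ttl", 5000);
{code}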



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15368 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15368/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:56509/v_fod/om";, 
"node_name":"127.0.0.1:56509_v_fod%2Fom", "state":"active", 
"leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:34864/v_fod/om";, 
"node_name":"127.0.0.1:34864_v_fod%2Fom", 
"state":"active", "leader":"true"},   "core_node3":{
 "core":"collection1", 
"base_url":"http://127.0.0.1:35130/v_fod/om";, 
"node_name":"127.0.0.1:35130_v_fod%2Fom", "state":"active", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "autoCreated":"true"},   "control_collection":{  
   "replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node1":{ "core":"collection1", 
"base_url":"http://127.0.0.1:56551/v_fod/om";, 
"node_name":"127.0.0.1:56551_v_fod%2Fom", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:56509/v_fod/om";, 
"node_name":"127.0.0.1:56509_v_fod%2Fom", "state":"active", 
"leader":"true"},   "core_node2":{ 
"core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:35130/v_fod/om";, 
"node_name":"127.0.0.1:35130_v_fod%2Fom", "state":"recovering", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false"},   "collMinRf_1x3":{ "replicationFactor":"3",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collMinRf_1x3_shard1_replica3", 
"base_url":"http://127.0.0.1:56509/v_fod/om";, 
"node_name":"127.0.0.1:56509_v_fod%2Fom", "state":"active"},
   "core_node2":{ "core":"collMinRf_1x3_shard1_replica2",   
  "base_url":"http://127.0.0.1:35130/v_fod/om";, 
"node_name":"127.0.0.1:35130_v_fod%2Fom", "state":"active"},
   "core_node3":{ "core":"collMinRf_1x3_shard1_replica1",   
  "base_url":"http://127.0.0.1:56551/v_fod/om";, 
"node_name":"127.0.0.1:56551_v_fod%2Fom", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:56509/v_fod/om";,
"node_name":"127.0.0.1:56509_v_fod%2Fom",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:34864/v_fod/om";,
"node_name":"127.0.0.1:34864_v_fod%2Fom",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:35130/v_fod/om";,
"node_name":"127.0.0.1:35130_v_fod%2Fom",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
 

[jira] [Comment Edited] (SOLR-8458) Add Streaming Expressions tests for parameter substitution

2015-12-29 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073961#comment-15073961
 ] 

Cao Manh Dat edited comment on SOLR-8458 at 12/29/15 2:40 PM:
--

{quote}
What's the purpose of ClientTupleStream? It appears it's only used in the tests 
and doesn't add any value as a Stream object.
{quote}
I created the class to simplify the code in the test class. Currently, we don't 
have any TupleStream which supports passing in a SolrClient and SolrParams. In 
SolrStream, we pass in a baseUrl and it always creates an HttpSolrClient (not a 
CloudSolrClient). In CloudSolrStream, we pass in a ZK address and it always 
looks up the fl & sort params...

{quote}
I don't think it'd be necessary to test that substitution on each and every 
stream class because the implementation is outside of the stream classes.
{quote}
A good point. I forgot that query parameter substitution has already been 
tested in another class. We just want to show the usage here. I will write a 
testSubstituteStream method whose code derives from testMergeStream().


was (Author: caomanhdat):
{quote}
What's the purpose of ClientTupleStream? It appears it's only used in the tests 
and doesn't add any value as a Stream object.
{quote}
I create a class to simplify the code in test class. Currently, we dont have 
any TupleStream which support passing SolrClient and SolrParams. in SolrStream, 
we pass in baseUrl and it always create HttpSolrClient (not CloudClient). In 
CloudSolrStream, we pass in ZKAdress and it always look up for fl & sort 
params...

{quote}
I don't think it'd be necessary to test that substitution on each and every 
stream class because the implementation is outside of the stream classes.
{quote}
I good point. I forgot that query parameter substitution already been tested in 
other class. We just wanna to show the guide here. I will write a 
testSubstituteStream method which code derive from testMergeStream()

> Add Streaming Expressions tests for parameter substitution
> --
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8458.patch, SOLR-8458.patch
>
>
> This ticket is to add Streaming Expression tests that exercise the existing 
> macro expansion feature described here:  
> http://yonik.com/solr-query-parameter-substitution/
> Sample syntax below:
> {code}
> http://localhost:8983/col/stream?expr=merge(${left}, ${right}, 
> ...)&left=search(...)&right=search(...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8458) Add Streaming Expressions tests for parameter substitution

2015-12-29 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073961#comment-15073961
 ] 

Cao Manh Dat commented on SOLR-8458:


{quote}
What's the purpose of ClientTupleStream? It appears it's only used in the tests 
and doesn't add any value as a Stream object.
{quote}
I created a class to simplify the code in the test class. Currently, we don't 
have any TupleStream which supports passing in a SolrClient and SolrParams. In 
SolrStream, we pass in a baseUrl and it always creates an HttpSolrClient (not a 
CloudSolrClient). In CloudSolrStream, we pass in a ZK address and it always 
looks up the fl & sort params...

{quote}
I don't think it'd be necessary to test that substitution on each and every 
stream class because the implementation is outside of the stream classes.
{quote}
A good point. I forgot that query parameter substitution has already been 
tested in another class. We just want to show the usage here. I will write a 
testSubstituteStream method whose code derives from testMergeStream(); a rough 
sketch of the two construction styles being contrasted follows.
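
For context (illustrative only; constructor signatures vary by version and the 
values here are invented):
{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.client.solrj.io.stream.TupleStream;

public class StreamConstructionSketch {
  public static void main(String[] args) throws Exception {
    Map<String, String> params = new HashMap<>();
    params.put("q", "*:*");
    params.put("fl", "id,a_s");
    params.put("sort", "id asc");
    // SolrStream is given a single node's base URL and always talks HTTP:
    TupleStream stream = new SolrStream("http://localhost:8983/solr/collection1", params);
    // CloudSolrStream would instead be given a ZooKeeper address plus a
    // collection name, and resolves replicas itself.
  }
}
{code}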

> Add Streaming Expressions tests for parameter substitution
> --
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8458.patch, SOLR-8458.patch
>
>
> This ticket is to add Streaming Expression tests that exercise the existing 
> macro expansion feature described here:  
> http://yonik.com/solr-query-parameter-substitution/
> Sample syntax below:
> {code}
> http://localhost:8983/col/stream?expr=merge(${left}, ${right}, 
> ...)&left=search(...)&right=search(...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073952#comment-15073952
 ] 

Dennis Gove commented on SOLR-7535:
---

I agree. It needs to be fleshed out some more.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6951) GeoPointInPolygonQuery can be improved

2015-12-29 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6951:
---
Attachment: LUCENE-6951.patch

Patch includes:

* Updated line crossing algorithm to use point orientation (see the sketch below)
* Updated GPTQ ConstantScoreWrapper MultiValue check to add the doc when 1 point 
is found within the poly - avoids superfluous point-in-polygon checking if the 
doc is already a match
* Tests indicate up to 45% boost in GeoPointInPolygonQuery performance
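
For readers unfamiliar with the orientation trick, here is a minimal sketch of 
an orientation-based segment-crossing test (an illustration of the idea only, 
not the patch code):
{code}
// orient() is the sign of the cross product (q-p) x (r-p):
// positive = counter-clockwise, negative = clockwise, zero = collinear.
public class SegmentCross {
  static double orient(double px, double py, double qx, double qy,
                       double rx, double ry) {
    return (qx - px) * (ry - py) - (qy - py) * (rx - px);
  }

  // Segments (a,b) and (c,d) properly cross iff each segment's endpoints lie
  // on opposite sides of the other's supporting line. Collinear touching is
  // ignored here; real code must handle orient() == 0 separately.
  static boolean cross(double ax, double ay, double bx, double by,
                       double cx, double cy, double dx, double dy) {
    return orient(ax, ay, bx, by, cx, cy) * orient(ax, ay, bx, by, dx, dy) < 0
        && orient(cx, cy, dx, dy, ax, ay) * orient(cx, cy, dx, dy, bx, by) < 0;
  }
}
{code}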

> GeoPointInPolygonQuery can be improved
> --
>
> Key: LUCENE-6951
> URL: https://issues.apache.org/jira/browse/LUCENE-6951
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-6951.patch
>
>
> {{GeoRelationutils}} uses a basic algebraic approach for computing if (and 
> where) a rectangle crosses a polygon by checking the line segments of both 
> the polygon and rectangle. The current suboptimal line crossing approach can 
> be greatly improved by exploiting the orientation of the lines and endpoints. 
> If the endpoints of one line are on different "sides" of the other line 
> segment then the two may cross. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6953) clean up lucene-test-framework dependencies

2015-12-29 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6953.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> clean up lucene-test-framework dependencies
> ---
>
> Key: LUCENE-6953
> URL: https://issues.apache.org/jira/browse/LUCENE-6953
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6953.patch
>
>
> The current ivy configuration leads to the (wrong) belief that 
> lucene-test-framework depends on junit4-ant and ant itself.
> It confuses e.g. 'ant eclipse' (look and you will see those jars in 
> classpath), and lists these as dependencies in published maven poms, etc.
> But it really does not depend on junit4-ant at all, it works fine with other 
> test runners (e.g. IDE runners). That is our build itself, and for it to 
> taskdef the task, it can just use an ivy inline cachepath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6953) clean up lucene-test-framework dependencies

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073937#comment-15073937
 ] 

ASF subversion and git services commented on LUCENE-6953:
-

Commit 1722135 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722135 ]

LUCENE-6953: clean up test-framework dependencies

> clean up lucene-test-framework dependencies
> ---
>
> Key: LUCENE-6953
> URL: https://issues.apache.org/jira/browse/LUCENE-6953
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6953.patch
>
>
> The current ivy configuration leads to the (wrong) belief that 
> lucene-test-framework depends on junit4-ant and ant itself.
> It confuses e.g. 'ant eclipse' (look and you will see those jars in 
> classpath), and lists these as dependencies in published maven poms, etc.
> But it really does not depend on junit4-ant at all, it works fine with other 
> test runners (e.g. IDE runners). That is our build itself, and for it to 
> taskdef the task, it can just use an ivy inline cachepath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073926#comment-15073926
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/29/15 2:01 PM:


The problem is the ParallelStream takes a TupleStream.  Possibly we'd need a 
ParallelWrite and ParallelRead stream. Let's not introduce that change into 
this ticket because I think it requires some more thought.


was (Author: joel.bernstein):
The problem is the ParallelStream takes a TupleStream.  Possibly we'd need a 
ParallelWrite and ParallelRead stream. Let's not introduce that change into 
this ticket because I think it requires some more though.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073926#comment-15073926
 ] 

Joel Bernstein commented on SOLR-7535:
--

The problem is the ParallelStream takes a TupleStream.  Possibly we'd need a 
ParallelWrite and ParallelRead stream. Let's not introduce that change into 
this ticket because I think it requires some more thought.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6953) clean up lucene-test-framework dependencies

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073922#comment-15073922
 ] 

ASF subversion and git services commented on LUCENE-6953:
-

Commit 1722131 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1722131 ]

LUCENE-6953: clean up test-framework dependencies

> clean up lucene-test-framework dependencies
> ---
>
> Key: LUCENE-6953
> URL: https://issues.apache.org/jira/browse/LUCENE-6953
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6953.patch
>
>
> The current ivy configuration leads to the (wrong) belief that 
> lucene-test-framework depends on junit4-ant and ant itself.
> It confuses e.g. 'ant eclipse' (look and you will see those jars in 
> classpath), and lists these as dependencies in published maven poms, etc.
> But it really does not depend on junit4-ant at all; it works fine with other 
> test runners (e.g. IDE runners). That dependency belongs to our build itself, 
> and for the build to taskdef the task, it can just use an ivy inline cachepath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073920#comment-15073920
 ] 

Dennis Gove commented on SOLR-7535:
---

I had an interesting thought related to the call to read().

Should there be some distinction between a ReadStream and a WriteStream? A 
ReadStream is one which reads tuples out, while a WriteStream is one which 
writes tuples in. Up until this point we've only ever had ReadStreams, and the 
read() method has always made sense. But the UpdateStream is a WriteStream, and 
maybe it should have a different function, maybe write(). Also, it might be 
nice to be able to say in a stream that its direct incoming stream must be a 
WriteStream (for example, a CommitStream would only work on a WriteStream while 
a RollupStream would only work on a ReadStream). (Though maybe it'd be 
interesting to do rollups over the output tuples of an UpdateStream.)

Thoughts?
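
To make the distinction concrete, here is a minimal sketch of what such marker 
interfaces could look like. Neither ReadStream nor WriteStream exists in the 
Streaming API today; all names and signatures below are assumptions for 
discussion.

{code}
import java.io.IOException;
import org.apache.solr.client.solrj.io.Tuple;

// Hypothetical marker interfaces for the read/write distinction discussed above.
interface ReadStream {
  Tuple read() throws IOException;   // pulls the next tuple out of the stream
}

interface WriteStream {
  Tuple write() throws IOException;  // pushes the next tuple toward its destination
}
{code}

A CommitStream could then declare that its inner stream must be a WriteStream, 
while a RollupStream would require a ReadStream, making the constraint checkable 
when the expression is parsed.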

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073912#comment-15073912
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/29/15 1:47 PM:


[~gerlowskija], you've described what I was thinking correctly.

I think swallowing the Tuples is the correct behavior. Imagine 15 workers 
pulling Tuples from 20+ shards. The throughput would bottleneck if we funneled 
all those tuples back to one client.

Think of the returned tuple as a type of useful aggregation like the 
RollupStream, which swallows Tuples on the worker nodes and returns aggregates 
to one client.




was (Author: joel.bernstein):

Yes, you've described what I was thinking correctly.

I think swallowing the Tuples is the correct behavior. Imagine 15 workers 
pulling Tuples from 20+ shards. The throughput would bottleneck if we funneled 
all those tuples back to one client.

Think of the returned tuple as a type of useful aggregation like the 
RollupStream, which swallows Tuples on the worker nodes and returns aggregates 
to one client.



> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073912#comment-15073912
 ] 

Joel Bernstein commented on SOLR-7535:
--


Yes, you've described what I was thinking correctly.

I think swallowing the Tuples is the correct behavior. Imagine 15 workers 
pulling Tuples from 20+ shards. The throughput would bottleneck if we funneled 
all those tuples back to one client.

Think of the returned tuple as a type of useful aggregation like the 
RollupStream, which swallows Tuples on the worker nodes and returns aggregates 
to one client.



> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [More Like This] Query building

2015-12-29 Thread Alessandro Benedetti
Sure, I will proceed tomorrow with the Jira and the simple patch + tests.

In the meantime let's try to collect some additional feedback.

Cheers

On 29 December 2015 at 12:43, Anshum Gupta  wrote:

> Feel free to create a JIRA and put up a patch if you can.
>
> On Tue, Dec 29, 2015 at 4:26 PM, Alessandro Benedetti <abenede...@apache.org> wrote:
>
> > Hi guys,
> > While I was exploring the way we build the More Like This query, I
> > discovered a part I am not convinced of:
> >
> >
> >
> > Let's see how we build the query:
> > org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
> >
> > 1) we extract the terms from the interesting fields, adding them to a
> > map:
> >
> > Map termFreqMap = new HashMap<>();
> >
> > *( we lose the relation field -> term; we no longer know which field the
> > term came from! )*
> >
> > org.apache.lucene.queries.mlt.MoreLikeThis#createQueue
> >
> > 2) we build the queue that will contain the query terms; at this point we
> > reconnect these terms to some field, but:
> >
> > ...
> >> // go through all the fields and find the largest document frequency
> >> String topField = fieldNames[0];
> >> int docFreq = 0;
> >> for (String fieldName : fieldNames) {
> >>   int freq = ir.docFreq(new Term(fieldName, word));
> >>   topField = (freq > docFreq) ? fieldName : topField;
> >>   docFreq = (freq > docFreq) ? freq : docFreq;
> >> }
> >> ...
> >
> >
> > We identify the topField as the field with the highest document frequency
> > for the term t.
> > Then we build the termQuery:
> >
> > queue.add(new ScoreTerm(word, *topField*, score, idf, docFreq, tf));
> >
> > In this way we lose a lot of precision.
> > Not sure why we do that.
> > I would prefer to keep the relation between terms and fields.
> > This could improve the quality of the MLT query a lot.
> > If I run the MLT on 2 fields, *description* and *facilities* for
> > example, it is likely I want to find documents with similar terms in the
> > description and similar terms in the facilities, without mixing things up
> > and losing the semantics of the terms.
> >
> > Let me know your opinion,
> >
> > Cheers
> >
> >
> > --
> > --
> >
> > Benedetti Alessandro
> > Visiting card : http://about.me/alessandro_benedetti
> >
> > "Tyger, tyger burning bright
> > In the forests of the night,
> > What immortal hand or eye
> > Could frame thy fearful symmetry?"
> >
> > William Blake - Songs of Experience -1794 England
> >
>
>
>
> --
> Anshum Gupta
>



-- 
--

Benedetti Alessandro
Visiting card : http://about.me/alessandro_benedetti

"Tyger, tyger burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?"

William Blake - Songs of Experience -1794 England
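
As a strawman for the proposal quoted above, keeping the field -> term relation 
could look roughly like the sketch below. The map shape and the docFreq/ScoreTerm 
usage mirror the quoted code; everything else is an illustrative assumption, not 
the actual MoreLikeThis implementation.

{code}
// Sketch only: keep field -> (term -> frequency) instead of flattening all
// fields into a single termFreqMap.
Map<String, Map<String, Integer>> fieldTermFreq = new HashMap<>();

// ... while extracting terms, record each term under the field it came from ...

for (Map.Entry<String, Map<String, Integer>> field : fieldTermFreq.entrySet()) {
  String fieldName = field.getKey();
  for (Map.Entry<String, Integer> term : field.getValue().entrySet()) {
    String word = term.getKey();
    int docFreq = ir.docFreq(new Term(fieldName, word));
    // score against the originating field only, so description terms are never
    // queried against facilities and vice versa:
    // queue.add(new ScoreTerm(word, fieldName, score, idf, docFreq, tf));
  }
}
{code}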


[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073907#comment-15073907
 ] 

Dennis Gove commented on SOLR-7535:
---

In the Streaming API, read() is called until an EOF tuple is seen. This means 
that, even with an UpdateStream, one would have this code

{code}
while (true) {
  Tuple tuple = updateStream.read();

  // if the number of records read so far reaches some size, do a commit

  if (tuple.EOF) {
    break;
  }
}
{code}

I think it's the correct thing for an UpdateStream to swallow the individual 
tuples. The use-case you described isn't one I see existing. But if it did, 
I could see it being dealt with using a TeeStream. A TeeStream would work 
exactly like the unix command tee: take a single input stream and tee it out 
into multiple output streams. In this use-case, one would tee the underlying 
searches. But again, I don't see this need actually existing.
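
For illustration only, a TeeStream along those lines might look like the sketch 
below. No such class exists in the Streaming API; the consumer-based shape and 
all names are assumptions.

{code}
import java.io.IOException;
import java.util.List;
import java.util.function.Consumer;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.TupleStream;

// Hypothetical: every tuple read from the wrapped source is also offered to a
// list of side consumers, mirroring the unix tee command.
class TeeStream {
  private final TupleStream source;
  private final List<Consumer<Tuple>> sinks;

  TeeStream(TupleStream source, List<Consumer<Tuple>> sinks) {
    this.source = source;
    this.sinks = sinks;
  }

  public Tuple read() throws IOException {
    Tuple tuple = source.read();
    if (!tuple.EOF) {
      for (Consumer<Tuple> sink : sinks) {
        sink.accept(tuple); // duplicate the tuple to each side output
      }
    }
    return tuple;
  }
}
{code}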

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8422.

Resolution: Fixed

Marking this as resolved. Thanks everyone.

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic 
> authentication working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8422:
---
Fix Version/s: 5.3.2

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic 
> authentication working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073899#comment-15073899
 ] 

ASF subversion and git services commented on SOLR-8422:
---

Commit 1722124 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722124 ]

SOLR-8422: Add change log entry to 5.3.2 section (merge from trunk)

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic 
> authentication working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8458) Add Streaming Expressions tests for parameter substitution

2015-12-29 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073898#comment-15073898
 ] 

Dennis Gove commented on SOLR-8458:
---

Cao,

What's the purpose of ClientTupleStream? It appears it's only used in the tests 
and doesn't add any value as a Stream object.

I'd rather not replace all existing stream creations with a randomized choice 
between doing substitution and not. I think it'd be better to have explicit 
tests which exercise substitution. I don't think it's necessary to test 
substitution against each and every stream class, because the implementation 
lives outside of the stream classes. Also, it appears that the randomization of 
the choice is non-repeatable. I.e., if I rerun the tests with a -Dtests.seed 
value, would the random choices be the same?
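
For what it's worth, if the randomized choice were drawn from the test 
framework's random() rather than a fresh Random, the choices would replay under 
the same seed. A minimal sketch, assuming the test extends LuceneTestCase:

{code}
// random() is seeded from -Dtests.seed, so rerunning with the same seed
// replays the same substitution choices.
boolean useSubstitution = random().nextBoolean();
{code}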

It appears that the substitution is just picking some substring in the 
expression and marking it as being a parameter. I think this should test 
substituting entire expression clauses, like 
{code}
http://localhost:8983/col/stream?expr=merge($left, $right, 
...)&left=search(...)&right=search(...)
{code}
where left and right are entire clauses. The tests you've provided appear to do 
something like this
{code}
http://localhost:8983/col/stream?expr=merge(sear$left, se$right..), 
...)&left=ch(...)&right=arch(.
{code}
which I don't think makes much sense. Technically the substitution should 
handle that, but the codified usage should be to substitute entire expression 
clauses.

> Add Streaming Expressions tests for parameter substitution
> --
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8458.patch, SOLR-8458.patch
>
>
> This ticket is to add Streaming Expression tests that exercise the existing 
> macro expansion feature described here:  
> http://yonik.com/solr-query-parameter-substitution/
> Sample syntax below:
> {code}
> http://localhost:8983/col/stream?expr=merge(${left}, ${right}, 
> ...)&left=search(...)&right=search(...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073895#comment-15073895
 ] 

Jason Gerlowski commented on SOLR-7535:
---

Thanks for the feedback Joel.  (1) and (2) I get.  (3)'s a little less clear to 
me.

Are you saying that a single {{read()}} on an UpdateStream will call {{read()}} 
X times (i.e. batchSize times) on the wrapped stream, package and send those 
docs to a collection, and then return a single tuple that says how many tuples 
were read?

Is it an issue at all that UpdateStream would be swallowing the individual 
tuples?  This would prevent users from doing things (other than committing) 
with the output of UpdateStream.  For example, the use-case below _seems_ valid 
to me, but wouldn't be supported with the proposed behavior:

{code}
update(collection5,
   merge(
   update(collection3, search(collection1, ...)),
   update(collection4, search(collection2, ...))
   )
)
{code}

Maybe there's not a real need to support that.  And Streaming API users would 
still be able to do this; they'd just need to do it in 2 steps/requests instead 
of 1.  I don't have a preference either way; just wanted to bring it up.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073894#comment-15073894
 ] 

ASF subversion and git services commented on SOLR-8422:
---

Commit 1722122 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1722122 ]

SOLR-8422: Add change log entry to 5.3.2 section on trunk

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic 
> authentication working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073893#comment-15073893
 ] 

Noble Paul commented on SOLR-8422:
--

I have opened SOLR-8470 [~nirmalav] and [~anshum]

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic 
> authentication working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8470) Make PKIAuthPlugin's token's TTL configurable

2015-12-29 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8470:


 Summary: Make PKIAuthPlugin's token's TTL configurable
 Key: SOLR-8470
 URL: https://issues.apache.org/jira/browse/SOLR-8470
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul


Currently the PKIAuthenticationPlugin has the TTL hardcoded to 5000 ms. There 
are users who have experienced timeouts. Make this configurable.
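
One plausible shape for the fix, sketched as an assumption rather than the 
committed change: read the TTL from a system property, defaulting to the current 
hardcoded value. The property name below is illustrative.

{code}
// Sketch: make the token TTL configurable instead of hardcoding 5000 ms.
// "pkiauth.ttl" is an assumed property name for illustration.
private static final int MAX_VALIDITY =
    Integer.parseInt(System.getProperty("pkiauth.ttl", "5000"));
{code}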



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073888#comment-15073888
 ] 

ASF subversion and git services commented on SOLR-8422:
---

Commit 1722120 from [~anshumg] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r1722120 ]

SOLR-8422: When authentication enabled, requests fail if sent to a node that 
doesn't host the collection (backport from branch_5x for 5.3.2 release)

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collections/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic 
> authentication working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8458) Add Streaming Expressions tests for parameter substitution

2015-12-29 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8458:
---
Attachment: SOLR-8458.patch

Added random substitution for constructing tuple stream in StreamExpressionTest.

> Add Streaming Expressions tests for parameter substitution
> --
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8458.patch, SOLR-8458.patch
>
>
> This ticket is to add Streaming Expression tests that exercise the existing 
> macro expansion feature described here:  
> http://yonik.com/solr-query-parameter-substitution/
> Sample syntax below:
> {code}
> http://localhost:8983/col/stream?expr=merge(${left}, ${right}, 
> ...)&left=search(...)&right=search(...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073867#comment-15073867
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/29/15 1:02 PM:


[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The UpdateStream is likely 
to be run in parallel which means dozens of workers will be committing at the 
same time. We can add a CommitStream, which would not be run in parallel, that 
will commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(
 collection1, 
 parallel(
  update(
  collection1,
  search(collection2...))
  ), 
  10)
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records and can be returned to the client which 
will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.
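
A rough sketch of that batching read(), assuming the stream holds a batchSize, a 
wrapped sourceStream, a CloudSolrClient, and a target collection name, and that 
toDocument() is a hypothetical tuple-to-SolrInputDocument helper:

{code}
public Tuple read() throws IOException {
  List<SolrInputDocument> batch = new ArrayList<>(batchSize);
  Tuple tuple = sourceStream.read();
  while (!tuple.EOF && batch.size() < batchSize) {
    batch.add(toDocument(tuple)); // hypothetical tuple -> SolrInputDocument helper
    tuple = sourceStream.read();
  }
  try {
    if (!batch.isEmpty()) {
      cloudSolrClient.add(collection, batch); // index the whole batch at once
    }
  } catch (SolrServerException e) {
    throw new IOException(e);
  }
  if (tuple.EOF && batch.isEmpty()) {
    return tuple; // nothing left to index; propagate the EOF tuple
  }
  Map<String, Object> fields = new HashMap<>();
  fields.put("batchIndexed", batch.size()); // invented field name, for illustration
  return new Tuple(fields);
}
{code}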

 


was (Author: joel.bernstein):
[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The UpdateStream is likely 
to be run in parallel which means dozens of workers will be committing at the 
same time. We can add a CommitStream, which would not be run in parallel, that 
will commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(collection1, 
 parallel(
   update(collection1, search(collection2...))
  ), 
  10))
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records and can be returned to the client which 
will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.

 

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073871#comment-15073871
 ] 

Jason Gerlowski commented on SOLR-7535:
---

Thanks for taking the time to help me out Dennis, that makes a lot of sense and 
really helps.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073867#comment-15073867
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/29/15 12:59 PM:
-

[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The UpdateStream is likely 
to be run in parallel which means dozens of workers will be committing at the 
same time. We can add a CommitStream, which would not be run in parallel, that 
will commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(collection1, 
 parallel(
   update(collection1, search(collection2...))
  ), 
  10))
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records and can be returned to the client which 
will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.

 


was (Author: joel.bernstein):
[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The stream is likely to be 
run in parallel which means dozens of workers will be committing at the same 
time. We can add a CommitStream, which would not be run in parallel, that will 
commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(collection1, 
 parallel(
   update(collection1, search(collection2...))
  ), 
  10))
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records, and can be returned to the client, 
which will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.

 

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073867#comment-15073867
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/29/15 12:54 PM:
-

[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The stream is likely to be 
run in parallel, which means dozens of workers will be committing at the same 
time. We can add a CommitStream, which would not be run in parallel, that will 
commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(collection1, 
  parallel(
    update(collection1, search(collection2...))
  ), 
  10)
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records, and can be returned to the client, 
which will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.

 


was (Author: joel.bernstein):
[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The stream is likely to be 
run in parallel, which means dozens of workers will be committing at the same 
time. We can add a CommitStream, which would not be run in parallel, that will 
commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(collection1, 
  parallel(
    update(collection1, search(collection2...))
  ), 
  10)
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records, and can be returned to the client, 
which will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.

 

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073867#comment-15073867
 ] 

Joel Bernstein commented on SOLR-7535:
--

[~gerlowskija], the patch looks good. 

Three comments


1) I'd like to limit the changes in the patch to the UpdateStream if possible. 
It looks like the UpdateStream is extending CloudSolrStream which pushed some 
changes into CloudSolrStream. Let's have the UpdateStream extend TupleStream 
for now. In another ticket we can look at moving some shared methods to the 
TupleStream class to eliminate code duplication.

2) Let's remove the commit following the EOF tuple. The stream is likely to be 
run in parallel, which means dozens of workers will be committing at the same 
time. We can add a CommitStream, which would not be run in parallel, that will 
commit after a number of updates or after it sees the EOF tuple.

We'll implement the CommitStream in a different ticket. For now we can rely on 
autoCommits to commit and explicitly commit in the test cases.

The pseudo code below shows a CommitStream wrapping an UpdateStream which is 
wrapped by a ParallelStream.
{code}
commit(collection1, 
  parallel(
    update(collection1, search(collection2...))
  ), 
  10)
{code}


3) We'll want to implement batching. So we'll need to add a batch size 
parameter to the UpdateStream. Then we'll send the updates in a batch to the 
CloudSolrClient. After each batch the read() method should return a Tuple with 
the number of documents indexed in the batch. This Tuple can be used by the 
CommitStream to commit every X records, and can be returned to the client, 
which will ensure that we don't get client timeouts due to inactivity.

So each call to the UpdateStream.read() will read a batch of docs from the 
sourceStream, index the batch and return a Tuple with the count.

 

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [More Like This] Query building

2015-12-29 Thread Anshum Gupta
Feel free to create a JIRA and put up a patch if you can.

On Tue, Dec 29, 2015 at 4:26 PM, Alessandro Benedetti  wrote:

> Hi guys,
> While I was exploring the way we build the More Like This query, I
> discovered a part I am not convinced by:
>
>
>
> Let's see how we build the query :
> org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
>
> 1) we extract the terms from the interesting fields, adding them to a map:
>
> Map<String, Int> termFreqMap = new HashMap<>();
>
> *(we lose the field -> term relation; we no longer know which field each
> term came from!)*
>
> org.apache.lucene.queries.mlt.MoreLikeThis#createQueue
>
> 2) we build the queue that will contain the query terms; at this point we
> reconnect the terms to a field, but:
>
> ...
>> // go through all the fields and find the largest document frequency
>> String topField = fieldNames[0];
>> int docFreq = 0;
>> for (String fieldName : fieldNames) {
>>   int freq = ir.docFreq(new Term(fieldName, word));
>>   topField = (freq > docFreq) ? fieldName : topField;
>>   docFreq = (freq > docFreq) ? freq : docFreq;
>> }
>> ...
>
>
> We identify the topField as the field with the highest document frequency
> for the term t.
> Then we build the termQuery:
>
> queue.add(new ScoreTerm(word, *topField*, score, idf, docFreq, tf));
>
> In this way we lose a lot of precision, and I am not sure why we do that.
> I would prefer to keep the relation between terms and fields; it could
> improve the quality of the MLT query a lot.
> If I run MLT on two fields, *description* and *facilities* for example,
> it is likely I want to find documents with similar terms in the
> description and similar terms in the facilities, without mixing up the
> two and losing the semantics of the terms.
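>
> A minimal sketch of what I have in mind (illustrative, not the current
> MoreLikeThis API; using Integer instead of the internal Int counter for
> brevity): keep one frequency map per field instead of a single merged
> map, so createQueue can score each term against the field it actually
> came from.
>
> Map<String, Map<String, Integer>> field2termFreq = new HashMap<>();
> for (String fieldName : fieldNames) {
>   // terms from this field are counted in their own map, preserving
>   // the field -> term relation that the current code throws away
>   Map<String, Integer> perField =
>       field2termFreq.computeIfAbsent(fieldName, f -> new HashMap<>());
>   addTermFrequencies(docNum, perField, fieldName); // hypothetical helper
> }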
>
> Let me know your opinion,
>
> Cheers
>
>
> --
> --
>
> Benedetti Alessandro
> Visiting card : http://about.me/alessandro_benedetti
>
> "Tyger, tyger burning bright
> In the forests of the night,
> What immortal hand or eye
> Could frame thy fearful symmetry?"
>
> William Blake - Songs of Experience -1794 England
>



-- 
Anshum Gupta


[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073858#comment-15073858
 ] 

Noble Paul commented on SOLR-7462:
--

Sorry, this fell through the cracks. I shall fix this. 
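
The fix Scott suggests is just a strict comparison, so line 61 never reads one 
past the end of the array. A sketch of the change (not a committed patch):

{code}
// before (line 60): when length == start + size, line 61 is out of bounds
if(chars.getArray().length >= chars.getStart() + chars.size()) {

// after: only look at the next char when it actually exists
if(chars.getArray().length > chars.getStart() + chars.size()) {
{code}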

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()];  // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-12-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-7462:


Assignee: Noble Paul

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>Assignee: Noble Paul
> Attachments: SOLR-7462.patch, SOLR-7462.test.json
>
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()];  // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2921 - Failure!

2015-12-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2921/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
partialResults were expected expected:<true> but was:<false>

Stack Trace:
java.lang.AssertionError: partialResults were expected expected:<true> but 
was:<false>
at 
__randomizedtesting.SeedInfo.seed([A3329E3B0F84F042:2B66A1E1A1789DBA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:102)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:85)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 896 - Still Failing

2015-12-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/896/

1 tests failed.
FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testRandomBinaryTiny

Error Message:
maxMBSortInHeap=2.255092363313635 only allows for maxPointsSortInHeap=1443, but 
this is less than maxPointsInLeafNode=1606; either increase maxMBSortInHeap or 
decrease maxPointsInLeafNode

Stack Trace:
java.lang.IllegalArgumentException: maxMBSortInHeap=2.255092363313635 only 
allows for maxPointsSortInHeap=1443, but this is less than 
maxPointsInLeafNode=1606; either increase maxMBSortInHeap or decrease 
maxPointsInLeafNode
at 
__randomizedtesting.SeedInfo.seed([B6A7B02C094A1978:181EDD6FE19860C1]:0)
at org.apache.lucene.util.bkd.BKDWriter.<init>(BKDWriter.java:161)
at 
org.apache.lucene.codecs.lucene60.Lucene60DimensionalWriter.writeField(Lucene60DimensionalWriter.java:88)
at 
org.apache.lucene.index.DimensionalValuesWriter.flush(DimensionalValuesWriter.java:68)
at 
org.apache.lucene.index.DefaultIndexingChain.writeDimensionalValues(DefaultIndexingChain.java:146)
at 
org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:96)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:425)
at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:502)
at 
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:614)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3099)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3074)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1727)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1707)
at 
org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:421)
at 
org.apache.lucene.search.TestDimensionalRangeQuery.verifyBinary(TestDimensionalRangeQuery.java:508)
at 
org.apache.lucene.search.TestDimensionalRangeQuery.doTestRandomBinary(TestDimensionalRangeQuery.java:419)
at 
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomBinaryTiny(TestDimensionalRangeQuery.java:375)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.Statemen

[jira] [Comment Edited] (SOLR-8176) Model distributed graph traversals with Streaming Expressions

2015-12-29 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073843#comment-15073843
 ] 

Dennis Gove edited comment on SOLR-8176 at 12/29/15 12:10 PM:
--

I've been thinking about this a little bit and one thing I keep coming back to 
is that there are different kinds of graph traversals and I think our model 
should take that into account. There are lots of types but I think the two 
major categories are node traversing graphs and edge traversing graphs. 

h3. Node Traversing Graphs
These are graphs where you have some set of root nodes and you want to find 
connected nodes with some set of criteria. For example, given a collection of 
geographic locations (city, county, state, country) with fields "id", "type", 
"parentId", "name" find all cities in NY. As a hiccup the data is not 
completely normalized and some cities have their county listed as their parent 
while some have their state listed as their parent. Ie, you do not know how 
many nodes are between any given city and any given state.
{code}
graph(
  geography,
  root(q="type=state AND name:ny", fl="id"),
  leaf(q="type=city", fl="id,parentId,name"),
  edge("id=parentId")
)
{code}
In this example you're starting with a set of nodes in the geography 
collection, all of which have some relationship to each other. You select your 
starting (root) nodes as all states named "ny" (there could be more than one). 
You then define what constitutes an ending (leaf) node as all cities. And 
finally, you say that all edges where nodeA.id == nodeB.parentId should be 
followed.

This traversal can be implemented as a relatively simple iterative search 
following the form
{code}
frontier := search for all root nodes
leaves := empty list

while frontier is not empty
  frontierIds := list of ids of all nodes in frontier list
  leaves :append: search for all nodes whose parentId is in frontierIds
                  and matches the leaf filter
  frontier := search for all nodes whose parentId is in frontierIds
              and does not match the leaf filter

{code}
In each iteration the leaves list can grow and the frontier list is replaced 
with the next set of nodes to consider. In the end you have a list of all leaf 
nodes which in some way connect to the original root nodes following the 
defined edge. Note that for simplicity I've left a couple of things out, 
including checking for already traversed nodes to avoid loops. Also, the leaf 
nodes are not added to the frontier but they can be. This would be useful in a 
situation where leaves are connected to leaves.
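
For illustration, a minimal Java sketch of that loop; fetchChildren() stands 
in for the Solr query "parentId in frontierIds" and is a hypothetical helper, 
not an existing Streaming API call:

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public abstract class NodeTraversalSketch {

  static class Node {
    final String id, parentId, type;
    Node(String id, String parentId, String type) {
      this.id = id; this.parentId = parentId; this.type = type;
    }
  }

  // stands in for: search for all nodes whose parentId is in frontierIds
  abstract List<Node> fetchChildren(Set<String> frontierIds);

  boolean isLeaf(Node n) { return "city".equals(n.type); }

  List<Node> traverse(List<Node> roots) {
    List<Node> leaves = new ArrayList<>();
    Set<String> visited = new HashSet<>();   // guards against loops
    List<Node> frontier = new ArrayList<>(roots);
    while (!frontier.isEmpty()) {
      Set<String> frontierIds = new HashSet<>();
      for (Node n : frontier) {
        if (visited.add(n.id)) {             // skip already traversed nodes
          frontierIds.add(n.id);
        }
      }
      List<Node> next = new ArrayList<>();
      for (Node child : fetchChildren(frontierIds)) {
        if (isLeaf(child)) {
          leaves.add(child);                 // e.g. the cities
        } else {
          next.add(child);                   // e.g. intermediate counties
        }
      }
      frontier = next;                       // replace, don't accumulate
    }
    return leaves;
  }
}
{code}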

h3. Edge Traversal Graphs
These are graphs where you have some set of edges but the nodes themselves are 
relatively unimportant for traversal. For example, finding the shortest path 
between two nodes, or finding the minimum spanning tree for some set of nodes, 
or finding loops.


was (Author: dpgove):
I've been thinking about this a little bit and one thing I keep coming back to 
is that there are different kinds of graph traversals and I think our model 
should take that into account. There are lots of types but I think the two 
major categories are node traversing graphs and edge traversing graphs. 

h3. Node Traversing Graphs
These are graphs where you have some set of root nodes and you want to find 
connected nodes with some set of criteria. For example, given a collection of 
geographic locations (city, county, state, country) with fields "id", "type", 
"parentId", "name" find all cities in NY. As a hiccup the data is not 
completely normalized and some cities have their county listed as their parent 
while some have their state listed as their parent. Ie, you do not know how 
many nodes are between any given city and any given state.
{code}
graph(
  geography,
  root(q="type=state AND name:ny", fl="id"),
  leaf(q="type=city", fl="id,parentId,name"),
  edge("id=parentId")
)
{code}
In this example you're starting with a set of nodes in the geography 
collection, all of which have some relationship to each other. You select your 
starting (root) nodes as all states named "ny" (there could be more than one). 
You then define what constitutes an ending (leaf) node as all cities. And 
finally, you say that all edges where nodeA.id == nodeB.parentId should be 
followed.

This traversal can be implemented as a relatively simple iterative search 
following the form
{code}
frontier := search for all root nodes
leaves := empty list

while frontier is not empty
  frontierIds := list of ids of all nodes in frontier list
  leaves :append: search for all nodes whose parentId is in frontierIds
                  and matches the leaf filter
  frontier := search for all nodes whose parentId is in frontierIds
              and does not match the leaf filter

{code}
In each iteration the leaves list can grow and the frontier list is replaced 
with the next set of nodes to consider. In the end you have a list of all leaf 
nodes which in some way connect to the original root nodes following the 
defined edge.

[jira] [Commented] (SOLR-8176) Model distributed graph traversals with Streaming Expressions

2015-12-29 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073843#comment-15073843
 ] 

Dennis Gove commented on SOLR-8176:
---

I've been thinking about this a little bit and one thing I keep coming back to 
is that there are different kinds of graph traversals and I think our model 
should take that into account. There are lots of types but I think the two 
major categories are node traversing graphs and edge traversing graphs. 

h3. Node Traversing Graphs
These are graphs where you have some set of root nodes and you want to find 
connected nodes with some set of criteria. For example, given a collection of 
geographic locations (city, county, state, country) with fields "id", "type", 
"parentId", "name" find all cities in NY. As a hiccup the data is not 
completely normalized and some cities have their county listed as their parent 
while some have their state listed as their parent. Ie, you do not know how 
many nodes are between any given city and any given state.
{code}
graph(
  geography,
  root(q="type=state AND name:ny", fl="id"),
  leaf(q="type=city", fl="id,parentId,name"),
  edge("id=parentId")
)
{code}
In this example you're starting with a set of nodes in the geography 
collection, all of which have some relationship to each other. You select your 
starting (root) nodes as all states named "ny" (there could be more than one). 
You then define what constitutes an ending (leaf) node as all cities. And 
finally, you say that all edges where nodeA.id == nodeB.parentId should be 
followed.

This traversal can be implemented as a relatively simple iterative search 
following the form
{code}
frontier := search for all root nodes
leaves := empty list

while frontier is not empty
  frontierIds := list of ids of all nodes in frontier list
  leaves :append: search for all nodes whose parentId is in frontierIds
                  and matches the leaf filter
  frontier := search for all nodes whose parentId is in frontierIds
              and does not match the leaf filter

{code}
In each iteration the leaves list can grow and the frontier list is replaced 
with the next set of nodes to consider. In the end you have a list of all leaf 
nodes which in some way connect to the original root nodes following the 
defined edge. Note that for simplicity I've left a couple of things out, 
including checking for already traversed nodes to avoid loops. Also, the leaf 
nodes are not added to the frontier but they can be. This would be useful in a 
situation where leaves are connected to leaves.

> Model distributed graph traversals with Streaming Expressions
> -
>
> Key: SOLR-8176
> URL: https://issues.apache.org/jira/browse/SOLR-8176
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrCloud, SolrJ
>Affects Versions: Trunk
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>  Labels: Graph
> Fix For: Trunk
>
>
> I think it would be useful to model a few *distributed graph traversal* use 
> cases with Solr's *Streaming Expression* language. This ticket will explore 
> different approaches with a goal of implementing two or three common graph 
> traversal use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-29 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073822#comment-15073822
 ] 

Anshum Gupta commented on SOLR-8422:


Thanks for confirming. I'll backport this to lucene_solr_5_3.

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solr cloud. A sample screenshot of the collection/shard 
> locations is shown below:
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result, sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool.
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic authentication 
> working as expected.
> I double-checked and see that both the sgdsolar1 and sgdsolar2 servers have 
> the patched solr-core and solr-solrj jar files under the solr-webapp folder 
> that were provided via earlier patches that Anshum/Noble worked on:
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Null Commit Mails from Buildbot for website

2015-12-29 Thread Upayavira
I've discussed this with infra and have constructed what I believe is
a workable replacement script that will only update SVN if something
has changed.

Unfortunately, testing it involves committing to the buildbot SVN and
waiting to see whether I've broken anything.

I'll be doing this today, so you might see stuff go wrong. If I fail to
get my updated script to work, I'll revert to the current config.

Upayavira

On Sat, Dec 19, 2015, at 09:55 PM, Upayavira wrote:
> Okay. I'll dig further into how to make the commits list actually
> useful by suppressing these messages.
>
> Whilst Shawn is right - it is the human commit we are interested in -
> there's a value in having messages from Buildbot when it actually does
> something, in case someone has worked out an alternative way of
> accessing our site.
>
> I'll report back when I work something out.
>
> Upayavira
>
> On Sat, Dec 19, 2015, at 07:46 PM, Jack Krupansky wrote:
>> I had asked about these messages two and a half years ago and nobody
>> stepped forward to claim that they had any value and merely suggested
>> filtering them in the user email client. So, I'm a solid +1 for
>> suppressing them. Uwe was the only person responding to my inquiry
>> back in July 2013.
>>
>> -- Jack Krupansky
>>
>> On Sat, Dec 19, 2015 at 1:59 PM, Shawn Heisey
>>  wrote:
>>> On 12/18/2015 4:48 PM, Upayavira wrote:
>>>> We could prevent these messages by making the second and third steps
>>>> "dependent" upon the first. In which case, they won't occur if no files
>>>> are changed.
>>>>
>>>> Any objections to doing this?
>>>
>>> I'm all for this change.
>>>
>>> I added a filter to my email account to move these messages into a
>>> separate folder, so that I could do "mark folder read", after
>>> looking through them for changes made by real people instead of the
>>> bot.  I'm considering deleting my 2010-2014 archives for this
>>> folder, because I don't think I will ever need that information.
>>>
>>> I have a little bit of a gripe about seeing *any* notifications
>>> about buildbot updates to the website.  It looks like one of the
>>> updates that happens automatically is the "Latest SVN" section on
>>> the Lucene Core page ... but I already get notified about these
>>> changes.  They are the entire point of the commits mailing list.  If
>>> there is any way to suppress notifications about changes to "Latest
>>> SVN", I would definitely appreciate it.
>>>
>>> We used to have a semi-live twitter feed somewhere on the website,
>>> but it looks like this no longer exists, probably since the site was
>>> redesigned.  If this comes back, it is another update that I really
>>> don't need to see -- I can always subscribe via twitter if I'm
>>> interested.
>>>
>>> Thanks, Shawn
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>

