[jira] [Commented] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Toke Eskildsen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946317#comment-14946317
 ] 

Toke Eskildsen commented on LUCENE-6828:


Shai: That all sounds right. Great idea with the custom collector test. I 
was aware of the sentinels being re-used in-search, but thank you for making 
sure. The garbage collection I am talking about happens between searches, as 
the HitQueues themselves are not re-used.

Regarding sentinels: they do seem to give a boost, compared to 
no-sentinels-but-still-objects, when the queue size is small and the number of 
hits is large. I have not investigated this in much detail and I suspect it 
would help to visualize the performance of the different implementations with 
some graphs. As queue size, hits, threads and implementation are all relevant 
knobs to tweak, that task will have to wait a bit.

Ramkumar: Large result sets with grouping are very relevant for us. However, 
the current packed queue implementation only handles floats+docIDs. If the 
comparator key can be expressed as a numeric, it should be possible to have 
fast heap-ordering (a numeric array holding the keys and a parallel object 
array for the values, where the values themselves are only accessed upon 
export).
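
For illustration, a minimal sketch of that layout (a hypothetical class of my 
own, not part of any patch): a min-heap whose ordering is maintained entirely 
on a primitive key array, with a parallel Object[] that moves alongside it and 
is only dereferenced on export.

{code}
// Sketch of the parallel-array idea above (hypothetical names). Heap order
// lives in a primitive long[] of comparator keys; the parallel Object[] only
// moves with its key and is only read when results are exported.
public class ParallelKeyHeap<V> {
  private final long[] keys;     // heap-ordered comparator keys
  private final Object[] values; // parallel payloads, dereferenced on export
  private int size = 0;

  public ParallelKeyHeap(int capacity) {
    keys = new long[capacity];
    values = new Object[capacity];
  }

  /** Inserts if there is room, or if the key beats the current worst key. */
  public void insertWithOverflow(long key, V value) {
    if (size < keys.length) {
      keys[size] = key;
      values[size] = value;
      upHeap(size++);
    } else if (key > keys[0]) { // beats the weakest entry at the root
      keys[0] = key;
      values[0] = value;
      downHeap(0);
    }
  }

  private void upHeap(int i) {
    long k = keys[i]; Object v = values[i];
    while (i > 0) {
      int parent = (i - 1) >>> 1;
      if (keys[parent] <= k) break;
      keys[i] = keys[parent]; values[i] = values[parent];
      i = parent;
    }
    keys[i] = k; values[i] = v;
  }

  private void downHeap(int i) {
    long k = keys[i]; Object v = values[i];
    int half = size >>> 1;
    while (i < half) {
      int child = (i << 1) + 1;
      if (child + 1 < size && keys[child + 1] < keys[child]) child++;
      if (k <= keys[child]) break;
      keys[i] = keys[child]; values[i] = values[child];
      i = child;
    }
    keys[i] = k; values[i] = v;
  }
}
{code}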

> Speed up requests for many rows
> ---
>
> Key: LUCENE-6828
> URL: https://issues.apache.org/jira/browse/LUCENE-6828
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 4.10.4, 5.3
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: memory, performance
>
> Standard relevance-ranked searches for top-X results use the HitQueue class 
> to keep track of the highest scoring documents. The HitQueue is a binary heap 
> of ScoreDocs and is pre-filled with sentinel objects upon creation.
> Binary heaps of Objects in Java do not scale well: the HitQueue uses 28 
> bytes/element and memory access is scattered, due to the binary heap algorithm 
> and the use of Objects. To make matters worse, the use of sentinel objects 
> means that even if only a tiny number of documents matches, the full amount 
> of Objects is still allocated.
> As long as the HitQueue is small (< 1000), it performs very well. If top-1M 
> results are requested, it performs poorly and leaves 1M ScoreDocs to be 
> garbage collected.
> An alternative is to replace the ScoreDocs with a single array of packed 
> longs, each long holding the score and the document ID. This strategy 
> requires only 8 bytes/element and is a lot lighter on the GC.
> Some preliminary tests have been done and published at 
> https://sbdevel.wordpress.com/2015/10/05/speeding-up-core-search/
> These indicate that a long[]-backed implementation is at least 3x faster than 
> the vanilla HitQueue for top-1M requests.
> For smaller requests, such as top-10, the packed version also seems 
> competitive when the number of matched documents exceeds 1M. This needs to 
> be investigated further.
> Going forward with this idea requires some refactoring, as Lucene is currently 
> hardwired to the abstract PriorityQueue. Before attempting this, it seems 
> prudent to discuss whether speeding up large top-X requests has any value. 
> Paging seems an obvious contender for requesting large result sets, but I 
> guess the two could work in tandem, opening the way for efficient large pages.
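
To make the packed encoding concrete, here is a sketch of one way to do it (an 
illustration, not the code behind the benchmarks): for non-negative scores, 
Float.floatToIntBits is monotonic, so placing the score bits above an inverted 
docID lets a plain signed long comparison reproduce HitQueue's ordering 
(higher score first, lower docID winning ties).

{code}
// Illustrative packing of (score, docID) into one long. For non-negative
// floats, Float.floatToIntBits() preserves ordering, so comparing the packed
// longs compares scores first. The docID is stored inverted so that on equal
// scores the LOWER docID packs HIGHER, matching HitQueue's tie-break.
public final class PackedHit {
  public static long pack(float score, int docId) {
    long scoreBits = Float.floatToIntBits(score) & 0xFFFFFFFFL;
    long invertedDoc = (~docId) & 0xFFFFFFFFL;
    return (scoreBits << 32) | invertedDoc;
  }

  public static float score(long packed) {
    return Float.intBitsToFloat((int) (packed >>> 32));
  }

  public static int docId(long packed) {
    return ~((int) packed);
  }
}
{code}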






[jira] [Commented] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945088#comment-14945088
 ] 

Adrien Grand commented on LUCENE-6828:
--

Since you opened this issue, I'm wondering if you have more information about 
the use-case of the users who request such large pages? I think that some of 
our users who execute such requests are in fact trying to export a subset of 
their index, in which case they don't even need sorted results, so we wouldn't 
need a priority queue at all. And I'd be curious to understand more about the 
other ones.

Also, since you're playing with priority queues at the moment: I remember 
getting better results at sorting with a ternary heap than with a regular 
binary heap, I assume because it has better cache efficiency in spite of a 
theoretically worse runtime. And some people have experimented with making 
priority queues more cache-efficient, e.g. 
http://playfulprogramming.blogspot.it/2015/08/cache-optimizing-priority-queue.html
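
For reference, a sift-down for such a ternary heap might look like this (my 
sketch, not the code behind those results):

{code}
// Minimal 3-ary (ternary) max-heap sift-down over a long[] (illustrative).
// With three children per node the heap is shallower, and each level's
// candidates sit in adjacent array slots, which tends to be cache-friendlier
// at large sizes even though each level costs an extra comparison.
public final class TernaryHeap {
  static void siftDown(long[] heap, int size, int i) {
    long value = heap[i];
    while (true) {
      int first = 3 * i + 1;                 // first of up to three children
      if (first >= size) break;
      int largest = first;
      int last = Math.min(first + 2, size - 1);
      for (int c = first + 1; c <= last; c++) {
        if (heap[c] > heap[largest]) largest = c;
      }
      if (heap[largest] <= value) break;     // heap property restored
      heap[i] = heap[largest];               // pull the winning child up
      i = largest;
    }
    heap[i] = value;
  }
}
{code}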







[jira] [Commented] (SOLR-8072) Rebalance leaders feature does not set CloudDescriptor#isLeader to false when bumping leaders.

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945102#comment-14945102
 ] 

ASF subversion and git services commented on SOLR-8072:
---

Commit 1707063 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1707063 ]

SOLR-8072: Rebalance leaders feature does not set CloudDescriptor#isLeader to 
false when bumping leaders.

> Rebalance leaders feature does not set CloudDescriptor#isLeader to false when 
> bumping leaders.
> --
>
> Key: SOLR-8072
> URL: https://issues.apache.org/jira/browse/SOLR-8072
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8072.patch
>
>







[jira] [Comment Edited] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945144#comment-14945144
 ] 

Noble Paul edited comment on SOLR-8117 at 10/6/15 3:36 PM:
---

Hmm, I see: the rules should be considered a mandatory state both before and 
after the collection creation. This type of condition (<1) should be 
considered invalid. I misunderstood the rule configuration.

Thank you, Paul.

I will try to reproduce the other behavior: sometimes a collection creation is 
allowed and sometimes it is not, with the same cluster and the same rules.

I use these two rules:
{code}
rule=shard:*,host:*,replica:<2
rule=shard:*,cores:<2
{code}
The last time, I had to retry 3 times to finally create a collection (7 shards, 
2 replicas per shard).

The demo cluster contains 4 hosts, 16 nodes (4 per host), 14 empty nodes.

With your explanation, it should never be allowed to create this collection, 
because all nodes would contain 2 cores after the collection creation. Or 
perhaps the two rules are not applied the way I think.

By the way, the behavior should always be the same.





> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with the 
> condition 'cores:<1'.
> It also fails if the current core count is equal to the core count in the 
> condition minus 1.
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of the code, an additional verification of all the conditions is 
> done with the incremented core count, and therefore it fails.
> I don't know why this additional verification is needed, and removing it 
> seems to fix the issue.






[jira] [Commented] (SOLR-8132) HDFS global block cache should default to true in 6.0.

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945125#comment-14945125
 ] 

ASF subversion and git services commented on SOLR-8132:
---

Commit 1707067 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1707067 ]

SOLR-8132: HDFSDirectoryFactory now defaults to using the global block cache.

> HDFS global block cache should default to true in 6.0.
> --
>
> Key: SOLR-8132
> URL: https://issues.apache.org/jira/browse/SOLR-8132
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.0
>
>
> No more back-compat worry; a non-global cache is not very pleasant.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14430 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14430/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([989B671A202F6F27:8BF855751140D681]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-7956) There are interrupts on shutdown in places that can cause ChannelAlreadyClosed exceptions which prevents proper closing of transaction logs.

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945121#comment-14945121
 ] 

ASF subversion and git services commented on SOLR-7956:
---

Commit 1707066 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1707066 ]

SOLR-7956: Fix CHANGES entries that got mangled.

> There are interrupts on shutdown in places that can cause 
> ChannelAlreadyClosed exceptions which prevents proper closing of transaction 
> logs.
> 
>
> Key: SOLR-7956
> URL: https://issues.apache.org/jira/browse/SOLR-7956
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-7956-commit-tracker.patch, SOLR-7956.patch, 
> SOLR-7956.patch, SOLR-7956.patch
>
>
> Found this while beast testing HttpPartitionTest.






[jira] [Resolved] (SOLR-8132) HDFS global block cache should default to true in 6.0.

2015-10-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8132.
---
Resolution: Fixed




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945218#comment-14945218
 ] 

Noble Paul commented on SOLR-8117:
--

bq. This type of condition (<1) should be considered invalid. I misunderstood 
the rule configuration.

So far I have not put in any checks for "impossible" conditions.

bq. By the way, the behavior should always be the same.

Yes, it should be. IIRC the state of the system is printed out if the rules 
fail and it should be possible to debug it easily.

This should always pass, because for a given shard there can be at most 1 
replica per host, and you have four hosts. The collection requires only 14 
nodes as per your rule {{cores:<2}}, and you have 16 nodes.

I guess it has something to do with the order in which the nodes are assigned. 
It would be great if we could write a test case with the same data.







[jira] [Commented] (SOLR-8072) Rebalance leaders feature does not set CloudDescriptor#isLeader to false when bumping leaders.

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945099#comment-14945099
 ] 

ASF subversion and git services commented on SOLR-8072:
---

Commit 1707062 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1707062 ]

SOLR-8072: Rebalance leaders feature does not set CloudDescriptor#isLeader to 
false when bumping leaders.








[jira] [Commented] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Toke Eskildsen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945131#comment-14945131
 ] 

Toke Eskildsen commented on LUCENE-6828:


I do not know whether you can do deep paging without sorting. For a single 
shard you could use the docID to keep track of progress (assuming the 
documents are collected in order), but that would not work for SolrCloud? 
Maybe I missed a trick here? Or are you describing a streaming scenario where 
the full result set is exported in one go?

Ignoring the details, you raise a fair point: is there a need for large top-X 
results where X is not equal to the full number of documents in the index? It 
would seem like a rare case: the times I have encountered the large-result 
problem (helping random people on IRC and working with Net Archiving), it has 
always been about the full result.

Thank you for the link to the cache-efficient queue. It looks so nifty that 
I'll probably write an implementation even if LUCENE-6828 proves to be 
irrelevant.







[jira] [Commented] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945199#comment-14945199
 ] 

Adrien Grand commented on LUCENE-6828:
--

bq. I do not know whether you can do deep paging without sorting. For a single 
shard you could use the docID to keep track of progress (assuming the 
documents are collected in order), but that would not work for SolrCloud? 
Maybe I missed a trick here? Or are you describing a streaming scenario where 
the full result set is exported in one go?

This is the way elasticsearch's scans work: it obtains an IndexReader lease 
for each shard and then uses doc IDs to track progress across consecutive 
requests, resuming where it previously stopped and throwing a 
CollectionTerminatedException once enough documents have been collected. 
Streaming could be an option too...
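
A rough sketch of that pattern against Lucene's collector API (an illustration 
under assumptions, not elasticsearch's actual code; the class and the resume 
bookkeeping are made up):

{code}
// Hedged sketch of a resumable scan: collect docs in index order, skip
// everything at or below the global docID where the previous page stopped,
// and terminate early once the page is full. Assumes the same IndexReader
// view is held across consecutive requests (the "lease" mentioned above).
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.CollectionTerminatedException;
import org.apache.lucene.search.SimpleCollector;

public class ResumableScanCollector extends SimpleCollector {
  private final int pageSize;
  private final int afterGlobalDoc;  // last global docID of the previous page
  private final List<Integer> page = new ArrayList<>();
  private int docBase;

  public ResumableScanCollector(int pageSize, int afterGlobalDoc) {
    this.pageSize = pageSize;
    this.afterGlobalDoc = afterGlobalDoc;
  }

  @Override
  protected void doSetNextReader(LeafReaderContext context) {
    if (page.size() >= pageSize) {
      // Page already full: terminate remaining segments immediately.
      throw new CollectionTerminatedException();
    }
    docBase = context.docBase;
  }

  @Override
  public void collect(int doc) {
    int globalDoc = docBase + doc;
    if (globalDoc <= afterGlobalDoc) {
      return; // already returned on an earlier page
    }
    page.add(globalDoc);
    if (page.size() >= pageSize) {
      throw new CollectionTerminatedException(); // page full for this segment
    }
  }

  @Override
  public boolean needsScores() {
    return false; // scan order, no relevance ranking needed
  }

  public List<Integer> getPage() {
    return page;
  }
}
{code}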







[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 456 - Still Failing

2015-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/456/

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([D876DF9CDDEA55F6:CB15EDF3EC85EC50]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b82) - Build # 14436 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14436/
Java: 32bit/jdk1.9.0-ea-b82 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=65, name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
        parked in ScheduledThreadPoolExecutor$DelayedWorkQueue.take via 
ThreadPoolExecutor.getTask/runWorker
   2) Thread[id=63, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
        waiting in java.util.TimerThread.mainLoop
   3) Thread[id=67, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
        parked in ScheduledThreadPoolExecutor$DelayedWorkQueue.take via 
ThreadPoolExecutor.getTask/runWorker
   4) Thread[id=64, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
        parked in ScheduledThreadPoolExecutor$DelayedWorkQueue.take via 
ThreadPoolExecutor.getTask/runWorker
   5) Thread[id=66, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
        parked in ScheduledThreadPoolExecutor$DelayedWorkQueue.take via 
ThreadPoolExecutor.getTask/runWorker

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=65, name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 

[jira] [Updated] (LUCENE-6826) java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be cast to org.apache.lucene.index.MultiTermsEnum when adding indexes

2015-10-06 Thread Trejkaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trejkaz updated LUCENE-6826:

Attachment: Lucene6826.java

This test creates an index with one document which contains a value that does 
not match the filter. It then migrates the index in a fashion that just 
filters out the values we don't want, which in this case turns out to be all 
values in that field, and that triggers the error.

For the first half of the day I tried to reproduce the exact same thing from 
scratch, with no success: it happily migrated. This version comes from working 
code, simplified as far as possible without removing the issue, so it could 
turn out that there is a subtle bug in my code as well.


> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum when adding indexes
> --
>
> Key: LUCENE-6826
> URL: https://issues.apache.org/jira/browse/LUCENE-6826
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 5.2.1
>Reporter: Trejkaz
> Attachments: Lucene6826.java
>
>
> We are using addIndexes and FilterCodecReader tricks as part of index 
> migration.
> Whether FilterCodecReader tricks are required to reproduce this is uncertain, 
> but in any case, when migrating a particular index, I saw this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum
>   at 
> org.apache.lucene.index.MappedMultiFields$MappedMultiTerms.iterator(MappedMultiFields.java:65)
>   at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:426)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
>   at 
> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
>   at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2519)
> {noformat}
> TermsEnum$2 appears to be TermsEnum.EMPTY. The place where it gets created 
> is in MultiTermsEnum#reset:
> {code}
> if (queue.size() == 0) {
>   return TermsEnum.EMPTY;   // <- this is not a MultiTermsEnum
> } else {
>   return this;
> }
> {code}
> A quick hack would be for MappedMultiFields to check for TermsEnum.EMPTY 
> specifically before casting, but there might be some way to avoid the cast 
> entirely and that would obviously be a better idea.
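
For illustration, the quick hack described in the issue text could look 
roughly like this (a sketch only; the surrounding MappedMultiFields method and 
the MappedMultiTermsEnum constructor arguments are shown schematically):

{code}
// Rough sketch of the "check before casting" hack; the real
// MappedMultiFields.MappedMultiTerms.iterator() is more involved and the
// MappedMultiTermsEnum constructor arguments here are schematic. The point
// is only that TermsEnum.EMPTY must be special-cased before the downcast.
TermsEnum iterator = in.iterator();
if (iterator == TermsEnum.EMPTY) {
  // No live terms after filtering: nothing to remap, and casting would throw.
  return TermsEnum.EMPTY;
}
return new MappedMultiTermsEnum(field, mergeState, (MultiTermsEnum) iterator);
{code}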






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2729 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2729/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([6F145AE51E2E137C:7C77688A2F41AADA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946187#comment-14946187
 ] 

Shai Erera commented on LUCENE-6828:


I read the post [~toke], very interesting! I've got a couple of comments. 
First, if you want to avoid the micro-benchmark, you could implement your own 
Collector, copying most of TopScoreDocCollector's logic but using the packed 
HitQueue version. That will compare end-to-end query performance, which is 
better than the micro-benchmark in that I believe the majority of the time 
spent during search goes to traversing postings lists, reading DocValues and 
computing the scores, and *not* to sorting the heap. So I think it'd be nice 
to see how all 3 compare in an end-to-end query. I don't know how easily a 
custom Collector can be plugged into Solr, but in Lucene it's straightforward. 
In Solr, though, you would be able to compare other aspects, such as the deep 
paging and grouping that others mentioned on this issue.

About sentinel values: those were added in LUCENE-1593 with the purpose of 
avoiding the "is the queue full" checks in the collector's code. At the time 
it showed improvements, but the code has changed a lot since. Also, once a 
ScoreDoc object is added to the queue, it stays there and its values are 
modified in case a better ScoreDoc should replace it. Therefore, GC-wise there 
are only X ScoreDoc objects allocated (where X is the same as top-X). In your 
post I wasn't sure if you thought that the sentinel values are discarded and 
new ones allocated instead, so I just wanted to clarify that.
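
For readers following along, the reuse idiom looks roughly like this 
(paraphrased, not verbatim Lucene code):

{code}
// The sentinel idiom from LUCENE-1593, paraphrased: the queue is created
// pre-filled with sentinel ScoreDocs (score = -Infinity), so the collector
// never asks "is the queue full?". It compares each hit against the current
// worst entry and, when the hit wins, overwrites that entry in place --
// no per-hit allocation.
ScoreDoc pqTop = pq.top();        // current worst entry (a sentinel at first)

// inside collect(int doc):
if (score > pqTop.score) {
  pqTop.doc = doc + docBase;      // mutate the evicted entry in place
  pqTop.score = score;
  pqTop = pq.updateTop();         // re-heapify and fetch the new worst entry
}
{code}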

I also think that we may not need to choose a one-queue-to-rule-them-all 
solution here. What about adding a VeryLargeTopScoreDocsCollector that is used 
when X is too large (100K, taking an example from your post)? Solr, and maybe 
even Lucene's {{searcher.search(q, numHits)}} API, could pick it 
automatically. It would use a packed HitQueue; it could even insert the 
results unsorted and heap-sort them if needed (or merge-sort at the end). It 
only needs to expose a TopDocs-like API. If we need to, let's make it extend 
TopDocsCollector directly (such that you won't have to use a PQ at all).

That is all still pending end-to-end query benchmark results. If the sentinel 
approach is better for small X, and the packed one for large X, let's make the 
choice dynamically in the code, so users get the best performance for their 
search request.
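
A minimal sketch of what that dynamic choice could look like (hypothetical 
names and threshold, not an existing Lucene API):

{code}
// Hypothetical dispatch: keep the sentinel-filled HitQueue for small top-X,
// switch to a packed long[]-backed collector once X gets large. The class
// name and cutoff are made up for illustration.
static final int PACKED_THRESHOLD = 100_000;        // illustrative cutoff

static TopDocsCollector<ScoreDoc> createTopScoreCollector(int numHits) {
  if (numHits >= PACKED_THRESHOLD) {
    return new PackedTopScoreDocCollector(numHits); // hypothetical class
  }
  return TopScoreDocCollector.create(numHits);      // existing sentinel path
}
{code}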





[jira] [Created] (LUCENE-6829) OfflineSorter should use Directory API

2015-10-06 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6829:
--

 Summary: OfflineSorter should use Directory API
 Key: LUCENE-6829
 URL: https://issues.apache.org/jira/browse/LUCENE-6829
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4


I think this is a blocker for LUCENE-6825, because the block KD-tree makes 
heavy use of OfflineSorter and we don't want to fill up tmp space ...

This should be a straightforward cutover, but there are some challenges; e.g. 
the test was failing because the virus checker blocked deletion of files.






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 102 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/102/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.security.PKIAuthenticationIntegrationTest.testPkiAuth

Error Message:
Could not load collection from ZK:collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from 
ZK:collection1
at 
__randomizedtesting.SeedInfo.seed([2C7ADE71D574426A:1CC4472035C2C5CB]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1017)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:550)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:193)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.getTotalReplicas(AbstractFullDistribZkTestBase.java:429)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:387)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:310)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:961)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for /collections/collection1/state.json

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14432 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14432/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([B30244383CB4F3A2:A06176570DDB4A04]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-6829) OfflineSorter should use Directory API

2015-10-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6829:
---
Attachment: LUCENE-6829.patch

Initial very, very rough patch ... only lucene core compiles/tests; I still 
need to cut over all places that use OfflineSorter.

TestOfflineSorter seems to pass, but I sidestepped the virus checker issue.

It becomes the caller's job to pass in a "temp file prefix" from which the 
OfflineSorter will generate its own temp file names.
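
For illustration, a minimal sketch of the intended call pattern, assuming the 
API ends up taking a Directory plus a temp file prefix (the names and 
signatures here are my guesses, not the final API):

{code}
// Hedged sketch, not the actual patch: the caller supplies the Directory and
// a temp file prefix, and OfflineSorter derives its own temp file names.
Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
OfflineSorter sorter = new OfflineSorter(dir, "bkd_sort");  // prefix chosen by the caller
String sortedName = sorter.sort(unsortedName);  // unsortedName was written earlier via dir.createOutput(...)
{code}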

> OfflineSorter should use Directory API
> --
>
> Key: LUCENE-6829
> URL: https://issues.apache.org/jira/browse/LUCENE-6829
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6829.patch
>
>
> I think this is a blocker for LUCENE-6825, because the block KD-tree makes 
> heavy use of OfflineSorter and we don't want to fill up tmp space ...
> This should be a straightforward cutover, but there are some challenges; e.g., 
> the test was failing because the virus checker blocked deletion of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6825) Add multidimensional byte[] indexing support to Lucene

2015-10-06 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945626#comment-14945626
 ] 

Ryan Ernst commented on LUCENE-6825:


This looks good. I think this is a great plan: separate this out into its own 
codec format, and do it in pieces. And this patch is a good start. 

A few comments on the patch:
* I see a number of debugging s.o.p.s (System.out.println calls); should these 
be commented out or removed?
* Is this nocommit still relevant?
{quote}
// nocommit can we get this back?
//state.docs.grow(count);
{quote}
* There is a nocommit on the {{bytesToInt}} method. I think as a follow-up we 
should investigate packing values, but is the nocommit still needed? Also, it 
seems to exist in both reader and writer? Could it be in one place, but still 
package-private, perhaps in {{oal.util.bkd.Util}}?
* Can we avoid the bitflip in {{bytesToInt}} by using a long accumulator and 
casting to int? We can assert no upper bits are set before casting (see the 
first sketch after this list).
* Can we limit the number of dims to 3 for now? I see a check for < 255 to fit 
in a byte, but it might be nice later to use those extra bits for some other 
information (essentially, let's reserve the bits for now, instead of allowing a 
silly number of dimensions).
* In {{BKDWriter.finish()}}, can the try/catch around building be simplified? I 
think you could remove the {{success}} marker and do the regular cleanup after 
build, and change the {{finally}} to a {{catch}}, then add any failures when 
destroying the per-dim writers as suppressed exceptions to the original (see 
the second sketch after this list).
* In {{BKDWriter.build()}}, there is a nocommit about destroying per-dim 
writers, but I think that is handled by the caller in {{finish()}} mentioned in 
my previous comment? I also see some destroy calls below that... is there 
double destroying going on, or is this more complicated than it looks?
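
To make the {{bytesToInt}} suggestion concrete, a sketch of the 
long-accumulator idea (my own illustration, assuming 4 big-endian sortable 
bytes per value; not code from the patch):

{code}
static int bytesToInt(byte[] bytes, int offset) {
  long v = 0;
  for (int i = 0; i < 4; i++) {
    v = (v << 8) | (bytes[offset + i] & 0xffL);  // treat each byte as unsigned
  }
  // If all indexed values are non-negative, unsigned byte order already
  // matches signed int order, so no sign bitflip is needed; the assert
  // guards that assumption before the narrowing cast.
  assert (v & 0xFFFFFFFF80000000L) == 0 : "upper bits set: " + v;
  return (int) v;
}
{code}

And a sketch of the suggested try/catch shape in {{finish()}} 
({{PerDimWriter}}, {{perDimWriters}} and {{build()}} are stand-ins for whatever 
the patch actually uses; assumes {{build()}} is the step that can fail):

{code}
try {
  long indexFP = build();  // the step that can fail
  for (PerDimWriter w : perDimWriters) {
    w.destroy();  // regular cleanup on the success path
  }
  return indexFP;
} catch (IOException e) {
  for (PerDimWriter w : perDimWriters) {
    try {
      w.destroy();  // best-effort cleanup on failure
    } catch (Throwable t) {
      e.addSuppressed(t);  // keep the original failure primary
    }
  }
  throw e;
}
{code}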


> Add multidimensional byte[] indexing support to Lucene
> --
>
> Key: LUCENE-6825
> URL: https://issues.apache.org/jira/browse/LUCENE-6825
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6825.patch
>
>
> I think we should graduate the low-level block KD-tree data structure
> from sandbox into Lucene's core?
> This can be used for very fast 1D range filtering for numerics,
> removing the 8 byte (long/double) limit we have today, so e.g. we
> could efficiently support BigInteger, BigDecimal, IPv6 addresses, etc.
> It can also be used for > 1D use cases, like 2D (lat/lon) and 3D
> (x/y/z with geo3d) geo shape intersection searches.
> The idea here is to add a new part of the Codec API (DimensionalFormat
> maybe?) that can do low-level N-dim point indexing and at runtime
> exposes only an "intersect" method.
> It should give sizable performance gains (smaller index, faster
> searching) over what we have today, and even over what auto-prefix
> with efficient numeric terms would do.
> There are many steps here ... and I think adding this is analogous to
> how we added FSTs, where we first added low level data structure
> support and then gradually cutover the places that benefit from an
> FST.
> So for the first step, I'd like to just add the low-level block
> KD-tree impl into oal.util.bkd, but make a couple improvements over
> what we have now in sandbox:
>   * Use byte[] as the value not int (@rjernst's good idea!)
>   * Generalize it to arbitrary dimensions vs. specialized/forked 1D,
> 2D, 3D cases we have now
> This is already hard enough :)  After that we can build the
> DimensionalFormat on top, then cutover existing specialized block
> KD-trees.  We also need to fix OfflineSorter to use Directory API so
> we don't fill up /tmp when building a block KD-tree.
> A block KD-tree is at heart an inverted data structure, like postings,
> but is also similar to auto-prefix in that it "picks" proper
> N-dimensional "terms" (leaf blocks) to index based on how the specific
> data being indexed is distributed.  I think this is a big part of why
> it's so fast, i.e. in contrast to today where we statically slice up
> the space into the same terms regardless of the data (trie shifting,
> morton codes, geohash, hilbert curves, etc.)
> I'm marking this as trunk only for now... as we iterate we can see if
> it could maybe go back to 5.x...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2780 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2780/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking

Error Message:
Shard a1x2_shard1_replica2 received all 10 requests

Stack Trace:
java.lang.AssertionError: Shard a1x2_shard1_replica2 received all 10 requests
at 
__randomizedtesting.SeedInfo.seed([458EF0ABDE61BD0E:DB2A96B2A6AAC98]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking(TestRandomRequestDistribution.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14426 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14426/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Captured an uncaught exception in thread: Thread[id=10380, 
name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=10380, 
name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]
Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
at __randomizedtesting.SeedInfo.seed([42BDB2D0998086DA]:0)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:232)
Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_42BDB2D0998086DA-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.006.1514261472283197440
 (No such file or directory)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244)
at 
org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173)
at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:534)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225)
Caused by: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_42BDB2D0998086DA-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.006.1514261472283197440
 (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:243)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236)
... 7 more




Build Log:
[...truncated 10775 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_42BDB2D0998086DA-001/init-core-data-001
   [junit4]   2> 1142840 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[42BDB2D0998086DA]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 1142840 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[42BDB2D0998086DA]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /x_/xg
   [junit4]   2> 1142842 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1142842 INFO  (Thread-4041) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1142842 INFO  (Thread-4041) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1142942 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.ZkTestServer start zk server on port:41181
   [junit4]   2> 1142943 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1142943 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1142944 INFO  (zkCallback-1307-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@4bd34 name:ZooKeeperConnection 
Watcher:127.0.0.1:41181 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 1142944 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1142945 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1142945 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 1142946 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[42BDB2D0998086DA]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1142947 

[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14143 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14143/
Java: 64bit/jdk1.9.0-ea-b82 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([77FF419EAB3A8FB0:649C73F19A553616]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:519)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-8133) NullPointerException in SolrInfoMBeanHandler.diffNamedList

2015-10-06 Thread Yurii Kartsev (JIRA)
Yurii Kartsev created SOLR-8133:
---

 Summary: NullPointerException in SolrInfoMBeanHandler.diffNamedList
 Key: SOLR-8133
 URL: https://issues.apache.org/jira/browse/SOLR-8133
 Project: Solr
  Issue Type: Bug
 Environment: SOLR 5.1.0
Reporter: Yurii Kartsev
Priority: Minor


This happened when I was watching changes in Plugin/Stats while doing some 
indexing from Java code + searching from another browser tab. My intention was 
to see how well filter cache performs.

From solr.log (collection name manually replaced with "my-search"):
{code}
INFO  - 2015-10-06 19:28:59.861; [   ] org.apache.solr.core.SolrCore; [my-search] 
Registered new searcher Searcher@70c5064c[my-search] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_12ad(5.1.0):C46911/1:delGen=1) 
Uninverting(_129r(5.1.0):C32078/1:delGen=1) Uninverting(_12a3(5.1.0):C28149) 
Uninverting(_12ax(5.1.0):C58353/2:delGen=1) Uninverting(_12an(5.1.0):C38068) 
Uninverting(_12ay(5.1.0):C1) Uninverting(_12ds(5.1.0):C4)))}
ERROR - 2015-10-06 19:29:02.389; [   my-search] 
org.apache.solr.common.SolrException; java.lang.NullPointerException
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffNamedList(SolrInfoMBeanHandler.java:229)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.getDiff(SolrInfoMBeanHandler.java:204)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:92)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)

INFO  - 2015-10-06 19:29:02.390; [   my-search] org.apache.solr.core.SolrCore; 
[my-search] webapp=/solr path=/admin/mbeans 

[jira] [Commented] (SOLR-8133) NullPointerException in SolrInfoMBeanHandler.diffNamedList

2015-10-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945742#comment-14945742
 ] 

Shawn Heisey commented on SOLR-8133:


Does the browser URL for the admin UI contain /#/ or /index.html# when this 
happens?

Are you in a position where you can upgrade to 5.3.1 to see if it's still a 
problem?

I'm somewhat mystified by an entire response (in XML format, no less) being 
part of the request, even in the stream.body parameter.

> NullPointerException in SolrInfoMBeanHandler.diffNamedList
> --
>
> Key: SOLR-8133
> URL: https://issues.apache.org/jira/browse/SOLR-8133
> Project: Solr
>  Issue Type: Bug
> Environment: SOLR 5.1.0
>Reporter: Yurii Kartsev
>Priority: Minor
>
> This happened when I was watching changes in Plugin/Stats while doing some 
> indexing from Java code + searching from another browser tab. My intention 
> was to see how well filter cache performs.
> From solr.log (collection name manually replaced with "my-search"):{code}INFO 
>  - 2015-10-06 19:28:59.861; [   ] org.apache.solr.core.SolrCore; [my-search] 
> Registered new searcher Searcher@70c5064c[my-search] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_12ad(5.1.0):C46911/1:delGen=1)
>  Uninverting(_129r(5.1.0):C32078/1:delGen=1) Uninverting(_12a3(5.1.0):C28149) 
> Uninverting(_12ax(5.1.0):C58353/2:delGen=1) Uninverting(_12an(5.1.0):C38068) 
> Uninverting(_12ay(5.1.0):C1) Uninverting(_12ds(5.1.0):C4)))}
> ERROR - 2015-10-06 19:29:02.389; [   my-search] 
> org.apache.solr.common.SolrException; java.lang.NullPointerException
> at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffNamedList(SolrInfoMBeanHandler.java:229)
> at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getDiff(SolrInfoMBeanHandler.java:204)
> at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:92)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:745)
> INFO  - 2015-10-06 19:29:02.390; [   my-search] 
> org.apache.solr.core.SolrCore; [my-search] webapp=/solr path=/admin/mbeans 
> 

[jira] [Commented] (SOLR-8133) NullPointerException in SolrInfoMBeanHandler.diffNamedList

2015-10-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945770#comment-14945770
 ] 

Shawn Heisey commented on SOLR-8133:


Ah!  With some poking, I figured out why the response is in the request.  When 
using the "diff=true" parameter, the handler expects a response for comparison 
to be in the stream.body parameter.

I think part of the problem here might be that the XML in the request is not 
URI encoded.  On a 5.2.1 server, when I do the "watch changes" thing, I see a 
request where stream.body starts out like this (notice that all the less-than 
and greater-than symbols, which are prominent in XML, are encoded as %nn 
values):

stream.body=%3C%3Fxml+version%3D%221.0%22+encoding%3D%22UTF-8%22%3F%3E%0A%3Cresponse%3E%

This might be a bug in the 5.1 version that is fixed in later versions.  I am 
completely guessing here.
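
For reference, that encoded prefix is exactly what standard form-encoding of 
the XML prolog produces; a quick self-contained check (my own snippet, not 
Solr code):

{code}
import java.net.URLEncoder;

public class EncodeCheck {
  public static void main(String[] args) throws Exception {
    String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<response>";
    // Prints:
    // %3C%3Fxml+version%3D%221.0%22+encoding%3D%22UTF-8%22%3F%3E%0A%3Cresponse%3E
    // which matches the stream.body prefix quoted above.
    System.out.println(URLEncoder.encode(xml, "UTF-8"));
  }
}
{code}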

> NullPointerException in SolrInfoMBeanHandler.diffNamedList
> --
>
> Key: SOLR-8133
> URL: https://issues.apache.org/jira/browse/SOLR-8133
> Project: Solr
>  Issue Type: Bug
> Environment: SOLR 5.1.0
>Reporter: Yurii Kartsev
>Priority: Minor
>
> This happened when I was watching changes in Plugin/Stats while doing some 
> indexing from Java code + searching from another browser tab. My intention 
> was to see how well filter cache performs.
> From solr.log (collection name manually replaced with "my-search"):{code}INFO 
>  - 2015-10-06 19:28:59.861; [   ] org.apache.solr.core.SolrCore; [my-search] 
> Registered new searcher Searcher@70c5064c[my-search] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_12ad(5.1.0):C46911/1:delGen=1)
>  Uninverting(_129r(5.1.0):C32078/1:delGen=1) Uninverting(_12a3(5.1.0):C28149) 
> Uninverting(_12ax(5.1.0):C58353/2:delGen=1) Uninverting(_12an(5.1.0):C38068) 
> Uninverting(_12ay(5.1.0):C1) Uninverting(_12ds(5.1.0):C4)))}
> ERROR - 2015-10-06 19:29:02.389; [   my-search] 
> org.apache.solr.common.SolrException; java.lang.NullPointerException
> at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffNamedList(SolrInfoMBeanHandler.java:229)
> at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getDiff(SolrInfoMBeanHandler.java:204)
> at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:92)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> 

[jira] [Created] (SOLR-8134) AddSchemaFieldsUpdateProcessorFactory throws immutable schema error even if no fields are added

2015-10-06 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-8134:


 Summary: AddSchemaFieldsUpdateProcessorFactory throws immutable 
schema error even if no fields are added
 Key: SOLR-8134
 URL: https://issues.apache.org/jira/browse/SOLR-8134
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis, SolrCloud
Reporter: Gregory Chanan
Priority: Minor


See SOLR-7967 for the genesis.  Basically, if you are in the weird case of 
having a non-mutable schema but the AddSchemaFieldsUpdateProcessor applied to 
it, you won't be able to index any data, even if the indexing does not cause 
any fields to be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7967) AddSchemaFieldsUpdateProcessorFactory does not check if the ConfigSet is immutable

2015-10-06 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945888#comment-14945888
 ] 

Gregory Chanan commented on SOLR-7967:
--

SOLR-8134 for the mutable schema bug.

> AddSchemaFieldsUpdateProcessorFactory does not check if the ConfigSet is 
> immutable
> --
>
> Key: SOLR-7967
> URL: https://issues.apache.org/jira/browse/SOLR-7967
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, Trunk
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-7967.patch
>
>
> SOLR-7742 introduced Immutable ConfigSets.  There are checks added to 
> SolrConfigHandler and SchemaHandler so that if a user tries to modify the 
> SolrConfig or the Schema via either of these interfaces an error is returned 
> if the ConfigSet is defined to be immutable.
> Updates to the schema made via the AddSchemaFieldsUpdateProcessorFactory are 
> not checked in this way.  I'm not certain this should be considered a bug.  A 
> ConfigSet is defined by \{SolrConfig, Schema, ConfigSetProperties\}.  On one 
> hand, you can argue that you are modifying the Schema, which is part of the 
> ConfigSet, so the immutable check should apply. On the other hand, the 
> SolrConfig (which defines the AddSchema...Factory) defines that it wants the 
> Config to be updated, so if you view the ConfigSet in totality you could 
> argue nothing is really changing. I'd slightly lean towards adding the check, 
> but could go either way.
> Other opinions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7543) Create GraphQuery that allows graph traversal as a query operator.

2015-10-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7543:
---
Attachment: SOLR-7543.patch

Thanks Kevin,
I just uploaded a patch with some minor improvements, including:

Slightly simplified some of the parameter parsing... example:
From:
{code}
boolean onlyLeafNodes = Boolean.valueOf(localParams.get("returnOnlyLeaf", "false"));
{code}
To:
{code}
boolean onlyLeafNodes = localParams.getBool("returnOnlyLeaf", false);
{code}

Simplified some of the query handling, for instance:
{code}
SolrParams params = getParams();
SolrParams solrParams = SolrParams.wrapDefaults(localParams, params);
QParser baseParser = subQuery(solrParams.get(QueryParsing.V), null);
// grab the graph query options / defaults
Query rootNodeQuery = baseParser.getQuery();  
{code}
Was replaced with
{code}
Query rootNodeQuery = subQuery(localParams.get(QueryParsing.V), 
null).getQuery();
{code}

I rewrote buildFrontierQuery to use TermsQuery instead of BooleanQuery (more 
efficient for this use case, and not subject to the 1024-clause BooleanQuery 
limit); a rough sketch of the idea follows.
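
As an illustration of that change (a sketch under assumptions, not the patch 
itself: {{toField}} and {{collectedEdgeIds}} are stand-ins):

{code}
// Build the frontier from the collected edge values with a single TermsQuery
// rather than one BooleanClause per value, so the 1024-clause BooleanQuery
// limit never applies.
List<Term> frontierTerms = new ArrayList<>();
for (BytesRef edgeId : collectedEdgeIds) {  // hypothetical accumulator of edge ids
  frontierTerms.add(new Term(toField, BytesRef.deepCopyOf(edgeId)));
}
Query frontierQuery = new TermsQuery(frontierTerms);
{code}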

I also marked FrontierQuery as internal and made it package-private... it's an 
implementation detail, and it feels like we could get rid of it in the future.

Unless anyone has ideas of how to improve the current interface, I think this 
is ready to commit! (at least to trunk)  We can continue to make more 
optimizations to the implementation at any point.


> Create GraphQuery that allows graph traversal as a query operator.
> --
>
> Key: SOLR-7543
> URL: https://issues.apache.org/jira/browse/SOLR-7543
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Kevin Watters
>Priority: Minor
> Attachments: SOLR-7543.patch, SOLR-7543.patch
>
>
> I have a GraphQuery that I implemented a long time back that allows a user to 
> specify a "startQuery" to identify which documents to start graph traversal 
> from.  It then gathers up the edge ids for those documents , optionally 
> applies an additional filter.  The query is then re-executed continually 
> until no new edge ids are identified.  I am currently hosting this code up at 
> https://github.com/kwatters/solrgraph and I would like to work with the 
> community to get some feedback and ultimately get it committed back in as a 
> lucene query.
> Here's a bit more of a description of the parameters for the query / graph 
> traversal:
> q - the initial start query that identifies the universe of documents to 
> start traversal from.
> fromField - the field name that contains the node id
> toField - the name of the field that contains the edge id(s).
> traversalFilter - this is an additional query that can be supplied to limit 
> the scope of graph traversal to just the edges that satisfy the 
> traversalFilter query.
> maxDepth - integer specifying how deep the breadth first search should go.
> returnStartNodes - boolean to determine if the documents that matched the 
> original "q" should be returned as part of the graph.
> onlyLeafNodes - boolean that filters the graph query to only return 
> documents/nodes that have no edges.
> We identify a set of documents with "q" as any arbitrary lucene query.  It 
> will collect the values in the fromField, create an OR query with those 
> values , optionally apply an additional constraint from the "traversalFilter" 
> and walk the result set until no new edges are detected.  Traversal can also 
> be stopped at N hops away as defined with the maxDepth.  This is a BFS 
> (Breadth First Search) algorithm.  Cycle detection is done by not revisiting 
> the same document for edge extraction.  
> This query operator does not keep track of how you arrived at the document, 
> but only that the traversal did arrive at the document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945860#comment-14945860
 ] 

Ramkumar Aiyengar commented on LUCENE-6828:
---

You still need to do old-fashioned deep paging if you are paging with grouping: 
grouping requires you to have the context of groups and docs with any higher 
sort value than what you are returning.

> Speed up requests for many rows
> ---
>
> Key: LUCENE-6828
> URL: https://issues.apache.org/jira/browse/LUCENE-6828
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 4.10.4, 5.3
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: memory, performance
>
> Standard relevance ranked searches for top-X results uses the HitQueue class 
> to keep track of the highest scoring documents. The HitQueue is a binary heap 
> of ScoreDocs and is pre-filled with sentinel objects upon creation.
> Binary heaps of Objects in Java does not scale well: The HitQueue uses 28 
> bytes/element and memory access is scattered due to the binary heap algorithm 
> and the use of Objects. To make matters worse, the use of sentinel objects 
> means that even if only a tiny number of documents matches, the full amount 
> of Objects is still allocated.
> As long as the HitQueue is small (< 1000), it performs very well. If top-1M 
> results are requested, it performs poorly and leaves 1M ScoreDocs to be 
> garbage collected.
> An alternative is to replace the ScoreDocs with a single array of packed 
> longs, each long holding the score and the document ID. This strategy 
> requires only 8 bytes/element and is a lot lighter on the GC.
> Some preliminary tests has been done and published at 
> https://sbdevel.wordpress.com/2015/10/05/speeding-up-core-search/
> These indicate that a long[]-backed implementation is at least 3x faster than 
> vanilla HitDocs for top-1M requests.
> For smaller requests, such as top-10, the packed version also seems 
> competitive, when the amount of matched documents exceeds 1M. This needs to 
> be investigated further.
> Going forward with this idea requires some refactoring as Lucene is currently 
> hardwired to the abstract PriorityQueue. Before attempting this, it seems 
> prudent to discuss whether speeding up large top-X requests has any value? 
> Paging seems an obvious contender for requesting large result sets, but I 
> guess the two could work in tandem, opening up for efficient large pages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7090) Cross collection join

2015-10-06 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-7090:
-
Attachment: SOLR-7090-fulljoin.patch

Tests passing.  I'm doing something kind of hacky to avoid the auto-warm.

> Cross collection join
> -
>
> Key: SOLR-7090
> URL: https://issues.apache.org/jira/browse/SOLR-7090
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-7090-fulljoin.patch, SOLR-7090-fulljoin.patch, 
> SOLR-7090.patch
>
>
> Although SOLR-4905 supports joins across collections in Cloud mode, there are 
> limitations, (i) the secondary collection must be replicated at each node 
> where the primary collection has a replica, (ii) the secondary collection 
> must be singly sharded.
> This issue explores ideas/possibilities of cross collection joins, even 
> across nodes. This will be helpful for users who wish to maintain boosts or 
> signals in a secondary, more frequently updated collection, and perform query 
> time join of these boosts/signals with results from the primary collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7543) Create GraphQuery that allows graph traversal as a query operator.

2015-10-06 Thread Kevin Watters (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945822#comment-14945822
 ] 

Kevin Watters commented on SOLR-7543:
-

Nice improvements!  The new TermsQuery definitely is a nice fit for this type 
of query (though that code path is only active if useAutn=false, so it doesn't 
do the automaton compilation).
Looks good to me, let's roll with it!

> Create GraphQuery that allows graph traversal as a query operator.
> --
>
> Key: SOLR-7543
> URL: https://issues.apache.org/jira/browse/SOLR-7543
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Kevin Watters
>Priority: Minor
> Attachments: SOLR-7543.patch, SOLR-7543.patch
>
>
> I have a GraphQuery that I implemented a long time back that allows a user to 
> specify a "startQuery" to identify which documents to start graph traversal 
> from.  It then gathers up the edge ids for those documents , optionally 
> applies an additional filter.  The query is then re-executed continually 
> until no new edge ids are identified.  I am currently hosting this code up at 
> https://github.com/kwatters/solrgraph and I would like to work with the 
> community to get some feedback and ultimately get it committed back in as a 
> lucene query.
> Here's a bit more of a description of the parameters for the query / graph 
> traversal:
> q - the initial start query that identifies the universe of documents to 
> start traversal from.
> fromField - the field name that contains the node id
> toField - the name of the field that contains the edge id(s).
> traversalFilter - this is an additional query that can be supplied to limit 
> the scope of graph traversal to just the edges that satisfy the 
> traversalFilter query.
> maxDepth - integer specifying how deep the breadth first search should go.
> returnStartNodes - boolean to determine if the documents that matched the 
> original "q" should be returned as part of the graph.
> onlyLeafNodes - boolean that filters the graph query to only return 
> documents/nodes that have no edges.
> We identify a set of documents with "q" as any arbitrary lucene query.  It 
> will collect the values in the fromField, create an OR query with those 
> values , optionally apply an additional constraint from the "traversalFilter" 
> and walk the result set until no new edges are detected.  Traversal can also 
> be stopped at N hops away as defined with the maxDepth.  This is a BFS 
> (Breadth First Search) algorithm.  Cycle detection is done by not revisiting 
> the same document for edge extraction.  
> This query operator does not keep track of how you arrived at the document, 
> but only that the traversal did arrive at the document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7090) Cross collection join

2015-10-06 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945823#comment-14945823
 ] 

Scott Blum commented on SOLR-7090:
--

I was able to figure out why my random test wasn't passing.  After committing 
some updates, during SolrIndexSearcher.warm(), the cache attempts to autowarm 
recent queries.  In my case, it's trying to auto-warm the fulljoin.  The 
problem is that if I run my facet query synchronously during the warming 
process, I get the OLD facet results, which are no longer valid.  As a result I 
end up warming the cache with incorrect data.

How should I fix this?

1) I can throw an exception from the scorer() if I detect that it's a 
warming=true query.  But I hate spamming errors into the logs.

2) I tried running my facet query with cache=false for warming queries, but 
that didn't work, it still got the old results.

Help! :)
Scott


> Cross collection join
> -
>
> Key: SOLR-7090
> URL: https://issues.apache.org/jira/browse/SOLR-7090
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-7090-fulljoin.patch, SOLR-7090.patch
>
>
> Although SOLR-4905 supports joins across collections in Cloud mode, there are 
> limitations, (i) the secondary collection must be replicated at each node 
> where the primary collection has a replica, (ii) the secondary collection 
> must be singly sharded.
> This issue explores ideas/possibilities of cross collection joins, even 
> across nodes. This will be helpful for users who wish to maintain boosts or 
> signals in a secondary, more frequently updated collection, and perform query 
> time join of these boosts/signals with results from the primary collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14433 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14433/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([1C4072C7E1495233:F2340A8D026EB95]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-7967) AddSchemaFieldsUpdateProcessorFactory does not check if the ConfigSet is immutable

2015-10-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7967:
-
Attachment: SOLR-7967.patch

Here's a patch.  It only does the immutable ConfigSet check if it's actually 
going to add a new field.  Interestingly, that differs from the existing 
mutable-schema check, which runs before looking at the existing fields.  So, 
with a non-mutable schema and an AddSchemaFieldsUpdateProcessorFactory, you 
wouldn't be able to index any data, even data containing only existing 
fields.  That's probably a pre-existing bug, but it seems really minor.
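
A minimal sketch of the ordering described above (illustrative only, not the 
actual patch; the helper and flag names are made up):

{code}
import java.util.List;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrException.ErrorCode;

class ImmutableCheckSketch {
  // Enforce immutability only when the update would actually change the
  // schema, so documents containing only existing fields still index fine
  // against an immutable ConfigSet.
  static void maybeAddFields(List<String> newFieldNames,
                             boolean configSetIsImmutable) {
    if (newFieldNames.isEmpty()) {
      return; // nothing to add, so no immutability check is needed
    }
    if (configSetIsImmutable) {
      throw new SolrException(ErrorCode.BAD_REQUEST,
          "This ConfigSet is immutable; cannot add fields: " + newFieldNames);
    }
    // ... add the fields and persist the managed schema ...
  }
}
{code}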

> AddSchemaFieldsUpdateProcessorFactory does not check if the ConfigSet is 
> immutable
> --
>
> Key: SOLR-7967
> URL: https://issues.apache.org/jira/browse/SOLR-7967
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, Trunk
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-7967.patch
>
>
> SOLR-7742 introduced Immutable ConfigSets.  There are checks added to 
> SolrConfigHandler and SchemaHandler so that if a user tries to modify the 
> SolrConfig or the Schema via either of these interfaces an error is returned 
> if the ConfigSet is defined to be immutable.
> Updates to the schema made via the AddSchemaFieldsUpdateProcessorFactory are 
> not checked in this way.  I'm not certain this should be considered a bug.  A 
> ConfigSet is defined by \{SolrConfig, Schema, ConfigSetProperties\}.  On one 
> hand, you can argue that you are modifying the Schema, which is part of the 
> ConfigSet, so the immutable check should apply. On the other hand, the 
> SolrConfig (which defines the AddSchema...Factory) defines that it wants the 
> Config to be updated, so if you view the ConfigSet in totality you could 
> argue nothing is really changing. I'd slightly lean towards adding the check, 
> but could go either way.
> Other opinions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 814 - Still Failing

2015-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/814/

1 tests failed.
FAILED:  org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([F1EFABC159D8A9B4:12EBF044F2C36DF3]:0)
at org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307)
at org.apache.lucene.util.fst.FST.addNode(FST.java:792)
at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:126)
at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:215)
at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:311)
at org.apache.lucene.util.fst.Builder.add(Builder.java:417)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.finishTerm(MemoryPostingsFormat.java:257)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.access$500(MemoryPostingsFormat.java:112)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$MemoryFieldsConsumer.write(MemoryPostingsFormat.java:399)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
at 
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4054)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3634)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1915)
at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4698)
at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:689)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4724)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4715)
at 
org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1303)
at 
org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1281)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:144)
at 
org.apache.lucene.index.TestDuelingCodecs.createRandomIndex(TestDuelingCodecs.java:139)
at 
org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals(TestDuelingCodecsAtNight.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)




Build Log:
[...truncated 1677 lines...]
   [junit4] Suite: org.apache.lucene.index.TestDuelingCodecsAtNight
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestDuelingCodecsAtNight -Dtests.method=testBigEquals 
-Dtests.seed=F1EFABC159D8A9B4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=in -Dtests.timezone=Africa/Kigali -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   1220s J1 | TestDuelingCodecsAtNight.testBigEquals <<<
   [junit4]> Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F1EFABC159D8A9B4:12EBF044F2C36DF3]:0)
   [junit4]>at 
org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307)
   [junit4]>at org.apache.lucene.util.fst.FST.addNode(FST.java:792)
   [junit4]>at 
org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:126)
   [junit4]>at 
org.apache.lucene.util.fst.Builder.compileNode(Builder.java:215)
   [junit4]>at 
org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:311)
   [junit4]>at 
org.apache.lucene.util.fst.Builder.add(Builder.java:417)
   [junit4]>at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.finishTerm(MemoryPostingsFormat.java:257)
   [junit4]>at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.access$500(MemoryPostingsFormat.java:112)
   [junit4]>at 

[jira] [Commented] (LUCENE-6826) java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be cast to org.apache.lucene.index.MultiTermsEnum when adding indexes

2015-10-06 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944724#comment-14944724
 ] 

Trejkaz commented on LUCENE-6826:
-

For one of the fields, our test indexes all have the same value, which happens 
to be the value we filter out; the contents of that filtered stream then get 
merged with another field. It might not be too hard to mock up a test case with 
similar behaviour; I will see what I can do tomorrow.


> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum when adding indexes
> --
>
> Key: LUCENE-6826
> URL: https://issues.apache.org/jira/browse/LUCENE-6826
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 5.2.1
>Reporter: Trejkaz
>
> We are using addIndexes and FilterCodecReader tricks as part of index 
> migration.
> Whether FilterCodecReader tricks are required to reproduce this is uncertain, 
> but in any case, when migrating a particular index, I saw this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum
>   at 
> org.apache.lucene.index.MappedMultiFields$MappedMultiTerms.iterator(MappedMultiFields.java:65)
>   at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:426)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
>   at 
> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
>   at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2519)
> {noformat}
> TermsEnum$2 appears to be TermsEnum.EMPTY. The place where it creates it is 
> here:
> MultiTermsEnum#reset:
> {code}
> if (queue.size() == 0) {
>   return TermsEnum.EMPTY;   // <- this is not a MultiTermsEnum
> } else {
>   return this;
> }
> {code}
> A quick hack would be for MappedMultiFields to check for TermsEnum.EMPTY 
> specifically before casting, but there might be some way to avoid the cast 
> entirely and that would obviously be a better idea.
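
The "quick hack" would be a guard along these lines (a sketch only; the 
MappedMultiTermsEnum constructor arguments are assumed, not checked against 
the actual source):

{code}
// In MappedMultiFields$MappedMultiTerms.iterator(): bail out before the
// cast when no sub-reader contributed terms for this field.
@Override
public TermsEnum iterator() throws IOException {
  TermsEnum iterator = in.iterator();
  if (iterator == TermsEnum.EMPTY) {
    return iterator; // nothing to remap, and it is not a MultiTermsEnum
  }
  return new MappedMultiTermsEnum(mergeState, (MultiTermsEnum) iterator);
}
{code}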



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6826) java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be cast to org.apache.lucene.index.MultiTermsEnum when adding indexes

2015-10-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944738#comment-14944738
 ] 

Michael McCandless commented on LUCENE-6826:


Thank you [~trejkaz]!

> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum when adding indexes
> --
>
> Key: LUCENE-6826
> URL: https://issues.apache.org/jira/browse/LUCENE-6826
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 5.2.1
>Reporter: Trejkaz
>
> We are using addIndexes and FilterCodecReader tricks as part of index 
> migration.
> Whether FilterCodecReader tricks are required to reproduce this is uncertain, 
> but in any case, when migrating a particular index, I saw this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum
>   at 
> org.apache.lucene.index.MappedMultiFields$MappedMultiTerms.iterator(MappedMultiFields.java:65)
>   at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:426)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
>   at 
> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
>   at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2519)
> {noformat}
> TermsEnum$2 appears to be TermsEnum.EMPTY. The place where it creates it is 
> here:
> MultiTermsEnum#reset:
> {code}
> if (queue.size() == 0) {
>   return TermsEnum.EMPTY;   // <- this is not a MultiTermsEnum
> } else {
>   return this;
> }
> {code}
> A quick hack would be for MappedMultiFields to check for TermsEnum.EMPTY 
> specifically before casting, but there might be some way to avoid the cast 
> entirely and that would obviously be a better idea.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6826) java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be cast to org.apache.lucene.index.MultiTermsEnum when adding indexes

2015-10-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944708#comment-14944708
 ] 

Michael McCandless commented on LUCENE-6826:


Hmm, no good ... I think we first need a small test case exposing this.

I think it should only happen if you have a {{FilterCodecReader}} that 
filters a field by providing no terms in the {{TermsEnum}}?

I.e. I think Lucene (at least the default codec) would normally not write a 
field if it has 0 terms.
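
A rough, untested sketch of such a reader for a test case (the field name 
"suspect" and the wiring are hypothetical):

{code}
import java.io.IOException;
import java.util.Collection;
import java.util.Iterator;
import org.apache.lucene.codecs.FieldsProducer;
import org.apache.lucene.index.CodecReader;
import org.apache.lucene.index.FilterCodecReader;
import org.apache.lucene.index.FilterLeafReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.Accountable;

// Wraps a CodecReader so one field still appears in the field list but
// exposes an empty TermsEnum -- the shape suspected to trigger the cast
// failure during addIndexes/merge.
class EmptyTermsFieldReader extends FilterCodecReader {
  EmptyTermsFieldReader(CodecReader in) {
    super(in);
  }

  @Override
  public FieldsProducer getPostingsReader() {
    final FieldsProducer postings = super.getPostingsReader();
    return new FieldsProducer() {
      @Override public Iterator<String> iterator() { return postings.iterator(); }
      @Override public int size() { return postings.size(); }
      @Override public void close() throws IOException { postings.close(); }
      @Override public long ramBytesUsed() { return postings.ramBytesUsed(); }
      @Override public Collection<Accountable> getChildResources() { return postings.getChildResources(); }
      @Override public void checkIntegrity() throws IOException { postings.checkIntegrity(); }

      @Override
      public Terms terms(String field) throws IOException {
        Terms terms = postings.terms(field);
        if (terms == null || !"suspect".equals(field)) {
          return terms;
        }
        return new FilterLeafReader.FilterTerms(terms) {
          @Override
          public TermsEnum iterator() throws IOException {
            return TermsEnum.EMPTY; // field exists, but yields no terms
          }
        };
      }
    };
  }
}
{code}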

> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum when adding indexes
> --
>
> Key: LUCENE-6826
> URL: https://issues.apache.org/jira/browse/LUCENE-6826
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 5.2.1
>Reporter: Trejkaz
>
> We are using addIndexes and FilterCodecReader tricks as part of index 
> migration.
> Whether FilterCodecReader tricks are required to reproduce this is uncertain, 
> but in any case, when migrating a particular index, I saw this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
> cast to org.apache.lucene.index.MultiTermsEnum
>   at 
> org.apache.lucene.index.MappedMultiFields$MappedMultiTerms.iterator(MappedMultiFields.java:65)
>   at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:426)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
>   at 
> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
>   at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2519)
> {noformat}
> TermsEnum$2 appears to be TermsEnum.EMPTY. The place where it creates it is 
> here:
> MultiTermsEnum#reset:
> {code}
> if (queue.size() == 0) {
>   return TermsEnum.EMPTY;   // <- this is not a MultiTermsEnum
> } else {
>   return this;
> }
> {code}
> A quick hack would be for MappedMultiFields to check for TermsEnum.EMPTY 
> specifically before casting, but there might be some way to avoid the cast 
> entirely and that would obviously be a better idea.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2727 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2727/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([94D7CD79C9F6C70E:63A423210F1E68E8]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10451 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3592 - Failure

2015-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3592/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node1:{"node_name":"127.0.0.1:56978_","base_url":"http://127.0.0.1:56978","core":"c8n_1x3_lf_shard1_replica3","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "node_name":"127.0.0.1:56978_",
          "base_url":"http://127.0.0.1:56978",
          "core":"c8n_1x3_lf_shard1_replica3",
          "state":"active",
          "leader":"true"},
        "core_node2":{
          "node_name":"127.0.0.1:48738_",
          "base_url":"http://127.0.0.1:48738",
          "core":"c8n_1x3_lf_shard1_replica1",
          "state":"down"},
        "core_node3":{
          "state":"down",
          "base_url":"http://127.0.0.1:58959",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:58959_"}}}},
  "router":{"name":"compositeId"},
  "autoAddReplicas":"false",
  "maxShardsPerNode":"1"}
Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node1:{"node_name":"127.0.0.1:56978_","base_url":"http://127.0.0.1:56978","core":"c8n_1x3_lf_shard1_replica3","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "node_name":"127.0.0.1:56978_",
  "base_url":"http://127.0.0.1:56978;,
  "core":"c8n_1x3_lf_shard1_replica3",
  "state":"active",
  "leader":"true"},
"core_node2":{
  "node_name":"127.0.0.1:48738_",
  "base_url":"http://127.0.0.1:48738;,
  "core":"c8n_1x3_lf_shard1_replica1",
  "state":"down"},
"core_node3":{
  "state":"down",
  "base_url":"http://127.0.0.1:58959;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:58959_",
  "router":{"name":"compositeId"},
  "autoAddReplicas":"false",
  "maxShardsPerNode":"1"}
at 
__randomizedtesting.SeedInfo.seed([DAAF87AE43A9CF6E:52FBB874ED55A296]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:166)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 978 - Still Failing

2015-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/978/

1 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR

Error Message:
Captured an uncaught exception in thread: Thread[id=43659, 
name=coreZkRegister-565-thread-2, state=RUNNABLE, 
group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=43659, name=coreZkRegister-565-thread-2, 
state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([730B792D2F31E8E]:0)
at 
org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:2133)
at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:434)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:197)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:157)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:346)
at 
org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1113)
at org.apache.solr.cloud.ZkController.register(ZkController.java:926)
at org.apache.solr.cloud.ZkController.register(ZkController.java:881)
at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:183)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10272 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest_730B792D2F31E8E-001/init-core-data-001
   [junit4]   2> 550628 INFO  
(SUITE-LeaderInitiatedRecoveryOnShardRestartTest-seed#[730B792D2F31E8E]-worker) 
[] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/dss/s
   [junit4]   2> 550638 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 550655 INFO  (Thread-42667) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 550655 INFO  (Thread-42667) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 550755 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.ZkTestServer start zk server on port:54960
   [junit4]   2> 550756 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 550757 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 550760 INFO  (zkCallback-149-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@142807fc 
name:ZooKeeperConnection Watcher:127.0.0.1:54960 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 550760 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 550761 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 550761 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 550766 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 550767 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[730B792D2F31E8E])
 [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 550769 INFO  (zkCallback-150-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@5ce73cca 
name:ZooKeeperConnection Watcher:127.0.0.1:54960/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 455 - Failure

2015-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/455/

2 tests failed.
FAILED:  
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live 
nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
      "parent":null,
      "range":null,
      "state":"active",
      "replicas":{"core_node1":{
          "base_url":"http://127.0.0.1/solr",
          "node_name":"node1",
          "core":"core1",
          "roles":"",
          "state":"down"}}}},
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "parent":null,
  "range":null,
  "state":"active",
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr;,
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
at 
__randomizedtesting.SeedInfo.seed([835338A5C6F076A4:EB4D3B4924602CEA]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerTest.verifyStatus(OverseerTest.java:601)
at 
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1261)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14435 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14435/
Java: 64bit/jdk1.9.0-ea-b82 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {
  "collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"8000-",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:56270/mx",
            "node_name":"127.0.0.1:56270_mx",
            "state":"active",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:49877/mx",
            "node_name":"127.0.0.1:49877_mx",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:43723/mx",
            "node_name":"127.0.0.1:43723_mx",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "control_collection":{
    "replicationFactor":"1",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60277/mx",
            "node_name":"127.0.0.1:60277_mx",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "c8n_1x2":{
    "replicationFactor":"2",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"c8n_1x2_shard1_replica2",
            "base_url":"http://127.0.0.1:49877/mx",
            "node_name":"127.0.0.1:49877_mx",
            "state":"active"},
          "core_node2":{
            "core":"c8n_1x2_shard1_replica1",
            "base_url":"http://127.0.0.1:60277/mx",
            "node_name":"127.0.0.1:60277_mx",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"},
  "collMinRf_1x3":{
    "replicationFactor":"3",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collMinRf_1x3_shard1_replica1",
            "base_url":"http://127.0.0.1:56270/mx",
            "node_name":"127.0.0.1:56270_mx",
            "state":"active"},
          "core_node2":{
            "core":"collMinRf_1x3_shard1_replica2",
            "base_url":"http://127.0.0.1:49877/mx",
            "node_name":"127.0.0.1:49877_mx",
            "state":"active"},
          "core_node3":{
            "core":"collMinRf_1x3_shard1_replica3",
            "base_url":"http://127.0.0.1:60277/mx",
            "node_name":"127.0.0.1:60277_mx",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:56270/mx;,
"node_name":"127.0.0.1:56270_mx",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:49877/mx;,
"node_name":"127.0.0.1:49877_mx",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:43723/mx;,
"node_name":"127.0.0.1:43723_mx",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:60277/mx;,
"node_name":"127.0.0.1:60277_mx",
"state":"active",
"leader":"true",
"router":{"name":"compositeId"},

[jira] [Updated] (SOLR-7967) AddSchemaFieldsUpdateProcessorFactory does not check if the ConfigSet is immutable

2015-10-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7967:
-
Attachment: SOLR-7967.patch

Patch with some extra null checking to get the more invasive tests to pass.

> AddSchemaFieldsUpdateProcessorFactory does not check if the ConfigSet is 
> immutable
> --
>
> Key: SOLR-7967
> URL: https://issues.apache.org/jira/browse/SOLR-7967
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, Trunk
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-7967.patch, SOLR-7967.patch
>
>
> SOLR-7742 introduced Immutable ConfigSets.  There are checks added to 
> SolrConfigHandler and SchemaHandler so that if a user tries to modify the 
> SolrConfig or the Schema via either of these interfaces an error is returned 
> if the ConfigSet is defined to be immutable.
> Updates to the schema made via the AddSchemaFieldsUpdateProcessorFactory are 
> not checked in this way.  I'm not certain this should be considered a bug.  A 
> ConfigSet is defined by \{SolrConfig, Schema, ConfigSetProperties\}.  On one 
> hand, you can argue that you are modifying the Schema, which is part of the 
> ConfigSet, so the immutable check should apply. On the other hand, the 
> SolrConfig (which defines the AddSchema...Factory) defines that it wants the 
> Config to be updated, so if you view the ConfigSet in totality you could 
> argue nothing is really changing. I'd slightly lean towards adding the check, 
> but could go either way.
> Other opinions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API at /admin/configs

2015-10-06 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946116#comment-14946116
 ] 

Gregory Chanan commented on SOLR-7789:
--

FYI I created a cwiki page on the API here: 
https://cwiki.apache.org/confluence/display/solr/ConfigSets+API
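
For the record, a create call per that page looks roughly like this (names 
are made up; see the wiki page for the authoritative parameters):

{noformat}
http://localhost:8983/solr/admin/configs?action=CREATE&name=myConfigSet&baseConfigSet=sharedConfig&configSetProp.immutable=false
{noformat}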

> Introduce a ConfigSet management API at /admin/configs
> --
>
> Key: SOLR-7789
> URL: https://issues.apache.org/jira/browse/SOLR-7789
> Project: Solr
>  Issue Type: New Feature
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch, 
> SOLR-7789.patch, SOLR-7789.patch
>
>
> SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
> another one, from a collection API call (i.e. one step collection creation).  
> Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
> the loop, we need support for a ConfigSet management API.
> The simplest ConfigSet API could have one operation:
> create a new ConfigSet, based on an existing one, possibly modifying the 
> ConfigSet properties.  Note you need to be able to modify the ConfigSet 
> properties at creation time because otherwise Immutable could not be changed.
> Another logical operation to support is ConfigSet deletion; that may be more 
> complicated to implement than creation because you need to handle the case 
> where a collection is already using the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4861) Simple reflected cross site scripting vulnerability

2015-10-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945976#comment-14945976
 ] 

Shawn Heisey commented on SOLR-4861:


My boss asked me about cross-site vulnerabilities in Solr today.  I remembered 
reading something about some vulnerabilities, so I went looking and found this.

Is this still a problem?

> Simple reflected cross site scripting vulnerability
> ---
>
> Key: SOLR-4861
> URL: https://issues.apache.org/jira/browse/SOLR-4861
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.2, 4.3
> Environment: Requires web ui / Jetty Solr to be exploited.
>Reporter: John Menerick
>  Labels: security
>
> There exists a simple XSS via the 404 Jetty / Solr code.  Within 
> JettySolrRunner.java, line 465, if someone asks for a non-existent page / URL 
> which contains malicious code, the "Can not find" response can be broken out 
> of and the malicious code will be executed in the victim's browser. 
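
The usual fix for this class of bug is to HTML-escape any request-derived 
text before echoing it back; a minimal sketch (not the actual Jetty/Solr 
code):

{code}
// Escape user-controlled text before embedding it in an HTML error page,
// so markup smuggled into the requested URL cannot execute.
static String htmlEscape(String s) {
  StringBuilder out = new StringBuilder(s.length());
  for (int i = 0; i < s.length(); i++) {
    char c = s.charAt(i);
    switch (c) {
      case '<':  out.append("&lt;");   break;
      case '>':  out.append("&gt;");   break;
      case '&':  out.append("&amp;");  break;
      case '"':  out.append("&quot;"); break;
      case '\'': out.append("&#39;");  break;
      default:   out.append(c);
    }
  }
  return out.toString();
}

// e.g. writer.write("Can not find: " + htmlEscape(requestUri));
{code}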



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3419) XSS vulnerability in the json.wrf parameter

2015-10-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945981#comment-14945981
 ] 

Shawn Heisey commented on SOLR-3419:


My boss asked me about cross-site vulnerabilities in Solr today. I remembered 
reading something about some vulnerabilities, so I went looking and found this.

This issue is particularly old and the code in 5.x is likely very different.  
Is this still a problem?


> XSS vulnerability in the json.wrf parameter
> ---
>
> Key: SOLR-3419
> URL: https://issues.apache.org/jira/browse/SOLR-3419
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Affects Versions: 3.5
>Reporter: Prafulla Kiran
>Priority: Minor
> Attachments: SOLR-3419-escape.patch
>
>
> There's no filtering of the wrapper function name passed to the solr search 
> service
> If the name of the wrapper function passed to the solr query service is the 
> following string - 
> %3C!doctype%20html%3E%3Chtml%3E%3Cbody%3E%3Cimg%20src=%22x%22%20onerror=%22alert%281%29%22%3E%3C/body%3E%3C/html%3E
> Solr passes the string back as-is, which results in an XSS attack in browsers 
> like IE-7 that perform MIME-sniffing. In any case, the callback function in 
> a JSONP response should always be sanitized - 
> http://stackoverflow.com/questions/2777021/do-i-need-to-sanitize-the-callback-parameter-from-a-jsonp-call
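
A minimal sketch of the sanitization the description (and the linked answer) 
calls for, restricting the callback to dot-separated JavaScript identifiers 
(illustrative, not Solr's actual code):

{code}
import java.util.regex.Pattern;

final class JsonpCallback {
  // Accept only plain JS identifiers, optionally dot-separated; anything
  // that could carry markup or script is rejected.
  private static final Pattern SAFE =
      Pattern.compile("^[A-Za-z_$][A-Za-z0-9_$]*(\\.[A-Za-z_$][A-Za-z0-9_$]*)*$");

  static String validate(String wrf) {
    if (wrf == null || SAFE.matcher(wrf).matches()) {
      return wrf;
    }
    throw new IllegalArgumentException("Invalid json.wrf callback name");
  }
}
{code}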



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8135) SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection reproducible failure

2015-10-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-8135:
--

 Summary: 
SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection reproducible 
failure
 Key: SOLR-8135
 URL: https://issues.apache.org/jira/browse/SOLR-8135
 Project: Solr
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Hoss Man


No idea what's going on here, noticed it while testing out an unrelated patch 
-- seed reproduces against pristine trunk...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=SolrCloudExampleTest 
-Dtests.method=testLoadDocsIntoGettingStartedCollection 
-Dtests.seed=59EA523FFF6CB60F -Dtests.slow=true -Dtests.locale=es_MX 
-Dtests.timezone=Africa/Porto-Novo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 49.5s | 
SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Delete action failed!
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([59EA523FFF6CB60F:4A896050CE030FA9]:0)
   [junit4]>at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
   [junit4]>at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6830) Upgrade ANTLR to version 4.5.1

2015-10-06 Thread Jack Conradson (JIRA)
Jack Conradson created LUCENE-6830:
--

 Summary: Upgrade ANTLR to version 4.5.1
 Key: LUCENE-6830
 URL: https://issues.apache.org/jira/browse/LUCENE-6830
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Jack Conradson


Simple upgrade to ANTLR 4.5.1 which includes numerous bug fixes:
https://github.com/antlr/antlr4/releases/tag/4.5.1

Note this does not change the grammar itself, only small pieces of the 
generated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6825) Add multidimensional byte[] indexing support to Lucene

2015-10-06 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945944#comment-14945944
 ] 

Nicholas Knize commented on LUCENE-6825:


+1000 for graduating the data structure to core.

{{// Sort all docs once by lat, once by lon:}}  
* I'm assuming lat, lon specific naming will be refactored to more generalized 
naming?
* In an XTree implementation (similar to BKD but with more rigorous split 
criteria) I ran into bias issues when sorting by one dimension at a time (a 
simple for loop like this). This is why the sort is often done on a reduced 
dimensional encoding value (e.g., Hilbert, Morton). This becomes particularly 
important as the tree grows (which I'm guessing happens when BKD segments 
merge?). Maybe open another new issue to investigate simple interleave packing 
and sorting on the packed value? (See the sketch after this comment.)

{{// Find which dim has the largest span so we can split on it:}}
* Maybe refactor this into a {{split}} method?  It would give an opportunity to 
override for investigating improvements based on other split criteria (e.g., 
squareness, area) 
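
For reference, the kind of reduced-dimensional key meant above: a 2D Morton 
(Z-order) code that bit-interleaves two dimensions so a single sort roughly 
preserves spatial locality (an illustrative sketch; real lat/lon values would 
first be mapped to sortable ints):

{code}
// Interleave the bits of x and y into one 64-bit sort key, so one sort
// keeps spatially nearby points together.
static long morton2D(int x, int y) {
  long code = 0;
  for (int i = 0; i < 32; i++) {
    code |= ((long) ((x >>> i) & 1)) << (2 * i);     // even bits from x
    code |= ((long) ((y >>> i) & 1)) << (2 * i + 1); // odd bits from y
  }
  return code;
}
{code}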



> Add multidimensional byte[] indexing support to Lucene
> --
>
> Key: LUCENE-6825
> URL: https://issues.apache.org/jira/browse/LUCENE-6825
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6825.patch
>
>
> I think we should graduate the low-level block KD-tree data structure
> from sandbox into Lucene's core?
> This can be used for very fast 1D range filtering for numerics,
> removing the 8 byte (long/double) limit we have today, so e.g. we
> could efficiently support BigInteger, BigDecimal, IPv6 addresses, etc.
> It can also be used for > 1D use cases, like 2D (lat/lon) and 3D
> (x/y/z with geo3d) geo shape intersection searches.
> The idea here is to add a new part of the Codec API (DimensionalFormat
> maybe?) that can do low-level N-dim point indexing and at runtime
> exposes only an "intersect" method.
> It should give sizable performance gains (smaller index, faster
> searching) over what we have today, and even over what auto-prefix
> with efficient numeric terms would do.
> There are many steps here ... and I think adding this is analogous to
> how we added FSTs, where we first added low level data structure
> support and then gradually cutover the places that benefit from an
> FST.
> So for the first step, I'd like to just add the low-level block
> KD-tree impl into oal.util.bkd, but make a couple improvements over
> what we have now in sandbox:
>   * Use byte[] as the value not int (@rjernst's good idea!)
>   * Generalize it to arbitrary dimensions vs. specialized/forked 1D,
> 2D, 3D cases we have now
> This is already hard enough :)  After that we can build the
> DimensionalFormat on top, then cutover existing specialized block
> KD-trees.  We also need to fix OfflineSorter to use Directory API so
> we don't fill up /tmp when building a block KD-tree.
> A block KD-tree is at heart an inverted data structure, like postings,
> but is also similar to auto-prefix in that it "picks" proper
> N-dimensional "terms" (leaf blocks) to index based on how the specific
> data being indexed is distributed.  I think this is a big part of why
> it's so fast, i.e. in contrast to today where we statically slice up
> the space into the same terms regardless of the data (trie shifting,
> morton codes, geohash, hilbert curves, etc.)
> I'm marking this as trunk only for now... as we iterate we can see if
> it could maybe go back to 5.x...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14434 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14434/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([2430CAB19869276D:83747215F5D234D4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationWithTruncatedTlog(CdcrReplicationHandlerTest.java:121)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-8135) SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection reproducible failure

2015-10-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8135:
---
Attachment: SOLR-8135.failure.log

attaching ant output from testing against...

{noformat}
hossman@tray:~/lucene/dev$ svnversion && svn info | grep URL
1707150
URL: https://svn.apache.org/repos/asf/lucene/dev/trunk
Relative URL: ^/lucene/dev/trunk
{noformat}

with command...

{noformat}
ANT_ARGS="" ant test -Dtestcase=SolrCloudExampleTest 
-Dtests.method=testLoadDocsIntoGettingStartedCollection 
-Dtests.seed=59EA523FFF6CB60F -Dtests.slow=true -Dtests.locale=es_MX 
-Dtests.timezone=Africa/Porto-Novo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1 > SOLR-8135.failure.log
{noformat}

> SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection reproducible 
> failure
> --
>
> Key: SOLR-8135
> URL: https://issues.apache.org/jira/browse/SOLR-8135
> Project: Solr
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Hoss Man
> Attachments: SOLR-8135.failure.log
>
>
> No idea what's going on here, noticed it while testing out an unrelated patch 
> -- seed reproduces against pristine trunk...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=SolrCloudExampleTest 
> -Dtests.method=testLoadDocsIntoGettingStartedCollection 
> -Dtests.seed=59EA523FFF6CB60F -Dtests.slow=true -Dtests.locale=es_MX 
> -Dtests.timezone=Africa/Porto-Novo -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 49.5s | 
> SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Delete action failed!
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([59EA523FFF6CB60F:4A896050CE030FA9]:0)
>[junit4]>  at 
> org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
>[junit4]>  at 
> org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6830) Upgrade ANTLR to version 4.5.1

2015-10-06 Thread Jack Conradson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Conradson updated LUCENE-6830:
---
Attachment: LUCENE-6830.patch

Patch attached.

> Upgrade ANTLR to version 4.5.1
> --
>
> Key: LUCENE-6830
> URL: https://issues.apache.org/jira/browse/LUCENE-6830
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jack Conradson
> Attachments: LUCENE-6830.patch
>
>
> Simple upgrade to ANTLR 4.5.1 which includes numerous bug fixes:
> https://github.com/antlr/antlr4/releases/tag/4.5.1
> Note this does not change the grammar itself, only small pieces of the 
> generated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4388) Admin UI - SolrCloud - expose Collections API

2015-10-06 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-4388:

Attachment: SOLR-4388.patch

This is the first version of the collections UI that is "ready" enough.

It likely has issues, but given that it is new code, I plan to commit it soon.

Please play with it, folks! Follow the link to the new UI when in cloud mode, 
try out the collections tab, and feed back what you like/don't like.

> Admin UI - SolrCloud - expose Collections API
> -
>
> Key: SOLR-4388
> URL: https://issues.apache.org/jira/browse/SOLR-4388
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.1
>Reporter: Shawn Heisey
>Assignee: Upayavira
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-4388.patch, SOLR-4388.patch, collections-1.png, 
> collections-2.png, collections-3.png, collections-4.png
>
>
> The CoreAdmin API is fairly well represented in the UI.  When SolrCloud is 
> enabled, the Collections API for SolrCloud needs similar treatment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14428 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14428/
Java: 64bit/jdk1.9.0-ea-b82 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 9362 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20151006_100558_523.sysout
   [junit4] >>> JVM J0: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fafa6302cce, pid=9162, 
tid=0x240a
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0-b82) (build 
1.9.0-ea-b82)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-ea-b82, mixed 
mode, tiered, parallel gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x80ccce]  
PhaseIdealLoop::build_loop_late_post(Node*)+0x13e
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/hs_err_pid9162.log
   [junit4] 
   [junit4] [error occurred during error reporting , id 0xb]
   [junit4] 
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 976 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J1-20151006_100558_523.sysout
   [junit4] >>> JVM J1: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fdad6670cce, pid=9163, 
tid=0x2416
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0-b82) (build 
1.9.0-ea-b82)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-ea-b82, mixed 
mode, tiered, parallel gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x80ccce]  
PhaseIdealLoop::build_loop_late_post(Node*)+0x13e
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/hs_err_pid9163.log
   [junit4] 
   [junit4] [error occurred during error reporting , id 0xb]
   [junit4] 
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 409 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk1.9.0-ea-b82/bin/java -XX:-UseCompressedOops 
-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=307803D544649BDC -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.0.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=6.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

[jira] [Commented] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944964#comment-14944964
 ] 

ASF subversion and git services commented on LUCENE-6827:
-

Commit 1707040 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1707040 ]

LUCENE-6827: Use explicit capacity ArrayList instead of a LinkedList in 
MultiFieldQueryNodeProcessor

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945039#comment-14945039
 ] 

Alexandre Rafalovitch commented on SOLR-8131:
-

What about all the embedded documentation in the examples that disappears on 
the first run with managed schema, including all the commented-out sections and 
"this is default" sections?

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6827.
-
Resolution: Fixed

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944967#comment-14944967
 ] 

ASF subversion and git services commented on LUCENE-6827:
-

Commit 1707041 from [~dawidweiss] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1707041 ]

LUCENE-6827: Use explicit capacity ArrayList instead of a LinkedList in 
MultiFieldQueryNodeProcessor

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6827:

Fix Version/s: 5.4

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945013#comment-14945013
 ] 

Shalin Shekhar Mangar commented on SOLR-8131:
-

bq. Just to clarify what we'll have is a `managed-schema` file and no 
`schema.xml` file in the default configs right?

Yeah, I think the default is to rename any existing schema.xml file to 
schema.xml.bak and afterwards use 'managed-schema' as the generated schema file 
name.
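
For reference, this is roughly the solrconfig.xml snippet that turns it on 
(what the data_driven config ships with today; exact defaults quoted from 
memory):

{noformat}
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
{noformat}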

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6817.
-
Resolution: Fixed

Thanks Ahmet!

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6817.patch, LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944972#comment-14944972
 ] 

ASF subversion and git services commented on LUCENE-6817:
-

Commit 1707044 from [~dawidweiss] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1707044 ]

LUCENE-6817: ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop 
in toString().

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6817.patch, LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8131:
---

 Summary: Make ManagedIndexSchemaFactory as the default in Solr
 Key: SOLR-8131
 URL: https://issues.apache.org/jira/browse/SOLR-8131
 Project: Solr
  Issue Type: Wish
  Components: Data-driven Schema, Schema and Analysis
Reporter: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4


The techproducts and other examples shipped with Solr all use the 
ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
schema. It'd be nice to be able to support both read/write schema APIs without 
needing to enable data-driven or schema-less mode.

I propose to change all 5.x examples to explicitly use 
ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default in 
trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945064#comment-14945064
 ] 

Alexandre Rafalovitch commented on SOLR-8131:
-

I would be all over anything that's self-documenting: API endpoints, analyzers, 
etc. For APIs, something like http://swagger.io/ could help. That would enable 
other newbie-oriented use cases too, e.g. an auto-generated UI for 
https://www.getpostman.com/.

This deserves its own discussion, really.

A page in the ref guide could be a simpler option too, especially if the 
comments are hyperlinked to the specific guide sections. That would give people 
jump-off points from the context of the config file into more detailed 
descriptions.


> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-6827:
---

 Summary: Use explicit capacity ArrayList instead of a LinkedList 
in MultiFieldQueryNodeProcessor
 Key: LUCENE-6827
 URL: https://issues.apache.org/jira/browse/LUCENE-6827
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: Trunk






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 103 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/103/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([DCDCEDEE6F04430C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=15723, name=searcherExecutor-6365-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=15723, name=searcherExecutor-6365-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([DCDCEDEE6F04430C]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=15723, 

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 14139 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14139/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
18 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL: 1) Thread[id=9649, 
name=qtp2006100753-9649, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudClusterSSL] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=9613, 
name=qtp2006100753-9613-selector-ServerConnectorManager@7ed82604/0, 
state=RUNNABLE, group=TGRP-TestMiniSolrCloudClusterSSL] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=9715, 
name=coreContainerWorkExecutor-4102-thread-1, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudClusterSSL] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=9647, 
name=qtp2006100753-9647, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudClusterSSL] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=9654, 
name=org.eclipse.jetty.server.session.HashSessionManager@31819370Timer, 
state=TIMED_WAITING, group=TGRP-TestMiniSolrCloudClusterSSL] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=9695, 
name=jetty-launcher-1143-thread-2-SendThread(127.0.0.1:34062), 

[jira] [Commented] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944911#comment-14944911
 ] 

Dawid Weiss commented on LUCENE-6827:
-

In fact I thought about that too -- if somebody uses LinkedList (or Hashtable 
or a Vector... any of these) then it's probably an ancient artefact and very 
likely a mistake and/or could be replaced with a faster implementation.
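
For example, something like this one-liner (hypothetical names, but the spirit 
of the patch):

{code}
// size is known up front, so an explicit-capacity ArrayList avoids both
// per-node allocation and backing-array resizes:
List<QueryNode> copies = new ArrayList<>(children.size()); // was: new LinkedList<>()
{code}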

You should add these to forbidden APIs, Uwe :D

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7864) timeAllowed causing ClassCastException

2015-10-06 Thread Gianpaolo Lopresti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944968#comment-14944968
 ] 

Gianpaolo Lopresti commented on SOLR-7864:
--

Hi,
in case it could be helpful: while debugging SearchHandler.java at line 275:

{code}
SolrDocumentList r = (SolrDocumentList) rb.rsp.getValues().get("response");
{code}

The "response" object is a ResultContext object, instead of a SolrDocumentList.

This is the debugger representation: 

{code}
"{responseHeader=},response=org.apache.solr.response.ResultContext@163d596,facet_counts={facet_queries=...}}"
 
{code}

The exception message is: "The request took too long to iterate over terms. 
Timeout: timeoutAt: 19882685958928 (System.nanoTime(): 19882734201853), 
TermsEnum=org.apache.lucene.codecs.blocktree.Lucene40SegmentTermsEnum@1c687b1"
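
A minimal sketch (just an illustration, not a committed fix) of what a guard 
around that cast could look like:

{code}
Object obj = rb.rsp.getValues().get("response");
if (obj instanceof SolrDocumentList) {
  SolrDocumentList r = (SolrDocumentList) obj;
  // ... existing post-processing ...
} else {
  // timeAllowed kicked in: the response holds a ResultContext with
  // partial results instead of a SolrDocumentList
}
{code}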

> timeAllowed causing ClassCastException
> --
>
> Key: SOLR-7864
> URL: https://issues.apache.org/jira/browse/SOLR-7864
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Markus Jelsma
> Fix For: Trunk, 5.4
>
>
> If timeAllowed kicks in, following exception is thrown and user gets HTTP 500.
> {code}
> 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter  [  
>  search] – null:java.lang.ClassCastException: 
> org.apache.solr.response.ResultContext cannot be cast to 
> org.apache.solr.common.SolrDocumentList
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:497)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reassigned LUCENE-6817:
---

Assignee: Dawid Weiss

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6817.patch, LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6817:

Fix Version/s: 5.4

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6817.patch, LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8123) Add HdfsChaosMonkeyNothingIsSafeTest test.

2015-10-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8123:
--
Summary: Add HdfsChaosMonkeyNothingIsSafeTest test.  (was: Add 
HdfsChaosMonkeySafeLeaderTest test.)

> Add HdfsChaosMonkeyNothingIsSafeTest test.
> --
>
> Key: SOLR-8123
> URL: https://issues.apache.org/jira/browse/SOLR-8123
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8123.patch
>
>
> So far I have only added an HdfsChaosMonkeySafeLeaderTest - I figured it's 
> the same logic as the standard FS, so just one of the two Chaos tests was 
> good enough coverage. The other HdfsChaos test has proven very valuable in 
> finding failures I can't as easily find with the local FS though, so it makes 
> sense to add this class to help with testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944953#comment-14944953
 ] 

Noble Paul commented on SOLR-8117:
--

I see a rule in the testcase like {{shard:*,cores:<1}}.

This rule will always fail: as soon as I assign one replica to a shard, the 
node's core count becomes 1, which is NOT LESS THAN 1. Did you mean 
{{shard:*,cores:<2}}, which means one replica or fewer?
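
For example (hypothetical collection, but real rule syntax):

{noformat}
# at most one core per node, i.e. at most one replica of any shard:
/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=1&rule=shard:*,cores:<2
{noformat}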

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch, SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with the 
> condition 'cores:<1'.
> It also fails if the current core count is equal to the count in the 
> condition minus 1.
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of the code, an additional verification of all the conditions is 
> done with the incremented core count, and therefore it fails.
> I don't know why this additional verification is needed, and removing it 
> seems to fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945010#comment-14945010
 ] 

Varun Thacker commented on SOLR-8131:
-

+1 

Just to clarify: what we'll have is a `managed-schema` file and no `schema.xml` 
file in the default configs, right?

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945019#comment-14945019
 ] 

Varun Thacker commented on SOLR-8131:
-

bq. Yeah, I think the default is to rename any existing schema.xml file to 
schema.xml.bak and afterwards use 'managed-schema' as the generated schema file 
name.

The current data_driven config doesn't have a schema.bak file.

Also, if we enable it by default in 6.0, is the "mutable" flag still useful?

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14429 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14429/
Java: 64bit/jdk1.9.0-ea-b82 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([2E332FFC829B30F2:89779758EF20234B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplication(CdcrReplicationHandlerTest.java:86)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:519)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6827:

Attachment: LUCENE-6827.patch

Patch. Also piggybacks {{new RuntimeException()}} if clone fails ("should never 
happen" means it probably will at some point -- we shouldn't ignore that 
quietly).
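
Roughly this pattern (from memory, not the exact patch hunk):

{code}
try {
  clone = child.cloneTree();
} catch (CloneNotSupportedException e) {
  // "impossible", so if it ever happens, fail loudly instead of silently:
  throw new RuntimeException(e);
}
{code}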

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6828) Speed up requests for many rows

2015-10-06 Thread Toke Eskildsen (JIRA)
Toke Eskildsen created LUCENE-6828:
--

 Summary: Speed up requests for many rows
 Key: LUCENE-6828
 URL: https://issues.apache.org/jira/browse/LUCENE-6828
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.3, 4.10.4
Reporter: Toke Eskildsen
Priority: Minor


Standard relevance-ranked searches for top-X results use the HitQueue class to 
keep track of the highest scoring documents. The HitQueue is a binary heap of 
ScoreDocs and is pre-filled with sentinel objects upon creation.

Binary heaps of Objects in Java do not scale well: The HitQueue uses 28 
bytes/element and memory access is scattered due to the binary heap algorithm 
and the use of Objects. To make matters worse, the use of sentinel objects 
means that even if only a tiny number of documents match, the full number of 
Objects is still allocated.

As long as the HitQueue is small (< 1000), it performs very well. If top-1M 
results are requested, it performs poorly and leaves 1M ScoreDocs to be garbage 
collected.

An alternative is to replace the ScoreDocs with a single array of packed longs, 
each long holding the score and the document ID. This strategy requires only 8 
bytes/element and is a lot lighter on the GC.

Some preliminary tests have been done and published at 
https://sbdevel.wordpress.com/2015/10/05/speeding-up-core-search/
These indicate that a long[]-backed implementation is at least 3x faster than 
vanilla HitDocs for top-1M requests.

For smaller requests, such as top-10, the packed version also seems 
competitive when the number of matched documents exceeds 1M. This needs to be 
investigated further.

Going forward with this idea requires some refactoring, as Lucene is currently 
hardwired to the abstract PriorityQueue. Before attempting this, it seems 
prudent to discuss whether speeding up large top-X requests has any value. 
Paging seems an obvious contender for requesting large result sets, but I guess 
the two could work in tandem, opening up for efficient large pages.
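
As a rough sketch of the packed representation (my reading of the blog post, 
assuming non-negative scores; not an actual patch):

{code}
long pack(int docID, float score) {
  // positive float bit patterns order like ints, so packed longs order by
  // score first; the docID bits are inverted so that, on score ties, lower
  // docIDs compare as greater, matching HitQueue's tie-breaking
  return (((long) Float.floatToIntBits(score)) << 32) | (~docID & 0xFFFFFFFFL);
}
int docID(long packed)   { return (int) ~packed; }
float score(long packed) { return Float.intBitsToFloat((int) (packed >>> 32)); }
{code}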



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944970#comment-14944970
 ] 

ASF subversion and git services commented on LUCENE-6817:
-

Commit 1707043 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1707043 ]

LUCENE-6817: ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop 
in toString().

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6817.patch, LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945050#comment-14945050
 ] 

Shalin Shekhar Mangar commented on SOLR-8131:
-

bq. The current data_driven config doesn't have a schema.bak file

That's because the data-driven config does not have a schema.xml and starts 
off directly with a managed-schema file. The techproducts and basic_configs 
examples do have a schema.xml (which I wasn't planning on removing), which 
will be renamed to schema.xml.bak.

bq. What about all the embedded documentation in the examples that disappears 
on the first run with managed schema? Including all the commented-out sections 
and "this is default" sections.

Good point, Alexandre. What do you think we should do? Maybe we could create a 
page in the ref guide which has all that information instead. Another option 
(I don't know how feasible it'd be) is to have a describe mode in the /schema 
API which prints helpful documentation about every enabled option/plugin in 
the schema.

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8132) HDFS global block cache should default to true in 6.0.

2015-10-06 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8132:
-

 Summary: HDFS global block cache should default to true in 6.0.
 Key: SOLR-8132
 URL: https://issues.apache.org/jira/browse/SOLR-8132
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 6.0


No more back-compat worry, and not having a global cache is not very pleasant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944898#comment-14944898
 ] 

Adrien Grand commented on LUCENE-6827:
--

+1

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6827) Use explicit capacity ArrayList instead of a LinkedList in MultiFieldQueryNodeProcessor

2015-10-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944905#comment-14944905
 ] 

Uwe Schindler commented on LUCENE-6827:
---

Indeed, we should review all usages of LinkedList throughout Lucene/Solr. It 
is not clear why it was used here, but some places used it in pre-Java 6 times 
to allow fast removal and addition of entries at the beginning (typical 
LIFO/FIFO usage).

Since Java 6, the better data structure for this is java.util.Deque (which 
LinkedList implements), with ArrayDeque as the implementation being much more 
heap- and performance-efficient.
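
As a small sketch of that swap (hypothetical demo code, not taken from 
MultiFieldQueryNodeProcessor):

{code}
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
  public static void main(String[] args) {
    // Same head-of-list LIFO usage as the old LinkedList pattern, but backed
    // by a resizable array instead of per-entry link node objects.
    Deque<String> stack = new ArrayDeque<>();   // was: new LinkedList<>()
    stack.addFirst("first");
    stack.addFirst("second");
    System.out.println(stack.removeFirst());    // prints "second" (LIFO)
  }
}
{code}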

> Use explicit capacity ArrayList instead of a LinkedList in 
> MultiFieldQueryNodeProcessor
> ---
>
> Key: LUCENE-6827
> URL: https://issues.apache.org/jira/browse/LUCENE-6827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6827.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-10-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945005#comment-14945005
 ] 

Noble Paul commented on SOLR-8131:
--



> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-high
> Fix For: Trunk, 5.4
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 14141 - Failure!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14141/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([E61768726015BB64:F5745A1D517A02C2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 453 - Failure

2015-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/453/

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([78923316545DD029:6BF101796532698F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-8129) HdfsChaosMonkeyNothingIsSafeTest failures

2015-10-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8129:
---
Attachment: fail.151005_080319

Here's the smallest log file that I've been able to generate with a large 
number of discrepancies:
{code}
fail.151005_080319:   > Throwable #1: java.lang.AssertionError: shard2 is not 
consistent.  Got 2076 from http://127.0.0.1:43897/collection1 (previous client) 
and got 2103 from http://127.0.0.1:36605/collection1
{code}

This was without deletes (if I managed to configure that correctly).

> HdfsChaosMonkeyNothingIsSafeTest failures
> -
>
> Key: SOLR-8129
> URL: https://issues.apache.org/jira/browse/SOLR-8129
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: fail.151005_080319
>
>
> New HDFS chaos test in SOLR-8123 hits a number of types of failures, 
> including shard inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2781 - Still Failing!

2015-10-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2781/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([39ED6EE583BCE7F6:E1A043B274614256]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:441)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (SOLR-8072) Rebalance leaders feature does not set CloudDescriptor#isLeader to false when bumping leaders.

2015-10-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8072.
---
   Resolution: Fixed
 Assignee: Mark Miller
Fix Version/s: 5.4
   Trunk

> Rebalance leaders feature does not set CloudDescriptor#isLeader to false when 
> bumping leaders.
> --
>
> Key: SOLR-8072
> URL: https://issues.apache.org/jira/browse/SOLR-8072
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8072.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8117:
--
Attachment: (was: SOLR-8117.patch)

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if the current core count is equal to the core limit in the 
> condition minus 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of the code, an additional verification of all the conditions is 
> done with the incremented core count, and therefore it fails.
> I don't know why this additional verification is needed, and removing it 
> seems to fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945144#comment-14945144
 ] 

Ludovic Boutros commented on SOLR-8117:
---

Hmm, I see: the rules should be considered a mandatory state both before and 
after the collection creation.
This type of condition (<1) should be considered invalid. I misunderstood the 
rule configuration.

Thank you, Paul.

I will try to reproduce the other behavior: sometimes a collection creation is 
allowed and sometimes not, with the same cluster and the same rules.

I use these two rules:

rule=shard:*,host:*,replica:<2
rule=shard:*,cores:<2

The last time, I had to retry 3 times to finally create a collection (7 
shards, 2 replicas per shard).

The demo cluster contains 4 hosts and 16 nodes (4 per host), 14 of them empty.

With your explanation, it should never be allowed to create this collection, 
because all nodes contain 2 cores after the collection creation. Or perhaps 
the two rules are not applied the way I think.

In any case, the behavior should always be the same.


> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if the current core count is equal to the core limit in the 
> condition minus 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of the code, an additional verification of all the conditions is 
> done with the incremented core count, and therefore it fails.
> I don't know why this additional verification is needed, and removing it 
> seems to fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org