[jira] [Created] (SOLR-7915) Provide pluggable Velocity context tool facility

2015-08-12 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-7915:
--

 Summary: Provide pluggable Velocity context tool facility
 Key: SOLR-7915
 URL: https://issues.apache.org/jira/browse/SOLR-7915
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Velocity
Affects Versions: 5.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.4


Currently the tools placed in the VelocityResponseWriter's context are 
hard-coded.  It would be very handy to be able to plug in third-party or custom 
tools (a tool can be just about any ol' Java object).

Here's a list of the currently hard-coded tools: 
https://github.com/apache/lucene-solr/blob/trunk/solr/contrib/velocity/src/java/org/apache/solr/response/VelocityResponseWriter.java#L189-L199
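
For illustration only (the registration mechanism below is hypothetical, not a 
committed design): a custom tool can be any plain Java object whose methods a 
template then calls.

{code}
// Hypothetical custom tool: any plain Java object works.
public class UpperTool {
  // A template could call this as $upper.format($doc.title),
  // assuming the writer exposed this object under the key "upper".
  public String format(String s) {
    return s == null ? null : s.toUpperCase(java.util.Locale.ROOT);
  }
}
{code}

The pluggable facility would then only need a way to map a context key (e.g. 
{{upper}}) to such a class, for example via init params in solrconfig.xml.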






[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 6 - Still Failing

2015-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/6/

No tests ran.

Build Log:
[...truncated 53176 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.0-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (698.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.0.tgz...
   [smoker] 65.7 MB in 0.10 sec (677.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.0.zip...
   [smoker] 75.9 MB in 0.12 sec (644.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (24.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.3.0-src.tgz...
   [smoker] 37.0 MB in 0.35 sec (104.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.3.0.tgz...
   [smoker] 128.7 MB in 1.49 sec (86.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.3.0.zip...
   [smoker] 136.3 MB in 1.49 sec (91.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 

[jira] [Created] (SOLR-7916) ExtractingDocumentLoader does not initialize context with Parser.class key and DelegatingParser needs that key.

2015-08-12 Thread JIRA
Germán Cáseres created SOLR-7916:


 Summary: ExtractingDocumentLoader does not initialize context with 
Parser.class key and DelegatingParser needs that key.
 Key: SOLR-7916
 URL: https://issues.apache.org/jira/browse/SOLR-7916
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 5.1
Reporter: Germán Cáseres


Tika's PDFParser works perfectly with Solr except when you need to extract 
metadata from images embedded in a PDF.

When PDFParser finds an embedded image, it tries to execute a DelegatingParser 
over that image. The problem is that DelegatingParser expects the ParseContext 
to contain a Parser.class key; if that key is not present, it falls back to 
EmptyParser and inline image metadata is not extracted.

I tried to extract metadata using standalone Tika with Tesseract OCR and it 
works fine (the text from the PDF and from the OCRed inline images is 
extracted), but when I do the same from Solr, only the text from the PDF is 
extracted.

I've properly configured PDFParser.properties with extractInlineImages set to true.

Also, I tried overriding the PDFParser with a custom one that adds the 
following line:

{code}
context.set(Parser.class, new AutoDetectParser());
{code}

And it worked... but I don't think it's correct to have to modify the Tika 
PDFParser when it works fine outside of Solr.

Maybe the context should instead be initialized properly in the Solr class 
ExtractingDocumentLoader.
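
A minimal sketch of that suggestion (simplified; the variable names here are 
illustrative, not the actual Solr code):

{code}
// Sketch: initialize the ParseContext with a Parser.class entry before
// parsing, so DelegatingParser (used by PDFParser for embedded images)
// resolves a real parser instead of falling back to EmptyParser.
ParseContext context = new ParseContext();
context.set(Parser.class, new AutoDetectParser());
parser.parse(inputStream, handler, metadata, context);
{code}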

Sorry for my English; I hope this information is useful, and please tell me if 
I'm doing something wrong.







[jira] [Commented] (SOLR-7451) "Not enough nodes to handle the request" when inserting data to solrcloud

2015-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694200#comment-14694200
 ] 

Erick Erickson commented on SOLR-7451:
--

Well, Solr tries very hard not to lose updates. Part of the leader election 
process is making sure that the replica that becomes the leader has all the 
possible updates. If a shard can't elect a leader, then it's in a 
non-deterministic state as far as consistency goes. By going in and manually 
adjusting the state of a node to active, you're bypassing all that processing.

It's not _necessarily_ bad, it's just that there are risks.

BTW, it would have been _really_ helpful to tell us you had a custom plugin in 
the first place. Time spent trying to reproduce a problem isn't well spent 
without complete information.

I'll close this ticket.

 "Not enough nodes to handle the request" when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 deployed as a single node with SolrCloud, and created a 
 collection with 1 shard and 2 replicas. When I use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, inserts succeed; and if I then create 
 another replica with the admin API, it still works fine and no longer throws 
 that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more






[jira] [Resolved] (SOLR-7451) "Not enough nodes to handle the request" when inserting data to solrcloud

2015-08-12 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7451.
--
Resolution: Not A Problem

Apparently a problem with a custom plugin.




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694324#comment-14694324
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695616 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695616 ]

LUCENE-6699: fix more names

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
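
As a rough illustration of the single-long idea mentioned above (this is not 
from any patch; 21 bits per dimension, 63 bits total, is one arbitrary split):

{code}
// Hypothetical sketch: quantize lat/lon/z into 21 bits each and pack
// them into one long, accepting the resulting precision loss
// (180 degrees / 2^21 is roughly 1e-4 degrees, i.e. ~10 m at the equator).
static long encode(double lat, double lon, double z) {
  long latBits = (long) (((lat + 90.0)  / 180.0) * 0x1FFFFF); // lat in [-90, 90]
  long lonBits = (long) (((lon + 180.0) / 360.0) * 0x1FFFFF); // lon in [-180, 180]
  long zBits   = (long) (((z + 1.0)     / 2.0)   * 0x1FFFFF); // z normalized to [-1, 1]
  return (latBits << 42) | (lonBits << 21) | zBits;
}
{code}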






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694346#comment-14694346
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695619 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695619 ]

LUCENE-6699: fix more nocommits




[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

This patch adds the basic architecture for handling degenerate cases of 
XYZSolids.  Still need to implement the specific cases though.




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694251#comment-14694251
 ] 

Karl Wright commented on LUCENE-6699:
-

Well, as we discussed a while back, I'm going to need to implement some 
degenerate solids in any case, if this all works, plus a factory method that 
picks among them.  That's a better fix, methinks, than changing stuff around.  
I'll get going on that tonight.
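
For illustration only (class names and signature here are hypothetical, not 
the actual patch), such a factory might look like:

{code}
// Hypothetical sketch: pick a concrete solid implementation based on
// which dimensions are degenerate (min == max within a tolerance).
private static final double EPSILON = 1e-12;

public static XYZSolid makeXYZSolid(PlanetModel planetModel,
    double minX, double maxX, double minY, double maxY,
    double minZ, double maxZ) {
  boolean xDeg = Math.abs(maxX - minX) < EPSILON;
  boolean yDeg = Math.abs(maxY - minY) < EPSILON;
  boolean zDeg = Math.abs(maxZ - minZ) < EPSILON;
  if (!xDeg && !yDeg && !zDeg) {
    return new StandardXYZSolid(planetModel, minX, maxX, minY, maxY, minZ, maxZ);
  }
  // ... one branch per degenerate combination (plane, line, point) ...
  throw new IllegalArgumentException("degenerate case not yet implemented");
}
{code}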





[jira] [Commented] (SOLR-7918) speed up term-DocSet production

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694391#comment-14694391
 ] 

ASF subversion and git services commented on SOLR-7918:
---

Commit 1695623 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1695623 ]

SOLR-7918: optimize term-DocSet generation




[jira] [Created] (SOLR-7918) speed up term-DocSet production

2015-08-12 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7918:
--

 Summary: speed up term-DocSet production
 Key: SOLR-7918
 URL: https://issues.apache.org/jira/browse/SOLR-7918
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley


We can use index statistics to figure out beforehand what type of doc set 
(sorted int or bitset) we should create.  This should use less memory than the 
current approach as well as increase performance.
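
An illustrative heuristic (not necessarily what the committed patch does): a 
term's docFreq is known from the term dictionary before its postings are 
iterated, so the representation can be chosen up front rather than built 
pessimistically and converted later.

{code}
// Sketch: a sorted int[] costs ~4*docFreq bytes while a bitset costs
// maxDoc/8 bytes, so pick whichever is smaller before reading postings.
DocSet createDocSet(PostingsEnum postings, int docFreq, int maxDoc) throws IOException {
  if (docFreq * 4L < maxDoc / 8) {
    int[] docs = new int[docFreq];
    for (int i = 0; i < docFreq; i++) {
      docs[i] = postings.nextDoc();  // postings arrive in sorted order
    }
    return new SortedIntDocSet(docs);
  } else {
    FixedBitSet bits = new FixedBitSet(maxDoc);
    for (int doc = postings.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = postings.nextDoc()) {
      bits.set(doc);
    }
    return new BitDocSet(bits);
  }
}
{code}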






[jira] [Commented] (SOLR-7918) speed up term-DocSet production

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694417#comment-14694417
 ] 

ASF subversion and git services commented on SOLR-7918:
---

Commit 1695626 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695626 ]

SOLR-7918: optimize term-DocSet generation




[jira] [Resolved] (SOLR-7918) speed up term-DocSet production

2015-08-12 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-7918.

   Resolution: Fixed
Fix Version/s: 5.4




[jira] [Updated] (SOLR-7918) speed up term-DocSet production

2015-08-12 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7918:
---
Attachment: SOLR-7918.patch

Patch attached.  This also introduces a DocSetProducer interface (ported from 
Heliosearch) to form a basis for future optimizations.

The actual set building was moved out of SolrIndexSearcher into DocSetUtil to 
avoid bloating that class further.
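
The interface presumably looks something like this (a guess at its shape, not 
the actual patch):

{code}
// Hypothetical sketch: anything that knows how to produce a DocSet
// for a searcher, letting callers short-circuit generic collection.
public interface DocSetProducer {
  DocSet createDocSet(SolrIndexSearcher searcher) throws IOException;
}
{code}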

Performance improvements were quite good. On the low end were large SortedInt 
sets (only a 20% improvement), but large sets saw a 70% improvement and very 
small sets saw over a 120% improvement.  Complete request+response time was 
measured from the client, so the speedups were actually even greater.





[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694253#comment-14694253
 ] 

Karl Wright commented on LUCENE-6699:
-

Ok -- let me sync up and see which new feature should have what priority...





[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b24) - Build # 13833 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13833/
Java: 32bit/jdk1.8.0_60-ea-b24 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([C35243D1951C71D7:6416FB75F8A7626E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationWithTruncatedTlog(CdcrReplicationHandlerTest.java:121)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-12 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: SOLR-6760.patch

Test is no longer flaky

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all
 * take the head, return it, and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + 
 sort thing again.
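
A minimal sketch of the proposed consume loop (simplified; real code must also 
handle watches, ordering guarantees, and connection loss, and process() below 
is a hypothetical handler):

{code}
// Bulk-read the queue once, sort once, then cheaply re-check each item
// with exists() instead of re-fetching and re-sorting the whole directory.
List<String> items = zk.getChildren(queuePath, false);
Collections.sort(items);
for (String item : items) {
  String path = queuePath + "/" + item;
  if (zk.exists(path, false) != null) {   // still present?
    byte[] data = zk.getData(path, false, null);
    process(data);                        // hypothetical handler
    zk.delete(path, -1);
  }
}
{code}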






Re: 5.3 release

2015-08-12 Thread Yonik Seeley
Figured it out and downgraded to a trivial bug.
-Yonik

On Wed, Aug 12, 2015 at 1:57 PM, Yonik Seeley ysee...@gmail.com wrote:
 I've set this as a blocker until we know more about the actual impact.
 https://issues.apache.org/jira/browse/SOLR-7917

 -Yonik


 On Tue, Aug 11, 2015 at 12:49 PM, Noble Paul noble.p...@gmail.com wrote:
 I'm done with the blockers.
 Planning to cut an RC soon.
 --Noble




[jira] [Commented] (SOLR-7451) "Not enough nodes to handle the request" when inserting data to solrcloud

2015-08-12 Thread Guido (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694234#comment-14694234
 ] 

Guido commented on SOLR-7451:
-

Thanks, and I'm really sorry I did not originally mention the presence of a 
custom plugin; I did not realize it was related to the problem, since I had 
issues with custom plugins in the past but never experienced this particular 
one. I am not sure the custom plugin is the root cause for the other 3 users 
who experienced this issue as well, but thank you for your prompt help and 
support.




[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13835 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13835/
Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseConcMarkSweepGC 
-Djava.locale.providers=JRE,SPI

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Captured an uncaught exception in thread: Thread[id=8225, 
name=RecoveryThread-source_collection_shard1_replica2, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8225, 
name=RecoveryThread-source_collection_shard1_replica2, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]
Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
at __randomizedtesting.SeedInfo.seed([3603182CD29D1BC6]:0)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:234)
Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_3603182CD29D1BC6-001/jetty-001/cores/source_collection_shard1_replica2/data/tlog/tlog.007.1509348178277695488
 (No such file or directory)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244)
at 
org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173)
at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:526)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
Caused by: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_3603182CD29D1BC6-001/jetty-001/cores/source_collection_shard1_replica2/data/tlog/tlog.007.1509348178277695488
 (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:327)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236)
... 7 more




Build Log:
[...truncated 11096 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest
   [junit4]   2 Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_3603182CD29D1BC6-001/init-core-data-001
   [junit4]   2 1080964 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[3603182CD29D1BC6]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2 1080964 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[3603182CD29D1BC6]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2 1080966 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 1080966 INFO  (Thread-3396) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 1080966 INFO  (Thread-3396) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2 1081066 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.ZkTestServer start zk server on port:51154
   [junit4]   2 1081066 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 1081067 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 1081069 INFO  (zkCallback-932-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@190d075 name:ZooKeeperConnection 
Watcher:127.0.0.1:51154 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 1081069 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 1081069 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 1081069 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 1081071 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[3603182CD29D1BC6]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   

Re: [VOTE] 5.3.0 RC1

2015-08-12 Thread Ishan Chattopadhyaya
Maybe it's just my setup, but I ran the smoke tester twice and it failed on two
different tests.
Attached are the logs from the latest failure. Will try a few more runs today.

On Thu, Aug 13, 2015 at 2:11 AM, Timothy Potter thelabd...@gmail.com
wrote:

 +1 SUCCESS! [0:52:42.295681]

 Thanks Noble.

 On Wed, Aug 12, 2015 at 1:14 PM, Noble Paul noble.p...@gmail.com wrote:
  Please vote for the first release candidate for Lucene/Solr 5.3.0
 
  The artifacts can be downloaded from:
 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.0-RC1-rev1695567/
 
  You can run the smoke tester directly with this command:
  python3 -u dev-tools/scripts/smokeTestRelease.py
 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.0-RC1-rev1695567/
 
  --
  -
  Noble Paul
 
 





test.log.xz
Description: application/xz


[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13831 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13831/
Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseG1GC -Djava.locale.providers=JRE,SPI

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionStateFormat2Test.test

Error Message:
Error from server at http://127.0.0.1:57411: Could not find collection : 
myExternColl

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:57411: Could not find collection : myExternColl
at 
__randomizedtesting.SeedInfo.seed([2DD0605C86CC1379:A5845F8628307E81]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:857)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:800)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.testZkNodeLocation(CollectionStateFormat2Test.java:84)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.test(CollectionStateFormat2Test.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 250 - Still Failing

2015-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/250/

No tests ran.

Build Log:
[...truncated 52994 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 28.1 MB in 0.04 sec (723.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 64.9 MB in 0.09 sec (725.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 75.2 MB in 0.10 sec (722.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5831 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5831 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (28.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.0.0-src.tgz...
   [smoker] 36.7 MB in 0.48 sec (76.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.tgz...
   [smoker] 130.5 MB in 1.28 sec (102.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.zip...
   [smoker] 138.6 MB in 1.85 sec (74.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8
   [smoker]   startup done
   [smoker] 
   [smoker] Setup new core instance directory:
   [smoker] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8/server/solr/techproducts
   [smoker] 
   [smoker] Creating new core 'techproducts' using command:
   [smoker] 
http://localhost:8983/solr/admin/cores?action=CREATE&name=techproducts&instanceDir=techproducts
   [smoker] 
   [smoker] {
   [smoker]   "responseHeader":{
   [smoker] "status":0,
   [smoker] "QTime":1825},
   

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_51) - Build # 5014 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5014/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.lucene.index.TestDocumentsWriterStallControl.testSimpleStall

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F570A1294577A8BA]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestDocumentsWriterStallControl

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F570A1294577A8BA]:0)




Build Log:
[...truncated 1966 lines...]
   [junit4] Suite: org.apache.lucene.index.TestDocumentsWriterStallControl
   [junit4]   2 Aug 13, 2015 3:47:19 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2 WARNING: Suite execution timed out: 
org.apache.lucene.index.TestDocumentsWriterStallControl
   [junit4]   21) Thread[id=1, name=main, state=WAITING, group=main]
   [junit4]   2 at java.lang.Object.wait(Native Method)
   [junit4]   2 at java.lang.Thread.join(Thread.java:1245)
   [junit4]   2 at java.lang.Thread.join(Thread.java:1319)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
   [junit4]   2 at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
   [junit4]   2 at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
   [junit4]   2 at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
   [junit4]   22) Thread[id=10, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2 at java.lang.Thread.sleep(Native Method)
   [junit4]   2 at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:47)
   [junit4]   23) Thread[id=1298, 
name=TEST-TestDocumentsWriterStallControl.testSimpleStall-seed#[F570A1294577A8BA],
 state=TIMED_WAITING, group=TGRP-TestDocumentsWriterStallControl]
   [junit4]   2 at java.lang.Thread.sleep(Native Method)
   [junit4]   2 at 
org.apache.lucene.index.TestDocumentsWriterStallControl.awaitState(TestDocumentsWriterStallControl.java:351)
   [junit4]   2 at 
org.apache.lucene.index.TestDocumentsWriterStallControl.testSimpleStall(TestDocumentsWriterStallControl.java:49)
   [junit4]   2 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2 at java.lang.reflect.Method.invoke(Method.java:497)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
   [junit4]   2 at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   [junit4]   2 at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   [junit4]   2 at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   [junit4]   2 at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   [junit4]   2 at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
   [junit4]   2 at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
   [junit4]   2 at 

[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

Updated patch, including degenerate (but untested) classes for single-dimension 
degeneracy.  Still four additional degeneracy classes to implement.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
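
 (A rough editorial sketch of the "single long" idea above -- not geo3d code; it 
 assumes unit-sphere x/y/z in [-1,1], which is close to but not exactly geo3d's 
 planet model bounds. Quantizing each dimension to 21 bits gives 63 bits total 
 and roughly 1e-6 resolution per dimension:)
 {code}
 class PackedXYZ {
   static final int BITS = 21;
   static final double SCALE = ((1 << BITS) - 1) / 2.0; // maps [-1,1] onto [0, 2^21-1]

   // Quantize each coordinate to 21 bits and pack x|y|z into one long.
   static long encode(double x, double y, double z) {
     long xe = Math.round((x + 1.0) * SCALE);
     long ye = Math.round((y + 1.0) * SCALE);
     long ze = Math.round((z + 1.0) * SCALE);
     return (xe << (2 * BITS)) | (ye << BITS) | ze;
   }

   // dim: 0 = x, 1 = y, 2 = z; inverse of encode, up to the quantization error.
   static double decode(long packed, int dim) {
     long mask = (1L << BITS) - 1;
     long v = (packed >>> ((2 - dim) * BITS)) & mask;
     return v / SCALE - 1.0;
   }
 }
 {code}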



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7918) speed up term->DocSet production

2015-08-12 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694643#comment-14694643
 ] 

David Smiley commented on SOLR-7918:


This is really cool, Yonik!  I looked over the patch.  I have some feedback:
* Was there really any benefit to initializing the FixedBitSet manually versus 
simply creating it and calling set()?  If not, it's clearer to simply use 
the methods on FBS.
* I saw the size threshold numerous times --- {{maxDoc >> 6 + 5}}.  Could this 
go into a utility method to not repeat yourself?
* The private method createDocSetByIterator appears unused.  What's the story 
there?
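
(Editorial illustration, not from the attached patch: a minimal sketch of the kind 
of upfront decision being discussed, where the method name and the int[] stand-in 
for SortedIntDocSet are hypothetical.)

{code}
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

class DocSetSketch {
  static Object docSetForTerm(DocIdSetIterator postings, int docFreq, int maxDoc)
      throws IOException {
    final int threshold = (maxDoc >> 6) + 5;  // ~ the long[] cost of a bitset
    if (docFreq <= threshold) {
      // Rare term: a sorted int array (stand-in for SortedIntDocSet) is smaller,
      // and docFreq tells us the exact size up front.
      int[] docs = new int[docFreq];
      int i = 0;
      for (int d = postings.nextDoc(); d != DocIdSetIterator.NO_MORE_DOCS; d = postings.nextDoc()) {
        docs[i++] = d;  // postings arrive in increasing doc order
      }
      return docs;
    }
    // Dense term: a FixedBitSet (stand-in for BitDocSet) stays at maxDoc/8 bytes.
    FixedBitSet bits = new FixedBitSet(maxDoc);
    for (int d = postings.nextDoc(); d != DocIdSetIterator.NO_MORE_DOCS; d = postings.nextDoc()) {
      bits.set(d);
    }
    return bits;
  }
}
{code}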

 speed up term->DocSet production
 

 Key: SOLR-7918
 URL: https://issues.apache.org/jira/browse/SOLR-7918
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley
 Fix For: 5.4

 Attachments: SOLR-7918.patch


 We can use index statistics to figure out beforehand what type of doc set 
 (sorted int or bitset) we should create.  This should use less memory than 
 the current approach as well as increase performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) "Not enough nodes to handle the request" when inserting data to solrcloud

2015-08-12 Thread laigood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694653#comment-14694653
 ] 

laigood commented on SOLR-7451:
---

Thanks all. When I removed my custom analyzer and tried again, it worked fine! 
Just curious why a custom analyzer can cause this.

 "Not enough nodes to handle the request" when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 deployed as a single SolrCloud node. When I create a collection 
 with 1 shard and 2 replicas and use SolrJ to insert data, it throws "Not 
 enough nodes to handle the request". But if I create the collection with 1 shard 
 and 1 replica, the insert succeeds; also, if I then create another replica with 
 the admin API, it still works fine and no longer throws that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2618 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2618/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([E19918C14BCC72B5:F2FA2AAE7AA3CB13]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-12 Thread Zack Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694699#comment-14694699
 ] 

Zack Liang commented on SOLR-7795:
--

Hi [~tomasflobbe], the pull request has been updated.
The range facet response is integrated, and the interval facet now uses 
intervals instead of counts.
The tests in TestIntervalFaceting.java and QueryResponseTest.java have been 
modified to verify this.

In addition, I added a test case, testMultipleRangeFacetsResponse, which reads 
sampleMultipleRangeFacetsResponse.xml and checks whether a response like 
your example can be parsed properly.

Please let me know your feedback. Thanks!

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: 5.3, Trunk


 Now that range faceting supports a filter and a dv method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make it so that interval faceting is just a different way of 
 specifying ranges in range faceting, allowing users to indicate specific 
 ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would also come in the response, inside the 
 range_facets section. I'm not sure if it's better to include the ranges in 
 the counts section, or in a different section (intervals? sets? buckets?). 
 I'm open to suggestions. 
 {code}
 "facet_ranges":{
   "price":{
     "counts":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 or…
 {code}
 "facet_ranges":{
   "price":{
     "intervals":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range queries. 
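 (For illustration, a hypothetical request under the proposed syntax, mixing the 
 existing computed-range parameters with explicit sets on the same field; only 
 the {{facet.range.set}} parameter is new here:)
 {noformat}
 facet.range=price
 f.price.facet.range.start=0
 f.price.facet.range.end=100
 f.price.facet.range.gap=10
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}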



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-12 Thread Zack Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zack Liang updated SOLR-7795:
-
Comment: was deleted

(was: Hi [~tomasflobbe], the pull request has been updated.
The range facet response is integrated, and the interval facet now uses 
intervals instead of counts.
The tests in TestIntervalFaceting.java and QueryResponseTest.java have been 
modified to verify this.

In addition, I added a test case, testMultipleRangeFacetsResponse, which reads 
sampleMultipleRangeFacetsResponse.xml and checks whether a response like 
your example can be parsed properly.

Please let me know your feedback. Thanks!)

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: 5.3, Trunk


 Now that range faceting supports a filter and a dv method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make it so that interval faceting is just a different way of 
 specifying ranges in range faceting, allowing users to indicate specific 
 ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would also come in the response, inside the 
 range_facets section. I'm not sure if it's better to include the ranges in 
 the counts section, or in a different section (intervals? sets? buckets?). 
 I'm open to suggestions. 
 {code}
 "facet_ranges":{
   "price":{
     "counts":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 or…
 {code}
 "facet_ranges":{
   "price":{
     "intervals":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-12 Thread Zack Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694698#comment-14694698
 ] 

Zack Liang commented on SOLR-7795:
--

Hi [~tomasflobbe], the pull request has been updated.
The range facet response is integrated, and the interval facet now uses 
intervals instead of counts.
The tests in TestIntervalFaceting.java and QueryResponseTest.java have been 
modified to verify this.

In addition, I added a test case, testMultipleRangeFacetsResponse, which reads 
sampleMultipleRangeFacetsResponse.xml and checks whether a response like 
your example can be parsed properly.

Please let me know your feedback. Thanks!

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: 5.3, Trunk


 Now that range faceting supports a filter and a dv method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make it so that interval faceting is just a different way of 
 specifying ranges in range faceting, allowing users to indicate specific 
 ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would also come in the response, inside the 
 range_facets section. I'm not sure if it's better to include the ranges in 
 the counts section, or in a different section (intervals? sets? buckets?). 
 I'm open to suggestions. 
 {code}
 "facet_ranges":{
   "price":{
     "counts":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 or…
 {code}
 "facet_ranges":{
   "price":{
     "intervals":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-7914) Improve bulk doc update

2015-08-12 Thread Kwan-I Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kwan-I Lee reopened SOLR-7914:
--

 Improve bulk doc update
 ---

 Key: SOLR-7914
 URL: https://issues.apache.org/jira/browse/SOLR-7914
 Project: Solr
  Issue Type: Improvement
Reporter: Kwan-I Lee
Priority: Minor
 Fix For: 4.10.5

 Attachments: SOLR-7914.patch


 One limitation of Solr index updates: given a doc update batch, if one doc 
 fails, Solr aborts the full batch operation without identifying the 
 problematic doc.
 This task aims to improve Solr's handling logic: the batch update should 
 proceed, skipping only the problematic doc(s), and report the problematic 
 doc ids in the response.
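 (A client-side sketch of the intended "skip and report" semantics -- not the 
 attached patch, which changes Solr's server-side handling; sending one add per 
 document only mirrors the behaviour for illustration:)
 {code}
 import java.util.ArrayList;
 import java.util.List;

 import org.apache.solr.client.solrj.SolrClient;
 import org.apache.solr.common.SolrInputDocument;

 class TolerantAddSketch {
   // Add each doc individually; collect the ids of failing docs instead of
   // aborting the whole batch, and return them so they can be reported.
   static List<String> addAllSkippingBad(SolrClient client, List<SolrInputDocument> batch) {
     List<String> failedIds = new ArrayList<>();
     for (SolrInputDocument doc : batch) {
       try {
         client.add(doc);
       } catch (Exception e) {
         failedIds.add(String.valueOf(doc.getFieldValue("id")));
       }
     }
     return failedIds;
   }
 }
 {code}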



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7639) Bring MLTQParser at par with the MLT Handler w.r.t supported options

2015-08-12 Thread Jens Wille (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693117#comment-14693117
 ] 

Jens Wille commented on SOLR-7639:
--

I'm kinda bummed that it didn't make it. However, thanks for following up.

The latest patch from 21/Jul/15 13:28 does contain the CloudMLTQParser changes; 
I've added it to SOLR-7912.

 Bring MLTQParser at par with the MLT Handler w.r.t supported options
 

 Key: SOLR-7639
 URL: https://issues.apache.org/jira/browse/SOLR-7639
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.3

 Attachments: SOLR-7639-add-boost-and-exclude-current.patch, 
 SOLR-7639-add-boost-and-exclude-current.patch, SOLR-7639.patch, 
 SOLR-7639.patch


 As of now, there are options that the MLT Handler supports which the QParser 
 doesn't. It would be good to have the QParser tap into everything that's 
 supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7914) Improve bulk doc update

2015-08-12 Thread Kwan-I Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kwan-I Lee resolved SOLR-7914.
--
   Resolution: Fixed
Fix Version/s: 4.10.5

 Improve bulk doc update
 ---

 Key: SOLR-7914
 URL: https://issues.apache.org/jira/browse/SOLR-7914
 Project: Solr
  Issue Type: Improvement
Reporter: Kwan-I Lee
Priority: Minor
 Fix For: 4.10.5

 Attachments: SOLR-7914.patch


 One limitation of Solr index updates: given a doc update batch, if one doc 
 fails, Solr aborts the full batch operation without identifying the 
 problematic doc.
 This task aims to improve Solr's handling logic: the batch update should 
 proceed, skipping only the problematic doc(s), and report the problematic 
 doc ids in the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 926 - Still Failing

2015-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/926/

5 tests failed.
REGRESSION:  
org.apache.solr.cloud.ConcurrentDeleteAndCreateCollectionTest.testConcurrentCreateAndDeleteDoesNotFail

Error Message:
concurrent create and delete collection failed: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55888/solr: Could not fully remove collection: 
collection4

Stack Trace:
java.lang.AssertionError: concurrent create and delete collection failed: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55888/solr: Could not fully remove collection: 
collection4
at 
__randomizedtesting.SeedInfo.seed([FB030AAA9367F8FF:3A2A5F10E4BC22EE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.cloud.ConcurrentDeleteAndCreateCollectionTest.testConcurrentCreateAndDeleteDoesNotFail(ConcurrentDeleteAndCreateCollectionTest.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (SOLR-7914) Improve bulk doc update

2015-08-12 Thread Kwan-I Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kwan-I Lee updated SOLR-7914:
-
Attachment: SOLR-7914.patch

 Improve bulk doc update
 ---

 Key: SOLR-7914
 URL: https://issues.apache.org/jira/browse/SOLR-7914
 Project: Solr
  Issue Type: Improvement
Reporter: Kwan-I Lee
Priority: Minor
 Attachments: SOLR-7914.patch


 One limitation of Solr index updates: given a doc update batch, if one doc 
 fails, Solr aborts the full batch operation without identifying the 
 problematic doc.
 This task aims to improve Solr's handling logic: the batch update should 
 proceed, skipping only the problematic doc(s), and report the problematic 
 doc ids in the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-08-12 Thread Jens Wille (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693114#comment-14693114
 ] 

Jens Wille commented on SOLR-7912:
--

I've added the correct patch from SOLR-7639, applied against latest trunk.


 Add support for boost and exclude the queried document id in MoreLikeThis 
 QParser
 -

 Key: SOLR-7912
 URL: https://issues.apache.org/jira/browse/SOLR-7912
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7912.patch, SOLR-7912.patch


 Continuing from SOLR-7639: we need to support boost, and also exclude the input 
 document from the returned doc list.
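 (For reference, the sort of request this would enable -- hypothetical until the 
 patch lands; the {{qf}} parameter comes from the SOLR-7639 work and {{boost}} is 
 what this issue adds:)
 {noformat}
 q={!mlt qf=name,description boost=true}mydocid
 {noformat}
 Here boost=true would boost the interesting terms by their MLT scores, and the 
 queried document itself would be excluded from the returned doc list.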



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-08-12 Thread Jens Wille (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Wille updated SOLR-7912:
-
Attachment: SOLR-7912.patch

 Add support for boost and exclude the queried document id in MoreLikeThis 
 QParser
 -

 Key: SOLR-7912
 URL: https://issues.apache.org/jira/browse/SOLR-7912
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7912.patch, SOLR-7912.patch


 Continuing from SOLR-7639: we need to support boost, and also exclude the input 
 document from the returned doc list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693081#comment-14693081
 ] 

ASF subversion and git services commented on LUCENE-6174:
-

Commit 1695438 from [~dawidweiss] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695438 ]

LUCENE-6174: Improve 'ant eclipse' to select right JRE for building.

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693090#comment-14693090
 ] 

ASF subversion and git services commented on LUCENE-6174:
-

Commit 1695442 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1695442 ]

LUCENE-6174: Improve 'ant eclipse' to select right JRE for building.

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693080#comment-14693080
 ] 

Uwe Schindler commented on LUCENE-6174:
---

+1 to commit

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6174.
-
Resolution: Fixed
  Assignee: Dawid Weiss  (was: Uwe Schindler)

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Dawid Weiss
Priority: Trivial
 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6174:

Fix Version/s: 5.4
   Trunk

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr MLT interestingterms returns different terms than Lucene MoreLikeThis for some documents

2015-08-12 Thread Ali Nazemian
Hi,

I am implementing a SearchComponent for Solr that returns a document's main
keywords using the MoreLikeThis interesting terms. The main part of the
component, which calls mlt.retrieveInterestingTerms with a Lucene
docID, does not work for all documents. For some documents, Solr's
interestingterms returns useful terms as the top tf-idf terms while the
implemented method returns null! For other documents, both results (the Solr
MLT interesting terms and mlt.retrieveInterestingTerms(docId)) are the same!
Would you please help me solve this issue?

public List<String> getKeywords(int docId) throws SyntaxError {
    String[] fields = keywordSourceFields.toArray(new String[keywordSourceFields.size()]);
    List<String> terms;
    mlt.setFieldNames(fields);
    mlt.setAnalyzer(indexSearcher.getSchema().getIndexAnalyzer());
    mlt.setMinTermFreq(minTermFreq);
    mlt.setMinDocFreq(minDocFreq);
    mlt.setMinWordLen(minWordLen);
    mlt.setMaxQueryTerms(maxNumKeywords);
    mlt.setMaxNumTokensParsed(maxTokensParsed);
    try {
      // Builds the MLT "interesting terms" for this doc from its term vectors
      // (or stored field text) and returns the top terms by score.
      terms = Arrays.asList(mlt.retrieveInterestingTerms(docId));
    } catch (IOException e) {
      LOGGER.error(e.getMessage());
      throw new RuntimeException(e);  // keep the cause instead of dropping it
    }
    return terms;
  }

*Note:*

I did define termVectors="true" for all the required fields that I use
for generating interesting terms (the fields array in the method above).
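
As a quick sanity check (a sketch under the assumption that the failing docs
simply lack term vectors for those fields; MoreLikeThis falls back to stored
field text, and yields nothing when neither is available), the term vectors
can be inspected directly:

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;

class TermVectorCheck {
  // Prints, for one document, whether each source field has a term vector.
  static void check(IndexReader reader, int docId, String[] fields) throws IOException {
    for (String field : fields) {
      Terms tv = reader.getTermVector(docId, field);
      System.out.println(field + " term vector present: " + (tv != null));
    }
  }
}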
Best regards.
-- 

A.Nazemian


[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_51) - Build # 13825 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13825/
Java: 64bit/jdk1.8.0_51 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6338, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6338, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([11B0A6E15FB5CB99:99E4993BF149A661]:0)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34938/j_w: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([11B0A6E15FB5CB99]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:857)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:800)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 10427 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest_11B0A6E15FB5CB99-001/init-core-data-001
   [junit4]   2 724545 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[11B0A6E15FB5CB99]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true)
   [junit4]   2 724545 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[11B0A6E15FB5CB99]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /j_w/
   [junit4]   2 724556 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 724556 INFO  (Thread-2270) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 724556 INFO  (Thread-2270) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2 724656 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.ZkTestServer start zk server on port:36516
   [junit4]   2 724656 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 724657 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 724658 INFO  (zkCallback-842-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@7c5ff4df 
name:ZooKeeperConnection Watcher:127.0.0.1:36516 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 724658 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 724658 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 724659 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 724660 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 724660 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 724660 INFO  (zkCallback-843-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@24774d22 
name:ZooKeeperConnection Watcher:127.0.0.1:36516/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 724661 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[11B0A6E15FB5CB99]) [] 
o.a.s.c.c.ConnectionManager 

[jira] [Commented] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693027#comment-14693027
 ] 

Dawid Weiss commented on LUCENE-6174:
-

Yep, this is a generic jvm type selector. Which is nice because then you can 
select which JVM you're using at runtime (for all projects). I'll attach a 
screenshot with the info where it is in Eclipse.

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-12 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6174:

Attachment: capture-2.png

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch, capture-2.png


 Whenever I run "ant eclipse", the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this in via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.3-Linux (64bit/jdk1.8.0_51) - Build # 46 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.3-Linux/46/
Java: 64bit/jdk1.8.0_51 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([CDB6FEFA61B92A66]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10707 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.3-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_CDB6FEFA61B92A66-001/init-core-data-001
   [junit4]   2> 529258 INFO  (SUITE-HttpPartitionTest-seed#[CDB6FEFA61B92A66]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 529260 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 529260 INFO  (Thread-1606) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 529260 INFO  (Thread-1606) [] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 529360 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.ZkTestServer start zk server on port:40056
   [junit4]   2> 529360 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 529361 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 529363 INFO  (zkCallback-434-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@1059768d name:ZooKeeperConnection Watcher:127.0.0.1:40056 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 529363 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 529363 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 529364 INFO  (TEST-HttpPartitionTest.test-seed#[CDB6FEFA61B92A66]) [] o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 529368 INFO  

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693198#comment-14693198
 ] 

Karl Wright commented on LUCENE-6699:
-

bq. Rather combative. Don't confuse my suggestions for keeping things light, 
approachable, and organized, as holding any enmity towards geo3d.

I am not trying to be combative; I'm interested in the same things you are, 
although I seem to look at the world a bit differently perhaps.  My major 
concern is that since January there's been quite a bit of back-and-forth with 
David Smiley and others about various aspects of geo3d organization, structure, 
abstractions, etc., and it seems like we'd be undoing quite a bit of what was 
agreed on *then* to meet your concerns *now*? 

bq. I still prefer it be a part of core/util so that (once again) the 90% geo 
use case can be accomplished with no dependencies other than core. Having it in 
a 3d specific package seems no better than simply moving it to Apache SIS 
(where all EPSG ellipsoids, OGC compliance, etc. are already provided). But 
that's not my call.

Unfortunately, *I* have constraints, in addition to Lucene.  I cannot at the 
moment contribute to Apache SIS without going through a laborious and 
time-consuming company process.  So if/when geo3d leaves Lucene, I won't 
immediately be able to leave with it.

Also, as we've discussed before at some length, geo3d was developed and 
optimized specifically for the search problem.  While that seems like a minor 
thing at first glance, it's actually quite a big deal.  My impression was that 
this was pretty far from the Apache SIS core mission.

bq. This messaging seems to change based on the agenda. Not that it matters 
except for keeping in mind whats best for the lucene project as a whole.

I've got two masters here.  First, it's essential that my company continues to 
be able to use geo3d, even before it is released via lucene.  Remember that 
development is taking place all the time on both sides.  Right now, geo3d is 
reasonably separable, and we've deliberately built the dependency structure to 
maintain that.  That was one of the reasons behind having a separate module.

If/when geo3d is actually pulled into core (and I still don't know whether that 
will definitely happen), then it's a different ballgame, and integration with 
*other* core code will likely take place.  But that hasn't happened yet and may 
never happen.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
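
A minimal sketch of the single-long packing idea from the description, assuming 
21 bits per dimension and a fixed symmetric bound on each coordinate; the bit 
widths, the bound, and the class name are illustrative only, not anything 
decided on this issue:

{code}
// Illustrative only: quantize x/y/z (each assumed to lie in
// [-PLANET_MAX, PLANET_MAX]) to 21 bits apiece and pack all three
// into one 63-bit long suitable for numeric doc values.
public final class PackedXYZ {
  private static final int BITS = 21;
  private static final long MASK = (1L << BITS) - 1;
  private static final double PLANET_MAX = 1.0012; // assumed coordinate bound

  private static long encodeDim(double v) {
    double scaled = (v + PLANET_MAX) / (2 * PLANET_MAX); // map to [0, 1]
    return Math.min(MASK, (long) (scaled * (1L << BITS)));
  }

  private static double decodeDim(long bits) {
    return ((double) bits / (1L << BITS)) * (2 * PLANET_MAX) - PLANET_MAX;
  }

  public static long encode(double x, double y, double z) {
    return (encodeDim(x) << (2 * BITS)) | (encodeDim(y) << BITS) | encodeDim(z);
  }

  public static double[] decode(long packed) {
    return new double[] {
        decodeDim((packed >>> (2 * BITS)) & MASK),
        decodeDim((packed >>> BITS) & MASK),
        decodeDim(packed & MASK)
    };
  }
}
{code}

The precision loss is bounded by one quantum per dimension, i.e. roughly 
2*PLANET_MAX/2^21 in each of x, y, z, which is the kind of acceptable loss the 
description alludes to.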



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13827 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13827/
Java: 64bit/jdk1.9.0-ea-b60 -XX:+UseCompressedOops -XX:+UseParallelGC 
-Djava.locale.providers=JRE,SPI

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([8A466F968FD76E49:2D02D732E26C7DF0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationAfterPeerSync(CdcrReplicationHandlerTest.java:158)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_51) - Build # 13828 - Still Failing!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13828/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at https://127.0.0.1:54005/oehz/dt: Could not fully remove 
collection: halfdeletedcollection2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:54005/oehz/dt: Could not fully remove 
collection: halfdeletedcollection2
at 
__randomizedtesting.SeedInfo.seed([19A39875F969E4BB:91F7A7AF57958943]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:302)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deleteCollectionWithDownNodes(CollectionsAPIDistributedZkTest.java:283)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693845#comment-14693845
 ] 

Mark Miller commented on SOLR-6760:
---

Good reasoning. I'd suggest we come up with a more descriptive modifier than 
"ext", though.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producers + single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + 
 sort thing again (see the sketch below).
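
A rough sketch of that bulk-read-plus-zk.exists idea against the plain 
ZooKeeper client, assuming queue items are children of a fixed directory znode; 
the class name and the simplified error handling are illustrative, not the 
actual patch:

{code}
import java.util.List;
import java.util.TreeSet;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: list the children once, keep them sorted in memory,
// and before handing out each head just confirm it still exists in ZK,
// instead of re-listing and re-sorting the whole directory per item.
class BulkReadQueue {
  private final ZooKeeper zk;
  private final String dir;
  private final TreeSet<String> inMemory = new TreeSet<>();

  BulkReadQueue(ZooKeeper zk, String dir) { this.zk = zk; this.dir = dir; }

  String nextItem() throws KeeperException, InterruptedException {
    if (inMemory.isEmpty()) {
      List<String> children = zk.getChildren(dir, null); // one bulk read
      inMemory.addAll(children);                         // sorted once
    }
    while (!inMemory.isEmpty()) {
      String head = inMemory.pollFirst();
      // cheap check: the item may have been removed since the bulk read
      if (zk.exists(dir + "/" + head, null) != null) {
        return head;
      }
    }
    return null; // drained; the caller re-polls later
  }
}
{code}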



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6741) IPv6 Field Type

2015-08-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-6741:
--

Assignee: Erik Hatcher

 IPv6 Field Type
 ---

 Key: SOLR-6741
 URL: https://issues.apache.org/jira/browse/SOLR-6741
 Project: Solr
  Issue Type: Improvement
Reporter: Lloyd Ramey
Assignee: Erik Hatcher
 Attachments: SOLR-6741.patch


 It would be nice if Solr had a field type which could be used to index IPv6 
 data and supported efficient range queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-12 Thread Guido (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693756#comment-14693756
 ] 

Guido commented on SOLR-7451:
-

Hi, I am experiencing the same issue on Solr 5.2.1, 1 collection with 4 shards.

 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 and deploy one node with SolrCloud, and create a collection 
 with 1 shard and 2 replicas. When I use SolrJ to insert data, it throws "Not 
 enough nodes to handle the request", but if I create the collection with 1 
 shard and 1 replica, it can insert successfully; also, if I then create 
 another replica with the admin API, it still works fine and no longer throws 
 that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more
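
For reference, a minimal SolrJ sketch of the reported scenario, assuming a 
single-node SolrCloud and a collection created with numShards=1 and 
replicationFactor=2; the zkHost, collection name, and field are placeholders:

{code}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NotEnoughNodesRepro {
  public static void main(String[] args) throws Exception {
    // placeholder zkHost for the cluster's ZooKeeper
    CloudSolrClient client = new CloudSolrClient("localhost:9983");
    client.setDefaultCollection("test"); // 1 shard, 2 replicas, 1 node
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    client.add(doc); // reported to throw "Not enough nodes to handle the request"
    client.commit();
    client.close();
  }
}
{code}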



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-12 Thread Guido (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693756#comment-14693756
 ] 

Guido edited comment on SOLR-7451 at 8/12/15 4:26 PM:
--

Hi, I am experiencing the same issue on Solr 5.2.1, 1 collection with 4 shards 
without any replica.


was (Author: gharm):
Hi, I am experiencing the same issue on Solr 5.2.1, 1 collection with 4 shards.

 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 and deploy one node with SolrCloud, and create a collection 
 with 1 shard and 2 replicas. When I use SolrJ to insert data, it throws "Not 
 enough nodes to handle the request", but if I create the collection with 1 
 shard and 1 replica, it can insert successfully; also, if I then create 
 another replica with the admin API, it still works fine and no longer throws 
 that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-12 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693804#comment-14693804
 ] 

Scott Blum commented on SOLR-6760:
--

[~noble.paul] I feel like the API and implementation of DistributedQueue 
represent a pretty clean, cohesive, and general API.  This is evidenced by the 
fact that most of the existing places where we were using DQ just work.

DistributedQueueExt represents what I feel is kind of crap that was 
glommed on to support the collection task queue, specifically.  You have 
methods like containsTaskWithRequestId() that are highly specific to the 
collection task queue; the strange QueueEvent and response-prefix stuff, where I 
don't even understand what it's supposed to do; getTailId(), which peeks at the end 
of the queue with unclear semantics (is it good enough to answer with the end 
of the in-memory queue, or does the caller expect a synchronous read-through 
into ZK?); and a remove method that doesn't operate on the head of the queue.  
In addition to the unclear semantics of some of these, the implementations of 
some of them necessarily break the clean model DQ uses and are in some cases 
FAR less efficient -- containsTaskWithRequestId, for example, has to not only 
fetch the entire list from ZK, it then has to actually read all the data nodes.

Suffice it to say I don't think anything in there is good enough to promote 
into the general purpose DQ.  Maybe the core issue is that the collection work 
queue is fundamentally looking for something more, like a distributed task 
queue.  I think someone should go back and analyze the true needs there and 
figure out if there's something better we can do.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producers + single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7867) implicit sharded, facet grouping problem with multivalued string field starting with digits

2015-08-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693384#comment-14693384
 ] 

Gürkan Vural commented on SOLR-7867:


I can confirm that such a bug exists. Documents at specific positions in the 
index are causing this error. If you filter the group/facet query to return 
only such a document, the error still occurs. For my specific document, start 
and suffix in the readTerm function are computed as 32 and 9 respectively, so 
the read extends to offset 41, past the end of the term.bytes array, which has 
length only 37; hence the ArrayIndexOutOfBoundsException. If you update the 
document with the same values, the problem disappears; I assume this is 
because the document's position in the index changes.

 implicit sharded, facet grouping problem with multivalued string field 
 starting with digits
 ---

 Key: SOLR-7867
 URL: https://issues.apache.org/jira/browse/SOLR-7867
 Project: Solr
  Issue Type: Bug
  Components: faceting, SolrCloud
Affects Versions: 5.2
 Environment: 3.13.0-48-generic #80-Ubuntu SMP x86_64 GNU/Linux
 java version 1.7.0_80
 Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
 Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
Reporter: Umut Erogul
  Labels: docValues, facet, group, sharding
 Attachments: DocValuesException.PNG, ErrorReadingDocValues.PNG


 related parts @ schema.xml:
 {code}<field name="keyword_ss" type="string" indexed="true" stored="true" 
 docValues="true" multiValued="true"/>
 <field name="author_s" type="string" indexed="true" stored="true" 
 docValues="true"/>{code}
 every document has valid author_s and keyword_ss fields;
 we can make successful facet group queries on single node, single collection, 
 solr-4.9.0 server
 {code}
 q: *:* fq: keyword_ss:3m
 facet=true&facet.field=keyword_ss&group=true&group.field=author_s&group.facet=true
 {code}
 when querying on solr-5.2.0 server with implicit sharded environment with:
 {code}<!-- router.field -->
 <field name="shard_name" type="string" indexed="true" stored="true" 
 required="true"/>{code}
 with example shard names; affinity1 affinity2 affinity3 affinity4
 the same query with same documents gets:
 {code}
 ERROR - 2015-08-04 08:15:15.222; [document affinity3 core_node32 
 document_affinity3_replica2] org.apache.solr.common.SolrException; 
 org.apache.solr.common.SolrException: Exception during facet.field: keyword_ss
 at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:632)
 at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:617)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:571)
 at 
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:642)
 ...
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.ArrayIndexOutOfBoundsException
 at 
 org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene50DocValuesProducer.java:1008)
 at 
 org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.next(Lucene50DocValuesProducer.java:1026)
 at 
 org.apache.lucene.search.grouping.term.TermGroupFacetCollector$MV$SegmentResult.nextTerm(TermGroupFacetCollector.java:373)
 at 
 org.apache.lucene.search.grouping.AbstractGroupFacetCollector.mergeSegmentResults(AbstractGroupFacetCollector.java:91)
 at 
 org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:541)
 at 
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:463)
 at 
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:386)
 at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:626)
 ... 33 more
 {code}
 all the problematic queries are caused by strings starting with digits 
 ("3m", "8 saniye", "2 broke girls", "1v1y");
 there are some strings for which the query works, like "24", "90+", "45 dakika"
 we do not observe the problem when querying with 
 -keyword_ss:(0-9)*
 updating the problematic documents (a small subset of keyword_ss:(0-9)*) 
 fixes the query, 
 but we cannot find an easy way to find the problematic documents
 there are around 400m docs, separated across 28 shards; 
 -keyword_ss:(0-9)* matches 97% of documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-08-12 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-7775:
--

Assignee: Mikhail Khludnev

 support SolrCloud collection as fromIndex param in query-time join
 --

 Key: SOLR-7775
 URL: https://issues.apache.org/jira/browse/SOLR-7775
 Project: Solr
  Issue Type: Sub-task
  Components: query parsers
Reporter: Mikhail Khludnev
Assignee: Mikhail Khludnev
 Fix For: 5.3


 it's an allusion to SOLR-4905; it will be addressed right after SOLR-6234



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7914) Improve bulk doc update

2015-08-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693338#comment-14693338
 ] 

Jan Høydahl commented on SOLR-7914:
---

I think this should be closed as a duplicate. Kwan, please engage in the 
discussion in SOLR-445 to arrive at a good bulk update error handling strategy.

 Improve bulk doc update
 ---

 Key: SOLR-7914
 URL: https://issues.apache.org/jira/browse/SOLR-7914
 Project: Solr
  Issue Type: Improvement
Reporter: Kwan-I Lee
Priority: Minor
 Fix For: 4.10.5

 Attachments: SOLR-7914.patch


 One limitation of Solr index updates is that, given a doc update batch, if one 
 doc fails, Solr aborts the whole batch operation without identifying the 
 problematic doc.
 This task aims to improve Solr's handling logic: the batch update should 
 proceed, skipping only the problematic doc(s), and report those problematic 
 doc ids in the response.
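
Until something along these lines lands server-side (see the SOLR-445 
discussion mentioned above), the behavior can be approximated on the client; a 
small SolrJ sketch, assuming every doc carries an id field (the class name and 
structure are made up for illustration):

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

// Illustrative only: try the whole batch first; if it fails, retry per
// document so one bad doc no longer sinks the batch, and collect the ids
// of the docs that were actually rejected.
class TolerantBatchAdder {
  List<Object> addAll(SolrClient client, List<SolrInputDocument> batch) {
    List<Object> failedIds = new ArrayList<>();
    try {
      client.add(batch); // fast path: everything is valid
    } catch (Exception bulkFailure) {
      for (SolrInputDocument doc : batch) { // slow path: isolate offenders
        try {
          client.add(doc);
        } catch (Exception perDocFailure) {
          failedIds.add(doc.getFieldValue("id"));
        }
      }
    }
    return failedIds;
  }
}
{code}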



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7914) Improve bulk doc update

2015-08-12 Thread Kwan-I Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693343#comment-14693343
 ] 

Kwan-I Lee commented on SOLR-7914:
--

Will do. Thanks, Jan.

 Improve bulk doc update
 ---

 Key: SOLR-7914
 URL: https://issues.apache.org/jira/browse/SOLR-7914
 Project: Solr
  Issue Type: Improvement
Reporter: Kwan-I Lee
Priority: Minor
 Fix For: 4.10.5

 Attachments: SOLR-7914.patch


 One limitation of Solr index updates is that, given a doc update batch, if one 
 doc fails, Solr aborts the whole batch operation without identifying the 
 problematic doc.
 This task aims to improve Solr's handling logic: the batch update should 
 proceed, skipping only the problematic doc(s), and report those problematic 
 doc ids in the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7914) Improve bulk doc update

2015-08-12 Thread Kwan-I Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kwan-I Lee closed SOLR-7914.

Resolution: Duplicate

 Improve bulk doc update
 ---

 Key: SOLR-7914
 URL: https://issues.apache.org/jira/browse/SOLR-7914
 Project: Solr
  Issue Type: Improvement
Reporter: Kwan-I Lee
Priority: Minor
 Fix For: 4.10.5

 Attachments: SOLR-7914.patch


 One limitation of Solr index updates is that, given a doc update batch, if one 
 doc fails, Solr aborts the whole batch operation without identifying the 
 problematic doc.
 This task aims to improve Solr's handling logic: the batch update should 
 proceed, skipping only the problematic doc(s), and report those problematic 
 doc ids in the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693438#comment-14693438
 ] 

ASF subversion and git services commented on LUCENE-6732:
-

Commit 1695496 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1695496 ]

LUCENE-6732: Improve javadoc-style license checker to use Apache RAT

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).
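
The speed claim is easy to see in code form. The actual checker is a groovy 
script in the build; the plain-Java sketch below only illustrates the 
read-once-then-apply-all-patterns shape, with made-up rule names:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative only: read each file ONCE into a string, then run every
// validation pattern over that string, reporting which rule failed
// instead of a bare pass/fail.
class SourcePatternChecker {
  private static final Map<String, Pattern> RULES = new LinkedHashMap<>();
  static {
    RULES.put("nocommit", Pattern.compile("nocommit"));
    RULES.put("tab", Pattern.compile("\\t"));
    // a multiline rule, which per-line matching could never catch
    RULES.put("javadoc-style license header",
        Pattern.compile("/\\*\\*.*Licensed to the Apache", Pattern.DOTALL));
  }

  static void check(Path file) throws IOException {
    String text = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
    for (Map.Entry<String, Pattern> rule : RULES.entrySet()) {
      if (rule.getValue().matcher(text).find()) {
        System.err.println(file + ": " + rule.getKey()); // says what is wrong
      }
    }
  }
}
{code}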



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6732:
--
Fix Version/s: (was: 5.3)
   5.4

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693441#comment-14693441
 ] 

ASF subversion and git services commented on LUCENE-6732:
-

Commit 1695499 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695499 ]

Merged revision(s) 1695496 from lucene/dev/trunk:
LUCENE-6732: Improve javadoc-style license checker to use Apache RAT

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6732.
---
Resolution: Fixed

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693415#comment-14693415
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695494 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695494 ]

LUCENE-6699: fix some nocommits

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13606 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13606/
Java: 64bit/jdk1.9.0-ea-b60 -XX:+UseCompressedOops -XX:+UseG1GC 
-Djava.locale.providers=JRE,SPI

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking

Error Message:
Shard a1x2_shard1_replica2 received all 10 requests

Stack Trace:
java.lang.AssertionError: Shard a1x2_shard1_replica2 received all 10 requests
at 
__randomizedtesting.SeedInfo.seed([22A970802826A759:6A952940DC2DB6CF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking(TestRandomRequestDistribution.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6732:
--
Attachment: LUCENE-6732-v2.patch

Patch using Apache RAT to detect if a javadocs comment is a license:
- first it finds all javadocs comments via regex (as before)
- instead of just checking for "Licensed to" inside, it now passes the inner 
match from the previous step to the Apache RAT license checker (roughly as 
sketched below). If that detects a license, it reports this as an error.
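
In outline, something like the sketch below; the javadoc-comment regex is 
standard Java, while looksLikeLicense() is only a stand-in for the actual 
Apache RAT call, whose exact API is not shown in this thread:

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative outline only. JAVADOC grabs each /** ... */ comment; the
// inner text (group 1) is then handed to a license detector. Here that
// detector is a hypothetical stand-in for the Apache RAT check.
class JavadocLicenseScan {
  private static final Pattern JAVADOC =
      Pattern.compile("/\\*\\*(.*?)\\*/", Pattern.DOTALL);

  static boolean hasJavadocLicense(String source) {
    Matcher m = JAVADOC.matcher(source);
    while (m.find()) {
      if (looksLikeLicense(m.group(1))) {
        return true; // a license header opened as javadocs -> report error
      }
    }
    return false;
  }

  // stand-in for the real Apache RAT license detection
  private static boolean looksLikeLicense(String commentBody) {
    return commentBody.contains("Licensed to the Apache Software Foundation");
  }
}
{code}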

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2567 - Still Failing!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2567/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 3238 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/analysis/icu/test/temp/junit4-J1-20150812_103052_453.syserr
   [junit4]  JVM J1: stderr (verbatim) 
   [junit4] java(63458,0x1371e2000) malloc: *** error for object 0x135b71bf0: 
pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4]  JVM J1: EOF 

[...truncated 19 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_51.jdk/Contents/Home/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=E3BBF85EB92C4171 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.4.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/analysis/icu/test/temp
 -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/clover/db
 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=5.4.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=UTF-8 -classpath 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693403#comment-14693403
 ] 

Michael McCandless commented on LUCENE-6699:


I'll address the rename nocommits shortly...

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-6732:
---

I improved the checker. It now detects all licenses inside javadocs comments: 
it uses Apache RAT to do that :-) [which is already loaded].

And I found more offenders!

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6732.patch, LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693487#comment-14693487
 ] 

Michael McCandless commented on LUCENE-6699:


bq. I think I don't like it

OK I removed it and now testBasic passes!

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693500#comment-14693500
 ] 

Michael McCandless commented on LUCENE-6699:


On the module dependencies, I think it's fine if we have some small code dup 
for now across modules.  We will sort this out over time: maybe sandbox depends 
on spatial3d, or vice versa, or we graduate the postings-based and 2D BKD 
implementations from sandbox into spatial3d (and rename it), or move them into 
core/util, or ... something.  I think we shouldn't fret about it at this point: 
things are moving quickly and it's a little too early to figure out where 
things will eventually land.

bq. I've got two masters here.

This is fine.

We all (necessarily: capitalism) have our own sometimes conflicting motives for 
improving Lucene (and other open-source projects), but it works out that when 
you sum up all those motives across all players what emerges is something that 
benefits many, many people.

bq.  moving it to Apache SIS

I think the Apache SIS project should feel free to poach geo3d at any time, but 
...

Selfishly (for Lucene) I think we should also keep it here as we iterate on the 
unique requirements we have for efficient searching.  E.g. here in this issue 
we already see that we need new APIs in geo3d for the BKD integration, maybe 
future issues require more such iterating.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13606 - Failure!

2015-08-12 Thread Shalin Shekhar Mangar
I'll dig.

On Wed, Aug 12, 2015 at 6:06 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13606/
 Java: 64bit/jdk1.9.0-ea-b60 -XX:+UseCompressedOops -XX:+UseG1GC 
 -Djava.locale.providers=JRE,SPI

 1 tests failed.
 FAILED:  
 org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking

 Error Message:
 Shard a1x2_shard1_replica2 received all 10 requests

 Stack Trace:
 java.lang.AssertionError: Shard a1x2_shard1_replica2 received all 10 requests
 at 
 __randomizedtesting.SeedInfo.seed([22A970802826A759:6A952940DC2DB6CF]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at 
 org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking(TestRandomRequestDistribution.java:109)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:502)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
 at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693486#comment-14693486
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695513 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695513 ]

LUCENE-6699: don't use the global min/max

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693482#comment-14693482
 ] 

Michael McCandless commented on LUCENE-6699:


I committed a basic test, but it fails in a fun way:

{noformat}
Time: 0.202
There was 1 failure:
1) testBasic(org.apache.lucene.bkdtree3d.TestGeo3DPointField)
java.lang.IllegalArgumentException: X values in wrong order or identical
at 
__randomizedtesting.SeedInfo.seed([71BA4E421B49E771:DA405357C495615F]:0)
at org.apache.lucene.geo3d.XYZSolid.init(XYZSolid.java:94)
at 
org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1$1.compare(PointInGeo3DShapeQuery.java:124)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:190)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:129)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:113)
at 
org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1.scorer(PointInGeo3DShapeQuery.java:99)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:402)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:413)
at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField.testBasic(TestGeo3DPointField.java:58)
{noformat}

I think the X values are in fact identical ... I indexed a single point into 
the BKD tree, and so minX == maxX and the recursion uses these global min/max 
when recursing ... I think I might just remove the global min/max and instead 
recurse from the full int space.  This was a change I had tried vs the 2D BKD 
tree and I think I don't like it :)
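A toy illustration of that degenerate case, with hypothetical values rather than the actual BKD/XYZSolid code:

{code:java}
// Sketch only: a single indexed point collapses the global bounds, so any
// constructor requiring strictly increasing bounds trips.
public class SinglePointBounds {
  public static void main(String[] args) {
    int[] encodedX = { 42 };                // one point indexed
    int minX = encodedX[0], maxX = encodedX[0];
    if (minX >= maxX) {                     // minX == maxX here
      System.out.println("X values in wrong order or identical");
    }
    // Direction of the fix described above: recurse from the full
    // encodable int space instead of the collapsed global bounds.
    System.out.println("recurse over [" + Integer.MIN_VALUE
        + ", " + Integer.MAX_VALUE + "]");
  }
}
{code}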

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693464#comment-14693464
 ] 

Noble Paul commented on SOLR-6760:
--

[~dragonsinth] Why did you choose to have a {{DistributedQueue}} and 
{{DistributedQueueExt}}? Why not modify the original class?

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read all the items in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all-and-sort 
 again.
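A minimal sketch of that proposed loop ({{queuePath}} and {{process()}} are hypothetical stand-ins; this is not the actual patch):

{code:java}
import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Sketch only: one bulk fetch of the queue children, then a cheap
// zk.exists() check per item instead of re-listing the whole directory.
class BulkDrainSketch {
  void drainOnce(ZooKeeper zk, String queuePath)
      throws KeeperException, InterruptedException {
    List<String> children = zk.getChildren(queuePath, null);
    Collections.sort(children);                  // queue order
    for (String child : children) {
      String path = queuePath + "/" + child;
      if (zk.exists(path, null) == null) {
        continue;                                // entry already consumed
      }
      byte[] data = zk.getData(path, null, null);
      process(data);                             // hypothetical handler
      zk.delete(path, -1);                       // consume the entry
    }
  }

  void process(byte[] data) { /* hypothetical */ }
}
{code}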



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693480#comment-14693480
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695510 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695510 ]

LUCENE-6699: add basic test for the point field / query

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6188) solr.ICUFoldingFilterFactory causes NoClassDefFoundError: o/a/l/a/icu/ICUFoldingFilter

2015-08-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693516#comment-14693516
 ] 

Shawn Heisey commented on SOLR-6188:


Usually when there's a strange problem related to the classloader, it's 
Lucene's ICU analysis jars that show the problem.  Perhaps there's something 
strange going on in the ICU jars and this should be moved to the LUCENE project?

 solr.ICUFoldingFilterFactory causes NoClassDefFoundError: 
 o/a/l/a/icu/ICUFoldingFilter
 --

 Key: SOLR-6188
 URL: https://issues.apache.org/jira/browse/SOLR-6188
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.8.1
Reporter: Ahmet Arslan
  Labels: ICUFoldingFilterFactory
 Fix For: 4.10


 When the fully qualified class name 
 {{org.apache.lucene.analysis.icu.ICUFoldingFilterFactory}} 
 is used in schema.xml, it works. However, as documented in Confluence and 
 the wiki, when {{solr.ICUFoldingFilterFactory}} is used it throws the 
 following exception.
 This is true for both the released 4.8.1 version and trunk r1604168.
 The following type works:
 {code:xml}
 <fieldType name="folded2" class="solr.TextField">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="org.apache.lucene.analysis.icu.ICUFoldingFilterFactory"/>
   </analyzer>
 </fieldType>
 {code}
 This does not:
 {code:xml}
 <fieldType name="folded" class="solr.TextField">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.ICUFoldingFilterFactory"/>
   </analyzer>
 </fieldType>
 {code}
 {noformat}
 257 [main] ERROR org.apache.solr.core.SolrCore  – Error loading 
 core:java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: 
 org/apache/lucene/analysis/icu/ICUFoldingFilter
   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
   at java.util.concurrent.FutureTask.get(FutureTask.java:188)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:301)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:190)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:137)
   at org.eclipse.jetty.servlet.FilterHolder.doStart(FilterHolder.java:119)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:719)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:265)
   at 
 org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1252)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:710)
   at 
 org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:39)
   at 
 org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:186)
   at 
 org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:494)
   at 
 org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:141)
   at 
 org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:145)
   at 
 org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:56)
   at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:609)
   at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:540)
   at org.eclipse.jetty.util.Scanner.scan(Scanner.java:403)
   at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:337)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:121)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:555)
   at 
 org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:230)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.eclipse.jetty.util.component.AggregateLifeCycle.doStart(AggregateLifeCycle.java:81)
   at 
 org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:58)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:96)
   at org.eclipse.jetty.server.Server.doStart(Server.java:280)
   at 
 

[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-12 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693866#comment-14693866
 ] 

Scott Blum commented on SOLR-6760:
--

+1 I was actually hoping someone would suggest a better name!  Any suggestions?

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read all the items in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all-and-sort 
 again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693972#comment-14693972
 ] 

Erick Erickson commented on SOLR-7451:
--

Could you add your cluster state for the collection when it fails? It's in your 
admin UI/cloud/tree either under

/clusterstate.json (older style)
or
/collections/collection_name/state.json (newer 5x versions).



 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 and deploy one node with SolrCloud. When I create a 
 collection with 1 shard and 2 replicas and use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, it can insert successfully; also, if 
 I create another replica with the admin API, it still works fine and no 
 longer throws that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-12 Thread Guido (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694024#comment-14694024
 ] 

Guido commented on SOLR-7451:
-

Hello guys, thanks for your prompt attention on it. Actually, while trying to 
troubleshoot the problem I ended up manually modifying the state.json file of 
my collection. Basically, I changed the 'state' from 'down' to 'active', then I 
restarted Solr. It started without any error on the log file and I was able to 
query my collection again. I am still trying to work on it so if I see any 
other error I will post a new comment here, but so far the manual change on the 
json file seems to solve the problem.

 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 and deploy one node with SolrCloud. When I create a 
 collection with 1 shard and 2 replicas and use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, it can insert successfully; also, if 
 I create another replica with the admin API, it still works fine and no 
 longer throws that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[VOTE] 5.3.0 RC1

2015-08-12 Thread Noble Paul
Please vote for the first release candidate for Lucene/Solr 5.3.0

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.0-RC1-rev1695567/

You can run the smoke tester directly with this command:
python3 -u dev-tools/scripts/smokeTestRelease.py
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.0-RC1-rev1695567/

-- 
-
Noble Paul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693916#comment-14693916
 ] 

ASF subversion and git services commented on LUCENE-6732:
-

Commit 1695586 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1695586 ]

LUCENE-6732: More filetypes to check

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that, I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start 
 of javadocs and not a license comment.
 I then tried to fix validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although 
 it has multiline matching capabilities!!!).
 So I rewrote our checker to run with Groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} runs read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7917) test framework doesn't necessarily fail when it should

2015-08-12 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7917:
---
Priority: Trivial  (was: Blocker)

 test framework doesn't necessarily fail when it should
 --

 Key: SOLR-7917
 URL: https://issues.apache.org/jira/browse/SOLR-7917
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, Trunk
Reporter: Yonik Seeley
Priority: Trivial

 I was trying to track down a tricky bug, but when I added assertions to 
 narrow it down, the test started passing!
 These were assertions that were hit within the context of a search, not 
 assertions within the test class itself, so this is probably an issue with 
 the solr test harness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7917) test framework doesn't necessarily fail when it should

2015-08-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693992#comment-14693992
 ] 

Yonik Seeley commented on SOLR-7917:


OK, whew, I think I found it... false alarm.
The code in question was only being hit during warming. So invalid DocSet 
creation would cause the test to fail, but when I added assertions to catch 
the issue, they caused the warming itself to fail (and we only log exceptions 
thrown during warming, assertion errors included, since warming doesn't 
belong to any user request). The failed warming actually helped the test 
pass (because a bad DocSet was never cached).

I guess one takeaway would be that perhaps we want a way to fail tests if there 
were any failures during warming.
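One hedged sketch of that takeaway (the hook points are hypothetical, not existing test-framework API): count warming failures as they are logged and assert on the count at test teardown.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: record warming failures so a test can fail on them later,
// instead of the failure being swallowed into the logs.
public class WarmingFailureTracker {
  private static final AtomicInteger FAILURES = new AtomicInteger();

  // Assumed to be called from the warming code path's catch block.
  public static void recordWarmingFailure(Throwable t) {
    FAILURES.incrementAndGet();
  }

  // Assumed to be called from an @After method in the test base class.
  public static void assertNoWarmingFailures() {
    int n = FAILURES.getAndSet(0);
    if (n > 0) {
      throw new AssertionError(n + " failure(s) occurred during warming");
    }
  }
}
{code}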

 test framework doesn't necessarily fail when it should
 --

 Key: SOLR-7917
 URL: https://issues.apache.org/jira/browse/SOLR-7917
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, Trunk
Reporter: Yonik Seeley
Priority: Blocker

 I was trying to track down a tricky bug, but when I added assertions to 
 narrow it down, the test started passing!
 These were assertions that were hit within the context of a search, not 
 assertions within the test class itself, so this is probably an issue with 
 the solr test harness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 763 - Still Failing

2015-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/763/

4 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at https://127.0.0.1:56922/qc/d: Error CREATEing SolrCore 
'halfcollection_shard1_replica1': Error reading cluster properties Caused by: 
KeeperErrorCode = Session expired for /clusterprops.json

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:56922/qc/d: Error CREATEing SolrCore 
'halfcollection_shard1_replica1': Error reading cluster properties Caused by: 
KeeperErrorCode = Session expired for /clusterprops.json
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:302)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:430)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693919#comment-14693919
 ] 

ASF subversion and git services commented on LUCENE-6732:
-

Commit 1695587 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695587 ]

Merged revision(s) 1695586 from lucene/dev/trunk:
LUCENE-6732: More filetypes to check

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6732-v2.patch, LUCENE-6732.patch, 
 LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that, I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start 
 of javadocs and not a license comment.
 I then tried to fix validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although 
 it has multiline matching capabilities!!!).
 So I rewrote our checker to run with Groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} runs read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_51) - Build # 5141 - Failure!

2015-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5141/
Java: 32bit/jdk1.8.0_51 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20150812171615545, index.20150812171617068, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20150812171615545, index.20150812171617068, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([ECB0E6792A78767F:371BE6BF2F501FCC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:818)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:785)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7917) test framework doesn't necessarily fail when it should

2015-08-12 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7917:
---
Priority: Blocker  (was: Major)

 test framework doesn't necessarily fail when it should
 --

 Key: SOLR-7917
 URL: https://issues.apache.org/jira/browse/SOLR-7917
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, Trunk
Reporter: Yonik Seeley
Priority: Blocker

 I was trying to track down a tricky bug, but when I added assertions to 
 narrow it down, the test started passing!
 These were assertions that were hit within the context of a search, not 
 assertions within the test class itself, so this is probably an issue with 
 the solr test harness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7917) test framework doesn't necessarily fail when it should

2015-08-12 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7917:
---
Affects Version/s: Trunk
   5.2

 test framework doesn't necessarily fail when it should
 --

 Key: SOLR-7917
 URL: https://issues.apache.org/jira/browse/SOLR-7917
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, Trunk
Reporter: Yonik Seeley
Priority: Blocker

 I was trying to track down a tricky bug, but when I added assertions to 
 narrow it down, the test started passing!
 These were assertions that were hit within the context of a search, not 
 assertions within the test class itself, so this is probably an issue with 
 the solr test harness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-12 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693909#comment-14693909
 ] 

Ishan Chattopadhyaya commented on SOLR-7451:


I tried to reproduce this with 5.2.1,
{noformat}
bin/solr -c
bin/solr create -c demo -shards 1 -replicationFactor 3
curl "http://localhost:8983/solr/demo/update?commit=true" -H 
'Content-type:application/json' -d '[{"id" : "MyTestDocument", "title_t" : 
"This is just a test"}]'
curl "http://localhost:8983/solr/demo/select?q=*:*&wt=json&omitHeader=true"
{noformat}
This worked for me. I'll try to use the Collections API to create the 
collection next to see if I can reproduce.

 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 and deploy one node with SolrCloud. When I create a 
 collection with 1 shard and 2 replicas and use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, it can insert successfully; also, if 
 I create another replica with the admin API, it still works fine and no 
 longer throws that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7917) test framework doesn't necessarily fail when it should

2015-08-12 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7917:
--

 Summary: test framework doesn't necessarily fail when it should
 Key: SOLR-7917
 URL: https://issues.apache.org/jira/browse/SOLR-7917
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


I was trying to track down a tricky bug, but when I added assertions to narrow 
it down, the test started passing!

These were assertions that were hit within the context of a search, not 
assertions within the test class itself, so this is probably an issue with the 
solr test harness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7917) test framework doesn't necessarily fail when it should

2015-08-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693949#comment-14693949
 ] 

Yonik Seeley commented on SOLR-7917:


I opened this issue as soon as I realized there was a test framework issue, 
since it's unclear which tests are compromised.

First step - I'll try to reproduce the lack of failure with a simple test.

 test framework doesn't necessarily fail when it should
 --

 Key: SOLR-7917
 URL: https://issues.apache.org/jira/browse/SOLR-7917
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, Trunk
Reporter: Yonik Seeley
Priority: Blocker

 I was trying to track down a tricky bug, but when I added assertions to 
 narrow it down, the test started passing!
 These were assertions that were hit within the context of a search, not 
 assertions within the test class itself, so this is probably an issue with 
 the solr test harness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693972#comment-14693972
 ] 

Erick Erickson edited comment on SOLR-7451 at 8/12/15 6:28 PM:
---

Could you add your cluster state for the collection when it fails? It's in your 
admin UI/cloud/tree either under

/clusterstate.json (older style)
or
/collections/collection_name/state.json (newer 5x versions).

Oh, and to help Ishan, it would also be good if you added the _exact_ commands 
you used to create the collection in the failing case and the code you use for 
adding docs. Otherwise it's really hard for anyone to have confidence that 
they're doing the same thing you're doing.

Best,
Erick




was (Author: erickerickson):
Could you add your cluster state for the collection when it fails? It's in your 
admin UI/cloud/tree either under

/clusterstate.json (older style)
or
/collections/collection_name/state.json (newer 5x versions).



 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 and deploy one node with SolrCloud. When I create a 
 collection with 1 shard and 2 replicas and use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, it can insert successfully; also, if 
 I create another replica with the admin API, it still works fine and no 
 longer throws that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) "Not enough nodes to handle the request" when inserting data to solrcloud

2015-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694053#comment-14694053
 ] 

Erick Erickson commented on SOLR-7451:
--

Uhhh, this is like, really, really dangerous. But it seems like the answer to 
your original problem was accurately reported: there were no replicas active 
for a given shard.

There remains the question of _why_ there were no replicas; the Solr logs would 
help there. It's rather doubtful that the problem was with the collection 
creation, although I can't rule that out.
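
If it helps the diagnosis, the replica states can also be dumped 
programmatically; a minimal SolrJ sketch (collection name and ZooKeeper 
address below are placeholders):

{code:java}
// Hypothetical sketch: walk the cluster state and print each replica's state,
// the programmatic equivalent of eyeballing clusterstate.json / state.json.
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.ZkStateReader;

public class ReplicaStateDump {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("localhost:9983")) {
      client.connect(); // populate the cluster state before reading it
      DocCollection coll =
          client.getZkStateReader().getClusterState().getCollection("mycollection");
      for (Slice slice : coll.getSlices()) {
        for (Replica replica : slice.getReplicas()) {
          System.out.println(slice.getName() + "/" + replica.getName()
              + " -> " + replica.getStr(ZkStateReader.STATE_PROP));
        }
      }
    }
  }
}
{code}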

[~ichattopadhyaya] maybe close this ticket then?

 "Not enough nodes to handle the request" when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 deployed as a single-node SolrCloud. When I create a 
 collection with 1 shard and 2 replicas and use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, inserts succeed; and if I then create 
 another replica via the admin API, it still works fine and no longer throws 
 that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7451) "Not enough nodes to handle the request" when inserting data to solrcloud

2015-08-12 Thread Guido (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14694081#comment-14694081
 ] 

Guido commented on SOLR-7451:
-

Hello. I am not sure why this is so dangerous (surely you know it better than 
I do); I would be happy if you could elaborate. Anyway, I believe the problem 
is strictly related to a plugin inside the /lib/ directory: as a second test, 
I modified my solrconfig to avoid using the custom plugin, and I was able to 
create the collection. Then I deployed the plugin inside the 'lib' directory, 
modified the solrconfig, and reloaded the collection. Done this way, I did not 
hit the problem. I hope this helps: I always run into problems when creating a 
collection that uses a custom plugin, and this two-step approach always works 
for me; a sketch of the reload step follows below.
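
To make the two steps concrete (collection name and ZooKeeper address below 
are placeholders, not from my setup): step 1 is simply creating the collection 
with a solrconfig that does not yet reference the plugin; step 2, after the 
jar and the plugin registration are in place, is a RELOAD, e.g. via SolrJ:

{code:java}
// Hypothetical sketch of step 2: reload the already-created collection after
// the plugin jar and the updated solrconfig.xml are in place.
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class ReloadAfterPluginDeploy {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("localhost:9983")) {
      CollectionAdminRequest.Reload reload = new CollectionAdminRequest.Reload();
      reload.setCollectionName("mycollection");
      client.request(reload); // same as /admin/collections?action=RELOAD
    }
  }
}
{code}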

 "Not enough nodes to handle the request" when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 deployed as a single-node SolrCloud. When I create a 
 collection with 1 shard and 2 replicas and use SolrJ to insert data, it 
 throws "Not enough nodes to handle the request". But if I create the 
 collection with 1 shard and 1 replica, inserts succeed; and if I then create 
 another replica via the admin API, it still works fine and no longer throws 
 that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org