[jira] [Updated] (LUCENE-5784) CommonTermsQuery HighFreq MUST not applied if lowFreq terms

2014-06-22 Thread Clinton Gormley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clinton Gormley updated LUCENE-5784:


Attachment: common_terms.patch

 CommonTermsQuery HighFreq MUST not applied if lowFreq terms
 ---

 Key: LUCENE-5784
 URL: https://issues.apache.org/jira/browse/LUCENE-5784
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.8.1
Reporter: Clinton Gormley
Priority: Minor
 Attachments: common_terms.patch


 When a CommonTermsQuery has both high- and low-frequency terms, the 
 high-frequency terms' Boolean query is always added as a SHOULD clause, even 
 if highFreqOccur is set to MUST:
 new CommonTermsQuery(Occur.MUST, Occur.MUST, 0.1);
 My patch sets the top-level Boolean query's minimum-should-match to 1 to 
 ensure that the SHOULD clause must match. I am not sure whether this is the 
 correct approach, or whether it should just add the highFreq query as a MUST 
 clause instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5784) CommonTermsQuery HighFreq MUST not applied if lowFreq terms

2014-06-22 Thread Clinton Gormley (JIRA)
Clinton Gormley created LUCENE-5784:
---

 Summary: CommonTermsQuery HighFreq MUST not applied if lowFreq 
terms
 Key: LUCENE-5784
 URL: https://issues.apache.org/jira/browse/LUCENE-5784
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.8.1
Reporter: Clinton Gormley
Priority: Minor
 Attachments: common_terms.patch

When a CommonTermsQuery has both high- and low-frequency terms, the 
high-frequency terms' Boolean query is always added as a SHOULD clause, even if 
highFreqOccur is set to MUST:

new CommonTermsQuery(Occur.MUST, Occur.MUST, 0.1);

My patch sets the top-level Boolean query's minimum-should-match to 1 to ensure 
that the SHOULD clause must match. I am not sure whether this is the correct 
approach, or whether it should just add the highFreq query as a MUST clause 
instead.
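
For reference, here is a minimal hand-rolled sketch of the situation described 
above, written against the Lucene 4.x BooleanQuery API (it is not the attached 
common_terms.patch): the high-frequency sub-query ends up as a SHOULD clause on 
the top-level query, and setting minimumNumberShouldMatch to 1 is the proposed 
way to force that clause to match.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.TermQuery;

    public class HighFreqMustSketch {
      // Builds the two sub-queries roughly the way CommonTermsQuery does when
      // both high- and low-frequency terms are present, then applies the
      // proposed workaround.
      public static BooleanQuery build(Term[] lowFreqTerms, Term[] highFreqTerms) {
        BooleanQuery lowFreq = new BooleanQuery();
        for (Term t : lowFreqTerms) {
          lowFreq.add(new TermQuery(t), Occur.MUST);   // lowFreqOccur = MUST
        }
        BooleanQuery highFreq = new BooleanQuery();
        for (Term t : highFreqTerms) {
          highFreq.add(new TermQuery(t), Occur.MUST);  // highFreqOccur = MUST
        }
        BooleanQuery topLevel = new BooleanQuery();
        topLevel.add(lowFreq, Occur.MUST);
        topLevel.add(highFreq, Occur.SHOULD);          // today: always SHOULD
        topLevel.setMinimumNumberShouldMatch(1);       // the patch's workaround
        return topLevel;
      }
    }

The alternative raised in the issue would instead be 
topLevel.add(highFreq, Occur.MUST) whenever highFreqOccur is MUST and 
low-frequency terms are present.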



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6165) Having issue with dataimport handler in 4.8

2014-06-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040059#comment-14040059
 ] 

ASF subversion and git services commented on SOLR-6165:
---

Commit 1604543 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1604543 ]

SOLR-6165: DataImportHandler should write BigInteger and BigDecimal values as 
strings
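
As a rough illustration of what "write ... values as strings" means here, the 
following is a hypothetical helper (not the committed DataImportHandler change) 
that converts arbitrary-precision database values to plain strings before they 
are indexed, so the replica parses "40.7607793000" as a double instead of 
failing on a type-prefixed serialized form:

    import java.math.BigDecimal;
    import java.math.BigInteger;

    // Hypothetical helper, not the actual patch: normalize BigInteger/BigDecimal
    // column values into plain strings before they reach the Solr document.
    public final class NumericValueNormalizer {
      private NumericValueNormalizer() {}

      public static Object normalize(Object value) {
        if (value instanceof BigDecimal) {
          return ((BigDecimal) value).toPlainString(); // e.g. "40.7607793000"
        }
        if (value instanceof BigInteger) {
          return value.toString();
        }
        return value; // all other types pass through unchanged
      }
    }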

 Having issue with dataimport handler in 4.8
 ---

 Key: SOLR-6165
 URL: https://issues.apache.org/jira/browse/SOLR-6165
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: anand sengamalai
Assignee: Shalin Shekhar Mangar

 We are trying to migrate from 4.1 to 4.8. After setting up the new SolrCloud, 
 when we try to do a data import using the DataImportHandler we have issues 
 with replication. The following is the error we are getting in the log. The 
 field from the DB is a numeric field, and in the Solr schema file it has been 
 declared as a double.
 603215 [qtp280884709-15] INFO 
 org.apache.solr.update.processor.LogUpdateProcessor – [locations] 
 webapp=/solr path=/update 
 params={update.distrib=FROMLEADER&distrib.from=http://servername:8983/solr/locations/&wt=javabin&version=2}
  {} 0 0
 603216 [qtp280884709-15] ERROR org.apache.solr.core.SolrCore – 
 org.apache.solr.common.SolrException: ERROR: [doc=SALT LAKE CITY-UT-84127] 
 Error adding field 'city_lat'='java.math.BigDecimal:40.7607793000' msg=For 
 input string: java.math.BigDecimal:40.7607793000
 at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
 at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
 at 
 org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
 at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
 at 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
 at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 

[jira] [Resolved] (SOLR-6165) DataImportHandler writes BigInteger and BigDecimal as-is which causes errors in SolrCloud replication

2014-06-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6165.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Thanks Anand!

 DataImportHandler writes BigInteger and BigDecimal as-is which causes errors 
 in SolrCloud replication
 -

 Key: SOLR-6165
 URL: https://issues.apache.org/jira/browse/SOLR-6165
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: anand sengamalai
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.10


 We are trying to migrate from 4.1 to 4.8. After setting up the new SolrCloud, 
 when we try to do a data import using the DataImportHandler we have issues 
 with replication. The following is the error we are getting in the log. The 
 field from the DB is a numeric field, and in the Solr schema file it has been 
 declared as a double.
 603215 [qtp280884709-15] INFO 
 org.apache.solr.update.processor.LogUpdateProcessor – [locations] 
 webapp=/solr path=/update 
 params={update.distrib=FROMLEADER&distrib.from=http://servername:8983/solr/locations/&wt=javabin&version=2}
  {} 0 0
 603216 [qtp280884709-15] ERROR org.apache.solr.core.SolrCore – 
 org.apache.solr.common.SolrException: ERROR: [doc=SALT LAKE CITY-UT-84127] 
 Error adding field 'city_lat'='java.math.BigDecimal:40.7607793000' msg=For 
 input string: java.math.BigDecimal:40.7607793000
 at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
 at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
 at 
 org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
 at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
 at 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
 at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 

[jira] [Updated] (SOLR-6165) DataImportHandler writes BigInteger and BigDecimal as-is which causes errors in SolrCloud replication

2014-06-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6165:


Summary: DataImportHandler writes BigInteger and BigDecimal as-is which 
causes errors in SolrCloud replication  (was: Having issue with dataimport 
handler in 4.8)

 DataImportHandler writes BigInteger and BigDecimal as-is which causes errors 
 in SolrCloud replication
 -

 Key: SOLR-6165
 URL: https://issues.apache.org/jira/browse/SOLR-6165
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: anand sengamalai
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.10


 We are trying to migrate from 4.1 to 4.8. After setting up the new SolrCloud, 
 when we try to do a data import using the DataImportHandler we have issues 
 with replication. The following is the error we are getting in the log. The 
 field from the DB is a numeric field, and in the Solr schema file it has been 
 declared as a double.
 603215 [qtp280884709-15] INFO 
 org.apache.solr.update.processor.LogUpdateProcessor – [locations] 
 webapp=/solr path=/update 
 params={update.distrib=FROMLEADER&distrib.from=http://servername:8983/solr/locations/&wt=javabin&version=2}
  {} 0 0
 603216 [qtp280884709-15] ERROR org.apache.solr.core.SolrCore – 
 org.apache.solr.common.SolrException: ERROR: [doc=SALT LAKE CITY-UT-84127] 
 Error adding field 'city_lat'='java.math.BigDecimal:40.7607793000' msg=For 
 input string: java.math.BigDecimal:40.7607793000
 at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
 at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
 at 
 org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
 at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
 at 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
 at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 

[jira] [Commented] (SOLR-6165) Having issue with dataimport handler in 4.8

2014-06-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040060#comment-14040060
 ] 

ASF subversion and git services commented on SOLR-6165:
---

Commit 1604544 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1604544 ]

SOLR-6165: DataImportHandler should write BigInteger and BigDecimal values as 
strings

 Having issue with dataimport handler in 4.8
 ---

 Key: SOLR-6165
 URL: https://issues.apache.org/jira/browse/SOLR-6165
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: anand sengamalai
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.10


 We are trying to migrate from 4.1 to 4.8. After setting up the new SolrCloud, 
 when we try to do a data import using the DataImportHandler we have issues 
 with replication. The following is the error we are getting in the log. The 
 field from the DB is a numeric field, and in the Solr schema file it has been 
 declared as a double.
 603215 [qtp280884709-15] INFO 
 org.apache.solr.update.processor.LogUpdateProcessor – [locations] 
 webapp=/solr path=/update 
 params={update.distrib=FROMLEADER&distrib.from=http://servername:8983/solr/locations/&wt=javabin&version=2}
  {} 0 0
 603216 [qtp280884709-15] ERROR org.apache.solr.core.SolrCore – 
 org.apache.solr.common.SolrException: ERROR: [doc=SALT LAKE CITY-UT-84127] 
 Error adding field 'city_lat'='java.math.BigDecimal:40.7607793000' msg=For 
 input string: java.math.BigDecimal:40.7607793000
 at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
 at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
 at 
 org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
 at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
 at 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
 at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 

SSLMigration test hangs on FreeBSD blackhole

2014-06-22 Thread Uwe Schindler
Hi,

In every run, the SSLMigrationTest hung on ASF Jenkins, where we have the 
FreeBSD blackhole enabled. It is not even cancelled by the test runner; it hangs 
forever: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/565/console
We did not see this error before because the ASF Jenkins was offline for about 
a month.
It looks like a timeout is missing. For those who don't know: on FreeBSD, the 
blackhole affects even connections to localhost: if the port is not bound, the 
connect times out (instead of failing immediately with a connection error). If 
no timeout is set, it takes forever.
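
To illustrate the missing-timeout point (illustrative only, this is not the 
SSLMigrationTest code): with the blackhole enabled, a connect to an unbound 
localhost port never gets a reset, so an unbounded connect() hangs, while a 
connect with an explicit timeout fails fast. The port number below is an 
arbitrary placeholder assumed to be unbound.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class ConnectWithTimeout {
      public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
          // 10-second connect timeout instead of the default unbounded wait
          socket.connect(new InetSocketAddress("127.0.0.1", 65000), 10_000);
        } catch (SocketTimeoutException e) {
          System.err.println("connect timed out (blackhole behaviour): " + e);
        }
      }
    }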

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 4.9.0

2014-06-22 Thread Ahmet Arslan
Hi,

+1

SUCCESS! [1:47:26.786519]

python3 dev-tools/scripts/smokeTestRelease.py 
'http://people.apache.org/~rmuir/staging_area/lucene_solr_4_9_0_r1604085/' 
1604085 4.9.0 tmp

Ahmet

On Sunday, June 22, 2014 2:11 AM, Walter Underwood wun...@wunderwood.org 
wrote:



Also, isn't JDK 7u51 a known bad release for Lucene? 

wunder


On Jun 21, 2014, at 12:32 PM, Robert Muir rcm...@gmail.com wrote:

Not *the* smoketester, but some outdated, arbitrary smoketester from the past.

Please use the latest one from the 4.9 branch.

This file is supposed to be there, and the smoketester actually looks for it.

On Sat, Jun 21, 2014 at 3:16 PM, david.w.smi...@gmail.com
david.w.smi...@gmail.com wrote:

The smoke tester failed for me:

lucene-solr_4x_svn$ python3.3 -u dev-tools/scripts/smokeTestRelease.py
http://people.apache.org/~rmuir/staging_area/lucene_solr_4_9_0_r1604085/
1604085 4.9.0 /Volumes/RamDisk/tmp

JAVA7_HOME is
/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home

NOTE: output encoding is UTF-8


Load release URL
http://people.apache.org/~rmuir/staging_area/lucene_solr_4_9_0_r1604085/...


Test Lucene...

 test basics...

 get KEYS

   0.1 MB in 0.69 sec (0.2 MB/sec)

 check changes HTML...

 download lucene-4.9.0-src.tgz...

   27.6 MB in 94.12 sec (0.3 MB/sec)

   verify md5/sha1 digests

   verify sig

   verify trust

 GPG: gpg: WARNING: This key is not certified with a trusted signature!

 download lucene-4.9.0.tgz...

   61.7 MB in 226.09 sec (0.3 MB/sec)

   verify md5/sha1 digests

   verify sig

   verify trust

 GPG: gpg: WARNING: This key is not certified with a trusted signature!

 download lucene-4.9.0.zip...

   71.3 MB in 217.32 sec (0.3 MB/sec)

   verify md5/sha1 digests

   verify sig

   verify trust

 GPG: gpg: WARNING: This key is not certified with a trusted signature!

 unpack lucene-4.9.0.tgz...

   verify JAR metadata/identity/no javax.* or java.* classes...

   test demo with 1.7...

 got 5727 hits for query lucene

   check Lucene's javadoc JAR

 unpack lucene-4.9.0.zip...

   verify JAR metadata/identity/no javax.* or java.* classes...

   test demo with 1.7...

 got 5727 hits for query lucene

   check Lucene's javadoc JAR

 unpack lucene-4.9.0-src.tgz...

Traceback (most recent call last):

 File "dev-tools/scripts/smokeTestRelease.py", line 1347, in <module>

 File "dev-tools/scripts/smokeTestRelease.py", line 1291, in main

 File "dev-tools/scripts/smokeTestRelease.py", line 1329, in smokeTest

 File "dev-tools/scripts/smokeTestRelease.py", line 637, in unpackAndVerify

 File "dev-tools/scripts/smokeTestRelease.py", line 708, in verifyUnpacked

RuntimeError: lucene: unexpected files/dirs in artifact
lucene-4.9.0-src.tgz: ['ivy-ignore-conflicts.properties']


And indeed, that file is there.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



--
Walter Underwood
wun...@wunderwood.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20-ea-b15) - Build # 10624 - Still Failing!

2014-06-22 Thread Michael McCandless
Woops, new test, a bit too evil I guess, I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Sun, Jun 22, 2014 at 12:28 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10624/
 Java: 64bit/jdk1.8.0_20-ea-b15 -XX:-UseCompressedOops -XX:+UseParallelGC

 1 tests failed.
 REGRESSION:  org.apache.lucene.util.automaton.TestAutomaton.testRandomFinite

 Error Message:
 Java heap space

 Stack Trace:
 java.lang.OutOfMemoryError: Java heap space
 at 
 __randomizedtesting.SeedInfo.seed([B0BE34D7DCF26980:F70852EA63620CA1]:0)
 at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:97)
 at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
 at 
 org.apache.lucene.util.automaton.TestAutomaton.randomNoOp(TestAutomaton.java:584)
 at 
 org.apache.lucene.util.automaton.TestAutomaton.unionTerms(TestAutomaton.java:637)
 at 
 org.apache.lucene.util.automaton.TestAutomaton.testRandomFinite(TestAutomaton.java:776)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)




 Build Log:
 [...truncated 1691 lines...]
[junit4] Suite: org.apache.lucene.util.automaton.TestAutomaton
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestAutomaton 
 -Dtests.method=testRandomFinite -Dtests.seed=B0BE34D7DCF26980 
 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en -Dtests.timezone=CNT 
 -Dtests.file.encoding=UTF-8
[junit4] ERROR   80.7s J1 | TestAutomaton.testRandomFinite 
[junit4] Throwable #1: java.lang.OutOfMemoryError: Java heap space
[junit4]at 
 __randomizedtesting.SeedInfo.seed([B0BE34D7DCF26980:F70852EA63620CA1]:0)
[junit4]at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:97)
[junit4]at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
[junit4]at 
 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20-ea-b15) - Build # 10624 - Still Failing!

2014-06-22 Thread Michael McCandless
I committed a fix.

Mike McCandless

http://blog.mikemccandless.com


On Sun, Jun 22, 2014 at 6:12 AM, Michael McCandless
luc...@mikemccandless.com wrote:
 Woops, new test, a bit too evil I guess, I'll dig.

 Mike McCandless

 http://blog.mikemccandless.com


 On Sun, Jun 22, 2014 at 12:28 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10624/
 Java: 64bit/jdk1.8.0_20-ea-b15 -XX:-UseCompressedOops -XX:+UseParallelGC

 1 tests failed.
 REGRESSION:  org.apache.lucene.util.automaton.TestAutomaton.testRandomFinite

 Error Message:
 Java heap space

 Stack Trace:
 java.lang.OutOfMemoryError: Java heap space
 at 
 __randomizedtesting.SeedInfo.seed([B0BE34D7DCF26980:F70852EA63620CA1]:0)
 at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:97)
 at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
 at 
 org.apache.lucene.util.automaton.TestAutomaton.randomNoOp(TestAutomaton.java:584)
 at 
 org.apache.lucene.util.automaton.TestAutomaton.unionTerms(TestAutomaton.java:637)
 at 
 org.apache.lucene.util.automaton.TestAutomaton.testRandomFinite(TestAutomaton.java:776)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)




 Build Log:
 [...truncated 1691 lines...]
[junit4] Suite: org.apache.lucene.util.automaton.TestAutomaton
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestAutomaton 
 -Dtests.method=testRandomFinite -Dtests.seed=B0BE34D7DCF26980 
 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en 
 -Dtests.timezone=CNT -Dtests.file.encoding=UTF-8
[junit4] ERROR   80.7s J1 | TestAutomaton.testRandomFinite 
[junit4] Throwable #1: java.lang.OutOfMemoryError: Java heap space
[junit4]at 
 __randomizedtesting.SeedInfo.seed([B0BE34D7DCF26980:F70852EA63620CA1]:0)
[junit4]at 
 org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:97)
[junit4]at 
 

RE: SSLMigration test hangs on FreeBSD blackhole

2014-06-22 Thread Uwe Schindler
I requested a stack dump: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/565/console

   [junit4] JVM J0: stdout was not empty, see: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/test/temp/junit4-J0-20140621_125257_468.sysout
   [junit4]  JVM J0: stdout (verbatim) 
   [junit4] 2014-06-22 14:44:03
   [junit4] Full thread dump OpenJDK 64-Bit Server VM (24.60-b09 mixed mode):
   [junit4] 
   [junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e6800 
nid=0x846b95c00 runnable [0x76e7]
   [junit4]java.lang.Thread.State: RUNNABLE
   [junit4] at java.net.PlainSocketImpl.socketAccept(Native Method)
   [junit4] at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
   [junit4] at java.net.ServerSocket.implAccept(ServerSocket.java:530)
   [junit4] at java.net.ServerSocket.accept(ServerSocket.java:498)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI RenewClean-[127.0.0.1:40915] daemon prio=5 
tid=0x0008517df000 nid=0x83c049800 in Object.wait() [0x72426000]
   [junit4]java.lang.Thread.State: TIMED_WAITING (on object monitor)
   [junit4] at java.lang.Object.wait(Native Method)
   [junit4] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   [junit4] - locked 0x00080de40760 (a 
java.lang.ref.ReferenceQueue$Lock)
   [junit4] at 
sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:535)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI Scheduler(0) daemon prio=5 tid=0x0008033e2800 
nid=0x84abf0c00 waiting on condition [0x7fffec5c8000]
   [junit4]java.lang.Thread.State: TIMED_WAITING (parking)
   [junit4] at sun.misc.Unsafe.park(Native Method)
   [junit4] - parking to wait for  0x00080de3e008 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
   [junit4] at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   [junit4] at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
   [junit4] at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
   [junit4] at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
   [junit4] at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   [junit4] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   [junit4] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] GC Daemon daemon prio=5 tid=0x0008033e4000 nid=0x837d98400 in 
Object.wait() [0x7090b000]
   [junit4]java.lang.Thread.State: TIMED_WAITING (on object monitor)
   [junit4] at java.lang.Object.wait(Native Method)
   [junit4] at sun.misc.GC$Daemon.run(GC.java:117)
   [junit4] - locked 0x00080de434a8 (a sun.misc.GC$LatencyLock)
   [junit4] 
   [junit4] RMI Reaper prio=5 tid=0x0008033e3800 nid=0x83c050400 in 
Object.wait() [0x73638000]
   [junit4]java.lang.Thread.State: WAITING (on object monitor)
   [junit4] at java.lang.Object.wait(Native Method)
   [junit4] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   [junit4] - locked 0x00080de36d38 (a 
java.lang.ref.ReferenceQueue$Lock)
   [junit4] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
   [junit4] at 
sun.rmi.transport.ObjectTable$Reaper.run(ObjectTable.java:351)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e2000 
nid=0x850ca1800 runnable [0x75557000]
   [junit4]java.lang.Thread.State: RUNNABLE
   [junit4] at java.net.PlainSocketImpl.socketAccept(Native Method)
   [junit4] at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
   [junit4] at java.net.ServerSocket.implAccept(ServerSocket.java:530)
   [junit4] at java.net.ServerSocket.accept(ServerSocket.java:498)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e1000 
nid=0x8454da800 runnable [0x7fffddde1000]
   [junit4]java.lang.Thread.State: RUNNABLE
   [junit4] at 

[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 565 - Still Failing

2014-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/565/

All tests passed

Build Log:
[...truncated 14421 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/test/temp/junit4-J0-20140621_125257_468.sysout
   [junit4]  JVM J0: stdout (verbatim) 
   [junit4] 2014-06-22 14:44:03
   [junit4] Full thread dump OpenJDK 64-Bit Server VM (24.60-b09 mixed mode):
   [junit4] 
   [junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e6800 
nid=0x846b95c00 runnable [0x76e7]
   [junit4]java.lang.Thread.State: RUNNABLE
   [junit4] at java.net.PlainSocketImpl.socketAccept(Native Method)
   [junit4] at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
   [junit4] at java.net.ServerSocket.implAccept(ServerSocket.java:530)
   [junit4] at java.net.ServerSocket.accept(ServerSocket.java:498)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI RenewClean-[127.0.0.1:40915] daemon prio=5 
tid=0x0008517df000 nid=0x83c049800 in Object.wait() [0x72426000]
   [junit4]java.lang.Thread.State: TIMED_WAITING (on object monitor)
   [junit4] at java.lang.Object.wait(Native Method)
   [junit4] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   [junit4] - locked 0x00080de40760 (a 
java.lang.ref.ReferenceQueue$Lock)
   [junit4] at 
sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:535)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI Scheduler(0) daemon prio=5 tid=0x0008033e2800 
nid=0x84abf0c00 waiting on condition [0x7fffec5c8000]
   [junit4]java.lang.Thread.State: TIMED_WAITING (parking)
   [junit4] at sun.misc.Unsafe.park(Native Method)
   [junit4] - parking to wait for  0x00080de3e008 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
   [junit4] at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   [junit4] at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
   [junit4] at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
   [junit4] at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
   [junit4] at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
   [junit4] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
   [junit4] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] GC Daemon daemon prio=5 tid=0x0008033e4000 nid=0x837d98400 in 
Object.wait() [0x7090b000]
   [junit4]java.lang.Thread.State: TIMED_WAITING (on object monitor)
   [junit4] at java.lang.Object.wait(Native Method)
   [junit4] at sun.misc.GC$Daemon.run(GC.java:117)
   [junit4] - locked 0x00080de434a8 (a sun.misc.GC$LatencyLock)
   [junit4] 
   [junit4] RMI Reaper prio=5 tid=0x0008033e3800 nid=0x83c050400 in 
Object.wait() [0x73638000]
   [junit4]java.lang.Thread.State: WAITING (on object monitor)
   [junit4] at java.lang.Object.wait(Native Method)
   [junit4] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   [junit4] - locked 0x00080de36d38 (a 
java.lang.ref.ReferenceQueue$Lock)
   [junit4] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
   [junit4] at 
sun.rmi.transport.ObjectTable$Reaper.run(ObjectTable.java:351)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e2000 
nid=0x850ca1800 runnable [0x75557000]
   [junit4]java.lang.Thread.State: RUNNABLE
   [junit4] at java.net.PlainSocketImpl.socketAccept(Native Method)
   [junit4] at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
   [junit4] at java.net.ServerSocket.implAccept(ServerSocket.java:530)
   [junit4] at java.net.ServerSocket.accept(ServerSocket.java:498)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
   [junit4] at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
   [junit4] at java.lang.Thread.run(Thread.java:745)
   [junit4] 
   [junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e1000 
nid=0x8454da800 runnable [0x7fffddde1000]
   [junit4]java.lang.Thread.State: RUNNABLE
   [junit4] at 

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1156: POMs out of sync

2014-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1156/

No tests ran.

Build Log:
[...truncated 38817 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:5.0-SNAPSHOT: checking for updates from 
sonatype.releases
[artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on 
download: local = '780ba3cf6b6eb0f7c9f6d41d8d25a86a2f46b0c4'; remote = '<html>
[artifact:dependencies] <head><title>301' - RETRYING
[artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on 
download: local = '780ba3cf6b6eb0f7c9f6d41d8d25a86a2f46b0c4'; remote = '<html>
[artifact:dependencies] <head><title>301' - IGNORING
[artifact:dependencies] An error has occurred while processing the Maven 
artifact tasks.

[...truncated 18 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:483: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:174: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:512:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1528:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:559:
 Unable to resolve artifact: Unable to get dependency information: Unable to 
read the metadata file for artifact 'org.apache.lucene:lucene-codecs:jar': 
Error getting POM for 'org.apache.lucene:lucene-codecs' from the repository: 
Unable to read local copy of metadata: Cannot read metadata from 
'/home/hudson/.m2/repository/org/apache/lucene/lucene-codecs/5.0-SNAPSHOT/maven-metadata-sonatype.releases.xml':
 end tag name </body> must match start tag name <hr> from line 5 (position: 
TEXT seen ...</center>\r\n</body>... @6:8) 
  org.apache.lucene:lucene-codecs:pom:5.0-SNAPSHOT


 for project org.apache.lucene:lucene-codecs
  org.apache.lucene:lucene-codecs:jar:5.0-SNAPSHOT

from the specified remote repositories:
  central (http://repo1.maven.org/maven2),
  sonatype.releases (http://oss.sonatype.org/content/repositories/releases),
  Nexus (http://repository.apache.org/snapshots)

Path to dependency: 
1) org.apache.lucene:lucene-test-framework:jar:5.0-SNAPSHOT



Total time: 29 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-06-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039800#comment-14039800
 ] 

Noble Paul edited comment on SOLR-5473 at 6/22/14 6:16 PM:
---

Patch updated to trunk, incorporating most of the comments:
# All external references are eliminated from the APIs
# The node is given the suffix /state.json instead of /state
# Removed the redundant external/stateVersion attribute from the state object. 
The version is automatically derived from the znode from which the object is 
read
# Thread-safety issues addressed
# Added javadocs

(and many other subtle cleanups)

The comments which are not addressed are:
# The selective watching of collection nodes by Solr nodes. There are only 3 
choices when it comes to watching states:
#* Watch all nodes: this would be equivalent to, or worse than, the current 
clusterstate.json solution. All nodes will be notified of each state change 
(multiple times, once per collection of which the node is a member)
#* Watch none: just fetch the state data just in time (which will kill ZK) or 
cache it, which means the node will not have an updated state to make the 
right decision at the right time
#* Watch selectively: this is the approach we have taken here
# Maintaining the ZkStateReader reference in ClusterState. Agreed, that is not 
elegant. The ideal solution would be to completely get rid of 
ClusterState.java, because that node is going to go away, and we will only 
have ZkStateReader and DocCollection and nothing in between. The problem is 
that we have clusterstate.json now and it will exist for at least a couple of 
releases. So I am torn between the choices, and I decided to go with the 
not-so-elegant choice of ClusterState keeping a reference to ZkStateReader, so 
that all APIs work fine. My suggestion is to eliminate ClusterState.java when 
we deprecate the old format
# The ephemeralCollectionData in ZkStateReader. This is again not so elegant, 
but it is simple and performant and has minimal impact on the code. I'm happy 
to hear any other simpler ideas to make it better.

We have done extensive testing on this patch internally with very large 
clusters (120+ nodes) and a very large number of collections (100s of 
collections). The solr-5473 branch already has this code committed.

If there are no objections I plan to commit this fairly soon.
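
For illustration, here is a minimal sketch of the "watch selectively" idea 
using the plain ZooKeeper client API (hypothetical code, not the SOLR-5473 
patch): a node arms a watch only on the state.json znode of each collection it 
hosts, and the znode version returned in Stat stands in for the per-collection 
state version mentioned above.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class CollectionStateWatcher {
      private final ZooKeeper zk;

      public CollectionStateWatcher(ZooKeeper zk) {
        this.zk = zk;
      }

      // Reads /collections/<name>/state.json and re-arms the watch on every
      // change, so only nodes hosting this collection get notified of it.
      public byte[] watchCollection(final String collection) throws Exception {
        String path = "/collections/" + collection + "/state.json";
        Stat stat = new Stat();
        Watcher watcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            try {
              watchCollection(collection); // re-read and re-arm after a change
            } catch (Exception e) {
              // swallowed here; real code would recover via a periodic refresh
            }
          }
        };
        // stat.getVersion() plays the role of the derived state version.
        return zk.getData(path, watcher, stat);
      }
    }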




was (Author: noble.paul):
Patch updated to trunk, incorporating most of the comments:
# All external references are eliminated from the APIs
# The node is given the suffix /state.json instead of /state
# Removed the redundant external/stateVersion attribute from the state object. 
The version is automatically derived from the znode from which the object is 
read
# Thread-safety issues addressed
# Added javadocs

(and many other subtle cleanups)

The comments which are not addressed are:
# The selective watching of collection nodes by Solr nodes. There are only 3 
choices when it comes to watching states:
#* Watch all nodes: this would be equivalent to, or worse than, the current 
clusterstate.json solution. All nodes will be notified of each state change 
(multiple times, once per collection of which the node is a member)
#* Watch none: just fetch the state data just in time (which will kill ZK) or 
cache it, which means the node will not have an updated state to make the 
right decision at the right time
#* Watch selectively: this is the approach we have taken here
# Maintaining the ZkStateReader reference in ClusterState. Agreed, that is not 
elegant. The ideal solution would be to completely get rid of 
ClusterState.java, because that node is going to go away, and we will only 
have ZkStateReader and DocCollection and nothing in between. The problem is 
that we have clusterstate.json now and it will exist for at least a couple of 
releases. So I am torn between the choices, and I decided to go with the 
not-so-elegant choice of ClusterState keeping a reference to ZkStateReader, so 
that all APIs work fine. My suggestion is to eliminate ClusterState.java when 
we deprecate the old format
# The ephemeralCollectionData in ZkStateReader. This is again not so elegant, 
but it is simple and performant and has minimal impact on the code. I'm happy 
to hear any other simpler ideas to make it better.

We have done extensive testing on this patch internally with very large 
clusters (60+ nodes) and a very large number of collections (100s of 
collections). The solr-5473 branch already has this code committed.

If there are no objections I plan to commit this fairly soon.



 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: 

[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-06-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039800#comment-14039800
 ] 

Noble Paul edited comment on SOLR-5473 at 6/22/14 6:21 PM:
---

Patch updated to trunk, incorporating most of the comments:
# All external references are eliminated from the APIs
# The node is given the suffix /state.json instead of /state
# Removed the redundant external/stateVersion attribute from the state object. 
The version is automatically derived from the znode from which the object is 
read
# Thread-safety issues addressed
# Added javadocs

(and many other subtle cleanups)

The comments which are not addressed are:
# The selective watching of collection nodes by Solr nodes. There are only 3 
choices when it comes to watching states:
#* Watch all nodes: this would be equivalent to, or worse than, the current 
clusterstate.json solution. All nodes will be notified of each state change 
(multiple times, once per collection of which the node is a member)
#* Watch none: just fetch the state data just in time (which will kill ZK) or 
cache it, which means the node will not have an updated state to make the 
right decision at the right time
#* Watch selectively: this is the approach we have taken here
# Maintaining the ZkStateReader reference in ClusterState. Agreed, that is not 
elegant. The ideal solution would be to completely get rid of 
ClusterState.java, because that node is going to go away, and we will only 
have ZkStateReader and DocCollection and nothing in between. The problem is 
that we have clusterstate.json now and it will exist for at least a couple of 
releases. So I am torn between the choices, and I decided to go with the 
not-so-elegant choice of ClusterState keeping a reference to ZkStateReader, so 
that all APIs work fine. My suggestion is to eliminate ClusterState.java when 
we deprecate the old format
# The ephemeralCollectionData in ZkStateReader. This is again not so elegant, 
but it is simple and performant and has minimal impact on the code. I'm happy 
to hear any other simpler ideas to make it better.

We have done extensive testing on this patch internally with very large 
clusters (120+ nodes) and a very large number of collections (1000+ 
collections). The solr-5473 branch already has this code committed.

If there are no objections I plan to commit this fairly soon.




was (Author: noble.paul):
Patch updated to trunk, incorporating most of the comments:
# All external references are eliminated from the APIs
# The node is given the suffix /state.json instead of /state
# Removed the redundant external/stateVersion attribute from the state object. 
The version is automatically derived from the znode from which the object is 
read
# Thread-safety issues addressed
# Added javadocs

(and many other subtle cleanups)

The comments which are not addressed are:
# The selective watching of collection nodes by Solr nodes. There are only 3 
choices when it comes to watching states:
#* Watch all nodes: this would be equivalent to, or worse than, the current 
clusterstate.json solution. All nodes will be notified of each state change 
(multiple times, once per collection of which the node is a member)
#* Watch none: just fetch the state data just in time (which will kill ZK) or 
cache it, which means the node will not have an updated state to make the 
right decision at the right time
#* Watch selectively: this is the approach we have taken here
# Maintaining the ZkStateReader reference in ClusterState. Agreed, that is not 
elegant. The ideal solution would be to completely get rid of 
ClusterState.java, because that node is going to go away, and we will only 
have ZkStateReader and DocCollection and nothing in between. The problem is 
that we have clusterstate.json now and it will exist for at least a couple of 
releases. So I am torn between the choices, and I decided to go with the 
not-so-elegant choice of ClusterState keeping a reference to ZkStateReader, so 
that all APIs work fine. My suggestion is to eliminate ClusterState.java when 
we deprecate the old format
# The ephemeralCollectionData in ZkStateReader. This is again not so elegant, 
but it is simple and performant and has minimal impact on the code. I'm happy 
to hear any other simpler ideas to make it better.

We have done extensive testing on this patch internally with very large 
clusters (120+ nodes) and a very large number of collections (100s of 
collections). The solr-5473 branch already has this code committed.

If there are no objections I plan to commit this fairly soon.



 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: 

Re: SSLMigration test hangs on FreeBSD blackhole

2014-06-22 Thread Dawid Weiss
This looks very weird because there's no actual test thread running. And
the runner's thread is hung on readBytes (?)... This is very suspicious.

Dawid


On Sun, Jun 22, 2014 at 4:46 PM, Uwe Schindler u...@thetaphi.de wrote:

 I requested a stack dump:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/565/console

[junit4] JVM J0: stdout was not empty, see:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/test/temp/junit4-J0-20140621_125257_468.sysout
[junit4]  JVM J0: stdout (verbatim) 
[junit4] 2014-06-22 14:44:03
[junit4] Full thread dump OpenJDK 64-Bit Server VM (24.60-b09 mixed
 mode):
[junit4]
[junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e6800
 nid=0x846b95c00 runnable [0x76e7]
[junit4]java.lang.Thread.State: RUNNABLE
[junit4] at java.net.PlainSocketImpl.socketAccept(Native Method)
[junit4] at
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
[junit4] at java.net.ServerSocket.implAccept(ServerSocket.java:530)
[junit4] at java.net.ServerSocket.accept(ServerSocket.java:498)
[junit4] at
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
[junit4] at
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
[junit4] at java.lang.Thread.run(Thread.java:745)
[junit4]
[junit4] RMI RenewClean-[127.0.0.1:40915] daemon prio=5
 tid=0x0008517df000 nid=0x83c049800 in Object.wait() [0x72426000]
[junit4]java.lang.Thread.State: TIMED_WAITING (on object monitor)
[junit4] at java.lang.Object.wait(Native Method)
[junit4] at
 java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
[junit4] - locked 0x00080de40760 (a
 java.lang.ref.ReferenceQueue$Lock)
[junit4] at
 sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:535)
[junit4] at java.lang.Thread.run(Thread.java:745)
[junit4]
[junit4] RMI Scheduler(0) daemon prio=5 tid=0x0008033e2800
 nid=0x84abf0c00 waiting on condition [0x7fffec5c8000]
[junit4]java.lang.Thread.State: TIMED_WAITING (parking)
[junit4] at sun.misc.Unsafe.park(Native Method)
[junit4] - parking to wait for  0x00080de3e008 (a
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
[junit4] at
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
[junit4] at
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
[junit4] at
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
[junit4] at
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
[junit4] at
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
[junit4] at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
[junit4] at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[junit4] at java.lang.Thread.run(Thread.java:745)
[junit4]
[junit4] GC Daemon daemon prio=5 tid=0x0008033e4000
 nid=0x837d98400 in Object.wait() [0x7090b000]
[junit4]java.lang.Thread.State: TIMED_WAITING (on object monitor)
[junit4] at java.lang.Object.wait(Native Method)
[junit4] at sun.misc.GC$Daemon.run(GC.java:117)
[junit4] - locked 0x00080de434a8 (a sun.misc.GC$LatencyLock)
[junit4]
[junit4] RMI Reaper prio=5 tid=0x0008033e3800 nid=0x83c050400 in
 Object.wait() [0x73638000]
[junit4]java.lang.Thread.State: WAITING (on object monitor)
[junit4] at java.lang.Object.wait(Native Method)
[junit4] at
 java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
[junit4] - locked 0x00080de36d38 (a
 java.lang.ref.ReferenceQueue$Lock)
[junit4] at
 java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
[junit4] at
 sun.rmi.transport.ObjectTable$Reaper.run(ObjectTable.java:351)
[junit4] at java.lang.Thread.run(Thread.java:745)
[junit4]
[junit4] RMI TCP Accept-0 daemon prio=5 tid=0x0008033e2000
 nid=0x850ca1800 runnable [0x75557000]
[junit4]java.lang.Thread.State: RUNNABLE
[junit4] at java.net.PlainSocketImpl.socketAccept(Native Method)
[junit4] at
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
[junit4] at java.net.ServerSocket.implAccept(ServerSocket.java:530)
[junit4] at java.net.ServerSocket.accept(ServerSocket.java:498)
[junit4] at
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
[junit4] at
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
  

Re: [VOTE] 4.9.0

2014-06-22 Thread david.w.smi...@gmail.com
Got it.  Turns out I forgot I was on the 4.8 branch for some other reason.
 (and yes Walter, I updated my JDK too)

 SUCCESS! [1:47:35.355454]

+1  to release

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley


On Sat, Jun 21, 2014 at 3:32 PM, Robert Muir rcm...@gmail.com wrote:

 Not *the* smoketester, just an outdated, arbitrary smoketester from the past.

 please, use the latest one from the 4.9 branch.

 This file is supposed to be there and the smoketester actually looks for
 it.

  On Sat, Jun 21, 2014 at 3:16 PM, david.w.smi...@gmail.com wrote:
  The smoke tester failed for me:
 
  lucene-solr_4x_svn$ python3.3 -u dev-tools/scripts/smokeTestRelease.py
  http://people.apache.org/~rmuir/staging_area/lucene_solr_4_9_0_r1604085/
  1604085 4.9.0 /Volumes/RamDisk/tmp
 
  JAVA7_HOME is
  /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home
 
  NOTE: output encoding is UTF-8
 
 
  Load release URL
  
 http://people.apache.org/~rmuir/staging_area/lucene_solr_4_9_0_r1604085/
 ...
 
 
  Test Lucene...
 
test basics...
 
get KEYS
 
  0.1 MB in 0.69 sec (0.2 MB/sec)
 
check changes HTML...
 
download lucene-4.9.0-src.tgz...
 
  27.6 MB in 94.12 sec (0.3 MB/sec)
 
  verify md5/sha1 digests
 
  verify sig
 
  verify trust
 
GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
 
download lucene-4.9.0.tgz...
 
  61.7 MB in 226.09 sec (0.3 MB/sec)
 
  verify md5/sha1 digests
 
  verify sig
 
  verify trust
 
GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
 
download lucene-4.9.0.zip...
 
  71.3 MB in 217.32 sec (0.3 MB/sec)
 
  verify md5/sha1 digests
 
  verify sig
 
  verify trust
 
GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
 
unpack lucene-4.9.0.tgz...
 
  verify JAR metadata/identity/no javax.* or java.* classes...
 
  test demo with 1.7...
 
got 5727 hits for query lucene
 
  check Lucene's javadoc JAR
 
unpack lucene-4.9.0.zip...
 
  verify JAR metadata/identity/no javax.* or java.* classes...
 
  test demo with 1.7...
 
got 5727 hits for query lucene
 
  check Lucene's javadoc JAR
 
unpack lucene-4.9.0-src.tgz...
 
  Traceback (most recent call last):
 
File dev-tools/scripts/smokeTestRelease.py, line 1347, in module
 
File dev-tools/scripts/smokeTestRelease.py, line 1291, in main
 
File dev-tools/scripts/smokeTestRelease.py, line 1329, in smokeTest
 
File dev-tools/scripts/smokeTestRelease.py, line 637, in
 unpackAndVerify
 
File dev-tools/scripts/smokeTestRelease.py, line 708, in
 verifyUnpacked
 
  RuntimeError: lucene: unexpected files/dirs in artifact
  lucene-4.9.0-src.tgz: ['ivy-ignore-conflicts.properties']
 
 
  And indeed, that file is there.
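
For context, the error above comes from a layout check that compares the 
unpacked artifact's top-level entries against a list the script expects. A 
rough Java sketch of that kind of validation follows; it is not the actual 
smokeTestRelease.py logic, and the expected-entry list here is hypothetical.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Illustrative only: the kind of layout check that produces the error above.
public class ArtifactLayoutCheck {

  // Hypothetical expected top-level entries for a source artifact.
  private static final Set<String> EXPECTED = new HashSet<>(Arrays.asList(
      "build.xml", "ivy-settings.xml", "CHANGES.txt", "LICENSE.txt",
      "NOTICE.txt", "core", "analysis", "queries"));

  public static void main(String[] args) throws IOException {
    Path unpacked = Paths.get(args[0]); // e.g. the unpacked lucene-4.9.0-src.tgz
    Set<String> unexpected = new TreeSet<>();
    try (DirectoryStream<Path> entries = Files.newDirectoryStream(unpacked)) {
      for (Path entry : entries) {
        String name = entry.getFileName().toString();
        if (!EXPECTED.contains(name)) {
          unexpected.add(name);
        }
      }
    }
    if (!unexpected.isEmpty()) {
      // Mirrors the shape of the Python error quoted above.
      throw new RuntimeException("unexpected files/dirs in artifact "
          + unpacked.getFileName() + ": " + unexpected);
    }
    System.out.println("artifact layout OK");
  }
}

A checker with a stale expected list fails as soon as a new file such as 
ivy-ignore-conflicts.properties appears in the source artifact, which is why 
running the current smoke tester from the 4.9 branch, as suggested above, 
avoids the failure.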

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Created] (LUCENE-5785) White space tokenizer has undocumented limit of 256 characters per token

2014-06-22 Thread Jack Krupansky (JIRA)
Jack Krupansky created LUCENE-5785:
--

 Summary: White space tokenizer has undocumented limit of 256 
characters per token
 Key: LUCENE-5785
 URL: https://issues.apache.org/jira/browse/LUCENE-5785
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
Reporter: Jack Krupansky
Priority: Minor


The white space tokenizer breaks tokens at 256 characters, which is a 
hard-wired limit of the character tokenizer abstract class.

The limit of 256 is obviously fine for normal, natural language text, but 
excessively restrictive for semi-structured data.

1. Document the current limit in the Javadoc for the character tokenizer. Add a 
note to any derived tokenizers (such as the white space tokenizer) that token 
size is limited as per the character tokenizer.

2. Add a setMaxTokenLength method to the character tokenizer, as the standard 
tokenizer already has, so that an application can control the limit. This 
should probably be added to the character tokenizer abstract class, so that the 
derived tokenizer classes inherit it.

3. Disallow a token size limit of 0.

4. A limit of -1 would mean no limit.

5. Add a method to set the token limit mode - skip (what the standard tokenizer 
does), break (the current behavior of the white space tokenizer and its derived 
tokenizers), and trim (what I think a lot of people might expect).

6. Not sure whether to change the current behavior of the character tokenizer 
(break mode) to match the standard tokenizer (skip mode), or to change it to 
trim mode, which is my preference and likely what people would expect.

7. Add matching attributes to the tokenizer factories for Solr, including Solr 
XML javadoc.

At a minimum, this issue should address the documentation problem.
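
To make the reported behavior concrete, here is a minimal sketch against the 
Lucene 4.8 analysis API. The field name and input text are made up, and the 
expected output lengths are taken from the description above rather than 
verified here.

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

// Illustrative only: feed one long whitespace-free token through the
// white space tokenizer and print the token lengths that come back.
public class WhitespaceTokenLimitDemo {
  public static void main(String[] args) throws Exception {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 300; i++) {
      sb.append('x'); // one 300-character "word", no whitespace
    }

    Analyzer analyzer = new WhitespaceAnalyzer(Version.LUCENE_48);
    TokenStream ts = analyzer.tokenStream("f", new StringReader(sb.toString()));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // Per the report above, this prints 256 and then 44 rather than 300.
      System.out.println("token length: " + term.length());
    }
    ts.end();
    ts.close();
    analyzer.close();
  }
}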




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 4.9.0

2014-06-22 Thread Shalin Shekhar Mangar
+1

Tested with java-1.8.0_05 on mac.

SUCCESS! [2:34:32.287329]


On Fri, Jun 20, 2014 at 5:43 PM, Robert Muir rcm...@gmail.com wrote:

 Artifacts here:
 http://people.apache.org/~rmuir/staging_area/lucene_solr_4_9_0_r1604085/

 Here's my +1

 SUCCESS! [0:35:36.654925]

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Regards,
Shalin Shekhar Mangar.