[jira] [Updated] (SOLR-9956) Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of documents

2017-01-10 Thread Zhu JiaJun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu JiaJun updated SOLR-9956:
-
Description: 
I'm using Solr 6.3.0. I indexed a large number of documents into one Solr 
collection with one shard; it's 60G on disk and holds around 2506889 
documents. 

I frequently get an ArrayIndexOutOfBoundsException when I send a simple stats 
request, for example:

http://localhost:8983/solr/staging-update/select?start=0&rows=0&version=2.2&q=*:*&stats=true&timeAllowed=6&wt=json&stats.field=asp_community_facet&stats.field=asp_group_facet

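A minimal SolrJ sketch of the same request, for anyone reproducing this (the 
collection name and stats fields are taken from the URL above; the rest is 
standard SolrJ):
{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.StatsParams;

public class StatsRepro {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/staging-update").build()) {
      SolrQuery query = new SolrQuery("*:*");
      query.setRows(0);                        // rows=0: stats only, no docs
      query.setTimeAllowed(6);                 // timeAllowed=6 as in the URL
      query.set(StatsParams.STATS, true);      // stats=true
      query.set(StatsParams.STATS_FIELD,       // stats.field=... (repeated param)
          "asp_community_facet", "asp_group_facet");
      QueryResponse rsp = client.query(query); // throws on the HTTP 500 shown below
      System.out.println(rsp.getFieldStatsInfo());
    }
  }
}
{code}
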
The Solr log captures the following exception, which also appears in the response, as below:
{code}
{
  "responseHeader": {
    "zkConnected": true,
    "status": 500,
    "QTime": 11,
    "params": {
      "q": "*:*",
      "stats": "true",
      "start": "0",
      "timeAllowed": "6",
      "rows": "0",
      "version": "2.2",
      "wt": "json",
      "stats.field": [
        "asp_community_facet",
        "asp_group_facet"
      ]
    }
  },
  "response": {
    "numFound": 2506211,
    "start": 0,
    "docs": [ ]
  },
  "error": {
    "msg": "28",
    "trace": "java.lang.ArrayIndexOutOfBoundsException: 28
        at org.apache.solr.request.DocValuesStats.accumMulti(DocValuesStats.java:213)
        at org.apache.solr.request.DocValuesStats.getCounts(DocValuesStats.java:135)
        at org.apache.solr.handler.component.StatsField.computeLocalStatsValues(StatsField.java:424)
        at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:58)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
        at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:518)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)",
    "code": 500
  }
}
{code}

I tried removing some documents, reducing the document count to 2334089, and 
then the query returned a correct response, like below:
{code}
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 154,
    "params": {
      "q": "*:*",
      "stats": "true",
      "start": "0",
      "timeAllowed": "6",
      "rows": "0",
      "version": "2.2",
      "wt": "json",
      "stats.field": [
        "asp_community_facet",
        "asp_group_facet"
      ]
    }
  },
  "response": {
    "numFound": 


[jira] [Created] (SOLR-9956) Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of documents

2017-01-10 Thread Zhu JiaJun (JIRA)
Zhu JiaJun created SOLR-9956:


 Summary: Solr java.lang.ArrayIndexOutOfBoundsException when 
indexed large amount of documents
 Key: SOLR-9956
 URL: https://issues.apache.org/jira/browse/SOLR-9956
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.3
 Environment: Ubuntu 14.04.4 LTS
Reporter: Zhu JiaJun
Priority: Critical


I'm using Solr 6.3.0. I indexed a large number of documents into one Solr 
collection with one shard; it's 60G on disk and holds around 2506889 
documents. 

I frequently get an ArrayIndexOutOfBoundsException when I send a simple stats 
request, for example:

http://localhost:8983/solr/staging-update/select?start=0&rows=0&version=2.2&q=*:*&stats=true&timeAllowed=6&wt=json&stats.field=asp_community_facet&stats.field=asp_group_facet

The Solr log captures the following exception, which also appears in the response, as below:
{code}
{
  "responseHeader": {
    "zkConnected": true,
    "status": 500,
    "QTime": 11,
    "params": {
      "q": "*:*",
      "stats": "true",
      "start": "0",
      "timeAllowed": "6",
      "rows": "0",
      "version": "2.2",
      "wt": "json",
      "stats.field": [
        "asp_community_facet",
        "asp_group_facet"
      ]
    }
  },
  "response": {
    "numFound": 2506211,
    "start": 0,
    "docs": [ ]
  },
  "error": {
    "msg": "28",
    "trace": "java.lang.ArrayIndexOutOfBoundsException: 28
        at org.apache.solr.request.DocValuesStats.accumMulti(DocValuesStats.java:213)
        at org.apache.solr.request.DocValuesStats.getCounts(DocValuesStats.java:135)
        at org.apache.solr.handler.component.StatsField.computeLocalStatsValues(StatsField.java:424)
        at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:58)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
        at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:518)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)",
    "code": 500
  }
}
{code}

I tried removing some documents, reducing the document count to 2334089, and 
then the query returned a correct response, like below:
{code}
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 154,
    "params": {

[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817430#comment-15817430
 ] 

ASF subversion and git services commented on SOLR-9918:
---

Commit 2f721048d4e9e35ba81ad574d3927cdba798ee24 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f72104 ]

SOLR-9918: Remove unused import to make precommit happy

(cherry picked from commit 2437204)


> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
>Assignee: Koji Sekiguchi
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so:
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor
>      class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.LogUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}
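
A hedged SolrJ sketch of how a client might exercise this chain (the 
update.chain value matches the initParams above; the per-request override 
parameter name skipUpdateIfMissing is assumed from the patch, not verified 
here):
{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class SkipExistingExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      doc.addField("title_s", "first version");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      // Route the update through the chain configured above.
      req.setParam("update.chain", "skipexisting");
      // Per-request override of a factory default (parameter name assumed
      // from the patch).
      req.setParam("skipUpdateIfMissing", "true");
      req.process(client);
      client.commit();
    }
  }
}
{code}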



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817428#comment-15817428
 ] 

ASF subversion and git services commented on SOLR-9918:
---

Commit 2437204730130dc8c03efb111ec7d4db456189ed in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2437204 ]

SOLR-9918: Remove unused import to make precommit happy


> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
>Assignee: Koji Sekiguchi
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so:
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor
>      class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.LogUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_112) - Build # 6350 - Failure!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6350/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([CEE43C2256778669:A65B090886ED9485]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18748 - Failure!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18748/
Java: 64bit/jdk1.8.0_112 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 67857 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj393125516
 [ecj-lint] Compiling 744 source files to /tmp/ecj393125516
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
 (at line 266)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
 (at line 316)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java
 (at line 146)
 [ecj-lint] HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
 [ecj-lint]  ^^^
 [ecj-lint] Resource leak: 'hdfsFactory' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java
 (at line 53)
 [ecj-lint] BasicAuthPlugin basicAuth = new BasicAuthPlugin();
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'basicAuth' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 163)
 [ecj-lint] SolrClient client = random().nextBoolean() ? collection1 : 
collection2;
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 221)
 [ecj-lint] throw new AssertionError(q.toString() + ": " + e.getMessage(), 
e);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 201)
 [ecj-lint] Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 204)
 [ecj-lint] OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 208)
 [ecj-lint] Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/response/TestBinaryResponseWriter.java
 (at line 63)
 [ecj-lint] NamedList res = (NamedList) new JavaBinCodec().unmarshal(new 
ByteArrayInputStream(baos.toByteArray()));
 [ecj-lint] ^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/response/TestJavabinTupleStreamParser.java
 (at line 72)
 [ecj-lint] JavabinTupleStreamParser parser = new 
JavabinTupleStreamParser(new ByteArrayInputStream(bytes), true);
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'parser' is never closed
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/response/TestJavabinTupleStreamParser.java
 (at line 173)
 [ecj-lint] 

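The recurring "[ecj-lint] Resource leak: ... is never closed" warnings in the 
build log above all point at Closeables opened without a guaranteed close. A 
generic sketch of the idiomatic fix, try-with-resources (plain JDK types, not 
the flagged test code itself):
{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class TryWithResources {
  public static void main(String[] args) throws IOException {
    // The reader is closed automatically when the block exits, whether
    // normally or via an exception, which is what ecj is asking for.
    try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
      System.out.println(reader.readLine());
    }
  }
}
{code}
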
Re: Solr Ref Guide, Highlighting

2017-01-10 Thread David Smiley
I made some progress... I'm having flashbacks to book-writing; ugh. If you
wish to help Cassandra, I think a great spot would be the "Basic Usage"
section I inserted.  I'll save that for last if nobody gets to it.  I need
to do the end of the page, and then add a UH page and deal with the other
highlighter pages.

On Tue, Jan 10, 2017 at 12:20 PM David Smiley 
wrote:

> Thanks for your input Cassandra.
> Within the code, DefaultSolrHighlighter.java can be renamed; it's not set
> in stone, unlike code in SolrJ where there needs to be much more care.  Not
> sure what it would be named though... anyway, it's another issue, perhaps
> for 7.0.
> I'll work on the ref guide tonight.
> ~ David
>
> On Tue, Jan 10, 2017 at 11:57 AM Cassandra Targett 
> wrote:
>
> (note: I replied to this thread earlier not noticing that dev@l.a.o
> was removed from the message I replied to...reposting the relevant
> part here for posterity or whatever...)
>
> [Regarding] reworking the Highlighting section, I'm +1 on the changes you
> propose, David. It's a bit of a mess, and not very consistent in the
> ways configuration options are described for each of the
> implementations.
>
> I generally prefer to name things along the lines they are named in
> the code, but in this case there's already a disconnect between
> "Standard Highlighter" and the DefaultSolrHighlighter. I wonder,
> though, if it would be a good idea to rename the
> DefaultSolrHighlighter? Perhaps it's too early to make such a change,
> but it's worth a moment's thought if you haven't already.
>
> Thanks for taking this on - I was briefly looking at UH yesterday and
> considering how to integrate it with the current docs. I didn't get
> very far, and found it a bit daunting, so I appreciate your assistance
> for sure. Please let me know if you need any help or review from me.
>
> On Mon, Jan 9, 2017 at 11:17 PM, David Smiley 
> wrote:
> > Unfortunately, the Solr Ref Guide is only editable by committers.  In the
> > near future it's going to move to a different platform that will allow you
> > to contribute via pull-request; that will be very nice.  In the meantime,
> > your feedback is highly appreciated.
> >
> > ~ David
> >
> > On Mon, Jan 9, 2017 at 6:21 PM Timothy Rodriguez (BLOOMBERG/ 120 PARK)
> >  wrote:
> >>
> >> +1, I'll be happy to offer assistance with edits or some of the sections
> >> if needed. We're glad to see this out there.
> >>
> >> From: dev@lucene.apache.org At: 01/09/17 18:03:32
> >> To: Timothy Rodriguez (BLOOMBERG/ 120 PARK), dev@lucene.apache.org
> >> Subject: Re:Solr Ref Guide, Highlighting
> >>
> >> Solr 6.4 is the first release to introduce the UnifiedHighlighter as a new
> >> highlighter option.  I want to get it documented reasonably well in the Solr
> >> Ref Guide.  The Highlighters section is here: Highlighting (let's see if
> >> this formatted email expands to the URL when it lands on the list)
> >>
> >> Unless anyone objects, I'd like to rename the "Standard Highlighter" as
> >> "Original Highlighter" in the ref guide.  The original Highlighter has no
> >> actual name qualifications as it was indeed Lucene's original Highlighter.
> >> "Standard Highlighter" as a name purely exists as-such within the Solr
> >> Reference Guide only.  In our code it's used by "DefaultSolrHighlighter"
> >> which is really a combo of the original Highlighter and
> >> FastVectorHighlighter.  DSH ought to be refactored perhaps... but I
> >> digress.
> >>
> >> For those that haven't read CHANGES.txt yet, there is a new "hl.method"
> >> parameter which can be used to pick your highlighter.  Here I purposely
> >> chose a possible value of "original" to choose the original Highlighter (not
> >> "standard").
> >>
> >> I haven't started documenting yet but I plan to refactor the highlighter
> >> docs a bit.  The intro page will better discuss the highlighter options and
> >> also how to configure both term vectors and offsets in postings.  Then the
> >> highlighter implementation specific pages will document the parameters and
> >> any configuration specific to them.  I'm a bit skeptical we need a page
> >> dedicated to the PostingsHighlighter as the UnifiedHighlighter is a
> >> derivative of it, supporting all its options and more.  In that sense,
> >> maybe people are fine with it only being in the ref guide as a paragraph or
> >> two on the UH page describing how to activate it.  I suppose it's
> >> effectively deprecated.
> >>
> >> ~ David
> >> --
> >> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> >> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> >> http://www.solrenterprisesearchserver.com
> >>
> >>
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> > http://www.solrenterprisesearchserver.com
>
> 
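
For readers following the thread: a short SolrJ sketch of picking a 
highlighter per request with the hl.method parameter discussed above (the 
collection and field names are illustrative; "original" and "unified" are the 
values mentioned in the thread):
{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class HlMethodExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery query = new SolrQuery("features:lucene");
      query.setHighlight(true);
      query.set("hl.fl", "features");
      // "original" picks the original Highlighter, "unified" the new
      // UnifiedHighlighter introduced in 6.4.
      query.set("hl.method", "unified");
      System.out.println(client.query(query).getHighlighting());
    }
  }
}
{code}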

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 611 - Failure!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/611/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 67939 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /var/tmp/ecj1036041760
 [ecj-lint] Compiling 745 source files to /var/tmp/ecj1036041760
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
 (at line 266)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
 (at line 316)
 [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java
 (at line 146)
 [ecj-lint] HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
 [ecj-lint]  ^^^
 [ecj-lint] Resource leak: 'hdfsFactory' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java
 (at line 53)
 [ecj-lint] BasicAuthPlugin basicAuth = new BasicAuthPlugin();
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'basicAuth' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 163)
 [ecj-lint] SolrClient client = random().nextBoolean() ? collection1 : 
collection2;
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 221)
 [ecj-lint] throw new AssertionError(q.toString() + ": " + e.getMessage(), 
e);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 201)
 [ecj-lint] Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 204)
 [ecj-lint] OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 208)
 [ecj-lint] Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]  ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/request/TestBinaryResponseWriter.java
 (at line 65)
 [ecj-lint] NamedList res = (NamedList) new JavaBinCodec().unmarshal(new 
ByteArrayInputStream(baos.toByteArray()));
 [ecj-lint] ^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/test/org/apache/solr/response/TestBinaryResponseWriter.java
 (at line 63)
 [ecj-lint] NamedList res = (NamedList) new JavaBinCodec().unmarshal(new 
ByteArrayInputStream(baos.toByteArray()));
 [ecj-lint] ^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (LUCENE-7623) Add FunctionScoreQuery and FunctionMatchQuery

2017-01-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817100#comment-15817100
 ] 

David Smiley commented on LUCENE-7623:
--

Looks alright but I didn't review thoroughly. I noticed one problem: 
{{TwoPhaseIterator.matchCost}} as implemented here isn't right.  It's supposed 
to be the match cost for a _single document_, thus returning maxDocs is 
definitely not the right response.  See the javadocs.  Unfortunately since 
DoubleValuesSource has no similar cost, you can't propagate... so might as well 
return some constant.  Judging from existing impls... anywhere between 10 and 
100 is good to me.
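
To make the matchCost contract concrete, a minimal sketch of a 
DoubleValues-backed TwoPhaseIterator that returns a small constant 
(illustrative class, not the attached patch):
{code}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.DoubleValues;
import org.apache.lucene.search.TwoPhaseIterator;

// Verifies a per-document range predicate over DoubleValues.
final class DoubleValuesTwoPhaseIterator extends TwoPhaseIterator {
  private final DoubleValues values;
  private final double min, max;

  DoubleValuesTwoPhaseIterator(DocIdSetIterator approximation,
                               DoubleValues values, double min, double max) {
    super(approximation);
    this.values = values;
    this.min = min;
    this.max = max;
  }

  @Override
  public boolean matches() throws IOException {
    // Per-document verification, called only for docs the approximation matched.
    if (values.advanceExact(approximation.docID()) == false) {
      return false;
    }
    double v = values.doubleValue();
    return v >= min && v <= max;
  }

  @Override
  public float matchCost() {
    // Cost of verifying ONE document, not maxDoc: DoubleValuesSource exposes
    // no cost to propagate, so return a constant in the 10-100 range.
    return 100f;
  }
}
{code}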

> Add FunctionScoreQuery and FunctionMatchQuery
> -
>
> Key: LUCENE-7623
> URL: https://issues.apache.org/jira/browse/LUCENE-7623
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7623.patch, LUCENE-7623.patch
>
>
> We should update the various function scoring queries to use the new 
> DoubleValues API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9955) Add cluster Streaming Expression

2017-01-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9955:
-
Description: 
This ticket will add the *cluster* Streaming Expression to hook into the 
carrot2 clustering handler. Real-time clustering will fit nicely into the 
Streaming Expression library and should benefit from being able to interact 
with other streams. 


  was:
This ticket will add the *cluster* Streaming Expression to hook into the 
carrot2 clustering handler. Real-time clustering will fit nicely into the 
Streaming Expression library and should benefit from being able to interact 
with other streams. For example, clustering can be used to seed a graph query:

{code}
gatherNodes(articles,
 cluster(articles, q="author:John Doe", ...),
 walk="cluster->articleText",
 ...)
{code}




> Add cluster Streaming Expression
> 
>
> Key: SOLR-9955
> URL: https://issues.apache.org/jira/browse/SOLR-9955
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket will add the *cluster* Streaming Expression to hook into the 
> carrot2 clustering handler. Real-time clustering will fit nicely into the 
> Streaming Expression library and should benefit from being able to interact 
> with other streams. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9955) Add cluster Streaming Expression

2017-01-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9955:
-
Description: 
This ticket will add the *cluster* Streaming Expression to hook into the 
carrot2 clustering handler. Real-time clustering will fit nicely into the 
Streaming Expression library and should benefit from being able to interact 
with other streams. For example, clustering can be used to seed a graph query:

{code}
gatherNodes(articles,
 cluster(articles, q="author:John Doe", ...),
 walk="cluster->articleText",
 ...)
{code}



  was:
This ticket will add the *cluster* Streaming Expression to hook into the 
carrot2 clustering handler. Real-time clustering will fit nicely into the 
Streaming Expression library and should benefit from being able to interact 
with other streams.




> Add cluster Streaming Expression
> 
>
> Key: SOLR-9955
> URL: https://issues.apache.org/jira/browse/SOLR-9955
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket will add the *cluster* Streaming Expression to hook into the 
> carrot2 clustering handler. Real-time clustering will fit nicely into the 
> Streaming Expression library and should benefit from being able to interact 
> with other streams. For example, clustering can be used to seed a graph query:
> {code}
> gatherNodes(articles,
>  cluster(articles, q="author:John Doe", ...),
>  walk="cluster->articleText",
>  ...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9947) Miscellaneous metrics cleanup

2017-01-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817069#comment-15817069
 ] 

David Smiley commented on SOLR-9947:


The existing SolrInfoMBeans hierarchy isn't what we would want it to be if we 
were to start over.  It's good to see a refactor like this.  Is it possible to 
register beans in both the legacy location and the new location as a 
transition?  And then in the UI we'd not show the legacy ones.

> Miscellaneous metrics cleanup
> -
>
> Key: SOLR-9947
> URL: https://issues.apache.org/jira/browse/SOLR-9947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9947.patch
>
>
> Misc cleanup in metrics API to fix:
> # metrics reporting themselves under the wrong category
> # core container metrics are without a category



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+147) - Build # 18747 - Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18747/
Java: 64bit/jdk-9-ea+147 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard

Error Message:
Error from server at https://127.0.0.1:41774/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41774/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([4D55B458402C50FB:880B0044C6D99A06]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard(CollectionsAPISolrJTest.java:100)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-9955) Add cluster Streaming Expression

2017-01-10 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9955:


 Summary: Add cluster Streaming Expression
 Key: SOLR-9955
 URL: https://issues.apache.org/jira/browse/SOLR-9955
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket will add the *cluster* Streaming Expression to hook into the 
carrot2 clustering handler. Real-time clustering will fit nicely into the 
Streaming Expression library and should benefit from being able to interact 
with other streams.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1071 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1071/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([47218B8E512301D4:CF75B454FFDF6C2C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-10 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi resolved SOLR-9918.
--
   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

Thanks, Tim!

> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
>Assignee: Koji Sekiguchi
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so:
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}
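
For illustration, a minimal SolrJ sketch of sending documents through such a chain (the endpoint, collection name, and field names here are assumptions for the example, not part of the patch):

{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class SkipExistingExample {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint and collection; adjust for your setup.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      doc.addField("title_s", "first insert wins");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      // Route the update through the chain configured above; if "doc-1"
      // already exists, the processor quietly skips this insert.
      req.setParam("update.chain", "skipexisting");
      req.process(client, "mycollection");
      client.commit("mycollection");
    }
  }
}
{code}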



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817007#comment-15817007
 ] 

ASF subversion and git services commented on SOLR-9918:
---

Commit 2979a1eacd916201548303245f81705da7f9cc36 in lucene-solr's branch 
refs/heads/branch_6x from koji
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2979a1e ]

SOLR-9918: Add SkipExistingDocumentsProcessor that skips duplicate inserts and 
ignores updates to missing docs


> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
>Assignee: Koji Sekiguchi
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so:
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816965#comment-15816965
 ] 

ASF subversion and git services commented on SOLR-9918:
---

Commit d66bfba5dc1bd9154bd48898865f51d9715e8d0c in lucene-solr's branch 
refs/heads/master from koji
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d66bfba ]

SOLR-9918: Add SkipExistingDocumentsProcessor that skips duplicate inserts and 
ignores updates to missing docs


> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
>Assignee: Koji Sekiguchi
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so:
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816959#comment-15816959
 ] 

Cao Manh Dat commented on SOLR-9835:


I created a new ReplicationHandler for the replication process, so different 
threads will not share the same instance.
{code}
replicationProcess = new ReplicationHandler();
replicationProcess.init(replicationConfig);
replicationProcess.inform(core);
{code}

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across all 
> replicas, so every replica ends up in the same next state. But this type of 
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, 
> it has to download the entire index from its leader.
> So we created another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commit and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816951#comment-15816951
 ] 

Tomás Fernández Löbbe commented on SOLR-9835:
-

bq. I don't see any reason why masterUrl needs to be volatile; the 
IndexFetcher instance is not shared across threads.
Isn't it being used by the ReplicationHandler? Different requests would use the 
same instance from different threads, right? 
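
For what it's worth, a minimal sketch of the visibility hazard being discussed (class and field names are illustrative, not the actual IndexFetcher code): if one request thread writes the field while another reads it, the reader may see a stale value unless the field is volatile.

{code}
// Illustrative only -- not the actual Solr code.
public class FetcherSketch {
  // Marked volatile so a fetch() on one request thread is guaranteed to
  // see a masterUrl written by setMasterUrl() on another thread.
  private volatile String masterUrl;

  public void setMasterUrl(String url) {
    this.masterUrl = url;
  }

  public void fetch() {
    String url = masterUrl; // volatile read: observes the latest write
    System.out.println("fetching from " + url);
  }
}
{code}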

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across all 
> replicas, so every replica ends up in the same next state. But this type of 
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, 
> it has to download the entire index from its leader.
> So we created another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commit and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816918#comment-15816918
 ] 

Cao Manh Dat commented on SOLR-9835:


Thanks a lot for your comments!

bq. Maybe add a method to DocCollection like {{isOnlyLeaderIndexes()}}
As you know, we won't use {{isOnlyLeaderIndexes()}} in the future (it won't 
have enough information), so I just don't want to add a public method on 
DocCollection (SolrJ) and then remove it later.
bq. Does this need to be synchronized?
Yeah, I think it does.
bq. should masterUrl now be volatile?
I don't see any reason why masterUrl needs to be volatile; the 
IndexFetcher instance is not shared across threads.
bq. In many cases in the tests the leader will change before the replication 
happens, right? Does it make sense to discover the leader inside of the loop? 
Also, is there a way to remove that Thread.sleep(1000) at the beginning? This 
code will be called very frequently in tests.
That's a good idea.
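
A rough sketch of that approach (assumed helper names, not the actual test code): poll with an overall timeout and re-evaluate the condition, and thus the current leader, on each iteration instead of one fixed Thread.sleep(1000).

{code}
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Hypothetical test helper, not committed code.
public final class WaitUtil {
  public static void waitFor(BooleanSupplier condition, long timeoutMs)
      throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (System.nanoTime() < deadline) {
      // The supplier can look up the current leader each time it runs,
      // so a leader change mid-test is handled naturally.
      if (condition.getAsBoolean()) {
        return;
      }
      Thread.sleep(100); // short poll instead of one long fixed sleep
    }
    throw new AssertionError("condition not met within " + timeoutMs + "ms");
  }
}
{code}

A test would then call something like {{waitFor(() -> replicaInSyncWith(findLeader()), 30000)}}, where both helpers are assumed names.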



> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across all 
> replicas, so every replica ends up in the same next state. But this type of 
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, 
> it has to download the entire index from its leader.
> So we created another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commit and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-9954.
--
Resolution: Fixed

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9954:
-
Fix Version/s: master (7.0)

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816816#comment-15816816
 ] 

ASF GitHub Bot commented on SOLR-9954:
--

Github user thelabdude closed the pull request at:

https://github.com/apache/lucene-solr/pull/137


> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #137: SOLR-9954: Prevent against failure during fai...

2017-01-10 Thread thelabdude
Github user thelabdude closed the pull request at:

https://github.com/apache/lucene-solr/pull/137


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9954:
-
Fix Version/s: 6.4

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 6.4
>
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816810#comment-15816810
 ] 

ASF subversion and git services commented on SOLR-9954:
---

Commit f36a493d55bb9ed5676710146dcf3c51c7983ea6 in lucene-solr's branch 
refs/heads/branch_6x from [~thelabdude]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f36a493 ]

SOLR-9954: Prevent against failure during failed snapshot cleanup from 
swallowing the actual cause for the snapshot to fail.


> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1609 - Unstable

2017-01-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1609/

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
	at __randomizedtesting.SeedInfo.seed([5E4BE84580A3DFD7:36F4DD6F5039CD3B]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
	at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at 

[jira] [Commented] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816777#comment-15816777
 ] 

ASF subversion and git services commented on SOLR-9954:
---

Commit 118fc422d0cff8492db99edccb3d73068cf04b52 in lucene-solr's branch 
refs/heads/master from [~thelabdude]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=118fc42 ]

SOLR-9954: Prevent against failure during failed snapshot cleanup from 
swallowing the actual cause for the snapshot to fail.


> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816698#comment-15816698
 ] 

Varun Thacker commented on SOLR-9954:
-

+1 for the patch

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816670#comment-15816670
 ] 

Timothy Potter commented on SOLR-9954:
--

I'd like to include this in 6.4 -> 
https://github.com/apache/lucene-solr/pull/137


> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816665#comment-15816665
 ] 

ASF GitHub Bot commented on SOLR-9954:
--

GitHub user thelabdude opened a pull request:

https://github.com/apache/lucene-solr/pull/137

SOLR-9954: Prevent against failure during failed snapshot cleanup fro…

…m swallowing the actual cause for the snapshot to fail.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/lucene-solr jira/solr-9954

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #137


commit 4856550b45548f376b6292aeef4d501fd3d85fd2
Author: Timothy Potter 
Date:   2017-01-11T00:33:50Z

SOLR-9954: Prevent against failure during failed snapshot cleanup from 
swallowing the actual cause for the snapshot to fail.




> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #137: SOLR-9954: Prevent against failure during fai...

2017-01-10 Thread thelabdude
GitHub user thelabdude opened a pull request:

https://github.com/apache/lucene-solr/pull/137

SOLR-9954: Prevent against failure during failed snapshot cleanup fro…

…m swallowing the actual cause for the snapshot to fail.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/lucene-solr jira/solr-9954

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #137


commit 4856550b45548f376b6292aeef4d501fd3d85fd2
Author: Timothy Potter 
Date:   2017-01-11T00:33:50Z

SOLR-9954: Prevent against failure during failed snapshot cleanup from 
swallowing the actual cause for the snapshot to fail.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2017-01-10 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816600#comment-15816600
 ] 

Markus Jelsma commented on SOLR-8542:
-

I agree with Cassandra, because it also allows for confusion with the 
reranking post filter. Machine Learned Ranking covers the topic nicely, I 
believe.

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> 
> Solr Reference Guide documentation:
> * https://cwiki.apache.org/confluence/display/solr/Result+Reranking
> Source code and README files:
> * 
> [solr/contrib/ltr|https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr]
> * 
> [solr/contrib/ltr/example|https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr/example]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816584#comment-15816584
 ] 

Tomás Fernández Löbbe commented on SOLR-9835:
-

As I said, I understand the reason why it was done this way, but the code right 
now is only using the count to identify the mode. In the future, the count 
alone won't be enough in any of those sections to determine what to do; you'll 
have to identify the mode first and then take some further action (e.g. am I in 
the list of replicas that index, or not?).

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across all 
> replicas, so every replica ends up in the same next state. But this type of 
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, 
> it has to download the entire index from its leader.
> So we created another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commit and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816564#comment-15816564
 ] 

Noble Paul commented on SOLR-9835:
--

It will have a proper count in the future. A mixed mode has to be there 
eventually, and an enum can't accommodate that.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across all 
> replicas, so every replica ends up in the same next state. But this type of 
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, 
> it has to download the entire index from its leader.
> So we created another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commit and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816552#comment-15816552
 ] 

Timothy Potter edited comment on SOLR-9954 at 1/10/17 11:47 PM:


Here's a patch (against 6.2.1 tag) that logs the delete as a warning and allows 
the actual exception to propagate out of this method correctly. I'll work a PR 
through to 6x from master...


was (Author: thelabdude):
Here's a patch that logs the delete as a warning and allows the actual 
exception to propagate out of this method correctly.

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then deleteDirectory is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> 	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> 	at org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> 	at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> 	at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> 	at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown Source)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9954:
-
Attachment: SOLR-9954.patch

Here's a patch that logs the delete as a warning and allows the actual 
exception to propagate out of this method correctly.
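
The shape of the fix looks roughly like the following sketch (the logger name is an assumption; see the attached patch for the actual change):

{code}
} finally {
  if (!success) {
    try {
      backupRepo.deleteDirectory(snapshotDirPath);
    } catch (Exception e) {
      // Log the cleanup failure instead of letting it escape the finally
      // block, so the original exception from createSnapshot propagates.
      LOG.warn("Unable to delete snapshot directory " + snapshotDirPath +
          " due to: " + e);
    }
  }
}
{code}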

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9954.patch
>
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then the deleteDirectory call is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 
> x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; 
> Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not 
> initialized.
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> at 
> org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> at 
> org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> at 
> org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown 
> Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread Yago Riveiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816535#comment-15816535
 ] 

Yago Riveiro commented on SOLR-9835:


bq. how about getLiveReplicasCount() ?

If I'm reading the code and find a method called getLiveReplicasCount(), I'd 
expect it to return the number of live replicas for a shard; if the only values 
it can return are 1 for onlyLeaderIndexes and -1 for everything else, that's 
not a good name.

Something like 
{{zkStateReader.getClusterState().getCollection(collection).getReplicationMode()}}, 
which returns an enum (ONLY_LEADER_INDEXES, ALL_REPLICAS_INDEXES) or something 
like that.
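
As a rough sketch of that suggestion (the names below are only illustrative, 
nothing here is in the patch; {{getLiveReplicas()}} is the accessor the current 
patch already uses):

{code}
// Hypothetical addition to DocCollection:
public enum ReplicationMode { ONLY_LEADER_INDEXES, ALL_REPLICAS_INDEXES }

public ReplicationMode getReplicationMode() {
  // liveReplicas == 1 is how the patch currently flags the new mode
  return getLiveReplicas() == 1
      ? ReplicationMode.ONLY_LEADER_INDEXES
      : ReplicationMode.ALL_REPLICAS_INDEXES;
}
{code}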



> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> down time, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to the IW; the other replicas just store the update 
> in the UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9954:
-
Component/s: (was: kup)
 Hadoop Integration

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then the deleteDirectory call is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 
> x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; 
> Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not 
> initialized.
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> at 
> org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> at 
> org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> at 
> org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown 
> Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-9954:


Assignee: Timothy Potter

> SnapShooter createSnapshot can swallow an exception raised by the underlying 
> backup repo
> 
>
> Key: SOLR-9954
> URL: https://issues.apache.org/jira/browse/SOLR-9954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: kup
>Affects Versions: 6.2.1, 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>
> While configuring the HdfsBackupRepository to use Google compute storage, I 
> misconfigured the permissions on my bucket. Unfortunately, the exception that 
> would have pointed me in the right direction gets squelched by the finally 
> block in createSnapshot:
> {code}
> } finally {
>   if (!success) {
> backupRepo.deleteDirectory(snapshotDirPath);
>   }
> }
> {code}
> If there's a permissions issue, then the deleteDirectory call is going to fail and 
> raise another exception from the finally block, which swallows the original 
> exception. For example:
> {code}
> ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 
> x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; 
> Exception while creating snapshot
> java.io.IOException: GoogleHadoopFileSystem has been closed or not 
> initialized.
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
> at 
> org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
> at 
> org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
> at 
> org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
> at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown 
> Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7625) Support Multiple (AND) Context Filter Query in Suggestor

2017-01-10 Thread jefferyyuan (JIRA)
jefferyyuan created LUCENE-7625:
---

 Summary: Support Multiple (AND) Context Filter Query in Suggestor
 Key: LUCENE-7625
 URL: https://issues.apache.org/jira/browse/LUCENE-7625
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/suggest
Reporter: jefferyyuan


Just as with a normal query, we usually want to apply multiple filter queries 
when running auto-completion.

It would be great if the suggester could return (the title of) documents that 
are meaningful to the current user in cases where we need multiple filters; a 
rough sketch follows.
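
For example, something along these lines with Lucene's suggest module (the AND 
operator at the end is hypothetical and is what this issue asks for -- today 
{{ContextQuery.addContext}} only ORs contexts together; {{analyzer}} and the 
field name are assumed):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.suggest.document.ContextQuery;
import org.apache.lucene.search.suggest.document.PrefixCompletionQuery;

// Current behavior: contexts added to a ContextQuery are OR'ed together.
PrefixCompletionQuery prefix =
    new PrefixCompletionQuery(analyzer, new Term("suggest_field", "mach"));
ContextQuery query = new ContextQuery(prefix);
query.addContext("groupA");   // suggestions tagged groupA ...
query.addContext("regionX");  // ... OR regionX

// Desired (hypothetical API): require ALL context filters to match, e.g.
// only suggestions visible to the current user's group AND region:
// query.setContextOperator(ContextQuery.Operator.AND);
{code}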

Thanks
Jeffery Yuan



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9954) SnapShooter createSnapshot can swallow an exception raised by the underlying backup repo

2017-01-10 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-9954:


 Summary: SnapShooter createSnapshot can swallow an exception 
raised by the underlying backup repo
 Key: SOLR-9954
 URL: https://issues.apache.org/jira/browse/SOLR-9954
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: kup
Affects Versions: 6.3, 6.2.1
Reporter: Timothy Potter


While configuring the HdfsBackupRepository to use Google compute storage, I 
misconfigured the permissions on my bucket. Unfortunately, the exception that 
would have pointed me in the right direction gets squelched by the finally 
block in createSnapshot:

{code}
} finally {
  if (!success) {
backupRepo.deleteDirectory(snapshotDirPath);
  }
}
{code}

If there's a permissions issue, then the deleteDirectory call is going to fail and 
raise another exception from the finally block, which swallows the original 
exception. For example:

{code}
ERROR - 2017-01-10 18:38:52.650; [c:gettingstarted s:shard1 r:core_node1 
x:gettingstarted_shard1_replica1] org.apache.solr.handler.SnapShooter; 
Exception while creating snapshot
java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
at 
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
at 
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.delete(GoogleHadoopFileSystemBase.java:1255)
at 
org.apache.solr.core.backup.repository.HdfsBackupRepository.deleteDirectory(HdfsBackupRepository.java:160)
at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:234)
at 
org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$1(SnapShooter.java:186)
at org.apache.solr.handler.SnapShooter$$Lambda$89/43739789.run(Unknown 
Source)
at java.lang.Thread.run(Thread.java:745)
{code}

That's merely the symptom and not the actual cause of the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 668 - Still Unstable

2017-01-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/668/

1 tests failed.
FAILED:  org.apache.solr.update.HardAutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([60F36972CEF8A74:BCDD59EFAFC16461]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.update.HardAutoCommitTest.testCommitWithin(HardAutoCommitTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was: q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:814)
... 40 more




Build Log:
[...truncated 11940 lines...]
   [junit4] Suite: org.apache.solr.update.HardAutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2017-01-10 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816376#comment-15816376
 ] 

Cassandra Targett commented on SOLR-8542:
-

[~cpoerschke]: About the docs in the Ref Guide - thanks, by the way! - I've 
started to take a look and will have more feedback, but for now I'm wondering if 
there is a reason why you didn't name the page in the Ref Guide something like 
"Learning to Rank" or "Machine Learned Ranking"? The current name feels like it 
hides the true topic of the page, but I haven't studied the topic enough to know 
if there is a reason for doing that in this case.

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> 
> Solr Reference Guide documentation:
> * https://cwiki.apache.org/confluence/display/solr/Result+Reranking
> Source code and README files:
> * 
> [solr/contrib/ltr|https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr]
> * 
> [solr/contrib/ltr/example|https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr/example]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816355#comment-15816355
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit f1bd0f462456b4aa8273b394247c6acbadf9fa3b in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f1bd0f4 ]

SOLR-8029: added a testcase for absolute paths returned by _introspect api
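
For anyone following along, the {{_introspect}} suffix asks a v2 endpoint to 
describe the APIs it exposes. A hedged example request (host and collection 
name assumed):

{code}
http://localhost:8983/v2/c/gettingstarted/_introspect
{code}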


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or out of sync with the widely followed conventions of the HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to, and they 
> will eventually be deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on the cluster not pertaining to any 
> collection or core, e.g. security, overseer ops etc.
> This will be released as part of a major release. Check the link given below 
> for the full specification. Your comments are welcome:
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816283#comment-15816283
 ] 

Tomás Fernández Löbbe commented on SOLR-9835:
-

I'm just saying: we use the count to detect the mode, and everywhere we are 
getting the count and inferring the mode from it (count==1 -> onlyLeaderIndexes, 
count==-1 -> default mode). Instead of that, let's put that logic inside the 
DocCollection and ask it for the mode. Anyway, this is not too important, just a 
suggestion.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> down time, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to the IW; the other replicas just store the update 
> in the UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816241#comment-15816241
 ] 

Noble Paul commented on SOLR-9835:
--

bq. Maybe add a method to DocCollection like isOnlyLeaderIndexes() (or choose 
another name)?

how about {{getLiveReplicasCount()}} ?

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> down time, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to the IW; the other replicas just store the update 
> in the UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9953) Add solr factory for Axiomatic Similarity

2017-01-10 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9953:
--

 Summary: Add solr factory for Axiomatic Similarity
 Key: SOLR-9953
 URL: https://issues.apache.org/jira/browse/SOLR-9953
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


LUCENE-7466 added the {{Axiomatic}} Similarity class, but there is currently no 
Solr {{SimilarityFactory}} for it (and it has no zero-arg constructors), so it 
can't currently be used in Solr.

Adding a factory for this Sim should be fairly trivial to do.
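
A minimal sketch of what such a factory might look like (assuming the 
AxiomaticF1EXP variant and its two-float (s, k) constructor; a real factory 
would read the variant and parameter values from the init args rather than 
hardcoding them):

{code}
import org.apache.lucene.search.similarities.AxiomaticF1EXP;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.solr.schema.SimilarityFactory;

public class AxiomaticSimilarityFactory extends SimilarityFactory {
  @Override
  public Similarity getSimilarity() {
    // s and k values are assumed here purely for illustration.
    return new AxiomaticF1EXP(0.25f, 0.35f);
  }
}
{code}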



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5170) Spatial multi-value distance sort via DocValues

2017-01-10 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816053#comment-15816053
 ] 

Jeff Wartes commented on SOLR-5170:
---

Well, yes, I'm interested. I've got enough other work projects going at the 
moment that I'm not sure I'll be able to dedicate much time in the next month or 
two, but I wouldn't mind trying to chip away at it.

I don't want to pollute this issue, so if you have a few minutes, and could 
drop me an email with any pointers about the code areas involved, or references 
to any prior art you're aware of, I expect that'd accelerate things a lot. 
Thanks.

> Spatial multi-value distance sort via DocValues
> ---
>
> Key: SOLR-5170
> URL: https://issues.apache.org/jira/browse/SOLR-5170
> Project: Solr
>  Issue Type: New Feature
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch.txt
>
>
> The attached patch implements spatial multi-value distance sorting.  In other 
> words, a document can have more than one point per field, and, using a 
> provided function query, it will return the distance to the closest point.  
> The data goes into binary DocValues, and as such it's pretty friendly to 
> realtime search requirements, and it only uses 8 bytes per point.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+147) - Build # 2637 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2637/
Java: 32bit/jdk-9-ea+147 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:201)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:344)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:689)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:906)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.(SolrCore.java:846)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:201)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:475)  
at org.apache.solr.core.SolrCore.(SolrCore.java:900)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:201)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:97)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:721)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:906)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 

[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815930#comment-15815930
 ] 

Tomás Fernández Löbbe commented on SOLR-9835:
-

Great idea! I just took a quick look at the patch to understand this better. I 
have a couple of questions/comments; I know this is work in progress, so feel 
free to disregard any of my comments if you are already working on them:

{code}
onlyLeaderIndexes = 
zkStateReader.getClusterState().getCollection(collection).getLiveReplicas() == 
1;
{code}
Maybe add a method to DocCollection like {{isOnlyLeaderIndexes()}} (or choose 
another name)? I understand why you did this, but this code is repeated many 
times; maybe it can be improved.
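
Something like this, perhaps (the method name is just a placeholder; 
{{getLiveReplicas()}} is the accessor used in the snippet above):

{code}
// Hypothetical helper on DocCollection:
public boolean isOnlyLeaderIndexes() {
  return getLiveReplicas() == 1;
}
{code}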

{code}
private Map replicateFromLeaders = new HashMap<>();
{code}
Does this need to be synchronized?

{code}
-  private final String masterUrl;
+  private String masterUrl;
{code}
should {{masterUrl}} now be volatile?

{code}
+  public static boolean waitForInSyncWithLeader(SolrCore core, Replica 
leaderReplica) throws InterruptedException {
+if (waitForReplicasInSync == null) return true;
+
+Pair pair = parseValue(waitForReplicasInSync);
+boolean enabled = pair.first();
+if (!enabled) return true;
+
+Thread.sleep(1000);
+HttpSolrClient leaderClient = new 
HttpSolrClient.Builder(leaderReplica.getCoreUrl()).build();
+long leaderVersion = -1;
+String localVersion = null;
+try {
+  for (int i = 0; i < pair.second(); i++) {
+if (core.isClosed()) return true;
+ModifiableSolrParams params = new ModifiableSolrParams();
+params.set(CommonParams.QT, ReplicationHandler.PATH);
+params.set(COMMAND, CMD_DETAILS);
+
+NamedList response = leaderClient.request(new 
QueryRequest(params));
+leaderVersion = (long) 
((NamedList)response.get("details")).get("indexVersion");
+
+localVersion = 
core.getDeletionPolicy().getLatestCommit().getUserData().get(SolrIndexWriter.COMMIT_TIME_MSEC_KEY);
+if (localVersion == null && leaderVersion == 0) return true;
+
+if (localVersion != null && Long.parseLong(localVersion) == 
leaderVersion) {
+  return true;
+} else {
+  Thread.sleep(500);
+}
+  }
+
+} catch (Exception e) {
+  log.error("Exception when wait for replicas in sync with master");
+} finally {
+  try {
+if (leaderClient != null) leaderClient.close();
+  } catch (IOException e) {
+e.printStackTrace();
+  }
+}
+
+return false;
+  }

{code}
In many cases in the tests the leader will change before the replication 
happens, right? Does it make sense to discover the leader inside the loop? 
Also, is there a way to remove that Thread.sleep(1000) at the beginning? This 
code will be called very frequently in tests.
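
Something like this, maybe (a rough sketch only; {{inSyncWith}} is a 
hypothetical helper standing in for the version comparison above, and 
{{attempts}} for the retry count):

{code}
for (int i = 0; i < attempts; i++) {
  if (core.isClosed()) return true;
  // Re-discover the leader on every iteration, since it may change mid-test.
  Replica leader = zkStateReader.getLeaderRetry(collection, shard);
  if (inSyncWith(core, leader)) return true;  // hypothetical version check
  Thread.sleep(500);  // back off only after a failed check, no up-front sleep
}
return false;
{code}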

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> down time, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to the IW; the other replicas just store the update 
> in the UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and updates.
> - Very fast recovery: replicas just have to download the missing segments.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 253 - Still Unstable

2017-01-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/253/

5 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([EE0D0F6DCA1B554C:3CFD438E94B4F37E]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:305)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 610 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/610/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor145.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:930)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor145.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)
at org.apache.solr.core.SolrCore.(SolrCore.java:930)
at org.apache.solr.core.SolrCore.(SolrCore.java:823)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([1D981EDE44192233]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9941:
---
Attachment: SOLR-9941.patch

bq. In TestRecovery, we kinda mixed up single instance code path along with 
SolrCloud mode code path, so we end up with the fail.

Yeah, that was the piece I wasn't clear about -- makes sense.

Based on Dat's comments, I re-did my previous idea as a new independent 
{{testNewDBQAndDocMatchingOldDBQDuringLogReplay}} method:

* the "client" only sends regular updates (w/o fake versions) as it would to a 
leader/single-node instance
* confirms that a DBQ which arrives during recovery is processed correctly, 
even if the docs being deleted haven't been added yet as part of recovery
* Also includes a DBQ in the tlog which may (randomly) be applied before/after 
the new DBQ arrives
** just to sanity check that the new delete doesn't muck with the replay of old 
deletes
* the DBQ from the tlog also matches a doc which isn't added until recovery is 
in process (randomly before/after the DBQ from the tlog is applied) to verify 
that doc isn't deleted by mistake

...I've added this new test method to the patch to help with coverage of the 
affected code paths.

bq. Unless someone has any objections or suggests some modifications, I'd like 
to commit this after the 6.4 branch is cut (or alternatively, commit this to 
master and wait for backporting after the 6.4 branch is cut).

The sooner it gets committed to master, the sooner Jenkins starts helping us do 
randomized testing -- since Dat thinks the change makes sense and there are no 
objections, I would encourage committing to master early and waiting to 
backport.



> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, causing deletes to be 
> redundantly & excessively applied -- at a minimum it produces really 
> confusing log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 90 of 
> those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is that 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
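
For the second point, a hedged sketch of what "stop pre-emptively applying" 
might look like in DirectUpdateHandler2 (not taken from any attached patch; the 
flag check is an assumption, reusing the {{UpdateLog.getDBQNewer}} call 
described above):

{code}
// Skip the reordered-DBQ pre-application when the add comes from log replay,
// since those DBQs will be applied anyway as part of normal tlog replay.
if ((cmd.getFlags() & UpdateCommand.REPLAY) == 0) {
  deletesAfter = ulog.getDBQNewer(cmd.getVersion());
}
{code}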



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18744 - Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18744/
Java: 64bit/jdk1.8.0_112 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([F48048D67DF56907:9C3F7DFCAD6F7BEB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8760) PeerSync replay of ADDs older than ourLowThreshold interacting with DBQs to stall new leadership

2017-01-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815617#comment-15815617
 ] 

Christine Poerschke commented on SOLR-8760:
---

Tentatively linked as related to the SOLR-9941 issue.

> PeerSync replay of ADDs older than ourLowThreshold interacting with DBQs to 
> stall new leadership
> 
>
> Key: SOLR-8760
> URL: https://issues.apache.org/jira/browse/SOLR-8760
> Project: Solr
>  Issue Type: Bug
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: solr-8760-fixA.patch, solr-8760-fixB.patch
>
>
> When we are doing rolling restarts of our Solr servers, we are sometimes 
> hitting painfully long times without a shard leader. What happens is that a 
> new leader is elected, but first needs to fully sync old updates before it 
> assumes the leadership role and accepts new updates. The syncing process is 
> taking unusually long because of an interaction between having one of our 
> hourly garbage collection DBQs in the update logs and the replaying of old 
> ADDs. If there is a single DBQ, and 1000 older ADDs that are getting 
> replayed, then the DBQ is replayed 1000 times, instead of once. This itself 
> may be hard to fix. But the thing that is easier to fix is that most of the 
> ADDs being replayed shouldn't need to be replayed in the first place, 
> since they are older than ourLowThreshold.
> The problem can be fixed by eliminating, or by modifying, the way that the 
> "completeList" term is used to affect the PeerSync lists.
> We propose two alternatives to fix this:
> FixA: Based on my possibly incomplete understanding of PeerSync, the 
> completeList term should be eliminated. If updates older than ourLowThreshold 
> need to be replayed, then aren't all the prerequisites for PeerSync violated, 
> and hence shouldn't we fall back to SnapPull? (My gut suspects that a later bug 
> fix to PeerSync fixed whatever issue completeList was trying to deal with.)
> FixB: The patch that added the completeList term mentions that it is needed 
> for the replay of some DELETEs. Well, if that is true and we do need to 
> replay some DELETEs older than ourLowThreshold, then there is still no need 
> to replay any ADDs older than ourLowThreshold, right??
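
A minimal, runnable sketch of the FixB idea (illustrative only: the Update 
class, the replayList method, and the version numbers are made up and are not 
Solr's PeerSync API):

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FixBSketch {
  static class Update {
    final long version;
    final boolean isDelete;
    Update(long version, boolean isDelete) { this.version = version; this.isDelete = isDelete; }
    @Override public String toString() { return (isDelete ? "DELETE@" : "ADD@") + version; }
  }

  // FixB: keep DELETEs even when older than ourLowThreshold, but drop old ADDs.
  static List<Update> replayList(List<Update> candidates, long ourLowThreshold) {
    List<Update> out = new ArrayList<>();
    for (Update u : candidates) {
      if (u.version >= ourLowThreshold || u.isDelete) {
        out.add(u);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<Update> log = Arrays.asList(
        new Update(10, false),   // older ADD: skipped under FixB
        new Update(12, true),    // older DELETE: still replayed
        new Update(25, false));  // recent ADD: replayed
    System.out.println(replayList(log, 20));  // [DELETE@12, ADD@25]
  }
}
{code}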



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815615#comment-15815615
 ] 

Christine Poerschke commented on SOLR-9941:
---

Tentatively linking SOLR-8760 and SOLR-9941 as related (but haven't yet had 
chance to catchup on SOLR-9941 here to see if it would solve SOLR-8760 also or 
indeed if the issues are different).

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
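
The arithmetic above as a tiny runnable illustration (numbers taken from the 
description; this is a made-up tally, not Solr code):

{code}
public class TlogReplayDbqTally {
  public static void main(String[] args) {
    int adds = 90;                      // addDocs replayed from the tlog
    int dbqs = 5;                       // DBQs that follow them in the tlog
    int executionsPerDbq = adds + 1;    // 90 pre-applies (once per add) + 1 normal replay
    System.out.println("executions per DBQ: " + executionsPerDbq);           // 91
    System.out.println("total DBQ executions: " + dbqs * executionsPerDbq);  // 455
  }
}
{code}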



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9947) Miscellaneous metrics cleanup

2017-01-10 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9947:

Attachment: SOLR-9947.patch

Current patch with the following changes:
* simplified UPDATEHANDLER and QUERYHANDLER to UPDATE and QUERY respectively. 
This also allows us to put into this category those plugins that are not 
handlers.
* added ADMIN and CONTAINER. The first category covers all admin handlers, the 
other covers CoreContainer-specific handlers.
* moved some handlers to UPDATE and REPLICATION

This seems to bring much more consistency and transparency into where plugins 
logically belong, as opposed to them being {{RequestHandlerBase}} subclasses, 
which is an implementation detail. If there are no objections I'd like to 
commit it soon, so that it makes it into 6.4.

> Miscellaneous metrics cleanup
> -
>
> Key: SOLR-9947
> URL: https://issues.apache.org/jira/browse/SOLR-9947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9947.patch
>
>
> Misc cleanup in metrics API to fix:
> # metrics reporting themselves under the wrong category
> # core container metrics are without a category



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1070 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1070/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
must have failed

Stack Trace:
java.lang.AssertionError: must have failed
at 
__randomizedtesting.SeedInfo.seed([476CF8A00ED578A9:FB028EB2AA86FBD3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12733 lines...]
   [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.security.BasicAuthIntegrationTest_476CF8A00ED578A9-001/init-core-data-001
   [junit4]   2> 2598954 INFO  

[jira] [Updated] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-10 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9941:
---
Attachment: SOLR-9941.patch

Thanks [~caomanhdat]. Updated the patch, removed the conditional parameter for 
startup.

Unless someone has any objections or suggests some modifications, I'd like to 
commit this after the 6.4 branch is cut (or alternatively, commit this to 
master and wait for backporting after the 6.4 branch is cut).

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9950) TestRecovery.testBuffering() failure

2017-01-10 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-9950.
-
Resolution: Fixed

> TestRecovery.testBuffering() failure
> 
>
> Key: SOLR-9950
> URL: https://issues.apache.org/jira/browse/SOLR-9950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: policeman-jenkins-master-windows-6347-failed-tests.log.gz
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347], 
> reproduces 100% for me on Linux:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
> -Dtests.method=testBuffering -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=no -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s J1 | TestRecovery.testBuffering <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<6> but 
> was:<10>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
>[junit4]>  at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {_version_=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
>  val_i=PostingsFormat(name=Direct), id=PostingsFormat(name=Direct)}, 
> docValues:{}, maxPointsInLeafNode=1974, maxMBSortInHeap=7.099504359147245, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=no, 
> timezone=America/Rainy_River
>[junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_112 
> (64-bit)/cpus=3,threads=1,free=213046664,total=411041792
> {noformat}
> Another test failure that on the same run doesn't reproduce for me, but these 
> two tests were running on the same JVM, and so may have somehow influenced 
> each other:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RecoveryZkTest 
> -Dtests.method=test -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=da -Dtests.timezone=EAT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 12.2s J1 | RecoveryZkTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Mismatch in counts 
> between replicas
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9950) TestRecovery.testBuffering() failure

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815571#comment-15815571
 ] 

ASF subversion and git services commented on SOLR-9950:
---

Commit fc0bdeff2e165f430451944165cf336d57ab4b20 in lucene-solr's branch 
refs/heads/branch_6x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fc0bdef ]

SOLR-9950 Check the difference in counts - meter may not be zero at this point.
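
A minimal sketch of that idea, assuming Dropwizard Metrics (which Solr's 
metrics build on); the meter name and counts here are made up:

{code}
import com.codahale.metrics.Meter;

public class MeterDeltaCheck {
  public static void main(String[] args) {
    Meter replayOps = new Meter();
    replayOps.mark(3);                    // pre-existing activity: meter is not zero

    long before = replayOps.getCount();   // snapshot instead of assuming 0
    replayOps.mark(6);                    // activity under test

    long delta = replayOps.getCount() - before;
    if (delta != 6) {
      throw new AssertionError("expected:<6> but was:<" + delta + ">");
    }
    System.out.println("delta = " + delta);
  }
}
{code}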


> TestRecovery.testBuffering() failure
> 
>
> Key: SOLR-9950
> URL: https://issues.apache.org/jira/browse/SOLR-9950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: policeman-jenkins-master-windows-6347-failed-tests.log.gz
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347], 
> reproduces 100% for me on Linux:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
> -Dtests.method=testBuffering -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=no -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s J1 | TestRecovery.testBuffering <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<6> but 
> was:<10>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
>[junit4]>  at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {_version_=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
>  val_i=PostingsFormat(name=Direct), id=PostingsFormat(name=Direct)}, 
> docValues:{}, maxPointsInLeafNode=1974, maxMBSortInHeap=7.099504359147245, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=no, 
> timezone=America/Rainy_River
>[junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_112 
> (64-bit)/cpus=3,threads=1,free=213046664,total=411041792
> {noformat}
> Another test failure that on the same run doesn't reproduce for me, but these 
> two tests were running on the same JVM, and so may have somehow influenced 
> each other:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RecoveryZkTest 
> -Dtests.method=test -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=da -Dtests.timezone=EAT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 12.2s J1 | RecoveryZkTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Mismatch in counts 
> between replicas
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9950) TestRecovery.testBuffering() failure

2017-01-10 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815575#comment-15815575
 ] 

Andrzej Bialecki  commented on SOLR-9950:
-

Thanks Steve for confirming this. Jenkins seems to be happy, too, at least 
about these tests :)

> TestRecovery.testBuffering() failure
> 
>
> Key: SOLR-9950
> URL: https://issues.apache.org/jira/browse/SOLR-9950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: policeman-jenkins-master-windows-6347-failed-tests.log.gz
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347], 
> reproduces 100% for me on Linux:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
> -Dtests.method=testBuffering -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=no -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s J1 | TestRecovery.testBuffering <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<6> but 
> was:<10>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
>[junit4]>  at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {_version_=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
>  val_i=PostingsFormat(name=Direct), id=PostingsFormat(name=Direct)}, 
> docValues:{}, maxPointsInLeafNode=1974, maxMBSortInHeap=7.099504359147245, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=no, 
> timezone=America/Rainy_River
>[junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_112 
> (64-bit)/cpus=3,threads=1,free=213046664,total=411041792
> {noformat}
> Another test failure that on the same run doesn't reproduce for me, but these 
> two tests were running on the same JVM, and so may have somehow influenced 
> each other:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RecoveryZkTest 
> -Dtests.method=test -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=da -Dtests.timezone=EAT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 12.2s J1 | RecoveryZkTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Mismatch in counts 
> between replicas
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr Ref Guide, Highlighting

2017-01-10 Thread David Smiley
Thanks for your input, Cassandra.
Within the code, DefaultSolrHighlighter.java can be renamed; it's not set
in stone, unlike code in SolrJ where there needs to be much more care.  Not
sure what it would be named though... anyway, it's another issue, perhaps
for 7.0.
I'll work on the ref guide tonight.
~ David

On Tue, Jan 10, 2017 at 11:57 AM Cassandra Targett 
wrote:

> (note: I replied to this thread earlier not noticing that dev@l.a.o
> was removed from the message I replied to...reposting the relevant
> part here for posterity or whatever...)
>
> [Regarding] reworking the Highlighting section, I'm +1 on the changes you
> propose, David. It's a bit of a mess, and not very consistent in the
> ways configuration options are described for each of the
> implementations.
>
> I generally prefer to name things along the lines they are named in
> the code, but in this case there's already a disconnect between
> "Standard Highlighter" and the DefaultSolrHighlighter. I wonder,
> though, if it would be a good idea to rename the
> DefaultSolrHighlighter? Perhaps it's too early to make such a change,
> but it's worth a moment's thought if you haven't already.
>
> Thanks for taking this on - I was briefly looking at UH yesterday and
> considering how to integrate it with the current docs. I didn't get
> very far, and found it a bit daunting, so I appreciate your assistance
> for sure. Please let me know if you need any help or review from me.
>
> On Mon, Jan 9, 2017 at 11:17 PM, David Smiley 
> wrote:
> > Unfortunately, the Solr Ref Guide is only editable by committers.  In the
> > near future it's going to move to a different platform that will allow
> you
> > to contribute via pull-request; that will be very nice.  In the mean
> time,
> > your feedback is highly appreciated.
> >
> > ~ David
> >
> > On Mon, Jan 9, 2017 at 6:21 PM Timothy Rodriguez (BLOOMBERG/ 120 PARK)
> >  wrote:
> >>
> >> +1, I'll be happy to offer assistance with edits or some of the sections
> >> if needed. We're glad to see this out there.
> >>
> >> From: dev@lucene.apache.org At: 01/09/17 18:03:32
> >> To: Timothy Rodriguez (BLOOMBERG/ 120 PARK), dev@lucene.apache.org
> >> Subject: Re:Solr Ref Guide, Highlighting
> >>
> >> Solr 6.4 is the first release to introduce the UnifiedHighlighter as a
> new
> >> highlighter option.  I want to get it documented reasonably well in the
> Solr
> >> Ref Guide.  The Highlighters section is here: Highlighting   (let's see
> if
> >> this formatted email expands to the URL when it lands on the list)
> >>
> >> Unless anyone objects, I'd like to rename the "Standard Highlighter" as
> >> "Original Highlighter" in the ref guide.  The original Highlighter has
> no
> >> actual name qualifications as it was indeed Lucene's original
> Highlighter.
> >> The name "Standard Highlighter" exists as such only within the Solr
> >> Reference Guide.  In our code it's used by "DefaultSolrHighlighter"
> >> which is really a combo of the original Highlighter and
> >> FastVectorHighlighter.   DSH ought to be refactored perhaps... but I
> >> digress.
> >>
> >> For those that haven't read CHANGES.txt yet, there is a new "hl.method"
> >> parameter which can be used to pick your highlighter.  Here I purposely
> >> chose a possible value of "original" to choose the original Highlighter
> (not
> >> "standard").
> >>
> >> I haven't started documenting yet but I plan to refactor the highlighter
> >> docs a bit.  The intro page will better discuss the highlighter options
> and
> >> also how to configure both term vectors and offsets in postings.  Then
> the
> >> highlighter implementation specific pages will document the parameters
> and
> >> any configuration specific to them.  I'm a bit skeptical we need a page
> >> dedicated to the PostingsHighlighter as the UnifiedHighlighter is a
> >> derivative of it, supporting all its options and more.  In that sense,
> >> maybe people are fine with it only being in the ref guide as a
> paragraph or
> >> two on the UH page describing how to activate it.  I suppose it's
> >> effectively deprecated.
> >>
> >> ~ David
> >> --
> >> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> >> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> >> http://www.solrenterprisesearchserver.com
> >>
> >>
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> > http://www.solrenterprisesearchserver.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
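
An illustrative request selecting a highlighter via the new parameter 
(hypothetical collection and field names):

    http://localhost:8983/solr/mycoll/select?q=text:solr&hl=true&hl.fl=text&hl.method=unified

swapping hl.method among unified, original, fastVector, and postings to 
compare the implementations.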


[jira] [Commented] (SOLR-9950) TestRecovery.testBuffering() failure

2017-01-10 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815537#comment-15815537
 ] 

Steve Rowe commented on SOLR-9950:
--

After your commit, Andrzej, this passes for me on master on Linux:

{noformat}
ant test  -Dtestcase=TestRecovery -Dtests.seed=416C60950450F681 
-Dtests.slow=true -Dtests.locale=no -Dtests.timezone=America/Rainy_River 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8 -Dtests.dups=4 -Dtests.iters=10
{noformat}

> TestRecovery.testBuffering() failure
> 
>
> Key: SOLR-9950
> URL: https://issues.apache.org/jira/browse/SOLR-9950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: policeman-jenkins-master-windows-6347-failed-tests.log.gz
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347], 
> reproduces 100% for me on Linux:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
> -Dtests.method=testBuffering -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=no -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s J1 | TestRecovery.testBuffering <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<6> but 
> was:<10>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
>[junit4]>  at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {_version_=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
>  val_i=PostingsFormat(name=Direct), id=PostingsFormat(name=Direct)}, 
> docValues:{}, maxPointsInLeafNode=1974, maxMBSortInHeap=7.099504359147245, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=no, 
> timezone=America/Rainy_River
>[junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_112 
> (64-bit)/cpus=3,threads=1,free=213046664,total=411041792
> {noformat}
> Another test failure that on the same run doesn't reproduce for me, but these 
> two tests were running on the same JVM, and so may have somehow influenced 
> each other:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RecoveryZkTest 
> -Dtests.method=test -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=da -Dtests.timezone=EAT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 12.2s J1 | RecoveryZkTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Mismatch in counts 
> between replicas
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+147) - Build # 2636 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2636/
Java: 32bit/jdk-9-ea+147 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
document count mismatch.  control=515 sum(shards)=514 cloudClient=514

Stack Trace:
java.lang.AssertionError: document count mismatch.  control=515 sum(shards)=514 
cloudClient=514
at 
__randomizedtesting.SeedInfo.seed([F5CBE1029FB9FF6F:7D9FDED831459297]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1323)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:228)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: Solr Ref Guide, Highlighting

2017-01-10 Thread Cassandra Targett
(note: I replied to this thread earlier not noticing that dev@l.a.o
was removed from the message I replied to...reposting the relevant
part here for posterity or whatever...)

[Regarding] reworking the Highlighting section, I'm +1 on the changes you
propose, David. It's a bit of a mess, and not very consistent in the
ways configuration options are described for each of the
implementations.

I generally prefer to name things along the lines they are named in
the code, but in this case there's already a disconnect between
"Standard Highlighter" and the DefaultSolrHighlighter. I wonder,
though, if it would be a good idea to rename the
DefaultSolrHighlighter? Perhaps it's too early to make such a change,
but it's worth a moment's thought if you haven't already.

Thanks for taking this on - I was briefly looking at UH yesterday and
considering how to integrate it with the current docs. I didn't get
very far, and found it a bit daunting, so I appreciate your assistance
for sure. Please let me know if you need any help or review from me.

On Mon, Jan 9, 2017 at 11:17 PM, David Smiley  wrote:
> Unfortunately, the Solr Ref Guide is only editable by committers.  In the
> near future it's going to move to a different platform that will allow you
> to contribute via pull-request; that will be very nice.  In the mean time,
> your feedback is highly appreciated.
>
> ~ David
>
> On Mon, Jan 9, 2017 at 6:21 PM Timothy Rodriguez (BLOOMBERG/ 120 PARK)
>  wrote:
>>
>> +1, I'll be happy to offer assistance with edits or some of the sections
>> if needed. We're glad to see this out there.
>>
>> From: dev@lucene.apache.org At: 01/09/17 18:03:32
>> To: Timothy Rodriguez (BLOOMBERG/ 120 PARK), dev@lucene.apache.org
>> Subject: Re:Solr Ref Guide, Highlighting
>>
>> Solr 6.4 is the first release to introduce the UnifiedHighlighter as a new
>> highlighter option.  I want to get it documented reasonably well in the Solr
>> Ref Guide.  The Highlighters section is here: Highlighting   (let's see if
>> this formatted email expands to the URL when it lands on the list)
>>
>> Unless anyone objects, I'd like to rename the "Standard Highlighter" as
>> "Original Highlighter" in the ref guide.  The original Highlighter has no
>> actual name qualifications as it was indeed Lucene's original Highlighter.
>> The name "Standard Highlighter" exists as such only within the Solr
>> Reference Guide.  In our code it's used by "DefaultSolrHighlighter"
>> which is really a combo of the original Highlighter and
>> FastVectorHighlighter.   DSH ought to be refactored perhaps... but I
>> digress.
>>
>> For those that haven't read CHANGES.txt yet, there is a new "hl.method"
>> parameter which can be used to pick your highlighter.  Here I purposely
>> chose a possible value of "original" to choose the original Highlighter (not
>> "standard").
>>
>> I haven't started documenting yet but I plan to refactor the highlighter
>> docs a bit.  The intro page will better discuss the highlighter options and
>> also how to configure both term vectors and offsets in postings.  Then the
>> highlighter implementation specific pages will document the parameters and
>> any configuration specific to them.  I'm a bit skeptical we need a page
>> dedicated to the PostingsHighlighter as the UnifiedHighlighter is a
>> derivative of it, supporting all its options and more.  In that sense,
>> maybe people are fine with it only being in the ref guide as a paragraph or
>> two on the UH page describing how to activate it.  I suppose it's
>> effectively deprecated.
>>
>> ~ David
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread Timo Hund (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815448#comment-15815448
 ] 

Timo Hund commented on SOLR-9584:
-

Thx!

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});
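
A hypothetical, standalone illustration (plain JDK, not Solr or Angular code) 
of why the relative form survives a custom context path while the absolute 
form does not:

{code}
import java.net.URI;

public class ContextPathDemo {
  public static void main(String[] args) {
    URI adminPage = URI.create("http://localhost:8983/my-context/index.html");
    // relative: resolves under whatever context the UI is served from
    System.out.println(adminPage.resolve("admin/info/system"));
    //   -> http://localhost:8983/my-context/admin/info/system
    // absolute: always pinned to /solr, breaking a customized context
    System.out.println(adminPage.resolve("/solr/admin/info/system"));
    //   -> http://localhost:8983/solr/admin/info/system
  }
}
{code}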



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9947) Miscellaneous metrics cleanup

2017-01-10 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  reassigned SOLR-9947:
---

Assignee: Andrzej Bialecki   (was: Shalin Shekhar Mangar)

> Miscellaneous metrics cleanup
> -
>
> Key: SOLR-9947
> URL: https://issues.apache.org/jira/browse/SOLR-9947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (7.0), 6.4
>
>
> Misc cleanup in metrics API to fix:
> # metrics reporting themselves under the wrong category
> # core container metrics are without a category



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7268) Add a script to pipe data from other programs or files to Solr using SolrJ

2017-01-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815112#comment-15815112
 ] 

Jan Høydahl commented on SOLR-7268:
---

{{post.jar}} already reads from stdin if you pass {{-Ddata=stdin}}. It does not 
use SolrJ though, but perhaps it is time for bin/post to start using SolrJ?

The open-ended {{-=}} is scary if a request handler's 
param overlaps with script args. 

> Add a script to pipe data from other programs or files to Solr using SolrJ
> --
>
> Key: SOLR-7268
> URL: https://issues.apache.org/jira/browse/SOLR-7268
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> I should be able to pipe JSON/XML/CSV or whatever is possible at the 
> {{/update/*}} to a command which in turn uses SolrJ to send the docs to the 
> correct leader in native format. 
> In the following examples, all connection details of the cluster are put into 
> a file called solrj.properties
> example :
> {noformat}
> #post a file
> cat myjson.json | bin/post -c gettingstarted -s http://localhost:8983/solr 
> #or a producer program
> myprogram | bin/post  -c gettingstarted -s http://localhost:8983/solr
> {noformat}
> The behavior of the script would be exactly the same as if I were 
> to post the request directly to Solr at the specified {{qt}}. Every 
> parameter the request handler accepts would be accepted in 
> {{-=}} format. The same things could be put into a 
> properties file called {{indexer.properties}} and be passed as a -p 
> parameter. The script would expect the following extra properties: {{zk.url}} 
> for cloud or {{solr.url}} for standalone.
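
A minimal sketch of how such a pipe could look with SolrJ (illustrative only: 
the base URL, collection name, and update path are assumptions, and this is 
not the proposed bin/post implementation):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.ContentStreamBase;

public class StdinPost {
  public static void main(String[] args) throws Exception {
    // Read the whole piped payload from stdin.
    String body = new BufferedReader(
        new InputStreamReader(System.in, StandardCharsets.UTF_8))
        .lines().collect(Collectors.joining("\n"));

    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Send the payload to /update as a JSON content stream, then commit.
      ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update");
      ContentStreamBase.StringStream stream = new ContentStreamBase.StringStream(body);
      stream.setContentType("application/json");
      req.addContentStream(stream);
      req.setParam("commit", "true");
      client.request(req, "gettingstarted");
    }
  }
}
{code}

Usage would mirror the examples above, e.g. {{cat myjson.json | java StdinPost}}.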



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9947) Miscellaneous metrics cleanup

2017-01-10 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815089#comment-15815089
 ] 

Andrzej Bialecki  commented on SOLR-9947:
-

We can put "infrastructure" handlers into other categories, especially those 
that weren't previously reported via JMX anyway.

Moving {{SolrConfigHandler}} from OTHER would slightly break back-compat, since 
this value is also reported in the old JMX interface as a bean attribute; the 
same goes for moving /update or /replication handlers from QUERYHANDLER. I'm not 
sure whether it's a big deal or not - maybe [~otis] or [~wunder] could comment 
on this?

> Miscellaneous metrics cleanup
> -
>
> Key: SOLR-9947
> URL: https://issues.apache.org/jira/browse/SOLR-9947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
>
> Misc cleanup in metrics API to fix:
> # metrics reporting themselves under the wrong category
> # core container metrics are without a category



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-9584:
-

Assignee: Jan Høydahl

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9584.
---
Resolution: Fixed

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815082#comment-15815082
 ] 

ASF GitHub Bot commented on SOLR-9584:
--

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/86


> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #86: SOLR-9584 - use relative URL path instead of a...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/86


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9584:
--
Fix Version/s: master (7.0)

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815079#comment-15815079
 ] 

ASF subversion and git services commented on SOLR-9584:
---

Commit f99c9676325c1749e570b9337a8c67a089d1fb28 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f99c967 ]

SOLR-9584: Support Solr being proxied with another endpoint than default /solr
This closes #86 - see original commit e0b4caccd3312b011cdfbb3951ea43812486ca98


> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3769 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3769/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([4A39EBB288C3A0B6:C26DD468263FCD4E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2635 - Still Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2635/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([BE20B04FB42C1A79:83F81E638CC24409]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12437 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithKerberosAlt
   [junit4]   2> 1650607 WARN  
(TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[BE20B04FB42C1A79]) [] 
o.a.d.s.c.DefaultDirectoryService You didn't change the admin password of 
directory service instance 'DefaultKrbServer'.  Please update the admin 
password as soon as possible to prevent a possible security breach.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestSolrCloudWithKerberosAlt -Dtests.method=testBasics 
-Dtests.seed=BE20B04FB42C1A79 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=sq -Dtests.timezone=Canada/East-Saskatchewan 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   11.0s J1 | TestSolrCloudWithKerberosAlt.testBasics <<<
   [junit4]> Throwable #1: java.net.BindException: Address already in use
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([BE20B04FB42C1A79:83F81E638CC24409]:0)
   [junit4]>at sun.nio.ch.Net.bind0(Native Method)
   [junit4]>at sun.nio.ch.Net.bind(Net.java:433)
   [junit4]>at sun.nio.ch.Net.bind(Net.java:425)
   [junit4]>at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   [junit4]>at 
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   [junit4]>at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
   [junit4]>at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
   [junit4]>at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
   [junit4]>at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
   [junit4]>at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
   [junit4]>at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.TestSolrCloudWithKerberosAlt_BE20B04FB42C1A79-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
docValues:{}, maxPointsInLeafNode=549, maxMBSortInHeap=6.417035164865636, 
sim=RandomSimilarity(queryNorm=true,coord=crazy): {}, locale=sq, 
timezone=Canada/East-Saskatchewan
   [junit4]   2> NOTE: Linux 4.4.0-53-generic i386/Oracle Corporation 1.8.0_112 
(32-bit)/cpus=12,threads=1,free=244676024,total=451674112
   [junit4]   2> NOTE: All tests run in this JVM: [SliceStateTest, 
TestCollationFieldDocValues, BitVectorTest, TestSchemaSimilarityResource, 
CSVRequestHandlerTest, MultiThreadedOCPTest, DistributedSuggestComponentTest, 
SuggestComponentTest, TestSQLHandlerNonCloud, 
LeaderInitiatedRecoveryOnCommitTest, BJQParserTest, 
TestAuthenticationFramework, CollectionReloadTest, RecoveryAfterSoftCommitTest, 
TestSegmentSorting, StressHdfsTest, 

[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814974#comment-15814974
 ] 

Jan Høydahl commented on SOLR-9584:
---

I'm testing locally and plan to commit to master. Then, some time after 6.4, we 
can backport to 6.x.

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814972#comment-15814972
 ] 

Upayavira commented on SOLR-9584:
-

The original intention of SOLR-9000 still stands. We didn't bother supporting 
non-/solr paths, because Solr is moving away from being a webapp.

However, this particular patch is pretty innocuous, and doesn't appear to 
change much, so just like [~janhoy], I think this would be a reasonable patch 
to apply.

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[jira] [Created] (SOLR-9952) S3BackupRepository

2017-01-10 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9952:
--

 Summary: S3BackupRepository
 Key: SOLR-9952
 URL: https://issues.apache.org/jira/browse/SOLR-9952
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mikhail Khludnev


I'd like to have a backup repository implementation that allows snapshotting to AWS S3.
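Not part of the issue text, but for context: backup repositories are pluggable through solr.xml, so an S3 implementation would presumably be registered much like the existing HdfsBackupRepository. A hypothetical sketch, where the class name and the {{<str>}} properties are assumptions for illustration only:

{code}
<!-- Hypothetical solr.xml snippet: S3BackupRepository does not exist yet;
     the class name and properties below are placeholders modeled on how
     HdfsBackupRepository is registered. -->
<backup>
  <repository name="s3"
              class="org.apache.solr.core.backup.repository.S3BackupRepository"
              default="false">
    <str name="s3.bucket">my-solr-backups</str>
    <str name="s3.region">us-east-1</str>
  </repository>
</backup>
{code}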






[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread Timo Hund (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814934#comment-15814934
 ] 

Timo Hund commented on SOLR-9584:
-

I've checked the patch on my system, and with these adaptations I could use the 
new AngularJS UI again.

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-10 Thread Tommaso Teofili
Welcome Dat!

Regards,
Tommaso

On Mon, Jan 9, 2017 at 16:57, Joel Bernstein wrote:

> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links: <
> http://www.apache.org/dev/>.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>


[jira] [Commented] (SOLR-9947) Miscellaneous metrics cleanup

2017-01-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814835#comment-15814835
 ] 

Shalin Shekhar Mangar commented on SOLR-9947:
-

A few of the things to fix:
# The config handler is under the "OTHERS" category
# The /update/* handlers are under "QUERYHANDLER"
# The container-level metrics, such as the "core.*" gauges, 
threadPool.coreContainerWorkExecutor, and threadPool.coreLoadExecutor, have no 
category, unlike other metrics. Perhaps we can introduce a CONTAINER category?
# InfoHandler, which is created in CoreContainer, has sub-handlers such as 
/admin/logging, /admin/threads, etc. that show up in each core. I am not sure 
why.

I think we are too tied to SolrInfoMBean and the categories that already 
exist. It feels wrong for a new API to use outdated category names. For 
example, "/replication" is not a query handler; it is something else entirely. 
The same goes for core admin handlers and similar. I am not sure how to fix 
this without breaking back-compat with existing monitoring solutions. What do 
you think [~ab]?
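As a concrete illustration of where these category names surface (a sketch against the 6.4 metrics API; the example metric key is an assumption):

{noformat}
# Fetch core-level metrics. Handler metrics are keyed by their SolrInfoMBean
# category, which is how an update handler can show up under QUERYHANDLER,
# e.g. a key like QUERYHANDLER./update.requests
curl "http://localhost:8983/solr/admin/metrics?wt=json&group=core"
{noformat}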

> Miscellaneous metrics cleanup
> -
>
> Key: SOLR-9947
> URL: https://issues.apache.org/jira/browse/SOLR-9947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
>
> Misc cleanup in metrics API to fix:
> # metrics reporting themselves under the wrong category
> # core container metrics are without a category






[jira] [Resolved] (SOLR-9951) FileAlreadyExistsException on replication.properties

2017-01-10 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma resolved SOLR-9951.
-
Resolution: Duplicate

> FileAlreadyExistsException on replication.properties
> 
>
> Key: SOLR-9951
> URL: https://issues.apache.org/jira/browse/SOLR-9951
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: master (7.0), 6.4
>
>
> Just spotted this one right after restarting two nodes. Only one node logged 
> the error. It's a single shard with two replicas. The exception was logged 
> for all three active cores:
> {code}
> java.nio.file.FileAlreadyExistsException: 
> /var/lib/solr/core_shard1_replica1/data/replication.properties
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>   at 
> java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>   at java.nio.file.Files.newOutputStream(Files.java:216)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:157)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:675)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:487)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:156)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:408)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:221)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-9951) FileAlreadyExistsException on replication.properties

2017-01-10 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814814#comment-15814814
 ] 

Markus Jelsma commented on SOLR-9951:
-

Yes, it indeed is, thanks. I searched Jira for FileAlreadyExistsException but I 
got no results.

> FileAlreadyExistsException on replication.properties
> 
>
> Key: SOLR-9951
> URL: https://issues.apache.org/jira/browse/SOLR-9951
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: master (7.0), 6.4
>
>
> Just spotted this one right after restarting two nodes. Only one node logged 
> the error. It's a single shard with two replicas. The exception was logged 
> for all three active cores:
> {code}
> java.nio.file.FileAlreadyExistsException: 
> /var/lib/solr/core_shard1_replica1/data/replication.properties
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>   at 
> java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>   at java.nio.file.Files.newOutputStream(Files.java:216)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:157)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:675)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:487)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:156)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:408)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:221)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-9951) FileAlreadyExistsException on replication.properties

2017-01-10 Thread Yago Riveiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814797#comment-15814797
 ] 

Yago Riveiro commented on SOLR-9951:


Isn't this a duplicate of 
[SOLR-9859|https://issues.apache.org/jira/browse/SOLR-9859]?

> FileAlreadyExistsException on replication.properties
> 
>
> Key: SOLR-9951
> URL: https://issues.apache.org/jira/browse/SOLR-9951
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: master (7.0), 6.4
>
>
> Just spotted this one right after restarting two nodes. Only one node logged 
> the error. It's a single shard with two replicas. The exception was logged 
> for all three active cores:
> {code}
> java.nio.file.FileAlreadyExistsException: 
> /var/lib/solr/core_shard1_replica1/data/replication.properties
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>   at 
> java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>   at java.nio.file.Files.newOutputStream(Files.java:216)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:157)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:675)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:487)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:156)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:408)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:221)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-10 Thread Timo Hund (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814790#comment-15814790
 ] 

Timo Hund commented on SOLR-9584:
-

It's not my patch, but I can test it if that helps get it integrated.

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[jira] [Created] (SOLR-9951) FileAlreadyExistsException on replication.properties

2017-01-10 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-9951:
---

 Summary: FileAlreadyExistsException on replication.properties
 Key: SOLR-9951
 URL: https://issues.apache.org/jira/browse/SOLR-9951
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.3
Reporter: Markus Jelsma
Priority: Minor
 Fix For: master (7.0), 6.4


Just spotted this one right after restarting two nodes. Only one node logged 
the error. It's a single shard with two replicas. The exception was logged for 
all three active cores:

{code}
java.nio.file.FileAlreadyExistsException: 
/var/lib/solr/core_shard1_replica1/data/replication.properties
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at 
java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at java.nio.file.Files.newOutputStream(Files.java:216)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:413)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:409)
at 
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
at 
org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:157)
at 
org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:675)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:487)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
at 
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:156)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:408)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:221)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-10 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Congrats and welcome!

From: dev@lucene.apache.org At: 01/10/17 11:29:42
To: dev@lucene.apache.org
Subject: Re: Welcome Cao Manh Dat as a Lucene/Solr committer

Congrats Đạt. Very well deserved!

On Tue, Jan 10, 2017 at 10:02 AM, Shalin Shekhar Mangar 
 wrote:

Congratulations and welcome Đạt!

On Mon, Jan 9, 2017 at 9:27 PM, Joel Bernstein  wrote:
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links:
> .
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/


--
Regards,
Shalin Shekhar Mangar.





Re: JDK 9 EA Build 151 is available on java.net

2017-01-10 Thread Rory O'Donnell

Thanks for the updates Uwe, see you at FOSDEM.

Rgds,Rory


On 10/01/2017 11:09, Uwe Schindler wrote:


Hello Rory,

I will be on FOSDEM, I also have a talk there: 
https://fosdem.org/2017/schedule/event/jigsaw_challenges/


I downloaded build 151 already, but we have to wait a bit until the 
new Groovy 2.4.8 release is out (the release vote is ongoing and should 
finish today/tomorrow), as our build system breaks because of the 
build 148 Jigsaw changes.


The other problem with unmapping memory mapped byte buffers was fixed 
in 150, thanks (https://bugs.openjdk.java.net/browse/JDK-8171377, 
interestingly the changelog of build 150 on the jdk9.java.net web site 
is incorrect, as it only shows the Jigsaw changes, not the main build 
ones. The Jigsaw changelog on the Jigsaw download page is identical! 
build 151 is fine again - but we are missing the build 150 changes web 
page which got lost). Elasticsearch is also working on fixing the 
b148+ mmap issues: https://github.com/elastic/elasticsearch/issues/22495


Also because of the build 148 changes, some Solr tests using mocking 
frameworks like Mockito or EasyMock need to be disabled. The reason 
is problems in the well-known library Cglib, where I opened an issue: 
https://github.com/cglib/cglib/issues/93 Mocking frameworks are 
still a hot topic and many people are complaining. I see no good 
solution here, especially because Solr uses multiple mocking 
frameworks and some of them are no longer updated. So disabling 
tests is the only solution! Once all of this is done, I will update 
the Jenkins CI server to use the recent JDK9 builds. Currently, I can 
only test manually on my local machine.


Otherwise, building Lucene’s Javadocs is still broken, because the 
javadoc tool crashes with an exception. The issue is: 
https://bugs.openjdk.java.net/browse/JDK-8157611 - I think it should 
be fixed before the release of Java 9!


Uwe

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de 

eMail: u...@thetaphi.de

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com]
Sent: Tuesday, January 10, 2017 11:26 AM
To: Uwe Schindler
Cc: rory.odonn...@oracle.com; Dalibor Topic; Balchandra Vaidya; Muneer Kolarkunnu; Dawid Weiss; dev@lucene.apache.org

Subject: JDK 9 EA Build 151 is available on java.net

Hi Uwe & Dawid,

Best wishes for the New Year.

Dalibor and I will be at FOSDEM '17, Brussels 4 & 5 February. Let us 
know if you will be there, hopefully we can meet up !


JDK 9 Early Access b151 is available on java.net


Can you confirm fixes for

 1. JDK-8171377 : Add sun.misc.Unsafe::invokeCleaner
 2. JDK-8075793 : Source incompatibility for inference using -source 7


There have been a number of fixes to bugs reported by Open Source 
projects since the last availability email:


  * JDK-8087303 : LSSerializer pretty print does not work anymore
  * JDK-8167143 : CLDR timezone parsing does not work for all locales

Other changes that may be of interest:

  * JDK-8066474 : Remove the lib/$ARCH directory from Linux and
Solaris images
  * JDK-8170428 : Move src.zip to JDK/lib/src.zip

JEPs integrated:

  * JEP 295 : Ahead-of-Time
Compilation has been integrated in b150.

Schedule - Milestones since last availability email:

  * Feature Extension Complete 22nd of December 2016
  * Rampdown Started 5th of January 2017

  o Phases in which increasing levels of scrutiny are applied to
incoming changes.
  o In phase 1, only P1-P3 bugs can be fixed. In phase 2 only
showstopper bugs can be fixed.

Rgds,Rory

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-10 Thread Ishan Chattopadhyaya
Congrats Đạt. Very well deserved!

On Tue, Jan 10, 2017 at 10:02 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> Congratulations and welcome Đạt!
>
> On Mon, Jan 9, 2017 at 9:27 PM, Joel Bernstein  wrote:
> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Dat, it's tradition that you introduce yourself with a brief bio.
> >
> > Your account has been added to the "lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links:
> > .
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-7268) Add a script to pipe data from other programs or files to Solr using SolrJ

2017-01-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7268:
-
Description: 
I should be able to pipe JSON/XML/CSV or whatever is possible at the 
{{/update/*}} endpoints to a command which in turn uses SolrJ to send the docs 
to the correct leader in native format.
In the following examples, all connection details of the cluster are put into 
a file called solrj.properties.
Example:
{noformat}
#post a file
cat myjson.json | bin/post -c gettingstarted -s http://localhost:8983/solr
#or a producer program
myprogram | bin/post -c gettingstarted -s http://localhost:8983/solr
{noformat}

The behavior of the script would be exactly the same as if I were to post the 
request directly to Solr at the specified {{qt}}. Every parameter the request 
handler accepts would be accepted in {{-param=value}} format. The same things 
could be put into a properties file called {{indexer.properties}} and passed 
with a -p parameter. The script would expect the following extra properties: 
{{zk.url}} for cloud or {{solr.url}} for standalone.

  was:
I should be able to pipe JSON/XML/CSV or whatever is possible at the 
{{/update/*}} endpoints to a command which in turn uses SolrJ to send the docs 
to the correct leader in native format.
In the following examples, all connection details of the cluster are put into 
a file called solrj.properties.
Example:
{noformat}
#post a file
cat myjson.json | bin/post -qt=/update/json/docs -p indexer.properties
#or a producer program
myprogram | bin/post -qt=/update/json -p indexer.properties
{noformat}

The behavior of the script would be exactly the same as if I were to post the 
request directly to Solr at the specified {{qt}}. Every parameter the request 
handler accepts would be accepted in {{-param=value}} format. The same things 
could be put into a properties file called {{indexer.properties}} and passed 
with a -p parameter. The script would expect the following extra properties: 
{{zk.url}} for cloud or {{solr.url}} for standalone.


> Add a script to pipe data from other programs or files to Solr using SolrJ
> --
>
> Key: SOLR-7268
> URL: https://issues.apache.org/jira/browse/SOLR-7268
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> I should be able to pipe JSON/XML/CSV or whatever is possible at the 
> {{/update/*}} endpoints to a command which in turn uses SolrJ to send the 
> docs to the correct leader in native format.
> In the following examples, all connection details of the cluster are put 
> into a file called solrj.properties.
> Example:
> {noformat}
> #post a file
> cat myjson.json | bin/post -c gettingstarted -s http://localhost:8983/solr
> #or a producer program
> myprogram | bin/post -c gettingstarted -s http://localhost:8983/solr
> {noformat}
> The behavior of the script would be exactly the same as if I were to post 
> the request directly to Solr at the specified {{qt}}. Every parameter the 
> request handler accepts would be accepted in {{-param=value}} format. The 
> same things could be put into a properties file called 
> {{indexer.properties}} and passed with a -p parameter. The script would 
> expect the following extra properties: {{zk.url}} for cloud or 
> {{solr.url}} for standalone.
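A minimal sketch of what the connection-details file described above might contain (only the {{zk.url}} and {{solr.url}} keys come from the description; the values are placeholders):

{noformat}
# solrj.properties / indexer.properties -- set exactly one of the two
# connection styles named in the description above
zk.url=localhost:9983
#solr.url=http://localhost:8983/solr
{noformat}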






[jira] [Updated] (SOLR-7268) Add a script to pipe data from other programs or files to Solr using SolrJ

2017-01-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7268:
-
Component/s: SolrJ
 scripts and tools

> Add a script to pipe data from other programs or files to Solr using SolrJ
> --
>
> Key: SOLR-7268
> URL: https://issues.apache.org/jira/browse/SOLR-7268
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> I should be able to pipe JSON/XML/CSV or whatever is possible at the 
> {{/update/*}} endpoints to a command which in turn uses SolrJ to send the 
> docs to the correct leader in native format.
> In the following examples, all connection details of the cluster are put 
> into a file called solrj.properties.
> Example:
> {noformat}
> #post a file
> cat myjson.json | bin/post -qt=/update/json/docs -p indexer.properties
> #or a producer program
> myprogram | bin/post -qt=/update/json -p indexer.properties
> {noformat}
> The behavior of the script would be exactly the same as if I were to post 
> the request directly to Solr at the specified {{qt}}. Every parameter the 
> request handler accepts would be accepted in {{-param=value}} format. The 
> same things could be put into a properties file called 
> {{indexer.properties}} and passed with a -p parameter. The script would 
> expect the following extra properties: {{zk.url}} for cloud or 
> {{solr.url}} for standalone.






Re: JDK 9 EA Build 151 is available on java.net

2017-01-10 Thread dalibor topic



On 10.01.2017 12:09, Uwe Schindler wrote:

interestingly the changelog of build 150 on the jdk9.java.net web site
is incorrect, as it only shows the Jigsaw changes, not the main build
ones. The Jigsaw changelog on the Jigsaw download page is identical!
build 151 is fine again - but we are missing the build 150 changes web
page which got lost).


You can use the search facility on bugs.openjdk.java.net for more 
fine-grained searches over publicly visible issues.


For example

https://bugs.openjdk.java.net/browse/JDK-8171503?jql=project%20%3D%20JDK%20AND%20(labels%20is%20EMPTY%20or%20labels%20!%3D%20hgupdate-sync)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%209%20AND%20%22Resolved%20In%20Build%22%20%3D%20b150

can give you an idea what kind of changes went into a specific build of 
JDK 9 (b150 in this case).


cheers,
dalibor topic

--
 Dalibor Topic | Principal Product Manager
Phone: +494089091214  | Mobile: +491737185961


ORACLE Deutschland B.V. & Co. KG | Kühnehöfe 5 | 22761 Hamburg

ORACLE Deutschland B.V. & Co. KG
Hauptverwaltung: Riesstr. 25, D-80992 München
Registergericht: Amtsgericht München, HRA 95603

Komplementärin: ORACLE Deutschland Verwaltung B.V.
Hertogswetering 163/167, 3543 AS Utrecht, Niederlande
Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697
Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher

 Oracle is committed to developing
practices and products that help protect the environment




[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2634 - Unstable!

2017-01-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2634/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection

Error Message:
Error from server at https://127.0.0.1:36895/solr: Could not fully remove 
collection: solrj_test

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:36895/solr: Could not fully remove collection: 
solrj_test
at 
__randomizedtesting.SeedInfo.seed([4D4AF3BFF2244497:4A9FF2F4D5B0219A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:610)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection(CollectionsAPISolrJTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

RE: JDK 9 EA Build 151 is available on java.net

2017-01-10 Thread Uwe Schindler
Hello Rory,

 

I will be on FOSDEM, I also have a talk there: 
https://fosdem.org/2017/schedule/event/jigsaw_challenges/

 

I downloaded build 151 already, but we have to wait a bit until the new Groovy 
2.4.8 release is out (the release vote is ongoing and should finish 
today/tomorrow), as our build system breaks because of the build 148 Jigsaw 
changes.

 

The other problem with unmapping memory mapped byte buffers was fixed in 150, 
thanks (https://bugs.openjdk.java.net/browse/JDK-8171377, interestingly the 
changelog of build 150 on the jdk9.java.net web site is incorrect, as it only 
shows the Jigsaw changes, not the main build ones. The Jigsaw changelog on the 
Jigsaw download page is identical! build 151 is fine again - but we are missing 
the build 150 changes web page which got lost). Elasticsearch is also working 
on fixing the b148+ mmap issues: 
https://github.com/elastic/elasticsearch/issues/22495

 

Also because of the build 148 changes, some Solr tests using mocking 
frameworks like Mockito or EasyMock need to be disabled. The reason is 
problems in the well-known library Cglib, where I opened an issue: 
https://github.com/cglib/cglib/issues/93 Mocking frameworks are still a hot 
topic and many people are complaining. I see no good solution here, especially 
because Solr uses multiple mocking frameworks and some of them are no longer 
updated. So disabling tests is the only solution! Once all of this is done, I 
will update the Jenkins CI server to use the recent JDK9 builds. Currently, I 
can only test manually on my local machine.

 

Otherwise, building Lucene’s Javadocs is still broken, because the javadoc 
tool crashes with an exception. The issue is: 
https://bugs.openjdk.java.net/browse/JDK-8157611 - I think it should be fixed 
before the release of Java 9!

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com]
Sent: Tuesday, January 10, 2017 11:26 AM
To: Uwe Schindler
Cc: rory.odonn...@oracle.com; Dalibor Topic; Balchandra Vaidya; Muneer 
Kolarkunnu; Dawid Weiss; dev@lucene.apache.org
Subject: JDK 9 EA Build 151 is available on java.net

 

 

Hi Uwe & Dawid, 

Best wishes for the New Year.

Dalibor and I will be at FOSDEM '17, Brussels 4 & 5 February. Let us know if 
you will be there, hopefully we can meet up ! 

JDK 9 Early Access b151 is available on java.net

Can you confirm fixes for 

1.  JDK-8171377 : Add sun.misc.Unsafe::invokeCleaner
2.  JDK-8075793 : Source incompatibility for inference using -source 7 


There have been a number of fixes to bugs reported by Open Source projects 
since the last availability email:

*   JDK-8087303 : LSSerializer pretty print does not work anymore
*   JDK-8167143 : CLDR timezone parsing does not work for all locales

Other changes that may be of interest:

*   JDK-8066474 : Remove the lib/$ARCH directory from Linux and Solaris 
images 
*   JDK-8170428 : Move src.zip to JDK/lib/src.zip 

JEPs integrated:

*  JEP 295: Ahead-of-Time Compilation 
has been integrated in b150.

Schedule - Milestones since last availability email  

*   Feature Extension Complete 22nd of December 2016
*   Rampdown Started 5th of January 2017 

*   Phases in which increasing levels of scrutiny are applied to incoming 
changes. 
*   In phase 1, only P1-P3 bugs can be fixed. In phase 2 only showstopper 
bugs can be fixed.

Rgds,Rory

-- 
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland 

