[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317743#comment-14317743
 ] 

Noble Paul commented on SOLR-6736:
--

bq.I think this is on similar lines as the config/blob storage API.

That comment was about the syntax, not the functionality.

bq.Maybe that's a security issue too 
I'm not sure why this is any more insecure than uploading stuff from ZkCli?

I would say we should just limit the scope of this to adding a whole config 
and not:
* individual files
* linking a collection to another config set.

If you want those, let's open another ticket. Dealing with too many moving 
parts is hard to track.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on ZooKeeper becomes cumbersome when using 
> Solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It would be great to have a request handler that provides an API to manage 
> the configurations, similar to the collections handler, allowing 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example: 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip, or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of 
> available configs.
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6241) don't filter subdirectories in listAll()

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6241:

Attachment: LUCENE-6241.patch

add missing ensureOpen.

> don't filter subdirectories in listAll()
> 
>
> Key: LUCENE-6241
> URL: https://issues.apache.org/jira/browse/LUCENE-6241
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6241.patch, LUCENE-6241.patch
>
>
> The issue is that, today, filtering out subdirectories means listAll() is 
> always slow, sometimes MUCH slower, because it must do the fstat()-equivalent 
> on each file to check whether it's a directory in order to filter it out.
> When I benchmarked this on a fast filesystem, doing all these filesystem 
> metadata calls made listAll() only 2.6x slower, but on a non-SSD with slower 
> i/o it can be more than 60x slower.
> Lucene doesn't make subdirectories, so hiding them for abuse cases just 
> makes real use cases slower.
> To add insult to injury, most code (e.g. all of Lucene except where 
> RAMDir copies from an FSDir) does not actually care whether extraneous files 
> are directories or not.
> Finally, it sucks that the method is named listAll() when it does anything 
> but that.
> I really hate to add a method here to deal with this abusive stuff, but I'd 
> rather add isDirectory(String) for the rare code that wants to filter out 
> directories than just let things always be slow.






[jira] [Commented] (LUCENE-6241) don't filter subdirectories in listAll()

2015-02-11 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317667#comment-14317667
 ] 

Ryan Ernst commented on LUCENE-6241:


+1

> don't filter subdirectories in listAll()
> 
>
> Key: LUCENE-6241
> URL: https://issues.apache.org/jira/browse/LUCENE-6241
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6241.patch
>
>






[jira] [Created] (LUCENE-6241) don't filter subdirectories in listAll()

2015-02-11 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6241:
---

 Summary: don't filter subdirectories in listAll()
 Key: LUCENE-6241
 URL: https://issues.apache.org/jira/browse/LUCENE-6241
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6241.patch

The issue is that, today, filtering out subdirectories means listAll() is 
always slow, sometimes MUCH slower, because it must do the fstat()-equivalent 
on each file to check whether it's a directory in order to filter it out.

When I benchmarked this on a fast filesystem, doing all these filesystem 
metadata calls made listAll() only 2.6x slower, but on a non-SSD with slower 
i/o it can be more than 60x slower.

Lucene doesn't make subdirectories, so hiding them for abuse cases just makes 
real use cases slower.

To add insult to injury, most code (e.g. all of Lucene except where RAMDir 
copies from an FSDir) does not actually care whether extraneous files are 
directories or not.

Finally, it sucks that the method is named listAll() when it does anything but 
that.

I really hate to add a method here to deal with this abusive stuff, but I'd 
rather add isDirectory(String) for the rare code that wants to filter out 
directories than just let things always be slow.
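The cost difference described above can be sketched outside Lucene with plain 
java.nio.file calls (an illustrative toy, not Lucene's actual FSDirectory 
code; all names below are made up):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ListAllSketch {

    // Cheap listing: reads directory entries only, no per-entry
    // metadata calls.
    static List<String> listAll(Path dir) throws IOException {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                names.add(p.getFileName().toString());
            }
        }
        return names;
    }

    // Filtered listing: Files.isDirectory() issues the fstat-equivalent
    // for every entry, which is what made the old behavior slow on
    // high-latency filesystems.
    static List<String> listFilesOnly(Path dir) throws IOException {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                if (!Files.isDirectory(p)) {  // extra metadata call per entry
                    names.add(p.getFileName().toString());
                }
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("listall");
        Files.createFile(dir.resolve("_0.cfs"));
        Files.createDirectory(dir.resolve("subdir"));

        System.out.println(listAll(dir).contains("subdir"));       // true
        System.out.println(listFilesOnly(dir).contains("subdir")); // false
    }
}
```

The only difference between the two methods is the per-entry 
Files.isDirectory() check, i.e. the metadata call the benchmark numbers above 
are about.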







[jira] [Updated] (LUCENE-6241) don't filter subdirectories in listAll()

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6241:

Attachment: LUCENE-6241.patch

> don't filter subdirectories in listAll()
> 
>
> Key: LUCENE-6241
> URL: https://issues.apache.org/jira/browse/LUCENE-6241
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6241.patch
>
>






[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-11 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317658#comment-14317658
 ] 

Varun Thacker commented on SOLR-6775:
-

Thanks Shalin! I'll keep an eye on the builds.

> Creating backup snapshot null pointer exception
> ---
>
> Key: SOLR-6775
> URL: https://issues.apache.org/jira/browse/SOLR-6775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.10
> Environment: Linux Server, Java version "1.7.0_21", Solr version 
> 4.10.0
>Reporter: Ryan Hesson
>Assignee: Shalin Shekhar Mangar
>  Labels: snapshot, solr
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6775.patch, SOLR-6775.patch, 
> SOLR-6775_test_fix.patch
>
>
> I set up Solr replication: one master on a server, one slave on 
> another server. The replication of data appears to be working correctly. The 
> issue is that when the master Solr tries to create a snapshot backup, it gets 
> a null pointer exception. 
> org.apache.solr.handler.SnapShooter's createSnapshot method calls 
> org.apache.solr.handler.SnapPuller.delTree(snapShotDir) at line 162, and the 
> exception happens within org.apache.solr.handler.SnapPuller at line 1026 
> because snapShotDir is null. 
> Here is the actual log output:
> 58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  - newest 
> commit generation = 349
> 58319983 [Thread-19] INFO  org.apache.solr.handler.SnapShooter  - Creating 
> backup snapshot...
> Exception in thread "Thread-19" java.lang.NullPointerException
> at org.apache.solr.handler.SnapPuller.delTree(SnapPuller.java:1026)
> at 
> org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:162)
> at org.apache.solr.handler.SnapShooter$1.run(SnapShooter.java:91)
> I may have missed how to set the directory in the documentation, but I've 
> looked around without much luck. I thought the process was to use the same 
> directory as the index data for the snapshots. Is this a known issue with 
> this release, or am I missing how to set the value? If someone could tell me 
> how to set snapshotdir, or confirm that it is an issue and a different way of 
> backing up the index is needed, it would be much appreciated. 
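For illustration, a recursive delete with the kind of null guard that would 
avoid this NPE might look like the following. This is a hypothetical sketch 
under the assumption that delTree dereferences its argument without checking 
it; it is not the actual SnapPuller.delTree code or the SOLR-6775 patch:

```java
import java.io.File;

public class DelTreeSketch {

    // Recursively delete a directory tree; a null argument (e.g. an
    // unconfigured snapshot directory) returns false instead of
    // throwing NullPointerException.
    static boolean delTree(File dir) {
        if (dir == null) {  // the missing guard behind the reported NPE
            return false;
        }
        File[] children = dir.listFiles();
        if (children != null) {
            for (File child : children) {
                delTree(child);
            }
        }
        return dir.delete();
    }

    public static void main(String[] args) {
        System.out.println(delTree(null)); // false, no NPE
    }
}
```

With a guard like this, a missing snapshot directory would surface as a 
failed backup rather than an uncaught exception in a background thread.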






[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317618#comment-14317618
 ] 

ASF subversion and git services commented on SOLR-6775:
---

Commit 1659151 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659151 ]

SOLR-6775: Do not attempt cleanup of temp directory because it is handled by 
test framework

> Creating backup snapshot null pointer exception
> ---
>
> Key: SOLR-6775
> URL: https://issues.apache.org/jira/browse/SOLR-6775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.10
> Environment: Linux Server, Java version "1.7.0_21", Solr version 
> 4.10.0
>Reporter: Ryan Hesson
>Assignee: Shalin Shekhar Mangar
>  Labels: snapshot, solr
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6775.patch, SOLR-6775.patch, 
> SOLR-6775_test_fix.patch
>
>






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2630 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2630/

6 tests failed.
REGRESSION:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:52747//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:52747//collection1
at 
__randomizedtesting.SeedInfo.seed([CB66DCD923A9C47A:4332E3038D55A982]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:538)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:568)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:547)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOr

[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317613#comment-14317613
 ] 

ASF subversion and git services commented on SOLR-6775:
---

Commit 1659149 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1659149 ]

SOLR-6775: Do not attempt cleanup of temp directory because it is handled by 
test framework

> Creating backup snapshot null pointer exception
> ---
>
> Key: SOLR-6775
> URL: https://issues.apache.org/jira/browse/SOLR-6775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.10
> Environment: Linux Server, Java version "1.7.0_21", Solr version 
> 4.10.0
>Reporter: Ryan Hesson
>Assignee: Shalin Shekhar Mangar
>  Labels: snapshot, solr
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6775.patch, SOLR-6775.patch, 
> SOLR-6775_test_fix.patch
>
>






[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-11 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317612#comment-14317612
 ] 

Shalin Shekhar Mangar commented on SOLR-6775:
-

Thanks Varun. I committed your patch r1659149 on trunk and r1659151 on 
branch_5x.

> Creating backup snapshot null pointer exception
> ---
>
> Key: SOLR-6775
> URL: https://issues.apache.org/jira/browse/SOLR-6775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.10
> Environment: Linux Server, Java version "1.7.0_21", Solr version 
> 4.10.0
>Reporter: Ryan Hesson
>Assignee: Shalin Shekhar Mangar
>  Labels: snapshot, solr
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6775.patch, SOLR-6775.patch, 
> SOLR-6775_test_fix.patch
>
>






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_31) - Build # 4376 - Still Failing!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4376/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.testBackupOnCommit

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 218E0D13768560B5-001\solr-instance-001

at 
__randomizedtesting.SeedInfo.seed([218E0D13768560B5:866E3F8097DDCAC]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:294)
at 
org.apache.solr.handler.TestReplicationHandler$SolrInstance.tearDown(TestReplicationHandler.java:1509)
at 
org.apache.solr.handler.TestReplicationHandlerBackup.tearDown(TestReplicationHandlerBackup.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:885)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_31) - Build # 4480 - Still Failing!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4480/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZk2Test

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001\tempDir-002
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001\tempDir-002
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 82DA0281CC1F497C-001

at __randomizedtesting.SeedInfo.seed([82DA0281CC1F497C]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:286)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:170)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:49627/xrm/zk/repfacttest_c8n_1x3_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:49627/xrm/zk/repfacttest_c8n_1x3_shard1_replica2
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:284)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement

[jira] [Commented] (SOLR-7101) JmxMonitoredMap can throw an exception in clear when queryNames fails.

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317338#comment-14317338
 ] 

ASF subversion and git services commented on SOLR-7101:
---

Commit 1659118 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659118 ]

SOLR-7101: JmxMonitoredMap can throw an exception in clear when queryNames 
fails.

> JmxMonitoredMap can throw an exception in clear when queryNames fails.
> --
>
> Key: SOLR-7101
> URL: https://issues.apache.org/jira/browse/SOLR-7101
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7101.patch
>
>
> This was added in SOLR-2927 - we should be lenient on failures here like we 
> are in other parts of this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7101) JmxMonitoredMap can throw an exception in clear when queryNames fails.

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317333#comment-14317333
 ] 

ASF subversion and git services commented on SOLR-7101:
---

Commit 1659116 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1659116 ]

SOLR-7101: JmxMonitoredMap can throw an exception in clear when queryNames 
fails.

> JmxMonitoredMap can throw an exception in clear when queryNames fails.
> --
>
> Key: SOLR-7101
> URL: https://issues.apache.org/jira/browse/SOLR-7101
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7101.patch
>
>
> This was added in SOLR-2927 - we should be lenient on failures here like we 
> are in other parts of this class.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2629 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2629/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:41014/zc/em/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:41014/zc/em/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([4C6C5EF2C75AC10F:C438612869A6ACF7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317277#comment-14317277
 ] 

Anshum Gupta commented on SOLR-6736:


Sure, it'd be good to know whether this is actually a potential security issue. 
Also, I'm not really talking about this patch in particular but about the issue 
and what it's trying to solve.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf
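The upload step described above expects a single archive containing the whole
configset. A minimal sketch of assembling such an archive in Python (the file
names and contents here are purely illustrative, not part of the patch):

```python
import io
import zipfile

def make_configset_zip(files):
    """Bundle config files (name -> bytes) into an in-memory zip
    suitable for POSTing to /solr/admin/configs/<configname>."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

# Hypothetical two-file configset.
payload = make_configset_zip({
    "solrconfig.xml": b"<config/>",
    "schema.xml": b"<schema/>",
})
print(len(payload))
```

The resulting bytes would then be sent with `Content-Type:
application/octet-stream`, as in the curl example above.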






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317246#comment-14317246
 ] 

Mark Miller commented on SOLR-6736:
---

Hey [~thetaphi] - could we get your expert advice on this patch?

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317242#comment-14317242
 ] 

Mark Miller commented on SOLR-6736:
---

bq. I think this is on similar lines as the config/blob storage API.

Maybe that's a security issue too :)

Certainly this issue appears to be.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1949 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1949/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
no segments* file found in 
SimpleFSDirectory@/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandlerBackup
 
195973B0FE6A4B35-001/solr-instance-001/collection1/data/snapshot.20150212072808752
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@17544c97: files: 
[_0.fnm, _0.nvm]

Stack Trace:
org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
SimpleFSDirectory@/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandlerBackup
 
195973B0FE6A4B35-001/solr-instance-001/collection1/data/snapshot.20150212072808752
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@17544c97: files: 
[_0.fnm, _0.nvm]
at 
__randomizedtesting.SeedInfo.seed([195973B0FE6A4B35:58D253D5D9D4B87A]:0)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:632)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:68)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at 
org.apache.solr.handler.TestReplicationHandlerBackup.verify(TestReplicationHandlerBackup.java:139)
at 
org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup(TestReplicationHandlerBackup.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapte

Re: 'ant test' -- calculation for tests.jvms

2015-02-11 Thread Uwe Schindler
The easiest way to work around this is to put a lucene.build.properties file 
into your home directory and specify tests.jvms there.

I have this next to other settings like disabling slow tests. The Jenkins 
machines are set up the same way.
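As a concrete sketch, assuming the standard property names (tests.jvms is the
one mentioned above; the other value is just an example of disabling slow
tests), the file might look like:

```properties
# ~/lucene.build.properties
# Cap the number of forked test JVMs regardless of detected cores.
tests.jvms=2
# Skip tests annotated as slow.
tests.slow=false
```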

On 12 February 2015 00:05:25 CET, Shawn Heisey wrote:
>On 2/11/2015 12:42 PM, Dawid Weiss wrote:
>>> IMHO, this calculation should be adjusted so that a 3-core system
>gets a value of 2.
>> A 3-core system? What happened to one of its, ahem, gems? :)
>
>This is the processor I have:
>
>http://www.newegg.com/Product/Product.aspx?Item=N82E16819103683
>
>The X3 chip line consists of 4-core chips that have had one of the
>cores
>disabled.  Initially AMD did this because sometimes one of the cores
>would be bad and fail tests, but later they also used it as a way to
>sell perfectly good 4-core chips at a lower price point, by disabling
>one of the cores.  There's no way to know (aside from testing) why any
>specific chip is an X3 instead of an X4, but apparently most of the X3
>chips on the market have 4 perfectly good cores.
>
>The motherboard I'm using will enable the disabled core, but when I
>enabled the relevant BIOS setting (which also overclocked the chip a
>little bit), I had stability problems with the machine, so I disabled
>it
>and now I'm back down to three cores at the labelled speed.  Eventually
>I will get around to figuring out whether the disabled core is bad or
>the stability problems were due to overclocking.
>
>Is this JVM calculation only done in the carrotsearch randomized
>testing, or is it also found in JUnit itself?
>
>Thanks,
>Shawn
>
>

--
Uwe Schindler
H.-H.-Meier-Allee 63, 28213 Bremen
http://www.thetaphi.de

Re: 'ant test' -- calculation for tests.jvms

2015-02-11 Thread Shawn Heisey
On 2/11/2015 12:42 PM, Dawid Weiss wrote:
>> IMHO, this calculation should be adjusted so that a 3-core system gets a 
>> value of 2.
> A 3-core system? What happened to one of its, ahem, gems? :)

This is the processor I have:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819103683

The X3 chip line consists of 4-core chips that have had one of the cores
disabled.  Initially AMD did this because sometimes one of the cores
would be bad and fail tests, but later they also used it as a way to
sell perfectly good 4-core chips at a lower price point, by disabling
one of the cores.  There's no way to know (aside from testing) why any
specific chip is an X3 instead of an X4, but apparently most of the X3
chips on the market have 4 perfectly good cores.

The motherboard I'm using will enable the disabled core, but when I
enabled the relevant BIOS setting (which also overclocked the chip a
little bit), I had stability problems with the machine, so I disabled it
and now I'm back down to three cores at the labelled speed.  Eventually
I will get around to figuring out whether the disabled core is bad or
the stability problems were due to overclocking.

Is this JVM calculation only done in the carrotsearch randomized
testing, or is it also found in JUnit itself?

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7019) Can't change the field key for interval faceting

2015-02-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-7019.
-
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk
 Assignee: Tomás Fernández Löbbe

> Can't change the field key for interval faceting
> 
>
> Key: SOLR-7019
> URL: https://issues.apache.org/jira/browse/SOLR-7019
> Project: Solr
>  Issue Type: Bug
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7019.patch, SOLR-7019.patch
>
>
> Right now it is possible to set the key for each interval when using interval 
> faceting, but it's not possible to change the field key. For example:
> Supported: 
> {noformat}
> ...&facet.interval=popularity
> &facet.interval.set={!key=bad}[0,5]
> &facet.interval.set={!key=good}[5,*]
> &facet=true
> {noformat}
> Not Supported: 
> {noformat}
> ...&facet.interval={!key=popularity}some_field
> &facet.interval.set={!key=bad}[0,5]
> &facet.interval.set={!key=good}[5,*]
> &facet=true
> {noformat}
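A sketch of assembling the second (previously unsupported) request
programmatically; the field and key names are taken from the example above,
and the query itself is illustrative:

```python
from urllib.parse import urlencode

# Build the interval-faceting parameters, renaming the field's output
# key with the {!key=...} local-params syntax.
params = [
    ("q", "*:*"),
    ("facet", "true"),
    ("facet.interval", "{!key=popularity}some_field"),
    ("facet.interval.set", "{!key=bad}[0,5]"),
    ("facet.interval.set", "{!key=good}[5,*]"),
]
query = urlencode(params)
print(query)
```

The encoded string would be appended to a /select request URL against a
running Solr instance.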






Re: 5.0 Solr Ref Guide Release Plans (RC0 ~ Thurs 2015-02-12)

2015-02-11 Thread Chris Hostetter

FYI: still on track to do this tomorrow (about 23 hours after I send this 
email)



: Date: Fri, 6 Feb 2015 15:09:41 -0700 (MST)
: From: Chris Hostetter 
: To: Lucene Dev 
: Subject: 5.0 Solr Ref Guide Release Plans (RC0 ~ Thurs 2015-02-12)
: 
: 
: I'm volunteering to be the RM for the 5.0 ref guide, wanted to get my plans on
: the radar for folks.
: 
: The ref guide is in pretty good shape as far as some of the really "meaty"
: changes that are in 5.0 - there are still some new features that need to be
: documented, but there always are in every release.
: 
: Most significantly: there is a lot of really important new stuff in the guide
: related to the new start scripts ("no more war") and the examples that i want
: to make sure are available to users ASAP once the 5.0 code release hits the
: mirrors.
: 
: 
: With that in mind: I'd like to plan on starting the vote for an RC0 of the 5.0
: ref guide on 2015-02-12.
: 
: If you are currently working on ref guide edits intended for 5.0, please try
: to have them done before then.  If you have ambitions for lots of different
: edits, please focus on edits to existing pages/content that may need
: updating/correcting before 5.0 -- and prioritize those types of changes over "new
: pages" for new features that can live under the "Internal" section and stay
: out of the published doc until a later version.
: 
: If there is anything you think really needs to be a blocker for releasing the
: guide, please either note that on the TODO page (edit or comment), or file a
: blocker jira (go ahead and assign it to me to ensure I see it) ...
: 
: https://cwiki.apache.org/confluence/display/solr/Internal+-+TODO+List
: 
: 
: 
: 
: 
: 
: -Hoss
: http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_76) - Build # 11620 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11620/
Java: 64bit/jdk1.7.0_76 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
{   "responseHeader":{ "status":404, "QTime":3},   "error":{ 
"msg":"no such blob or version available: test/1", "code":404}}

Stack Trace:
java.lang.AssertionError: {
  "responseHeader":{
"status":404,
"QTime":3},
  "error":{
"msg":"no such blob or version available: test/1",
"code":404}}
at 
__randomizedtesting.SeedInfo.seed([B15139B2CECF84FD:691C14E53912215D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  

[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317151#comment-14317151
 ] 

Anshum Gupta commented on SOLR-7099:


Sure, it was more of an idea than anything.

> bin/solr -cloud mode should launch a local ZK in its own process using 
> zkcli's runzk option (instead of embedded in the first Solr process)
> ---
>
> Key: SOLR-7099
> URL: https://issues.apache.org/jira/browse/SOLR-7099
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Embedded ZK is great for unit testing and quick examples, but as soon as 
> someone wants to restart their cluster, embedded mode causes a lot of issues, 
> esp. if you restart the node that embeds ZK. Of course we don't want users to 
> have to install ZooKeeper just to get started with Solr either. 
> Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
> process but still within the Solr directory structure. We can hide the 
> details and complexity of working with ZK in the bin/solr script. The 
> solution to this should still make it very clear that this is for getting 
> started / examples and not to be used in production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317133#comment-14317133
 ] 

Hoss Man commented on SOLR-7099:


bq. I think it might make sense to have a more generic name. That 1. hides the 
implementation detail of running zk for anyone who doesn't want/need to know. 
2. Gives us the freedom to replace the configuration manager (zk) with 
something else

I disagree -- right now, for a high-quality production installation of Solr 
it's very important to understand that ZooKeeper is involved, and to understand 
the importance of having multiple ZK nodes.  If/when we replace ZooKeeper (or 
add an option to substitute something new for it), it will almost certainly 
still be important to understand how that new thing works and how to keep it 
working reliably.

It's one thing to add a convenience option that says "here's a simple command 
line to set up a single-node ZK instance", but we shouldn't hide the fact that 
it's ZK, or that it's a single node -- it should not be magic.  And if we name 
this command line option/script something agnostic of the fact that it's 
launching a ZK node, then the user will only ever think of it as magic, and 
never understand why they have to run it (or why it's important to have 
multiple "magic" (aka: ZK) nodes configured to talk to each other).

> bin/solr -cloud mode should launch a local ZK in its own process using 
> zkcli's runzk option (instead of embedded in the first Solr process)
> ---
>
> Key: SOLR-7099
> URL: https://issues.apache.org/jira/browse/SOLR-7099
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Embedded ZK is great for unit testing and quick examples, but as soon as 
> someone wants to restart their cluster, embedded mode causes a lot of issues, 
> esp. if you restart the node that embeds ZK. Of course we don't want users to 
> have to install ZooKeeper just to get started with Solr either. 
> Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
> process but still within the Solr directory structure. We can hide the 
> details and complexity of working with ZK in the bin/solr script. The 
> solution to this should still make it very clear that this is for getting 
> started / examples and not to be used in production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317101#comment-14317101
 ] 

Robert Muir commented on LUCENE-6239:
-

+1 to backport as well. If the reference size is wrong on IBM J9, it won't have 
a huge impact on the ramBytesUsed of Lucene's data structures, as we have all 
mentioned on this issue.

Furthermore, I don't know of a configuration of J9 that actually works right 
now; you will get false NPEs in the norms writer when indexing, etc.

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch, LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6239:
--
Attachment: LUCENE-6239.patch

New patch. I did additional comparisons with the Unsafe-detected constants. I 
tested various JVMs; all is consistent now.

I changed the code a little so 32-bit and 64-bit JVMs are handled separately. 
For 32-bit JVMs it does not even try to get the alignment size or the 
compressed-oops value. I also fixed the array header; on 32-bit it is not 
aligned.

I think it's ready; maybe [~dweiss] can have a look, too.

About backporting: we can do this, but reference size detection would not work 
correctly with IBM J9, so it would not detect compressed references there and 
would always assume 64 bits. But J9 does not enable compressed refs by 
default...

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch, LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7096) The Solr service script doesn't like SOLR_HOME pointing to a path containing a symlink

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317069#comment-14317069
 ] 

Hoss Man commented on SOLR-7096:


if/when this behavior is changed, the mention of symbolic links on this ref 
guide page should be removed...
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+4.x+Cluster+to+Solr+5.0

> The Solr service script doesn't like SOLR_HOME pointing to a path containing 
> a symlink
> --
>
> Key: SOLR-7096
> URL: https://issues.apache.org/jira/browse/SOLR-7096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.1
>
>
> While documenting the process to upgrade a SolrCloud cluster from 4.x to 5.0, 
> I discovered that the init.d/solr script doesn't like the SOLR_HOME pointing 
> to a path that contains a symlink. Work-around is to use an absolute path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7102) bin/solr should activate cloud mode if ZK_HOST is set

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317072#comment-14317072
 ] 

Hoss Man commented on SOLR-7102:


if/when this behavior is changed, the "Note" box regarding SOLR_MODE on this 
page should be removed...
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+4.x+Cluster+to+Solr+5.0

> bin/solr should activate cloud mode if ZK_HOST is set
> -
>
> Key: SOLR-7102
> URL: https://issues.apache.org/jira/browse/SOLR-7102
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>
> you have to set SOLR_MODE=solrcloud in the /var/solr/solr.in.sh to get the 
> init.d/solr script to start Solr in cloud mode (since it doesn't pass -c). 
> What should happen is that the bin/solr script should assume cloud mode if 
> ZK_HOST is set.
> This mainly affects the /etc/init.d/solr script because it doesn't pass the 
> -c | -cloud option. If working with bin/solr directly, you can just pass the 
> -c explicitly to get cloud mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7102) bin/solr should activate cloud mode if ZK_HOST is set

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-7102:


Assignee: Timothy Potter

> bin/solr should activate cloud mode if ZK_HOST is set
> -
>
> Key: SOLR-7102
> URL: https://issues.apache.org/jira/browse/SOLR-7102
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>
> you have to set SOLR_MODE=solrcloud in the /var/solr/solr.in.sh to get the 
> init.d/solr script to start Solr in cloud mode (since it doesn't pass -c). 
> What should happen is that the bin/solr script should assume cloud mode if 
> ZK_HOST is set.
> This mainly affects the /etc/init.d/solr script because it doesn't pass the 
> -c | -cloud option. If working with bin/solr directly, you can just pass the 
> -c explicitly to get cloud mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7096) The Solr service script doesn't like SOLR_HOME pointing to a path containing a symlink

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-7096:
-
Comment: was deleted

(was: Also, you have to set SOLR_MODE=solrcloud in the /var/solr/solr.in.sh to 
get the init.d/solr script to start Solr in cloud mode (since it doesn't pass 
-c). What should happen is that the bin/solr script should assume cloud mode if 
ZK_HOST is set.)

> The Solr service script doesn't like SOLR_HOME pointing to a path containing 
> a symlink
> --
>
> Key: SOLR-7096
> URL: https://issues.apache.org/jira/browse/SOLR-7096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.1
>
>
> While documenting the process to upgrade a SolrCloud cluster from 4.x to 5.0, 
> I discovered that the init.d/solr script doesn't like the SOLR_HOME pointing 
> to a path that contains a symlink. Work-around is to use an absolute path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7102) bin/solr should activate cloud mode if ZK_HOST is set

2015-02-11 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-7102:


 Summary: bin/solr should activate cloud mode if ZK_HOST is set
 Key: SOLR-7102
 URL: https://issues.apache.org/jira/browse/SOLR-7102
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter


you have to set SOLR_MODE=solrcloud in the /var/solr/solr.in.sh to get the 
init.d/solr script to start Solr in cloud mode (since it doesn't pass -c). What 
should happen is that the bin/solr script should assume cloud mode if ZK_HOST 
is set.

This mainly affects the /etc/init.d/solr script because it doesn't pass the -c 
| -cloud option. If working with bin/solr directly, you can just pass the -c 
explicitly to get cloud mode.
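The proposed behavior can be sketched as a small guard in the startup script. 
This is a hypothetical sketch, not the actual bin/solr code: the variable names 
ZK_HOST and SOLR_MODE come from the issue text, while resolve_mode and the 
surrounding logic are illustrative.

```shell
# Hypothetical sketch of the proposed default: if ZK_HOST is set, assume
# SolrCloud mode unless the user explicitly chose a mode. resolve_mode is
# an illustrative helper, not part of the real bin/solr script.
resolve_mode() {
  zk_host="$1"
  solr_mode="$2"
  if [ -n "$solr_mode" ]; then
    echo "$solr_mode"        # an explicit SOLR_MODE always wins
  elif [ -n "$zk_host" ]; then
    echo "solrcloud"         # ZK_HOST set: assume cloud mode
  else
    echo "standalone"
  fi
}

resolve_mode "zk1:2181,zk2:2181" ""     # prints: solrcloud
resolve_mode "" ""                      # prints: standalone
resolve_mode "zk1:2181" "standalone"    # prints: standalone
```

The point is that the init.d script never passes -c, so the default has to come 
from the environment rather than from a flag.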



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6072) The 'deletereplica' API should remove the data and instance directory default

2015-02-11 Thread Craig MacGregor (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317061#comment-14317061
 ] 

Craig MacGregor commented on SOLR-6072:
---

This definitely broke backwards compatibility for my limited use case... I was 
using DELETEREPLICA as a way to keep a backup copy of a recent core after 
creating a new core in the collection, but now it deletes it... there's no 
UNLOADREPLICA or "unloadOnly" param to do the same thing, which is the behavior 
I was expecting :(

> The 'deletereplica' API should remove the data and instance directory default
> -
>
> Key: SOLR-6072
> URL: https://issues.apache.org/jira/browse/SOLR-6072
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.8
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10, Trunk
>
> Attachments: SOLR-6072.patch
>
>
> The 'deletereplica' collection API should clean up the data and instance 
> directory automatically. Not doing that is a bug even if it's a back-compat 
> break because if we don't do that then there is no way to free up the disk 
> space except manual intervention.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2628 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2628/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51989/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51989/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([38537C0A8CA1F23:8BD1081A063672DB]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316969#comment-14316969
 ] 

Anshum Gupta commented on SOLR-6736:


Thanks for bringing it up, Mark and Erick. Here are a few things:
# This would not allow linking configs to collections, only uploading, 
replacing, and (maybe) deleting configsets.
# Uploading a configset shouldn't be an issue unless the configset is actually 
used.
# The configs API allows, or at least is moving towards allowing, updates to 
the config via an API.
# This issue doesn't involve exposing anything via the Admin UI.

I may be missing something, but so far I think this is on similar lines to the 
config/blob storage API.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> # Use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf
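The proposed endpoints quoted above can be wrapped in a tiny helper for 
experimentation. Only the /solr/admin/configs path shape comes from the issue 
description; the host, the config_url helper, and the SOLR_BASE variable are 
illustrative assumptions.

```shell
# Hypothetical helper around the proposed configs endpoint. The base path
# /solr/admin/configs follows the issue description; everything else here
# (SOLR_BASE, config_url) is made up for illustration.
SOLR_BASE="${SOLR_BASE:-http://localhost:8983}"

config_url() {  # config_url [configset-name]
  if [ -n "$1" ]; then
    echo "$SOLR_BASE/solr/admin/configs/$1"
  else
    echo "$SOLR_BASE/solr/admin/configs"
  fi
}

# Upload a zipped configset (fails if 'mynewconf' already exists):
#   curl -X POST -H 'Content-Type: application/octet-stream' \
#        --data-binary @testconf.zip "$(config_url mynewconf)"
# List all configsets:           curl "$(config_url)"
# List files in one configset:   curl "$(config_url mynewconf)"
config_url mynewconf  # prints: http://localhost:8983/solr/admin/configs/mynewconf
```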



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316967#comment-14316967
 ] 

Steve Molloy commented on SOLR-6311:


bq. Definitely not a bug. you have to remember the context of how distributed 
search was added 

Thanks for the history, makes it clearer why it was needed.

bq. But now is not then

Indeed, now distributed/SolrCloud is pretty much the norm...

So anyhow, the patch with version-based logic makes sense to me, so +1. 

> SearchHandler should use path when no qt or shard.qt parameter is specified
> ---
>
> Key: SOLR-6311
> URL: https://issues.apache.org/jira/browse/SOLR-6311
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
>Assignee: Timothy Potter
> Attachments: SOLR-6311.patch, SOLR-6311.patch
>
>
> When performing distributed searches, you have to specify shards.qt unless 
> you're on the default /select path for your handler. As this is configurable, 
> even the default search handler could be on another path. The shard requests 
> should thus default to the path if no shards.qt was specified.
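The current workaround described above can be illustrated with a small URL 
builder; the host, collection name, and handler path are made-up examples, and 
only the shards.qt parameter itself comes from the issue.

```shell
# Hypothetical illustration of today's workaround: a request to a custom
# handler path must repeat that path in shards.qt so that shard
# sub-requests hit the same handler. Host, collection, and handler path
# are illustrative, not taken from any real deployment.
build_query() {  # build_query <handler-path>
  echo "http://localhost:8983/solr/mycoll$1?q=*:*&shards.qt=$1"
}

build_query /tvrh
# prints: http://localhost:8983/solr/mycoll/tvrh?q=*:*&shards.qt=/tvrh
```

With the proposed change, omitting shards.qt would make the shard sub-requests 
default to the same handler path automatically.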



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316958#comment-14316958
 ] 

Erick Erickson commented on SOLR-6736:
--

A bit of clarification: I'm not actively working on that; I assume it'll all be 
superseded by the managed stuff. It's assigned to me just to keep from losing 
track of it.

But this is a very interesting point. The objection was that being able to 
upload arbitrary XML from a client is a security vulnerability, per Uwe's 
comments here: https://issues.apache.org/jira/browse/SOLR-5287 (about half way 
down, dated 30-Nov-2013). It's not clear to me that this capability is similar, 
although I rather assume it is. Sorry for not bringing this up earlier.

We need to be sure of this before committing.


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> # Use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316943#comment-14316943
 ] 

Mark Miller commented on SOLR-6736:
---

How does this address the security concerns raised in the issue 
[~erickerickson] was working on to allow uploading config from the UI?

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> # Use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6971) TestRebalanceLeaders fails too often.

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316944#comment-14316944
 ] 

Mark Miller commented on SOLR-6971:
---

Thanks Erick - I'll try to get to this soon.

> TestRebalanceLeaders fails too often.
> -
>
> Key: SOLR-6971
> URL: https://issues.apache.org/jira/browse/SOLR-6971
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-6971-dumper.patch
>
>
> I see this fail too much - I've seen 3 different fail types so far.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6311:
-
Attachment: SOLR-6311.patch

Patch that implements the conditional logic based on the luceneMatchVersion. 
I'm intending this fix to be included in 5.1. The 
{{TermVectorComponentDistributedTest}} test now works without specifying the 
{{shards.qt}} query param. Feedback welcome!

> SearchHandler should use path when no qt or shard.qt parameter is specified
> ---
>
> Key: SOLR-6311
> URL: https://issues.apache.org/jira/browse/SOLR-6311
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
>Assignee: Timothy Potter
> Attachments: SOLR-6311.patch, SOLR-6311.patch
>
>
> When performing distributed searches, you have to specify shards.qt unless 
> you're on the default /select path for your handler. As this is configurable, 
> even the default search handler could be on another path. The shard requests 
> should thus default to the path if no shards.qt was specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316895#comment-14316895
 ] 

Anshum Gupta commented on SOLR-7099:


About the bin/solr zk call, I think it might make sense to have a more generic 
name. That 1. hides the implementation detail of running zk for anyone who 
doesn't want/need to know. 2. Gives us the freedom to replace the configuration 
manager (zk) with something else, if it ever comes to that.

and yes, totally +1 for this change.

> bin/solr -cloud mode should launch a local ZK in its own process using 
> zkcli's runzk option (instead of embedded in the first Solr process)
> ---
>
> Key: SOLR-7099
> URL: https://issues.apache.org/jira/browse/SOLR-7099
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Embedded ZK is great for unit testing and quick examples, but as soon as 
> someone wants to restart their cluster, embedded mode causes a lot of issues, 
> esp. if you restart the node that embeds ZK. Of course we don't want users to 
> have to install ZooKeeper just to get started with Solr either. 
> Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
> process but still within the Solr directory structure. We can hide the 
> details and complexity of working with ZK in the bin/solr script. The 
> solution to this should still make it very clear that this is for getting 
> started / examples and not to be used in production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7097) Update other Document in DocTransformer

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316875#comment-14316875
 ] 

Noble Paul commented on SOLR-7097:
--

I could not really understand the use case. Can you provide a PoC patch?

> Update other Document in DocTransformer
> ---
>
> Key: SOLR-7097
> URL: https://issues.apache.org/jira/browse/SOLR-7097
> Project: Solr
>  Issue Type: Improvement
>Reporter: yuanyun.cn
>Priority: Minor
>  Labels: searcher, transformers
>
> Solr DocTransformer is good, but it only allows us to change the current 
> document: add, remove, or update fields.
> It would be great if we could update another document (especially a previous 
> one), or better, delete a doc (especially useful during tests) or add a doc 
> in DocTransformer.
> Use case:
> We can use flat group mode (group.main=true) to put parent and child close to 
> each other (parent first), then use a DocTransformer to update the parent 
> document when accessing its child document.
> Some thoughts about the implementation:
> In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
> ResultContext, ReturnFields), when cacheMode=true, store each transformed 
> document in an array inside the loop and write them all at the end:
> {code}
> boolean cacheMode = req.getParams().getBool("cacheMode", false);
> SolrDocument[] cachedDocs = new SolrDocument[sz];
> for (int i = 0; i < sz; i++) {
>   SolrDocument sdoc = toSolrDocument(doc);
>   if (transformer != null) {
>     transformer.transform(sdoc, id);
>   }
>   if (cacheMode) {
>     cachedDocs[i] = sdoc;
>   } else {
>     writeSolrDocument(null, sdoc, returnFields, i);
>   }
> }
> if (transformer != null) {
>   transformer.setContext(null);
> }
> if (cacheMode) {
>   for (int i = 0; i < sz; i++) {
>     writeSolrDocument(null, cachedDocs[i], returnFields, i);
>   }
> }
> writeEndDocumentList();
> {code}






Re: 'ant test' -- calculation for tests.jvms

2015-02-11 Thread Dawid Weiss
> IMHO, this calculation should be adjusted so that a 3-core system gets a 
> value of 2.

A 3-core system? What happened to one of its, ahem, gems? :)

> I've been trying to find the code that calculates it, but I've come up empty 
> so far.

The code to adjust it automatically is in the runner itself, here:

https://github.com/carrotsearch/randomizedtesting/blob/master/junit4-ant/src/main/java/com/carrotsearch/ant/tasks/junit4/JUnit4.java#L1288

Feel free to provide a patch, although I think a 3-core system is
not something many people have. The rationale for decreasing the
number of threads on 4 cores and up is to leave some slack for GC, ANT
itself, etc. Otherwise you can brick the machine.
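For illustration, here is a toy version of such a core-count heuristic. The numbers are hypothetical, chosen only to match the behavior discussed in this thread; the real computation lives in the JUnit4.java link above.

```java
// Toy sketch of a tests.jvms heuristic (hypothetical, NOT the actual
// JUnit4.java logic): use all cores on tiny machines, give a 3-core
// machine 2 JVMs, and leave slack for GC/ANT on 4 cores and up.
public class JvmCountSketch {
    static int defaultJvmCount(int cores) {
        if (cores <= 2) return Math.max(1, cores); // tiny machines: use everything
        if (cores == 3) return 2;                  // the case discussed in this thread
        return cores - 2;                          // 4+ cores: keep slack for GC, ANT, etc.
    }

    public static void main(String[] args) {
        for (int cores : new int[] {1, 2, 3, 4, 8}) {
            System.out.println(cores + " cores -> " + defaultJvmCount(cores) + " JVMs");
        }
    }
}
```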

Dawid




[jira] [Comment Edited] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316831#comment-14316831
 ] 

Noble Paul edited comment on SOLR-6736 at 2/11/15 7:31 PM:
---

[~varunrajput] The syntax followed by your patch is not as specified in the 
description. I see no reason to deviate from the plan. The syntax is as 
important as the functionality.  BlobHandler.java implements a similar API  


was (Author: noble.paul):
[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality.  BlobHandler.java 
implements a similar API  

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf
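Since the proposed upload endpoint takes a single archive, the configset directory has to be bundled first. A minimal sketch using standard java.util.zip (file names and directory layout are illustrative, not part of the proposal):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

// Sketch: bundle a configset directory into the kind of zip the proposed
// POST /solr/admin/configs/<name> endpoint would accept.
public class ConfigZipSketch {
    static void zipDir(Path dir, Path zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            Files.walk(dir).filter(Files::isRegularFile).forEach(p -> {
                try {
                    // store each file under its path relative to the configset root
                    zos.putNextEntry(new ZipEntry(dir.relativize(p).toString()));
                    zos.write(Files.readAllBytes(p));
                    zos.closeEntry();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path conf = Files.createTempDirectory("testconf");
        Files.write(conf.resolve("schema.xml"), "<schema/>".getBytes());
        Files.write(conf.resolve("solrconfig.xml"), "<config/>".getBytes());
        Path zip = conf.resolveSibling("testconf.zip");
        zipDir(conf, zip);
        try (ZipFile zf = new ZipFile(zip.toFile())) {
            System.out.println("entries: " + zf.size()); // prints "entries: 2"
        }
    }
}
```

The resulting archive is what the curl example above posts as the request body.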






[jira] [Comment Edited] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316831#comment-14316831
 ] 

Noble Paul edited comment on SOLR-6736 at 2/11/15 7:31 PM:
---

[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality.  BlobHandler.java 
implements a similar API  


was (Author: noble.paul):
[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316834#comment-14316834
 ] 

ASF subversion and git services commented on LUCENE-6240:
-

Commit 1659049 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659049 ]

LUCENE-6240: ban @Seed in tests

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily accidentally commit \@Seed 
> annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316831#comment-14316831
 ] 

Noble Paul commented on SOLR-6736:
--

[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6736.patch, SOLR-6736.patch
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Resolved] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6240.
-
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily accidentally commit \@Seed 
> annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316823#comment-14316823
 ] 

ASF subversion and git services commented on LUCENE-6240:
-

Commit 1659044 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1659044 ]

LUCENE-6240: ban @Seed in tests

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily accidentally commit \@Seed 
> annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-1518) Merge Query and Filter classes

2015-02-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316802#comment-14316802
 ] 

Adrien Grand commented on LUCENE-1518:
--

bq. So this looks fine, makes it easy to use Filters as real queries. There is 
only one thing: the score returned is now always 0. If you want to get the 
old behaviour where you get the boost as score, you just have to wrap the 
Filter with ConstantScoreQuery, like it was before?

Exactly.

bq. One other thing: QueryWrapperFilter is now obsolete, or not?

I didn't want to remove it yet because we still have some APIs that take a 
filter and not a query (eg. IndexSearcher.search, FilteredQuery). I want to 
remove it eventually but I think it's still a bit early?

> Merge Query and Filter classes
> --
>
> Key: LUCENE-1518
> URL: https://issues.apache.org/jira/browse/LUCENE-1518
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 2.4
>Reporter: Uwe Schindler
> Fix For: 4.9, Trunk
>
> Attachments: LUCENE-1518.patch, LUCENE-1518.patch
>
>
> This issue presents a patch, that merges Queries and Filters in a way, that 
> the new Filter class extends Query. This would make it possible, to use every 
> filter as a query.
> The new abstract filter class would contain all methods of 
> ConstantScoreQuery, deprecate ConstantScoreQuery. If somebody implements the 
> Filter's getDocIdSet()/bits() methods he has nothing more to do, he could 
> just use the filter as a normal query.
> I do not want to completely convert Filters to ConstantScoreQueries. The idea 
> is to combine Queries and Filters in such a way, that every Filter can 
> automatically be used at all places where a Query can be used (e.g. also 
> alone a search query without any other constraint). For that, the abstract 
> Query methods must be implemented and return a "default" weight for Filters 
> which is the current ConstantScore Logic. If the filter is used as a real 
> filter (where the API wants a Filter), the getDocIdSet part could be directly 
> used, the weight is useless (as it is currently, too). The constant score 
> default implementation is only used when the Filter is used as a Query (e.g. 
> as direct parameter to Searcher.search()). For the special case of 
> BooleanQueries combining Filters and Queries the idea is, to optimize the 
> BooleanQuery logic in such a way, that it detects if a BooleanClause is a 
> Filter (using instanceof) and then directly uses the Filter API and not take 
> the burden of the ConstantScoreQuery (see LUCENE-1345).
> Here some ideas how to implement Searcher.search() with Query and Filter:
> - User runs Searcher.search() using a Filter as the only parameter. As every 
> Filter is also a ConstantScoreQuery, the query can be executed and returns 
> score 1.0 for all matching documents.
> - User runs Searcher.search() using a Query as the only parameter: No change, 
> all is the same as before
> - User runs Searcher.search() using a BooleanQuery as parameter: If the 
> BooleanQuery does not contain a Query that is subclass of Filter (the new 
> Filter) everything as usual. If the BooleanQuery only contains exactly one 
> Filter and nothing else the Filter is used as a constant score query. If 
> BooleanQuery contains clauses with Queries and Filters the new algorithm 
> could be used: The queries are executed and the results filtered with the 
> filters.
> For the user this has the main advantage: that he can construct his query 
> using a simplified API without thinking about Filters or Queries; you can 
> just combine clauses together. The scorer/weight logic then identifies the 
> cases to use the filter or the query weight API. Just like the query 
> optimizer of a RDB.
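The core idea — a Filter that is usable wherever a Query is expected, with a default constant-score behaviour — can be pictured with a toy model (this is NOT the real Lucene API, just a minimal sketch of the class relationship proposed above):

```java
// Toy model of the LUCENE-1518 proposal (not Lucene's actual API):
// Filter extends Query and supplies a default constant-score implementation,
// so a bare Filter can be used directly as a query.
public class FilterAsQuerySketch {
    abstract static class Query {
        abstract boolean matches(int doc);
        float score(int doc) { return 1.0f; } // real queries would compute a score
    }

    // A Filter only knows how to match; used as a Query it scores constantly.
    abstract static class Filter extends Query {
        @Override
        float score(int doc) { return matches(doc) ? 1.0f : 0.0f; }
    }

    public static void main(String[] args) {
        Filter evenDocs = new Filter() {
            @Override boolean matches(int doc) { return doc % 2 == 0; }
        };
        Query q = evenDocs; // usable anywhere a Query is expected
        System.out.println(q.score(4) + " " + q.score(5)); // prints "1.0 0.0"
    }
}
```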






[jira] [Commented] (LUCENE-6191) Spatial 2D faceting (heatmaps)

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316800#comment-14316800
 ] 

ASF subversion and git services commented on LUCENE-6191:
-

Commit 1659042 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659042 ]

LUCENE-6191: fix test bug when given 0-area input

> Spatial 2D faceting (heatmaps)
> --
>
> Key: LUCENE-6191
> URL: https://issues.apache.org/jira/browse/LUCENE-6191
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.1
>
> Attachments: LUCENE-6191__Spatial_heatmap.patch, 
> LUCENE-6191__Spatial_heatmap.patch, LUCENE-6191__Spatial_heatmap.patch
>
>
> Lucene spatial's PrefixTree (grid) based strategies index data in a way 
> highly amenable to faceting on grid cells to compute a so-called _heatmap_. 
> The underlying code in this patch uses the PrefixTreeFacetCounter utility 
> class which was recently refactored out of faceting for NumberRangePrefixTree 
> LUCENE-5735.  At a low level, the terms (== grid cells) are navigated 
> per-segment, forward only with TermsEnum.seek, so it's pretty quick and 
> furthermore requires no extra caches & no docvalues.  Ideally you should use 
> QuadPrefixTree (or Flex once it comes out) to maximize the number of grid 
> levels, which in turn maximizes the fidelity of choices when you ask for a grid 
> covering a region.  Conveniently, the provided capability returns the data in 
> a 2-D grid of counts, so the caller needn't know a thing about how the data 
> is encoded in the prefix tree.  Well almost... at this point they need to 
> provide a grid level, but I'll soon provide a means of deriving the grid 
> level based on a min/max cell count.
> I recommend QuadPrefixTree with geo=false so that you can provide a square 
> world-bounds (360x360 degrees), which means square grid cells which are more 
> desirable to display than rectangular cells.
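The "2-D grid of counts" returned to the caller can be pictured with a toy counter (illustrative only — the real implementation counts prefix-tree terms per segment, not raw points; the 360x360 bounds follow the recommendation above):

```java
// Toy heatmap: bucket (x, y) points into a gridSize x gridSize grid of
// counts over square world bounds [0, 360) x [0, 360). Illustrative only --
// the real code navigates prefix-tree grid-cell terms, not raw points.
public class HeatmapSketch {
    static int[][] heatmap(double[][] points, int gridSize) {
        int[][] counts = new int[gridSize][gridSize];
        double cell = 360.0 / gridSize; // square cells, per the square bounds
        for (double[] p : points) {
            int x = Math.min(gridSize - 1, (int) (p[0] / cell));
            int y = Math.min(gridSize - 1, (int) (p[1] / cell));
            counts[y][x]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        double[][] pts = { {10, 10}, {20, 20}, {200, 100} };
        int[][] grid = heatmap(pts, 4); // 4x4 grid of 90-degree cells
        System.out.println(grid[0][0] + " " + grid[1][2]); // prints "2 1"
    }
}
```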






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316798#comment-14316798
 ] 

Uwe Schindler commented on LUCENE-6240:
---

+1

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily accidentally commit \@Seed 
> annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6191) Spatial 2D faceting (heatmaps)

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316796#comment-14316796
 ] 

ASF subversion and git services commented on LUCENE-6191:
-

Commit 1659041 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1659041 ]

LUCENE-6191: fix test bug when given 0-area input

> Spatial 2D faceting (heatmaps)
> --
>
> Key: LUCENE-6191
> URL: https://issues.apache.org/jira/browse/LUCENE-6191
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.1
>
> Attachments: LUCENE-6191__Spatial_heatmap.patch, 
> LUCENE-6191__Spatial_heatmap.patch, LUCENE-6191__Spatial_heatmap.patch
>
>
> Lucene spatial's PrefixTree (grid) based strategies index data in a way 
> highly amenable to faceting on grid cells to compute a so-called _heatmap_. 
> The underlying code in this patch uses the PrefixTreeFacetCounter utility 
> class which was recently refactored out of faceting for NumberRangePrefixTree 
> LUCENE-5735.  At a low level, the terms (== grid cells) are navigated 
> per-segment, forward only with TermsEnum.seek, so it's pretty quick and 
> furthermore requires no extra caches & no docvalues.  Ideally you should use 
> QuadPrefixTree (or Flex once it comes out) to maximize the number of grid 
> levels, which in turn maximizes the fidelity of choices when you ask for a grid 
> covering a region.  Conveniently, the provided capability returns the data in 
> a 2-D grid of counts, so the caller needn't know a thing about how the data 
> is encoded in the prefix tree.  Well almost... at this point they need to 
> provide a grid level, but I'll soon provide a means of deriving the grid 
> level based on a min/max cell count.
> I recommend QuadPrefixTree with geo=false so that you can provide a square 
> world-bounds (360x360 degrees), which means square grid cells which are more 
> desirable to display than rectangular cells.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316794#comment-14316794
 ] 

Michael McCandless commented on LUCENE-6240:


+1

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily accidentally commit \@Seed 
> annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6198) two phase intersection

2015-02-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316791#comment-14316791
 ] 

Adrien Grand commented on LUCENE-6198:
--

bq. New patch that adds two-phase support to ConjunctionScorer.

By that I mean not only that ConjunctionScorer can take sub-clauses that 
support approximations, but also that in that case it will support 
approximations too.

> two phase intersection
> --
>
> Key: LUCENE-6198
> URL: https://issues.apache.org/jira/browse/LUCENE-6198
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6198.patch, LUCENE-6198.patch, LUCENE-6198.patch
>
>
> Currently some scorers have to do a lot of per-document work to determine if 
> a document is a match. The simplest example is a phrase scorer, but there are 
> others (spans, sloppy phrase, geospatial, etc).
> Imagine a conjunction with two MUST clauses, one that is a term that matches 
> all odd documents, another that is a phrase matching all even documents. 
> Today this conjunction will be very expensive, because the zig-zag 
> intersection is reading a ton of useless positions.
> The same problem happens with filteredQuery and anything else that acts like 
> a conjunction.
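The two-phase idea can be sketched as: iterate a cheap approximation first, and run the expensive per-document check only on documents that survive it. The code below is a toy model, not the actual Scorer/TwoPhaseIterator API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy two-phase intersection (not the real Lucene API): the cheap
// approximation is the docid overlap; the expensive matches() check
// (think: reading positions for a phrase) runs only on approximation hits,
// instead of on every document the phrase clause advances over.
public class TwoPhaseSketch {
    static int expensiveChecks = 0;

    // Expensive confirmation step, e.g. verifying phrase positions.
    static boolean matches(int doc) {
        expensiveChecks++;
        return doc % 2 == 0; // pretend the phrase really matches even docs only
    }

    static List<Integer> intersect(int[] termDocs, int[] phraseApproxDocs) {
        Set<Integer> approx = new HashSet<>();
        for (int d : phraseApproxDocs) approx.add(d);
        List<Integer> hits = new ArrayList<>();
        for (int d : termDocs) {
            if (approx.contains(d) && matches(d)) { // confirm only on overlap
                hits.add(d);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        int[] term = {1, 2, 3, 4, 5, 6};   // docs matching the term clause
        int[] phrase = {2, 4, 9};          // docs where the phrase *might* match
        System.out.println(intersect(term, phrase)); // prints "[2, 4]"
        System.out.println(expensiveChecks);         // prints "2" -- not 6
    }
}
```

Only two expensive checks run, because the cheap docid intersection already ruled out the rest.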






[jira] [Updated] (LUCENE-6198) two phase intersection

2015-02-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6198:
-
Attachment: LUCENE-6198.patch

New patch that adds two-phase support to ConjunctionScorer. luceneutil seems 
happy with the patch too:

{noformat}
Task                QPS baseline  StdDev   QPS patch  StdDev     Pct diff
HighPhrase                 12.26 (11.3%)      11.89  (5.3%)   -3.0% ( -17% -  15%)
AndHighLow                894.95  (9.5%)     874.08  (2.9%)   -2.3% ( -13% -  11%)
LowPhrase                  18.81  (9.2%)      18.51  (4.8%)   -1.6% ( -14% -  13%)
Fuzzy1                     72.76 (12.2%)      71.65  (9.6%)   -1.5% ( -20% -  23%)
MedPhrase                  54.31 (11.0%)      53.81  (3.2%)   -0.9% ( -13% -  14%)
LowTerm                   806.00 (11.9%)     808.20  (4.5%)    0.3% ( -14% -  18%)
Respell                    55.89 (10.2%)      56.57  (4.2%)    1.2% ( -11% -  17%)
OrNotHighLow             1102.88 (11.4%)    1116.63  (4.3%)    1.2% ( -13% -  19%)
LowSpanNear                 9.48  (9.5%)       9.61  (4.4%)    1.4% ( -11% -  16%)
LowSloppyPhrase            71.86  (8.8%)      72.89  (3.5%)    1.4% (  -9% -  15%)
MedSloppyPhrase            29.92 (10.3%)      30.35  (4.2%)    1.4% ( -11% -  17%)
MedSpanNear                79.24  (8.6%)      80.39  (3.2%)    1.5% (  -9% -  14%)
IntNRQ                     16.81  (9.4%)      17.06  (6.1%)    1.5% ( -12% -  18%)
HighSloppyPhrase           23.27 (11.6%)      23.64  (8.1%)    1.6% ( -16% -  24%)
OrHighHigh                 16.79 (10.6%)      17.08  (7.7%)    1.7% ( -15% -  22%)
OrHighNotLow               84.84 (10.3%)      86.32  (3.2%)    1.7% ( -10% -  17%)
OrNotHighHigh              56.28  (9.4%)      57.30  (1.9%)    1.8% (  -8% -  14%)
HighTerm                  123.91 (10.8%)     126.29  (2.8%)    1.9% ( -10% -  17%)
MedTerm                   243.44 (11.1%)     248.40  (2.9%)    2.0% ( -10% -  18%)
Wildcard                   74.84  (9.9%)      76.36  (3.1%)    2.0% (  -9% -  16%)
OrHighNotHigh              45.48  (9.9%)      46.47  (1.9%)    2.2% (  -8% -  15%)
OrHighLow                  79.36 (11.3%)      81.10  (6.5%)    2.2% ( -14% -  22%)
Prefix3                    74.29 (10.5%)      75.96  (4.9%)    2.2% ( -11% -  19%)
OrHighNotMed               53.37 (10.7%)      54.62  (2.5%)    2.3% (  -9% -  17%)
PKLookup                  266.92 (10.4%)     273.30  (3.4%)    2.4% ( -10% -  18%)
HighSpanNear               19.64 (10.4%)      20.11  (3.0%)    2.4% (  -9% -  17%)
OrNotHighMed              167.57 (11.7%)     171.67  (2.4%)    2.4% ( -10% -  18%)
OrHighMed                  72.90 (12.5%)      74.87  (6.6%)    2.7% ( -14% -  24%)
Fuzzy2                     50.70 (13.8%)      52.58  (8.4%)    3.7% ( -16% -  30%)
AndHighMed                160.13 (10.1%)     169.60  (3.4%)    5.9% (  -6% -  21%)
AndHighHigh                69.49  (8.8%)      74.19  (3.3%)    6.8% (  -4% -  20%)
{noformat}

> two phase intersection
> --
>
> Key: LUCENE-6198
> URL: https://issues.apache.org/jira/browse/LUCENE-6198
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6198.patch, LUCENE-6198.patch, LUCENE-6198.patch
>
>
> Currently some scorers have to do a lot of per-document work to determine if 
> a document is a match. The simplest example is a phrase scorer, but there are 
> others (spans, sloppy phrase, geospatial, etc).
> Imagine a conjunction with two MUST clauses, one that is a term that matches 
> all odd documents, another that is a phrase matching all even documents. 
> Today this conjunction will be very expensive, because the zig-zag 
> intersection is reading a ton of useless positions.
> The same problem happens with filteredQuery and anything else that acts like 
> a conjunction.






[jira] [Commented] (LUCENE-1518) Merge Query and Filter classes

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316785#comment-14316785
 ] 

Uwe Schindler commented on LUCENE-1518:
---

Long time ago :-)

So this looks fine, makes it easy to use Filters as real queries. There is only 
one thing: the score returned is now always 0. If you want to get the old 
behaviour where you get the boost as score, you just have to wrap the Filter 
with ConstantScoreQuery, like it was before?

There is a typo in the description of Filter: "Convenient base class for building 
queries that only perform matching, but no scoring. The scorer produced by such 
queries always returns 0." - I think it should be "returns 0 as score".

One other thing: QueryWrapperFilter is now obsolete, or not?

So looks really fine.

> Merge Query and Filter classes
> --
>
> Key: LUCENE-1518
> URL: https://issues.apache.org/jira/browse/LUCENE-1518
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 2.4
>Reporter: Uwe Schindler
> Fix For: 4.9, Trunk
>
> Attachments: LUCENE-1518.patch, LUCENE-1518.patch
>
>
> This issue presents a patch, that merges Queries and Filters in a way, that 
> the new Filter class extends Query. This would make it possible, to use every 
> filter as a query.
> The new abstract filter class would contain all methods of 
> ConstantScoreQuery, deprecate ConstantScoreQuery. If somebody implements the 
> Filter's getDocIdSet()/bits() methods he has nothing more to do, he could 
> just use the filter as a normal query.
> I do not want to completely convert Filters to ConstantScoreQueries. The idea 
> is to combine Queries and Filters in such a way, that every Filter can 
> automatically be used at all places where a Query can be used (e.g. also 
> alone a search query without any other constraint). For that, the abstract 
> Query methods must be implemented and return a "default" weight for Filters 
> which is the current ConstantScore Logic. If the filter is used as a real 
> filter (where the API wants a Filter), the getDocIdSet part could be directly 
> used, the weight is useless (as it is currently, too). The constant score 
> default implementation is only used when the Filter is used as a Query (e.g. 
> as direct parameter to Searcher.search()). For the special case of 
> BooleanQueries combining Filters and Queries the idea is, to optimize the 
> BooleanQuery logic in such a way, that it detects if a BooleanClause is a 
> Filter (using instanceof) and then directly uses the Filter API and not take 
> the burden of the ConstantScoreQuery (see LUCENE-1345).
> Here some ideas how to implement Searcher.search() with Query and Filter:
> - User runs Searcher.search() using a Filter as the only parameter. As every 
> Filter is also a ConstantScoreQuery, the query can be executed and returns 
> score 1.0 for all matching documents.
> - User runs Searcher.search() using a Query as the only parameter: No change, 
> all is the same as before
> - User runs Searcher.search() using a BooleanQuery as parameter: If the 
> BooleanQuery does not contain a Query that is subclass of Filter (the new 
> Filter) everything as usual. If the BooleanQuery only contains exactly one 
> Filter and nothing else the Filter is used as a constant score query. If 
> BooleanQuery contains clauses with Queries and Filters the new algorithm 
> could be used: The queries are executed and the results filtered with the 
> filters.
> For the user this has the main advantage: that he can construct his query 
> using a simplified API without thinking about Filters or Queries; you can 
> just combine clauses together. The scorer/weight logic then identifies the 
> cases to use the filter or the query weight API. Just like the query 
> optimizer of a RDB.
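The proposed contract can be sketched with tiny stand-in classes (illustrative only, not the real Lucene Query/Filter API): every Filter inherits a default constant-score Query behaviour, while getDocIdSet stays available for the pure filtering case.

```java
import java.util.BitSet;

/** Illustrative stand-ins for the proposal above; not the real Lucene classes. */
abstract class Query {
    abstract boolean matches(int docId);
    abstract float score(int docId);
}

abstract class Filter extends Query {
    /** The core filter contract: the set of matching documents. */
    abstract BitSet getDocIdSet();

    /** Default behaviour when used as a Query: constant score for every match. */
    @Override boolean matches(int docId) { return getDocIdSet().get(docId); }
    @Override float score(int docId) { return 1.0f; }
}

public class FilterAsQuerySketch {
    public static void main(String[] args) {
        // A filter matching even doc ids, usable directly as a standalone query.
        Filter evenDocs = new Filter() {
            @Override BitSet getDocIdSet() {
                BitSet bits = new BitSet(8);
                for (int i = 0; i < 8; i += 2) bits.set(i);
                return bits;
            }
        };
        System.out.println(evenDocs.matches(2) + " " + evenDocs.score(2)); // prints: true 1.0
        System.out.println(evenDocs.matches(3));                           // prints: false
    }
}
```

A BooleanQuery-style optimizer along these lines would then check `clause instanceof Filter` and call getDocIdSet() directly, skipping the constant-score path.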



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-11 Thread Sachin Goyal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316779#comment-14316779
 ] 

Sachin Goyal commented on SOLR-6832:


Thank you [~thelabdude].
Please let me know how we can get this committed into the trunk and I can edit 
the Solr reference guide.
I would also like to back-port this into the 5x branch.

> Queries be served locally rather than being forwarded to another replica
> 
>
> Key: SOLR-6832
> URL: https://issues.apache.org/jira/browse/SOLR-6832
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Sachin Goyal
>Assignee: Timothy Potter
> Attachments: SOLR-6832.patch, SOLR-6832.patch, SOLR-6832.patch, 
> SOLR-6832.patch
>
>
> Currently, I see that the code flow for a query in SolrCloud is as follows:
> For distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
> For non-distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
> \\
> \\
> \\
> For a distributed query, the request is always sent to all the shards even if 
> the originating SolrCore (handling the original distributed query) is a 
> replica of one of the shards.
> If the original Solr-Core can check itself before sending http requests for 
> any shard, we can probably save some network hopping and gain some 
> performance.
> \\
> \\
> We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
> to fix this behavior (most likely the former and not the latter).
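A hedged sketch of the idea (class names and URL shapes below are hypothetical, not Solr's actual code): before building an HTTP request for a shard, check whether one of its replicas lives on the current node.

```java
import java.util.List;

/** Hypothetical sketch of SOLR-6832's idea: prefer a replica hosted on this
 *  node over forwarding the shard request to a remote replica. */
public class PreferLocalShards {
    /** Pick a replica URL for one shard, preferring a local one. */
    static String pickReplica(List<String> replicaUrls, String thisNodeBaseUrl) {
        for (String url : replicaUrls) {
            if (url.startsWith(thisNodeBaseUrl)) {
                return url; // serve locally, skipping a network hop
            }
        }
        return replicaUrls.get(0); // fall back to a remote replica
    }

    public static void main(String[] args) {
        List<String> replicas = List.of(
                "http://node2:8983/solr/coll_shard1_replica1",
                "http://node1:8983/solr/coll_shard1_replica2");
        // On node1, the local replica wins:
        System.out.println(pickReplica(replicas, "http://node1:8983"));
    }
}
```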






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316778#comment-14316778
 ] 

Ryan Ernst commented on LUCENE-6240:


+1

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily commit the \@Seed annotation by 
> accident, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316768#comment-14316768
 ] 

Alan Woodward commented on LUCENE-6240:
---

+1!  And thanks for fixing.

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily commit the \@Seed annotation by 
> accident, hurting the quality of the test. We should detect this.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316767#comment-14316767
 ] 

Timothy Potter commented on SOLR-6311:
--

nm! If I look at branch5x, my question is answered ;-) Sometimes you have to 
look outside of trunk to see clearly!

> SearchHandler should use path when no qt or shard.qt parameter is specified
> ---
>
> Key: SOLR-6311
> URL: https://issues.apache.org/jira/browse/SOLR-6311
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
>Assignee: Timothy Potter
> Attachments: SOLR-6311.patch
>
>
> When performing distributed searches, you have to specify shards.qt unless 
> you're on the default /select path for your handler. As this is configurable, 
> even the default search handler could be on another path. The shard requests 
> should thus default to the path if no shards.qt was specified.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316751#comment-14316751
 ] 

Timothy Potter commented on SOLR-6311:
--

I'm going with [~hossman]'s suggestion of using the LUCENE_MATCH_VERSION and am 
targeting this fix for the 5.1 release. So my first inclination was to do:

{code}
if (req.getCore().getSolrConfig().luceneMatchVersion.onOrAfter(Version.LUCENE_5_1_0)) {
  ...
}
{code}

But Version.LUCENE_5_1_0 is deprecated, so do I do this instead? 

{code}
if (req.getCore().getSolrConfig().luceneMatchVersion.onOrAfter(Version.LATEST)) {
  ...
}
{code}

I guess it's the deprecated thing that's throwing me off.

> SearchHandler should use path when no qt or shard.qt parameter is specified
> ---
>
> Key: SOLR-6311
> URL: https://issues.apache.org/jira/browse/SOLR-6311
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
>Assignee: Timothy Potter
> Attachments: SOLR-6311.patch
>
>
> When performing distributed searches, you have to specify shards.qt unless 
> you're on the default /select path for your handler. As this is configurable, 
> even the default search handler could be on another path. The shard requests 
> should thus default to the path if no shards.qt was specified.






[jira] [Comment Edited] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316743#comment-14316743
 ] 

Uwe Schindler edited comment on LUCENE-6239 at 2/11/15 6:46 PM:


Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance object.

That way it would be possible to get REFERENCE_SIZE without the HotSpot bean, 
just by reading a static final int constant... The same applies to the JVM 
bitness.

Would this be a valid use? In fact nothing can break; it could just be 
that our code cannot see those constants, but that's no different from the 
HotSpot bean. We are just reading a public static constant from Unsafe (via 
reflection).

We just did not use that in RAMUsageEstimator before, because in Java 6 those 
constants were not there! On the other hand, in Java 9 Unsafe is likely to 
disappear, so I think we should really work without Unsafe.


was (Author: thetaphi):
Hi,
I just found out: With Java 1.7+, all the Unsafe constants are exposed as 
public static final variables. So we dont need to directly access the unsafe 
constant.

By that it would be possible to get the REFERENCE_SIZE without hotspot bean 
just by getting a static final int constant... The same applies fo the JVM 
bitness.

Would this be a valid use? In fact there can break nothing, it could just be 
that our code cannot see those constants, but thats not different from the 
HotspotBean. We are just reading a public static constant from Unsafe (via 
reflection).

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.






[jira] [Comment Edited] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316743#comment-14316743
 ] 

Uwe Schindler edited comment on LUCENE-6239 at 2/11/15 6:46 PM:


Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance.

That way it would be possible to get REFERENCE_SIZE without the HotSpot bean, 
just by reading a static final int constant... The same applies to the JVM 
bitness.

Would this be a valid use? In fact nothing can break; it could just be 
that our code cannot see those constants, but that's no different from the 
HotSpot bean. We are just reading a public static constant from Unsafe (via 
reflection).

We just did not use that in RAMUsageEstimator before, because in Java 6 those 
constants were not there! On the other hand, in Java 9 Unsafe is likely to 
disappear, so I think we should really work without Unsafe.


was (Author: thetaphi):
Hi,
I just found out: With Java 1.7+, all the Unsafe constants are exposed as 
public static final variables. So we dont need to directly access the unsafe 
constant.

By that it would be possible to get the REFERENCE_SIZE without hotspot bean 
just be getting a static final int constant... The same applies fo the JVM 
bitness.

Would this be a valid use? In fact there can break nothing, it could just be 
that our code cabnot see those constants, but thats not different from the 
HotspotBean.

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.






[jira] [Updated] (SOLR-7101) JmxMonitoredMap can throw an exception in clear when queryNames fails.

2015-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7101:
--
Attachment: SOLR-7101.patch

> JmxMonitoredMap can throw an exception in clear when queryNames fails.
> --
>
> Key: SOLR-7101
> URL: https://issues.apache.org/jira/browse/SOLR-7101
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7101.patch
>
>
> This was added in SOLR-2927 - we should be lenient on failures here like we 
> are in other parts of this class.






[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316743#comment-14316743
 ] 

Uwe Schindler commented on LUCENE-6239:
---

Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance.

That way it would be possible to get REFERENCE_SIZE without the HotSpot bean, 
just by reading a static final int constant... The same applies to the JVM 
bitness.

Would this be a valid use? In fact nothing can break; it could just be 
that our code cannot see those constants, but that's no different from the 
HotSpot bean.

We just did not use that in RAMUsageEstimator before, because in Java 6 those 
constants were not there! On the other hand, in Java 9 Unsafe is likely to 
disappear, so I think we should really work without Unsafe.
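A minimal sketch of the reflection approach described above (assuming a HotSpot-style JDK; ARRAY_OBJECT_INDEX_SCALE is one plausible public static field to read, not necessarily the one a patch would use):

```java
import java.lang.reflect.Field;

/** Sketch: read a public static constant from sun.misc.Unsafe via reflection,
 *  without touching the Unsafe instance. ARRAY_OBJECT_INDEX_SCALE is the slot
 *  size of an Object[], i.e. the reference size (4 with compressed oops, 8 without). */
public class UnsafeConstantsSketch {
    static int referenceSize() {
        try {
            Class<?> unsafe = Class.forName("sun.misc.Unsafe");
            Field f = unsafe.getField("ARRAY_OBJECT_INDEX_SCALE"); // public static final int
            return f.getInt(null); // static field, so no instance is needed
        } catch (ReflectiveOperationException e) {
            return 8; // conservative fallback if the constant is not visible
        }
    }

    public static void main(String[] args) {
        System.out.println("reference size: " + referenceSize());
    }
}
```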

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.






[jira] [Created] (SOLR-7101) JmxMonitoredMap can throw an exception in clear when queryNames fails.

2015-02-11 Thread Mark Miller (JIRA)
Mark Miller created SOLR-7101:
-

 Summary: JmxMonitoredMap can throw an exception in clear when 
queryNames fails.
 Key: SOLR-7101
 URL: https://issues.apache.org/jira/browse/SOLR-7101
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.1


This was added in SOLR-2927 - we should be lenient on failures here like we are 
in other parts of this class.






'ant test' -- calculation for tests.jvms

2015-02-11 Thread Shawn Heisey
If the computer has four CPU cores, running tests via the build system
will set tests.jvms to 3, but if it has three CPU cores, it will set
tests.jvms to 1.

IMHO, this calculation should be adjusted so that a 3-core system gets a
value of 2.  I've been trying to find the code that calculates it, but
I've come up empty so far.

Does anyone like or hate this idea?

Thanks,
Shawn
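The two data points in the message can be modelled like this (the real computation lives in the randomized-testing build glue, so both functions below are guesses at its shape, not the actual code):

```java
/** Hypothetical model of the tests.jvms heuristic discussed above. Only two
 *  data points are known from the message: 4 cores -> 3 JVMs, 3 cores -> 1. */
public class TestsJvmsHeuristic {
    /** One formula consistent with the observed behaviour. */
    static int current(int cores) {
        return cores >= 4 ? cores - 1 : 1;
    }

    /** The proposed adjustment: a 3-core system gets 2 JVMs. */
    static int proposed(int cores) {
        return Math.max(1, cores - 1);
    }

    public static void main(String[] args) {
        for (int cores = 1; cores <= 4; cores++) {
            System.out.println(cores + " cores: current=" + current(cores)
                    + " proposed=" + proposed(cores));
        }
    }
}
```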





[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316727#comment-14316727
 ] 

Adrien Grand commented on LUCENE-6240:
--

+1

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily commit the \@Seed annotation by 
> accident, hurting the quality of the test. We should detect this.






[jira] [Updated] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6240:

Attachment: LUCENE-6240.patch

Patch. You can still use this annotation when debugging, but just don't commit 
it.

precommit / jenkins will fail like this:
{noformat}
[forbidden-apis] Forbidden class/interface/annotation use: 
com.carrotsearch.randomizedtesting.annotations.Seed [Don't commit hardcoded 
seeds]
[forbidden-apis]   in org.apache.lucene.TestDemo (TestDemo.java, annotation on 
class declaration)
[forbidden-apis] Scanned 1118 (and 910 related) class file(s) for forbidden API 
invocations (in 0.42s), 1 error(s).

BUILD FAILED
{noformat}
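For reference, the forbidden-apis signatures entry behind an error like the one above would presumably look something like this (the patch itself is not shown here, so treat the exact wording as an assumption about the signatures-file format):

{noformat}
@defaultMessage Don't commit hardcoded seeds
com.carrotsearch.randomizedtesting.annotations.Seed
{noformat}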

> ban @Seed in tests.
> ---
>
> Key: LUCENE-6240
> URL: https://issues.apache.org/jira/browse/LUCENE-6240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6240.patch
>
>
> If someone is debugging, they can easily commit the \@Seed annotation by 
> accident, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6030) Add norms patched compression which uses table for most common values

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316702#comment-14316702
 ] 

ASF subversion and git services commented on LUCENE-6030:
-

Commit 1659025 from [~rcmuir] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1659025 ]

LUCENE-6030: remove fixed @Seed

> Add norms patched compression which uses table for most common values
> -
>
> Key: LUCENE-6030
> URL: https://issues.apache.org/jira/browse/LUCENE-6030
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6030.patch
>
>
> We have added the PATCHED norms sub format in lucene 50, which uses a bitset 
> to mark documents that have the most common value (when >97% of the documents 
> have that value).  This works well for fields that have a predominant value 
> length, and then a small number of docs with some other random values.  But 
> another common case is having a handful of very common value lengths, like 
> with a title field.
> We can use a table (see TABLE_COMPRESSION) to store the most common values, 
> and save an ordinal for the "other" case, at which point we can look up in 
> the secondary patch table.
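The table scheme described in the issue can be sketched as follows (a simplified illustration of the encoding side only, not the actual codec code):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of table compression for norms: the handful of most common values
 *  go into a small table, each document stores a tiny ordinal, and one
 *  reserved ordinal means "patched: look the value up in a secondary table". */
public class TableNormsSketch {
    final long[] table;                       // most common values
    final Map<Long, Integer> ordinals = new HashMap<>();
    final int otherOrd;                       // reserved "other" ordinal

    TableNormsSketch(long[] commonValues) {
        table = commonValues;
        for (int i = 0; i < table.length; i++) ordinals.put(table[i], i);
        otherOrd = table.length;              // the patch case
    }

    /** Encode one per-document value as a small ordinal. */
    int encode(long value) {
        return ordinals.getOrDefault(value, otherOrd);
    }

    public static void main(String[] args) {
        TableNormsSketch t = new TableNormsSketch(new long[] {4, 7, 12});
        System.out.println(t.encode(7));   // prints: 1 (common value, in the table)
        System.out.println(t.encode(99));  // prints: 3 (the reserved "other" ordinal)
    }
}
```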






[jira] [Commented] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316697#comment-14316697
 ] 

Timothy Potter commented on SOLR-6832:
--

Also, I don't think we need to include this parameter in all of the configs, as 
we're trying to get away from bloated configs. So I changed the patch to just 
include in the sample techproducts configs. We'll also need to document this 
parameter in the Solr reference guide.

> Queries be served locally rather than being forwarded to another replica
> 
>
> Key: SOLR-6832
> URL: https://issues.apache.org/jira/browse/SOLR-6832
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Sachin Goyal
>Assignee: Timothy Potter
> Attachments: SOLR-6832.patch, SOLR-6832.patch, SOLR-6832.patch, 
> SOLR-6832.patch
>
>
> Currently, I see that the code flow for a query in SolrCloud is as follows:
> For distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
> For non-distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
> \\
> \\
> \\
> For a distributed query, the request is always sent to all the shards even if 
> the originating SolrCore (handling the original distributed query) is a 
> replica of one of the shards.
> If the original Solr-Core can check itself before sending http requests for 
> any shard, we can probably save some network hopping and gain some 
> performance.
> \\
> \\
> We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
> to fix this behavior (most likely the former and not the latter).






[jira] [Updated] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6832:
-
Attachment: SOLR-6832.patch

[~sachingoyal] It seems like your latest patch was created / tested against 
branch4x vs. trunk? It's better to work against trunk for new features and then 
we'll back-port the changes as needed. I went ahead and migrated your patch to 
work with trunk and cleaned up a few places in the code. Overall looking good!

> Queries be served locally rather than being forwarded to another replica
> 
>
> Key: SOLR-6832
> URL: https://issues.apache.org/jira/browse/SOLR-6832
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Sachin Goyal
>Assignee: Timothy Potter
> Attachments: SOLR-6832.patch, SOLR-6832.patch, SOLR-6832.patch, 
> SOLR-6832.patch
>
>
> Currently, I see that the code flow for a query in SolrCloud is as follows:
> For distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
> For non-distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
> \\
> \\
> \\
> For a distributed query, the request is always sent to all the shards even if 
> the originating SolrCore (handling the original distributed query) is a 
> replica of one of the shards.
> If the original Solr-Core can check itself before sending http requests for 
> any shard, we can probably save some network hopping and gain some 
> performance.
> \\
> \\
> We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
> to fix this behavior (most likely the former and not the latter).






Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11780 - Failure!

2015-02-11 Thread david.w.smi...@gmail.com
It reproduces; I’m on it.

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Wed, Feb 11, 2015 at 12:30 PM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11780/
> Java: 32bit/jdk1.8.0_31 -server -XX:+UseConcMarkSweepGC
>
> 1 tests failed.
> FAILED:
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom {#3
> seed=[6B7EE18F8044BF08:1263454538DCD1B5]}
>
> Error Message:
> expected:<1> but was:<0>
>
> Stack Trace:
> java.lang.AssertionError: expected:<1> but was:<0>
> at
> __randomizedtesting.SeedInfo.seed([6B7EE18F8044BF08:1263454538DCD1B5]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at org.junit.Assert.assertEquals(Assert.java:456)
> at
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:221)
> at
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:188)
> at
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:201)
> at
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at
> org

RE: [VOTE] 5.0.0 RC2

2015-02-11 Thread Uwe Schindler
Hi,

For me it worked, maybe Europe is a better location for downloads from 
people.a.o. With Java 7 and Java 8 tested, I got the following result:

SUCCESS! [2:33:21.113312]

I also did some manual checks of the documentation and Solr artifacts under Windows 
with whitespace in the user name (no adaptation of my Lucene apps - too much work).

Finally,
My vote is:
+1 to release!

Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Steve Rowe [mailto:sar...@gmail.com]
> Sent: Wednesday, February 11, 2015 1:23 AM
> To: dev@lucene.apache.org
> Subject: Re: [VOTE] 5.0.0 RC2
> 
> I’ll work on adding multiple retries with a pause between, hopefully that’ll
> help. - Steve
> 
> > On Feb 10, 2015, at 6:08 PM, Anshum Gupta 
> wrote:
> >
> > Thanks Uwe. I've tried it a few times and it's failed after retrying so I'm 
> > just
> sticking to running it after manually downloading.
> >
> > On Tue, Feb 10, 2015 at 2:17 PM, Uwe Schindler 
> wrote:
> > Actually this is how it looked like:
> >
> >
> >
> > thetaphi@opteron:~/lucene$ tail -100f nohup.out
> >
> > Java 1.7 JAVA_HOME=/home/thetaphi/jdk1.7.0_76
> >
> > Java 1.8 JAVA_HOME=/home/thetaphi/jdk1.8.0_31
> >
> > NOTE: output encoding is UTF-8
> >
> >
> >
> > Load release URL
> "http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-
> rev1658469"...
> >
> >   unshortened: http://people.apache.org/~anshum/staging_area/lucene-
> solr-5.0.0-RC2-rev1658469/
> >
> > Retrying download of url
> http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-
rev1658469/ after exception: <urlopen error [Errno 110] Connection timed out>
> >
> > Test Lucene...
> >   test basics...
> >   get KEYS
> > 0.1 MB in 1.42 sec (0.1 MB/sec)
> >   check changes HTML...
> >   download lucene-5.0.0-src.tgz...
> > 27.9 MB in 8.32 sec (3.4 MB/sec)
> > verify md5/sha1 digests
> > verify sig
> > verify trust
> >   GPG: gpg: WARNING: This key is not certified with a trusted signature!
> >   download lucene-5.0.0.tgz...
> > 64.0 MB in 15.83 sec (4.0 MB/sec)
> > verify md5/sha1 digests
> > verify sig
> > verify trust
> >   GPG: gpg: WARNING: This key is not certified with a trusted signature!
> >   download lucene-5.0.0.zip...
> > 73.5 MB in 23.91 sec (3.1 MB/sec)
> > verify md5/sha1 digests
> > verify sig
> > verify trust
> >   GPG: gpg: WARNING: This key is not certified with a trusted signature!
> >   unpack lucene-5.0.0.tgz...
> > verify JAR metadata/identity/no javax.* or java.* classes...
> > test demo with 1.7...
> >   got 5647 hits for query "lucene"
> > checkindex with 1.7...
> > test demo with 1.8...
> >   got 5647 hits for query "lucene"
> > checkindex with 1.8...
> > check Lucene's javadoc JAR
> >   unpack lucene-5.0.0.zip...
> > verify JAR metadata/identity/no javax.* or java.* classes...
> > test demo with 1.7...
> >   got 5647 hits for query "lucene"
> > checkindex with 1.7...
> > test demo with 1.8...
> >   got 5647 hits for query "lucene"
> > checkindex with 1.8...
> > check Lucene's javadoc JAR
> >   unpack lucene-5.0.0-src.tgz...
> > make sure no JARs/WARs in src dist...
> > run "ant validate"
> > run tests w/ Java 7 and testArgs=''...
> > test demo with 1.7...
> >   got 210 hits for query "lucene"
> > checkindex with 1.7...
> > generate javadocs w/ Java 7...
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> > From: Uwe Schindler [mailto:u...@thetaphi.de]
> > Sent: Tuesday, February 10, 2015 11:15 PM
> > To: dev@lucene.apache.org
> > Subject: RE: [VOTE] 5.0.0 RC2
> >
> > It is still running with http. For me it repeated one download because of timeout, but it passed through this.
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> > From: Anshum Gupta [mailto:ans...@anshumgupta.net]
> > Sent: Tuesday, February 10, 2015 10:59 PM
> > To: dev@lucene.apache.org
> > Subject: Re: [VOTE] 5.0.0 RC2
> >
> > I'm curious to know how many people actually ran it using http vs downloading the tgz. Did someone succeed with http?
> >
> > On Tue, Feb 10, 2015 at 1:43 PM, Uwe Schindler wrote:
> >
> > Don’t forget to also test Java 8!
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 /path/to/jdk1.8.0 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
> >
> > Uwe
> >
> > -
> >

[jira] [Commented] (LUCENE-6030) Add norms patched compression which uses table for most common values

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316676#comment-14316676
 ] 

ASF subversion and git services commented on LUCENE-6030:
-

Commit 1659022 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659022 ]

LUCENE-6030: remove fixed @Seed

> Add norms patched compression which uses table for most common values
> -
>
> Key: LUCENE-6030
> URL: https://issues.apache.org/jira/browse/LUCENE-6030
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6030.patch
>
>
> We have added the PATCHED norms sub format in lucene 50, which uses a bitset 
> to mark documents that have the most common value (when >97% of the documents 
> have that value).  This works well for fields that have a predominant value 
> length, and then a small number of docs with some other random values.  But 
> another common case is having a handful of very common value lengths, like 
> with a title field.
> We can use a table (see TABLE_COMPRESSION) to store the most common values, 
> and save an ordinal for the "other" case, at which point we can look up in 
> the secondary patch table.
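The table-plus-patch scheme described above can be sketched roughly as follows. The names (NormsTableSketch, OTHER_ORD) are illustrative only, not Lucene's actual PATCHED/TABLE_COMPRESSION implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: common values live in a small table addressed by a
// 1-byte per-document ordinal; a sentinel ordinal redirects uncommon values
// to a secondary "patch" lookup.
class NormsTableSketch {
    static final int OTHER_ORD = 255;      // sentinel ordinal for uncommon values
    final long[] table;                    // ordinal -> common value
    final Map<Integer, Long> patch;        // docID -> uncommon value
    final byte[] ordinals;                 // one byte per document

    NormsTableSketch(long[] commonValues, int maxDoc) {
        this.table = commonValues;
        this.patch = new HashMap<>();
        this.ordinals = new byte[maxDoc];
    }

    void set(int docID, long value) {
        for (int ord = 0; ord < table.length; ord++) {
            if (table[ord] == value) {
                ordinals[docID] = (byte) ord;  // common value: store its ordinal
                return;
            }
        }
        ordinals[docID] = (byte) OTHER_ORD;    // uncommon: stash in the patch table
        patch.put(docID, value);
    }

    long get(int docID) {
        int ord = ordinals[docID] & 0xFF;
        return ord == OTHER_ORD ? patch.get(docID) : table[ord];
    }
}
```

The point of the design is that a field with a handful of common value lengths costs one byte per document plus a tiny table, and only the rare "other" documents pay for the secondary lookup.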



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1431#comment-1431
 ] 

ASF subversion and git services commented on LUCENE-4524:
-

Commit 1659021 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659021 ]

LUCENE-4524: remove fixed @Seed

> Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
> -
>
> Key: LUCENE-4524
> URL: https://issues.apache.org/jira/browse/LUCENE-4524
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs, core/index, core/search
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Alan Woodward
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch, 
> LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch
>
>
> spinoff from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
> {noformat}
> hey folks, 
> I have spend a hell lot of time on the positions branch to make 
> positions and offsets working on all queries if needed. The one thing 
> that bugged me the most is the distinction between DocsEnum and 
> DocsAndPositionsEnum. Really when you look at it closer DocsEnum is a 
> DocsAndFreqsEnum and if we omit Freqs we should return a DocIdSetIter. 
> Same is true for 
> DocsAndPostionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
> don't really see the benefits from this. We should rather make the 
> interface simple and call it something like PostingsEnum where you 
> have to specify flags on the TermsIterator and if we can't provide the 
> sufficient enum we throw an exception? 
> I just want to bring up the idea here since it might simplify a lot 
> for users as well for us when improving our positions / offset etc. 
> support. 
> thoughts? Ideas? 
> simon 
> {noformat}
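The single-enum-with-flags idea from the quoted mail can be sketched as below. The flag constants and postings() method here are hypothetical stand-ins, not the final PostingsEnum API:

```java
// Illustrative sketch: callers request capabilities via flags on a single
// postings API, and get an exception when the index cannot provide them,
// instead of a family of DocsAnd*Enum classes.
class PostingsFlagsSketch {
    static final int FLAG_FREQS = 1;
    static final int FLAG_POSITIONS = 2;
    static final int FLAG_OFFSETS = 4;

    // Pretend this field was indexed with docs + freqs only.
    static final int INDEXED_CAPS = FLAG_FREQS;

    static String postings(int flags) {
        if ((flags & ~INDEXED_CAPS) != 0) {
            // The index simply cannot satisfy the request: fail loudly.
            throw new IllegalArgumentException("index does not support requested flags");
        }
        return (flags & FLAG_FREQS) != 0 ? "docs+freqs enum" : "docs-only enum";
    }
}
```

The simplification is in the caller's code path: one method, one return type, and the "can't provide it" case is an explicit error rather than a null or a different class.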






[jira] [Commented] (LUCENE-6030) Add norms patched compression which uses table for most common values

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316659#comment-14316659
 ] 

ASF subversion and git services commented on LUCENE-6030:
-

Commit 1659020 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1659020 ]

LUCENE-6030: remove fixed @Seed

> Add norms patched compression which uses table for most common values
> -
>
> Key: LUCENE-6030
> URL: https://issues.apache.org/jira/browse/LUCENE-6030
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6030.patch
>
>
> We have added the PATCHED norms sub format in lucene 50, which uses a bitset 
> to mark documents that have the most common value (when >97% of the documents 
> have that value).  This works well for fields that have a predominant value 
> length, and then a small number of docs with some other random values.  But 
> another common case is having a handful of very common value lengths, like 
> with a title field.
> We can use a table (see TABLE_COMPRESSION) to store the most common values, 
> and save an ordinal for the "other" case, at which point we can look up in 
> the secondary patch table.






[jira] [Created] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6240:
---

 Summary: ban @Seed in tests.
 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


If someone is debugging, they can easily commit an \@Seed annotation by 
accident, hurting the quality of the test. We should detect this.
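One way to detect this could be a simple source scan run as part of validation. This sketch is illustrative only, not the actual Lucene build check:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Hypothetical validation pass: fail if any committed test source still
// carries a fixed @Seed annotation left over from a debugging session.
class SeedChecker {
    static boolean hasFixedSeed(String source) {
        // Match the annotation itself ("@Seed(...)"), not the word "seed" in prose.
        return source.matches("(?s).*@Seed\\s*\\(.*");
    }

    public static void main(String[] args) throws IOException {
        try (Stream<Path> files = Files.walk(Paths.get(args[0]))) {
            files.filter(p -> p.toString().endsWith("Test.java"))
                 .filter(p -> {
                     try {
                         return hasFixedSeed(new String(Files.readAllBytes(p)));
                     } catch (IOException e) {
                         throw new RuntimeException(e);
                     }
                 })
                 .forEach(p -> { throw new RuntimeException("fixed @Seed in " + p); });
        }
    }
}
```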






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_31) - Build # 4375 - Still Failing!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4375/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([6029226F8A9430BA:E87D1DB524685D42]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1

[jira] [Commented] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316651#comment-14316651
 ] 

ASF subversion and git services commented on LUCENE-4524:
-

Commit 1659018 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1659018 ]

LUCENE-4524: remove fixed @Seed

> Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
> -
>
> Key: LUCENE-4524
> URL: https://issues.apache.org/jira/browse/LUCENE-4524
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs, core/index, core/search
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Alan Woodward
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch, 
> LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch
>
>
> spinoff from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
> {noformat}
> hey folks, 
> I have spend a hell lot of time on the positions branch to make 
> positions and offsets working on all queries if needed. The one thing 
> that bugged me the most is the distinction between DocsEnum and 
> DocsAndPositionsEnum. Really when you look at it closer DocsEnum is a 
> DocsAndFreqsEnum and if we omit Freqs we should return a DocIdSetIter. 
> Same is true for 
> DocsAndPostionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
> don't really see the benefits from this. We should rather make the 
> interface simple and call it something like PostingsEnum where you 
> have to specify flags on the TermsIterator and if we can't provide the 
> sufficient enum we throw an exception? 
> I just want to bring up the idea here since it might simplify a lot 
> for users as well for us when improving our positions / offset etc. 
> support. 
> thoughts? Ideas? 
> simon 
> {noformat}






[jira] [Comment Edited] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316615#comment-14316615
 ] 

Alan Woodward edited comment on LUCENE-6226 at 2/11/15 5:43 PM:


New patch.

Rather than getting positions directly from the Scorer, this goes back to 
Simon's original idea of having a separate per-scorer IntervalIterator.  We 
have an IntervalQuery that will match a document if its child scorers produce 
any matching intervals, and the notion of an IntervalFilter that allows you to 
select which intervals match.

Query.createWeight() and IndexSearcher.createNormalizedWeight() take an enum 
based on Adrien's idea.  Scorers that don't support iterators (which at the 
moment is all of them except TermScorer) throw an IllegalArgumentException.  
TermWeight.scorer() will throw an IllegalStateException if the weight has been 
created with DOCS_AND_SCORES_AND_POSITIONS but no positions were indexed.

Edit: Meant to add, the patch also includes a RangeFilteredQuery that will only 
match queries that have intervals within a given range in a document, and a 
couple of tests to show how the various bits work.


was (Author: romseygeek):
New patch.

Rather than getting positions directly from the Scorer, this goes back to 
Simon's original idea of having a separate per-scorer IntervalIterator.  We 
have an IntervalQuery that will match a document if its child scorers produce 
any matching intervals, and the notion of an IntervalFilter that allows you to 
select which intervals match.

Query.createWeight() and IndexSearcher.createNormalizedWeight() take an enum 
based on Adrien's idea.  Scorers that don't support iterators (which at the 
moment is all of them except TermScorer) throw an IllegalArgumentException.  
TermWeight.scorer() will throw an IllegalStateException if the weight has been 
created with DOCS_AND_SCORES_AND_POSITIONS but no positions were indexed.

> Allow TermScorer to expose positions, offsets and payloads
> --
>
> Key: LUCENE-6226
> URL: https://issues.apache.org/jira/browse/LUCENE-6226
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch, 
> LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch
>
>







[jira] [Updated] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-11 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6226:
--
Attachment: LUCENE-6226.patch

New patch.

Rather than getting positions directly from the Scorer, this goes back to 
Simon's original idea of having a separate per-scorer IntervalIterator.  We 
have an IntervalQuery that will match a document if its child scorers produce 
any matching intervals, and the notion of an IntervalFilter that allows you to 
select which intervals match.

Query.createWeight() and IndexSearcher.createNormalizedWeight() take an enum 
based on Adrien's idea.  Scorers that don't support iterators (which at the 
moment is all of them except TermScorer) throw an IllegalArgumentException.  
TermWeight.scorer() will throw an IllegalStateException if the weight has been 
created with DOCS_AND_SCORES_AND_POSITIONS but no positions were indexed.
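The per-scorer interval idea can be sketched minimally as follows. The names here (Interval, rangeFilter) are illustrative, not the patch's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a scorer exposes (start, end) position intervals for a matching
// document, and a filter selects which intervals count as matches, e.g. the
// range restriction a RangeFilteredQuery would apply.
class IntervalSketch {
    static class Interval {
        final int start, end;
        Interval(int start, int end) {
            this.start = start;
            this.end = end;
        }
    }

    // Keep only intervals fully contained in [min, max].
    static List<Interval> rangeFilter(List<Interval> intervals, int min, int max) {
        List<Interval> kept = new ArrayList<>();
        for (Interval iv : intervals) {
            if (iv.start >= min && iv.end <= max) {
                kept.add(iv);
            }
        }
        return kept;
    }
}
```

A document then matches when the filtered list is non-empty, which keeps the "does it match" decision composable with whatever interval source the child scorers provide.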

> Allow TermScorer to expose positions, offsets and payloads
> --
>
> Key: LUCENE-6226
> URL: https://issues.apache.org/jira/browse/LUCENE-6226
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch, 
> LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch
>
>







[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11780 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11780/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom 
{#3 seed=[6B7EE18F8044BF08:1263454538DCD1B5]}

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([6B7EE18F8044BF08:1263454538DCD1B5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:221)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:188)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:201)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakCont

[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316584#comment-14316584
 ] 

Robert Muir commented on LUCENE-6239:
-

+1

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2627 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2627/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([12765C5D892F585B:9A22638727D335A3]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.ut

[jira] [Created] (SOLR-7100) SpellCheckComponent should throw error if queryAnalyzerFieldType provided doesn't exist

2015-02-11 Thread David Smiley (JIRA)
David Smiley created SOLR-7100:
--

 Summary: SpellCheckComponent should throw error if 
queryAnalyzerFieldType provided doesn't exist
 Key: SOLR-7100
 URL: https://issues.apache.org/jira/browse/SOLR-7100
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.10.2
Reporter: David Smiley
Priority: Minor


If you mistype or otherwise mess up the queryAnalyzerFieldType setting in 
solrconfig.xml for the spellcheck component, you will not get an error.  
Instead, the code silently falls back to the default (WhitespaceTokenizer).  
This should really be an error.
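The fail-fast behaviour this issue asks for could look roughly like the sketch below. The names are illustrative, not Solr's actual SpellCheckComponent code:

```java
import java.util.Map;

// Hypothetical sketch: resolve the configured field type against the schema
// and throw on a miss, instead of silently substituting a default analyzer.
class FieldTypeCheckSketch {
    static String resolveFieldType(Map<String, String> schemaTypes, String configured) {
        String analyzer = schemaTypes.get(configured);
        if (analyzer == null) {
            // The silent-fallback path would return a whitespace default here.
            throw new IllegalArgumentException(
                "queryAnalyzerFieldType '" + configured + "' does not exist in the schema");
        }
        return analyzer;
    }
}
```

With this, a typo in solrconfig.xml surfaces at startup rather than as mysteriously wrong spellcheck analysis later.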






[jira] [Commented] (LUCENE-6198) two phase intersection

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316568#comment-14316568
 ] 

Robert Muir commented on LUCENE-6198:
-

I am +1 for this API because it solves my major complaint with the first stab I 
took: invasive methods being added to very low-level APIs.

But I think, on the implementation side, we should support approximations of 
conjunctions like the first patch did. I think it's important because this way 
nested conjunctions/filters work and there is not so much performance pressure 
for users to "flatten" things. If we later fix scorers like DisjunctionScorer 
too, then it starts to have bigger benefits, because users can e.g. put 
proximity queries or "slow filters" that should be checked last anywhere 
in the query, and we always do the right thing. 
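The two-phase shape being discussed can be sketched as below: a cheap approximation drives the zig-zag, and the expensive per-document check (e.g. reading positions) runs only on candidates both clauses agree on. Names here are hypothetical, not the patch's API:

```java
// Sketch of two-phase iteration for a two-clause conjunction.
interface TwoPhase {
    int nextCandidate(int target); // cheap: next doc that *might* match, or MAX_VALUE
    boolean matches(int doc);      // expensive: confirm the match, e.g. check positions
}

class ConjunctionSketch {
    static int nextMatch(TwoPhase lead, TwoPhase other, int from) {
        int doc = lead.nextCandidate(from);
        while (doc != Integer.MAX_VALUE) {
            // Align the second clause's cheap approximation first.
            int doc2 = other.nextCandidate(doc);
            if (doc2 != doc) {
                doc = lead.nextCandidate(doc2);
                continue;
            }
            // Both approximations agree: only now pay for the expensive checks.
            if (lead.matches(doc) && other.matches(doc)) {
                return doc;
            }
            doc = lead.nextCandidate(doc + 1);
        }
        return Integer.MAX_VALUE;
    }
}
```

This is why the term-matches-odd-docs / phrase-matches-even-docs example stops being expensive: the phrase clause's positions are never read for documents the zig-zag already ruled out.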


> two phase intersection
> --
>
> Key: LUCENE-6198
> URL: https://issues.apache.org/jira/browse/LUCENE-6198
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6198.patch, LUCENE-6198.patch
>
>
> Currently some scorers have to do a lot of per-document work to determine if 
> a document is a match. The simplest example is a phrase scorer, but there are 
> others (spans, sloppy phrase, geospatial, etc).
> Imagine a conjunction with two MUST clauses, one that is a term that matches 
> all odd documents, another that is a phrase matching all even documents. 
> Today this conjunction will be very expensive, because the zig-zag 
> intersection is reading a ton of useless positions.
> The same problem happens with filteredQuery and anything else that acts like 
> a conjunction.






[jira] [Updated] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6239:
--
Attachment: LUCENE-6239.patch

Patch removing Unsafe.

I also found out that Constants.java used Unsafe for the bitness. Now it 
relies solely on the sun.arch.data.model sysprop. I will investigate whether we 
can get the information some other way.

[~dweiss]: Can you look at the array header value? The previous one looked 
strange to me; now the constant does what the comment says. I am not sure 
where the comment is documented; I assume you wrote it.
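For reference, a minimal sketch of reading the bitness from system properties alone (the sun.arch.data.model property is HotSpot-specific; the os.arch fallback heuristic is my own illustration, not what the patch does):

```java
public class BitnessSketch {
  // Derive JVM bitness from system properties, with no sun.misc.Unsafe.
  // "sun.arch.data.model" is a HotSpot-specific property ("32" or "64");
  // the os.arch fallback below is just an illustrative heuristic.
  static int jvmBits() {
    String prop = System.getProperty("sun.arch.data.model");
    if (prop != null) {
      try {
        return Integer.parseInt(prop.trim());
      } catch (NumberFormatException ignored) {
        // fall through to the heuristic
      }
    }
    String arch = System.getProperty("os.arch", "");
    return arch.contains("64") ? 64 : 32;
  }

  public static void main(String[] args) {
    System.out.println(jvmBits());
  }
}
```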

> Remove RAMUsageEstimator Unsafe calls
> -
>
> Key: LUCENE-6239
> URL: https://issues.apache.org/jira/browse/LUCENE-6239
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6239.patch
>
>
> This is unnecessary risk. We should remove this stuff, it is not needed here. 
> We are a search engine, not a ram calculator.






[JENKINS] Lucene-Solr-5.0-Linux (32bit/jdk1.8.0_31) - Build # 127 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/127/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
ERR_04447_CANNOT_NORMALIZE_VALUE Cannot normalize the wrapped value 
ERR_04473_NOT_VALID_VALUE Not a valid value '20090818022733Z' for the 
AttributeType 'ATTRIBUTE_TYPE ( 1.3.6.1.4.1.18060.0.4.1.2.35  NAME 
'schemaModifyTimestamp'  DESC time which schema was modified  SUP 
modifyTimestamp  EQUALITY generalizedTimeMatch  ORDERING 
generalizedTimeOrderingMatch  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24  USAGE 
directoryOperation  ) '

Stack Trace:
java.lang.RuntimeException: 
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
ERR_04447_CANNOT_NORMALIZE_VALUE Cannot normalize the wrapped value 
ERR_04473_NOT_VALID_VALUE Not a valid value '20090818022733Z' for the 
AttributeType 'ATTRIBUTE_TYPE ( 1.3.6.1.4.1.18060.0.4.1.2.35
 NAME 'schemaModifyTimestamp'
 DESC time which schema was modified
 SUP modifyTimestamp
 EQUALITY generalizedTimeMatch
 ORDERING generalizedTimeOrderingMatch
 SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
 USAGE directoryOperation
 )
'
at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:204)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:74)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertio

[jira] [Updated] (SOLR-7097) Update other Document in DocTransformer

2015-02-11 Thread yuanyun.cn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanyun.cn updated SOLR-7097:
-
Description: 
Solr's DocTransformer is useful, but it only allows us to change the current 
document: add, remove, or update fields.

It would be great if we could update another document (especially a previous 
one), or better, delete a doc (especially useful during tests) or add a doc in 
DocTransformer.

Use case:
We can use flat group mode (group.main=true) to put parent and child documents 
next to each other (parent first), then use a DocTransformer to update the 
parent document when visiting its child documents.

Some thoughts on implementation:
In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields), when cachMode=true, we can store each 
SolrDocument in a list after it is transformed in the for loop, and write the 
docs out at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
    transformer.transform(sdoc, id);
  }
  if (cachMode) {
    cachedDocs[i] = sdoc;
  } else {
    writeSolrDocument(null, sdoc, returnFields, i);
  }
}
if (transformer != null) {
  transformer.setContext(null);
}
if (cachMode) {
  for (int i = 0; i < sz; i++) {
    writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
}
writeEndDocumentList();

  was:
Solr DocTransformer is good, but it only allows us to change current document: 
add or remove, update fields.

It would be great if we can update other document(previous especially)  .

User case:
We can use flat group mode(group.main=true) to put parent and child close to 
each other(parent first), then we can use DocTransformer to update parent 
document when access its child document.

Some thought about Implementation:
org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields)
when cachMode=true, in the for loop, after transform, we can store the solrdoc 
in a list, write these doc at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
 SolrDocument sdoc = toSolrDocument(doc);
 if (transformer != null) {
  transformer.transform(sdoc, id);
 }
 if(cachMode)
 {
cachedDocs[i] = sdoc;
 }
 else{
writeSolrDocument( null, sdoc, returnFields, i );
 }
  
}
if (transformer != null) {
 transformer.setContext(null);
}
if(cachMode) {
 for (int i = 0; i < sz; i++) {
  writeSolrDocument(null, cachedDocs[i], returnFields, i);
 }
}
writeEndDocumentList();
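The buffering loop above can be reduced to a self-contained sketch (strings stand in for SolrDocument, a hypothetical cacheMode flag replaces the proposed cachMode param, and the transform/write machinery is stubbed; this is not the actual Solr code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class BufferedWriteSketch {
  // The proposal in miniature: with cacheMode on, transformed documents are
  // buffered instead of written immediately, so a transformer could still
  // mutate an earlier (e.g. parent) document before anything is emitted.
  static List<String> writeDocuments(List<String> docs,
                                     UnaryOperator<String> transformer,
                                     boolean cacheMode) {
    List<String> written = new ArrayList<>();
    List<String> cached = new ArrayList<>();
    for (String doc : docs) {
      String sdoc = transformer.apply(doc);
      if (cacheMode) {
        cached.add(sdoc);     // defer the write
      } else {
        written.add(sdoc);    // stream it out immediately
      }
    }
    if (cacheMode) {
      written.addAll(cached); // flush everything at the end
    }
    return written;
  }

  public static void main(String[] args) {
    System.out.println(writeDocuments(List.of("child", "parent"),
                                      String::toUpperCase, true)); // [CHILD, PARENT]
  }
}
```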


> Update other Document in DocTransformer
> ---
>
> Key: SOLR-7097
> URL: https://issues.apache.org/jira/browse/SOLR-7097
> Project: Solr
>  Issue Type: Improvement
>Reporter: yuanyun.cn
>Priority: Minor
>  Labels: searcher, transformers
>
> Solr DocTransformer is good, but it only allows us to change current 
> document: add or remove, update fields.
> It would be great if we can update other document(previous especially) , or 
> better we can delete doc(especially useful during test) or add doc in 
> User case:
> We can use flat group mode(group.main=true) to put parent and child close to 
> each other(parent first), then we can use DocTransformer to update parent 
> document when access its child document.
> Some thought about Implementation:
> org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
> ResultContext, ReturnFields)
> when cachMode=true, in the for loop, after transform, we can store the 
> solrdoc in a list, write these doc at the end.
> cachMode = req.getParams().getBool("cachMode", false);
> SolrDocument[] cachedDocs = new SolrDocument[sz];
> for (int i = 0; i < sz; i++) {
>  SolrDocument sdoc = toSolrDocument(doc);
>  if (transformer != null) {
>   transformer.transform(sdoc, id);
>  }
>  if(cachMode)
>  {
> cachedDocs[i] = sdoc;
>  }
>  else{
> writeSolrDocument( null, sdoc, returnFields, i );
>  }
>   
> }
> if (transformer != null) {
>  transformer.setContext(null);
> }
> if(cachMode) {
>  for (int i = 0; i < sz; i++) {
>   writeSolrDocument(null, cachedDocs[i], returnFields, i);
>  }
> }
> writeEndDocumentList();






[jira] [Updated] (SOLR-7097) Update other Document in DocTransformer

2015-02-11 Thread yuanyun.cn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanyun.cn updated SOLR-7097:
-
Description: 
Solr's DocTransformer is useful, but it only allows us to change the current 
document: add, remove, or update fields.

It would be great if we could update another document (especially a previous 
one), or better, delete a doc (especially useful during tests) or add a doc in 
DocTransformer.

Use case:
We can use flat group mode (group.main=true) to put parent and child documents 
next to each other (parent first), then use a DocTransformer to update the 
parent document when visiting its child documents.

Some thoughts on implementation:
In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields), when cachMode=true, we can store each 
SolrDocument in a list after it is transformed in the for loop, and write the 
docs out at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
    transformer.transform(sdoc, id);
  }
  if (cachMode) {
    cachedDocs[i] = sdoc;
  } else {
    writeSolrDocument(null, sdoc, returnFields, i);
  }
}
if (transformer != null) {
  transformer.setContext(null);
}
if (cachMode) {
  for (int i = 0; i < sz; i++) {
    writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
}
writeEndDocumentList();

  was:
Solr DocTransformer is good, but it only allows us to change current document: 
add or remove, update fields.

It would be great if we can update other document(previous especially) , or 
better we can delete doc(especially useful during test) or add doc in 

User case:
We can use flat group mode(group.main=true) to put parent and child close to 
each other(parent first), then we can use DocTransformer to update parent 
document when access its child document.

Some thought about Implementation:
org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields)
when cachMode=true, in the for loop, after transform, we can store the solrdoc 
in a list, write these doc at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
 SolrDocument sdoc = toSolrDocument(doc);
 if (transformer != null) {
  transformer.transform(sdoc, id);
 }
 if(cachMode)
 {
cachedDocs[i] = sdoc;
 }
 else{
writeSolrDocument( null, sdoc, returnFields, i );
 }
  
}
if (transformer != null) {
 transformer.setContext(null);
}
if(cachMode) {
 for (int i = 0; i < sz; i++) {
  writeSolrDocument(null, cachedDocs[i], returnFields, i);
 }
}
writeEndDocumentList();


> Update other Document in DocTransformer
> ---
>
> Key: SOLR-7097
> URL: https://issues.apache.org/jira/browse/SOLR-7097
> Project: Solr
>  Issue Type: Improvement
>Reporter: yuanyun.cn
>Priority: Minor
>  Labels: searcher, transformers
>
> Solr DocTransformer is good, but it only allows us to change current 
> document: add or remove, update fields.
> It would be great if we can update other document(previous especially) , or 
> better we can delete doc(especially useful during test) or add doc in 
> DocTransformer.
> User case:
> We can use flat group mode(group.main=true) to put parent and child close to 
> each other(parent first), then we can use DocTransformer to update parent 
> document when access its child document.
> Some thought about Implementation:
> org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
> ResultContext, ReturnFields)
> when cachMode=true, in the for loop, after transform, we can store the 
> solrdoc in a list, write these doc at the end.
> cachMode = req.getParams().getBool("cachMode", false);
> SolrDocument[] cachedDocs = new SolrDocument[sz];
> for (int i = 0; i < sz; i++) {
>  SolrDocument sdoc = toSolrDocument(doc);
>  if (transformer != null) {
>   transformer.transform(sdoc, id);
>  }
>  if(cachMode)
>  {
> cachedDocs[i] = sdoc;
>  }
>  else{
> writeSolrDocument( null, sdoc, returnFields, i );
>  }
>   
> }
> if (transformer != null) {
>  transformer.setContext(null);
> }
> if(cachMode) {
>  for (int i = 0; i < sz; i++) {
>   writeSolrDocument(null, cachedDocs[i], returnFields, i);
>  }
> }
> writeEndDocumentList();






Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1348: POMs out of sync

2015-02-11 Thread Steve Rowe
[javadoc] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/java/org/apache/solr/core/RequestHandlers.java:250:
 warning: empty  tag
  [javadoc]* 
  [javadoc]  ^
  [javadoc] Generating 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build/docs/solr-core/org/apache/solr/util/package-summary.html...
  [javadoc] Copying file 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/java/org/apache/solr/util/doc-files/min-should-match.html
 to directory 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build/docs/solr-core/org/apache/solr/util/doc-files...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build/docs/solr-core/help-doc.html...
  [javadoc] 1 warning


> On Feb 11, 2015, at 11:43 AM, Apache Jenkins Server 
>  wrote:
> 
> Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1348/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 17949 lines...]
> BUILD FAILED
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:535:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:185:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:61:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:58:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:453:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:276:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/build.xml:49:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:298:
>  The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2054:
>  Javadocs warnings were found!
> 
> Total time: 9 minutes 20 seconds
> Build step 'Invoke Ant' marked build as failure
> Email was triggered for: Failure
> Sending email for trigger: Failure
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org





[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1348: POMs out of sync

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1348/

No tests ran.

Build Log:
[...truncated 17949 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:535:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:185:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:58:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:453:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:276:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/build.xml:49:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:298:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2054:
 Javadocs warnings were found!

Total time: 9 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316499#comment-14316499
 ] 

Hoss Man commented on SOLR-7099:


I've mentioned this in the past: ideally the _example_ mode of bin/solr will 
launch a single-node ZK server for you as needed, but will do so using a script 
and echo out the script command it ran (similar to how it echoes out 
collection creation / health check commands).

When you run Solr in (non-example) cloud mode, it should expect ZK to already 
be running; by that point you should either already know what you need to set 
up a ZK quorum, or you will remember the bin/solr command-line option you saw 
when you were running the examples.

> bin/solr -cloud mode should launch a local ZK in its own process using 
> zkcli's runzk option (instead of embedded in the first Solr process)
> ---
>
> Key: SOLR-7099
> URL: https://issues.apache.org/jira/browse/SOLR-7099
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Embedded ZK is great for unit testing and quick examples, but as soon as 
> someone wants to restart their cluster, embedded mode causes a lot of issues, 
> esp. if you restart the node that embeds ZK. Of course we don't want users to 
> have to install ZooKeeper just to get started with Solr either. 
> Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
> process but still within the Solr directory structure. We can hide the 
> details and complexity of working with ZK in the bin/solr script. The 
> solution to this should still make it very clear that this is for getting 
> started / examples and not to be used in production.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316493#comment-14316493
 ] 

Hoss Man commented on SOLR-6311:


bq.  It should have been done this way to begin with. I consider it a bug that 
distributed requests were apparently hard-coded to use /select

Definitely not a bug.

you have to remember the context of how distributed search was added -- prior 
to SolrCloud, you had to specify a "shards" param listing all of the cores you 
wanted to do a distributed search over, and the primary convenience mechanism 
for doing that was to register a handler like this...

{noformat}

  
foo:8983/solr,bar:8983/solr
100
  

{noformat}

...so the choice to have "shards.qt" default to "/select" instead of the 
current qt was _extremely_ important to make distributed search function 
correctly for most users for multiple reasons:

1) so that the shards param wouldn't cause infinite recursion
2) so that the "defaults" wouldn't be automatically inherited by the per-shard 
requests

But now is not then -- the default behavior of shards.qt should change to make 
the most sense given the features and best practice currently available in 
Solr.  SolrCloud solves #1, and IIUC useParams solves #2, so we can move 
forward.
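The {noformat} snippet above lost its XML element tags when it was archived; a plausible reconstruction is shown below. Only the values (the shard list and the 100) are original; the element and attribute names are guesses from context:

{noformat}
<requestHandler name="/distrib" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">foo:8983/solr,bar:8983/solr</str>
    <int name="rows">100</int>
  </lst>
</requestHandler>
{noformat}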


> SearchHandler should use path when no qt or shard.qt parameter is specified
> ---
>
> Key: SOLR-6311
> URL: https://issues.apache.org/jira/browse/SOLR-6311
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
>Assignee: Timothy Potter
> Attachments: SOLR-6311.patch
>
>
> When performing distributed searches, you have to specify shards.qt unless 
> you're on the default /select path for your handler. As this is configurable, 
> even the default search handler could be on another path. The shard requests 
> should thus default to the path if no shards.qt was specified.






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 760 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/760/

6 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:20657//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:20657//collection1
at 
__randomizedtesting.SeedInfo.seed([3B901651DA539072:B3C4298B74AFFD8A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:538)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:568)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:547)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOv

Re: [VOTE] 5.0.0 RC2

2015-02-11 Thread david.w.smi...@gmail.com
Thanks for the clarifications on these two issues, Shalin, Ryan, and Uwe.

I got it to pass when my CWD is the 5x checkout and my current JAVA_HOME is
Java 7, with --test-java8 pointing to my Java 8.

SUCCESS! [1:24:57.743374]

+1 to Ship!

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Wed, Feb 11, 2015 at 10:36 AM, Uwe Schindler  wrote:

> I think the problem is the inverse:
>
>
>
> RuntimeError: JAR file
> "/private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar"
> is missing "X-Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
>
>
>
> The problem: the smoke tester expects to find Java 1.8 in the JAR file’s
> metadata. The cause: Shalin said he runs trunk’s smoke tester on the 5.0
> branch. That breaks here, because trunk’s smoke tester expects Lucene
> compiled with Java 8.
>
>
>
> Uwe
>
> -
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Ryan Ernst [mailto:r...@iernst.net]
> *Sent:* Wednesday, February 11, 2015 3:27 PM
> *To:* dev@lucene.apache.org
> *Subject:* Re: [VOTE] 5.0.0 RC2
>
>
>
> And I got this:
> Java 1.8
> JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home
>
>
>
> Did you change your JAVA_HOME to point to java 8 as well (that's what it
> looks like since only jdk is listed in that output)? --test-java8 is meant
> to take the java 8 home, but your regular JAVA_HOME should stay java 7.
>
>
>
> On Wed, Feb 11, 2015 at 6:13 AM, david.w.smi...@gmail.com <
> david.w.smi...@gmail.com> wrote:
>
> I found two problems, and I’m not sure what to make of them.
>
>
>
> First, perhaps the simplest.  I ran it with Java 8 with this at the
> command-line (copied from Uwe’s email, inserting my environment variable):
>
>
>
> python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 $JAVA8_HOME
> http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
>
>
>
> And I got this:
>
>
>
> Java 1.8
> JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home
>
> NOTE: output encoding is UTF-8
>
>
>
> Load release URL "
> http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
> "...
>
>   unshortened:
> http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/
>
>
>
> Test Lucene...
>
>   test basics...
>
>   get KEYS
>
> 0.1 MB in 0.69 sec (0.2 MB/sec)
>
>   check changes HTML...
>
>   download lucene-5.0.0-src.tgz...
>
> 27.9 MB in 129.06 sec (0.2 MB/sec)
>
> verify md5/sha1 digests
>
> verify sig
>
> verify trust
>
>   GPG: gpg: WARNING: This key is not certified with a trusted
> signature!
>
>   download lucene-5.0.0.tgz...
>
> 64.0 MB in 154.61 sec (0.4 MB/sec)
>
> verify md5/sha1 digests
>
> verify sig
>
> verify trust
>
>   GPG: gpg: WARNING: This key is not certified with a trusted
> signature!
>
>   download lucene-5.0.0.zip...
>
> 73.5 MB in 223.35 sec (0.3 MB/sec)
>
> verify md5/sha1 digests
>
> verify sig
>
> verify trust
>
>   GPG: gpg: WARNING: This key is not certified with a trusted
> signature!
>
>   unpack lucene-5.0.0.tgz...
>
> verify JAR metadata/identity/no javax.* or java.* classes...
>
> Traceback (most recent call last):
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 1486, in <module>
>
> main()
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 1431, in main
>
> smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir,
> c.is_signed, ' '.join(c.test_args))
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 1468, in smokeTest
>
> unpackAndVerify(java, 'lucene', tmpDir, artifact, svnRevision,
> version, testArgs, baseURL)
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 616, in
> unpackAndVerify
>
> verifyUnpacked(java, project, artifact, unpackPath, svnRevision,
> version, testArgs, tmpDir, baseURL)
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 737, in verifyUnpacked
>
> checkAllJARs(os.getcwd(), project, svnRevision, version, tmpDir,
> baseURL)
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 257, in checkAllJARs
>
> checkJARMetaData('JAR file "%s"' % fullPath, fullPath, svnRevision,
> version)
>
>   File "dev-tools/scripts/smokeTestRelease.py", line 185, in
> checkJARMetaData
>
> (desc, verify))
>
> RuntimeError: JAR file
> "/private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar"
> is missing "X-Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
>
>
>
> When I executed the above command, my CWD was a trunk checkout. Should
> that matter?  It seems unlikely; the specific error references the unpacked
> location, not CWD.
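
What the smoke tester asserts here can be reproduced in isolation. Below is a self-contained sketch (building a dummy jar on the fly, not the real lucene-analyzers-common artifact) of the manifest-attribute check that failed; the attribute name comes straight from the traceback, everything else is illustrative:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class ManifestCheck {
    // True when the jar's main manifest carries "X-Compile-Source-JDK: <expected>",
    // the attribute the traceback reports as missing.
    static boolean hasSourceJdk(File jar, String expected) throws IOException {
        try (JarFile jf = new JarFile(jar)) {
            Attributes main = jf.getManifest().getMainAttributes();
            return expected.equals(main.getValue("X-Compile-Source-JDK"));
        }
    }

    public static void main(String[] args) throws IOException {
        // Dummy jar whose manifest lacks the attribute, mirroring the failure.
        Manifest m = new Manifest();
        m.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        File jar = File.createTempFile("demo", ".jar");
        jar.deleteOnExit();
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), m)) {
            // empty body: only the manifest matters for this check
        }
        System.out.println(hasSourceJdk(jar, "1.8")); // prints false
    }
}
```

Running the same check against a jar whose manifest does carry the attribute returns true, which is the condition the smoke tester's RuntimeError enforces.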
>
>
>
>
>
>
>
> I also executed with Java 7; I did this first, actually.  This time, my
> JAVA_HOME is set to Java 7 and I ran this from my 5x checkout.  When the
> Solr tes

[jira] [Updated] (LUCENE-1518) Merge Query and Filter classes

2015-02-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-1518:
-
Attachment: LUCENE-1518.patch

I'd like to revisit this issue now that queries can be configured to not 
produce scores and that boolean queries accept filter clauses. Here is a new 
patch. Like Uwe's patch, it makes Filter extend Query and removes the 
ConstantScoreQuery(Filter) constructor. So Filter is now mostly a helper class 
in order to build queries that do not produce scores (Scorer.score() always 
returns 0). I also added changes to Filter in order not to break existing 
Filter implementations (this is why I override equals() and hashCode() in 
Filter to go back to the way that they are implemented in Object).
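
A minimal sketch of the shape this describes (Query here is a bare stand-in, not Lucene's real class, and the matches() method is illustrative):

```java
// Bare stand-in for org.apache.lucene.search.Query.
class Query { }

// Filter extends Query; equals()/hashCode() are pushed back to Object's
// identity semantics so existing Filter subclasses that never overrode
// them are not broken by the merge.
abstract class Filter extends Query {
    // Stand-in for the real getDocIdSet(...): which documents match.
    abstract boolean matches(int docId);

    @Override
    public boolean equals(Object obj) {
        return this == obj; // identity, as implemented in Object
    }

    @Override
    public int hashCode() {
        return System.identityHashCode(this);
    }
}

public class IdentityDemo {
    public static void main(String[] args) {
        Filter a = new Filter() { boolean matches(int d) { return true; } };
        Filter b = new Filter() { boolean matches(int d) { return true; } };
        System.out.println(a.equals(a)); // true: same instance
        System.out.println(a.equals(b)); // false: identity, not structural
    }
}
```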

[~thetaphi] what do you think?

> Merge Query and Filter classes
> --
>
> Key: LUCENE-1518
> URL: https://issues.apache.org/jira/browse/LUCENE-1518
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 2.4
>Reporter: Uwe Schindler
> Fix For: 4.9, Trunk
>
> Attachments: LUCENE-1518.patch, LUCENE-1518.patch
>
>
> This issue presents a patch that merges Queries and Filters so that the new 
> Filter class extends Query. This would make it possible to use every filter 
> as a query.
> The new abstract Filter class would contain all methods of 
> ConstantScoreQuery and deprecate ConstantScoreQuery. Somebody who implements 
> the Filter's getDocIdSet()/bits() methods has nothing more to do and can 
> just use the filter as a normal query.
> I do not want to completely convert Filters to ConstantScoreQueries. The idea 
> is to combine Queries and Filters in such a way that every Filter can 
> automatically be used everywhere a Query can be used (e.g. also 
> alone as a search query without any other constraint). For that, the abstract 
> Query methods must be implemented and return a "default" weight for Filters, 
> which is the current ConstantScore logic. If the filter is used as a real 
> filter (where the API wants a Filter), the getDocIdSet part can be used 
> directly and the weight is useless (as it is currently, too). The constant-score 
> default implementation is only used when the Filter is used as a Query (e.g. 
> as a direct parameter to Searcher.search()). For the special case of 
> BooleanQueries combining Filters and Queries, the idea is to optimize the 
> BooleanQuery logic so that it detects whether a BooleanClause is a 
> Filter (using instanceof) and then uses the Filter API directly instead of 
> taking on the burden of ConstantScoreQuery (see LUCENE-1345).
> Here are some ideas for how to implement Searcher.search() with Query and Filter:
> - The user runs Searcher.search() with a Filter as the only parameter. Since every 
> Filter is also a ConstantScoreQuery, the query can be executed and returns 
> score 1.0 for all matching documents.
> - The user runs Searcher.search() with a Query as the only parameter: no change, 
> everything works as before.
> - The user runs Searcher.search() with a BooleanQuery as parameter: if the 
> BooleanQuery does not contain a Query that is a subclass of Filter (the new 
> Filter), everything works as usual. If the BooleanQuery contains exactly one 
> Filter and nothing else, the Filter is used as a constant-score query. If the 
> BooleanQuery contains clauses with both Queries and Filters, the new algorithm 
> can be used: the queries are executed and the results filtered with the 
> filters.
> The main advantage for the user: he can construct his query 
> using a simplified API without thinking about Filters or Queries; clauses can 
> simply be combined together. The scorer/weight logic then identifies the 
> cases in which to use the filter or the query weight API, just like the query 
> optimizer of an RDBMS.
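
The BooleanQuery optimization described in the quoted issue text boils down to an instanceof check per clause. A toy, self-contained illustration of that dispatch (all classes here are simplified stand-ins, not Lucene's real API):

```java
// Toy stand-ins for the classes discussed above; not Lucene's real API.
class Query { }

abstract class Filter extends Query {
    // Simplified stand-in for getDocIdSet(): does this doc pass the filter?
    abstract boolean matches(int docId);
}

public class ClauseDispatch {
    // The optimization idea: a Filter clause skips the scoring machinery and
    // is applied through the filter API; any other Query builds a scorer.
    static String plan(Query clause) {
        if (clause instanceof Filter) {
            return "apply getDocIdSet directly (constant score, no scorer)";
        }
        return "build Weight/Scorer as usual";
    }

    public static void main(String[] args) {
        Filter evenDocs = new Filter() {
            boolean matches(int docId) { return docId % 2 == 0; }
        };
        System.out.println(plan(evenDocs));    // filter branch
        System.out.println(plan(new Query())); // query branch
    }
}
```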



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


