[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2457 - Still Failing

2015-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2457/

5 tests failed.
REGRESSION:  org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  A val for path [params, a] full output null

Stack Trace:
java.lang.AssertionError: Could not get expected value  A val for path [params, a] full output null
	at __randomizedtesting.SeedInfo.seed([CD9BAE86A647B598:4C7D209ED118D5A4]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
	at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:166)
	at org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:70)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carr

[jira] [Commented] (SOLR-5287) Allow at least solrconfig.xml and schema.xml to be edited via the admin screen

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272345#comment-14272345
 ] 

ASF subversion and git services commented on SOLR-5287:
---

Commit 1650721 from [~erickoerickson] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650721 ]

no separate JIRA, just updating some obsolete JIRAs related to ripping out 
SOLR-5287

> Allow at least solrconfig.xml and schema.xml to be edited via the admin screen
> --
>
> Key: SOLR-5287
> URL: https://issues.apache.org/jira/browse/SOLR-5287
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis, web gui
>Affects Versions: 4.5, Trunk
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch, 
> SOLR-5287.patch, SOLR-5287.patch
>
>
> A user asking a question on the Solr list got me to thinking about editing 
> the main config files from the Solr admin screen. I chatted briefly with 
> [~steffkes] about the mechanics of this on the browser side, he doesn't see a 
> problem on that end. His comment is there's no end point that'll write the 
> file back.
> Am I missing something here or is this actually not a hard problem? I see a 
> couple of issues off the bat, neither of which seem troublesome.
> 1> file permissions. I'd imagine lots of installations will get file 
> permission exceptions if Solr tries to write the file out. Well, do a 
> chmod/chown.
> 2> screwing up the system maliciously or not. I don't think this is an issue, 
> this would be part of the admin handler after all.
> Does anyone have objections to the idea? And how does this fit into the work 
> that [~sar...@syr.edu] has been doing?
> I can imagine this extending to SolrCloud with a "push this to ZK" option or 
> something like that, perhaps not in V1 unless it's easy.
> Of course any pointers gratefully received. Especially ones that start with 
> "Don't waste your effort, it'll never work (or be accepted)"...
> Because what scares me is this seems like such an easy thing to do that would 
> be a significant ease-of-use improvement, so there _has_ to be something I'm 
> missing.
> So if we go forward with this we'll make this the umbrella JIRA, the two 
> immediate sub-JIRAs that spring to mind will be the UI work and the endpoints 
> for the UI work to use.
> I think there are only two end-points here
> 1> list all the files in the conf (or arbitrary from /collection) 
> directory.
> 2> write this text to this file
> Possibly later we could add "clone the configs from coreX to coreY".
> BTW, I've assigned this to myself so I don't lose it, but if anyone wants to 
> take it over it won't hurt my feelings a bit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5287) Allow at least solrconfig.xml and schema.xml to be edited via the admin screen

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272344#comment-14272344
 ] 

ASF subversion and git services commented on SOLR-5287:
---

Commit 1650720 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1650720 ]

no separate JIRA, just updating some obsolete JIRAs related to ripping out 
SOLR-5287







[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1874 - Still Failing!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1874/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 test failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=9738, name=Thread-3857, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=9738, name=Thread-3857, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:58255/uqlp/v: collection already exists: awholynewstresscollection_collection0_0
	at __randomizedtesting.SeedInfo.seed([5BB7AB7398DA3C96]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:353)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:312)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1641)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1662)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:921)




Build Log:
[...truncated 9727 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIDistributedZkTest 5BB7AB7398DA3C96-001/init-core-data-001
   [junit4]   2> 2842187 T9454 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 2842188 T9454 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /uqlp/v
   [junit4]   2> 2842193 T9454 oas.SolrTestCaseJ4.setUp ###Starting testDistribSearch
   [junit4]   2> 2842195 T9454 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2842198 T9455 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 2842298 T9454 oasc.ZkTestServer.run start zk server on port:58245
   [junit4]   2> 2842300 T9454 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 2842305 T9454 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 2842313 T9462 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@79dd6eaf name:ZooKeeperConnection Watcher:127.0.0.1:58245 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2842314 T9454 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 2842315 T9454 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 2842316 T9454 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 2842325 T9454 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 2842327 T9454 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 2842331 T9465 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@ee194bd name:ZooKeeperConnection Watcher:127.0.0.1:58245/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2842331 T9454 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 2842332 T9454 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 2842332 T9454 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 2842343 T9454 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 2842351 T9454 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 2842358 T9454 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 2842366 T9454 oasc.AbstractZkTestCase.putConfig put /Users/jenkins/workspace/Lucene-Solr

[jira] [Resolved] (SOLR-4512) /browse GUI: Extra URL params should be sticky

2015-01-09 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-4512.

Resolution: Fixed

> /browse GUI: Extra URL params should be sticky
> --
>
> Key: SOLR-4512
> URL: https://issues.apache.org/jira/browse/SOLR-4512
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Velocity
>Reporter: Jan Høydahl
>Assignee: Erik Hatcher
> Fix For: 5.0, Trunk
>
>
> Sometimes you want to experiment with extra query params in Velocity 
> "/browse". But if you modify the URL it will be forgotten once you click 
> anything in the GUI.
> We need a way to make them sticky, either generate all the links based on the 
> current actual URL, or add a checkbox which reveals a new input field where 
> you can write all the extra params you want appended






[jira] [Comment Edited] (SOLR-4512) /browse GUI: Extra URL params should be sticky

2015-01-09 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272335#comment-14272335
 ] 

Erik Hatcher edited comment on SOLR-4512 at 1/10/15 4:33 AM:
-

I'm going to close with this technique as my recommendation for making 
parameters "sticky" to any request handler.  

Here's how the data driven config defines the /browse handler:

{code}
 
{code}

These param sets get defined in /conf/params.json, which can be done through 
API calls like this:

{code}
curl http://localhost:8983/solr/films/config/params -H 
'Content-type:application/json'  -d '{
"update" : {
  "facets": {
"facet.field":"genre"
}
  }
}'
{code}

This technique not only lets params be shared across request handlers, it also 
makes the "sticky" behavior desired for /browse straightforward.  I left an 
empty/undefined "browse" param set in there that can be used to attach UI-only 
parameters, such as this:

{code}
curl http://localhost:8983/solr/films/config/params -H 
'Content-type:application/json'  -d '{
"set" : {
  "browse": {
"hl":"on",
"hl.fl":"name"
}
  }
}'
{code}


was (Author: ehatcher):
I'm going to close this ticket as wont-fix with this technique as my 
recommendation for making parameters "sticky" to any request handler.  

Here's how the data driven config defines the /browse handler:

{code}
 
{code}

These param sets get defined in /conf/params.json, which can be done through 
API calls like this:

{code}
curl http://localhost:8983/solr/films/config/params -H 
'Content-type:application/json'  -d '{
"update" : {
  "facets": {
"facet.field":"genre"
}
  }
}'
{code}

Using this technique allows not only params to be used across request handlers, 
but makes this "sticky" desire in /browse straightforward to deal with.  I left 
an empty/undefined "browse" param set in there that can be used to attach UI 
only types of parameters, such as this:

{code}
curl http://localhost:8983/solr/films/config/params -H 
'Content-type:application/json'  -d '{
"set" : {
  "browse": {
"hl":"on",
"hl.fl":"name"
}
  }
}'
{code}







[jira] [Commented] (SOLR-4512) /browse GUI: Extra URL params should be sticky

2015-01-09 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272335#comment-14272335
 ] 

Erik Hatcher commented on SOLR-4512:


I'm going to close this ticket as wont-fix with this technique as my 
recommendation for making parameters "sticky" to any request handler.  

Here's how the data driven config defines the /browse handler:

{code}
 
{code}

These param sets get defined in /conf/params.json, which can be done through 
API calls like this:

{code}
curl http://localhost:8983/solr/films/config/params -H 
'Content-type:application/json'  -d '{
"update" : {
  "facets": {
"facet.field":"genre"
}
  }
}'
{code}

This technique not only lets params be shared across request handlers, it also 
makes the "sticky" behavior desired for /browse straightforward.  I left an 
empty/undefined "browse" param set in there that can be used to attach UI-only 
parameters, such as this:

{code}
curl http://localhost:8983/solr/films/config/params -H 
'Content-type:application/json'  -d '{
"set" : {
  "browse": {
"hl":"on",
"hl.fl":"name"
}
  }
}'
{code}
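A usage note (my addition, not part of the comment above; it assumes a local Solr 5.x instance with the "films" collection used in the examples): a param set stored in conf/params.json is applied at request time via the useParams request parameter, or baked into a handler definition, which is how the values become "sticky" for /browse:

```shell
# Hypothetical usage sketch: apply the stored "browse" param set to a request.
# Assumes Solr is running on localhost:8983 with a "films" collection.
curl "http://localhost:8983/solr/films/browse?q=*:*&useParams=browse"

# Inspect what is currently stored in conf/params.json:
curl "http://localhost:8983/solr/films/config/params"
```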







[jira] [Resolved] (SOLR-1723) VelocityResponseWriter improvements

2015-01-09 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-1723.

Resolution: Fixed

Closing this out; if there's anything lingering that needs to be done please 
make a new JIRA.

> VelocityResponseWriter improvements
> ---
>
> Key: SOLR-1723
> URL: https://issues.apache.org/jira/browse/SOLR-1723
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Catch-all for a number of VelocityResponseWriter cleanups/improvements for 
> 5.0:
> * CSS overhaul needed. Color scheme change. Add styling for  tags so 
> highlighting stands out better.
> * Look up uniqueKey field name (for use by highlighting, explain, and other 
> response extras)
> * spurious velocity.log's => route to logging to Solr's logging facility
> * Add back Velocity file resource loader, off by default.  Set 
> template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
> (4.6), enabled by a request-time v.base_dir parameter.  Current 
> implementation is enabled by an init-time parameter if specified and exists
> * Make params resource loader optional, off by default.  Set 
> params.resource.loader.enabled=true to enable.
> * Make solr resource loader optional, on by default.  Set 
> solr.resource.loader.enabled=false to disable.
> * Allow custom Velocity engine init properties to load from custom file: 
> init.properties.file (formerly there was v.properties request-time)
>   - can go to town with trickery from 
> http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
> * Allow layout to be disabled, even if v.layout is set; use 
> v.layout.enabled=false to disable layout (request-time)
> * Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
> easy to #if($debug)...#end 
> * Improve macros facility, put macros in your macros.vm.  (with legacy 
> support for VM_global_library.vm)






[jira] [Resolved] (SOLR-2035) Add a VelocityResponseWriter $resource tool for locale-specific string lookups

2015-01-09 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-2035.

Resolution: Fixed

> Add a VelocityResponseWriter $resource tool for locale-specific string lookups
> --
>
> Key: SOLR-2035
> URL: https://issues.apache.org/jira/browse/SOLR-2035
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-2035.patch
>
>
> Being able to look up string resources through Java's ResourceBundle facility 
> can be really useful in Velocity templates (through VelocityResponseWriter).  
> Velocity Tools includes a ResourceTool. 






[jira] [Updated] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-09 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6496:
---
Attachment: SOLR-6496.patch

Updated the patch to provide the same exiting functionality in the duplicate 
request implementation. I created SOLR-6949 to capture the refactoring that 
should be done to consolidate the two implementations.

> LBHttpSolrServer should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, you should no longer perform retries and allow the exception to 
> bubble up and allow the request to either error out or return partial results 
> per the shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)
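The behavior described above can be sketched as a generic time-bounded failover loop. This is an illustrative sketch only — the class and method names are hypothetical stand-ins, not the actual LBHttpSolrServer code — showing the key change: stop retrying once the timeAllowed-style budget is exhausted and let the last failure bubble up.

```java
import java.util.List;
import java.util.concurrent.Callable;

// Hypothetical sketch of the fix: retry a request across servers, but stop
// once the time budget (akin to timeAllowed) is spent instead of blindly
// trying every remaining server.
public class TimeBoundedRetry {

    public static <T> T requestWithTimeAllowed(List<Callable<T>> servers,
                                               long timeAllowedMillis) throws Exception {
        long deadline = System.nanoTime() + timeAllowedMillis * 1_000_000L;
        Exception last = null;
        for (Callable<T> server : servers) {
            if (System.nanoTime() >= deadline) {
                break;  // budget spent: no more retries, let the failure bubble up
            }
            try {
                return server.call();
            } catch (Exception e) {
                last = e;  // remember the failure and fail over while time remains
            }
        }
        throw last != null ? last
                           : new IllegalStateException("time allowed exceeded before any attempt");
    }

    public static void main(String[] args) throws Exception {
        // First "server" fails, second succeeds while the budget remains.
        List<Callable<String>> servers = List.of(
                () -> { throw new RuntimeException("server down"); },
                () -> "ok");
        String out = requestWithTimeAllowed(servers, 1000);
        System.out.println(out);  // prints "ok"
    }
}
```

The caller can then honor shards.tolerant: a thrown exception either errors the request out or yields partial results, exactly as the description asks.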






[jira] [Created] (SOLR-6949) Refactor LBHttpSolrClient to consolidate the two different request implementations

2015-01-09 Thread Steve Davids (JIRA)
Steve Davids created SOLR-6949:
--

 Summary: Refactor LBHttpSolrClient to consolidate the two different request implementations
 Key: SOLR-6949
 URL: https://issues.apache.org/jira/browse/SOLR-6949
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Davids
 Fix For: 5.0, Trunk


LBHttpSolrClient has two duplicate request implementations:

1. public Rsp request(Req req) throws SolrServerException, IOException
2. public NamedList request(final SolrRequest request) throws SolrServerException, IOException

Refactor the client to provide a single implementation that both can use: the 
two should behave consistently, and both are non-trivial, which makes 
maintaining them separately burdensome.
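The proposed consolidation could look roughly like this. All names and types here are hypothetical stand-ins (not SolrJ's actual internals); the point is only the shape: both public request overloads delegate to a single core failover loop so the retry logic lives in exactly one place.

```java
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the refactoring: one private core loop, two thin
// public overloads that only adapt their argument/return types.
public class ConsolidatedClient {

    // The one shared implementation: iterate servers, return the first success,
    // fail over on error, rethrow the last failure if every server fails.
    private <R> R doRequest(Iterable<String> serverUrls, Function<String, R> call) {
        RuntimeException last = new RuntimeException("no servers configured");
        for (String url : serverUrls) {
            try {
                return call.apply(url);
            } catch (RuntimeException e) {
                last = e;  // fail over to the next server
            }
        }
        throw last;
    }

    // Stands in for: public Rsp request(Req req)
    public String request(Iterable<String> serverUrls, String body) {
        return doRequest(serverUrls, url -> url + " handled " + body);
    }

    // Stands in for: public NamedList request(final SolrRequest request)
    public Map<String, String> requestSimple(Iterable<String> serverUrls, String body) {
        return doRequest(serverUrls, url -> Map.of(url, body));
    }

    public static void main(String[] args) {
        ConsolidatedClient c = new ConsolidatedClient();
        System.out.println(c.request(java.util.List.of("http://s1"), "q=*:*"));
    }
}
```

With this shape, a behavior fix (such as the timeAllowed exit from SOLR-6496) lands in doRequest once and both overloads pick it up.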






[jira] [Updated] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-09 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-6915:
-
Attachment: SOLR-6915.patch

Here's a patch implementing the ACLProvider and a test.

Notes:
1) the MiniKdc in hadoop didn't come in until hadoop 2.3.0, so I upgraded the 
dependency version
2) As the test demonstrates, you don't need to provide a CredentialsProvider to 
get the sasl authentication; modifying the javax.security Configuration takes 
care of that
3) ZooKeeper maintains a static collection of AuthenticationProviders.  Thus, 
if we don't add the SASLAuthenticationProvider to the system properties the 
first time we spin up a zookeeper, we won't ever be able to use Sasl, even if 
it's in a subsequent test in the same jvm.  So, we now set this up in 
ZkTestServer.
4) For the apache directory server dependency, I used apacheds-all, rather than 
picking the individual components we need.  If we picked the individual 
components we'd save ~33% of the size of the jar at the cost of maintaining all 
the versions, licenses/notices/etc.  I can go either way on that.
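Note 3 can be illustrated concretely. The zookeeper.authProvider.N system-property prefix is ZooKeeper's standard mechanism for registering server authentication providers; wiring the call into ZkTestServer setup (so it runs before the first server starts) is the change described above. This is a sketch, not the patch itself:

```java
// Sketch of note 3: ZooKeeper copies its authProvider.* system properties into
// a static registry the first time a server starts in the JVM, so the SASL
// provider must be registered before any ZkTestServer is spun up.
public class ZkSaslSetup {

    public static void registerSaslProvider() {
        // "zookeeper.authProvider.1" is ZooKeeper's documented property prefix
        // for pluggable server-side AuthenticationProviders.
        System.setProperty("zookeeper.authProvider.1",
            "org.apache.zookeeper.server.auth.SASLAuthenticationProvider");
    }

    public static void main(String[] args) {
        registerSaslProvider();  // must run before the first ZooKeeper start
        System.out.println(System.getProperty("zookeeper.authProvider.1"));
    }
}
```

Registering it in ZkTestServer means every test in the JVM can use SASL, which is exactly why the setup can't be deferred to an individual test.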

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6915.patch
>
>
> We should provide a ZkACLProvider that requires SASL authentication.  This 
> provider will be useful for administration in a kerberos environment.  In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with 
> zookeeper via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).






[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272191#comment-14272191
 ] 

Steve Rowe commented on SOLR-6913:
--

I reverted my initial commit, then made changes to {{schema.xml}}: put back 
most of the field types and dynamic fields I had removed, added dynamic fields 
for each field type where they were missing, added a warning about the 
catch-all {{_text}} field to the schema, then renamed {{schema.xml}} to 
{{managed-schema}}.  This keeps the comments-as-documentation intact in the 
configset, where they won't be overwritten.  The schema will also be much 
easier to maintain and track history for.

I think this is done. (Should have reopened and then resolved again - too late 
now...)

> audit & cleanup "schema" in data_driven_schema_configs
> --
>
> Key: SOLR-6913
> URL: https://issues.apache.org/jira/browse/SOLR-6913
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6913-trim-schema.patch, 
> SOLR-6913-trim-schema.patch, SOLR-6913.patch
>
>
> the data_driven_schema_configs configset has some issues that should be 
> reviewed carefully & cleaned up...
> * currently includes a schema.xml file:
> ** this was previously part of the old example to show the automatic 
> "bootstrapping" of schema.xml -> managed-schema, but at this point it's just 
> kind of confusing
> ** we should just rename this to "managed-schema" in svn - the ref guide 
> explains the bootstrapping
> * the effective schema as it currently stands includes a bunch of copyFields 
> & dynamicFields that are taken wholesale from the techproducts example
> ** some of these might make sense to keep in a general example (ie: "\*_txt") 
> but in general they should all be reviewed.
> ** a bunch of this cruft is actually commented out already, but anything we 
> don't want to keep should be removed to eliminate confusion
> * SOLR-6471 added an explicit "_text" field as the default and made it a 
> copyField catchall (ie: "\*")
> ** the ref guide schema API example responses need to reflect the existence 
> of this field: 
> https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
> ** we should draw heavy attention to this field+copyField -- both with a "/!\ 
> NOTE" in the refguide and call it out in solrconfig.xml & "managed-schema" 
> file comments since people who start with these configs may be surprised and 
> wind up with a very bloated index
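For reference, the SOLR-6471 field-plus-copyField combination called out above looks roughly like this in managed-schema (a sketch; the exact field type name is an assumption):

```xml
<!-- Sketch of the catch-all from SOLR-6471: every field is copied into
     _text, which can bloat the index for users who don't expect it. -->
<field name="_text" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="*" dest="_text"/>
```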






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2456 - Still Failing

2015-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2456/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:25950/_mxb/h/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:25950/_mxb/h/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([6541A4CC0DF99371:E4A72AD47AA6F34D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.Test

[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272182#comment-14272182
 ] 

ASF subversion and git services commented on SOLR-6913:
---

Commit 1650706 from [~sar...@syr.edu] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650706 ]

SOLR-6913: In data_driven_schema_configs configset, rename schema.xml to 
managed-schema, remove example-only fieldtypes, add dynamic fields for each 
fieldtype where they don't exist, and add a warning about the catch-all _text 
field (merged trunk r1650701)




[jira] [Resolved] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-6945.

Resolution: Fixed

Thanks guys.

> example configs use deprecated spatial options: "units" on BBoxField
> 
>
> Key: SOLR-6945
> URL: https://issues.apache.org/jira/browse/SOLR-6945
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: David Smiley
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: SOLR-6945.patch
>
>
> bin/solr -e techproducts causes the following WARN from BBoxField...
> {noformat}
> units parameter is deprecated, please use distanceUnits instead for field 
> types with class BBoxField
> {noformat}
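The fix replaces the deprecated attribute on the example BBoxField declaration; a hedged sketch of what the updated field type looks like (the numberType name is an assumption):

```xml
<!-- distanceUnits replaces the deprecated units attribute on BBoxField -->
<fieldType name="bbox" class="solr.BBoxField"
           geo="true" distanceUnits="kilometers" numberType="_bbox_coord"/>
```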






[jira] [Commented] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272176#comment-14272176
 ] 

ASF subversion and git services commented on SOLR-6945:
---

Commit 1650704 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650704 ]

SOLR-6945: sample schema.xml bbox field distanceUnits=kilometers




[jira] [Commented] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272175#comment-14272175
 ] 

ASF subversion and git services commented on SOLR-6945:
---

Commit 1650703 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1650703 ]

SOLR-6945: sample schema.xml bbox field distanceUnits=kilometers




[jira] [Assigned] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-6945:
--

Assignee: David Smiley




[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272157#comment-14272157
 ] 

ASF subversion and git services commented on SOLR-6913:
---

Commit 1650702 from [~sar...@syr.edu] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650702 ]

SOLR-6913: revert cleanup schema in data_drive_schema_configs configset (schema 
modifications will follow)




[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272153#comment-14272153
 ] 

ASF subversion and git services commented on SOLR-6913:
---

Commit 1650701 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1650701 ]

SOLR-6913: In data_driven_schema_configs configset, rename schema.xml to 
managed-schema, remove example-only fieldtypes, add dynamic fields for each 
fieldtype where they don't exist, and add a warning about the catch-all _text 
field




[jira] [Created] (SOLR-6948) "bin/solr -e cloud" shouldn't bother to ask about collection options if it already exists

2015-01-09 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6948:
--

 Summary: "bin/solr -e cloud" shouldn't bother to ask about 
collection options if it already exists
 Key: SOLR-6948
 URL: https://issues.apache.org/jira/browse/SOLR-6948
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Minor


if you run "bin/solr -e cloud" and select all defaults, and then later you run 
it again the output looks like this...

{noformat}
...
Now let's create a new collection for indexing documents in your 2-node cluster.

Please provide a name for your new collection: [gettingstarted] 
gettingstarted
How many shards would you like to split gettingstarted into? [2] 
2
How many replicas per shard would you like to create? [2] 
2
Please choose a configuration for the gettingstarted collection, available 
options are:
basic_configs, data_driven_schema_configs, or sample_techproducts_configs 
[data_driven_schema_configs] 
Connecting to ZooKeeper at localhost:9983

Collection 'gettingstarted' already exists!

Checked collection existence using Collections API command:
http://127.0.1.1:8983/solr/admin/collections?action=list



SolrCloud example running, please visit http://localhost:8983/solr 



{noformat}

...instead of asking about shards, replicas, and config set, the cloud example 
should probably check for the existence of the collection name as soon as the 
user supplies a name, and then exit immediately with just the final part...

{noformat}
Now let's create a new collection for indexing documents in your 2-node cluster.

Please provide a name for your new collection: [gettingstarted] 
gettingstarted

Collection 'gettingstarted' already exists!

Checked collection existence using Collections API command:
http://127.0.1.1:8983/solr/admin/collections?action=list



SolrCloud example running, please visit http://localhost:8983/solr 
{noformat}
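A minimal sketch of the proposed control flow (hypothetical names; the real check would call the Collections API action=list endpoint shown above):

```java
import java.util.Set;

// Hypothetical sketch of the proposed prompt flow: check collection
// existence right after the name prompt and exit early, instead of first
// asking about shards, replicas, and configsets.
public class CreateCollectionPromptSketch {
    // Stand-in for a Collections API "action=list" call.
    static boolean collectionExists(Set<String> existing, String name) {
        return existing.contains(name);
    }

    static String nextPrompt(Set<String> existing, String name) {
        if (collectionExists(existing, name)) {
            // Exit immediately; skip the shard/replica/configset questions.
            return "Collection '" + name + "' already exists!";
        }
        return "How many shards would you like to split " + name + " into? [2]";
    }

    public static void main(String[] args) {
        System.out.println(nextPrompt(Set.of("gettingstarted"), "gettingstarted"));
        System.out.println(nextPrompt(Set.of("gettingstarted"), "films"));
    }
}
```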






[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272063#comment-14272063
 ] 

ASF subversion and git services commented on SOLR-6913:
---

Commit 1650696 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1650696 ]

SOLR-6913: revert cleanup schema in data_drive_schema_configs configset (schema 
modifications will follow)




[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_72) - Build # 4301 - Still Failing!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4301/
Java: 64bit/jdk1.7.0_72 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51554/repfacttest_c8n_1x3_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51554/repfacttest_c8n_1x3_shard1_replica2
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:276)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[jira] [Commented] (SOLR-6127) Improve Solr's exampledocs data

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271996#comment-14271996
 ] 

ASF subversion and git services commented on SOLR-6127:
---

Commit 1650689 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650689 ]

SOLR-6127: More improvements to the films example: remove fake document, README 
steps polished (merged from trunk r1650688)

> Improve Solr's exampledocs data
> ---
>
> Key: SOLR-6127
> URL: https://issues.apache.org/jira/browse/SOLR-6127
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, scripts and tools
>Reporter: Varun Thacker
>Assignee: Erik Hatcher
> Fix For: 5.0, Trunk
>
> Attachments: LICENSE.txt, README.txt, README.txt, SOLR-6127.patch, 
> film.csv, film.json, film.xml, freebase_film_dump.py, freebase_film_dump.py, 
> freebase_film_dump.py, freebase_film_dump.py, freebase_film_dump.py, 
> freebase_film_dump.py, freebase_film_dump.py
>
>
> Currently 
> - The CSV example has 10 documents.
> - The JSON example has 4 documents.
> - The XML example has 32 documents.
> 1. We should have an equal number of documents, and the same documents, in 
> all the example formats.
> 2. A data set that is slightly more comprehensive.






[jira] [Commented] (SOLR-6127) Improve Solr's exampledocs data

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271994#comment-14271994
 ] 

ASF subversion and git services commented on SOLR-6127:
---

Commit 1650688 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1650688 ]

SOLR-6127: More improvements to the films example: remove fake document, README 
steps polished

> Improve Solr's exampledocs data
> ---
>
> Key: SOLR-6127
> URL: https://issues.apache.org/jira/browse/SOLR-6127
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, scripts and tools
>Reporter: Varun Thacker
>Assignee: Erik Hatcher
> Fix For: 5.0, Trunk
>
> Attachments: LICENSE.txt, README.txt, README.txt, SOLR-6127.patch, 
> film.csv, film.json, film.xml, freebase_film_dump.py, freebase_film_dump.py, 
> freebase_film_dump.py, freebase_film_dump.py, freebase_film_dump.py, 
> freebase_film_dump.py, freebase_film_dump.py
>
>
> Currently:
> - The CSV example has 10 documents.
> - The JSON example has 4 documents.
> - The XML example has 32 documents.
> 1. We should have an equal number of documents, and the same documents, in all 
> the example formats.
> 2. A data set that is slightly more comprehensive.






[jira] [Commented] (SOLR-2035) Add a VelocityResponseWriter $resource tool for locale-specific string lookups

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271987#comment-14271987
 ] 

ASF subversion and git services commented on SOLR-2035:
---

Commit 1650687 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650687 ]

SOLR-1723: VelocityResponseWriter improvements
SOLR-2035: Add a VelocityResponseWriter $resource tool for locale-specific 
string lookups.
Lots of VrW code cleanup, more and improved test cases.
(merged from r1650685 of trunk)

> Add a VelocityResponseWriter $resource tool for locale-specific string lookups
> --
>
> Key: SOLR-2035
> URL: https://issues.apache.org/jira/browse/SOLR-2035
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-2035.patch
>
>
> Being able to look up string resources through Java's ResourceBundle facility 
> can be really useful in Velocity templates (through VelocityResponseWriter).  
> Velocity Tools includes a ResourceTool. 
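The ResourceBundle facility the issue refers to is standard Java. A minimal sketch of the kind of lookup a $resource tool would ultimately perform (the Messages bundle and the "search.button" key are hypothetical, not from Solr's actual templates):

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class ResourceLookupSketch {
    // Hypothetical bundle; in practice these strings would live in
    // locale-specific .properties files resolved via ResourceBundle.getBundle().
    static class Messages extends ListResourceBundle {
        @Override
        protected Object[][] getContents() {
            return new Object[][] { { "search.button", "Search" } };
        }
    }

    public static void main(String[] args) {
        ResourceBundle bundle = new Messages();
        // A template reference such as $resource.get("search.button")
        // would boil down to a lookup of this shape:
        System.out.println(bundle.getString("search.button"));
    }
}
```

A Velocity template would then reference the looked-up string instead of hard-coding it, so switching locales only means switching bundles.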






[jira] [Commented] (SOLR-1723) VelocityResponseWriter improvements

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271986#comment-14271986
 ] 

ASF subversion and git services commented on SOLR-1723:
---

Commit 1650687 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650687 ]

SOLR-1723: VelocityResponseWriter improvements
SOLR-2035: Add a VelocityResponseWriter $resource tool for locale-specific 
string lookups.
Lots of VrW code cleanup, more and improved test cases.
(merged from r1650685 of trunk)

> VelocityResponseWriter improvements
> ---
>
> Key: SOLR-1723
> URL: https://issues.apache.org/jira/browse/SOLR-1723
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Catch-all for a number of VelocityResponseWriter cleanups/improvements for 
> 5.0:
> * CSS overhaul needed. Color scheme change. Add styling for  tags so 
> highlighting stands out better.
> * Look up uniqueKey field name (for use by highlighting, explain, and other 
> response extras)
> * spurious velocity.log's => route logging to Solr's logging facility
> * Add back Velocity file resource loader, off by default.  Set 
> template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
> (4.6), enabled by a request-time v.base_dir parameter.  Current 
> implementation is enabled by an init-time parameter if specified and exists
> * Make params resource loader optional, off by default.  Set 
> params.resource.loader.enabled=true to enable.
> * Make solr resource loader optional, on by default.  Set 
> solr.resource.loader.enabled=false to disable.
> * Allow custom Velocity engine init properties to load from custom file: 
> init.properties.file (formerly there was v.properties request-time)
>   - can go to town with trickery from 
> http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
> * Allow layout to be disabled, even if v.layout is set; use 
> v.layout.enabled=false to disable layout (request-time)
> * Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
> easy to #if($debug)...#end 
> * Improve macros facility, put macros in your macros.vm.  (with legacy 
> support for VM_global_library.vm)
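Taken together, the init-time flags in the list above would land in solrconfig.xml roughly like this (a sketch using only the parameter names listed above; the element layout and values are illustrative):

{code:xml}
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
  <!-- file resource loader: off by default, enabled by setting a base dir -->
  <str name="template.base.dir">/path/to/templates</str>
  <!-- params resource loader: off by default -->
  <str name="params.resource.loader.enabled">true</str>
  <!-- solr resource loader: on by default -->
  <str name="solr.resource.loader.enabled">true</str>
  <!-- extra Velocity engine init properties -->
  <str name="init.properties.file">velocity-init.properties</str>
</queryResponseWriter>
{code}

The request-time switch v.layout.enabled=false would then ride along on the query string rather than in this config.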






[jira] [Commented] (SOLR-1723) VelocityResponseWriter improvements

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271981#comment-14271981
 ] 

ASF subversion and git services commented on SOLR-1723:
---

Commit 1650685 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1650685 ]

SOLR-1723: VelocityResponseWriter improvements
SOLR-2035: Add a VelocityResponseWriter $resource tool for locale-specific 
string lookups.
Lots of VrW code cleanup, more and improved test cases.

> VelocityResponseWriter improvements
> ---
>
> Key: SOLR-1723
> URL: https://issues.apache.org/jira/browse/SOLR-1723
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Catch-all for a number of VelocityResponseWriter cleanups/improvements for 
> 5.0:
> * CSS overhaul needed. Color scheme change. Add styling for  tags so 
> highlighting stands out better.
> * Look up uniqueKey field name (for use by highlighting, explain, and other 
> response extras)
> * spurious velocity.log's => route logging to Solr's logging facility
> * Add back Velocity file resource loader, off by default.  Set 
> template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
> (4.6), enabled by a request-time v.base_dir parameter.  Current 
> implementation is enabled by an init-time parameter if specified and exists
> * Make params resource loader optional, off by default.  Set 
> params.resource.loader.enabled=true to enable.
> * Make solr resource loader optional, on by default.  Set 
> solr.resource.loader.enabled=false to disable.
> * Allow custom Velocity engine init properties to load from custom file: 
> init.properties.file (formerly there was v.properties request-time)
>   - can go to town with trickery from 
> http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
> * Allow layout to be disabled, even if v.layout is set; use 
> v.layout.enabled=false to disable layout (request-time)
> * Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
> easy to #if($debug)...#end 
> * Improve macros facility, put macros in your macros.vm.  (with legacy 
> support for VM_global_library.vm)






[jira] [Commented] (SOLR-2035) Add a VelocityResponseWriter $resource tool for locale-specific string lookups

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271982#comment-14271982
 ] 

ASF subversion and git services commented on SOLR-2035:
---

Commit 1650685 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1650685 ]

SOLR-1723: VelocityResponseWriter improvements
SOLR-2035: Add a VelocityResponseWriter $resource tool for locale-specific 
string lookups.
Lots of VrW code cleanup, more and improved test cases.

> Add a VelocityResponseWriter $resource tool for locale-specific string lookups
> --
>
> Key: SOLR-2035
> URL: https://issues.apache.org/jira/browse/SOLR-2035
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-2035.patch
>
>
> Being able to look up string resources through Java's ResourceBundle facility 
> can be really useful in Velocity templates (through VelocityResponseWriter).  
> Velocity Tools includes a ResourceTool. 






[jira] [Updated] (SOLR-1723) VelocityResponseWriter improvements

2015-01-09 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-1723:
---
Summary: VelocityResponseWriter improvements  (was: VelocityResponseWriter 
view enhancement ideas)

> VelocityResponseWriter improvements
> ---
>
> Key: SOLR-1723
> URL: https://issues.apache.org/jira/browse/SOLR-1723
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Catch-all for a number of VelocityResponseWriter cleanups/improvements for 
> 5.0:
> * CSS overhaul needed. Color scheme change. Add styling for  tags so 
> highlighting stands out better.
> * Look up uniqueKey field name (for use by highlighting, explain, and other 
> response extras)
> * spurious velocity.log's => route logging to Solr's logging facility
> * Add back Velocity file resource loader, off by default.  Set 
> template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
> (4.6), enabled by a request-time v.base_dir parameter.  Current 
> implementation is enabled by an init-time parameter if specified and exists
> * Make params resource loader optional, off by default.  Set 
> params.resource.loader.enabled=true to enable.
> * Make solr resource loader optional, on by default.  Set 
> solr.resource.loader.enabled=false to disable.
> * Allow custom Velocity engine init properties to load from custom file: 
> init.properties.file (formerly there was v.properties request-time)
>   - can go to town with trickery from 
> http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
> * Allow layout to be disabled, even if v.layout is set; use 
> v.layout.enabled=false to disable layout (request-time)
> * Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
> easy to #if($debug)...#end 
> * Improve macros facility, put macros in your macros.vm.  (with legacy 
> support for VM_global_library.vm)






[jira] [Created] (SOLR-6947) Add built-in generic support for other (besides field) faceting display and filtering

2015-01-09 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-6947:
--

 Summary: Add built-in generic support for other (besides field) 
faceting display and filtering
 Key: SOLR-6947
 URL: https://issues.apache.org/jira/browse/SOLR-6947
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Velocity
Reporter: Erik Hatcher
Assignee: Erik Hatcher


Built-in faceting/filtering currently covers facet.field only.  Let's add 
generic support for the other faceting modes as well.

(note: the "techproducts" example ships with more faceting support, but it 
needs to be pulled out in a generic fashion)







[jira] [Commented] (SOLR-1723) VelocityResponseWriter view enhancement ideas

2015-01-09 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271957#comment-14271957
 ] 

Erik Hatcher commented on SOLR-1723:


Do we still need a button for rebuilding the spellcheck index, as mentioned 
above? I don't think so, but if so, let's open a new ticket for it.

> VelocityResponseWriter view enhancement ideas
> -
>
> Key: SOLR-1723
> URL: https://issues.apache.org/jira/browse/SOLR-1723
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Catch-all for a number of VelocityResponseWriter cleanups/improvements for 
> 5.0:
> * CSS overhaul needed. Color scheme change. Add styling for  tags so 
> highlighting stands out better.
> * Look up uniqueKey field name (for use by highlighting, explain, and other 
> response extras)
> * spurious velocity.log's => route logging to Solr's logging facility
> * Add back Velocity file resource loader, off by default.  Set 
> template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
> (4.6), enabled by a request-time v.base_dir parameter.  Current 
> implementation is enabled by an init-time parameter if specified and exists
> * Make params resource loader optional, off by default.  Set 
> params.resource.loader.enabled=true to enable.
> * Make solr resource loader optional, on by default.  Set 
> solr.resource.loader.enabled=false to disable.
> * Allow custom Velocity engine init properties to load from custom file: 
> init.properties.file (formerly there was v.properties request-time)
>   - can go to town with trickery from 
> http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
> * Allow layout to be disabled, even if v.layout is set; use 
> v.layout.enabled=false to disable layout (request-time)
> * Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
> easy to #if($debug)...#end 
> * Improve macros facility, put macros in your macros.vm.  (with legacy 
> support for VM_global_library.vm)






[jira] [Updated] (SOLR-1723) VelocityResponseWriter view enhancement ideas

2015-01-09 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-1723:
---
Description: 
Catch-all for a number of VelocityResponseWriter cleanups/improvements for 5.0:

* CSS overhaul needed. Color scheme change. Add styling for  tags so 
highlighting stands out better.
* Look up uniqueKey field name (for use by highlighting, explain, and other 
response extras)
* spurious velocity.log's => route logging to Solr's logging facility
* Add back Velocity file resource loader, off by default.  Set 
template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
(4.6), enabled by a request-time v.base_dir parameter.  Current implementation 
is enabled by an init-time parameter if specified and exists
* Make params resource loader optional, off by default.  Set 
params.resource.loader.enabled=true to enable.
* Make solr resource loader optional, on by default.  Set 
solr.resource.loader.enabled=false to disable.
* Allow custom Velocity engine init properties to load from custom file: 
init.properties.file (formerly there was v.properties request-time)
  - can go to town with trickery from 
http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
* Allow layout to be disabled, even if v.layout is set; use 
v.layout.enabled=false to disable layout (request-time)
* Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
easy to #if($debug)...#end 
* Improve macros facility, put macros in your macros.vm.  (with legacy support 
for VM_global_library.vm)

  was:
Jotting down some ideas for improvement in the Solritas default view:

  * Look up uniqueKey field name (for use by highlighting, explain, and other 
response extras)
  * Add highlighting support - don't show "..." when whole field is highlighted 
(fragsize=0), add hover to see stored field value that may be returned also



I'm usurping this issue to encompass a number of improvements for 
VelocityResponseWriter strictly for 5.0.  Other items not tackled for 5.0 will 
be spun out into separate tickets.
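The $debug and macros items at the end of the list lend themselves to a quick template sketch (the macro name and markup are illustrative, not taken from Solr's shipped templates):

{code}
## macros.vm -- per the list above, user-defined macros now live here
#macro(heading $text)<h2>$text</h2>#end

## in any template: $debug is QueryResponse#getDebugMap(), so
## debug-only output can be guarded with a plain #if
#if($debug)
  <pre>$debug</pre>
#end
{code}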

> VelocityResponseWriter view enhancement ideas
> -
>
> Key: SOLR-1723
> URL: https://issues.apache.org/jira/browse/SOLR-1723
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Catch-all for a number of VelocityResponseWriter cleanups/improvements for 
> 5.0:
> * CSS overhaul needed. Color scheme change. Add styling for  tags so 
> highlighting stands out better.
> * Look up uniqueKey field name (for use by highlighting, explain, and other 
> response extras)
> * spurious velocity.log's => route logging to Solr's logging facility
> * Add back Velocity file resource loader, off by default.  Set 
> template.base.dir writer init param to enable.  This was in pre-SOLR-4882 
> (4.6), enabled by a request-time v.base_dir parameter.  Current 
> implementation is enabled by an init-time parameter if specified and exists
> * Make params resource loader optional, off by default.  Set 
> params.resource.loader.enabled=true to enable.
> * Make solr resource loader optional, on by default.  Set 
> solr.resource.loader.enabled=false to disable.
> * Allow custom Velocity engine init properties to load from custom file: 
> init.properties.file (formerly there was v.properties request-time)
>   - can go to town with trickery from 
> http://velocity.apache.org/engine/devel/developer-guide.html#Velocity_Configuration_Keys_and_Values
> * Allow layout to be disabled, even if v.layout is set; use 
> v.layout.enabled=false to disable layout (request-time)
> * Added $debug to context (it's just QueryResponse#getDebugMap()); makes it 
> easy to #if($debug)...#end 
> * Improve macros facility, put macros in your macros.vm.  (with legacy 
> support for VM_global_library.vm)






[jira] [Updated] (SOLR-6946) create_core should accept the port as an optional param

2015-01-09 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6946:
---
Description: 
While documenting legacy distributed search, for the purpose of an example, I 
wanted to start 2 instances on the same machine in standalone mode with a core 
each and the same config set.
Here's what I did to start the 2 nodes:
{code}
bin/solr start -s example/nodes/node1 -p 8983
bin/solr start -s example/nodes/node2 -p 8984 
{code}
So far so good. Now, create_core doesn't accept a port number, so it 
pseudo-randomly picks a node to create the core on, i.e. I can't smoothly 
create a core on both nodes using scripts unless we support "-p <port>" with 
that call (and maybe for collections too?).

FYI, I also tried:
{code}
bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
bin/solr start -s example/nodes/node2 -p 8984 -e techproducts
{code}

but this failed as -e overrides -s. I don't really remember why we did that, 
but perhaps we can consider not overriding -s, even when -e is specified i.e. 
copy whatever is required and use -s.

  was:
While documenting legacy distributed search, for the purpose of an example, I 
wanted to start 2 instances on the same machine in standalone mode with a core 
each and the same config set.
Here's what I did to start the 2 nodes:
{code}
bin/solr start -s example/nodes/node1 -p 8983
bin/solr start -s example/nodes/node2 -p 8984 
{code}
So far so good. Now, create_core doesn't accept a port number, so it 
pseudo-randomly picks a node to create the core on, i.e. I can't smoothly 
create a core on both nodes using scripts unless we support "-p <port>" with 
that call (and maybe for collections too?).

FYI, I also tried:
{code}
bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
bin/solr start -s example/nodes/node1 -p 8984 -e techproducts
{code}

but this failed as -e overrides -s. I don't really remember why we did that, 
but perhaps we can consider not overriding -s, even when -e is specified i.e. 
copy whatever is required and use -s.


> create_core should accept the port as an optional param
> ---
>
> Key: SOLR-6946
> URL: https://issues.apache.org/jira/browse/SOLR-6946
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Anshum Gupta
>Priority: Critical
>
> While documenting legacy distributed search, for the purpose of an example, I 
> wanted to start 2 instances on the same machine in standalone mode with a 
> core each and the same config set.
> Here's what I did to start the 2 nodes:
> {code}
> bin/solr start -s example/nodes/node1 -p 8983
> bin/solr start -s example/nodes/node2 -p 8984 
> {code}
> So far so good. Now, create_core doesn't accept a port number, so it 
> pseudo-randomly picks a node to create the core on, i.e. I can't smoothly 
> create a core on both nodes using scripts unless we support "-p <port>" 
> with that call (and maybe for collections too?).
> FYI, I also tried:
> {code}
> bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
> bin/solr start -s example/nodes/node2 -p 8984 -e techproducts
> {code}
> but this failed as -e overrides -s. I don't really remember why we did that, 
> but perhaps we can consider not overriding -s, even when -e is specified i.e. 
> copy whatever is required and use -s.






[jira] [Created] (SOLR-6946) create_core should accept the port as an optional param

2015-01-09 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-6946:
--

 Summary: create_core should accept the port as an optional param
 Key: SOLR-6946
 URL: https://issues.apache.org/jira/browse/SOLR-6946
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Priority: Critical


While documenting legacy distributed search, for the purpose of an example, I 
wanted to start 2 instances on the same machine in standalone mode with a core 
each and the same config set.
Here's what I did to start the 2 nodes:
{code}
bin/solr start -s example/nodes/node1 -p 8983
bin/solr start -s example/nodes/node2 -p 8984 
{code}
So far so good. Now, create_core doesn't accept a port number, so it 
pseudo-randomly picks a node to create the core on, i.e. I can't smoothly 
create a core on both nodes using scripts unless we support "-p <port>" with 
that call (and maybe for collections too?).

FYI, I also tried:
{code}
bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
bin/solr start -s example/nodes/node1 -p 8984 -e techproducts
{code}

but this failed as -e overrides -s. I don't really remember why we did that, 
but perhaps we can consider not overriding -s, even when -e is specified i.e. 
copy whatever is required and use -s.
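Until create_core grows a -p option, one workaround would be to address each node's CoreAdmin API directly, so the core lands on the node we choose. A sketch (the helper name is hypothetical, and whether CREATE needs configSet, instanceDir, or both depends on your setup):

```shell
# Build the CoreAdmin CREATE URL for a specific standalone node, so core
# creation targets a chosen port instead of a pseudo-randomly picked node.
core_admin_create_url() {
  local port="$1" core="$2" configset="$3"
  echo "http://localhost:${port}/solr/admin/cores?action=CREATE&name=${core}&configSet=${configset}"
}

# One core per node, explicitly targeting each port:
core_admin_create_url 8983 films basic_configs
core_admin_create_url 8984 films basic_configs
```

Each printed URL would then be fetched (e.g. with curl) against the corresponding running node.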






[jira] [Comment Edited] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271914#comment-14271914
 ] 

Ishan Chattopadhyaya edited comment on SOLR-6945 at 1/9/15 9:43 PM:


The patch (trunk) changes the bbox fields to use distanceUnits="kilometers" 
instead of the deprecated units="degrees" (SOLR-6797).


was (Author: ichattopadhyaya):
The patch (trunk) changes the bbox fields to use distanceUnits="kilometers" 
instead of the deprecated units="degrees".

> example configs use deprecated spatial options: "units" on BBoxField
> 
>
> Key: SOLR-6945
> URL: https://issues.apache.org/jira/browse/SOLR-6945
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: SOLR-6945.patch
>
>
> bin/solr -e techproducts causes the following WARN from BBoxField...
> {noformat}
> units parameter is deprecated, please use distanceUnits instead for field 
> types with class BBoxField
> {noformat}






[jira] [Updated] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-6945:
---
Attachment: SOLR-6945.patch

The patch (trunk) changes the bbox fields to use distanceUnits="kilometers" 
instead of the deprecated units="degrees".
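The fix amounts to a one-attribute change on the example field type, sketched here from the warning text (attributes other than units/distanceUnits are illustrative):

{code:xml}
<!-- before: triggers the deprecation WARN -->
<fieldType name="bbox" class="solr.BBoxField" geo="true" units="degrees" />

<!-- after: -->
<fieldType name="bbox" class="solr.BBoxField" geo="true" distanceUnits="kilometers" />
{code}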

> example configs use deprecated spatial options: "units" on BBoxField
> 
>
> Key: SOLR-6945
> URL: https://issues.apache.org/jira/browse/SOLR-6945
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: SOLR-6945.patch
>
>
> bin/solr -e techproducts causes the following WARN from BBoxField...
> {noformat}
> units parameter is deprecated, please use distanceUnits instead for field 
> types with class BBoxField
> {noformat}






[jira] [Commented] (SOLR-6942) DistribDocExpirationUpdateProcessorTest failure.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271905#comment-14271905
 ] 

Mark Miller commented on SOLR-6942:
---

bq. Isn't this a dup of SOLR-6640?

Couldn't tell you yet. If I see a test fail and it doesn't come up in a search, 
I add an issue to track it down.

> DistribDocExpirationUpdateProcessorTest failure.
> 
>
> Key: SOLR-6942
> URL: https://issues.apache.org/jira/browse/SOLR-6942
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
> Attachments: SOLR-6942.jenkins.log.txt
>
>
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1873/






[jira] [Created] (SOLR-6945) example configs use deprecated spatial options: "units" on BBoxField

2015-01-09 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6945:
--

 Summary: example configs use deprecated spatial options: "units" 
on BBoxField
 Key: SOLR-6945
 URL: https://issues.apache.org/jira/browse/SOLR-6945
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Priority: Blocker
 Fix For: 5.0


bin/solr -e techproducts causes the following WARN from BBoxField...

{noformat}
units parameter is deprecated, please use distanceUnits instead for field 
types with class BBoxField
{noformat}






[jira] [Updated] (SOLR-6942) DistribDocExpirationUpdateProcessorTest failure.

2015-01-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6942:
---
Attachment: SOLR-6942.jenkins.log.txt

Isn't this a dup of SOLR-6640?

Attaching the jenkins log file from the build Mark referenced, which shows...

{noformat}
   [junit4]   2> 149936 T431 C67 P51553 oasc.SolrException.log ERROR SnapPull 
failed :org.apache.solr.common.SolrException: Index fetch failed : 
   [junit4]   2>at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:506)
   [junit4]   2>at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:338)
   [junit4]   2>at 
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:163)
   [junit4]   2>at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:447)
   [junit4]   2>at 
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)
   [junit4]   2>Caused by: 
org.apache.lucene.index.CorruptIndexException: file mismatch, expected 
id=ab71veqfh2v3hn8xguebkpfhw, got=ab71veqfh2v3hn8xguebkpfhx 
(resource=BufferedChecksumIndexInput(MockIndexInputWrapper(RAMInputStream(name=_0.si
   [junit4]   2>at 
org.apache.lucene.codecs.CodecUtil.checkIndexHeaderID(CodecUtil.java:267)
   [junit4]   2>at 
org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:257)
   [junit4]   2>at 
org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:91)
   [junit4]   2>at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:326)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:824)
   [junit4]   2>at 
org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
   [junit4]   2>at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
   [junit4]   2>at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:267)
   [junit4]   2>at 
org.apache.solr.update.DefaultSolrCoreState.openIndexWriter(DefaultSolrCoreState.java:250)
   [junit4]   2>at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:470)
   [junit4]   2>... 4 more
   [junit4]   2>Suppressed: 
org.apache.lucene.index.CorruptIndexException: checksum passed (f8b4c3ab). 
possibly transient resource issue, or a Lucene or JVM bug 
(resource=BufferedChecksumIndexInput(MockIndexInputWrapper(RAMInputStream(name=_0.si
   [junit4]   2>at 
org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:380)
   [junit4]   2>at 
org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:111)
   [junit4]   2>... 11 more
   [junit4]   2>
   [junit4]   2> 149940 T431 C67 P51553 oasc.SolrException.log ERROR Error 
while trying to recover:org.apache.solr.common.SolrException: Replication for 
recovery failed.
   [junit4]   2>at 
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:166)
   [junit4]   2>at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:447)
   [junit4]   2>at 
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)
   [junit4]   2>
{noformat}

> DistribDocExpirationUpdateProcessorTest failure.
> 
>
> Key: SOLR-6942
> URL: https://issues.apache.org/jira/browse/SOLR-6942
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
> Attachments: SOLR-6942.jenkins.log.txt
>
>
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1873/






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2455 - Still Failing

2015-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2455/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:18618/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:18618/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([7D7D3B7EFA5587C3:FC9BB5668D0AE7FF]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertions

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_25) - Build # 4404 - Still Failing!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4404/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([F32C7CA8628125EF:72CAF2B015DE45D3]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor72.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.Sta

[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271806#comment-14271806
 ] 

Grant Ingersoll commented on SOLR-6913:
---

Awesome, thanks Steve!

> audit & cleanup "schema" in data_driven_schema_configs
> --
>
> Key: SOLR-6913
> URL: https://issues.apache.org/jira/browse/SOLR-6913
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6913-trim-schema.patch, 
> SOLR-6913-trim-schema.patch, SOLR-6913.patch
>
>
> the data_driven_schema_configs configset has some issues that should be 
> reviewed carefully & cleaned up...
> * currently includes a schema.xml file:
> ** this was previously part of the old example to show the automatic 
> "bootstrapping" of schema.xml -> managed-schema, but at this point it's just 
> kind of confusing
> ** we should just rename this to "managed-schema" in svn - the ref guide 
> explains the bootstrapping
> * the effective schema as it currently stands includes a bunch of copyFields 
> & dynamicFields that are taken wholesale from the techproducts example
> ** some of these might make sense to keep in a general example (ie: "\*_txt") 
> but in general they should all be reviewed.
> ** a bunch of this cruft is actually commented out already, but anything we 
> don't want to keep should be removed to eliminate confusion
> * SOLR-6471 added an explicit "_text" field as the default and made it a 
> copyField catchall (ie: "\*")
> ** the ref guide schema API example responses need to reflect the existence 
> of this field: 
> https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
> ** we should draw heavy attention to this field+copyField -- both with a "/!\ 
> NOTE" in the refguide and call it out in solrconfig.xml & "managed-schema" 
> file comments since people who start with these configs may be surprised and 
> wind up with a very bloated index






[jira] [Commented] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271790#comment-14271790
 ] 

Mark Miller commented on SOLR-6931:
---

I'll commit this one soon so it is in for 5.0.

> We should do a limited retry when using HttpClient.
> ---
>
> Key: SOLR-6931
> URL: https://issues.apache.org/jira/browse/SOLR-6931
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6931.patch
>
>







[jira] [Commented] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271776#comment-14271776
 ] 

Anshum Gupta commented on SOLR-6496:


You're right. I'll let you get to it as I'm not sure if I'd get time to work on 
it today.
If you're unable to get to it, I'll try handling it over the weekend.

About the approaches: ideally it'd be good to refactor, but that certainly is 
more invasive. With the RC a week away, I'd prefer the less invasive change at 
this point and work on refactoring (and adding tests) separately for the next 
release.

> LBHttpSolrServer should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, you should no longer perform retries and allow the exception to 
> bubble up and allow the request to either error out or return partial results 
> per the shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)
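The fix described above amounts to checking the elapsed time against the timeAllowed budget before each retry attempt. A minimal sketch of that cutoff (the class and method names here are illustrative assumptions, not the actual LBHttpSolrServer code):

```java
// Sketch (not Solr's LBHttpSolrServer): stop retrying once the elapsed time
// exceeds the request's timeAllowed budget, so the exception can bubble up.
public class TimeAllowedRetry {

    /** Returns true if another retry still fits inside the time budget. */
    static boolean retryAllowed(long startNanos, long nowNanos, long timeAllowedMs) {
        if (timeAllowedMs <= 0) {
            return true; // no budget configured: retry behavior unchanged
        }
        long elapsedMs = (nowNanos - startNanos) / 1_000_000L;
        return elapsedMs < timeAllowedMs;
    }

    public static void main(String[] args) {
        long start = 0L;
        // 400ms elapsed against a 500ms budget: one more retry is fine
        System.out.println(retryAllowed(start, 400_000_000L, 500));  // true
        // 600ms elapsed: stop retrying, error out or return partial results
        System.out.println(retryAllowed(start, 600_000_000L, 500));  // false
    }
}
```

The real change would thread this check through the retry loop so that `shards.tolerant` handling downstream decides between an error and partial results.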






[jira] [Created] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond

2015-01-09 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6944:
-

 Summary: ReplicationFactorTest and HttpPartitionTest both fail 
with org.apache.http.NoHttpResponseException: The target server failed to 
respond
 Key: SOLR-6944
 URL: https://issues.apache.org/jira/browse/SOLR-6944
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller


Our only recourse is to do a client side retry on such errors. I have some 
retry code for this from SOLR-4509 that I will pull over here.






Re: need help with Solr on HDFS ref guide page for 5.0

2015-01-09 Thread Mark Miller
Refactoring of config files has broken all of that doc.

- Mark
On Fri Jan 09 2015 at 12:54:56 PM Chris Hostetter 
wrote:

>
> As some folks may have noticed, i started a page a while back to try and
> audit all of the "big" things that needed to be changed in the ref guide related
> to the new "bin/solr" and example->server changes that will be coming in
> 5.0 (above and beyond the usual "new feature in changes, so let's add
> it to the doc" work that happens just before release)...
>
> https://cwiki.apache.org/confluence/display/solr/INTERNAL+-+5.0+Ref+Guide+
> Overhaul+Notes
>
> A few people have already been helping out on the edits needed, but one
> area i'd like to ask for particular help is from someone (*cough* miller)
> who understands the HdfsDirectoryFactory.  The wiki page about using Solr
> with HDFS needs to have its examples updated to play nicely with
> bin/solr and the existing configsets ...
>
> Running Solr on HDFS
>
> * still has references to start.jar – needs to explain how to do this using
> bin/solr
> * has references to "the default configuration files" ... but i think
>this only applies to the techproducts example? the basic_configs &
>data_driven_configs were simplified to not include the kitchen sink of
>every possible option, so these may not work there.
>
>
> If you could help out with testing & editing this page (or post comments
> with suggested edits based on testing) that would be appreciated...
>
> https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS
>
>
>
>
> -Hoss
> http://www.lucidworks.com/
>


[jira] [Created] (SOLR-6943) HdfsDirectoryFactory should fall back to system props for most of its config if it is not found in solrconfig.xml.

2015-01-09 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6943:
-

 Summary: HdfsDirectoryFactory should fall back to system props for 
most of its config if it is not found in solrconfig.xml.
 Key: SOLR-6943
 URL: https://issues.apache.org/jira/browse/SOLR-6943
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk


The new server and config sets have undone the work I did to make HDFS easy out 
of the box. Rather than count on config for that, we should just allow most of 
this config to be specified at the system property level. This improves the 
global cache config situation as well.
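The proposed fallback amounts to a three-step lookup: explicit solrconfig.xml value first, then a system property, then a hard-coded default. A sketch under those assumptions (the `getConfigValue` name and the Map standing in for parsed solrconfig.xml are illustrative, not Solr's API):

```java
import java.util.Map;

// Sketch of the proposed lookup order for HdfsDirectoryFactory settings:
// solrconfig.xml value -> system property (-Dkey=value) -> default.
public class SysPropFallback {

    static String getConfigValue(Map<String, String> solrconfig, String key, String def) {
        String v = solrconfig.get(key);             // explicit solrconfig.xml setting wins
        if (v == null) {
            v = System.getProperty(key);            // fall back to -Dkey=value
        }
        return v != null ? v : def;                 // finally a hard-coded default
    }

    public static void main(String[] args) {
        System.setProperty("solr.hdfs.home", "hdfs://nn:8020/solr");
        // no solrconfig entry, so the system property is used
        System.out.println(getConfigValue(Map.of(), "solr.hdfs.home", "file:///tmp"));
    }
}
```

With this ordering a single `-Dsolr.hdfs.home=...` on the command line could configure every core, without editing each configset.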






[jira] [Created] (SOLR-6942) DistribDocExpirationUpdateProcessorTest failure.

2015-01-09 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6942:
-

 Summary: DistribDocExpirationUpdateProcessorTest failure.
 Key: SOLR-6942
 URL: https://issues.apache.org/jira/browse/SOLR-6942
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller


http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1873/






[jira] [Commented] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271699#comment-14271699
 ] 

Mark Miller commented on SOLR-6367:
---

[~praneeth.varma], but we hflush or hsync on the stream that FastOutputStream 
wraps right after.

The non-HDFS TransactionLog also only calls flushBuffer, unless you configure 
it to then fsync.

It may be necessary to call flush instead of flushBuffer here, but I don't yet 
understand why that would be.
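The flushBuffer vs. flush distinction under discussion can be illustrated with a toy buffered stream (this is not Solr's FastOutputStream; the class name and buffer size are assumptions): flushBuffer only empties the local buffer into the wrapped stream, while flush additionally flushes the wrapped stream itself, which on HDFS is where hflush/hsync durability would come from.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Toy illustration only: a buffered writer where flushBuffer() pushes buffered
// bytes into the wrapped stream, and flush() also flushes the wrapped stream.
public class BufferedLogStream {
    private final OutputStream wrapped;
    private final byte[] buf = new byte[8192];
    private int pos;

    public BufferedLogStream(OutputStream wrapped) { this.wrapped = wrapped; }

    // Simplified: assumes each record is smaller than the buffer.
    public void write(byte[] b) throws IOException {
        if (pos + b.length > buf.length) flushBuffer();
        System.arraycopy(b, 0, buf, pos, b.length);
        pos += b.length;
    }

    /** Push buffered bytes down, but do NOT flush the wrapped stream. */
    public void flushBuffer() throws IOException {
        wrapped.write(buf, 0, pos);
        pos = 0;
    }

    /** Empty our buffer AND flush the wrapped stream (hflush/hsync analogue). */
    public void flush() throws IOException {
        flushBuffer();
        wrapped.flush();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedLogStream log = new BufferedLogStream(sink);
        log.write("doc1".getBytes());
        System.out.println(sink.size());   // 0 - nothing reached the sink yet
        log.flushBuffer();
        System.out.println(sink.size());   // 4 - buffer drained to the sink
    }
}
```

If a hard kill lands after flushBuffer but before the wrapped stream's own flush/hflush, bytes can sit in the lower layer's buffers, which matches the empty-tlog symptom described in SOLR-6367.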

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Setup Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> 
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false"; -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generate empty tlog file (0 file size, the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate Solr crash by killing the process with -9 option.
> 5) restart the Solr process. Observation is that uncommitted document are
> not replayed, files in tlog directory are cleaned up. Hence uncommitted
> document(s) is lost.
> Am I missing anything or this is a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without -9 option),
> non-empty tlog file is generated and after re-starting Solr, uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on local file system), this issue is
> not observed either.
> {panel}






[jira] [Commented] (SOLR-6423) HdfsCollectionsAPIDistributedZkTest test fail: Could not find new collection awholynewcollection_1

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271678#comment-14271678
 ] 

Mark Miller commented on SOLR-6423:
---

I don't think it is clear if this is addressed yet or not. Finding out takes 
running the nightly tests many, many times. Once I get my local Jenkins box 
fully up to speed again, I'll pay some more attention to this test.

> HdfsCollectionsAPIDistributedZkTest test fail: Could not find new collection 
> awholynewcollection_1
> --
>
> Key: SOLR-6423
> URL: https://issues.apache.org/jira/browse/SOLR-6423
> Project: Solr
>  Issue Type: Test
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> {noformat}
> java.lang.AssertionError: Could not find new collection awholynewcollection_1
>   at 
> __randomizedtesting.SeedInfo.seed([655D020D02309D33:E4BB8C15756FFD0F]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkForCollection(AbstractFullDistribZkTestBase.java:1642)
>   at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:723)
>   at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:203)
> {noformat}






[jira] [Commented] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0

2015-01-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271676#comment-14271676
 ] 

Joel Bernstein commented on SOLR-6581:
--

Very close to committing this now. I'll do some more manual testing and if all 
looks good I plan to commit in the next day or two.

> Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
> ---
>
> Key: SOLR-6581
> URL: https://issues.apache.org/jira/browse/SOLR-6581
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, renames.diff
>
>
> *Background*
> The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
> are optimized to work with a top level FieldCache. Top level FieldCaches have 
> a very fast docID to top-level ordinal lookup. Fast access to the top-level 
> ordinals allows for very high performance field collapsing on high 
> cardinality fields. 
> LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
> FieldCache is no longer in regular use. Instead all top level caches are 
> accessed through MultiDocValues. 
> There are some major advantages of using the MultiDocValues rather than a top 
> level FieldCache. But there is one disadvantage: the lookup from docId to 
> top-level ordinals is slower using MultiDocValues.
> My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
> to use MultiDocValues, the performance drop is around 100%.  For some use 
> cases this performance drop is a blocker.
> *What About Faceting?*
> String faceting also relies on the top level ordinals. Is faceting 
> performance affected also? My testing has shown that the faceting performance 
> is affected much less than collapsing. 
> One possible reason for this may be that field collapsing is memory bound and 
> faceting is not. So the additional memory accesses needed for MultiDocValues 
> affect field collapsing much more than faceting.
> *Proposed Solution*
> The proposed solution is to have the default Collapse and Expand algorithm 
> use MultiDocValues, but to provide an option to use a top level FieldCache if 
> the performance of MultiDocValues is a blocker.
> The proposed mechanism for switching to the FieldCache would be a new "hint" 
> parameter. If the hint parameter is set to "FAST_QUERY" then the top-level 
> FieldCache would be used for both Collapse and Expand.
> Example syntax:
> {code}
> fq={!collapse field=x hint=FAST_QUERY}
> {code}
>  
>  
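The slower docId-to-ordinal path described above can be illustrated with a toy ordinal map built from per-segment sorted term dictionaries (illustrative code, not Lucene's actual OrdinalMap/MultiDocValues API): each lookup resolves a per-segment ordinal and then maps it through an extra table to a global ordinal, an indirection that a single top-level cache avoids.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

// Toy analogue of a segment-ordinal -> global-ordinal map. In a top-level
// cache the docId resolves straight to a global ordinal; here every lookup
// pays one extra table hop, which is the cost discussed in the issue.
public class GlobalOrdSketch {

    /** Build segmentOrd -> globalOrd tables from per-segment sorted term lists. */
    static int[][] buildOrdinalMap(String[][] segmentTerms) {
        TreeSet<String> all = new TreeSet<>();
        for (String[] seg : segmentTerms) all.addAll(Arrays.asList(seg));
        List<String> global = new ArrayList<>(all);   // merged, globally sorted dictionary
        int[][] map = new int[segmentTerms.length][];
        for (int s = 0; s < segmentTerms.length; s++) {
            map[s] = new int[segmentTerms[s].length];
            for (int o = 0; o < segmentTerms[s].length; o++) {
                map[s][o] = global.indexOf(segmentTerms[s][o]);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        String[][] segs = { {"apple", "cherry"}, {"banana", "cherry"} };
        int[][] ordMap = buildOrdinalMap(segs);
        // segment 1, segment-ordinal 1 ("cherry") -> global ordinal 2
        System.out.println(ordMap[1][1]);   // prints 2
    }
}
```

For collapsing, this per-document hop happens once per collected hit, which is why a memory-bound operation like field collapsing feels it more than faceting.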






[jira] [Updated] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2015-01-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6846:
--
Fix Version/s: Trunk
   5.0

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6846.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}}, the code will deadlock because 
> {{cache.wait()}} is synchronized on the same monitor object.
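The hazard described is a classic missed notification, and the standard remedy is to re-check the completion condition in a loop inside the same synchronized block before waiting, so a notify that fires before the wait cannot strand the waiter. A self-contained sketch of the pattern (class and method names are illustrative, not the UninvertedField code):

```java
// Guarded-wait pattern: the flag and the wait/notify pair share one monitor,
// and the waiter re-checks the flag in a loop before calling wait().
public class GuardedWait {
    private final Object monitor = new Object();
    private boolean ready;   // guarded by `monitor`

    /** Producer side: publish the result and wake any waiters. */
    public void markReady() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();
        }
    }

    /** Consumer side: safe even if markReady() already ran. */
    public void awaitReady() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {      // loop guards against missed or spurious wakeups
                monitor.wait();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        GuardedWait g = new GuardedWait();
        g.markReady();      // notifyAll happens BEFORE anyone waits...
        g.awaitReady();     // ...yet this returns immediately, no deadlock
        System.out.println("done");   // prints done
    }
}
```

If the waiter instead checked the flag once outside the monitor and then waited unconditionally, a notifyAll landing between the check and the wait would be lost and the waiter would block forever, which matches the report.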






[jira] [Updated] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0

2015-01-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Attachment: SOLR-6581.patch

Added more error handling and removed all debugging/timing code.

> Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
> ---
>
> Key: SOLR-6581
> URL: https://issues.apache.org/jira/browse/SOLR-6581
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, renames.diff
>
>
> *Background*
> The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
> are optimized to work with a top level FieldCache. Top level FieldCaches have 
> a very fast docID to top-level ordinal lookup. Fast access to the top-level 
> ordinals allows for very high performance field collapsing on high 
> cardinality fields. 
> LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
> FieldCache is no longer in regular use. Instead all top level caches are 
> accessed through MultiDocValues. 
> There are some major advantages of using the MultiDocValues rather than a top 
> level FieldCache. But there is one disadvantage: the lookup from docId to 
> top-level ordinals is slower using MultiDocValues.
> My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
> to use MultiDocValues, the performance drop is around 100%.  For some use 
> cases this performance drop is a blocker.
> *What About Faceting?*
> String faceting also relies on the top level ordinals. Is faceting 
> performance affected also? My testing has shown that the faceting performance 
> is affected much less than collapsing. 
> One possible reason for this may be that field collapsing is memory bound and 
> faceting is not. So the additional memory accesses needed for MultiDocValues 
> affect field collapsing much more than faceting.
> *Proposed Solution*
> The proposed solution is to have the default Collapse and Expand algorithm 
> use MultiDocValues, but to provide an option to use a top level FieldCache if 
> the performance of MultiDocValues is a blocker.
> The proposed mechanism for switching to the FieldCache would be a new "hint" 
> parameter. If the hint parameter is set to "FAST_QUERY" then the top-level 
> FieldCache would be used for both Collapse and Expand.
> Example syntax:
> {code}
> fq={!collapse field=x hint=FAST_QUERY}
> {code}
>  
>  






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1873 - Failure!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1873/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([9959D59D7B597EB4:18BF5B850C061E88]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:835)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1454)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:69)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgn

[jira] [Updated] (LUCENE-6172) Improve the in-order / out-of-order collection decision process

2015-01-09 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6172:
-
Attachment: LUCENE-6172.patch

Here is an (in-progress) patch which should give an idea of what I'm trying to 
do. The interesting bits are mainly in Top(Docs|Field)Collector, IndexSearcher 
and BooleanWeight. There is one failing lucene/facets test and a couple of 
failing solr tests that I still need to understand.

> Improve the in-order / out-of-order collection decision process
> ---
>
> Key: LUCENE-6172
> URL: https://issues.apache.org/jira/browse/LUCENE-6172
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6172.patch
>
>
> Today the logic is the following:
>  - IndexSearcher looks if the weight can score out-of-order
>  - Depending on the value it creates the appropriate top docs/field collector
> I think this has several issues:
>  - Only IndexSearcher can actually make the decision correctly, and it only 
> works for top docs/field collectors. If you want to make a multi collector in 
> order to have both facets and top docs, then you're clueless about whether 
> you should create a top docs collector that supports out-of-order collection
>  - It is quite fragile: you need to make sure that 
> Weight.scoresDocsOutOfOrder and Weight.bulkScorer agree on when they can 
> score out-of-order. Some queries like BooleanQuery duplicate the logic and 
> other queries like FilteredQuery just always return true to avoid complexity. 
> This is inefficient as this means that IndexSearcher will create a collector 
> that supports out-of-order collection while the common case actually scores 
> documents in order (leap frog between the query and the filter).
> Instead I would like to take advantage of the new collection API to make 
> out-of-order scoring an implementation detail of the bulk scorers. My current 
> idea is as follows:
>  - remove Weight.scoresDocsOutOfOrder
>  - change Collector.getLeafCollector(LeafReaderContext) to 
> Collector.getLeafCollector(LeafReaderContext, boolean canScoreOutOfOrder)
> This new boolean in Collector.getLeafCollector tells the collector that the 
> scorer supports out-of-order scoring. So by returning a leaf collector that 
> supports out-of-order collection, things will be faster.
> The new logic would be the following. First Weights cannot tell whether they 
> support out-of-order scoring or not. However when a weight knows it supports 
> out-of-order scoring, it will pass canScoreOutOfOrder=true when getting the 
> leaf collector. If the returned collector accepts documents out of order, 
> then the weight will return an out-of order scorer. Otherwise, an in-order 
> scorer is returned.






[jira] [Created] (LUCENE-6172) Improve the in-order / out-of-order collection decision process

2015-01-09 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6172:


 Summary: Improve the in-order / out-of-order collection decision 
process
 Key: LUCENE-6172
 URL: https://issues.apache.org/jira/browse/LUCENE-6172
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, Trunk


Today the logic is the following:

 - IndexSearcher looks if the weight can score out-of-order
 - Depending on the value it creates the appropriate top docs/field collector

I think this has several issues:
 - Only IndexSearcher can actually make the decision correctly, and it only 
works for top docs/field collectors. If you want to make a multi collector in 
order to have both facets and top docs, then you're clueless about whether you 
should create a top docs collector that supports out-of-order collection
 - It is quite fragile: you need to make sure that Weight.scoresDocsOutOfOrder 
and Weight.bulkScorer agree on when they can score out-of-order. Some queries 
like BooleanQuery duplicate the logic and other queries like FilteredQuery just 
always return true to avoid complexity. This is inefficient as this means that 
IndexSearcher will create a collector that supports out-of-order collection 
while the common case actually scores documents in order (leap frog between the 
query and the filter).

Instead I would like to take advantage of the new collection API to make 
out-of-order scoring an implementation detail of the bulk scorers. My current 
idea is as follows:
 - remove Weight.scoresDocsOutOfOrder
 - change Collector.getLeafCollector(LeafReaderContext) to 
Collector.getLeafCollector(LeafReaderContext, boolean canScoreOutOfOrder)

This new boolean in Collector.getLeafCollector tells the collector that the 
scorer supports out-of-order scoring. So by returning a leaf collector that 
supports out-of-order collection, things will be faster.

The new logic would be the following. First Weights cannot tell whether they 
support out-of-order scoring or not. However when a weight knows it supports 
out-of-order scoring, it will pass canScoreOutOfOrder=true when getting the 
leaf collector. If the returned collector accepts documents out of order, then 
the weight will return an out-of order scorer. Otherwise, an in-order scorer is 
returned.
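The decision flow described above can be sketched as a small, self-contained Java program. These are hypothetical stand-in interfaces and names (LeafCollector, Collector, chooseScorer, collectorAccepting), not Lucene's actual classes; the point is only to show the proposed handshake: the weight passes canScoreOutOfOrder into getLeafCollector, and the returned leaf collector's answer decides which scorer the weight builds.

```java
// Hypothetical, simplified stand-ins for the proposed API -- not Lucene code.
interface LeafCollector {
    boolean acceptsDocsOutOfOrder();
    void collect(int doc);
}

interface Collector {
    // Proposed signature: the caller passes canScoreOutOfOrder so the
    // collector knows out-of-order collection is on the table.
    LeafCollector getLeafCollector(Object leafContext, boolean canScoreOutOfOrder);
}

public class OutOfOrderSketch {

    // Stand-in collector factory: returns a leaf collector that does or
    // does not accept out-of-order documents.
    static Collector collectorAccepting(final boolean accepts) {
        return (ctx, canScoreOutOfOrder) -> new LeafCollector() {
            @Override public boolean acceptsDocsOutOfOrder() { return accepts; }
            @Override public void collect(int doc) { /* no-op for the sketch */ }
        };
    }

    // A weight that supports out-of-order scoring passes true; if the
    // returned leaf collector accepts out-of-order docs, the weight can
    // pick the (faster) out-of-order scorer, otherwise it stays in order.
    static String chooseScorer(Collector c, boolean weightSupportsOutOfOrder) {
        LeafCollector leaf = c.getLeafCollector(null, weightSupportsOutOfOrder);
        return (weightSupportsOutOfOrder && leaf.acceptsDocsOutOfOrder())
                ? "out-of-order" : "in-order";
    }

    public static void main(String[] args) {
        System.out.println(chooseScorer(collectorAccepting(true), true));   // out-of-order
        System.out.println(chooseScorer(collectorAccepting(false), true));  // in-order
        System.out.println(chooseScorer(collectorAccepting(true), false));  // in-order
    }
}
```

Note how Weight.scoresDocsOutOfOrder disappears entirely in this model: the only place the out-of-order question is asked is at getLeafCollector time, which keeps the weight and the bulk scorer from ever disagreeing.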






[jira] [Commented] (SOLR-6876) Remove unused legacy scripts.conf

2015-01-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271636#comment-14271636
 ] 

Hoss Man commented on SOLR-6876:


bq. I think that whole section refers to the stuff that no longer ships with 
Solr. I don't know as of when the shipping it stopped.

pruned now.

> Remove unused legacy scripts.conf
> -
>
> Key: SOLR-6876
> URL: https://issues.apache.org/jira/browse/SOLR-6876
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.2, 5.0, Trunk
>Reporter: Alexandre Rafalovitch
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6876.patch
>
>
> Some of the example collections include *scripts.conf* in the *conf* 
> directory. It is not used by anything in the distribution and is somehow left 
> over from the Solr 1.x legacy days.
> It should be possible to safe delete it to avoid confusing users trying to 
> understand what different files actually do.






[jira] [Commented] (SOLR-6941) DistributedQueue#containsTaskWithRequestId can fail with NPE.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271632#comment-14271632
 ] 

Mark Miller commented on SOLR-6941:
---

{noformat}
   [junit4] ERROR   81.9s J0 | TestRebalanceLeaders.testDistribSearch <<<
   [junit4]> Throwable #1: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:33947/_d/iz, 
http://127.0.0.1:41117/_d/iz, http://127.0.0.1:33021/_d/iz, 
http://127.0.0.1:44859/_d/iz, http://127.0.0.1:46670/_d/iz]
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([9EE8FA44BCDCD049:1F0E745CCB83B075]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:332)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
   [junit4]>at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:292)
   [junit4]>at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:119)
   [junit4]>at 
org.apache.solr.cloud.TestRebalanceLeaders.doTest(TestRebalanceLeaders.java:85)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41117/_d/iz: java.lang.NullPointerException
   [junit4]>at 
org.apache.solr.common.cloud.ZkStateReader.fromJSON(ZkStateReader.java:140)
   [junit4]>at 
org.apache.solr.common.cloud.ZkNodeProps.load(ZkNodeProps.java:92)
   [junit4]>at 
org.apache.solr.cloud.DistributedQueue.containsTaskWithRequestId(DistributedQueue.java:125)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.overseerCollectionQueueContains(CollectionsHandler.java:688)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:713)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:693)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.rejoinElection(CollectionsHandler.java:488)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.insurePreferredIsLeader(CollectionsHandler.java:403)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.handleBalanceLeaders(CollectionsHandler.java:310)
   [junit4]>at 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:276)
   [junit4]>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
   [junit4]>at 
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:740)
   [junit4]>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:266)
   [junit4]>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:204)
   [junit4]>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
   [junit4]>at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:138)
   [junit4]>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
   [junit4]>at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
   [junit4]>at 
org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
   [junit4]>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
   [junit4]>at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
   [junit4]>at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
   [junit4]>at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
   [junit4]>at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
   [junit4]>at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   [junit4]>at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
   [junit4]>at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:

[jira] [Commented] (SOLR-4408) Server hanging on startup

2015-01-09 Thread Daniel Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271629#comment-14271629
 ] 

Daniel Davis commented on SOLR-4408:


Francois's comment matches the hang I observed, also correlated with 
spellcheckers.
{quote}
I made some more tests. I had set spellCheck with collate in /select 
requestHandler, if I remove it everything works fine.
Maybe it is the spellCheck that causes the problem with the firstSearcher.
{quote}

I am fairly new to Solr, so I had to do some reading before trying 
things.   I tried killing the JVM, removing write.lock, and restarting the 
JVM.   That did not work - the searchHandler deadlocked again.   I tried 
editing core.properties to use a new, empty data directory; this worked fine.   
I tried restarting with the spellchecker commented out and the original 
directory.   This also worked fine.

Running 4.10.2

> Server hanging on startup
> -
>
> Key: SOLR-4408
> URL: https://issues.apache.org/jira/browse/SOLR-4408
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
> Environment: OpenJDK 64-Bit Server VM (23.2-b09 mixed mode)
> Tomcat 7.0
> Eclipse Juno + WTP
>Reporter: Francois-Xavier Bonnet
> Attachments: patch-4408.txt
>
>
> While starting, the server hangs indefinitely. Everything works fine when I 
> first start the server with no index created yet but if I fill the index then 
> stop and start the server, it hangs. Could it be a lock that is never 
> released?
> Here is what I get in a full thread dump:
> 2013-02-06 16:28:52
> Full thread dump OpenJDK 64-Bit Server VM (23.2-b09 mixed mode):
> "searcherExecutor-4-thread-1" prio=10 tid=0x7fbdfc16a800 nid=0x42c6 in 
> Object.wait() [0x7fbe0ab1]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xc34c1c48> (a java.lang.Object)
>   at java.lang.Object.wait(Object.java:503)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1492)
>   - locked <0xc34c1c48> (a java.lang.Object)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1312)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1247)
>   at 
> org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:94)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:213)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:112)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:203)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:180)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
>   at 
> org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:64)
>   at org.apache.solr.core.SolrCore$5.call(SolrCore.java:1594)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> "coreLoadExecutor-3-thread-1" prio=10 tid=0x7fbe04194000 nid=0x42c5 in 
> Object.wait() [0x7fbe0ac11000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xc34c1c48> (a java.lang.Object)
>   at java.lang.Object.wait(Object.java:503)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1492)
>   - locked <0xc34c1c48> (a java.lang.Object)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1312)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1247)
>   at 
> org.apache.solr.handler.ReplicationHandler.getIndexVersion(ReplicationHandler.java:495)
>   at 
> org.apache.solr.handler.ReplicationHandler.getStatistics(ReplicationHandler.java:518)
>   at 
> org.apache.solr.core.JmxMonitoredMap$SolrDynamicMBean.getMBeanInfo(JmxMonitoredMap.java:232)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanSer

[jira] [Commented] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271627#comment-14271627
 ] 

Mark Miller commented on SOLR-6932:
---

I think all of these types of things should be made Closeable - including SolrJ 
clients for 5.0 (rather than shutdown).



> All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
> in tests and regular code.
> 
>
> Key: SOLR-6932
> URL: https://issues.apache.org/jira/browse/SOLR-6932
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6932.patch
>
>







[jira] [Commented] (SOLR-6917) TestDynamicLoading fails frequently.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271622#comment-14271622
 ] 

Mark Miller commented on SOLR-6917:
---

bq. isn't this already being tracked in SOLR-6801?

Given how frequently this fails in my local runs, I'd prefer it have a dedicated 
issue to track it so it can be BadApple'd if not fixed.

> TestDynamicLoading fails frequently.
> 
>
> Key: SOLR-6917
> URL: https://issues.apache.org/jira/browse/SOLR-6917
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Noble Paul
>Priority: Minor
>
> most recent failure:
> {noformat}
>[junit4] FAILURE 39.7s J5 | TestDynamicLoading.testDistribSearch <<<
>[junit4]> Throwable #1: java.lang.AssertionError: New version of class 
> is not loaded {
>[junit4]>   "responseHeader":{
>[junit4]> "status":404,
>[junit4]> "QTime":2},
>[junit4]>   "error":{
>[junit4]> "msg":"no such blob or version available: test/2",
>[junit4]> "code":404}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B49634A982DC7AFE:3570BAB1F5831AC2]:0)
>[junit4]>  at 
> org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:154)
>[junit4]>  at 
> org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Commented] (SOLR-6917) TestDynamicLoading fails frequently.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271618#comment-14271618
 ] 

Mark Miller commented on SOLR-6917:
---

Another fail I see:   

{noformat} [junit4]> Throwable #1: java.lang.AssertionError: Could not 
successfully add blob after 150 attempts. Expecting 2 items. time elapsed 
15.587  output  for url is {
   [junit4]>   "responseHeader":{
   [junit4]> "status":0,
   [junit4]> "QTime":1},
   [junit4]>   "response":{
   [junit4]> "numFound":1,
   [junit4]> "start":0,
   [junit4]> "docs":[{
   [junit4]> "id":"test/1",
   [junit4]> "md5":"9cea0ff5afa8f603388031a0ae1f4a8d",
   [junit4]> "blobName":"test",
   [junit4]> "version":1,
   [junit4]> "timestamp":"2015-01-09T18:01:16.014Z",
   [junit4]> "size":5325}]}}
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([1F266CF75BC3FAE5:9EC0E2EF2C9C9AD9]:0)
   [junit4]>at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:150)
   [junit4]>at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:114)
   [junit4]>at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:70)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
   [junit4]>at java.lang.Thread.run(Thread.java:745)

{noformat} 

> TestDynamicLoading fails frequently.
> 
>
> Key: SOLR-6917
> URL: https://issues.apache.org/jira/browse/SOLR-6917
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Noble Paul
>Priority: Minor
>
> most recent failure:
> {noformat}
>[junit4] FAILURE 39.7s J5 | TestDynamicLoading.testDistribSearch <<<
>[junit4]> Throwable #1: java.lang.AssertionError: New version of class 
> is not loaded {
>[junit4]>   "responseHeader":{
>[junit4]> "status":404,
>[junit4]> "QTime":2},
>[junit4]>   "error":{
>[junit4]> "msg":"no such blob or version available: test/2",
>[junit4]> "code":404}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B49634A982DC7AFE:3570BAB1F5831AC2]:0)
>[junit4]>  at 
> org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:154)
>[junit4]>  at 
> org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Commented] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271609#comment-14271609
 ] 

Tomás Fernández Löbbe commented on SOLR-6932:
-

+1
I'm wondering if SolrClient should be made Closeable. That would let tools warn 
about resource leaks.
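
As a sketch of what a Closeable client buys here: try-with-resources guarantees close() runs even on exceptions, and IDEs/static analysis can warn when an AutoCloseable is never closed. FakeSolrClient below is an invented stand-in for illustration, not the real SolrJ class or its API.

```java
// Hypothetical sketch: if SolrJ clients implemented AutoCloseable/Closeable
// (instead of a bespoke shutdown() method), callers could use
// try-with-resources. FakeSolrClient is an invented stand-in class.
public class CloseableClientSketch {

    static class FakeSolrClient implements AutoCloseable {
        boolean closed = false;
        String query(String q) { return "ok:" + q; }
        // In a real client this would shut down the connection manager etc.
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeSolrClient ref;
        try (FakeSolrClient client = new FakeSolrClient()) {
            System.out.println(client.query("*:*"));
            ref = client;
        } // close() runs here automatically, even if query() had thrown
        System.out.println("closed=" + ref.closed); // closed=true
    }
}
```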

> All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
> in tests and regular code.
> 
>
> Key: SOLR-6932
> URL: https://issues.apache.org/jira/browse/SOLR-6932
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6932.patch
>
>







need help with Solr on HDFS ref guide page for 5.0

2015-01-09 Thread Chris Hostetter


As some folks may have noticed, I started a page a while back to try and 
audit all of the "big" things that need to change in the ref guide related 
to the new "bin/solr" and example->server changes that will be coming in 
5.0 (above and beyond the usual "new feature in CHANGES, so let's add 
it to the doc" work that happens just before release)...


https://cwiki.apache.org/confluence/display/solr/INTERNAL+-+5.0+Ref+Guide+Overhaul+Notes

A few people have already been helping out on the edits needed, but one 
area I'd like to ask for particular help with is from someone (*cough* miller) 
who understands the HdfsDirectoryFactory.  The wiki page about using Solr 
with HDFS needs to have its examples updated to play nicely with 
bin/solr and the existing configsets ...


Running Solr on HDFS

* still has references to start.jar – needs to explain how to do this using 
bin/solr
* has references to "the default configuration files" ... but I think
  this only applies to the techproducts example? The basic_configs &
  data_driven_configs were simplified to not include the kitchen sink of
  every possible option, so these may not work there.


If you could help out with testing & editing this page (or post comments 
with suggested edits based on testing), that would be appreciated...


https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS




-Hoss
http://www.lucidworks.com/


[jira] [Commented] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271591#comment-14271591
 ] 

ASF subversion and git services commented on SOLR-6932:
---

Commit 1650612 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650612 ]

SOLR-6932: All HttpClient ConnectionManagers and SolrJ clients should always be 
shutdown in tests and regular code.

> All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
> in tests and regular code.
> 
>
> Key: SOLR-6932
> URL: https://issues.apache.org/jira/browse/SOLR-6932
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6932.patch
>
>







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2454 - Still Failing

2015-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2454/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51340/fzps/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51340/fzps/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([73438113F54AC029:F2A50F0B8215A015]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRule

[jira] [Resolved] (SOLR-6872) Starting techproduct example fails on Trunk with "Version is too old" for PackedInts

2015-01-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6872.

Resolution: Cannot Reproduce

> Starting techproduct example fails on Trunk with "Version is too old" for 
> PackedInts
> 
>
> Key: SOLR-6872
> URL: https://issues.apache.org/jira/browse/SOLR-6872
> Project: Solr
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Alexandre Rafalovitch
>Priority: Blocker
> Fix For: Trunk
>
>
> {quote}
> bin/solr -e techproducts
> {quote}
> causes:
> {quote}
> ...
> Caused by: java.lang.ExceptionInInitializerError
>   at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.<init>(Lucene50PostingsWriter.java:111)
>   at org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsConsumer(Lucene50PostingsFormat.java:429)
>   at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:196)
>   at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:107)
>   at org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:112)
>   at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:420)
>   at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:504)
>   at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:614)
>   at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2714)
> 
> Caused by: java.lang.IllegalArgumentException: Version is too old, should be at least 2 (got 0)
>   at org.apache.lucene.util.packed.PackedInts.checkVersion(PackedInts.java:77)
>   at org.apache.lucene.util.packed.PackedInts.getDecoder(PackedInts.java:742)
> {quote}
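
The "Version is too old, should be at least 2 (got 0)" failure quoted above comes from a codec version range check. The following is a hypothetical sketch of that kind of check; names and constants are illustrative and PackedInts' real checkVersion is not reproduced here:

```java
// Illustrative version range check of the kind behind the reported error.
// VERSION_MIN / VERSION_CURRENT are assumed names, not Lucene's actual fields.
public class VersionCheckDemo {
    public static final int VERSION_MIN = 2;      // oldest format still readable
    public static final int VERSION_CURRENT = 2;  // newest format this code writes

    public static void checkVersion(int version) {
        if (version < VERSION_MIN) {
            throw new IllegalArgumentException(
                "Version is too old, should be at least " + VERSION_MIN
                + " (got " + version + ")");
        }
        if (version > VERSION_CURRENT) {
            throw new IllegalArgumentException(
                "Version is too new, should be at most " + VERSION_CURRENT
                + " (got " + version + ")");
        }
    }

    public static void main(String[] args) {
        checkVersion(2); // accepted
        try {
            checkVersion(0); // reproduces the shape of the reported failure
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A version of 0 read from an on-disk file usually indicates stale index files from an older format, which is consistent with the issue being resolved as "Cannot Reproduce" after a clean rebuild.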



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271574#comment-14271574
 ] 

ASF subversion and git services commented on SOLR-6932:
---

Commit 1650608 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1650608 ]

SOLR-6932: All HttpClient ConnectionManagers and SolrJ clients should always be 
shutdown in tests and regular code.

> All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
> in tests and regular code.
> 
>
> Key: SOLR-6932
> URL: https://issues.apache.org/jira/browse/SOLR-6932
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6932.patch
>
>







Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1316: POMs out of sync

2015-01-09 Thread Mark Miller
Hopefully this is now fixed and we won't have to manually clean anything
up. You can find background in the Jetty 9 update JIRA issue.

- Mark

On Fri Jan 09 2015 at 11:57:24 AM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1316/
>
> No tests ran.
>
> Build Log:
> [...truncated 39090 lines...]
>   [mvn] [INFO] --
> ---
>   [mvn] [INFO] --
> ---
>   [mvn] [ERROR] COMPILATION ERROR :
>   [mvn] [INFO] --
> ---
>
> [...truncated 696 lines...]
> BUILD FAILED
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542:
> The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:204:
> The following error occurred while executing this line:
> : Java returned: 1
>
> Total time: 20 minutes 28 seconds
> Build step 'Invoke Ant' marked build as failure
> Email was triggered for: Failure
> Sending email for trigger: Failure
>
>
>


[jira] [Commented] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271550#comment-14271550
 ] 

Alan Woodward commented on SOLR-6932:
-

+1

> All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
> in tests and regular code.
> 
>
> Key: SOLR-6932
> URL: https://issues.apache.org/jira/browse/SOLR-6932
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6932.patch
>
>







[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271547#comment-14271547
 ] 

Steve Rowe commented on SOLR-6913:
--

bq. The main issue is that OOTB, this is the default and it thus leaves us 
pretty underpowered for an OOTB experience. 

Okay, I'll buy it: since {{data_driven_schema_configs}} is the default 
configset when creating a core or a collection from {{bin/solr}}, broad field 
type and dynamic field support is called for.

In addition to putting back the geo-related and currency dynamic fields and 
field types, I'll put back the lang-specific field types and add (previously 
missing) dynamic fields for them.

> audit & cleanup "schema" in data_driven_schema_configs
> --
>
> Key: SOLR-6913
> URL: https://issues.apache.org/jira/browse/SOLR-6913
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6913-trim-schema.patch, 
> SOLR-6913-trim-schema.patch, SOLR-6913.patch
>
>
> the data_driven_schema_configs configset has some issues that should be 
> reviewed carefully & cleaned up...
> * currently includes a schema.xml file:
> ** this was previously part of the old example to show the automatic 
> "bootstrapping" of schema.xml -> managed-schema, but at this point it's just 
> kind of confusing
> ** we should just rename this to "managed-schema" in svn - the ref guide 
> explains the bootstrapping
> * the effective schema as it currently stands includes a bunch of copyFields 
> & dynamicFields that are taken wholesale from the techproducts example
> ** some of these might make sense to keep in a general example (ie: "\*_txt") 
> but in general they should all be reviewed.
> ** a bunch of this cruft is actually commented out already, but anything we 
> don't want to keep should be removed to eliminate confusion
> * SOLR-6471 added an explicit "_text" field as the default and made it a 
> copyField catchall (ie: "\*")
> ** the ref guide schema API example responses need to reflect the existence 
> of this field: 
> https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
> ** we should draw heavy attention to this field+copyField -- both with a "/!\ 
> NOTE" in the refguide and call it out in solrconfig.xml & "managed-schema" 
> file comments since people who start with these configs may be surprised and 
> wind up with a very bloated index
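
The catchall copyField described above can be sketched as a managed-schema fragment. This is illustrative only (field and type names follow the issue description, not the shipped data_driven_schema_configs files):

```xml
<!-- Illustrative managed-schema fragment: every field is copied into a
     single catchall "_text" field, which can noticeably bloat the index. -->
<field name="_text" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<!-- catchall: copy all source fields into _text -->
<copyField source="*" dest="_text"/>
```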






[jira] [Commented] (SOLR-6838) Bulk loading with the default of updateDocument blocks all indexing for long periods of time.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271536#comment-14271536
 ] 

Mark Miller commented on SOLR-6838:
---

Could be - I've related them.

> Bulk loading with the default of updateDocument blocks all indexing for long 
> periods of time.
> -
>
> Key: SOLR-6838
> URL: https://issues.apache.org/jira/browse/SOLR-6838
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mark Miller
>







[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1316: POMs out of sync

2015-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1316/

No tests ran.

Build Log:
[...truncated 39090 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 696 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:204:
 The following error occurred while executing this line:
: Java returned: 1

Total time: 20 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6932:
--
Attachment: SOLR-6932.patch

Patch with pertinent work from SOLR-4509. This breaks quickly, so I intend to 
commit relatively soon if anyone wants to help with some review.
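
The close-always pattern the issue title calls for can be sketched with try-with-resources. This is a minimal illustration only; "FakeClient" is a hypothetical stand-in for a SolrJ client or HttpClient connection manager, which are not available in this sketch:

```java
import java.io.Closeable;

// Illustrative close-always pattern for Closeable clients.
public class ShutdownDemo {
    public static int openClients = 0; // tracks leaks for demonstration

    public static class FakeClient implements Closeable {
        public FakeClient() { openClients++; }
        @Override public void close() { openClients--; }
    }

    public static void main(String[] args) {
        // try-with-resources guarantees close() runs even if the body throws,
        // so connection managers are never leaked across tests or requests.
        try (FakeClient client = new FakeClient()) {
            // ... issue requests ...
        }
        System.out.println("open clients after use: " + openClients);
    }
}
```

In code that predates try-with-resources, the equivalent is a close() call in a finally block.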

> All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
> in tests and regular code.
> 
>
> Key: SOLR-6932
> URL: https://issues.apache.org/jira/browse/SOLR-6932
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6932.patch
>
>







[jira] [Commented] (LUCENE-6119) Add auto-io-throttle to ConcurrentMergeScheduler

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271516#comment-14271516
 ] 

ASF subversion and git services commented on LUCENE-6119:
-

Commit 1650595 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1650595 ]

LUCENE-6119: must check merge for abort even when we are not rate limiting; 
don't wrap rate limiter when doing addIndexes (it's not abortable); don't leak 
file handle when wrapping

> Add auto-io-throttle to ConcurrentMergeScheduler
> 
>
> Key: LUCENE-6119
> URL: https://issues.apache.org/jira/browse/LUCENE-6119
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch, 
> LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch
>
>
> This method returns number of "incoming" bytes IW has written since it
> was opened, excluding merging.
> It tracks flushed segments, new commits (segments_N), incoming
> files/segments by addIndexes, newly written live docs / doc values
> updates files.
> It's an easy statistic for IW to track and should be useful to help
> applications more intelligently set defaults for IO throttling
> (RateLimiter).
> For example, an application that does hardly any indexing but finally
> triggered a large merge can afford to heavily throttle that large
> merge so it won't interfere with ongoing searches.
> But an application that's causing IW to write new bytes at 50 MB/sec
> must set a correspondingly higher IO throttling otherwise merges will
> clearly fall behind.
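
The statistic described above can be sketched as a bytes-written counter feeding a throttle heuristic. This is a hedged illustration under assumed names; the 50 MB/sec scenario, the floor of 5 MB/sec, and the 2x multiplier are illustrative, not Lucene's actual auto-io-throttle implementation:

```java
// Sketch: track "incoming" bytes written since open (flushed segments,
// commits, addIndexes, doc-values updates) and derive a merge IO budget
// from the observed ingest rate.
public class IngestRateDemo {
    private long bytesWritten;
    private final long openedNs = System.nanoTime();

    public void recordWrite(long bytes) { bytesWritten += bytes; }

    public double ingestMBPerSec() {
        double sec = Math.max(1e-9, (System.nanoTime() - openedNs) / 1e9);
        return bytesWritten / (1024.0 * 1024.0) / sec;
    }

    // Heavy ingest => give merges more IO so they keep up;
    // light ingest => throttle merges hard so searches are undisturbed.
    public double mergeThrottleMBPerSec() {
        return Math.max(5.0, 2.0 * ingestMBPerSec());
    }
}
```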






Re: [JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_25) - Build # 4300 - Still Failing!

2015-01-09 Thread Michael McCandless
I committed a fix; this was from LUCENE-6119.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Jan 9, 2015 at 7:32 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4300/
> Java: 32bit/jdk1.8.0_25 -client -XX:+UseParallelGC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback
>
> Error Message:
> MockDirectoryWrapper: cannot close: there are still open files: {_dx.fnm=1}
>
> Stack Trace:
> java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still open files: {_dx.fnm=1}
> at __randomizedtesting.SeedInfo.seed([C33B410169B5ABE0:251CAEEECD1AC09E]:0)
> at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:762)
> at org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads.closeDir(TestAddIndexes.java:727)
> at org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at java.lang.Thre

[jira] [Commented] (LUCENE-6119) Add auto-io-throttle to ConcurrentMergeScheduler

2015-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271509#comment-14271509
 ] 

ASF subversion and git services commented on LUCENE-6119:
-

Commit 1650594 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650594 ]

LUCENE-6119: must check merge for abort even when we are not rate limiting; 
don't wrap rate limiter when doing addIndexes (it's not abortable); don't leak 
file handle when wrapping

> Add auto-io-throttle to ConcurrentMergeScheduler
> 
>
> Key: LUCENE-6119
> URL: https://issues.apache.org/jira/browse/LUCENE-6119
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch, 
> LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch
>
>
> This method returns number of "incoming" bytes IW has written since it
> was opened, excluding merging.
> It tracks flushed segments, new commits (segments_N), incoming
> files/segments by addIndexes, newly written live docs / doc values
> updates files.
> It's an easy statistic for IW to track and should be useful to help
> applications more intelligently set defaults for IO throttling
> (RateLimiter).
> For example, an application that does hardly any indexing but finally
> triggered a large merge can afford to heavily throttle that large
> merge so it won't interfere with ongoing searches.
> But an application that's causing IW to write new bytes at 50 MB/sec
> must set a correspondingly higher IO throttling otherwise merges will
> clearly fall behind.






[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271499#comment-14271499
 ] 

Grant Ingersoll commented on SOLR-6913:
---

IOW, it's not about schemaless, it's about schema-later

> audit & cleanup "schema" in data_driven_schema_configs
> --
>
> Key: SOLR-6913
> URL: https://issues.apache.org/jira/browse/SOLR-6913
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6913-trim-schema.patch, 
> SOLR-6913-trim-schema.patch, SOLR-6913.patch
>
>
> the data_driven_schema_configs configset has some issues that should be 
> reviewed carefully & cleaned up...
> * currently includes a schema.xml file:
> ** this was previously part of the old example to show the automatic 
> "bootstrapping" of schema.xml -> managed-schema, but at this point it's just 
> kind of confusing
> ** we should just rename this to "managed-schema" in svn - the ref guide 
> explains the bootstrapping
> * the effective schema as it currently stands includes a bunch of copyFields 
> & dynamicFields that are taken wholesale from the techproducts example
> ** some of these might make sense to keep in a general example (ie: "\*_txt") 
> but in general they should all be reviewed.
> ** a bunch of this cruft is actually commented out already, but anything we 
> don't want to keep should be removed to eliminate confusion
> * SOLR-6471 added an explicit "_text" field as the default and made it a 
> copyField catchall (ie: "\*")
> ** the ref guide schema API example responses need to reflect the existence 
> of this field: 
> https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
> ** we should draw heavy attention to this field+copyField -- both with a "/!\ 
> NOTE" in the refguide and call it out in solrconfig.xml & "managed-schema" 
> file comments since people who start with these configs may be surprised and 
> wind up with a very bloated index






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 729 - Still Failing

2015-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/729/

5 tests failed.
REGRESSION:  
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([1E06DB145A0CCFC6:9FE0550C2D53AFFA]:0)
at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:192)
at org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:413)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:143)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.Th

[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271487#comment-14271487
 ] 

Grant Ingersoll commented on SOLR-6913:
---

bq. My thinking was that the schemaless example should be minimal. In 
particular, if we don't have a way for field types to be used (via 
(dynamic)field definitions or field guessing), why include them? If the user 
can add fields, they can add field types too.

The main issue is that OOTB, this is the default and it thus leaves us pretty 
underpowered for an OOTB experience.  Those Field Types have been in Solr for a 
long time and I think they hold up reasonably well, so I would vote for putting 
them back in.

I think the big difference is that Solr experts come at the situation from an 
edit-schema/config-first mindset, while new users come at data stores as "let me 
manipulate my data first and then harden it later."

> audit & cleanup "schema" in data_driven_schema_configs
> --
>
> Key: SOLR-6913
> URL: https://issues.apache.org/jira/browse/SOLR-6913
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6913-trim-schema.patch, 
> SOLR-6913-trim-schema.patch, SOLR-6913.patch
>
>
> the data_driven_schema_configs configset has some issues that should be 
> reviewed carefully & cleaned up...
> * currently includes a schema.xml file:
> ** this was previously part of the old example to show the automatic 
> "bootstrapping" of schema.xml -> managed-schema, but at this point it's just 
> kind of confusing
> ** we should just rename this to "managed-schema" in svn - the ref guide 
> explains the bootstrapping
> * the effective schema as it currently stands includes a bunch of copyFields 
> & dynamicFields that are taken wholesale from the techproducts example
> ** some of these might make sense to keep in a general example (ie: "\*_txt") 
> but in general they should all be reviewed.
> ** a bunch of this cruft is actually commented out already, but anything we 
> don't want to keep should be removed to eliminate confusion
> * SOLR-6471 added an explicit "_text" field as the default and made it a 
> copyField catchall (ie: "\*")
> ** the ref guide schema API example responses need to reflect the existence 
> of this field: 
> https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
> ** we should draw heavy attention to this field+copyField -- both with a "/!\ 
> NOTE" in the refguide and call it out in solrconfig.xml & "managed-schema" 
> file comments since people who start with these configs may be surprised and 
> wind up with a very bloated index






[jira] [Updated] (SOLR-6941) DistributedQueue#containsTaskWithRequestId can fail with NPE.

2015-01-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6941:
--
Attachment: SOLR-6941.patch

> DistributedQueue#containsTaskWithRequestId can fail with NPE.
> -
>
> Key: SOLR-6941
> URL: https://issues.apache.org/jira/browse/SOLR-6941
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6941.patch
>
>
> I've seen this happen some recently. It seems data can be returned as null and 
> we need to guard against it.






[jira] [Commented] (SOLR-6941) DistributedQueue#containsTaskWithRequestId can fail with NPE.

2015-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271314#comment-14271314
 ] 

Mark Miller commented on SOLR-6941:
---

I ran into this fail while working on SOLR-4509.

> DistributedQueue#containsTaskWithRequestId can fail with NPE.
> -
>
> Key: SOLR-6941
> URL: https://issues.apache.org/jira/browse/SOLR-6941
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> I've seen this happen a few times recently. It seems data can be returned as 
> null, and we need to guard against it.






[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2015-01-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271301#comment-14271301
 ] 

Alan Woodward commented on SOLR-6840:
-

I'm going to try using MiniSolrCloudCluster to launch both the control cluster 
and test clusters, and see if that helps any.  At the moment this is all very 
difficult to follow and untangle...

> Remove legacy solr.xml mode
> ---
>
> Key: SOLR-6840
> URL: https://issues.apache.org/jira/browse/SOLR-6840
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: SOLR-6840.patch, SOLR-6840.patch, SOLR-6840.patch
>
>
> On the [Solr Cores and solr.xml 
> page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
>  the Solr Reference Guide says:
> {quote}
> Starting in Solr 4.3, Solr will maintain two distinct formats for 
> {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
> have become accustomed to in which all of the cores one wishes to define in a 
> Solr instance are defined in {{solr.xml}} in 
> {{...}} tags. This format will continue to be 
> supported through the entire 4.x code line.
> As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
> Solr will support _core discovery_. [...]
> The new "core discovery mode" structure for solr.xml will become mandatory as 
> of Solr 5.0, see: Format of solr.xml.
> {quote}
> AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
> trunk.
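
For context, the two formats differ roughly as follows (the core name and 
attributes below are illustrative, not copied from a shipped config): legacy 
mode listed every core inside solr.xml, while discovery mode drops that list 
and instead locates a core.properties file in each core's directory.

```xml
<!-- legacy solr.xml (pre-5.0): every core declared explicitly -->
<solr>
  <cores adminPath="/admin/cores">
    <core name="collection1" instanceDir="collection1"/>
  </cores>
</solr>
```

In discovery mode the cores section disappears from solr.xml and each core 
directory carries its own core.properties file instead.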






[jira] [Created] (SOLR-6941) DistributedQueue#containsTaskWithRequestId can fail with NPE.

2015-01-09 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6941:
-

 Summary: DistributedQueue#containsTaskWithRequestId can fail with 
NPE.
 Key: SOLR-6941
 URL: https://issues.apache.org/jira/browse/SOLR-6941
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


I've seen this happen a few times recently. It seems data can be returned as 
null, and we need to guard against it.






[jira] [Commented] (LUCENE-6169) Recent Java 9 commit breaks fsync on directory

2015-01-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271300#comment-14271300
 ] 

Uwe Schindler commented on LUCENE-6169:
---

I started a mail thread on nio-dev mailing list: 
http://mail.openjdk.java.net/pipermail/nio-dev/2015-January/002979.html

> Recent Java 9 commit breaks fsync on directory
> --
>
> Key: LUCENE-6169
> URL: https://issues.apache.org/jira/browse/LUCENE-6169
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Uwe Schindler
>  Labels: Java9
>
> I open this issue to keep track of the communication with Oracle and OpenJDK 
> about this:
> Basically, what happens: In LUCENE-5588 we added support to FSDirectory to be 
> able to sync on directory metadata changes (meaning the contents of the 
> directory itself). This is very important on Unix systems (maybe also on 
> Windows), because fsyncing a single file does not necessarily write the 
> directory's contents to disk. Lucene uses this for commits. We first do an 
> atomic rename of the segments file (to make the commit public), but we have 
> to be sure that the rename operation is written to disk. Because of that we 
> must fsync the directory.
> To enforce this with plain system calls (libc), you open a directory for read 
> and then call fsync. In Java this can be done by opening a FileChannel on the 
> directory (for read) and calling fc.force() on it.
> Unfortunately the commit 
> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45 in OpenJDK 9 breaks 
> this. The corresponding issue is 
> https://bugs.openjdk.java.net/browse/JDK-8066915. The JDK now explicitly 
> checks if a file is a directory and disallows opening a FileChannel on it. 
> This breaks our commit safety.
> Because this behaviour is undocumented (not even POSIX has explicit semantics 
> for syncing directories), we only know that it worked at least on MacOSX and 
> Linux. The code in IOUtils is currently written in a way that it tries to 
> sync the directory, but swallows any Exception. So this change does not break 
> Lucene, but it breaks our commit safety. During testing we assert that the 
> fsync actually works on Linux and MacOSX; in production code the user will 
> notice nothing.
> We should take action and contact Alan Bateman about his commit and this 
> issue on the mailing list, possibly through Rory O'Donnell.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1915 - Still Failing!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1915/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([3473D19A11DC16EE]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([3473D19A11DC16EE]:0)




Build Log:
[...truncated 8848 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest
 3473D19A11DC16EE-001/init-core-data-001
   [junit4]   2> 1584864 T5014 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2> 1584865 T5014 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /xb/s
   [junit4]   2> 1584878 T5014 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 1584880 T5014 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1584881 T5015 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 1584887 T5014 oasc.ZkTestServer.run start zk server on 
port:50987
   [junit4]   2> 1584889 T5014 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 1584898 T5014 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 1584907 T5022 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@67f3fdd3 
name:ZooKeeperConnection Watcher:127.0.0.1:50987 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1584907 T5014 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 1584908 T5014 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 1584908 T5014 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 1584919 T5014 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 1584923 T5014 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 1584941 T5025 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@7c470aeb 
name:ZooKeeperConnection Watcher:127.0.0.1:50987/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1584951 T5014 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 1584953 T5014 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 1584953 T5014 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 1585021 T5014 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 1585082 T5014 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 1585128 T5014 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 1585178 T5014 oasc.AbstractZkTestCase.putConfig put 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 1585179 T5014 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 1585205 T5014 oasc.AbstractZkTestCase.putConfig put 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 1585205 T5014 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 1585218 T5014 oasc.AbstractZkTestCase.putConfig put 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1585218 T5014 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1585226 T5014 oasc.AbstractZkTestCase.putConfig put 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 1585227 T5014 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 1585233 T5014 oasc.AbstractZkTestCase.putConfig put 
/Users/jenkins/work

Recent Java 9 commit (e5b66323ae45) breaks fsync on directory

2015-01-09 Thread Uwe Schindler
Hi,

I just subscribed to this mailing list on behalf of the Apache Lucene 
committers. You might know that we test Apache Lucene/Solr and also 
Elasticsearch to detect problems, especially with Hotspot. We recently updated 
our testing infrastructure to make use of JDK 9 preview build 40. We mainly did 
this to check for issues around Jigsaw, but, knock on wood, nothing breaks the 
build. :-)

Unfortunately, a recent commit in OpenJDK 9 caused some headaches: 
http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45; the corresponding 
issue is https://bugs.openjdk.java.net/browse/JDK-8066915. To keep track of 
this, we opened an issue on our side, too: 
https://issues.apache.org/jira/browse/LUCENE-6169

Let me first describe what we currently do: Apache Lucene uses a "write once" 
approach (every file is written only once). When we "commit" a given "commit 
point" in Lucene, we have the following semantics: We write to some temporary 
file name, then we fsync this file (and all related files). This is easy with a 
file channel: Just call fc.force(). The final "publish" of the commit is done 
by an atomic rename using Files.move(Path, Path, 
StandardCopyOption.ATOMIC_MOVE). This works fine unless you have a real 
disaster :-) - like a power outage. In that case, on POSIX operating systems, 
the rename operation might not be visible at all. The Linux man page for fsync 
(http://linux.die.net/man/2/fsync) explicitly states: "Calling fsync() does not 
necessarily ensure that the entry in the directory containing the file has also 
reached disk. For that an explicit fsync() on a file descriptor for the 
directory is also needed."
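
The commit sequence described above can be sketched as follows (a minimal 
illustration, not Lucene's actual code; the helper name and the plain byte[] 
payload are made up for the example):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class AtomicCommitSketch {
    // Write the data under a temporary name, fsync it, then publish it
    // with an atomic rename to its final name.
    static void commit(Path dir, String finalName, byte[] data) throws IOException {
        Path tmp = dir.resolve(finalName + ".tmp");
        try (FileChannel fc = FileChannel.open(tmp,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            fc.write(ByteBuffer.wrap(data));
            fc.force(true); // flush file contents and metadata to disk
        }
        // Atomic "publish" of the commit point. After a power failure the
        // rename itself can still be lost unless the directory entry is
        // also fsynced.
        Files.move(tmp, dir.resolve(finalName), StandardCopyOption.ATOMIC_MOVE);
    }
}
```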

Basically, we currently do the same in Apache Lucene through the following: We 
open a FileChannel (for READ) on the directory itself and then also call 
fc.force(). Of course, as this is not really documented in the Java API (but we 
know it always worked!), we do this on a "best guess" basis: we must not fail 
if this throws any IOException. We know, for example, that this does not work 
on Windows*). 
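
The directory fsync itself, with the exception swallowing described above, 
looks roughly like this (a simplified sketch, not the real IOUtils code; the 
helper name is made up):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectoryFsync {
    // Best-effort fsync of a directory: open the directory itself for READ
    // and force it. Returns false instead of failing when the platform
    // (e.g. Windows, or a JDK affected by JDK-8066915) refuses.
    static boolean tryFsync(Path dir) {
        try (FileChannel fc = FileChannel.open(dir, StandardOpenOption.READ)) {
            fc.force(true);
            return true;
        } catch (IOException swallowed) {
            // The behaviour is undocumented, so we never propagate the failure.
            return false;
        }
    }
}
```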

The issue is that the above commit causes this approach to fail with a 
FileSystemException on OpenJDK 9: FileSystemException("Is a directory"). This 
does not break our Lucene releases in the wild, because - as said before - we 
swallow any exceptions on this. But in our testing infrastructure, we at least 
assert that this works on Linux and MacOSX, and this assert failed. The current 
code is here: http://goo.gl/vKhtsW

We really would like to keep the possibility to fsync a directory on supported 
operating systems. We hope that the above commit will not be backported into 
8u40 and 7u80 releases! In Java 9 we can discuss about other solutions how to 
handle this:
- Keep the current semantics as of Java 7 and Java 8, and just fail if you 
really want to READ/WRITE from this FileChannel? This is how the underlying 
operating system and libc handle this: you can open a file descriptor on 
anything (file/directory/device/...), but not all operations work on that 
descriptor; some of them throw an exception/return an error.
- Add a new API for fsyncing a directory (maybe for any file type). Like 
Files.fsync(Path)? On Windows this could just be a no-op for directories? 
Basically something like our IOUtils.fsync() from the link above.

What's your opinion, and how should we proceed?

Uwe

*) But there, the semantics of the file system make sure that we can see the 
file, so this is not really an issue - out of scope here (opening a directory 
for read causes an "Access Denied" error mapped to a Java IOException, but 
that's perfectly fine).

-
Uwe Schindler
uschind...@apache.org 
Apache Lucene PMC Member / Committer
Bremen, Germany
http://lucene.apache.org/






[jira] [Commented] (SOLR-6913) audit & cleanup "schema" in data_driven_schema_configs

2015-01-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271198#comment-14271198
 ] 

Steve Rowe commented on SOLR-6913:
--

bq. What's the reasoning behind removing so many of the field types?

My thinking was that the schemaless example should be minimal.  In particular, 
if we don't have a way for field types to be used (via (dynamic)field 
definitions or field guessing), why include them?  If the user can add fields, 
they can add field types too.

{quote}
I'd vote for returning:
# geo related
# currency
# Language support
{quote}

In the case of language support, there was no way to use those field types 
without manually adding fields (there were no dynamic fields defined for them), 
and as it stands we don't have a way to document the schema so that people can 
figure out what field types to use (though see my schema annotation proposal: 
[http://mail-archives.apache.org/mod_mbox/lucene-dev/201308.mbox/%3c7384f7f2-ad35-480b-8523-3db75aa06...@gmail.com%3E]).

There were geo dynamic fields to go with the defined field types, but I removed 
them because understanding which geo type to use seemed confusing, and Solr 
spatial is evolving, so it seemed better to let the user find the latest advice 
for how to use this and update the schema themselves.

I removed the currency capabilities because it seemed esoteric, and didn't fit 
with a simple example.

> audit & cleanup "schema" in data_driven_schema_configs
> --
>
> Key: SOLR-6913
> URL: https://issues.apache.org/jira/browse/SOLR-6913
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6913-trim-schema.patch, 
> SOLR-6913-trim-schema.patch, SOLR-6913.patch
>
>
> the data_driven_schema_configs configset has some issues that should be 
> reviewed carefully & cleaned up...
> * currently includes a schema.xml file:
> ** this was previously part of the old example to show the automatic 
> "bootstrapping" of schema.xml -> managed-schema, but at this point it's just 
> kind of confusing
> ** we should just rename this to "managed-schema" in svn - the ref guide 
> explains the bootstrapping
> * the effective schema as it currently stands includes a bunch of copyFields 
> & dynamicFields that are taken wholesale from the techproducts example
> ** some of these might make sense to keep in a general example (ie: "\*_txt") 
> but in general they should all be reviewed.
> ** a bunch of this cruft is actually commented out already, but anything we 
> don't want to keep should be removed to eliminate confusion
> * SOLR-6471 added an explicit "_text" field as the default and made it a 
> copyField catchall (ie: "\*")
> ** the ref guide schema API example responses need to reflect the existence 
> of this field: 
> https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
> ** we should draw heavy attention to this field+copyField -- both with a "/!\ 
> NOTE" in the refguide and call it out in solrconfig.xml & "managed-schema" 
> file comments since people who start with these configs may be surprised and 
> wind up with a very bloated index
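
The field-plus-copyField combination the issue warns about looks roughly like 
this in managed-schema (the field type and attribute values here are 
illustrative, not copied verbatim from the shipped config):

```xml
<!-- every field is copied into _text, which can bloat the index -->
<field name="_text" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="*" dest="_text"/>
```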






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_72) - Build # 11392 - Failure!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11392/
Java: 32bit/jdk1.7.0_72 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob after 150 attempts. Expecting 2 items. time 
elapsed 16 215  output  for url is {   "responseHeader":{ "status":0, 
"QTime":1},   "response":{ "numFound":1, "start":0, "docs":[{   
  "id":"test/1", "md5":"e5f36bb70010a9f9a45b66229137e88e", 
"blobName":"test", "version":1, 
"timestamp":"2015-01-09T14:59:19.556Z", "size":5222}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob after 150 attempts. 
Expecting 2 items. time elapsed 16 215  output  for url is {
  "responseHeader":{
"status":0,
"QTime":1},
  "response":{
"numFound":1,
"start":0,
"docs":[{
"id":"test/1",
"md5":"e5f36bb70010a9f9a45b66229137e88e",
"blobName":"test",
"version":1,
"timestamp":"2015-01-09T14:59:19.556Z",
"size":5222}]}}
at 
__randomizedtesting.SeedInfo.seed([2A7F79DEE40BECE7:AB99F7C693548CDB]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:148)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:111)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.ran

[jira] [Commented] (SOLR-6939) UpdateProcessor to buffer & sample documents and then batch create necessary fields

2015-01-09 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271152#comment-14271152
 ] 

Alexandre Rafalovitch commented on SOLR-6939:
-

So the interesting question is how the URP will know the upgrade path of types: 
that Int should upgrade to Float, etc.

May need a *Type Tree* of some sort with Strings on top. 
{quote}
In the beginning, L*ne created String type. And it was good!
But then the numbers had to be stored and they did not sort or facet well.
And then two numbers looked at each other and realised that they were 
different. 
One of them was straight and precise and another was imprecise and always 
floating.
And they saw each other, different as they were, next to each other in the bad 
sort and got embarrassed.
And L*ne got annoyed and cast them out of the uniform String type and created 
individual types, and packers.
And L*ne made some of the types special and more unique by letting them be 
stored as DocValues, but kept others individual and stored one-by-one on disk.
And then the flood came and cast some of the older types out to the legacy hell.
{quote}
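
The "upgrade path" idea can be sketched as a simple widening chain (the chain, 
the class, and the method names here are hypothetical, not an actual Solr URP 
API):

```java
import java.util.Arrays;
import java.util.List;

public class TypeWidening {
    // Hypothetical widening chain, most specific first; "string" is the
    // catch-all root of the type tree.
    static final List<String> CHAIN =
            Arrays.asList("int", "long", "float", "double", "string");

    // Least common denominator of two guessed types: whichever type sits
    // further along the chain wins.
    static String widen(String a, String b) {
        return CHAIN.indexOf(a) >= CHAIN.indexOf(b) ? a : b;
    }
}
```

A sampling processor would fold widen() over every value it sees for a field 
and only create the schema field once a commit arrives.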


> UpdateProcessor to buffer & sample documents and then batch create neccessary 
> fields
> 
>
> Key: SOLR-6939
> URL: https://issues.apache.org/jira/browse/SOLR-6939
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> spun off of an idea in SOLR-6016...
> {quote}
> bq. We could add a SchemaGeneratorHandler which would generate the "best" 
> schema.
> You wouldn't need/want a handler for this – you'd just need an 
> UpdateProcessorFactory to use in place of RunUpdateProcessorFactory that 
> would look at the datatypes of the fields in each document w/o doing any 
> indexing and pick the least common denominator.
> So then you'd have a chain with all of your normal update processors 
> including the TypeMapping processors configured with the precedence orders 
> and locales and format strings you want – and at the end you'd have your 
> BestFitSchemaGeneratorUpdateProcessorFactory that would look at all those 
> docs, study their values, and throw them away – until a commit comes along, 
> at which point it does all the under the hood schema field addition calls.
> So to learn, you'd send docs using whatever handler/format you want (json, 
> xml, extraction, etc...) with an 
> update.chain=my.datatype.learning.processor.chain request param ... and once 
> you've sent a bunch and given it a lot of variety to see, then you send a 
> commit so it creates the schema and then you re-index your docs for real w/o 
> that special chain.
> {quote}
> ...not mentioned originally: this factory could also default to assuming 
> fields should be single valued, unless/until it sees multiple values in a doc 
> that it samples.






[jira] [Commented] (SOLR-6876) Remove unused legacy scripts.conf

2015-01-09 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271113#comment-14271113
 ] 

Alexandre Rafalovitch commented on SOLR-6876:
-

I think that whole section refers to stuff that no longer ships with Solr. 
I don't know as of which version it stopped shipping.

> Remove unused legacy scripts.conf
> -
>
> Key: SOLR-6876
> URL: https://issues.apache.org/jira/browse/SOLR-6876
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.2, 5.0, Trunk
>Reporter: Alexandre Rafalovitch
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6876.patch
>
>
> Some of the example collections include *scripts.conf* in the *conf* 
> directory. It is not used by anything in the distribution and is somehow left 
> over from the Solr 1.x legacy days.
> It should be possible to safely delete it to avoid confusing users trying to 
> understand what different files actually do.






[jira] [Commented] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271084#comment-14271084
 ] 

Robert Muir commented on LUCENE-6170:
-

if you can reproduce it completely from scratch (no G1-created index) with CMS, 
that would be very useful.

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Commented] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271081#comment-14271081
 ] 

Robert Muir commented on LUCENE-6170:
-

Well, it could be that G1GC caused the corruption. I'm afraid to spend a lot of 
time on this knowing just how buggy it is.

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Commented] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271080#comment-14271080
 ] 

Littlestar commented on LUCENE-6170:


java version "1.7.0_60" + G1GC.
The problem is the same with both G1GC and the default CMS collector.




> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Created] (LUCENE-6171) Make lucene completely write-once

2015-01-09 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6171:
---

 Summary: Make lucene completely write-once
 Key: LUCENE-6171
 URL: https://issues.apache.org/jira/browse/LUCENE-6171
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Today, lucene is mostly write-once, but not always: the cases where it is not 
are very exceptional. 

This is an invitation for exceptional bugs (and we have occasional test 
failures when doing "no-wait close" because of this). 

I would prefer it if we didn't try to delete files before we open them for 
write, and if we opened them with the CREATE_NEW option by default, so that an 
exception is thrown if the file already exists.
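The CREATE_NEW behavior described above can be demonstrated with plain NIO.2. 
This is a minimal standalone sketch, not Lucene code; the segment file name is 
made up:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CreateNewDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("write-once");
        Path seg = dir.resolve("_0.cfs");  // hypothetical segment file name

        // First open succeeds: CREATE_NEW atomically creates the file.
        Files.newOutputStream(seg, StandardOpenOption.CREATE_NEW,
                StandardOpenOption.WRITE).close();

        // A second open of the same name fails instead of silently
        // overwriting -- exactly the write-once guarantee proposed above.
        try {
            Files.newOutputStream(seg, StandardOpenOption.CREATE_NEW,
                    StandardOpenOption.WRITE).close();
            System.out.println("unexpected overwrite");
        } catch (FileAlreadyExistsException expected) {
            System.out.println("refused to overwrite existing file");
        }
    }
}
```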

The trickier parts of the change are going to be IndexFileDeleter and 
exceptions on merge / CFS construction logic.

Overall for IndexFileDeleter I think the least invasive option might be to only 
delete files older than the current commit point? This will ensure that 
inflateGens() always avoids trying to overwrite any files that were from an 
aborted segment. 
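The "delete only files older than the current commit point" idea could look 
roughly like the following. This is a hypothetical sketch, not actual 
IndexFileDeleter code; the class, method names, and the simplified 
"segments_N" generation parsing are made up for illustration (Lucene does 
number its segments files in base-36):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of deleting only files older than the current commit
// point. Names and parsing rules are illustrative, not Lucene internals.
public class StaleFileSweep {

    // Parse the generation from a "segments_N" style name (N is base-36).
    static long gen(String name) {
        return Long.parseLong(name.substring(name.lastIndexOf('_') + 1),
                Character.MAX_RADIX);
    }

    // Keep anything at or after the current commit generation. Files from an
    // aborted segment carry a newer generation, so they are never deleted
    // here and inflateGens() never has to overwrite them.
    static List<String> filesToDelete(Collection<String> files,
                                      long currentCommitGen) {
        return files.stream()
                .filter(f -> gen(f) < currentCommitGen)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("segments_1", "segments_2", "segments_3");
        System.out.println(filesToDelete(files, gen("segments_3")));
        // prints [segments_1, segments_2]
    }
}
```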

For CFS construction/exceptions on merge, we really need to remove the custom 
"sniping" of index files there and let only IndexFileDeleter delete files. My 
previous approach involved always consistently using TrackingDirectoryWrapper, 
but it failed, and only in backwards-compatibility tests, because of 
LUCENE-6146 (but I could never figure out why). I am hoping 
this time I will be successful :)

Longer term we should think about more simplifications, progress has been made 
on LUCENE-5987, but I think overall we still try to be a superhero for 
exceptions on merge?






[jira] [Created] (SOLR-6940) Query UI in admin should support other facet options

2015-01-09 Thread Grant Ingersoll (JIRA)
Grant Ingersoll created SOLR-6940:
-

 Summary: Query UI in admin should support other facet options
 Key: SOLR-6940
 URL: https://issues.apache.org/jira/browse/SOLR-6940
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Grant Ingersoll


As of right now in the Admin Query UI, you can only easily provide facet 
options for field, query, and prefix. It would be nice to have easy-to-use 
options for pivots, ranges, etc.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_25) - Build # 11552 - Still Failing!

2015-01-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11552/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob after 150 attempts. Expecting 2 items. time 
elapsed 15,550  output  for url is {   "responseHeader":{ "status":0, 
"QTime":1},   "response":{ "numFound":1, "start":0, "docs":[{   
  "id":"test/1", "md5":"b79b8c52a68c642d0356b24e8f68feba", 
"blobName":"test", "version":1, 
"timestamp":"2015-01-09T13:56:48.372Z", "size":5193}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob after 150 attempts. 
Expecting 2 items. time elapsed 15,550  output  for url is {
  "responseHeader":{
"status":0,
"QTime":1},
  "response":{
"numFound":1,
"start":0,
"docs":[{
"id":"test/1",
"md5":"b79b8c52a68c642d0356b24e8f68feba",
"blobName":"test",
"version":1,
"timestamp":"2015-01-09T13:56:48.372Z",
"size":5193}]}}
at 
__randomizedtesting.SeedInfo.seed([8C8D622F997E32BB:D6BEC37EE215287]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:148)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:111)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carro

Re: Jigsaw early-access builds updated (JDK 9 build 40)

2015-01-09 Thread Rory O'Donnell

Hi Uwe,

Dalibor announced preliminary timelines for 7u80 back in December:
http://mail.openjdk.java.net/pipermail/jdk7u-dev/2014-December/010126.html

Hope that answers all your questions; let me know if there is anything 
missing, and please do test.

I will be sending availability emails on Monday for all the latest builds.

Rgds,Rory

On 09/01/2015 13:56, Uwe Schindler wrote:


Thanks Rory for the feedback!

I just noticed that there is a build for a coming Java 7u80 on the way 
(https://jdk7.java.net/download.html). The last time I talked with 
you, I had the impression that there would be no more Java 7u80 
before the final countdown for Java 7 :-)


Is this still true, and should we start testing these builds? What's 
the time frame for those? We currently only test Java 8u40 and Java 9 
previews. I just want to be sure that nothing breaks for the final 
Java 7 builds. Maybe Dalibor knows more.


Uwe

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de 

eMail: u...@thetaphi.de

*From:*Rory O'Donnell [mailto:rory.odonn...@oracle.com]
*Sent:* Friday, January 09, 2015 2:42 PM
*To:* dev@lucene.apache.org
*Cc:* Uwe Schindler; 'Dawid Weiss'; 'Vaidya Balchandra'; 'Dalibor 
Topic'; rory.odonn...@oracle.com

*Subject:* Re: Jigsaw early-access builds updated (JDK 9 build 40)

On 09/01/2015 12:53, Uwe Schindler wrote:

Hi Rory,

I finally had some time to update the JDK versions on our build
server. I bumped up JDK 9 to build 40 yesterday. In general, the
missing rt.jar is not causing problems for us. Only some bytecode
checking tools or the optionally used Eclipse ECJ compiler for
Javadocs cannot handle that. In the case of my own forbidden-apis
checker, the problem is just that it cannot understand the new
classpath format, so the deep bytecode checks are disabled and a
warning is printed. Finally: the build does not break.

Thanks for that feedback Uwe!

But we have a problem with running tests, which is caused by an 
unrelated change in the Java NIO.2 FileSystem API; please see this 
issue for a description:


https://issues.apache.org/jira/browse/LUCENE-6169

My question: what would be the right contact person and mailing list 
for this (NIO.2 filesystem stuff)? Alan Bateman did the commit that 
breaks our Lucene index commit safety (this is really important to 
make sure that a commit on a Lucene/Solr/Elasticsearch index is 
correctly persisted to disk).
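For context, the commit safety Uwe mentions relies on fsync'ing both the 
written file and its parent directory through NIO.2. A standalone sketch of 
that pattern (not Lucene's actual IOUtils code; whether a directory can be 
opened as a FileChannel is platform- and JDK-dependent, which is the point of 
LUCENE-6169):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("commit");
        Path file = dir.resolve("segments_1");  // hypothetical commit file
        Files.write(file, new byte[] {1, 2, 3});

        // Force the file's contents and metadata to stable storage.
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.force(true);
        }

        // Also fsync the directory so the new file *name* survives a crash.
        // Opening a directory as a FileChannel is the platform-dependent step
        // that the JDK change discussed in LUCENE-6169 affected.
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
            System.out.println("directory fsync ok");
        } catch (IOException e) {
            System.out.println("directory fsync unsupported here");
        }
    }
}
```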


Let me check, I will try to get back to you today if not then on Monday.

Rgds,Rory

Regards,

Uwe

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de 

eMail: u...@thetaphi.de 

*From:*Rory O'Donnell [mailto:rory.odonn...@oracle.com]
*Sent:* Monday, November 24, 2014 2:48 PM
*To:* Uwe Schindler; Dawid Weiss
*Cc:* rory.odonn...@oracle.com ; 
dev@lucene.apache.org ; Vaidya 
Balchandra; Dalibor Topic

*Subject:* Jigsaw early-access builds updated (JDK 9 build 40)


Hi Uwe & Dawid,

JDK 9 Early Access with Project Jigsaw build b40 is available for 
download at :

https://jdk9.java.net/jigsaw/

The goal of Project Jigsaw [2] is to design and implement a standard module 
system for the Java SE Platform, and to apply that system to the Platform 
itself and to the JDK.

The main change in this build is that it includes the jrt: file-system 
provider, so it now implements all of the changes described in JEP 220.
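The jrt: provider mentioned above can be probed from plain Java. A minimal 
sketch, assuming a JDK 9+ runtime (on older JDKs the jrt: filesystem does not 
exist and the lookup throws):

```java
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;

public class JrtProbe {
    public static void main(String[] args) {
        // JEP 220 replaces rt.jar with a jrt: filesystem that exposes the
        // runtime's classes, grouped by module.
        FileSystem jrt = FileSystems.getFileSystem(URI.create("jrt:/"));
        Path object = jrt.getPath("/modules/java.base/java/lang/Object.class");
        System.out.println("java.base present: " + Files.exists(object));
    }
}
```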

Please refer to Project Jigsaw's updated project pages [2] & [4] and Mark
Reinhold's update [5] for further details.

We are very interested in your experiences testing this build. Comments, 
questions, and suggestions are welcome on the jigsaw-dev mailing list or 
through bug reports via bugs.java.com.

Note: If you haven't already subscribed to that mailing list then please do 
so first, otherwise your message will be discarded as spam.

Rgds, Rory

[1] https://jdk9.java.net/jigsaw/
[2] http://openjdk.java.net/projects/jigsaw/
[3] http://openjdk.java.net/jeps/220
[4] http://openjdk.java.net/projects/jigsaw/ea
[5] 
http://mail.openjdk.java.net/pipermail/jigsaw-dev/2014-November/004014.html 




--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland





