[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b65) - Build # 4384 - Failure!

2013-02-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4384/
Java: 64bit/jdk1.8.0-ea-b65 -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage

Error Message:
expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>

Stack Trace:
java.lang.AssertionError: expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>
	at __randomizedtesting.SeedInfo.seed([39EE3E2ACD77DFE6:62FD87CF417F8006]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:147)
	at org.apache.lucene.classification.ClassificationTestBase.checkCorrectClassification(ClassificationTestBase.java:68)
	at org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage(SimpleNaiveBayesClassifierTest.java:33)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:474)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
	at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 6270 lines...]
[junit4:junit4] Suite: org.apache.lucene.classification.SimpleNaiveBayesClassifierT

[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582960#comment-13582960
 ] 

Shawn Heisey commented on SOLR-4450:


bq. That's already how it is basically. Solr.xml is what allows you to then 
send in values by system prop - which is much easier to do for the getting 
started demo.

So I can specify zkHost and all that stuff that's currently on the commandline 
in solr.xml?  I *like* that!  I haven't put numShards on the commandline, I did 
that when I used the collections API to create the collection.  I will be 
investigating this!
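
As background for the exchange above: in the 4.x example configs, solr.xml attributes can be wired to system properties with ${prop:default} substitution, which is what lets every node run the same startup script and differ only in its -D flags. A hedged sketch of what that could look like (the exact attribute set varies by Solr version, and the defaults here are illustrative):

```xml
<!-- Sketch of a 4.x-style solr.xml: cloud settings pulled from system
     properties via ${prop:default} substitution, so all nodes can share
     one init script and pass per-node values as -D flags. -->
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1"
         host="${host:}" hostPort="${jetty.port:8983}"
         hostContext="${hostContext:}"
         zkClientTimeout="${zkClientTimeout:15000}"
         zkHost="${zkHost:}">
    <core name="collection1" instanceDir="collection1"/>
  </cores>
</solr>
```

With this in place, `java -DzkHost=nodeA:9983 -jar start.jar` and a plain local start differ only in the properties supplied, not in the config file.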


> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)
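
The NumberFormatException above is the symptom of the chroot suffix not being stripped before the host:port pairs are parsed. A ZooKeeper connect string carries at most one chroot, appended after the final host (e.g. nodeA:9983,nodeB:9983/solrroot), so a parser must peel it off the whole string first. A minimal illustration of that parsing order (this is not Solr's actual code, just a sketch of the failure mode):

```java
// Sketch (not Solr's parser): why "9983/solrroot" reaches Integer.parseInt
// if the chroot is not stripped off the whole connect string first.
public class ZkConnectString {
    // Parse "host1:port1,...,hostN:portN/chroot" into the port list.
    public static int[] ports(String connect) {
        // Strip the single trailing chroot before splitting hosts.
        // If this step were skipped, the last entry would be
        // "nodeD:9983/solrroot" and parseInt would throw
        // NumberFormatException: For input string: "9983/solrroot".
        int slash = connect.indexOf('/');
        String hosts = slash >= 0 ? connect.substring(0, slash) : connect;
        String[] pairs = hosts.split(",");
        int[] ports = new int[pairs.length];
        for (int i = 0; i < pairs.length; i++) {
            ports[i] = Integer.parseInt(pairs[i].substring(pairs[i].indexOf(':') + 1));
        }
        return ports;
    }

    public static void main(String[] args) {
        int[] p = ports("nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot");
        System.out.println(p.length + " hosts, last port " + p[p.length - 1]);
    }
}
```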

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



question with ConjunctionTermScorer

2013-02-20 Thread John Wang
Hi folks:

In the class ConjunctionTermScorer, method doNext, line 52, it looks
like in the case where any of the sub-iterators, e.g. docsAndFreqs[i].doc,
has reached the end, i.e. returned NO_MORE_DOCS, the lead iterator
would continue to scan/iterate through the posting list, because the if
block on line 62 will always trigger the break, causing the lead
iterator to keep scanning.

   Looks to me there should be either:

1) a check here for NO_MORE_DOCS that exits the top loop and terminates
the iteration,
or
2) perhaps more optimally, if docsAndFreqs[i].doc > doc, we should let
lead.doc advance to that doc.

It is possible I am missing something. Any comments appreciated!

Thanks

-John
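
For readers following along: the conjunction essentially leapfrogs, with the lead iterator proposing a doc and the others advancing to at least that doc; if one overshoots, the proposal is rejected. A self-contained sketch of that shape over sorted int arrays, including suggestion (2) of jumping the lead straight to the overshoot (NO_MORE_DOCS modeled as Integer.MAX_VALUE, as in DocIdSetIterator; this is an illustration, not Lucene's actual class):

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained leapfrog-conjunction sketch (not Lucene's code).
// Modeling NO_MORE_DOCS as Integer.MAX_VALUE is what makes the exhausted
// case terminate: once any iterator is exhausted, advancing the lead to
// NO_MORE_DOCS ends the loop instead of scanning the rest of the lead.
public class ConjunctionSketch {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // advance(docs, target): smallest doc >= target, else NO_MORE_DOCS.
    static int advance(int[] docs, int target) {
        for (int d : docs) if (d >= target) return d;
        return NO_MORE_DOCS;
    }

    // Docs present in every list. Implements suggestion (2) from the
    // mail: on overshoot, jump the lead directly to the overshoot doc
    // rather than stepping it one posting at a time.
    static List<Integer> conjunction(int[][] lists) {
        List<Integer> hits = new ArrayList<>();
        int[] lead = lists[0];
        int doc = advance(lead, 0);
        while (doc != NO_MORE_DOCS) {
            int max = doc;
            for (int i = 1; i < lists.length; i++) {
                max = Math.max(max, advance(lists[i], doc));
            }
            if (max == doc) {           // every iterator agreed: a match
                hits.add(doc);
                doc = advance(lead, doc + 1);
            } else {                    // overshoot: leapfrog the lead
                doc = advance(lead, max);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        int[][] lists = { {1, 3, 5, 7, 9}, {3, 7, 8}, {2, 3, 7} };
        System.out.println(conjunction(lists)); // docs common to all three
    }
}
```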


[jira] [Commented] (LUCENE-4570) Release ForbiddenAPI checker on Google Code

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582850#comment-13582850
 ] 

Commit Tag Bot commented on LUCENE-4570:


[trunk commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1447138

LUCENE-4570: Maven ForbiddenAPIs configuration cleanups:
- Clean up overly long execution IDs
- Make at least one test execution per module include 
internalRuntimeForbidden=true
- Make at least one test execution per module include signatureFile 
executors.txt
- Include bundledSignature commons-io-unsafe in solr test-framework 
forbiddenapis check
- Note in the Solr shared test-check configuration to include bundledSignature 
commons-io-unsafe only in modules with commons-io on their classpath


> Release ForbiddenAPI checker on Google Code
> ---
>
> Key: LUCENE-4570
> URL: https://issues.apache.org/jira/browse/LUCENE-4570
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: general/build
>Reporter: Robert Muir
>Assignee: Uwe Schindler
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4570-maven-inherited.patch, 
> LUCENE-4570-maven-inherited.patch, LUCENE-4570-maven.patch, 
> LUCENE-4570-maven.patch, LUCENE-4570.patch, LUCENE-4570.patch, 
> LUCENE-4570.patch, LUCENE-4570.patch, LUCENE-4570.patch
>
>
> Currently there is source code in lucene/tools/src (e.g. Forbidden APIs 
> checker ant task).
> It would be convenient if you could download this thing in your ant build 
> from ivy (especially if maybe it included our definitions .txt files as 
> resources).
> In general checking for locale/charset violations in this way is a pretty 
> general useful thing for a server-side app.
> Can we either release lucene-tools.jar as an artifact, or maybe alternatively 
> move this somewhere else as a standalone project and suck it in ourselves?




[jira] [Commented] (SOLR-4467) Ephemeral directory implementations may not recover correctly because the code to clear the tlog files on startup is off.

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582848#comment-13582848
 ] 

Commit Tag Bot commented on SOLR-4467:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1447308

SOLR-4467: While looking into what looks like some kind of resource leak, make 
this hard fail a soft logging fail


> Ephemeral directory implementations may not recover correctly because the 
> code to clear the tlog files on startup is off.
> -
>
> Key: SOLR-4467
> URL: https://issues.apache.org/jira/browse/SOLR-4467
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.2, 5.0
>
>





[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582849#comment-13582849
 ] 

Commit Tag Bot commented on LUCENE-4728:


[trunk commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1447141

LUCENE-4728: IntelliJ configuration: add queries module dependency to 
highlighter module


> Allow CommonTermsQuery to be highlighted
> 
>
> Key: LUCENE-4728
> URL: https://issues.apache.org/jira/browse/LUCENE-4728
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 4.1
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4728.patch, LUCENE-4728.patch, LUCENE-4728.patch, 
> LUCENE-4728.patch
>
>
> Add support for CommonTermsQuery to all highlighter impls. 
> This might add a dependency (query-jar) to the highlighter so we might think 
> about adding it to core?




[jira] [Commented] (SOLR-4414) MoreLikeThis on a shard finds no interesting terms if the document queried is not in that shard

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582847#comment-13582847
 ] 

Commit Tag Bot commented on SOLR-4414:
--

[trunk commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revision&revision=1447336

SOLR-4414: Add 'state' to shards (default to 'active') and read/write them to 
ZooKeeper


> MoreLikeThis on a shard finds no interesting terms if the document queried is 
> not in that shard
> ---
>
> Key: SOLR-4414
> URL: https://issues.apache.org/jira/browse/SOLR-4414
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis, SolrCloud
>Affects Versions: 4.1
>Reporter: Colin Bartolome
>
> Running a MoreLikeThis query in a cloud works only when the document being 
> queried exists in whatever shard serves the request. If the document is not 
> present in the shard, no "interesting terms" are found and, consequently, no 
> matches are found.
> h5. Steps to reproduce
> * Edit example/solr/collection1/conf/solrconfig.xml and add this line, with 
> the rest of the request handlers:
> {code:xml}
> 
> {code}
> * Follow the [simplest SolrCloud 
> example|http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster]
>  to get two shards running.
> * Hit this URL: 
> [http://localhost:8983/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> * Compare that output to that of this URL: 
> [http://localhost:7574/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> The former URL will return a result and list some interesting terms. The 
> latter URL will return no results and list no interesting terms. It will also 
> show this odd XML element:
> {code:xml}
> 
> {code}




[jira] [Commented] (SOLR-4415) Read/Write shard’s state to ZooKeeper

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582846#comment-13582846
 ] 

Commit Tag Bot commented on SOLR-4415:
--

[trunk commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revision&revision=1447341

SOLR-4415: Add 'state' to shards (default to 'active') and read/write them to 
ZooKeeper (Fixed issue number in change log)


> Read/Write shard’s state to ZooKeeper
> -
>
> Key: SOLR-4415
> URL: https://issues.apache.org/jira/browse/SOLR-4415
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-4415.patch, SOLR-4415-withTests.patch
>
>
> Read/Write shard’s (at the Slice level) state to ZK. Make sure that the state 
> is watched and available on nodes.
> Also, check state of shard at read/write points where required.




[jira] [Commented] (SOLR-3755) shard splitting

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582845#comment-13582845
 ] 

Commit Tag Bot commented on SOLR-3755:
--

[trunk commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revision&revision=1447516

SOLR-3755: Do not create core on split action, use 'targetCore' param instead


> shard splitting
> ---
>
> Key: SOLR-3755
> URL: https://issues.apache.org/jira/browse/SOLR-3755
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Yonik Seeley
> Attachments: SOLR-3755-CoreAdmin.patch, SOLR-3755.patch, 
> SOLR-3755.patch, SOLR-3755-testSplitter.patch, SOLR-3755-testSplitter.patch
>
>
> We can currently easily add replicas to handle increases in query volume, but 
> we should also add a way to add additional shards dynamically by splitting 
> existing shards.




[jira] [Commented] (SOLR-4394) Add SSL tests and example configs

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582844#comment-13582844
 ] 

Commit Tag Bot commented on SOLR-4394:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1447885

SOLR-4394: phase 2, promoted SSL randomization logic up to SolrJettyTestBase


> Add SSL tests and example configs
> -
>
> Key: SOLR-4394
> URL: https://issues.apache.org/jira/browse/SOLR-4394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4394.patch, SOLR-4394.patch, SOLR-4394.patch, 
> SOLR-4394__phase2.patch
>
>
> We should provide some examples of running Solr+Jetty with SSL enabled, and 
> have some basic tests using jetty over SSL




[jira] [Commented] (LUCENE-4765) Multi-valued docvalues field

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582842#comment-13582842
 ] 

Commit Tag Bot commented on LUCENE-4765:


[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1447999

LUCENE-4765: Multi-valued docvalues field


> Multi-valued docvalues field
> 
>
> Key: LUCENE-4765
> URL: https://issues.apache.org/jira/browse/LUCENE-4765
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4765.patch, LUCENE-4765.patch
>
>
> The general idea is basically the docvalues parallel to 
> FieldCache.getDocTermOrds/UninvertedField
> Currently this stuff is used in e.g. grouping and join for multivalued 
> fields, and in solr for faceting.




[jira] [Commented] (SOLR-4394) Add SSL tests and example configs

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582843#comment-13582843
 ] 

Commit Tag Bot commented on SOLR-4394:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1447952

SOLR-4394: move CHANGES entry in prep for backporting


> Add SSL tests and example configs
> -
>
> Key: SOLR-4394
> URL: https://issues.apache.org/jira/browse/SOLR-4394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4394.patch, SOLR-4394.patch, SOLR-4394.patch, 
> SOLR-4394__phase2.patch
>
>
> We should provide some examples of running Solr+Jetty with SSL enabled, and 
> have some basic tests using jetty over SSL




[jira] [Commented] (LUCENE-4782) Let the NaiveBayes classifier have a fallback docCount method if codec doesn't support Terms#docCount()

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582841#comment-13582841
 ] 

Commit Tag Bot commented on LUCENE-4782:


[trunk commit] Tommaso Teofili
http://svn.apache.org/viewvc?view=revision&revision=1448204

LUCENE-4782 - fixed SNBC docsWithClassSize initialization in case of codec 
doesn't support Terms#getDocCount


> Let the NaiveBayes classifier have a fallback docCount method if codec 
> doesn't support Terms#docCount()
> ---
>
> Key: LUCENE-4782
> URL: https://issues.apache.org/jira/browse/LUCENE-4782
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2, 5.0
>
>
> In _SimpleNaiveBayesClassifier_ _docsWithClassSize_ variable is initialized 
> to _MultiFields.getTerms(this.atomicReader, 
> this.classFieldName).getDocCount()_ which may be -1 if the codec doesn't 
> support doc counts, therefore there should be an alternative way to 
> initialize such a variable with the documents count.
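
The shape of the fix described here is a guarded fallback: use Terms#getDocCount() when the codec supplies it, and otherwise derive the count another way (e.g. by walking the documents that actually carry the class field). A codec-free sketch of that pattern (the helper names are illustrative, not the actual LUCENE-4782 patch):

```java
// Illustrative fallback pattern for LUCENE-4782 (names are made up; this
// is not the patch itself). Codecs that cannot report per-field doc
// counts signal that with -1, so the value must not be used blindly.
public class DocCountFallback {
    // Stand-in for Terms#getDocCount(): -1 means "unsupported".
    interface Terms {
        int getDocCount();
    }

    // Stand-in for counting documents that actually have the field,
    // e.g. by iterating the postings of the field's terms.
    static int countDocsWithField(int[] docsWithField) {
        return docsWithField.length;
    }

    static int docsWithClassSize(Terms terms, int[] docsWithField) {
        int count = terms.getDocCount();
        // Fall back only when the codec can't provide the count.
        return count != -1 ? count : countDocsWithField(docsWithField);
    }

    public static void main(String[] args) {
        Terms supported = () -> 42;     // codec reports the count
        Terms unsupported = () -> -1;   // codec can't report it
        int[] docs = {0, 1, 2};
        System.out.println(docsWithClassSize(supported, docs));
        System.out.println(docsWithClassSize(unsupported, docs));
    }
}
```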




[jira] [Commented] (LUCENE-4782) Let the NaiveBayes classifier have a fallback docCount method if codec doesn't support Terms#docCount()

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582840#comment-13582840
 ] 

Commit Tag Bot commented on LUCENE-4782:


[trunk commit] Tommaso Teofili
http://svn.apache.org/viewvc?view=revision&revision=1448207

LUCENE-4782 - removed wrong line in build.xml


> Let the NaiveBayes classifier have a fallback docCount method if codec 
> doesn't support Terms#docCount()
> ---
>
> Key: LUCENE-4782
> URL: https://issues.apache.org/jira/browse/LUCENE-4782
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2, 5.0
>
>
> In _SimpleNaiveBayesClassifier_ _docsWithClassSize_ variable is initialized 
> to _MultiFields.getTerms(this.atomicReader, 
> this.classFieldName).getDocCount()_ which may be -1 if the codec doesn't 
> support doc counts, therefore there should be an alternative way to 
> initialize such a variable with the documents count.




[jira] [Commented] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582838#comment-13582838
 ] 

Commit Tag Bot commented on LUCENE-4790:


[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448368

LUCENE-4790: FieldCache.getDocTermOrds back to the future bug


> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.
> This means in cases if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> Simple fix is to make DocTermOrds uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, Solr's UninvertedField just keeps doing 
> what it's doing today (since it's a top-level reader, and cached somewhere 
> else).
> Also DocTermOrds had a telescoping ctor that was uninverting twice. 
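
The hazard described above reproduces in miniature: if a cached per-field structure silently applies the deletions visible at build time, an earlier-point-in-time reader sharing that cache sees documents vanish that were live in its snapshot. A toy sketch of the bug and of the fix of taking liveDocs explicitly (names illustrative, not Lucene's code):

```java
import java.util.BitSet;

// Toy sketch of the LUCENE-4790 hazard (names are illustrative). A
// structure that bakes the builder's liveDocs into itself is only
// correct for readers with exactly those deletions; taking liveDocs as
// an explicit parameter keeps the cached structure deletion-neutral.
public class LiveDocsSketch {
    // Build the set of docs that have the field, optionally filtered by
    // liveDocs (null means "don't filter", as in the FieldCacheImpl fix).
    static BitSet uninvert(boolean[] hasField, BitSet liveDocs) {
        BitSet result = new BitSet();
        for (int doc = 0; doc < hasField.length; doc++) {
            if (hasField[doc] && (liveDocs == null || liveDocs.get(doc))) {
                result.set(doc);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        boolean[] hasField = {true, true, true};
        BitSet r2Live = new BitSet();   // reader r2: doc 1 deleted
        r2Live.set(0);
        r2Live.set(2);

        // Buggy pattern: build against r2's liveDocs, then reuse for r1.
        // For r1 (no deletions yet) doc 1 is live, but the baked-in
        // structure claims it has no value: wrong answers for r1.
        BitSet baked = uninvert(hasField, r2Live);

        // Fix: cache a deletion-neutral structure and let each reader
        // apply its own liveDocs at query time.
        BitSet neutral = uninvert(hasField, null);
        System.out.println(baked.get(1) + " " + neutral.get(1));
    }
}
```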




[jira] [Commented] (LUCENE-4789) Typos in API documentation

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582837#comment-13582837
 ] 

Commit Tag Bot commented on LUCENE-4789:


[trunk commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1448400

LUCENE-4789: fix typos


> Typos in API documentation
> --
>
> Key: LUCENE-4789
> URL: https://issues.apache.org/jira/browse/LUCENE-4789
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Hao Zhong
>Assignee: Steve Rowe
> Fix For: 4.2, 5.0
>
>
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/analysis/package-summary.html
> neccessary->necessary 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/LogMergePolicy.html
> exceesd->exceed 
> http://lucene.apache.org/core/4_1_0/queryparser/serialized-form.html
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
> followng->following
> http://lucene.apache.org/core/4_1_0/codecs/org/apache/lucene/codecs/bloom/FuzzySet.html
> qccuracy->accuracy
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/search/params/FacetRequest.html
> methonds->methods
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
> implemetation->implementation
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/TimeLimitingCollector.html
> construcutor->constructor 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/store/BufferedIndexInput.html
> bufer->buffer
> http://lucene.apache.org/core/4_1_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.html
> horizonal->horizontal
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/writercache/lru/NameHashIntCacheLRU.html
>  
> cahce->cache
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/processors/BooleanQuery2ModifierNodeProcessor.html
> precidence->precedence
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie.html
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie2.html
> commmands->commands
> Please revise the documentation. 




[jira] [Commented] (SOLR-4477) match-only query support (terms,wildcards,ranges) for docvalues fields.

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582836#comment-13582836
 ] 

Commit Tag Bot commented on SOLR-4477:
--

[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448440

SOLR-4477: match-only query support for docvalues fields


> match-only query support (terms,wildcards,ranges) for docvalues fields.
> ---
>
> Key: SOLR-4477
> URL: https://issues.apache.org/jira/browse/SOLR-4477
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4477.patch
>
>
> Historically, you had to invert fields (indexed=true) to do any queries 
> against them.
> But now it's possible to build a forward index for the field (docValues=true).
> I think in many cases (e.g. a string field you only sort and match on), it's 
> unnecessary and wasteful
> to force the user to also invert if they don't need scoring.
> So I think solr should support match-only semantics in this case for 
> term,wildcard,range,etc.




[jira] [Commented] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582835#comment-13582835
 ] 

Commit Tag Bot commented on LUCENE-4790:


[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448489

LUCENE-4790: nuke test workaround now that bug is fixed


> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.
> This means that if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> The simple fix is to make DocTermOrds uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, and Solr's UninvertedField just keeps doing 
> what it's doing today (since it's a top-level reader, and cached somewhere 
> else).
> Also, DocTermOrds had a telescoping ctor that was uninverting twice. 
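
The failure mode is easiest to see outside Lucene. The sketch below (plain Java, not the actual DocTermOrds code; all names are illustrative) shows why baking a deletion bitmap into a shared cache at build time is wrong, and why passing liveDocs at lookup time fixes it:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

public class BakedLiveDocsSketch {
    // Buggy variant: deletions visible at build time are frozen into the
    // cache, so an older reader (with fewer deletions) gets wrong answers.
    static Map<Integer, String> buildBaked(String[] docs, BitSet liveAtBuild) {
        Map<Integer, String> cache = new HashMap<>();
        for (int i = 0; i < docs.length; i++) {
            if (liveAtBuild.get(i)) cache.put(i, docs[i]);
        }
        return cache;
    }

    // Fixed variant: cache every doc; apply the caller's liveDocs at read time.
    static Map<Integer, String> buildAll(String[] docs) {
        Map<Integer, String> cache = new HashMap<>();
        for (int i = 0; i < docs.length; i++) cache.put(i, docs[i]);
        return cache;
    }

    static String lookup(Map<Integer, String> cache, BitSet live, int doc) {
        return live.get(doc) ? cache.get(doc) : null;
    }

    public static void main(String[] args) {
        String[] docs = {"a", "b"};
        BitSet r1Live = new BitSet();               // older reader: both docs live
        r1Live.set(0); r1Live.set(1);
        BitSet r2Live = new BitSet();               // newer reader: doc 1 deleted
        r2Live.set(0);

        // Build against r2 first, then query as r1: doc 1 is lost for r1 too.
        Map<Integer, String> baked = buildBaked(docs, r2Live);
        System.out.println(baked.get(1));                 // null (wrong for r1)

        // Shared cache plus explicit liveDocs gives each reader its own view.
        Map<Integer, String> shared = buildAll(docs);
        System.out.println(lookup(shared, r1Live, 1));    // b
        System.out.println(lookup(shared, r2Live, 1));    // null
    }
}
```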




[jira] [Commented] (SOLR-4467) Ephemeral directory implementations may not recover correctly because the code to clear the tlog files on startup is off.

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582832#comment-13582832
 ] 

Commit Tag Bot commented on SOLR-4467:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1447312

SOLR-4467: While looking into what looks like some kind of resource leak, make 
this hard fail a soft logging fail


> Ephemeral directory implementations may not recover correctly because the 
> code to clear the tlog files on startup is off.
> -
>
> Key: SOLR-4467
> URL: https://issues.apache.org/jira/browse/SOLR-4467
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.2, 5.0
>
>





[jira] [Commented] (LUCENE-4570) Release ForbiddenAPI checker on Google Code

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582834#comment-13582834
 ] 

Commit Tag Bot commented on LUCENE-4570:


[branch_4x commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1447139

LUCENE-4570: Maven ForbiddenAPIs configuration cleanups:
- Clean up overly long execution IDs
- Make at least one text execution per module include 
internalRuntimeForbidden=true
- Make at least one text execution per module include signatureFile 
executors.txt
- Include bundledSignature commons-io-unsafe in solr test-framework 
forbiddenapis check
- Note in the Solr shared test-check configuration to include bundledSignature 
commons-io-unsafe only in modules with commons-io on their classpath
(merged trunk r1447138)


> Release ForbiddenAPI checker on Google Code
> ---
>
> Key: LUCENE-4570
> URL: https://issues.apache.org/jira/browse/LUCENE-4570
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: general/build
>Reporter: Robert Muir
>Assignee: Uwe Schindler
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4570-maven-inherited.patch, 
> LUCENE-4570-maven-inherited.patch, LUCENE-4570-maven.patch, 
> LUCENE-4570-maven.patch, LUCENE-4570.patch, LUCENE-4570.patch, 
> LUCENE-4570.patch, LUCENE-4570.patch, LUCENE-4570.patch
>
>
> Currently there is source code in lucene/tools/src (e.g. the Forbidden APIs 
> checker ant task).
> It would be convenient if you could download this thing in your ant build 
> from ivy (especially if it also included our definitions .txt files as 
> resources).
> In general, checking for locale/charset violations in this way is a pretty 
> useful thing for a server-side app.
> Can we either release lucene-tools.jar as an artifact, or alternatively 
> move this somewhere else as a standalone project and suck it in ourselves?




[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582833#comment-13582833
 ] 

Commit Tag Bot commented on LUCENE-4728:


[branch_4x commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1447142

LUCENE-4728: IntelliJ configuration: add queries module dependency to 
highlighter module (merged trunk r1447141)


> Allow CommonTermsQuery to be highlighted
> 
>
> Key: LUCENE-4728
> URL: https://issues.apache.org/jira/browse/LUCENE-4728
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 4.1
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4728.patch, LUCENE-4728.patch, LUCENE-4728.patch, 
> LUCENE-4728.patch
>
>
> Add support for CommonTermsQuery to all highlighter impls. 
> This might add a dependency (query-jar) to the highlighter so we might think 
> about adding it to core?




[jira] [Commented] (SOLR-4414) MoreLikeThis on a shard finds no interesting terms if the document queried is not in that shard

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582831#comment-13582831
 ] 

Commit Tag Bot commented on SOLR-4414:
--

[branch_4x commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revision&revision=1447339

SOLR-4414: Add 'state' to shards (default to 'active') and read/write them to 
ZooKeeper


> MoreLikeThis on a shard finds no interesting terms if the document queried is 
> not in that shard
> ---
>
> Key: SOLR-4414
> URL: https://issues.apache.org/jira/browse/SOLR-4414
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis, SolrCloud
>Affects Versions: 4.1
>Reporter: Colin Bartolome
>
> Running a MoreLikeThis query in a cloud works only when the document being 
> queried exists in whatever shard serves the request. If the document is not 
> present in the shard, no "interesting terms" are found and, consequently, no 
> matches are found.
> h5. Steps to reproduce
> * Edit example/solr/collection1/conf/solrconfig.xml and add this line, with 
> the rest of the request handlers:
> {code:xml}
> 
> {code}
> * Follow the [simplest SolrCloud 
> example|http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster]
>  to get two shards running.
> * Hit this URL: 
> [http://localhost:8983/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> * Compare that output to that of this URL: 
> [http://localhost:7574/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> The former URL will return a result and list some interesting terms. The 
> latter URL will return no results and list no interesting terms. It will also 
> show this odd XML element:
> {code:xml}
> 
> {code}




[jira] [Commented] (SOLR-4415) Read/Write shard’s state to ZooKeeper

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582830#comment-13582830
 ] 

Commit Tag Bot commented on SOLR-4415:
--

[branch_4x commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revision&revision=1447342

SOLR-4415: Add 'state' to shards (default to 'active') and read/write them to 
ZooKeeper (Fixed issue number in change log)


> Read/Write shard’s state to ZooKeeper
> -
>
> Key: SOLR-4415
> URL: https://issues.apache.org/jira/browse/SOLR-4415
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-4415.patch, SOLR-4415-withTests.patch
>
>
> Read/Write shard’s (at the Slice level) state to ZK. Make sure that the state 
> is watched and available on nodes.
> Also, check state of shard at read/write points where required.




[jira] [Commented] (SOLR-3755) shard splitting

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582829#comment-13582829
 ] 

Commit Tag Bot commented on SOLR-3755:
--

[branch_4x commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revision&revision=1447517

SOLR-3755: Do not create core on split action, use 'targetCore' param instead


> shard splitting
> ---
>
> Key: SOLR-3755
> URL: https://issues.apache.org/jira/browse/SOLR-3755
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Yonik Seeley
> Attachments: SOLR-3755-CoreAdmin.patch, SOLR-3755.patch, 
> SOLR-3755.patch, SOLR-3755-testSplitter.patch, SOLR-3755-testSplitter.patch
>
>
> We can currently easily add replicas to handle increases in query volume, but 
> we should also add a way to add additional shards dynamically by splitting 
> existing shards.




[jira] [Commented] (SOLR-4394) Add SSL tests and example configs

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582828#comment-13582828
 ] 

Commit Tag Bot commented on SOLR-4394:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1447956

SOLR-4394: Tests and example configs demonstrating SSL with both server and 
client certs (merge r1445971 + r1447885 + r1447952)


> Add SSL tests and example configs
> -
>
> Key: SOLR-4394
> URL: https://issues.apache.org/jira/browse/SOLR-4394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4394.patch, SOLR-4394.patch, SOLR-4394.patch, 
> SOLR-4394__phase2.patch
>
>
> We should provide some examples of running Solr+Jetty with SSL enabled, and 
> have some basic tests using Jetty over SSL.




[jira] [Commented] (LUCENE-4765) Multi-valued docvalues field

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582827#comment-13582827
 ] 

Commit Tag Bot commented on LUCENE-4765:


[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448085

LUCENE-4765: Multi-valued docvalues field


> Multi-valued docvalues field
> 
>
> Key: LUCENE-4765
> URL: https://issues.apache.org/jira/browse/LUCENE-4765
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4765.patch, LUCENE-4765.patch
>
>
> The general idea is basically the docvalues parallel to 
> FieldCache.getDocTermOrds/UninvertedField
> Currently this stuff is used in e.g. grouping and join for multivalued 
> fields, and in solr for faceting.




[jira] [Commented] (LUCENE-4781) Backport classification module to branch_4x

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582826#comment-13582826
 ] 

Commit Tag Bot commented on LUCENE-4781:


[branch_4x commit] Tommaso Teofili
http://svn.apache.org/viewvc?view=revision&revision=1448105

LUCENE-4781 - backporting classification module to branch_4x


> Backport classification module to branch_4x
> ---
>
> Key: LUCENE-4781
> URL: https://issues.apache.org/jira/browse/LUCENE-4781
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2
>
> Attachments: LUCENE-4781.patch
>
>
> Backport lucene/classification from trunk to branch_4x.




[jira] [Commented] (LUCENE-4781) Backport classification module to branch_4x

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582825#comment-13582825
 ] 

Commit Tag Bot commented on LUCENE-4781:


[branch_4x commit] Tommaso Teofili
http://svn.apache.org/viewvc?view=revision&revision=1448110

LUCENE-4781 - backporting missing javadoc fix


> Backport classification module to branch_4x
> ---
>
> Key: LUCENE-4781
> URL: https://issues.apache.org/jira/browse/LUCENE-4781
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2
>
> Attachments: LUCENE-4781.patch
>
>
> Backport lucene/classification from trunk to branch_4x.




[jira] [Commented] (LUCENE-4782) Let the NaiveBayes classifier have a fallback docCount method if codec doesn't support Terms#docCount()

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582823#comment-13582823
 ] 

Commit Tag Bot commented on LUCENE-4782:


[branch_4x commit] Tommaso Teofili
http://svn.apache.org/viewvc?view=revision&revision=1448210

LUCENE-4782 - backported fix to branch_4x


> Let the NaiveBayes classifier have a fallback docCount method if codec 
> doesn't support Terms#docCount()
> ---
>
> Key: LUCENE-4782
> URL: https://issues.apache.org/jira/browse/LUCENE-4782
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2, 5.0
>
>
> In _SimpleNaiveBayesClassifier_, the _docsWithClassSize_ variable is initialized 
> to _MultiFields.getTerms(this.atomicReader, 
> this.classFieldName).getDocCount()_, which may be -1 if the codec doesn't 
> support doc counts; therefore there should be an alternative way to 
> initialize it with the document count.
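
The fallback pattern is simple; sketched in plain Java below (the Terms interface here is a stand-in for illustration, not the Lucene API):

```java
public class DocCountFallback {
    // Stand-in for a per-field terms view whose codec may not track docCount.
    interface Terms {
        int getDocCount(); // fast statistic; may return -1 if unsupported
        int[] docs();      // ids of all docs that have the field
    }

    // Prefer the codec statistic; fall back to counting when it returns -1.
    static int docCount(Terms terms) {
        int fast = terms.getDocCount();
        if (fast != -1) return fast;   // codec supports the statistic
        return terms.docs().length;    // slower, but always available
    }

    public static void main(String[] args) {
        Terms unsupported = new Terms() {
            public int getDocCount() { return -1; }
            public int[] docs() { return new int[] {0, 2, 5}; }
        };
        System.out.println(docCount(unsupported)); // 3
    }
}
```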




[jira] [Commented] (LUCENE-4781) Backport classification module to branch_4x

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582822#comment-13582822
 ] 

Commit Tag Bot commented on LUCENE-4781:


[branch_4x commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1448346

LUCENE-4781: Add Maven configuration and fix IntelliJ configuration


> Backport classification module to branch_4x
> ---
>
> Key: LUCENE-4781
> URL: https://issues.apache.org/jira/browse/LUCENE-4781
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2
>
> Attachments: LUCENE-4781.patch
>
>
> Backport lucene/classification from trunk to branch_4x.




[jira] [Commented] (LUCENE-4781) Backport classification module to branch_4x

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582824#comment-13582824
 ] 

Commit Tag Bot commented on LUCENE-4781:


[branch_4x commit] Tommaso Teofili
http://svn.apache.org/viewvc?view=revision&revision=1448155

LUCENE-4781 - fixed forbidden APIs (java.util.Random)


> Backport classification module to branch_4x
> ---
>
> Key: LUCENE-4781
> URL: https://issues.apache.org/jira/browse/LUCENE-4781
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2
>
> Attachments: LUCENE-4781.patch
>
>
> Backport lucene/classification from trunk to branch_4x.




[jira] [Commented] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582821#comment-13582821
 ] 

Commit Tag Bot commented on LUCENE-4790:


[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448371

LUCENE-4790: FieldCache.getDocTermOrds back to the future bug


> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.
> This means that if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> The simple fix is to make DocTermOrds uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, and Solr's UninvertedField just keeps doing 
> what it's doing today (since it's a top-level reader, and cached somewhere 
> else).
> Also, DocTermOrds had a telescoping ctor that was uninverting twice. 




[jira] [Commented] (LUCENE-4789) Typos in API documentation

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582820#comment-13582820
 ] 

Commit Tag Bot commented on LUCENE-4789:


[branch_4x commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1448410

LUCENE-4789: fix typos (merge trunk r1448400)


> Typos in API documentation
> --
>
> Key: LUCENE-4789
> URL: https://issues.apache.org/jira/browse/LUCENE-4789
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Hao Zhong
>Assignee: Steve Rowe
> Fix For: 4.2, 5.0
>
>
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/analysis/package-summary.html
> neccessary->necessary 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/LogMergePolicy.html
> exceesd->exceed 
> http://lucene.apache.org/core/4_1_0/queryparser/serialized-form.html
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
> followng->following
> http://lucene.apache.org/core/4_1_0/codecs/org/apache/lucene/codecs/bloom/FuzzySet.html
> qccuracy->accuracy
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/search/params/FacetRequest.html
> methonds->methods
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
> implemetation->implementation
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/TimeLimitingCollector.html
> construcutor->constructor 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/store/BufferedIndexInput.html
> bufer->buffer
> http://lucene.apache.org/core/4_1_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.html
> horizonal->horizontal
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/writercache/lru/NameHashIntCacheLRU.html
>  
> cahce->cache
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/processors/BooleanQuery2ModifierNodeProcessor.html
> precidence->precedence
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie.html
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie2.html
> commmands->commands
> Please revise the documentation. 




[jira] [Commented] (SOLR-4477) match-only query support (terms,wildcards,ranges) for docvalues fields.

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582819#comment-13582819
 ] 

Commit Tag Bot commented on SOLR-4477:
--

[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448451

SOLR-4477: match-only query support for docvalues fields


> match-only query support (terms,wildcards,ranges) for docvalues fields.
> ---
>
> Key: SOLR-4477
> URL: https://issues.apache.org/jira/browse/SOLR-4477
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4477.patch
>
>
> Historically, you had to invert fields (indexed=true) to do any queries 
> against them.
> But now it's possible to build a forward index for the field (docValues=true).
> I think in many cases (e.g. a string field you only sort and match on), it's 
> unnecessary and wasteful to force the user to also invert if they don't need 
> scoring.
> So I think Solr should support match-only semantics in this case for 
> term, wildcard, range, etc.




[jira] [Commented] (LUCENE-4781) Backport classification module to branch_4x

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582818#comment-13582818
 ] 

Commit Tag Bot commented on LUCENE-4781:


[branch_4x commit] Steven Rowe
http://svn.apache.org/viewvc?view=revision&revision=1448473

LUCENE-4781: drop unnecessary specialization of dist-maven


> Backport classification module to branch_4x
> ---
>
> Key: LUCENE-4781
> URL: https://issues.apache.org/jira/browse/LUCENE-4781
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 4.2
>
> Attachments: LUCENE-4781.patch
>
>
> Backport lucene/classification from trunk to branch_4x.




[jira] [Commented] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582817#comment-13582817
 ] 

Commit Tag Bot commented on LUCENE-4790:


[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1448490

LUCENE-4790: nuke test workaround now that bug is fixed


> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.
> This means that if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> The simple fix is to make DocTermOrds uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, and Solr's UninvertedField just keeps doing 
> what it's doing today (since it's a top-level reader, and cached somewhere 
> else).
> Also, DocTermOrds had a telescoping ctor that was uninverting twice. 




[jira] [Commented] (LUCENE-4571) speedup disjunction with minShouldMatch

2013-02-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582816#comment-13582816
 ] 

Robert Muir commented on LUCENE-4571:
-

{quote}
But given that this new scorer drastically speeds up the BS2 case in the highly
restrictive cases, and only slows it down a bit for the other cases, I
think we should commit the new scorer, and then separately iterate on
the heuristics for when to choose which sub scorer?
{quote}

I think so too. I am jetlagged so will spend lots of time reviewing the patch 
tomorrow morning.

I do think we should fix our DisjunctionSumScorer to no longer do 
minShouldMatch and use it only for mm=1, as indicated by Stefan in his 
comment. This would move all the XXX0 cases from being slower to being 
either the same or slightly faster. :)

Separately, I can't help but be curious how the patch would perform if we 
combined it with the LUCENE-4607 patch (as we know that significantly helped 
conjunctions). As Stefan's TODO indicates, this would also reduce some of the 
CPU overhead in some of the worst cases, and I think the performance would 
then all look just fine.


> speedup disjunction with minShouldMatch 
> 
>
> Key: LUCENE-4571
> URL: https://issues.apache.org/jira/browse/LUCENE-4571
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 4.1
>Reporter: Mikhail Khludnev
> Attachments: LUCENE-4571.patch
>
>
> Even when minShouldMatch is supplied to DisjunctionSumScorer, it enumerates the 
> whole disjunction and verifies the minShouldMatch condition [on every 
> doc|https://github.com/apache/lucene-solr/blob/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionSumScorer.java#L70]:
> {code}
>   public int nextDoc() throws IOException {
>     assert doc != NO_MORE_DOCS;
>     while (true) {
>       // advance all sub-scorers still positioned on the previous doc
>       while (subScorers[0].docID() == doc) {
>         if (subScorers[0].nextDoc() != NO_MORE_DOCS) {
>           heapAdjust(0);
>         } else {
>           heapRemoveRoot();
>           if (numScorers < minimumNrMatchers) {
>             return doc = NO_MORE_DOCS;
>           }
>         }
>       }
>       // count matchers on the new top doc, then check the constraint
>       afterNext();
>       if (nrMatchers >= minimumNrMatchers) {
>         break;
>       }
>     }
> 
>     return doc;
>   }
> {code}
> [~spo] proposes (as far as I understand it) to pop nrMatchers-1 scorers from the 
> heap first, and then push them back after advancing them up to that top doc. My 
> first question: is there a performance test for minShouldMatch-constrained 
> disjunction?
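
The laggard-advancing idea can be sketched outside Lucene with plain sorted
postings arrays. This is only an illustrative simplification of the proposal,
not the actual patch; every name below is invented for the sketch. The
candidate doc is the one under the m-th least-advanced cursor, and the m-1
lagging cursors are skipped straight to it instead of stepping one doc at a
time through the whole disjunction:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MinShouldMatchSketch {
    // Returns docs matched by at least m of the given sorted postings lists.
    static List<Integer> matchAtLeast(int[][] postings, int m) {
        int n = postings.length;
        int[] pos = new int[n];                   // one cursor per postings list
        List<Integer> hits = new ArrayList<>();
        while (true) {
            List<int[]> live = new ArrayList<>(); // pairs of {listIndex, currentDoc}
            for (int i = 0; i < n; i++) {
                if (pos[i] < postings[i].length) {
                    live.add(new int[] {i, postings[i][pos[i]]});
                }
            }
            if (live.size() < m) {
                return hits;                      // too few lists left to ever reach m
            }
            live.sort(Comparator.comparingInt(a -> a[1]));
            int candidate = live.get(m - 1)[1];   // doc under the m-th smallest cursor
            // Skip the m-1 laggards forward to the candidate in one go.
            for (int k = 0; k < m - 1; k++) {
                int i = live.get(k)[0];
                while (pos[i] < postings[i].length && postings[i][pos[i]] < candidate) {
                    pos[i]++;
                }
            }
            int matches = 0;
            for (int i = 0; i < n; i++) {
                if (pos[i] < postings[i].length && postings[i][pos[i]] == candidate) {
                    matches++;
                }
            }
            if (matches >= m) {
                hits.add(candidate);
            }
            // Advance every cursor sitting on the candidate so it strictly increases.
            for (int i = 0; i < n; i++) {
                if (pos[i] < postings[i].length && postings[i][pos[i]] == candidate) {
                    pos[i]++;
                }
            }
        }
    }
}
```

A doc that fewer than m cursors could possibly reach is never visited, which
is exactly why the restrictive (high-mm) cases speed up.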

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Commit Tag Bot: MIA

2013-02-20 Thread Mark Miller
No real clue why - hard power out I guess…

Feb 17 21:58:01 fullmetal CRON[6806]: (mark) CMD (cd ~/git-jira-tagger;java -cp 
/home/mark/git-jira-tagger/bin:/home/mark/git-jira-tagger/lib/* GitJiraTagger 
2>&1 > /home/mark/git-jira-tagger/log.txt)
Feb 17 22:00:01 fullmetal CRON[6837]: (mark) CMD (cd ~/git-jira-tagger;java -cp 
/home/mark/git-jira-tagger/bin:/home/mark/git-jira-tagger/lib/* GitJiraTagger 
2>&1 > /home/mark/git-jira-tagger/log.txt)
Feb 17 22:02:01 fullmetal CRON[6866]: (mark) CMD (cd ~/git-jira-tagger;java -cp 
/home/mark/git-jira-tagger/bin:/home/mark/git-jira-tagger/lib/* GitJiraTagger 
2>&1 > /home/mark/git-jira-tagger/log.txt)
Feb 20 21:32:12 fullmetal kernel: imklog 5.8.6, log source = /proc/kmsg started.
Feb 20 21:32:12 fullmetal rsyslogd: [origin software="rsyslogd" 
swVersion="5.8.6" x-pid="958" x-info="http://www.rsyslog.com";] start
Feb 20 21:32:12 fullmetal rsyslogd: rsyslogd's groupid changed to 103

On Feb 20, 2013, at 9:32 PM, Mark Miller  wrote:

> Sorry - I didn't notice the machine was shutdown. Not sure the cause. Was out 
> of town for a bit and had not turned the monitor on since I got home. It's 
> booting now.
> 
> - Mark
> 
> On Feb 20, 2013, at 6:48 PM, Steve Rowe  wrote:
> 
>> I haven't seen any activity from the Commit Tag Bot for about 72 hours.
>> 
>> Mark, is there something wrong with it?
>> 
>> Steve
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Commit Tag Bot: MIA

2013-02-20 Thread Mark Miller
Sorry - I didn't notice the machine was shutdown. Not sure the cause. Was out 
of town for a bit and had not turned the monitor on since I got home. It's 
booting now.

- Mark

On Feb 20, 2013, at 6:48 PM, Steve Rowe  wrote:

> I haven't seen any activity from the Commit Tag Bot for about 72 hours.
> 
> Mark, is there something wrong with it?
> 
> Steve
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4449) Enable backup requests for the internal solr load balancer

2013-02-20 Thread Raintung Li (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582805#comment-13582805
 ] 

Raintung Li commented on SOLR-4449:
---

Hi Philip, although all threads come from the thread pool and no new threads are 
created, a backup request still occupies a thread in that pool.
For one search request you will use twice as many threads as before in the first 
request (the normal case). If 100 requests come in, that means an additional 300 
threads will be used for 3 shards.
 
If we instead wait for the response and send the second request inside the 
HttpShardHandler.submit method, it will reduce the unnecessary thread cost.
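
The wait-then-hedge pattern being discussed can be sketched with plain
java.util.concurrent primitives. This is only an illustration of the idea, not
Solr's HttpShardHandler or load-balancer code; all names here are invented for
the sketch:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BackupRequestSketch {
    // Wait up to backupDelayMs for the primary request; if it has not
    // answered by then, hedge with a backup request and return whichever
    // of the two finishes first.
    static String queryWithBackup(ExecutorService pool,
                                  Callable<String> primary,
                                  Callable<String> backup,
                                  long backupDelayMs) throws Exception {
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        cs.submit(primary);
        Future<String> first = cs.poll(backupDelayMs, TimeUnit.MILLISECONDS);
        if (first != null) {
            return first.get();   // primary answered within the delay
        }
        cs.submit(backup);        // primary is slow: send the hedged request
        return cs.take().get();   // first of the two to complete wins
    }
}
```

Hedging after a delay caps the extra thread cost: the backup thread is only
consumed when the primary has already exceeded the delay, instead of
unconditionally doubling the threads used per shard.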


> Enable backup requests for the internal solr load balancer
> --
>
> Key: SOLR-4449
> URL: https://issues.apache.org/jira/browse/SOLR-4449
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: philip hoy
>Priority: Minor
> Attachments: SOLR-4449.patch
>
>
> Add the ability to configure the built-in solr load balancer such that it 
> submits a backup request to the next server in the list if the initial 
> request takes too long. Employing such an algorithm could improve the latency 
> of the 9xth percentile albeit at the expense of increasing overall load due 
> to additional requests. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4432) Developer Curb Appeal: Eliminate the need to run Solr example once in order to unpack needed files

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582801#comment-13582801
 ] 

Mark Miller commented on SOLR-4432:
---

bq. and that has to be consistently done on all 4 machines.

That's never been a problem for me. The only sane way I've ever managed a 
SolrCloud cluster is minimally using shell scripts and ssh. I write one script, 
and it runs on all the nodes.

You also don't need to extract the war on all 4 machines - just the machine you 
are going to run the zkcli script on - you need to access the class files to 
run it. The other nodes will auto extract the war when you start jetty/solr.

> Developer Curb Appeal: Eliminate the need to run Solr example once in order 
> to unpack needed files
> --
>
> Key: SOLR-4432
> URL: https://issues.apache.org/jira/browse/SOLR-4432
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> In the SolrCloud instructions it says you must run Solr in the example 
> directory at least once in order to unpack some files, so that you can then use 
> the example directory as a template for shards.
> Ideally we would unpack whatever we need, or do this automatically.
> Doc reference:
> http://lucidworks.lucidimagination.com/display/solr/Getting+Started+with+SolrCloud
> See the red box that says:
> "Make sure to run Solr from the example directory in non-SolrCloud mode at 
> least once before beginning; this process unpacks the jar files necessary to 
> run SolrCloud. On the other hand, make sure also that there are no documents 
> in the example directory before making copies."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4432) Developer Curb Appeal: Eliminate the need to run Solr example once in order to unpack needed files

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582799#comment-13582799
 ] 

Mark Miller commented on SOLR-4432:
---

bq. And if we know we need it, then why not just do it automatically?

Because the only way to do it automatically in a webapp world is to put it in 
the Solr dist unzipped already - meaning we either stop shipping the war file 
as well (which some people use), or we balloon the size of the dist.

Making people put in a one-liner to unzip it before using the zkcli tool seemed 
preferable to me.

I suppose you could make the argument that it's better we ship it exploded and 
force those using the war to zip it up into a war themselves. I'd be open to 
that.

> Developer Curb Appeal: Eliminate the need to run Solr example once in order 
> to unpack needed files
> --
>
> Key: SOLR-4432
> URL: https://issues.apache.org/jira/browse/SOLR-4432
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> In the SolrCloud instructions it says you must run Solr in the example 
> directory at least once in order to unpack some files, so that you can then use 
> the example directory as a template for shards.
> Ideally we would unpack whatever we need, or do this automatically.
> Doc reference:
> http://lucidworks.lucidimagination.com/display/solr/Getting+Started+with+SolrCloud
> See the red box that says:
> "Make sure to run Solr from the example directory in non-SolrCloud mode at 
> least once before beginning; this process unpacks the jar files necessary to 
> run SolrCloud. On the other hand, make sure also that there are no documents 
> in the example directory before making copies."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4434) Developer Curb Appeal: Better options than the manual copy step, and doc changes

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582796#comment-13582796
 ] 

Mark Miller commented on SOLR-4434:
---

bq. That really only makes sense if you're trying to run multiple nodes on a 
single laptop.

Which is what you are doing in the documentation that you refer to on the 
Apache wiki.

bq. I don't fully understand the distribution of labor between the wiki and 
Lucid's search hub. Not sure who "keeps them in sync".

Lucid does - like I said, it has nothing to do with the Apache community. Our 
stuff is only on the Apache Solr wiki.

> Developer Curb Appeal: Better options than the manual copy step, and doc 
> changes
> 
>
> Key: SOLR-4434
> URL: https://issues.apache.org/jira/browse/SOLR-4434
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> We make developers manually copy the example directory to a named shard 
> directory.
> Doc references:
> http://lucidworks.lucidimagination.com/display/solr/Getting+Started+with+SolrCloud
> http://wiki.apache.org/solr/SolrCloud
> Sample commands:
> cp -r example shard1
> cp -r example shard2
> The doc is perhaps geared towards a developer laptop, so in that case you 
> really would need to make sure they have different names.
> But if you're running on a more realistic multi-node system, let's say 4 
> nodes handling 2 shards, the actual shard allocation (shard1 vs. shard2) 
> will be fixed by the order each node is started in FOR THE FIRST TIME.
> At a minimum, we should do a better job of explaining the somewhat arbitrary 
> nature of the destination directories, and that the start order is what 
> really matters.
> We should also document that the actual shard assignment will not change, 
> regardless of the name, and where this information is persisted.
> Could we have an intelligent guess as to what template directory to use, and 
> do the copy when the node is first started?
> It's apparently also possible to startup the first Solr node with no cores 
> and just point it at a template.  This would be good to document.  There's 
> currently a bug in the Web UI if you do this, but I'll be logging another 
> JIRA for that.
> When combined with all the other little details of bringing up SolrCloud 
> nodes, this is confusing to a newcomer and mildly annoying.  Other engines 
> don't require this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582791#comment-13582791
 ] 

Mark Miller commented on SOLR-4450:
---

bq. Everything else should be specified in solr.xml or its .properties 
replacement when that's ready.

That's already how it is basically. Solr.xml is what allows you to then send in 
values by system prop - which is much easier to do for the getting started demo.

Anyone deploying to production doesn't need to pass those sys props. They can 
just put them in solr.xml. The outlier is numShards - but it's kind of a hack 
living in the old world. The new world is the collections API.

> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582789#comment-13582789
 ] 

Mark Miller commented on SOLR-4450:
---

bq. Another possibility, which might work better in a "do everything the same" 
environment - start up initially with no config sets, no cores, no bootstrap 
options then use the zkcli script included with Solr to load configs.

Right. And we have a few options here regarding config - we should continue 
improving the zkcli tool to make it very easy to pop up a config folder for a 
collection (it's not bad now, but could be polished), and we should continue on 
the issues that make it easy to post new config files to a collection. That 
would also give you the option of starting a new collection with whatever 
random default minimal config, and then you could just post a new schema.xml, 
then solrconfig.xml when you have the schema right, etc. We also have issues 
around specifying configuration files when you use the collections create 
API. 

I think we already have a lot of momentum towards making a lot of this simpler, 
it just requires some work from people to finish it off.

> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4481) SwitchQParserPlugin

2013-02-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582785#comment-13582785
 ] 

Hoss Man commented on SOLR-4481:


I'd appreciate any feedback on this idea, particularly on the user API ("s", 
"s.*", and "defSwitch" .. not super stoked about "defSwitch" but I couldn't 
think of a better name).

I also wrote a blog post with some background about where this idea came from...

http://searchhub.org/2013/02/20/custom-solr-request-params/

> SwitchQParserPlugin
> ---
>
> Key: SOLR-4481
> URL: https://issues.apache.org/jira/browse/SOLR-4481
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-4481.patch
>
>
> Inspired by a conversation I had with someone on IRC a while back about using 
> "append" fq params + local params to create custom request params, it 
> occurred to me that it would be handy to have a "switch" qparser that could 
> be configured with some set of fixed "switch case" localparams that it would 
> delegate to based on its input string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582783#comment-13582783
 ] 

Shawn Heisey commented on SOLR-4450:


I'm going to suggest something radical now.  I think that the only config 
option you should NEED to give Solr is solr.solr.home, which of course has 
./solr as a default.  Everything else should be specified in solr.xml or its 
.properties replacement when that's ready.


> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582782#comment-13582782
 ] 

Mark Miller commented on SOLR-4450:
---

bq. The idea I was thinking of was that we'd come up in multicast by default

Not a big fan of discovery through multicast.

I don't think this is really the direction we want to go in general regarding 
the command line params. The current SolrCloud examples were built before the 
Collections API and many other pieces were finished, and leaned heavily on 
single-node Solr conventions. We should be migrating towards the Collections 
API - where you start a bunch of nodes and then call a create command with the 
Collections API as your first order of business. It's the favored way already. 
You can't preconfigure multiple collections with different numShards right now.

I think the right approach here is to simply finish polishing off the 
Collections API and starting up Solr without any cores, and when that is really 
nice, hopefully someone can port the getting started wiki to that style.

> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4481) SwitchQParserPlugin

2013-02-20 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4481:
---

Attachment: SOLR-4481.patch

Patch for review (includes docs & tests).

some example usages straight from javadocs...

{noformat}
 In the examples below, the result of each query would be XXX

 q = {!switch s.foo=XXX s.bar=zzz s.yak=qqq}foo
 q = {!switch s.foo=qqq s.bar=XXX s.yak=zzz} bar // extra whitespace
 q = {!switch defSwitch=XXX s.foo=qqq s.bar=zzz}asdf // fallback on defSwitch
 q = {!switch s=XXX s.bar=zzz s.yak=qqq} // blank input
{noformat}
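
The dispatch those examples describe amounts to a map lookup with two
fallbacks (a blank-input case and a default). A minimal model in plain Java;
the names and lookup order are invented for the sketch and this is not the
actual parser code:

```java
import java.util.Map;

public class SwitchSketch {
    // Hypothetical model of the "switch" dispatch: local params prefixed
    // "s." form the cases, "s" handles blank input, "defSwitch" is the
    // fallback when no case matches.
    static String resolve(Map<String, String> localParams, String input) {
        String key = input.trim();
        String val = localParams.get("s." + key);
        if (val != null) {
            return val;                            // exact case match
        }
        if (key.isEmpty() && localParams.containsKey("s")) {
            return localParams.get("s");           // blank-input case
        }
        String def = localParams.get("defSwitch"); // default fallback
        if (def != null) {
            return def;
        }
        throw new IllegalArgumentException("no switch case for: " + input);
    }
}
```

With the cases from the examples above, input "foo" resolves to its s.foo
value, an unmatched input falls back to defSwitch, and blank input resolves
via the bare "s" param.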

{panel}
 A practical usage of this QParserPlugin is in specifying "appends" fq params 
in the configuration of a SearchHandler, to provide a fixed set of filter 
options for clients using custom parameter names. Using the example 
configuration below, clients can optionally specify the custom parameters 
in_stock and shipping to override the default filtering behavior, but are 
limited to the specific set of legal values (shipping=any|free, 
in_stock=yes|no|all).

{code}
 <requestHandler name="/select" class="solr.SearchHandler">
   <lst name="defaults">
     <str name="in_stock">yes</str>
     <str name="shipping">any</str>
   </lst>
   <lst name="appends">
     <str name="fq">{!switch s.all='*:*'
                     s.yes='inStock:true'
                     s.no='inStock:false'
                     v=$in_stock}</str>
     <str name="fq">{!switch s.any='*:*'
                     s.free='shipping_cost:0.0'
                     v=$shipping}</str>
   </lst>
 </requestHandler>
{code}
{panel}

> SwitchQParserPlugin
> ---
>
> Key: SOLR-4481
> URL: https://issues.apache.org/jira/browse/SOLR-4481
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-4481.patch
>
>
> Inspired by a conversation I had with someone on IRC a while back about using 
> "append" fq params + local params to create custom request params, it 
> occurred to me that it would be handy to have a "switch" qparser that could 
> be configured with some set of fixed "switch case" localparams that it would 
> delegate to based on its input string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4481) SwitchQParserPlugin

2013-02-20 Thread Hoss Man (JIRA)
Hoss Man created SOLR-4481:
--

 Summary: SwitchQParserPlugin
 Key: SOLR-4481
 URL: https://issues.apache.org/jira/browse/SOLR-4481
 Project: Solr
  Issue Type: New Feature
Reporter: Hoss Man
Assignee: Hoss Man


Inspired by a conversation I had with someone on IRC a while back about using 
"append" fq params + local params to create custom request params, it occurred 
to me that it would be handy to have a "switch" qparser that could be 
configured with some set of fixed "switch case" localparams that it would 
delegate to based on its input string.




[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582771#comment-13582771
 ] 

Mark Miller commented on SOLR-4450:
---

bq. which seems logical but doesn't work

It doesn't work because you are using ZkRun incorrectly - probably because it 
is not documented well on the wiki.

> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)
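The "port out of range:-1" and NumberFormatException failures above are consistent with the connect string being split naively on ':' per entry. A hypothetical sketch of that failure mode (not Solr's actual parsing code): a ZooKeeper chroot suffix like "/solrroot" belongs to the whole connect string, not to the last host entry, so per-entry parsing chokes on it.

```python
def naive_parse(zk_host):
    """Split a zkHost connect string into (host, port) pairs the naive
    way: comma-separated entries, each split once on ':'.  Appending a
    chroot path to the last entry breaks the port conversion."""
    servers = []
    for entry in zk_host.split(","):
        host, _, port = entry.partition(":")
        servers.append((host, int(port)))  # int("9983/solrroot") raises ValueError
    return servers

print(naive_parse("nodeA:9983,nodeB:9983"))
try:
    naive_parse("nodeA:9983,nodeB:9983/solrroot")
except ValueError as exc:
    print("parse failure:", exc)
```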




[jira] [Comment Edited] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582769#comment-13582769
 ] 

Shawn Heisey edited comment on SOLR-4450 at 2/21/13 1:30 AM:
-

The following paragraph does not address the initial idea in this issue of 
allowing startup re-bootstrapping of config sets when using an init script.  It 
only addresses the fact that configName would (IMHO) be a bad option name for 
differentiating multicast.

I'm going to have several config sets stored in zookeeper and even more 
collections that use those config sets, so using something called configName 
for a multicast identifier is *very* confusing.  If multicasting is added to 
Solr, a better name for that option would be mcastName or multicastName.  It 
would be even better to also allow configuring the multicast address and UDP 
port number.


  was (Author: elyograg):
The following paragraph does not address the initial idea in this issue of 
allowing startup re-bootstrapping of config steps when using an init script.  
It only addresses the fact that configName would (IMHO) be a bad option name 
for differentiating multicast.

I'm going to have several config sets stored in zookeeper and even more 
collections that use those config sets, so using something called configName 
for a multicast identifier is *very* confusing.  If multicasting is added to 
Solr, a better name for that option would be mcastName or multicastName.  It 
would be even better to also allow configuring the multicast address and UDP 
port number.

  
> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)




[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582769#comment-13582769
 ] 

Shawn Heisey commented on SOLR-4450:


The following paragraph does not address the initial idea in this issue of 
allowing startup re-bootstrapping of config steps when using an init script.  
It only addresses the fact that configName would (IMHO) be a bad option name 
for differentiating multicast.

I'm going to have several config sets stored in zookeeper and even more 
collections that use those config sets, so using something called configName 
for a multicast identifier is *very* confusing.  If multicasting is added to 
Solr, a better name for that option would be mcastName or multicastName.  It 
would be even better to also allow configuring the multicast address and UDP 
port number.


> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)




[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Paul Doscher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582761#comment-13582761
 ] 

Paul Doscher commented on SOLR-4450:


So what you are saying is you want to copy ElasticSearch?

> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)




[jira] [Commented] (SOLR-3633) web UI reports an error if CoreAdminHandler says there are no SolrCores

2013-02-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582757#comment-13582757
 ] 

Mark Miller commented on SOLR-3633:
---

I just see it as one of a few pieces, but I only updated the existing patch 
which is essentially just what hossman describes above - I can tweak the UI 
around, but I don't have any immediate plans to develop any features. Hopefully 
the guys that have been pushing the UI forward will lend a hand for further 
work in this area.

> web UI reports an error if CoreAdminHandler says there are no SolrCores
> ---
>
> Key: SOLR-3633
> URL: https://issues.apache.org/jira/browse/SOLR-3633
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.2
>
> Attachments: SOLR-3633.patch, SOLR-3633.patch
>
>
> Spun off from SOLR-3591...
> * having no SolrCores is a valid situation
> * independent of what may happen in SOLR-3591, the web UI should cleanly deal 
> with there being no SolrCores, and just hide/grey out any tabs that can't be 
> supported w/o at least one core
> * even if there are no SolrCores the core admin features (ie: creating a new 
> core) should be accessible in the UI




[jira] [Resolved] (SOLR-4477) match-only query support (terms,wildcards,ranges) for docvalues fields.

2013-02-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-4477.
---

   Resolution: Fixed
Fix Version/s: 5.0
   4.2

> match-only query support (terms,wildcards,ranges) for docvalues fields.
> ---
>
> Key: SOLR-4477
> URL: https://issues.apache.org/jira/browse/SOLR-4477
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4477.patch
>
>
> Historically, you had to invert fields (indexed=true) to do any queries 
> against them.
> But now its possible to build a forward index for the field (docValues=true).
> I think in many cases (e.g. a string field you only sort and match on), its 
> unnecessary and wasteful
> to force the user to also invert if they don't need scoring.
> So I think solr should support match-only semantics in this case for 
> term,wildcard,range,etc.




[jira] [Commented] (SOLR-3633) web UI reports an error if CoreAdminHandler says there are no SolrCores

2013-02-20 Thread Mark Bennett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582729#comment-13582729
 ] 

Mark Bennett commented on SOLR-3633:


Hi Mark, thanks for the patch.

I see this:
+   // :TODO: "Add Core" Button

Any thoughts on that?  To me this seems like the most important part of the 
issue.

> web UI reports an error if CoreAdminHandler says there are no SolrCores
> ---
>
> Key: SOLR-3633
> URL: https://issues.apache.org/jira/browse/SOLR-3633
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.2
>
> Attachments: SOLR-3633.patch, SOLR-3633.patch
>
>
> Spun off from SOLR-3591...
> * having no SolrCores is a valid situation
> * independent of what may happen in SOLR-3591, the web UI should cleanly deal 
> with there being no SolrCores, and just hide/grey out any tabs that can't be 
> supported w/o at least one core
> * even if there are no SolrCores the core admin features (ie: creating a new 
> core) should be accessible in the UI




[jira] [Commented] (SOLR-4480) EDisMax parser blows up with query containing single plus or minus

2013-02-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582725#comment-13582725
 ] 

Jan Høydahl commented on SOLR-4480:
---

Thanks for reporting this. As EDisMax is all about being robust and never 
crashing, this must be fixed.
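One defensive approach, sketched here as a hypothetical pre-parse guard (not the actual fix committed for this issue), is to strip operator characters that have no operand before handing the string to the parser, falling back to a match-all query when nothing is left:

```python
import re

def sanitize_lucene_query(q):
    """Drop '+'/'-' runs that stand alone (no attached term), so a
    bare '+' degrades to a match-all query instead of a ParseException.
    Operators attached to a term ('+solr') are left untouched."""
    stripped = re.sub(r"(?:^|\s)[+\-]+(?=\s|$)", " ", q).strip()
    return stripped or "*:*"  # nothing left -> match-all

print(sanitize_lucene_query("+"))        # *:*
print(sanitize_lucene_query("+solr -"))  # +solr
```

The lookahead keeps `+solr` intact because its '+' is followed by a term character, while the trailing lone '-' matches and is removed.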

> EDisMax parser blows up with query containing single plus or minus
> --
>
> Key: SOLR-4480
> URL: https://issues.apache.org/jira/browse/SOLR-4480
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Reporter: Fiona Tay
>Priority: Critical
> Fix For: 4.2
>
>
> We are running solr with sunspot and when we set up a query containing a 
> single plus, Solr blows up with the following error:
> SOLR Request (5.0ms)  [ path=# parameters={data: 
> fq=type%3A%28Attachment+OR+User+OR+GpdbDataSource+OR+HadoopInstance+OR+GnipInstance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=type_name_s%3A%28Attachment+OR+User+OR+Instance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&q=%2B&fl=%2A+score&qf=name_texts+first_name_texts+last_name_texts+file_name_texts&defType=edismax&hl=on&hl.simple.pre=%40%40%40hl%40%40%40&hl.simple.post=%40%40%40endhl%40%40%40&start=0&rows=3,
>  method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: 
> {"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: 
> select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , 
> read_timeout: } ]
> RSolr::Error::Http (RSolr::Error::Http - 400 Bad Request
> Error: org.apache.lucene.queryParser.ParseException: Cannot parse '': 
> Encountered "" at line 1, column 0.
> Was expecting one of:
>  ...
> "+" ...
> "-" ...
> "(" ...
> "*" ...
>  ...
>  ...
>  ...
>  ...




[jira] [Updated] (SOLR-4480) EDisMax parser blows up with query containing single plus or minus

2013-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4480:
--

Fix Version/s: 4.2

> EDisMax parser blows up with query containing single plus or minus
> --
>
> Key: SOLR-4480
> URL: https://issues.apache.org/jira/browse/SOLR-4480
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Reporter: Fiona Tay
>Priority: Minor
> Fix For: 4.2
>
>
> We are running solr with sunspot and when we set up a query containing a 
> single plus, Solr blows up with the following error:
> SOLR Request (5.0ms)  [ path=# parameters={data: 
> fq=type%3A%28Attachment+OR+User+OR+GpdbDataSource+OR+HadoopInstance+OR+GnipInstance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=type_name_s%3A%28Attachment+OR+User+OR+Instance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&q=%2B&fl=%2A+score&qf=name_texts+first_name_texts+last_name_texts+file_name_texts&defType=edismax&hl=on&hl.simple.pre=%40%40%40hl%40%40%40&hl.simple.post=%40%40%40endhl%40%40%40&start=0&rows=3,
>  method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: 
> {"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: 
> select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , 
> read_timeout: } ]
> RSolr::Error::Http (RSolr::Error::Http - 400 Bad Request
> Error: org.apache.lucene.queryParser.ParseException: Cannot parse '': 
> Encountered "" at line 1, column 0.
> Was expecting one of:
>  ...
> "+" ...
> "-" ...
> "(" ...
> "*" ...
>  ...
>  ...
>  ...
>  ...




[jira] [Updated] (SOLR-4480) EDisMax parser blows up with query containing single plus or minus

2013-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4480:
--

Priority: Critical  (was: Minor)

> EDisMax parser blows up with query containing single plus or minus
> --
>
> Key: SOLR-4480
> URL: https://issues.apache.org/jira/browse/SOLR-4480
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Reporter: Fiona Tay
>Priority: Critical
> Fix For: 4.2
>
>
> We are running solr with sunspot and when we set up a query containing a 
> single plus, Solr blows up with the following error:
> SOLR Request (5.0ms)  [ path=# parameters={data: 
> fq=type%3A%28Attachment+OR+User+OR+GpdbDataSource+OR+HadoopInstance+OR+GnipInstance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=type_name_s%3A%28Attachment+OR+User+OR+Instance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&q=%2B&fl=%2A+score&qf=name_texts+first_name_texts+last_name_texts+file_name_texts&defType=edismax&hl=on&hl.simple.pre=%40%40%40hl%40%40%40&hl.simple.post=%40%40%40endhl%40%40%40&start=0&rows=3,
>  method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: 
> {"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: 
> select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , 
> read_timeout: } ]
> RSolr::Error::Http (RSolr::Error::Http - 400 Bad Request
> Error: org.apache.lucene.queryParser.ParseException: Cannot parse '': 
> Encountered "" at line 1, column 0.
> Was expecting one of:
>  ...
> "+" ...
> "-" ...
> "(" ...
> "*" ...
>  ...
>  ...
>  ...
>  ...




[jira] [Commented] (SOLR-4434) Developer Curb Appeal: Better options than the manual copy step, and doc changes

2013-02-20 Thread Mark Bennett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582718#comment-13582718
 ] 

Mark Bennett commented on SOLR-4434:


Understood, exampleN vs. shardN, but still using some ordinal set of 
directories.  That really only makes sense if you're trying to run multiple 
nodes on a single laptop.

I don't fully understand the distribution of labor between the wiki and Lucid's 
search hub.  Not sure who "keeps them in sync".

> Developer Curb Appeal: Better options than the manual copy step, and doc 
> changes
> 
>
> Key: SOLR-4434
> URL: https://issues.apache.org/jira/browse/SOLR-4434
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> We make developers manually copy the example directory to a named shard 
> directory.
> Doc references:
> http://lucidworks.lucidimagination.com/display/solr/Getting+Started+with+SolrCloud
> http://wiki.apache.org/solr/SolrCloud
> Sample commands:
> cp -r example shard1
> cp -r example shard2
> The doc is perhaps geared towards a developer laptop, so in that case you 
> really would need to make sure they have different names.
> But if you're running on a more realistic multi-node system, let's say 4 
> nodes handling 2 shards, then the actual shard allocation (shard1 vs. shard2) 
> will be fixed by the order each node is started in FOR THE FIRST TIME.
> At a minimum, we should do a better job of explaining the somewhat arbitrary 
> nature of the destination directories, and that the start order is what 
> really matters.
> We should also document that the actual shard assignment will not change, 
> regardless of the name, and where this information is persisted?
> Could we have an intelligent guess as to what template directory to use, and 
> do the copy when the node is first started.
> It's apparently also possible to startup the first Solr node with no cores 
> and just point it at a template.  This would be good to document.  There's 
> currently a bug in the Web UI if you do this, but I'll be logging another 
> JIRA for that.
> When combined with all the other little details of bringing up Solr Cloud 
> nodes, this is confusing to a newcomer and mildly annoying.  Other engines 
> don't require this.




[jira] [Commented] (SOLR-4432) Developer Curb Appeal: Eliminate the need to run Solr example once in order to unpack needed files

2013-02-20 Thread Mark Bennett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582717#comment-13582717
 ] 

Mark Bennett commented on SOLR-4432:


Hi Mark,

Although I agree with your comment, it's yet another manual step to get 
wrong, and one that has to be done consistently on all 4 machines.

If this were the only issue, maybe it's minor, but all those stupid little 
commands to remember all add up, especially when you're new.  Solr has a lot of 
those fiddly little things that more modern engines take care of automatically.

And if we know we need it, then why not just do it automatically?

> Developer Curb Appeal: Eliminate the need to run Solr example once in order 
> to unpack needed files
> --
>
> Key: SOLR-4432
> URL: https://issues.apache.org/jira/browse/SOLR-4432
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> In the SolrCloud instructions it says you must run the solr in the example 
> directory at least once in order to unpack some files, in order to then use 
> the example directory as a template for shards.
> Ideally we would unpack whatever we need, or do this automatically.
> Doc reference:
> http://lucidworks.lucidimagination.com/display/solr/Getting+Started+with+SolrCloud
> See the red box that says:
> "Make sure to run Solr from the example directory in non-SolrCloud mode at 
> least once before beginning; this process unpacks the jar files necessary to 
> run SolrCloud. On the other hand, make sure also that there are no documents 
> in the example directory before making copies."




[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2013-02-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582714#comment-13582714
 ] 

Jan Høydahl commented on SOLR-4470:
---

{quote}
1) Make Solr URL structure "right" - e.g. "/solr/update/collection1"
2) Make obvious security constraints like "protecting update" or "protecting 
search" etc. impossible to be done by web.xml configuration, and leave it up to 
"programmatic protection"
{quote}
I think 1) is a non-starter, because someone may have another use case, namely 
assigning collections to different customers, so that the collection matters 
more than the action. It all boils down to this: those you authenticate to 
access your server should be trusted enough not to muck around deleting indices 
or the like. If you need a really fine-grained authorization scheme, then put a 
programmable proxy in front of Solr, like Varnish or something!

This issue should be about BASIC auth and perhaps certificate-based auth, with 
the intention of blocking out people or machines that should not have access to 
search at all, versus those that should. Adding detailed authorization support 
would be a completely different beast of a JIRA.

> Support for basic http auth in internal solr requests
> -
>
> Key: SOLR-4470
> URL: https://issues.apache.org/jira/browse/SOLR-4470
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, multicore, replication (java), SolrCloud
>Affects Versions: 4.0
>Reporter: Per Steffensen
>  Labels: authentication, solrclient, solrcloud
> Fix For: 4.2
>
>
> We want to protect any HTTP-resource (url). We want to require credentials no 
> matter what kind of HTTP-request you make to a Solr-node.
> It can fairly easily be achieved as described at 
> http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr-nodes 
> also make "internal" requests to other Solr-nodes, and for those to work 
> credentials need to be provided there also.
> Ideally we would like to "forward" credentials from a particular request to 
> all the "internal" sub-requests it triggers. E.g. for search and update 
> requests.
> But there are also "internal" requests
> * that are only indirectly/asynchronously triggered from "outside" requests (e.g. 
> shard creation/deletion/etc based on calls to the "Collection API")
> * that do not in any way have relation to an "outside" "super"-request (e.g. 
> replica synching stuff)
> We would like to aim at a solution where "original" credentials are 
> "forwarded" when a request directly/synchronously triggers a sub-request, 
> with a fallback to configured "internal credentials" for the 
> asynchronous/non-rooted requests.
> In our solution we would aim at supporting only basic HTTP auth, but we would 
> like to build a "framework" around it, so that not too much refactoring is 
> needed if you later want to add support for other kinds of auth (e.g. digest).
> We will work on a solution, but are creating this JIRA issue early in order 
> to get input/comments from the community as early as possible.
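
The forward-or-fallback rule described above can be sketched as follows. This is purely illustrative: the function name `pick_credentials`, the `INTERNAL_CREDENTIALS` config value, and the header handling are hypothetical, not actual Solr API.

```python
import base64

# Assumed config: the "internal credentials" used for async/non-rooted requests.
INTERNAL_CREDENTIALS = ("internal-user", "internal-secret")

def basic_header(user, password):
    """Build a Basic auth Authorization header from user/password."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def pick_credentials(incoming_auth_header, synchronous):
    """Choose the Authorization header for an internal sub-request:
    forward the original request's credentials when the sub-request is
    directly/synchronously triggered, otherwise fall back to the
    configured internal credentials (e.g. replica sync, Collection API)."""
    if synchronous and incoming_auth_header:
        return {"Authorization": incoming_auth_header}
    return basic_header(*INTERNAL_CREDENTIALS)
```

The same selection logic would apply regardless of auth scheme, which is the point of the "framework" the description mentions.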




[jira] [Commented] (SOLR-4450) Developer Curb Appeal: Need consistent command line arguments for all nodes

2013-02-20 Thread Mark Bennett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582710#comment-13582710
 ] 

Mark Bennett commented on SOLR-4450:


The idea I was thinking of was that we'd come up in multicast by default, BUT 
also with a named config.

So I could start up 4 instances with -configName "MarksLab"

Then you start yours up with -configName "ShawnsLab"

And even though we're using multicast on the same network segment, we don't 
accidentally collide with each other.

> Developer Curb Appeal: Need consistent command line arguments for all nodes
> ---
>
> Key: SOLR-4450
> URL: https://issues.apache.org/jira/browse/SOLR-4450
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Mark Bennett
> Fix For: 4.2
>
>
> Suppose you want to create a small 4 node cluster (2x2, two shards, each 
> replicated), each on its own machine.
> It'd be nice to use the same script in /etc/init.d to start them all, but 
> it's hard to come up with a set of arguments that works for both the first 
> and subsequent nodes.
> When MANUALLY starting them, the arguments for the first node are different 
> than for subsequent nodes:
> Node A like this:
> -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig -jar start.jar
> Vs. the other 3 nodes, B, C, D:
>   -DzkHost=nodeA:9983 -jar start.jar
> But if you combine them, you either still have to rely on Node A being up 
> first, and have all nodes reference it:
> -DzkRun -DzkHost=nodeA:9983 -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=MyConfig
> OR you can try to specify the address of all 4 machines, in all 4 startup 
> scripts, which seems logical but doesn't work:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> This gives an error:
> org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
> This thread suggests a possible change in syntax, but doesn't seem to work 
> (at least with the embedded ZooKeeper)
> Thread:
> http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-td4014440.html
> Syntax:
> -DzkRun -DzkHost=nodeA:9983,nodeB:9983,nodeC:9983,nodeD:9983/solrroot 
> -DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf 
> -Dcollection.configName=MyConfig
> Error:
> SEVERE: Could not start Solr. Check solr/home property and the logs
> Feb 12, 2013 1:36:49 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.NumberFormatException: For input string: 
> "9983/solrroot"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> So:
> * There needs to be some syntax that all nodes can run, even if it requires 
> listing addresses  (or multicast!)
> * And then clear documentation about suggesting external ZooKeeper to be used 
> for production (list being maintained in SOLR-)
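
The `NumberFormatException: "9983/solrroot"` above suggests the failing parser treated the chroot suffix as part of the last host's port. In ZooKeeper's connect-string format, a chroot applies to the whole ensemble, not the last host; a sketch of that parsing rule (illustrative only, not Solr's actual parser):

```python
def parse_zk_host(zk_host):
    """Parse a ZooKeeper connect string 'host1:port1,host2:port2/chroot'.
    The chroot suffix is split off the whole string first, so the last
    port stays purely numeric. Illustrative sketch only."""
    chroot = None
    if "/" in zk_host:
        zk_host, chroot = zk_host.split("/", 1)
        chroot = "/" + chroot
    servers = []
    for part in zk_host.split(","):
        host, _, port = part.partition(":")
        servers.append((host, int(port)))  # port must be numeric by here
    return servers, chroot
```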




[jira] [Commented] (SOLR-3191) field exclusion from fl

2013-02-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582700#comment-13582700
 ] 

Jan Høydahl commented on SOLR-3191:
---

[~lucacavanna], now that SOLR-2719 is fixed, I think it should be a green light 
for this, if you'd like to attempt a patch. I don't know if the code from the 
[UserFields 
class|https://github.com/apache/lucene-solr/blob/eca4d7b44e43e84add4b37cb9b4dde910f58e7c7/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java#L1276]
 might be helpful at all.

> field exclusion from fl
> ---
>
> Key: SOLR-3191
> URL: https://issues.apache.org/jira/browse/SOLR-3191
> Project: Solr
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> I think it would be useful to add a way to exclude fields from the Solr 
> response. If I have for example 100 stored fields and I want to return all of 
> them but one, it would be handy to list just the field I want to exclude 
> instead of the 99 fields for inclusion through fl.
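
One possible shape for such an exclusion syntax, sketched below: a leading `-` in an `fl` entry marks a field to exclude (e.g. `fl=*,-body`). The syntax and both helper functions are hypothetical illustrations of the proposal, not an actual Solr feature.

```python
def split_fl(fl):
    """Parse a hypothetical fl value where a leading '-' excludes a field,
    e.g. '*,-body,-raw_html'. Returns (included, excluded) field-name sets."""
    included, excluded = set(), set()
    for name in (f.strip() for f in fl.split(",") if f.strip()):
        if name.startswith("-"):
            excluded.add(name[1:])
        else:
            included.add(name)
    return included, excluded

def project(doc, fl):
    """Apply the parsed fl to one stored document (a plain dict)."""
    included, excluded = split_fl(fl)
    keep_all = "*" in included
    return {k: v for k, v in doc.items()
            if (keep_all or k in included) and k not in excluded}
```

With 100 stored fields, `fl=*,-body` would replace listing 99 inclusions by hand.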




[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_38) - Build # 4380 - Failure!

2013-02-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4380/
Java: 32bit/jdk1.6.0_38 -server -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage

Error Message:
expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>

Stack Trace:
java.lang.AssertionError: expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but 
was:<[70 6f 6c 69 74 69 63 73]>
at 
__randomizedtesting.SeedInfo.seed([778B15F1523F921F:2C98AC14DE37CDFF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.classification.ClassificationTestBase.checkCorrectClassification(ClassificationTestBase.java:68)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage(SimpleNaiveBayesClassifierTest.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:662)




Build Log:
[...truncated 5665 lines...]
[junit4:junit4] Suite: 
org.apache.lucene.classification.SimpleNaiveBayesClass

Commit Tag Bot: MIA

2013-02-20 Thread Steve Rowe
I haven't seen any activity from the Commit Tag Bot for about 72 hours.

Mark, is there something wrong with it?

Steve




Re: [JENKINS] Lucene-Artifacts-4.x - Build # 234 - Failure

2013-02-20 Thread Steve Rowe
This failed because the lucene/classification/build.xml improperly specialized 
the dist-maven target.

I committed a fix in r1448473, on branch_4x only, since Tommaso already fixed 
the problem on trunk earlier today.

Steve

On Feb 20, 2013, at 6:08 PM, Apache Jenkins Server  
wrote:

> Build: https://builds.apache.org/job/Lucene-Artifacts-4.x/234/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 11360 lines...]
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/build.xml:510:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/common-build.xml:1745:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/common-build.xml:1368:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/common-build.xml:500:
>  Unable to initialize POM pom.xml: Could not find the model file 
> '/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/build/poms/lucene/classification/src/java/pom.xml'.
>  for project unknown
> 
> Total time: 7 minutes 43 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Publishing Javadoc
> Email was triggered for: Failure
> Sending email for trigger: Failure
> 
> 
> 





[jira] [Created] (SOLR-4480) EDisMax parser blows up with query containing single plus or minus

2013-02-20 Thread Fiona Tay (JIRA)
Fiona Tay created SOLR-4480:
---

 Summary: EDisMax parser blows up with query containing single plus 
or minus
 Key: SOLR-4480
 URL: https://issues.apache.org/jira/browse/SOLR-4480
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Reporter: Fiona Tay
Priority: Minor


We are running Solr with Sunspot, and when we issue a query containing a single 
plus, Solr blows up with the following error:
SOLR Request (5.0ms)  [ path=# parameters={data: 
fq=type%3A%28Attachment+OR+User+OR+GpdbDataSource+OR+HadoopInstance+OR+GnipInstance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=type_name_s%3A%28Attachment+OR+User+OR+Instance+OR+Workspace+OR+Workfile+OR+Tag+OR+Dataset+OR+HdfsEntry%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&fq=-%28security_type_name_sm%3A%28Dataset%29+AND+-instance_account_ids_im%3A%282+OR+1%29%29&fq=-%28security_type_name_sm%3AChorusView+AND+-member_ids_im%3A1+AND+-public_b%3Atrue%29&q=%2B&fl=%2A+score&qf=name_texts+first_name_texts+last_name_texts+file_name_texts&defType=edismax&hl=on&hl.simple.pre=%40%40%40hl%40%40%40&hl.simple.post=%40%40%40endhl%40%40%40&start=0&rows=3,
 method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: 
{"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: 
select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , 
read_timeout: } ]

RSolr::Error::Http (RSolr::Error::Http - 400 Bad Request
Error: org.apache.lucene.queryParser.ParseException: Cannot parse '': 
Encountered "" at line 1, column 0.
Was expecting one of:
 ...
"+" ...
"-" ...
"(" ...
"*" ...
 ...
 ...
 ...
 ...
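
Until the parser handles this, a client could defensively strip dangling operators before sending the query. The sketch below is a hypothetical client-side workaround, not a fix for the EDisMax parser itself; the match-all fallback `*:*` is an assumption about what an empty query should become.

```python
import re

def sanitize_query(q):
    """Drop '+'/'-' operators that aren't attached to a following term,
    so a bare '+' or '-' never reaches the query parser. Falls back to
    match-all when nothing is left."""
    # Remove +/- followed by whitespace or end-of-string.
    cleaned = re.sub(r"[+\-](?=\s|$)", "", q).strip()
    return cleaned if cleaned else "*:*"
```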




[jira] [Commented] (SOLR-4414) MoreLikeThis on a shard finds no interesting terms if the document queried is not in that shard

2013-02-20 Thread Colin Bartolome (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582673#comment-13582673
 ] 

Colin Bartolome commented on SOLR-4414:
---

Using the {{MoreLikeThisHandler}} and following the steps to reproduce I wrote, 
I get interesting terms on one server, but not the other. On the server that 
produces interesting terms, the MLT search _is_ performed, but it returns 
matching documents from that server only.

I don't know enough about broker cores to say for sure whether your issue is 
related.

> MoreLikeThis on a shard finds no interesting terms if the document queried is 
> not in that shard
> ---
>
> Key: SOLR-4414
> URL: https://issues.apache.org/jira/browse/SOLR-4414
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis, SolrCloud
>Affects Versions: 4.1
>Reporter: Colin Bartolome
>
> Running a MoreLikeThis query in a cloud works only when the document being 
> queried exists in whatever shard serves the request. If the document is not 
> present in the shard, no "interesting terms" are found and, consequently, no 
> matches are found.
> h5. Steps to reproduce
> * Edit example/solr/collection1/conf/solrconfig.xml and add this line, with 
> the rest of the request handlers:
> {code:xml}
> 
> {code}
> * Follow the [simplest SolrCloud 
> example|http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster]
>  to get two shards running.
> * Hit this URL: 
> [http://localhost:8983/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> * Compare that output to that of this URL: 
> [http://localhost:7574/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> The former URL will return a result and list some interesting terms. The 
> latter URL will return no results and list no interesting terms. It will also 
> show this odd XML element:
> {code:xml}
> 
> {code}
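
The symptom is consistent with hash-based document routing: the id determines which shard stores the document, so only that shard can compute the "interesting terms". The sketch below only illustrates the idea; SolrCloud uses its own hashing scheme, and CRC32 here is just a stand-in.

```python
import zlib

def shard_for(doc_id, num_shards):
    """Illustrative document routing: a stable hash of the id picks the
    shard holding the document. Not SolrCloud's actual hash function."""
    return zlib.crc32(doc_id.encode()) % num_shards

# A distributed MLT would need to extract interesting terms on the shard
# that actually holds the document, then fan the resulting query out.
def mlt_owner_shard(doc_id, num_shards=2):
    return shard_for(doc_id, num_shards)
```

A request landing on any other shard finds no stored document to extract terms from, hence the empty result.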




[JENKINS] Lucene-Artifacts-4.x - Build # 234 - Failure

2013-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-4.x/234/

No tests ran.

Build Log:
[...truncated 11360 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/build.xml:510:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/common-build.xml:1745:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/common-build.xml:1368:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/common-build.xml:500:
 Unable to initialize POM pom.xml: Could not find the model file 
'/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-4.x/lucene/build/poms/lucene/classification/src/java/pom.xml'.
 for project unknown

Total time: 7 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-4414) MoreLikeThis on a shard finds no interesting terms if the document queried is not in that shard

2013-02-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582635#comment-13582635
 ] 

Shawn Heisey commented on SOLR-4414:


Colin, are you able to make distributed MLT work?  I can't make it work at all. 
 Do my problems require a separate issue?


> MoreLikeThis on a shard finds no interesting terms if the document queried is 
> not in that shard
> ---
>
> Key: SOLR-4414
> URL: https://issues.apache.org/jira/browse/SOLR-4414
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis, SolrCloud
>Affects Versions: 4.1
>Reporter: Colin Bartolome
>
> Running a MoreLikeThis query in a cloud works only when the document being 
> queried exists in whatever shard serves the request. If the document is not 
> present in the shard, no "interesting terms" are found and, consequently, no 
> matches are found.
> h5. Steps to reproduce
> * Edit example/solr/collection1/conf/solrconfig.xml and add this line, with 
> the rest of the request handlers:
> {code:xml}
> 
> {code}
> * Follow the [simplest SolrCloud 
> example|http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster]
>  to get two shards running.
> * Hit this URL: 
> [http://localhost:8983/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> * Compare that output to that of this URL: 
> [http://localhost:7574/solr/collection1/mlt?mlt.fl=includes&q=id:3007WFP&mlt.match.include=false&mlt.interestingTerms=list&mlt.mindf=1&mlt.mintf=1]
> The former URL will return a result and list some interesting terms. The 
> latter URL will return no results and list no interesting terms. It will also 
> show this odd XML element:
> {code:xml}
> 
> {code}




[jira] [Updated] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2013-02-20 Thread Vitali Kviatkouski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitali Kviatkouski updated SOLR-4479:
-

Description: 
When running SolrCloud (just 2 shards, as described in the wiki), I got an NPE:
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
. Skipped

To reproduce, follow the guide in wiki (http://wiki.apache.org/solr/SolrCloud), 
add some documents and then request 
http://localhost:8983/solr/collection1/tvrh?q=*%3A*

If I include the term vector component in the search handler, I get (on the 
second shard):
SEVERE: null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)




[jira] [Updated] (SOLR-4465) Configurable Collectors

2013-02-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4465:
-

Attachment: SOLR-4465.patch

Added CollectorParams.java to hold the HTTP collector parameters, using the 
prefix "cl" for collector parameters.

> Configurable Collectors
> ---
>
> Key: SOLR-4465
> URL: https://issues.apache.org/jira/browse/SOLR-4465
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.1
>Reporter: Joel Bernstein
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch
>
>
> This issue is to add configurable custom collectors to Solr. This expands the 
> design and work done in issue SOLR-1680 to include:
> 1) CollectorFactory configuration in solrconfig.xml
> 2) Http parameters to allow clients to dynamically select a CollectorFactory 
> and construct a custom Collector.
> 3) Make aspects of QueryComponent pluggable so that the output from 
> distributed search can conform with custom collectors at the shard level.




[jira] [Closed] (SOLR-669) SOLR currently does not support caching for (Query, FacetFieldList)

2013-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-669.


Resolution: Won't Fix

Changed resolution to "Won't Fix". It appears this is not a feature anyone has 
found useful enough even to comment on, much less contribute to, for almost 5 
years, so to me that's a theoretical need, not a real one. Please re-open if 
you (or anyone else) want to see this solved.

> SOLR currently does not support caching for (Query, FacetFieldList)
> ---
>
> Key: SOLR-669
> URL: https://issues.apache.org/jira/browse/SOLR-669
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
>Reporter: Fuad Efendi
>   Original Estimate: 1,680h
>  Remaining Estimate: 1,680h
>
> It is a huge performance bottleneck, and it explains the huge difference 
> between qtime and SolrJ's elapsedTime. I quickly browsed SolrIndexSearcher: 
> it caches only (Key, DocSet/DocList) key-value pairs and does not have a 
> cache for (Query, FacetFieldList).
> filterCache stores a DocList for each 'filter' and is used for constant 
> recalculations...
> This would be a significant performance improvement.
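For illustration only (plain JDK, not Solr's actual cache classes): a bounded LRU cache keyed on the (query, facet-field-list) pair the issue asks for might look like the sketch below. The key type and size bound are assumptions.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

class FacetCacheDemo {

    /** Cache key: the (Query, FacetFieldList) pair from the issue description. */
    static final class Key {
        final String query;
        final List<String> facetFields;
        Key(String query, List<String> facetFields) {
            this.query = query;
            this.facetFields = facetFields;
        }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).query.equals(query)
                    && ((Key) o).facetFields.equals(facetFields);
        }
        @Override public int hashCode() { return Objects.hash(query, facetFields); }
    }

    // LinkedHashMap in access order + removeEldestEntry gives a simple LRU bound.
    static final Map<Key, Map<String, Integer>> CACHE =
            new LinkedHashMap<Key, Map<String, Integer>>(16, 0.75f, true) {
                @Override protected boolean removeEldestEntry(
                        Map.Entry<Key, Map<String, Integer>> eldest) {
                    return size() > 128;
                }
            };

    static Map<String, Integer> facetCounts(String query, List<String> fields) {
        return CACHE.computeIfAbsent(new Key(query, fields), k -> {
            // Placeholder for the expensive facet recalculation the issue wants to skip.
            Map<String, Integer> counts = new HashMap<>();
            for (String field : k.facetFields) counts.put(field, 0);
            return counts;
        });
    }

    public static void main(String[] args) {
        Map<String, Integer> first = facetCounts("q1", Arrays.asList("cat", "price"));
        Map<String, Integer> again = facetCounts("q1", Arrays.asList("cat", "price"));
        System.out.println(first == again); // true: second lookup is a cache hit
    }
}
```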




[jira] [Reopened] (SOLR-669) SOLR currently does not support caching for (Query, FacetFieldList)

2013-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reopened SOLR-669:
--


> SOLR currently does not support caching for (Query, FacetFieldList)
> ---
>
> Key: SOLR-669
> URL: https://issues.apache.org/jira/browse/SOLR-669
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
>Reporter: Fuad Efendi
>   Original Estimate: 1,680h
>  Remaining Estimate: 1,680h
>
> It is a huge performance bottleneck, and it explains the huge difference 
> between qtime and SolrJ's elapsedTime. I quickly browsed SolrIndexSearcher: 
> it caches only (Key, DocSet/DocList) key-value pairs and does not have a 
> cache for (Query, FacetFieldList).
> filterCache stores a DocList for each 'filter' and is used for constant 
> recalculations...
> This would be a significant performance improvement.




[jira] [Updated] (SOLR-4465) Configurable Collectors

2013-02-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4465:
-

Attachment: SOLR-4465.patch

Added CollectorFactory.java to patch

> Configurable Collectors
> ---
>
> Key: SOLR-4465
> URL: https://issues.apache.org/jira/browse/SOLR-4465
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.1
>Reporter: Joel Bernstein
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4465.patch, SOLR-4465.patch
>
>
> This issue is to add configurable custom collectors to Solr. This expands the 
> design and work done in issue SOLR-1680 to include:
> 1) CollectorFactory configuration in solrconfig.xml
> 2) HTTP parameters to allow clients to dynamically select a CollectorFactory 
> and construct a custom Collector.
> 3) Make aspects of QueryComponent pluggable so that the output from 
> distributed search can conform with custom collectors at the shard level.




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 233 - Failure!

2013-02-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/233/
Java: 64bit/jdk1.6.0 -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 29195 lines...]
BUILD FAILED
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/build.xml:381: 
The following error occurred while executing this line:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/build.xml:320: 
The following error occurred while executing this line:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/extra-targets.xml:120:
 The following files are missing svn:eol-style (or binary svn:mime-type):
* dev-tools/maven/lucene/classification/pom.xml.template

Total time: 89 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.6.0 -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (LUCENE-4789) Typos in API documentation

2013-02-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-4789:
---

Fix Version/s: 5.0
   4.2

> Typos in API documentation
> --
>
> Key: LUCENE-4789
> URL: https://issues.apache.org/jira/browse/LUCENE-4789
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Hao Zhong
>Assignee: Steve Rowe
> Fix For: 4.2, 5.0
>
>
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/analysis/package-summary.html
> neccessary->necessary 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/LogMergePolicy.html
> exceesd->exceed 
> http://lucene.apache.org/core/4_1_0/queryparser/serialized-form.html
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
> followng->following
> http://lucene.apache.org/core/4_1_0/codecs/org/apache/lucene/codecs/bloom/FuzzySet.html
> qccuracy->accuracy
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/search/params/FacetRequest.html
> methonds->methods
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
> implemetation->implementation
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/TimeLimitingCollector.html
> construcutor->constructor 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/store/BufferedIndexInput.html
> bufer->buffer
> http://lucene.apache.org/core/4_1_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.html
> horizonal->horizontal
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/writercache/lru/NameHashIntCacheLRU.html
>  
> cahce->cache
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/processors/BooleanQuery2ModifierNodeProcessor.html
> precidence->precedence
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie.html
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie2.html
> commmands->commands
> Please revise the documentation. 




[jira] [Resolved] (LUCENE-4789) Typos in API documentation

2013-02-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-4789.


Resolution: Fixed

Committed fixes for some of these (and some more I noticed along the way) to 
trunk and branch_4x.

Thanks Hao!

{quote}
http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
[...]
http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
{quote}

JavaCC generated these ParseException and CharStream files (and several others 
in the project) - I'm not going to change them.

> Typos in API documentation
> --
>
> Key: LUCENE-4789
> URL: https://issues.apache.org/jira/browse/LUCENE-4789
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Hao Zhong
>Assignee: Steve Rowe
>
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/analysis/package-summary.html
> neccessary->necessary 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/LogMergePolicy.html
> exceesd->exceed 
> http://lucene.apache.org/core/4_1_0/queryparser/serialized-form.html
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
> followng->following
> http://lucene.apache.org/core/4_1_0/codecs/org/apache/lucene/codecs/bloom/FuzzySet.html
> qccuracy->accuracy
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/search/params/FacetRequest.html
> methonds->methods
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
> implemetation->implementation
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/TimeLimitingCollector.html
> construcutor->constructor 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/store/BufferedIndexInput.html
> bufer->buffer
> http://lucene.apache.org/core/4_1_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.html
> horizonal->horizontal
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/writercache/lru/NameHashIntCacheLRU.html
>  
> cahce->cache
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/processors/BooleanQuery2ModifierNodeProcessor.html
> precidence->precedence
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie.html
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie2.html
> commmands->commands
> Please revise the documentation. 




[jira] [Updated] (SOLR-4465) Configurable Collectors

2013-02-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4465:
-

Attachment: SOLR-4465.patch

First patch, which adds the code to read the collectorFactory element from 
solrconfig.xml. This will be iterated on to add more detail.

> Configurable Collectors
> ---
>
> Key: SOLR-4465
> URL: https://issues.apache.org/jira/browse/SOLR-4465
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.1
>Reporter: Joel Bernstein
> Fix For: 4.2, 5.0
>
> Attachments: SOLR-4465.patch
>
>
> This issue is to add configurable custom collectors to Solr. This expands the 
> design and work done in issue SOLR-1680 to include:
> 1) CollectorFactory configuration in solrconfig.xml
> 2) HTTP parameters to allow clients to dynamically select a CollectorFactory 
> and construct a custom Collector.
> 3) Make aspects of QueryComponent pluggable so that the output from 
> distributed search can conform with custom collectors at the shard level.




[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1357 - Failure

2013-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1357/

1 tests failed.
REGRESSION:  
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage

Error Message:
expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>
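Decoded as UTF-8, those byte strings are the class labels "technology" (expected) and "politics" (actual). A small sketch that decodes the hex dumps copied from the error message:

```java
import java.nio.charset.StandardCharsets;

// The assertion compares raw UTF-8 byte strings; decoding the hex dumps shows
// the classifier returned "politics" where "technology" was expected.
class DecodeFailureDemo {

    static String decode(String spacedHex) {
        String[] parts = spacedHex.split(" ");
        byte[] bytes = new byte[parts.length];
        for (int i = 0; i < parts.length; i++)
            bytes[i] = (byte) Integer.parseInt(parts[i], 16);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decode("74 65 63 68 6e 6f 6c 6f 67 79")); // technology (expected)
        System.out.println(decode("70 6f 6c 69 74 69 63 73"));       // politics (actual)
    }
}
```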

Stack Trace:
java.lang.AssertionError: expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but 
was:<[70 6f 6c 69 74 69 63 73]>
at 
__randomizedtesting.SeedInfo.seed([73E949696A7AEC3:5C2D2D731AAFF123]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.classification.ClassificationTestBase.checkCorrectClassification(ClassificationTestBase.java:68)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage(SimpleNaiveBayesClassifierTest.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)




Build Log:
[...truncated 5697 lines...]
[junit4:junit4] Suite: 
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest
[junit4:junit4]   2> NOTE: reproduce 

[jira] [Updated] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2013-02-20 Thread Vitali Kviatkouski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitali Kviatkouski updated SOLR-4479:
-

Description: 
When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
. Skipped

To reproduce, follow the guide in wiki (http://wiki.apache.org/solr/SolrCloud), 
add some documents and then request 
http://localhost:8983/solr/collection1/tvrh?q=*%3A*

If I include the term vector component in the search handler, I get (on the 
second shard):
SEVERE: null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)

Also, for our project's needs I rewrote TermVectorComponent; the NPE above went 
away, but a new one appeared:
java.lang.NullPointerException
at 
org.apache.solr.common.util.NamedList.nameValueMapToList(NamedList.java:109)
at org.apache.solr.common.util.NamedList.(NamedList.java:75)
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:452)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)


  was:
When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(Ses

[jira] [Updated] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2013-02-20 Thread Vitali Kviatkouski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitali Kviatkouski updated SOLR-4479:
-

Description: 
When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
. Skipped

To reproduce, follow the guide in wiki (http://wiki.apache.org/solr/SolrCloud), 
add some documents and then request 
http://localhost:8983/solr/collection1/tvrh?q=*%3A*

If I include the term vector component in the search handler, I get (on the 
second shard):
SEVERE: null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)


  was:
When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:365)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(Blocking

[jira] [Resolved] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4790.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.2

> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes" liveDocs into its structure.
> This means that if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> The simple fix is to make DocTermOrds' uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, and Solr's UninvertedField just keeps 
> doing what it's doing today (since it's a top-level reader, and cached 
> somewhere else).
> DocTermOrds also had a telescoping ctor that was uninverting twice.
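A self-contained illustration of the fix's essence (plain JDK, not the Lucene classes): if the cached uninverted structure carries no liveness information, each reader applies its own liveDocs at read time, so the order in which readers are consulted no longer matters.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch under simplified assumptions: one shared cached structure, per-reader
// liveDocs passed in explicitly instead of being "baked in" at build time.
class LiveDocsDemo {

    // Shared cached structure: one term ord per doc, liveness NOT baked in.
    static final int[] ORDS = {0, 1, 2, 3};

    /** Filter by the caller's liveDocs (null means "all docs live"). */
    static List<Integer> visibleOrds(boolean[] liveDocs) {
        List<Integer> out = new ArrayList<>();
        for (int doc = 0; doc < ORDS.length; doc++)
            if (liveDocs == null || liveDocs[doc]) out.add(ORDS[doc]);
        return out;
    }

    public static void main(String[] args) {
        boolean[] r2Live = {true, true, true, false}; // newer reader: doc 3 deleted
        boolean[] r1Live = {true, true, true, true};  // older reader: doc 3 still live
        // Consulting r2 first no longer poisons r1's view of the cached structure.
        System.out.println(visibleOrds(r2Live)); // [0, 1, 2]
        System.out.println(visibleOrds(r1Live)); // [0, 1, 2, 3]
    }
}
```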




[jira] [Created] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2013-02-20 Thread Vitali Kviatkouski (JIRA)
Vitali Kviatkouski created SOLR-4479:


 Summary: TermVectorComponent NPE when running Solr Cloud
 Key: SOLR-4479
 URL: https://issues.apache.org/jira/browse/SOLR-4479
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Vitali Kviatkouski


When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:365)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)


To reproduce, follow the guide in wiki (http://wiki.apache.org/solr/SolrCloud), 
add some documents and then request 
http://localhost:8983/solr/collection1/tvrh?q=*%3A*




Re: svn commit: r1448369 - /lucene/dev/branches/branch_4x/dev-tools/maven/lucene/classification/pom.xml.template

2013-02-20 Thread Steve Rowe
Thanks Robert!

On Feb 20, 2013, at 2:48 PM, rm...@apache.org wrote:

> Author: rmuir
> Date: Wed Feb 20 19:48:39 2013
> New Revision: 1448369
> 
> URL: http://svn.apache.org/r1448369
> Log:
> add eol-style
> 
> Modified:
>
> lucene/dev/branches/branch_4x/dev-tools/maven/lucene/classification/pom.xml.template
>(props changed)
> 





[jira] [Commented] (LUCENE-4783) Inconsistent results, changing based on recent previous searches (caching?)

2013-02-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582470#comment-13582470
 ] 

Michael McCandless commented on LUCENE-4783:


Can you post a test case showing the issue?


> Inconsistent results, changing based on recent previous searches (caching?)
> ---
>
> Key: LUCENE-4783
> URL: https://issues.apache.org/jira/browse/LUCENE-4783
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.1
> Environment: Ubuntu Linux & Java application running under Tomcat
>Reporter: William Johnson
>
> We have several repeatable cases where Lucene is returning different 
> candidates for the same search, on the same (static) index, depending on what 
> other searches have been run beforehand.
> It appears as though Lucene is failing to find matches in some cases if they 
> have not been cached by a previous search.
> Specifically (although it is happening with more than just fuzzy searches), a 
> fuzzy search on a misspelled street name returns no result.  If you then 
> search on the correctly spelled street name, and THEN return to the original 
> fuzzy query on the original incorrect spelling, you now receive the result.




[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_38) - Build # 4377 - Still Failing!

2013-02-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4377/
Java: 32bit/jdk1.6.0_38 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 29113 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:381: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:320: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:120: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* dev-tools/maven/lucene/classification/pom.xml.template

Total time: 54 minutes 23 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.6.0_38 -server -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.6.0_38) - Build # 4349 - Failure!

2013-02-20 Thread Michael McCandless
On Wed, Feb 20, 2013 at 8:15 AM, Robert Muir  wrote:
> I'm not sure I really fixed it!
>
> I fixed IWC to use this merge scheduler and for the test to not be so
> slow, but I noticed the value it always got for totalBytesSize is 0...

That's not right!

I'll dig.

Mike McCandless

http://blog.mikemccandless.com




[jira] [Commented] (LUCENE-4571) speedup disjunction with minShouldMatch

2013-02-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582453#comment-13582453
 ] 

Michael McCandless commented on LUCENE-4571:


I fixed luceneutil to recognize +minShouldMatch=N, and made a trivial
tasks file:

{noformat}
HighMinShouldMatch4: ref http from name title +minShouldMatch=4
HighMinShouldMatch3: ref http from name title +minShouldMatch=3
HighMinShouldMatch2: ref http from name title +minShouldMatch=2
HighMinShouldMatch0: ref http from name title
Low1MinShouldMatch4: ref http from name dublin +minShouldMatch=4
Low1MinShouldMatch3: ref http from name dublin +minShouldMatch=3
Low1MinShouldMatch2: ref http from name dublin +minShouldMatch=2
Low1MinShouldMatch0: ref http from name dublin
Low2MinShouldMatch4: ref http from wings dublin +minShouldMatch=4
Low2MinShouldMatch3: ref http from wings dublin +minShouldMatch=3
Low2MinShouldMatch2: ref http from wings dublin +minShouldMatch=2
Low2MinShouldMatch0: ref http from wings dublin
Low3MinShouldMatch4: ref http struck wings dublin +minShouldMatch=4
Low3MinShouldMatch3: ref http struck wings dublin +minShouldMatch=3
Low3MinShouldMatch2: ref http struck wings dublin +minShouldMatch=2
Low3MinShouldMatch0: ref http struck wings dublin
Low4MinShouldMatch4: ref restored struck wings dublin +minShouldMatch=4
Low4MinShouldMatch3: ref restored struck wings dublin +minShouldMatch=3
Low4MinShouldMatch2: ref restored struck wings dublin +minShouldMatch=2
Low4MinShouldMatch0: ref restored struck wings dublin
{noformat}

So, each query has 5 terms.  High* means all 5 are high freq, Low1*
means one term is low freq and 4 are high, Low2* means 2 terms are low
freq and 3 are high, etc.
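For anyone following along, +minShouldMatch=N means a doc must contain at least N of the query's optional terms. A toy sketch of the semantics (plain Python for illustration; this is not Lucene code):

```python
# Toy model of BooleanQuery's minShouldMatch over SHOULD clauses: a doc
# matches when at least N of the query terms occur in it; minShouldMatch=0
# degenerates to a plain disjunction (at least 1 term must match).
def matches(doc_terms, query_terms, min_should_match):
    hits = sum(1 for t in query_terms if t in doc_terms)
    return hits >= max(min_should_match, 1)

query = ["ref", "http", "from", "name", "title"]    # the High* task above
docs = [
    {"ref", "http", "from", "name", "title", "x"},  # all 5 terms
    {"ref", "title", "y"},                          # 2 of 5
    {"z"},                                          # none
]
for mm in (0, 2, 4):
    print(mm, [matches(d, query, mm) for d in docs])
```

So raising N only ever shrinks the result set, which is why the restrictive tasks below have far fewer hits to score.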

I tested on the 10 M doc wikimedium index, and for both base (= trunk)
and comp (= this patch) I forcefully disabled BS1:

{noformat}
Task                  QPS base  StdDev  QPS comp  StdDev         Pct diff
Low3MinShouldMatch2       3.95  (3.5%)      3.00  (2.1%)  -24.1% ( -28% - -19%)
Low1MinShouldMatch2       1.93  (3.1%)      1.50  (2.1%)  -22.4% ( -26% - -17%)
Low2MinShouldMatch2       2.52  (3.4%)      1.96  (2.0%)  -22.3% ( -26% - -17%)
HighMinShouldMatch2       1.62  (3.2%)      1.27  (2.2%)  -21.3% ( -25% - -16%)
HighMinShouldMatch3       1.65  (3.5%)      1.31  (2.3%)  -20.7% ( -25% - -15%)
Low4MinShouldMatch0       6.91  (3.9%)      5.79  (1.6%)  -16.2% ( -20% - -11%)
Low1MinShouldMatch3       1.98  (3.4%)      1.66  (2.3%)  -15.8% ( -20% - -10%)
Low3MinShouldMatch0       3.69  (3.2%)      3.21  (2.1%)  -13.0% ( -17% -  -8%)
Low2MinShouldMatch0       2.38  (3.0%)      2.09  (1.9%)  -12.3% ( -16% -  -7%)
Low1MinShouldMatch0       1.84  (2.7%)      1.65  (2.2%)  -10.4% ( -14% -  -5%)
HighMinShouldMatch0       1.56  (2.9%)      1.41  (2.5%)   -9.8% ( -14% -  -4%)
HighMinShouldMatch4       1.67  (3.6%)      1.55  (2.8%)   -7.1% ( -13% -   0%)
Low2MinShouldMatch3       2.64  (3.8%)      2.65  (2.4%)    0.3% (  -5% -   6%)
Low1MinShouldMatch4       2.02  (3.5%)      2.36  (2.8%)   16.8% (  10% -  23%)
Low4MinShouldMatch2       8.53  (5.3%)     33.74  (5.8%)  295.8% ( 270% - 324%)
Low4MinShouldMatch3       8.56  (5.4%)     44.93  (8.6%)  424.8% ( 389% - 463%)
Low3MinShouldMatch3       4.25  (4.1%)     23.48  (8.8%)  452.7% ( 422% - 485%)
Low4MinShouldMatch4       8.59  (5.2%)     59.53 (11.1%)  593.3% ( 548% - 643%)
Low2MinShouldMatch4       2.68  (3.9%)     21.38 (14.3%)  696.8% ( 653% - 743%)
Low3MinShouldMatch4       4.25  (4.1%)     34.97 (15.4%)  722.5% ( 675% - 773%)
{noformat}

The new scorer is waaay faster when the minShouldMatch constraint is
highly restrictive, i.e. when .advance is being used on only low-freq
terms (I think?).  It's a bit slower for the no-minShouldMatch case
(*MinShouldMatch0).  When .advance is sometimes used on the high freq
terms it's a bit slower than BS2 today.
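One intuition for why a restrictive constraint allows so much skipping: a doc matching at least m of n terms must appear in at least one of any n-m+1 of the posting lists, so candidate generation can be driven by the rarest lists and .advance used on the rest. A toy Python sketch of that pivot idea (illustrative only; the names and structure are my assumptions, not Lucene's scorer):

```python
# With minShouldMatch=m over n sorted posting lists (rarest first), any
# matching doc must occur in at least one of the first n-m+1 lists, so only
# docs from those "lead" lists need to be verified against the full set.
def candidates(postings, m):
    n = len(postings)
    lead = postings[: n - m + 1]          # pivot set: every match hits one
    seen = sorted(set().union(*lead))     # candidate doc ids from lead lists
    out = []
    for doc in seen:
        if sum(1 for p in postings if doc in set(p)) >= m:
            out.append(doc)               # verified: doc holds >= m terms
    return out

# 4 terms, rarest list first; require at least 3 of them.
postings = [[7], [3, 7], [1, 3, 7, 9], [0, 1, 2, 3, 5, 7, 9]]
print(candidates(postings, 3))  # [3, 7]
```

When m is large the lead set is small and rare, which matches the huge speedups on the Low*MinShouldMatch4 tasks; when m is small the lead set includes the high-freq lists and the trick buys little.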

I ran a 2nd test, this time with BS1 as the baseline.  BS1 is faster
than BS2, but indeed it still evaluates all subs and only rules out
minShouldMatch in the end.  I had to turn off luceneutil's score
comparisons since BS1/BS2 produce different scores:

{noformat}
Task                  QPS base  StdDev  QPS comp  StdDev         Pct diff
HighMinShouldMatch2       3.33  (8.8%)      1.30  (0.8%)  -60.9% ( -64% - -56%)
HighMinShouldMatch3       3.35  (8.8%)      1.33  (1.0%)  -60.5% ( -64% - -55%)
Low1MinShouldMatch2       3.79  (8.4%)      1.52  (0.9%)  -59.9% ( -63% - -55%)
HighMinShouldMatch0   

[jira] [Commented] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582436#comment-13582436
 ] 

Michael McCandless commented on LUCENE-4790:


+1

> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.
> This means that if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> The simple fix is to make DocTermOrds' uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, and Solr's UninvertedField just keeps doing 
> what it's doing today (since it's a top-level reader, and cached somewhere 
> else).
> Also, DocTermOrds had a telescoping ctor that was uninverting twice. 




[jira] [Updated] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4790:


Attachment: LUCENE-4790.patch

Here's a test with my proposed fix. Again, it's just to make the liveDocs always 
an explicit parameter so there are no traps or confusion, and FieldCacheImpl 
always passes null.

> FieldCache.getDocTermOrds back to the future bug
> 
>
> Key: LUCENE-4790
> URL: https://issues.apache.org/jira/browse/LUCENE-4790
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4790.patch
>
>
> Found while working on LUCENE-4765:
> FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.
> This means that if you have readers at two points in time (r1, r2), and 
> you happen to call getDocTermOrds first on r2, then call it on r1, the 
> results will be incorrect.
> The simple fix is to make DocTermOrds' uninvert take liveDocs explicitly: 
> FieldCacheImpl always passes null, and Solr's UninvertedField just keeps doing 
> what it's doing today (since it's a top-level reader, and cached somewhere 
> else).
> Also, DocTermOrds had a telescoping ctor that was uninverting twice. 




[jira] [Created] (LUCENE-4790) FieldCache.getDocTermOrds back to the future bug

2013-02-20 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-4790:
---

 Summary: FieldCache.getDocTermOrds back to the future bug
 Key: LUCENE-4790
 URL: https://issues.apache.org/jira/browse/LUCENE-4790
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Found while working on LUCENE-4765:

FieldCache.getDocTermOrds unsafely "bakes in" liveDocs into its structure.

This means that if you have readers at two points in time (r1, r2), and you 
happen to call getDocTermOrds first on r2, then call it on r1, the results will 
be incorrect.

The simple fix is to make DocTermOrds' uninvert take liveDocs explicitly: 
FieldCacheImpl always passes null, and Solr's UninvertedField just keeps doing 
what it's doing today (since it's a top-level reader, and cached somewhere else).

Also, DocTermOrds had a telescoping ctor that was uninverting twice. 
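The trap and the fix can be sketched in miniature (plain Python with hypothetical names; this is not the actual DocTermOrds code):

```python
# Hypothetical sketch of the bug: a shared field cache that "bakes in" the
# first caller's liveDocs (deletions) when it uninverts.
class UninvertCacheBuggy:
    def __init__(self):
        self._cache = {}

    def get_doc_term_ords(self, field, reader):
        # BUG analogue: whichever reader arrives first freezes its deletions
        # into the shared entry, so an earlier reader later sees them too.
        if field not in self._cache:
            self._cache[field] = {d: t for d, t in reader["docs"].items()
                                  if d in reader["live"]}
        return self._cache[field]

class UninvertCacheFixed:
    def __init__(self):
        self._cache = {}

    def get_doc_term_ords(self, field, reader, live_docs=None):
        # Fix analogue: uninvert with no deletions baked in (the cache layer
        # passes live_docs=None); each caller filters with its own liveDocs.
        if field not in self._cache:
            self._cache[field] = dict(reader["docs"])
        ords = self._cache[field]
        if live_docs is None:
            return ords
        return {d: t for d, t in ords.items() if d in live_docs}

docs = {0: "a", 1: "b", 2: "c"}
r1 = {"docs": docs, "live": {0, 1, 2}}  # earlier reader: nothing deleted yet
r2 = {"docs": docs, "live": {0, 2}}     # later reader: doc 1 deleted

buggy = UninvertCacheBuggy()
buggy.get_doc_term_ords("f", r2)                 # r2 uninverts first...
print(sorted(buggy.get_doc_term_ords("f", r1)))  # [0, 2]: r1 wrongly misses doc 1

fixed = UninvertCacheFixed()
fixed.get_doc_term_ords("f", r2, r2["live"])
print(sorted(fixed.get_doc_term_ords("f", r1, r1["live"])))  # [0, 1, 2]
```

The fixed variant mirrors the patch's direction: the cached structure is deletion-free, and liveDocs become an explicit per-caller parameter.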




[jira] [Assigned] (LUCENE-4789) Typos in API documentation

2013-02-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-4789:
--

Assignee: Steve Rowe

> Typos in API documentation
> --
>
> Key: LUCENE-4789
> URL: https://issues.apache.org/jira/browse/LUCENE-4789
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Hao Zhong
>Assignee: Steve Rowe
>
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/analysis/package-summary.html
> neccessary->necessary 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/LogMergePolicy.html
> exceesd->exceed 
> http://lucene.apache.org/core/4_1_0/queryparser/serialized-form.html
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
> followng->following
> http://lucene.apache.org/core/4_1_0/codecs/org/apache/lucene/codecs/bloom/FuzzySet.html
> qccuracy->accuracy
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/search/params/FacetRequest.html
> methonds->methods
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
> implemetation->implementation
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/TimeLimitingCollector.html
> construcutor->constructor 
> http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/store/BufferedIndexInput.html
> bufer->buffer
> http://lucene.apache.org/core/4_1_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.html
> horizonal->horizontal
> http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/writercache/lru/NameHashIntCacheLRU.html
>  
> cahce->cache
> http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/processors/BooleanQuery2ModifierNodeProcessor.html
> precidence->precedence
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie.html
> http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie2.html
> commmands->commands
> Please revise the documentation. 




[jira] [Created] (LUCENE-4789) Typos in API documentation

2013-02-20 Thread Hao Zhong (JIRA)
Hao Zhong created LUCENE-4789:
-

 Summary: Typos in API documentation
 Key: LUCENE-4789
 URL: https://issues.apache.org/jira/browse/LUCENE-4789
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Hao Zhong


http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/analysis/package-summary.html
neccessary->necessary 

http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/LogMergePolicy.html
exceesd->exceed 

http://lucene.apache.org/core/4_1_0/queryparser/serialized-form.html
http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/classic/ParseException.html
followng->following

http://lucene.apache.org/core/4_1_0/codecs/org/apache/lucene/codecs/bloom/FuzzySet.html
qccuracy->accuracy

http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/search/params/FacetRequest.html
methonds->methods

http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/parser/CharStream.html
implemetation->implementation

http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/TimeLimitingCollector.html
construcutor->constructor 

http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/store/BufferedIndexInput.html
bufer->buffer

http://lucene.apache.org/core/4_1_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.html
horizonal->horizontal


http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/writercache/lru/NameHashIntCacheLRU.html
 
cahce->cache

http://lucene.apache.org/core/4_1_0/queryparser/org/apache/lucene/queryparser/flexible/standard/processors/BooleanQuery2ModifierNodeProcessor.html
precidence->precedence


http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie.html
http://lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/stemmer/MultiTrie2.html
commmands->commands

Please revise the documentation. 







Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 186 - Failure

2013-02-20 Thread Steve Rowe
Reproduces for me locally with the same seed.

I also saw this in IntelliJ while getting the classification module 
configuration in shape - different seed though: CF99EEAD4D1B8F7E.  This seed 
reproduces the failure for me under Ant.

This test sometimes succeeds under Ant, Maven, and IntelliJ.
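As an aside, the expected/actual values in these failures are BytesRef contents printed as hex bytes; decoding them (a quick illustrative snippet, not part of the test suite) shows the classifier returned "politics" where "technology" was expected:

```python
# The assertion's expected/actual values are UTF-8 bytes shown in hex.
expected = bytes.fromhex("74 65 63 68 6e 6f 6c 6f 67 79")
actual = bytes.fromhex("70 6f 6c 69 74 69 63 73")
print(expected.decode("utf-8"), "vs", actual.decode("utf-8"))
# technology vs politics
```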

Steve

On Feb 20, 2013, at 1:15 PM, Apache Jenkins Server  
wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/186/
> 
> 1 tests failed.
> FAILED:  
> org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage
> 
> Error Message:
> expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>
> 
> Stack Trace:
> java.lang.AssertionError: expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but 
> was:<[70 6f 6c 69 74 69 63 73]>
>   at 
> __randomizedtesting.SeedInfo.seed([D18A2E4C5ACD05CE:8A9997A9D6C55A2E]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at 
> org.apache.lucene.classification.ClassificationTestBase.checkCorrectClassification(ClassificationTestBase.java:68)
>   at 
> org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage(SimpleNaiveBayesClassifierTest.java:33)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:616)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>   at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnor

[jira] [Created] (LUCENE-4788) Out of date code examples

2013-02-20 Thread Hao Zhong (JIRA)
Hao Zhong created LUCENE-4788:
-

 Summary: Out of date code examples
 Key: LUCENE-4788
 URL: https://issues.apache.org/jira/browse/LUCENE-4788
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.1
Reporter: Hao Zhong
Priority: Critical


The following API documents have code examples:
http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/index/OrdinalMappingAtomicReader.html
"// merge the old taxonomy with the new one.
 OrdinalMap map = DirectoryTaxonomyWriter.addTaxonomies();"

The two code examples call the DirectoryTaxonomyWriter.addTaxonomies method. 
Lucene 3.5 has that method, according to its documentation:
http://lucene.apache.org/core/old_versioned_docs/versions/3_5_0/api/all/org/apache/lucene/facet/taxonomy/directory/DirectoryTaxonomyWriter.html

However, Lucene 4.1 does not have such a method, according to its documentation:
http://lucene.apache.org/core/4_1_0/facet/org/apache/lucene/facet/taxonomy/directory/DirectoryTaxonomyWriter.html
Please update the code examples to reflect the latest implementation.





[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 186 - Failure

2013-02-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/186/

1 tests failed.
FAILED:  org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage

Error Message:
expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>

Stack Trace:
java.lang.AssertionError: expected:<[74 65 63 68 6e 6f 6c 6f 67 79]> but was:<[70 6f 6c 69 74 69 63 73]>
at __randomizedtesting.SeedInfo.seed([D18A2E4C5ACD05CE:8A9997A9D6C55A2E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.lucene.classification.ClassificationTestBase.checkCorrectClassification(ClassificationTestBase.java:68)
at org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testBasicUsage(SimpleNaiveBayesClassifierTest.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)




Build Log:
[...truncated 5832 lines...]
[junit4:junit4] Suite: 
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest
[junit4:junit4]   2> NOTE: download the 

[jira] [Created] (LUCENE-4787) The QueryScorer.getMaxWeight method is not found.

2013-02-20 Thread Hao Zhong (JIRA)
Hao Zhong created LUCENE-4787:
-

 Summary: The QueryScorer.getMaxWeight method is not found.
 Key: LUCENE-4787
 URL: https://issues.apache.org/jira/browse/LUCENE-4787
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Affects Versions: 4.1
Reporter: Hao Zhong
Priority: Critical


The following API documents refer to the QueryScorer.getMaxWeight method:
http://lucene.apache.org/core/4_1_0/highlighter/org/apache/lucene/search/highlight/package-summary.html
"The QueryScorer.getMaxWeight method is useful when passed to the 
GradientFormatter constructor to define the top score which is associated with 
the top color."
http://lucene.apache.org/core/4_1_0/highlighter/org/apache/lucene/search/highlight/GradientFormatter.html
"See QueryScorer.getMaxWeight which can be used to calibrate scoring scale"

However, the QueryScorer class does not declare a getMaxWeight method in Lucene 
4.1, according to its documentation:
http://lucene.apache.org/core/4_1_0/highlighter/org/apache/lucene/search/highlight/QueryScorer.html

Instead, the class declares a getMaxTermWeight method. Is that the correct 
method for the preceding two documents? If it is, please revise the two 
documents. 




