[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build # 11292 - Failure!

2014-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11292/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:52432, https://127.0.0.1:49101, https://127.0.0.1:48784]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:52432, https://127.0.0.1:49101, https://127.0.0.1:48784]
    at __randomizedtesting.SeedInfo.seed([5CFE5A8848933A6E:DD18D4903FCC5A52]:0)
    at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:874)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
    at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
    at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at o

[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 195 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/195/

No tests ran.

Build Log:
[...truncated 52989 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/build.xml:393: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/solr/build.xml:596: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/solr/build.xml:588: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/solr/common-build.xml:440: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/common-build.xml:1578: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/common-build.xml:564: Unable to initialize POM pom.xml: Could not find the model file '/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/poms/solr/contrib/analytics/pom.xml'. for project unknown

Total time: 19 minutes 35 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Joel Bernstein
Congratulations Gregory!

Joel Bernstein
Search Engineer at Heliosearch

On Fri, Sep 19, 2014 at 10:23 PM, Han Jiang  wrote:

> Welcome Gregory!
>
> On Sat, Sep 20, 2014 at 9:26 AM, Ryan Ernst  wrote:
> > Welcome Gregory!
> >
> > On Sep 19, 2014 3:33 PM, "Steve Rowe"  wrote:
> >>
> >> I'm pleased to announce that Gregory Chanan has accepted the PMC's
> >> invitation to become a committer.
> >>
> >> Gregory, it's tradition that you introduce yourself with a brief bio.
> >>
> >> Mark Miller, the Lucene PMC chair, has already added your "gchanan"
> >> account to the "lucene" LDAP group, so you now have commit privileges.
> >> Please test this by adding yourself to the committers section of the
> Who We
> >> Are page on the website:  (use
> the
> >> ASF CMS bookmarklet at the bottom of the page here:
> >>  - more info here
> >> ).
> >>
> >> Since you're a committer on the Apache HBase project, you probably
> already
> >> know about it, but I'll include a link to the ASF dev page anyway -
> lots of
> >> useful links: .
> >>
> >> Congratulations and welcome!
> >>
> >> Steve
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
>
>
>
> --
> Han Jiang
>
> Team of Search Engine and Web Mining,
> School of Electronic Engineering and Computer Science,
> Peking University, China
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Han Jiang
Welcome Gregory!

On Sat, Sep 20, 2014 at 9:26 AM, Ryan Ernst  wrote:
> Welcome Gregory!
>
> On Sep 19, 2014 3:33 PM, "Steve Rowe"  wrote:
>>
>> I'm pleased to announce that Gregory Chanan has accepted the PMC's
>> invitation to become a committer.
>>
>> Gregory, it's tradition that you introduce yourself with a brief bio.
>>
>> Mark Miller, the Lucene PMC chair, has already added your "gchanan"
>> account to the "lucene" LDAP group, so you now have commit privileges.
>> Please test this by adding yourself to the committers section of the Who We
>> Are page on the website:  (use the
>> ASF CMS bookmarklet at the bottom of the page here:
>>  - more info here
>> ).
>>
>> Since you're a committer on the Apache HBase project, you probably already
>> know about it, but I'll include a link to the ASF dev page anyway - lots of
>> useful links: .
>>
>> Congratulations and welcome!
>>
>> Steve
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>



-- 
Han Jiang

Team of Search Engine and Web Mining,
School of Electronic Engineering and Computer Science,
Peking University, China

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6354) Support stats over functions

2014-09-19 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated SOLR-6354:
---
Attachment: SOLR-6354.patch



Updated patch, all tests & javadocs written - no more nocommits.

Other than tests, there's really only one code change between this patch and 
the last one -- fixing AbstractStatsValues.setNextReader to call 
ValueSource.newContext() instead of using Collections.emptyMap(). That was 
never really a problem before, but it becomes one if you try to do 
stats over a QueryValueSource.
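
For illustration, a minimal sketch of that difference (the wrapper class and 
method names here are assumptions for the example, not the patch itself):

{code}
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.search.IndexSearcher;

// Hedged sketch: ValueSource.newContext() seeds the shared context with the
// searcher, which QueryValueSource needs to create its weight; a bare
// Collections.emptyMap() loses that information.
class StatsValueSourceSketch {
  static FunctionValues valuesFor(ValueSource vs, IndexSearcher searcher,
                                  AtomicReaderContext leaf) throws IOException {
    Map context = ValueSource.newContext(searcher);
    vs.createWeight(context, searcher);
    return vs.getValues(context, leaf);
  }
}
{code}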


I'm hoping to get this committed on Monday unless anyone sees any problems.


> Support stats over functions
> 
>
> Key: SOLR-6354
> URL: https://issues.apache.org/jira/browse/SOLR-6354
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6354.patch, SOLR-6354.patch, SOLR-6354.patch, 
> TstStatsComponent.java
>
>
> The majority of the logic in StatsValuesFactory for dealing with stats over 
> fields just uses the ValueSource API.  There's very little reason we can't 
> generalize this to support computing aggregate stats over any arbitrary 
> function (or the scores from an arbitrary query).
> Example...
> {noformat}
> stats.field={!func key=mean_rating 
> mean=true}prod(user_rating,pow(editor_rating,2))
> {noformat}
> ...would mean that we can compute a conceptual "rating" for each doc by 
> multiplying the user_rating field by the square of the editor_rating field, 
> and then we'd compute the mean of that "rating" across all docs in the set 
> and return it as "mean_rating"
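
For readers following along with SolrJ, a request like the example above 
might be issued as follows (the collection URL and field names are 
illustrative, and this is a sketch rather than committed usage):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

// Hedged sketch: request stats over an arbitrary function from SolrJ.
SolrQuery q = new SolrQuery("*:*");
q.set("stats", true);
q.set("stats.field",
    "{!func key=mean_rating mean=true}prod(user_rating,pow(editor_rating,2))");
HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
System.out.println(server.query(q).getResponse().get("stats"));
{code}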



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Ryan Ernst
Welcome Gregory!
On Sep 19, 2014 3:33 PM, "Steve Rowe"  wrote:

> I'm pleased to announce that Gregory Chanan has accepted the PMC's
> invitation to become a committer.
>
> Gregory, it's tradition that you introduce yourself with a brief bio.
>
> Mark Miller, the Lucene PMC chair, has already added your "gchanan"
>> account to the "lucene" LDAP group, so you now have commit privileges.
> Please test this by adding yourself to the committers section of the Who We
> Are page on the website:  (use
> the ASF CMS bookmarklet at the bottom of the page here: <
> https://cms.apache.org/#bookmark> - more info here <
> http://www.apache.org/dev/cms.html>).
>
> Since you’re a committer on the Apache HBase project, you probably already
> know about it, but I'll include a link to the ASF dev page anyway - lots of
> useful links: .
>
> Congratulations and welcome!
>
> Steve
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Resolved] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Muir resolved LUCENE-5965.
-
    Resolution: Fixed
Fix Version/s: Trunk, 5.0

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5965.patch
>
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Give Version parsing exceptions more descriptive error messages

2014-09-19 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141633#comment-14141633 ]

Robert Muir commented on LUCENE-5952:
-

Thanks for beefing this up. The .si file is really central to the segment, so 
any safety we can add is good.

A few questions:
* Can we encode 3 ints instead of 4? As far as I know, the 'prerelease' was 
added to support 4.0-alpha/4.0-beta. This was confusing (my fault), and the 
confusion ultimately worked its way into an index corruption bug. I think we 
should try to contain it to 4.0 instead and not keep things complicated like 
that.
* Can we consider just making a new 5.0 .si writer? It's a pain to bump the 
codec version, but I'll do the work here. We can remove conditionals like 
'supports checksums' as well. 
* I agree we should put these methods in CodecUtil (CodecUtil.readVersion, 
writeVersion). To answer Uwe's question about why a format change is needed 
for the version: IMO it's much better to encode this in a way that does not 
require parsing (a sketch follows below).
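
A minimal sketch of that encoding idea (the method names follow the proposal 
above; they are not committed API at this point):

{code}
import java.io.IOException;
import org.apache.lucene.store.DataInput;
import org.apache.lucene.store.DataOutput;

// Hedged sketch of the proposed CodecUtil helpers: store major/minor/bugfix
// as three vints so that reading a version back never requires string parsing.
class VersionCodecSketch {
  static void writeVersion(DataOutput out, int major, int minor, int bugfix)
      throws IOException {
    out.writeVInt(major);
    out.writeVInt(minor);
    out.writeVInt(bugfix);
  }

  static int[] readVersion(DataInput in) throws IOException {
    // no parsing on the read side -- the point of the proposal
    return new int[] { in.readVInt(), in.readVInt(), in.readVInt() };
  }
}
{code}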

We can follow up with this by improving the exceptions for tiny "slurp-in" 
classes like this (I would personally, as in do the work, also fix .fnm, 
segments_N, .nvm, .dvm, .fdt, and .tvx). I would add a 
CodecUtil.addSuppressedChecksum or something, to easily allow these guys to 
'annotate' any exception on init with checksum failure information. These are 
small but important files, and it would help considering we are dodging 
challenges like JVM bugs here.

I also want to bump the 5.0 codec anyway, to fix the bug where 
Lucene42TermVectorsFormat uses the same codecName as Lucene41StoredFieldsFormat 
in the codec header; that's a stupid bug we should fix.

> Give Version parsing exceptions more descriptive error messages
> ---
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 5.0, Trunk
>
> Attachments: LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch, 
> LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141620#comment-14141620 ]

ASF subversion and git services commented on LUCENE-5965:
-

Commit 1626375 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626375 ]

LUCENE-5965: CorruptIndexException requires a String or DataInput resource
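
As a minimal illustration of what this change means at a call site (the 
checksum check itself is made up for the example; only the constructor 
requirement comes from the commit above):

{code}
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.DataInput;

// Hedged sketch: the resource argument (here the DataInput being read) is
// now mandatory, so the exception always identifies which file was corrupt.
class FooterCheckSketch {
  static void check(DataInput input, long expected, long actual)
      throws CorruptIndexException {
    if (expected != actual) {
      throw new CorruptIndexException("checksum mismatch: expected=" + expected
          + " actual=" + actual, input);
    }
  }
}
{code}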

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-5965.patch
>
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141604#comment-14141604 ]

ASF subversion and git services commented on LUCENE-5965:
-

Commit 1626372 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1626372 ]

LUCENE-5965: CorruptIndexException requires a String or DataInput resource

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-5965.patch
>
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6543) Give HttpSolrServer the ability to send PUT requests

2014-09-19 Thread Gregory Chanan (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gregory Chanan updated SOLR-6543:
-
Attachment: SOLR-6543.patch

One thing I noticed reading over the previous patch again: there's a 
useMultiPartPost setting, but I was using it for PUT requests as well.  A 
multipart PUT seems pretty rare, so I'll just leave the setting name as-is and 
make sure it's only used with POST requests.  The attached patch does this.
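
For context, the kind of request this enables looks roughly like the following 
raw HttpClient call; the URL and field JSON are illustrative only, and the 
point of the patch is to let HttpSolrServer issue such requests itself:

{code}
import java.io.IOException;

import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Hedged sketch: a schema-API style PUT sent with plain HttpClient.
class SchemaPutSketch {
  static void putField() throws IOException {
    try (CloseableHttpClient http = HttpClients.createDefault()) {
      HttpPut put = new HttpPut(
          "http://localhost:8983/solr/collection1/schema/fields/price");
      put.setEntity(new StringEntity("{\"type\":\"tfloat\",\"stored\":true}",
          ContentType.APPLICATION_JSON));
      http.execute(put).close();
    }
  }
}
{code}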

> Give HttpSolrServer the ability to send PUT requests
> 
>
> Key: SOLR-6543
> URL: https://issues.apache.org/jira/browse/SOLR-6543
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-6543.patch, SOLR-6543.patch
>
>
> Given that the schema API has a PUT request 
> (https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-Createonenewschemafield)
>  it would be nice if HttpSolrServer supported sending PUTs, so it could be 
> used for sending that type of request.  Note if we really wanted to fully 
> support that request we'd probably want a Request/Response type in solrj as 
> well, but that can be handled in a separate issue.
> Also, administrators may add arbitrary filters that require PUT requests.  In 
> my own setup, I have a version of Hadoop's 
> DelegationTokenAuthenticationFilter sitting in front of the dispatch filter.  
> Here also it would be nice if I could send all requests via HttpSolrServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5958) OOM or exceptions during checkpoint make IndexWriter have a bad day

2014-09-19 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141551#comment-14141551 ]

ASF subversion and git services commented on LUCENE-5958:
-

Commit 1626368 from [~rcmuir] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1626368 ]

LUCENE-5958: add logic to merge exc handling as well

> OOM or exceptions during checkpoint make IndexWriter have a bad day
> ---
>
> Key: LUCENE-5958
> URL: https://issues.apache.org/jira/browse/LUCENE-5958
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.10.1, 5.0, Trunk
>
> Attachments: LUCENE-5958.patch
>
>
> During finishCommit(), we run checkpoint after we wrote the commit to disk, 
> but if things go wrong here (e.g. an IOError when IFD deletes a pending file, 
> or OOM), then everything will go wrong (we won't even properly incRef things, 
> and may end up deleting the wrong files if the user calls rollback, leaving a 
> corrupt index).
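
The hardening this leads to is visible in later test-failure messages; a 
hedged sketch of the guard, not the committed diff (the 'tragedy' field name 
is an assumption):

{code}
import org.apache.lucene.store.AlreadyClosedException;

// Hedged sketch: once the writer records an unrecoverable ("tragic")
// exception, the deleter refuses to touch files instead of guessing which
// ones are still referenced.
class DeleterGuardSketch {
  volatile Throwable tragedy; // set when IndexWriter hits an unrecoverable exception

  void ensureOpen() {
    if (tragedy != null) {
      throw new AlreadyClosedException(
          "refusing to delete any files: this IndexWriter hit an unrecoverable exception");
    }
  }
}
{code}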



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5958) OOM or exceptions during checkpoint make IndexWriter have a bad day

2014-09-19 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141541#comment-14141541 ]

ASF subversion and git services commented on LUCENE-5958:
-

Commit 1626366 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626366 ]

LUCENE-5958: add logic to merge exc handling as well

> OOM or exceptions during checkpoint make IndexWriter have a bad day
> ---
>
> Key: LUCENE-5958
> URL: https://issues.apache.org/jira/browse/LUCENE-5958
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.10.1, 5.0, Trunk
>
> Attachments: LUCENE-5958.patch
>
>
> During finishCommit(), we run checkpoint after we wrote the commit to disk, 
> but if things go wrong here (e.g. an IOError when IFD deletes a pending file, 
> or OOM), then everything will go wrong (we won't even properly incRef things, 
> and may end up deleting the wrong files if the user calls rollback, leaving a 
> corrupt index).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Shawn Heisey
On 9/19/2014 4:33 PM, Steve Rowe wrote:
> I'm pleased to announce that Gregory Chanan has accepted the PMC's invitation 
> to become a committer.

Welcome to the madness!

Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5958) OOM or exceptions during checkpoint make IndexWriter have a bad day

2014-09-19 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141532#comment-14141532 ]

ASF subversion and git services commented on LUCENE-5958:
-

Commit 1626363 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1626363 ]

LUCENE-5958: add logic to merge exc handling as well

> OOM or exceptions during checkpoint make IndexWriter have a bad day
> ---
>
> Key: LUCENE-5958
> URL: https://issues.apache.org/jira/browse/LUCENE-5958
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.10.1, 5.0, Trunk
>
> Attachments: LUCENE-5958.patch
>
>
> During finishCommit(), we run checkpoint after we wrote the commit to disk, 
> but if things go wrong here (e.g. an IOError when IFD deletes a pending file, 
> or OOM), then everything will go wrong (we won't even properly incRef things, 
> and may end up deleting the wrong files if the user calls rollback, leaving a 
> corrupt index).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Erick Erickson
Welcome aboard!

Erick

On Fri, Sep 19, 2014 at 4:10 PM, Michael McCandless wrote:
> Welcome Gregory!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Sep 19, 2014 at 7:09 PM, Otis Gospodnetic wrote:
>> Congratulations!
>>
>> Otis
>> --
>> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
>> Solr & Elasticsearch Support * http://sematext.com/
>>
>>
>> On Fri, Sep 19, 2014 at 6:33 PM, Steve Rowe  wrote:
>>>
>>> I'm pleased to announce that Gregory Chanan has accepted the PMC's
>>> invitation to become a committer.
>>>
>>> Gregory, it's tradition that you introduce yourself with a brief bio.
>>>
>>> Mark Miller, the Lucene PMC chair, has already added your "gchanan"
>>> account to the "lucene" LDAP group, so you now have commit privileges.
>>> Please test this by adding yourself to the committers section of the Who We
>>> Are page on the website:  (use the
>>> ASF CMS bookmarklet at the bottom of the page here:
>>>  - more info here
>>> ).
>>>
>>> Since you’re a committer on the Apache HBase project, you probably already
>>> know about it, but I'll include a link to the ASF dev page anyway - lots of
>>> useful links: .
>>>
>>> Congratulations and welcome!
>>>
>>> Steve
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Solr-Artifacts-5.x - Build # 616 - Still Failing

2014-09-19 Thread Steve Rowe
Sorry I haven’t fixed this yet.  I plan to fix it today. - Steve

On Sep 19, 2014, at 7:16 PM, Apache Jenkins Server  
wrote:

> Build: https://builds.apache.org/job/Solr-Artifacts-5.x/616/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 36196 lines...]
> BUILD FAILED
> /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:596: The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:588: The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/common-build.xml:440: The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:1578: The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:564: Unable to initialize POM pom.xml: Could not find the model file '/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/build/poms/solr/contrib/analytics/pom.xml'. for project unknown
> 
> Total time: 13 minutes 41 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Sending artifact delta relative to Solr-Artifacts-5.x #612
> Archived 89 artifacts
> Archive block size is 32768
> Received 2589 blocks and 300850584 bytes
> Compression is 22.0%
> Took 4 min 21 sec
> Publishing Javadoc
> Email was triggered for: Failure
> Sending email for trigger: Failure
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-Artifacts-5.x - Build # 616 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/616/

No tests ran.

Build Log:
[...truncated 36196 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:596: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:588: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/common-build.xml:440: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:1578: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:564: Unable to initialize POM pom.xml: Could not find the model file '/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/build/poms/solr/contrib/analytics/pom.xml'. for project unknown

Total time: 13 minutes 41 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Solr-Artifacts-5.x #612
Archived 89 artifacts
Archive block size is 32768
Received 2589 blocks and 300850584 bytes
Compression is 22.0%
Took 4 min 21 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Michael McCandless
Welcome Gregory!

Mike McCandless

http://blog.mikemccandless.com


On Fri, Sep 19, 2014 at 7:09 PM, Otis Gospodnetic
 wrote:
> Congratulations!
>
> Otis
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
> On Fri, Sep 19, 2014 at 6:33 PM, Steve Rowe  wrote:
>>
>> I'm pleased to announce that Gregory Chanan has accepted the PMC's
>> invitation to become a committer.
>>
>> Gregory, it's tradition that you introduce yourself with a brief bio.
>>
>> Mark Miller, the Lucene PMC chair, has already added your "gchanan"
>> account to the "lucene" LDAP group, so you now have commit privileges.
>> Please test this by adding yourself to the committers section of the Who We
>> Are page on the website:  (use the
>> ASF CMS bookmarklet at the bottom of the page here:
>>  - more info here
>> ).
>>
>> Since you’re a committer on the Apache HBase project, you probably already
>> know about it, but I'll include a link to the ASF dev page anyway - lots of
>> useful links: .
>>
>> Congratulations and welcome!
>>
>> Steve
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Otis Gospodnetic
Congratulations!

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On Fri, Sep 19, 2014 at 6:33 PM, Steve Rowe  wrote:

> I'm pleased to announce that Gregory Chanan has accepted the PMC's
> invitation to become a committer.
>
> Gregory, it's tradition that you introduce yourself with a brief bio.
>
> Mark Miller, the Lucene PMC chair, has already added your "gchanan"
> account to the "lucene" LDAP group, so you now have commit privileges.
> Please test this by adding yourself to the committers section of the Who We
> Are page on the website:  (use
> the ASF CMS bookmarklet at the bottom of the page here: <
> https://cms.apache.org/#bookmark> - more info here <
> http://www.apache.org/dev/cms.html>).
>
> Since you’re a committer on the Apache HBase project, you probably already
> know about it, but I'll include a link to the ASF dev page anyway - lots of
> useful links: .
>
> Congratulations and welcome!
>
> Steve
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2122 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2122/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup:
   1) Thread[id=5881, name=Thread-2058, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=5881, name=Thread-2058, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)
        at __randomizedtesting.SeedInfo.seed([A98D47E8931C2C82]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=5881, name=Thread-2058, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=5881, name=Thread-2058, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.

Re: [VOTE] Release 4.9.1 RC1

2014-09-19 Thread Mark Miller
+1

SUCCESS! [0:45:18.052513]

-- 
- Mark

http://about.me/markrmiller

On Thu, Sep 18, 2014 at 4:53 AM, Michael McCandless <luc...@mikemccandless.com> wrote:

> Artifacts here:
>
> http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1-RC1-rev1625909
>
> Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1-RC1-rev1625909 1625909 4.9.1 /tmp/smoke491 True
>
> > SUCCESS! [0:23:57.460556]
>
> Here's my +1
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Yonik Seeley
Congrats, Greg!

-Yonik
http://heliosearch.org - native code faceting, facet functions,
sub-facets, off-heap data


On Fri, Sep 19, 2014 at 6:33 PM, Steve Rowe  wrote:
> I'm pleased to announce that Gregory Chanan has accepted the PMC's invitation 
> to become a committer.
>
> Gregory, it's tradition that you introduce yourself with a brief bio.
>
> Mark Miller, the Lucene PMC chair, has already added your "gchanan" account 
> to the "lucene" LDAP group, so you now have commit privileges.  Please test 
> this by adding yourself to the committers section of the Who We Are page on 
> the website:  (use the ASF CMS 
> bookmarklet at the bottom of the page here: 
>  - more info here 
> ).
>
> Since you’re a committer on the Apache HBase project, you probably already 
> know about it, but I'll include a link to the ASF dev page anyway - lots of 
> useful links: .
>
> Congratulations and welcome!
>
> Steve

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Welcome Gregory Chanan as Lucene/Solr committer

2014-09-19 Thread Steve Rowe
I'm pleased to announce that Gregory Chanan has accepted the PMC's invitation 
to become a committer.

Gregory, it's tradition that you introduce yourself with a brief bio.

Mark Miller, the Lucene PMC chair, has already added your "gchanan" account to 
the "lucene" LDAP group, so you now have commit privileges.  Please test this 
by adding yourself to the committers section of the Who We Are page on the 
website:  (use the ASF CMS bookmarklet 
at the bottom of the page here:  - more info 
here ).

Since you’re a committer on the Apache HBase project, you probably already know 
about it, but I'll include a link to the ASF dev page anyway - lots of useful 
links: .

Congratulations and welcome!

Steve


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-19 Thread Anshum Gupta (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141451#comment-14141451 ]

Anshum Gupta commented on SOLR-5986:


Thanks for that feedback, Steve. I think I overlooked it during the iterations 
and changes. I spoke to Steve Rowe, and he's about to post an updated patch 
that also includes that change.

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system, which made us 
> restart the replicas that happened to service that one request; in the worst 
> case, users with a relatively low ZooKeeper timeout value will have nodes 
> start dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E
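
For reference, the existing per-request knob this issue builds on can be set 
from SolrJ as sketched below (the query string is made up; this caps a 
request rather than solving the heap-estimation side of the ticket):

{code}
import org.apache.solr.client.solrj.SolrQuery;

// Hedged sketch: timeAllowed (milliseconds) caps how long a request may run;
// results returned after the cutoff may be partial.
class TimeAllowedSketch {
  static SolrQuery cappedQuery() {
    SolrQuery q = new SolrQuery("title:a* AND body:nasty*");
    q.setTimeAllowed(60000); // give up after one minute, not hours
    return q;
  }
}
{code}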



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-5x-Linux-Java7-64-test-only - Build # 31521 - Failure!

2014-09-19 Thread Robert Muir
This is a test bug. I will fix it.
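
For reference, the suite-level escape hatch named in the error below looks 
roughly like this (a hedged sketch; the bug URL is a placeholder, and the 
real fix here is the test bug itself):

{code}
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;

// Hedged sketch: suppress the sysout-limit check for a known-noisy suite.
@SuppressSysoutChecks(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-XXXX")
public class TestIndexFileDeleter extends LuceneTestCase {
  // test methods unchanged
}
{code}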

On Fri, Sep 19, 2014 at 5:51 PM,   wrote:
> Build: builds.flonkings.com/job/Lucene-5x-Linux-Java7-64-test-only/31521/
>
> 2 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexFileDeleter
>
> Error Message:
> The test or suite printed 12948 bytes to stdout and stderr, even though the 
> limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>
> Stack Trace:
> java.lang.AssertionError: The test or suite printed 12948 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>         at __randomizedtesting.SeedInfo.seed([2970400A822126F8]:0)
>         at org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:210)
>         at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
>         at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>         at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>         at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>         at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>         at java.lang.Thread.run(Thread.java:745)
>
> REGRESSION:  org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=208, name=Lucene Merge 
> Thread #0, state=RUNNABLE, group=TGRP-TestIndexFileDeleter]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=208, name=Lucene Merge Thread #0, state=RUNNABLE, group=TGRP-TestIndexFileDeleter]
>         at __randomizedtesting.SeedInfo.seed([2970400A822126F8:C0ED3738F4E8C105]:0)
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.store.AlreadyClosedException: refusing to delete any files: this IndexWriter hit an unrecoverable exception
>         at __randomizedtesting.SeedInfo.seed([2970400A822126F8]:0)
>         at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>         at org.apache.lucene.index.TestIndexFileDeleter$3.handleMergeException(TestIndexFileDeleter.java:439)
>         at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: org.apache.lucene.store.AlreadyClosedException: refusing to delete any files: this IndexWriter hit an unrecoverable exception
>         at org.apache.lucene.index.IndexFileDeleter.ensureOpen(IndexFileDeleter.java:350)
>         at org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:719)
>         at org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:450)
>         at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3551)
>         at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>         at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> Caused by: java.lang.RuntimeException: fake fail
>         at org.apache.lucene.index.TestIndexFileDeleter$2.eval(TestIndexFileDeleter.java:421)
>         at org.apache.lucene.store.MockDirectoryWrapper.maybeThrowDeterministicException(MockDirectoryWrapper.java:957)
>         at org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:500)
>         at org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:473)
>         at org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:724)
>         at org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:656)
>         at org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:619)
>         at org.apache.lucene.index.IndexFileDeleter.deleteCommits(IndexFileDeleter.java:377)
>         at org.apache.lucene.index.IndexFileDeleter.checkpoint(IndexFileDeleter.java:568)
>         at org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:2911)
>         at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2886)
>         at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2846)

[jira] [Updated] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-09-19 Thread Timothy Potter (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Timothy Potter updated SOLR-6511:
-
Attachment: SOLR-6511.patch

Here's an updated patch. It'll need to be updated again after SOLR-6530 is 
committed. Key things in this patch are:

1) HttpPartitionTest.testLeaderZkSessionLoss: reproduces the scenario described 
in this ticket

2) DistributedUpdateProcessor now checks whether the reason for a failure was a 
leader change; if so, the request fails and an error is sent to the client.

I had to add a way to pass through some additional context information about an 
error from server to client; I'll do that work in another ticket, but this 
patch shows the approach I'm taking.

Lastly, HttpPartitionTest continues to be a problem - I beast'd it 10 times and 
it failed after 6 runs locally (sometimes fewer), so I'll need to get that 
problem resolved before committing this patch too. It consistently fails in 
testRf3WithLeaderFailover, but for different reasons each time. My thinking is 
to break the problem test case (testRf3WithLeaderFailover) out into its own 
test class, since the other tests in this class work well and cover a lot of 
important functionality.

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.
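
To spell out the fencepost, a self-contained demonstration (not project code):

{code}
// With maxTries == 1 the original condition tests 1 < 1 on the first pass,
// so the body never runs; the <= form runs it exactly maxTries times.
public class Fencepost {
  public static void main(String[] args) {
    int maxTries = 1;
    int tries = 0;
    while (++tries < maxTries) {
      System.out.println("never reached");
    }
    tries = 0;
    while (++tries <= maxTries) {
      System.out.println("try " + tries); // prints exactly once
    }
  }
}
{code}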



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6537) A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD

2014-09-19 Thread Anshum Gupta (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anshum Gupta updated SOLR-6537:
---
Attachment: SOLR-6537.patch

Removed an unwanted logging statement from in there.

> A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD
> -
>
> Key: SOLR-6537
> URL: https://issues.apache.org/jira/browse/SOLR-6537
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Anshum Gupta
>  Labels: difficulty-easy, impact-low
> Fix For: 5.0
>
> Attachments: SOLR-6537.patch, SOLR-6537.patch, SOLR-6537.patch
>
>
> If one has defined a uLogDir which is not inside the dataDir then the 
> CoreAdmin#UNLOAD call will not clean it up.
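
For context, the call in question issued from SolrJ looks roughly like this 
(a hedged sketch; the core name and URL are illustrative):

{code}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

// Hedged sketch: even with both delete flags set, a uLogDir configured
// outside dataDir is left behind, which is the bug described above.
class UnloadSketch {
  static void unload() throws Exception {
    SolrServer admin = new HttpSolrServer("http://localhost:8983/solr");
    CoreAdminRequest.unloadCore("core1", true /* deleteIndex */,
        true /* deleteInstanceDir */, admin);
  }
}
{code}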



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6537) A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD

2014-09-19 Thread Anshum Gupta (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anshum Gupta updated SOLR-6537:
---
Attachment: (was: SOLR-6537.patch)

> A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD
> -
>
> Key: SOLR-6537
> URL: https://issues.apache.org/jira/browse/SOLR-6537
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Anshum Gupta
>  Labels: difficulty-easy, impact-low
> Fix For: 5.0
>
> Attachments: SOLR-6537.patch, SOLR-6537.patch, SOLR-6537.patch
>
>
> If one has defined a uLogDir which is not inside the dataDir then the 
> CoreAdmin#UNLOAD call will not clean it up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4870 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4870/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:49084, http://127.0.0.1:49097, http://127.0.0.1:49045, http://127.0.0.1:49062, http://127.0.0.1:49032]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:49084, http://127.0.0.1:49097, http://127.0.0.1:49045, http://127.0.0.1:49062, http://127.0.0.1:49032]
    at __randomizedtesting.SeedInfo.seed([E5BC2CF747600B4C:645AA2EF303F6B70]:0)
    at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:874)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
    at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
    at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
    at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.Statemen

[jira] [Updated] (SOLR-6537) A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD

2014-09-19 Thread Anshum Gupta (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anshum Gupta updated SOLR-6537:
---
Attachment: SOLR-6537.patch

An updated patch with a working test. I need to spend a little more time on 
this to clean it up and improve the variable names.

> A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD
> -
>
> Key: SOLR-6537
> URL: https://issues.apache.org/jira/browse/SOLR-6537
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Anshum Gupta
>  Labels: difficulty-easy, impact-low
> Fix For: 5.0
>
> Attachments: SOLR-6537.patch, SOLR-6537.patch, SOLR-6537.patch
>
>
> If one has defined a uLogDir which is not inside the dataDir then the 
> CoreAdmin#UNLOAD call will not clean it up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6530) Commits under network partition can put any node in down state by any node

2014-09-19 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6530:

Attachment: SOLR-6530.patch

My last fix was not complete. Checking whether I am the leader is not enough, 
because commits are broadcast to the entire collection without regard to 
shards. So it is still possible that a core which is the leader of shard2 runs 
LIR code against a leader or replica of another shard.

I've added a test case to reproduce this. The fix is again simple: we just 
don't run recovery for commits at all.
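
For context, the shape of the fix looks roughly like this (a sketch only; the 
method and helper names here are illustrative, not the actual patch):

{code}
// Sketch: when a distributed commit fails to reach a replica, do not
// start leader-initiated recovery; a commit carries no data to lose,
// and the client can simply retry it.
private void maybeRequestRecovery(UpdateCommand cmd, SolrCmdDistributor.Error err) {
  if (cmd instanceof CommitUpdateCommand) {
    return; // SOLR-6530: never run LIR because a commit failed
  }
  requestLeaderInitiatedRecovery(err); // hypothetical helper
}
{code}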

> Commits under network partition can put any node in down state by any node
> --
>
> Key: SOLR-6530
> URL: https://issues.apache.org/jira/browse/SOLR-6530
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6530.patch, SOLR-6530.patch, SOLR-6530.patch, 
> SOLR-6530.patch
>
>
> Commits are executed by any node in SolrCloud i.e. they're not routed via the 
> leader like other updates. 
> # Suppose there's 1 collection, 1 shard, 2 replicas (A and B) and A is the 
> leader
> # Suppose a commit request is made to node B during a time where B cannot 
> talk to A due to a partition for any reason (failing switch, heavy GC, 
> whatever)
> # B fails to distribute the commit to A (times out) and asks A to recover
> # This was okay earlier because a leader just ignores recovery requests but 
> with leader initiated recovery code, B puts A in the "down" state and A can 
> never get out of that state.
> tl;dr: During network partitions, if enough commit/optimize requests are sent 
> to the cluster, all the nodes in the cluster will eventually be marked as 
> "down".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-5x-Linux-Java7-64-test-only - Build # 31521 - Failure!

2014-09-19 Thread builder
Build: builds.flonkings.com/job/Lucene-5x-Linux-Java7-64-test-only/31521/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexFileDeleter

Error Message:
The test or suite printed 12948 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 12948 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([2970400A822126F8]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:210)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
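
If the suite legitimately needs to print that much, the limit can be raised on 
the test class; a minimal sketch (the byte count here is arbitrary):

  @TestRuleLimitSysouts.Limit(bytes = 16384)
  public class TestIndexFileDeleter extends LuceneTestCase {
    // ... or annotate with @LuceneTestCase.SuppressSysoutChecks(bugUrl = "...")
    //     to skip the check entirely, as the message suggests.
  }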


REGRESSION:  org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef

Error Message:
Captured an uncaught exception in thread: Thread[id=208, name=Lucene Merge 
Thread #0, state=RUNNABLE, group=TGRP-TestIndexFileDeleter]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=208, name=Lucene Merge Thread #0, 
state=RUNNABLE, group=TGRP-TestIndexFileDeleter]
at 
__randomizedtesting.SeedInfo.seed([2970400A822126F8:C0ED3738F4E8C105]:0)
Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
org.apache.lucene.store.AlreadyClosedException: refusing to delete any files: 
this IndexWriter hit an unrecoverable exception
at __randomizedtesting.SeedInfo.seed([2970400A822126F8]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.TestIndexFileDeleter$3.handleMergeException(TestIndexFileDeleter.java:439)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: org.apache.lucene.store.AlreadyClosedException: refusing to delete 
any files: this IndexWriter hit an unrecoverable exception
at 
org.apache.lucene.index.IndexFileDeleter.ensureOpen(IndexFileDeleter.java:350)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:719)
at 
org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:450)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3551)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
Caused by: java.lang.RuntimeException: fake fail
at 
org.apache.lucene.index.TestIndexFileDeleter$2.eval(TestIndexFileDeleter.java:421)
at 
org.apache.lucene.store.MockDirectoryWrapper.maybeThrowDeterministicException(MockDirectoryWrapper.java:957)
at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:500)
at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:473)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:724)
at 
org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:656)
at 
org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:619)
at 
org.apache.lucene.index.IndexFileDeleter.deleteCommits(IndexFileDeleter.java:377)
at 
org.apache.lucene.index.IndexFileDeleter.checkpoint(IndexFileDeleter.java:568)
at 
org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:2911)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2886)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2846)
at 
org.apache.lucene.index.RandomIndexWriter.commit(RandomIndexWriter.java:254)
at 
org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:461)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  

[jira] [Commented] (SOLR-6537) A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD

2014-09-19 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141340#comment-14141340
 ] 

Anshum Gupta commented on SOLR-6537:


You're trying to set core-level properties (uLogDir, instanceDir, dataDir) 
while creating the collection on the same machine and in the same directory. 
That leads to a conflict, and core creation fails.
You might want to look at creating a collection without those properties and 
then adding a core with those properties (for the purpose of testing this), or 
perhaps just create a collection with a single core (1 shard, 1 replica) and 
delete that.
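
For the single-core route, a minimal SolrJ sketch (names and paths are made 
up; the external ulogDir itself would come from the core's solrconfig.xml):

{code}
HttpSolrServer admin = new HttpSolrServer("http://127.0.0.1:8983/solr");

// create one core whose solrconfig.xml points its updateLog outside dataDir
CoreAdminRequest.Create create = new CoreAdminRequest.Create();
create.setCoreName("unloadme");
create.setInstanceDir("unloadme");
create.setDataDir("/tmp/unloadme-data");
create.process(admin);

// unload it and ask CoreAdmin to clean up; SOLR-6537 is about the external
// ulogDir surviving this even with the delete flags set
CoreAdminRequest.Unload unload = new CoreAdminRequest.Unload(true);
unload.setCoreName("unloadme");
unload.setDeleteDataDir(true);
unload.setDeleteInstanceDir(true);
unload.process(admin);
{code}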

> A uLogDir configured outside of dataDir is not cleaned up by CoreAdmin UNLOAD
> -
>
> Key: SOLR-6537
> URL: https://issues.apache.org/jira/browse/SOLR-6537
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Anshum Gupta
>  Labels: difficulty-easy, impact-low
> Fix For: 5.0
>
> Attachments: SOLR-6537.patch, SOLR-6537.patch
>
>
> If one has defined a uLogDir which is not inside the dataDir then the 
> CoreAdmin#UNLOAD call will not clean it up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6501) Binary Response Writer does not return wildcard fields

2014-09-19 Thread Burke Webster (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141326#comment-14141326
 ] 

Burke Webster commented on SOLR-6501:
-

I just opened SOLR-6545, which I believe is related to this issue. The fix 
here addresses the exception reported in SOLR-6545, but there are still some 
cases where the correct response isn't returned.

> Binary Response Writer does not return wildcard fields
> --
>
> Key: SOLR-6501
> URL: https://issues.apache.org/jira/browse/SOLR-6501
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Mike Hugo
>Assignee: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 4.10.1, 5.0, Trunk
>
> Attachments: SOLR-6501.patch
>
>
> In solr 4.10.0 queries that request dynamic fields by passing in a fl=*_exact 
> parameter do not return any fields.  This appears to only be a problem when 
> requesting wildcarded fields via SolrJ (BinaryResponseWriter).  Looks like 
> this may have been introduced via 
> https://issues.apache.org/jira/browse/SOLR-5968
> With Solr 4.10.0 - I downloaded the binary and set up the example:
> cd example
> java -jar start.jar
> java -jar post.jar solr.xml monitor.xml
> In a browser, if I request 
> http://localhost:8983/solr/collection1/select?q=*:*&wt=json&indent=true&fl=*d
> All is well with the world:
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 1,
> "params": {
> "fl": "*d",
> "indent": "true",
> "q": "*:*",
> "wt": "json"
> }
> },
> "response": {
> "numFound": 2,
> "start": 0,
> "docs": [
> {
> "id": "SOLR1000"
> },
> {
> "id": "3007WFP"
> }
> ]
> }
> }
> {code}
> However if I do the same query with SolrJ (groovy script)
> {code}
> @Grab(group = 'org.apache.solr', module = 'solr-solrj', version = '4.10.0')
> import org.apache.solr.client.solrj.SolrQuery
> import org.apache.solr.client.solrj.impl.HttpSolrServer
> HttpSolrServer solrServer = new 
> HttpSolrServer("http://localhost:8983/solr/collection1";)
> SolrQuery q = new SolrQuery("*:*")
> q.setFields("*d")
> println solrServer.query(q)
> {code}
> No fields are returned:
> {code}
> {responseHeader={status=0,QTime=0,params={fl=*d,q=*:*,wt=javabin,version=2}},response={numFound=2,start=0,docs=[SolrDocument{},
>  SolrDocument{}]}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6544) Remove unused arguments from methods in DeleteReplicaTest

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141294#comment-14141294
 ] 

ASF subversion and git services commented on SOLR-6544:
---

Commit 1626330 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626330 ]

SOLR-6544: Fix extra iterations and unwanted arguments in helper methods in 
DeleteReplicaTest (Merge from trunk)

> Remove unused arguments from methods in DeleteReplicaTest
> -
>
> Key: SOLR-6544
> URL: https://issues.apache.org/jira/browse/SOLR-6544
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6544.patch
>
>
> There are unused arguments being passed in helper methods in 
> DeleteReplicaTest. We should remove those to avoid confusion.
> Also, the test has some unwanted iterations to get an active replica, fix 
> that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6544) Remove unused arguments from methods in DeleteReplicaTest

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141282#comment-14141282
 ] 

ASF subversion and git services commented on SOLR-6544:
---

Commit 1626328 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1626328 ]

SOLR-6544: Fix extra iterations and unwanted arguments in helper methods in 
DeleteReplicaTest

> Remove unused arguments from methods in DeleteReplicaTest
> -
>
> Key: SOLR-6544
> URL: https://issues.apache.org/jira/browse/SOLR-6544
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6544.patch
>
>
> There are unused arguments being passed in helper methods in 
> DeleteReplicaTest. We should remove those to avoid confusion.
> Also, the test has some unwanted iterations to get an active replica, fix 
> that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6545) Query field list with wild card on dynamic field fails

2014-09-19 Thread Burke Webster (JIRA)
Burke Webster created SOLR-6545:
---

 Summary: Query field list with wild card on dynamic field fails
 Key: SOLR-6545
 URL: https://issues.apache.org/jira/browse/SOLR-6545
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
 Environment: Mac OS X 10.9.5, Ubuntu 14.04.1 LTS
Reporter: Burke Webster
Priority: Critical


Downloaded 4.10.0, unpacked it, and set up a 2-node SolrCloud cluster by running: 
  bin/solr -e cloud 

Accept all the default options and you will have a 2-node cloud running with a 
replication factor of 2.  

Now add 2 documents by going to example/exampledocs, creating the following 
file named my_test.xml:

<add>
 <doc>
  <field name="id">1000</field>
  <field name="name">test 1</field>
  <field name="description">Text about test 1.</field>
  <field name="cat_a_s">Category A</field>
 </doc>
 <doc>
  <field name="id">1001</field>
  <field name="name">test 2</field>
  <field name="description">Stuff about test 2.</field>
  <field name="cat_b_s">Category B</field>
 </doc>
</add>

Then import these documents by running:
  java -Durl=http://localhost:7574/solr/gettingstarted/update -jar post.jar 
my_test.xml

Verify the docs are there by hitting:
  http://localhost:8983/solr/gettingstarted/select?q=*:*

Now run a query and ask for only the id and cat_*_s fields:
  http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,cat_*

You will only get the id fields back.  Change the query a little to include a 
third field:
  http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,name,cat_*

You will now get the following exception:
java.lang.NullPointerException
at 
org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:744)

I believe SOLR-6501 partially fixes the issue.  After downloading build 607 
(4.11.0-2014-09-11_22-31-51 1624413 - jenkins - 2014-09-11 22:32:47) which 
contains the fix for SOLR-6501 and going through the same setup as above, I 
still see some issues but no exceptions are thrown.

With build 607, running a query for id and a wild card 

[JENKINS] Solr-Artifacts-5.x - Build # 615 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/615/

No tests ran.

Build Log:
[...truncated 36206 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:596:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:588:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/common-build.xml:440:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:1578:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:564:
 Unable to initialize POM pom.xml: Could not find the model file 
'/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/build/poms/solr/contrib/analytics/pom.xml'.
 for project unknown

Total time: 13 minutes 48 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Solr-Artifacts-5.x #612
Archived 89 artifacts
Archive block size is 32768
Received 2589 blocks and 300844174 bytes
Compression is 22.0%
Took 3 min 17 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-6544) Remove unused arguments from methods in DeleteReplicaTest

2014-09-19 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6544:
---
Attachment: SOLR-6544.patch

> Remove unused arguments from methods in DeleteReplicaTest
> -
>
> Key: SOLR-6544
> URL: https://issues.apache.org/jira/browse/SOLR-6544
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6544.patch
>
>
> There are unused arguments being passed in helper methods in 
> DeleteReplicaTest. We should remove those to avoid confusion.
> Also, the test has some unwanted iterations to get an active replica, fix 
> that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6544) Remove unused arguments from methods in DeleteReplicaTest

2014-09-19 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-6544:
--

 Summary: Remove unused arguments from methods in DeleteReplicaTest
 Key: SOLR-6544
 URL: https://issues.apache.org/jira/browse/SOLR-6544
 Project: Solr
  Issue Type: Improvement
  Components: Tests
Reporter: Anshum Gupta
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0


There are unused arguments being passed in helper methods in DeleteReplicaTest. 
We should remove those to avoid confusion.

Also, the test has some unwanted iterations to get an active replica; fix that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-09-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141269#comment-14141269
 ] 

Alan Woodward commented on SOLR-6511:
-

I think the safest response is to return the error to the client.  Updates are 
idempotent, right?  An ADD will just overwrite the previous ADD; a DELETE 
doesn't necessarily have to delete anything to be successful, etc.  So if the 
client gets a 503 back again, it can just resend.

The only tricky bit might be what happens if a replica finds itself ahead of 
its leader, as would be the case here.  Does it automatically try to send 
updates on, or does it roll back?
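
As an aside, the fencepost in the description below is easiest to see with 
maxTries = 1 (a minimal standalone sketch):

{code}
int maxTries = 1;
int tries = 0;
while (++tries < maxTries) {
  // never reached: the very first check is 1 < 1
}
// with the fixed condition, ++tries <= maxTries, the body runs exactly once
{code}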

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Attachments: SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6520) Documentation web page is missing link to live Solr Reference Guide

2014-09-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141267#comment-14141267
 ] 

Alexandre Rafalovitch commented on SOLR-6520:
-

I'd skip the Confluence bit (TMI), but the rest looks fine. Frankly, as the 
next step I care that we put *something* in front of both users and Google. If 
that works, great. If not, we can improve later. 

It would be nice, though, if the Confluence space had Google Analytics or 
similar enabled. It could make for a fun little project to figure out what 
people are actually looking for and whether they are finding it. Currently, 
there is no feedback loop at all on the usefulness of the documentation. We 
know it is useful overall, but that's a lousy granularity to be satisfied with.

> Documentation web page is missing link to live Solr Reference Guide
> ---
>
> Key: SOLR-6520
> URL: https://issues.apache.org/jira/browse/SOLR-6520
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 4.10
> Environment: web
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>  Labels: documentation, website
>
> The [official document page for 
> Solr|https://lucene.apache.org/solr/documentation.html] is missing the link 
> to the live Solr Reference Guide. Only the link to PDF is there. In fact, one 
> has to go to the WIKI, it seems to find the link. 
> It is also not linked from [the release-specific documentation 
> page|https://lucene.apache.org/solr/4_10_0/index.html] either.
> This means the search engines do not easily discover the new content and it 
> does not show up in searches for when people look for information. It also 
> means people may hesitate to look at it, if they have to download the whole 
> PDF first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2121 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2121/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D1E9884D0A34B708:500F06557D6BD734]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:706)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:285)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:271)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.del(AbstractFullDistribZkTestBase.java:729)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.tryDelete(ChaosMonkeySafeLeaderTest.java:194)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:112)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr

2014-09-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141222#comment-14141222
 ] 

Joel Bernstein commented on SOLR-6266:
--

Karol,

Can you explain your thinking with the SolrCloud design? Why run the 
CAPIServer only on the shard leader, and not on all replicas?

It seems like it would be a simpler design to run it on all replicas. 

> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> updates into the normal Solr update process. 
> Instead of marshaling couchbase updates into the normal Solr update process, 
> we could also embed a SolrJ client to relay the request through the http 
> interfaces. This may be necessary if we have to handle mapping couchbase 
> "buckets" to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141220#comment-14141220
 ] 

Michael McCandless commented on LUCENE-5965:


+1, looks awesome.

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-5965.patch
>
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-19 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141216#comment-14141216
 ] 

Steve Davids commented on SOLR-5986:


Looks good to me. The only nit-picky thing I would say is that QueryTimeoutBase 
is a strange name for an interface; you might consider renaming it to 
"QueryTimeout" and renaming the current QueryTimeout class to something along 
the lines of LuceneQueryTimeout / DefaultQueryTimeout / SimpleQueryTimeout. 
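
That is, something like this (a sketch of the suggested naming only, not the 
patch itself):

{code}
// the interface takes the plain name ...
public interface QueryTimeout {
  boolean shouldExit();
}

// ... and the current concrete implementation gets the qualified name
public class DefaultQueryTimeout implements QueryTimeout {
  private final long timeoutAtNanos;
  public DefaultQueryTimeout(long timeoutAtNanos) { this.timeoutAtNanos = timeoutAtNanos; }
  @Override
  public boolean shouldExit() { return System.nanoTime() > timeoutAtNanos; }
}
{code}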

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system and made us 
> restart the replicas that happened to service that one request. In the worst 
> case, users with a relatively low zk timeout value will have nodes start 
> dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141210#comment-14141210
 ] 

Uwe Schindler commented on LUCENE-5965:
---

+1 for this!

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-5965.patch
>
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5965:

Attachment: LUCENE-5965.patch

Patch requiring a resourceDescription (either a DataInput or a String). We had 
quite a few places missing this.
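
Call sites then look like this (a sketch based on the two constructor forms 
described above; 'in' stands for whatever DataInput is being read):

{code}
// resource given as the DataInput being read:
throw new CorruptIndexException("checksum mismatch", in);

// or as a plain resource description:
throw new CorruptIndexException("missing segments file", "segments_5");
{code}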

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-5965.patch
>
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5964) Update READ_BEFORE_REGENERATING.txt

2014-09-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-5964.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0

Removed from trunk and branch_5x.

Thanks Torsten!

> Update READ_BEFORE_REGENERATING.txt
> ---
>
> Key: LUCENE-5964
> URL: https://issues.apache.org/jira/browse/LUCENE-5964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.10
>Reporter: Torsten Krah
>Assignee: Steve Rowe
>Priority: Trivial
> Fix For: 5.0, Trunk
>
>
> Reading the file READ_BEFORE_REGENERATING.txt from 
> analysis/common/src/java/org/apache/lucene/analysis/standard tells me to use 
> jflex trunk.
> {{ant regenerate}} already uses ivy to get current jflex (1.6) which should 
> be used - does the text still apply or is it obsolete?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5964) Update READ_BEFORE_REGENERATING.txt

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141193#comment-14141193
 ] 

ASF subversion and git services commented on LUCENE-5964:
-

Commit 1626321 from [~sar...@syr.edu] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626321 ]

LUCENE-5964: drop obsolete file telling users how to set up JFlex, since this 
is now automated (merged trunk r1626318)

> Update READ_BEFORE_REGENERATING.txt
> ---
>
> Key: LUCENE-5964
> URL: https://issues.apache.org/jira/browse/LUCENE-5964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.10
>Reporter: Torsten Krah
>Assignee: Steve Rowe
>Priority: Trivial
>
> Reading the file READ_BEFORE_REGENERATING.txt from 
> analysis/common/src/java/org/apache/lucene/analysis/standard tells me to use 
> jflex trunk.
> {{ant regenerate}} already uses ivy to get current jflex (1.6) which should 
> be used - does the text still apply or is it obsolete?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5964) Update READ_BEFORE_REGENERATING.txt

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141189#comment-14141189
 ] 

ASF subversion and git services commented on LUCENE-5964:
-

Commit 1626318 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1626318 ]

LUCENE-5964: drop obsolete file telling users how to set up JFlex, since this 
is now automated

> Update READ_BEFORE_REGENERATING.txt
> ---
>
> Key: LUCENE-5964
> URL: https://issues.apache.org/jira/browse/LUCENE-5964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.10
>Reporter: Torsten Krah
>Assignee: Steve Rowe
>Priority: Trivial
>
> Reading the file READ_BEFORE_REGENERATING.txt from 
> analysis/common/src/java/org/apache/lucene/analysis/standard tells me to use 
> jflex trunk.
> {{ant regenerate}} already uses ivy to get current jflex (1.6) which should 
> be used - does the text still apply or is it obsolete?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6266) Couchbase plug-in for Solr

2014-09-19 Thread Karol Abramczyk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karol Abramczyk updated SOLR-6266:
--
Attachment: solr-couchbase-plugin.tar.gz

Updated plugin source

> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz, 
> solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> updates into the normal Solr update process. 
> Instead of marshaling couchbase updates into the normal Solr update process, 
> we could also embed a SolrJ client to relay the request through the http 
> interfaces. This may be necessary if we have to handle mapping couchbase 
> "buckets" to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr

2014-09-19 Thread Karol Abramczyk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141185#comment-14141185
 ] 

Karol Abramczyk commented on SOLR-6266:
---

[~joel.bernstein] In the meantime I finished my basic implementation of 
CAPIServer failover. The Solr plugin runs only one CAPIServer, on the leader 
of shard1, and the replicas put a watch on it so that a new CAPIServer is 
started when the first one goes down. I will update the source and remove 
unnecessary dependencies.
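
Roughly, that failover can be pictured like this (a sketch using raw ZooKeeper 
watches; the znode path and takeover helper are made up, not the plugin's 
actual code):

{code}
// each replica watches the znode registered by the live CAPIServer
Stat stat = zk.exists("/solr-couchbase/capiserver", new Watcher() {
  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Event.EventType.NodeDeleted) {
      startCapiServerIfElected(); // hypothetical takeover helper
    }
  }
});
if (stat == null) {
  startCapiServerIfElected(); // nothing running yet
}
{code}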

> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> updates into the normal Solr update process. 
> Instead of marshaling couchbase updates into the normal Solr update process, 
> we could also embed a SolrJ client to relay the request through the http 
> interfaces. This may be necessary if we have to handle mapping couchbase 
> "buckets" to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141164#comment-14141164
 ] 

Michael McCandless commented on LUCENE-5965:


+1

> Make CorruptIndexException require 'resource' like TooOld/TooNew do
> ---
>
> Key: LUCENE-5965
> URL: https://issues.apache.org/jira/browse/LUCENE-5965
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> and review all of these to ensure other pertinent information is included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4869 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4869/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([976921D26888F6B7:168FAFCA1FD7968B]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:706)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:285)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:271)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.del(AbstractFullDistribZkTestBase.java:729)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.tryDelete(ChaosMonkeySafeLeaderTest.java:194)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:112)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  

[jira] [Commented] (LUCENE-5964) Update READ_BEFORE_REGENERATING.txt

2014-09-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141137#comment-14141137
 ] 

Steve Rowe commented on LUCENE-5964:


Thanks for bringing it up [~tkrah]; that file is out of date and can simply be 
removed, since the build now does the right thing, pulling the exact JFlex 
versions it needs from Maven Central.

> Update READ_BEFORE_REGENERATING.txt
> ---
>
> Key: LUCENE-5964
> URL: https://issues.apache.org/jira/browse/LUCENE-5964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.10
>Reporter: Torsten Krah
>Assignee: Steve Rowe
>Priority: Trivial
>
> Reading the file READ_BEFORE_REGENERATING.txt from 
> analysis/common/src/java/org/apache/lucene/analysis/standard tells me to use 
> jflex trunk.
> {{ant regenerate}} already uses ivy to get current jflex (1.6) which should 
> be used - does the text still apply or is it obsolete?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5964) Update READ_BEFORE_REGENERATING.txt

2014-09-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-5964:
--

Assignee: Steve Rowe

> Update READ_BEFORE_REGENERATING.txt
> ---
>
> Key: LUCENE-5964
> URL: https://issues.apache.org/jira/browse/LUCENE-5964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.10
>Reporter: Torsten Krah
>Assignee: Steve Rowe
>Priority: Trivial
>
> Reading the file READ_BEFORE_REGENERATING.txt from 
> analysis/common/src/java/org/apache/lucene/analysis/standard tells me to use 
> jflex trunk.
> {{ant regenerate}} already uses ivy to get current jflex (1.6) which should 
> be used - does the text still apply or is it obsolete?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6543) Give HttpSolrServer the ability to send PUT requests

2014-09-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-6543:
-
Attachment: SOLR-6543.patch

Here's a patch with a unit test.  Note I only changed the client side.  On the 
server side, we may eventually want to support better parsing in 
SolrRequestParsers (right now, only the query string of PUTs is parsed) and to 
handle the caching in HttpCacheHeaderUtil.  But since PUTs are only used in the 
schema API, and that is handled by the REST API, which doesn't use those 
classes, this is sufficient.
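
Client-side usage would then look something like this (a sketch assuming the 
patch adds PUT to the existing SolrRequest method enum):

{code}
HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
req.setMethod(SolrRequest.METHOD.PUT); // hypothetical until the patch lands
server.request(req);
{code}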

> Give HttpSolrServer the ability to send PUT requests
> 
>
> Key: SOLR-6543
> URL: https://issues.apache.org/jira/browse/SOLR-6543
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-6543.patch
>
>
> Given that the schema API has a PUT request 
> (https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-Createonenewschemafield)
>  it would be nice if HttpSolrServer supported sending PUTs, so it could be 
> used for sending that type of request.  Note if we really wanted to fully 
> support that request we'd probably want a Request/Response type in solrj as 
> well, but that can be handled in a separate issue.
> Also, administrators may add arbitrary filters that require PUT requests.  In 
> my own setup, I have a version of Hadoop's 
> DelegationTokenAuthenticationFilter sitting in front of the dispatch filter.  
> Here also it would be nice if I could send all requests via HttpSolrServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6543) Give HttpSolrServer the ability to send PUT requests

2014-09-19 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-6543:


 Summary: Give HttpSolrServer the ability to send PUT requests
 Key: SOLR-6543
 URL: https://issues.apache.org/jira/browse/SOLR-6543
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Gregory Chanan
Priority: Minor


Given that the schema API has a PUT request 
(https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-Createonenewschemafield)
 it would be nice if HttpSolrServer supported sending PUTs, so it could be used 
for sending that type of request.  Note if we really wanted to fully support 
that request we'd probably want a Request/Response type in solrj as well, but 
that can be handled in a separate issue.

Also, administrators may add arbitrary filters that require PUT requests.  In 
my own setup, I have a version of Hadoop's DelegationTokenAuthenticationFilter 
sitting in front of the dispatch filter.  Here also it would be nice if I could 
send all requests via HttpSolrServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141114#comment-14141114
 ] 

Michael McCandless commented on LUCENE-5879:


bq. I simply mean is this useful outside of numeric fields?

Oh, it's for any terms ... numeric or not.

bq. Another question I have is, does the automatic prefix length calculation do 
it at byte boundaries or is it intra-byte?

It's currently byte-boundary only, though this is an impl detail and we could 
do e.g. nibbles in the future ...

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").
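
For readers new to the thread, a minimal sketch of the status quo described 
above (the field name, precisionStep, and bounds are illustrative):

{code}
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;

public class TrieStatusQuo {
  // The precisionStep (here 16) is fixed at index time, and the matching
  // LongField must be indexed with the same value; the range query then
  // visits the precomputed trie prefix terms regardless of how the field's
  // terms are actually distributed.
  public static Query timestampRange() {
    return NumericRangeQuery.newLongRange("timestamp", 16, 0L, 1000L, true, true);
  }
}
{code}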



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141108#comment-14141108
 ] 

David Smiley commented on LUCENE-5879:
--

bq. Is this applicable to variable-length String fields that you might want to 
do range queries on for whatever reason? Such as... A*, B*, C* or A-G, H-P, ... 
etc. ? It appears this is applicable.

bq. I don't quite understand the question ... the indexed terms can be any 
variable length.

I simply mean is this useful outside of numeric fields?

Another question I have is, does the automatic prefix length calculation do it 
at byte boundaries or is it intra-byte?

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5967) Allow WildcardQuery and RegexpQuery to also use auto-prefix terms

2014-09-19 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5967:
--

 Summary: Allow WildcardQuery and RegexpQuery to also use 
auto-prefix terms
 Key: LUCENE-5967
 URL: https://issues.apache.org/jira/browse/LUCENE-5967
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless


In LUCENE-5879, we added auto-prefix terms, where the terms dict finds good 
prefix terms to index, so that at search time PrefixQuery and TermRangeQuery 
can visit far fewer terms than the full set.

WildcardQuery and RegexpQuery will only make use of auto-prefix terms if the 
query is "effectively" a PrefixQuery (e.g. WildcardQuery("foo*")), but we could 
fix them so they could also use auto-prefix terms in other cases (e.g. foo?b*), 
though in practice I think those are less likely to have an impact.
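
A minimal sketch of the distinction (the field name and terms are illustrative):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;

public class AutoPrefixCandidates {
  // A trailing-star wildcard is effectively a prefix query, so it can already
  // use auto-prefix terms, just like the explicit PrefixQuery below it.
  static final Query effectivelyPrefix = new WildcardQuery(new Term("body", "foo*"));
  static final Query explicitPrefix = new PrefixQuery(new Term("body", "foo"));
  // This is the kind of query the issue would extend coverage to.
  static final Query notYetCovered = new WildcardQuery(new Term("body", "foo?b*"));
}
{code}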



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141101#comment-14141101
 ] 

Michael McCandless commented on LUCENE-5879:


OK I opened LUCENE-5967 to allow Wildcard/RegexpQuery to use auto-prefix 
terms...

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141096#comment-14141096
 ] 

Michael McCandless commented on LUCENE-5879:


bq. Wow, awesome work Mike! And fantastic idea Adrien!

Thanks [~dsmiley]

bq.  I mean, are the intervals that are computed from the data determined and 
fixed within a given segment, or is it variable throughout the segment?

It's per-segment, so each segment will look at how its terms fall and find 
"good" places to insert the auto-prefix terms.

bq. Is this applicable to variable-length String fields that you might want to 
do range queries on for whatever reason? Such as... A*, B*, C* or A-G, H-P, ... 
etc. ? It appears this is applicable.

I don't quite understand the question ... the indexed terms can be any variable 
length.

bq. Would any CompiledAutomaton (e.g. a wildcard query) that has a leading 
prefix benefit from this or is it strictly Prefix & Range queries? Mike's 
comments suggest it will sometime but not yet. Can you create an issue for it, 
Mike? This would be especially useful in Lucene-spatial; I'm excited at the 
prospects!

Currently auto-prefix terms are only used for PrefixQuery and TermRangeQuery, 
or for any automaton query that "becomes" a PrefixQuery on rewrite (e.g. 
WildcardQuery("foo*")).

Enabling them for WildcardQuery and RegexpQuery should be fairly easy, however 
they will only kick in in somewhat exotic situations, where there is a portion 
of the term space accepted by the automaton which "suddenly" accepts any 
suffix.  E.g. foo*bar will never use auto-prefix terms, but foo?b* will.

I'll open an issue!

bq. When you iterate a TermsEnum, will the prefix terms be exposed or is it 
internal to the Codec?

No, these auto-prefix terms are invisible in all APIs, except when you call 
Terms.intersect.
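
A minimal sketch of that one exception, assuming the trunk APIs of the time 
(the field name and regexp are illustrative):

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.automaton.CompiledAutomaton;
import org.apache.lucene.util.automaton.RegExp;

public class IntersectSketch {
  // Prints every term of "field" accepted by the automaton.  For ordinary
  // (NORMAL) automata, CompiledAutomaton.getTermsEnum delegates to
  // Terms.intersect, the one API through which auto-prefix terms can surface.
  static void dumpMatchingTerms(IndexReader reader, String field) throws IOException {
    Terms terms = MultiFields.getTerms(reader, field);
    if (terms == null) {
      return;
    }
    CompiledAutomaton ca = new CompiledAutomaton(new RegExp("foo.b.*").toAutomaton());
    TermsEnum te = ca.getTermsEnum(terms);
    for (BytesRef t = te.next(); t != null; t = te.next()) {
      System.out.println(t.utf8ToString());
    }
  }
}
{code}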

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1213: POMs out of sync

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1213/

No tests ran.

Build Log:
[...truncated 28230 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:507:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:180:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:588:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:440:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1577:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:563:
 Unable to initialize POM pom.xml: Could not find the model file 
'/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build/poms/solr/contrib/analytics/pom.xml'.
 for project unknown

Total time: 14 minutes 35 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Help with `ant beast`

2014-09-19 Thread Ramkumar R. Aiyengar
That sounds good. Btw, I first tried it at the top level, and that came back
with a "beast target not found" error, hence I went to the lucene subdir. At
least that failure is guessable, so it's less important, but it would be good
to add the new failure message at the top level as well.
On 19 Sep 2014 18:02, "Ryan Ernst" wrote:

>
> On Fri, Sep 19, 2014 at 9:55 AM, Chris Hostetter wrote:
>
>> can we just add something like this inside the "beast" target?...
>>
>> <fail message="The beast target must be run from a module's directory (e.g. lucene/core)">
>>   <condition>
>>     <not><isreference refid="junit.classpath"/></not>
>>   </condition>
>> </fail>
>>
>
> +1
>


[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141061#comment-14141061
 ] 

Michael McCandless commented on LUCENE-5879:


OK I created LUCENE-5966 for the migration plan...

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5966) How to migrate from numeric fields to auto-prefix terms

2014-09-19 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5966:
--

 Summary: How to migrate from numeric fields to auto-prefix terms
 Key: LUCENE-5966
 URL: https://issues.apache.org/jira/browse/LUCENE-5966
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless


In LUCENE-5879 we are adding auto-prefix terms to the default terms dict, which 
is generalized from numeric fields and offers faster performance while using 
less indexing space and about the same indexing time.

But there are many users out there with indices already created containing 
numeric fields ... so ideally we'd have some simple way for such users to 
switch over to auto-prefix terms.

Robert has a good plan (copied from LUCENE-5879):

Here are some thoughts.
# keep the current trie encoding for terms; apps just use precisionStep=Inf and 
let the term dictionary insert prefixes automatically.
# create a FilterAtomicReader that, for a previously trie-encoded field, 
removes the "fake" terms on merge.

Users could continue to use NumericRangeQuery just with the infinite precision 
step, and it will always work, just executing more slowly for old segments 
since it doesn't take advantage of the trie terms that have not yet been merged 
away.

One issue with making this really nice is that Lucene doesn't know for sure 
that a field is numeric, so it cannot be "full-auto". Apps would have to use 
their schema or whatever to wrap with this reader in their merge policy.

Maybe we could provide some sugar for this, such as a wrapping merge policy 
that takes a list of field names that are numeric, or sugar to pass this to IWC 
in IndexUpgrader to force it, and so on.
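
A rough sketch of step 2 under those constraints, assuming the 
FilterAtomicReader/FilteredTermsEnum APIs of the time and long-encoded fields 
only (the class name and the externally supplied field set are made up; a real 
version would also handle int-encoded fields and adjust term statistics):

{code}
import java.io.IOException;
import java.util.Set;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.Fields;
import org.apache.lucene.index.FilterAtomicReader;
import org.apache.lucene.index.FilteredTermsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.NumericUtils;

/** Drops the precomputed trie "prefix" terms of long-encoded numeric fields,
 *  keeping only the full-precision terms (shift == 0).  An app would wrap
 *  segment readers with this (e.g. from its merge policy) for the fields it
 *  knows are numeric. */
public class StripTrieTermsReader extends FilterAtomicReader {
  private final Set<String> numericLongFields;

  public StripTrieTermsReader(AtomicReader in, Set<String> numericLongFields) {
    super(in);
    this.numericLongFields = numericLongFields;
  }

  @Override
  public Fields fields() throws IOException {
    final Fields fields = super.fields();
    if (fields == null) {
      return null;
    }
    return new FilterFields(fields) {
      @Override
      public Terms terms(String field) throws IOException {
        final Terms terms = super.terms(field);
        if (terms == null || !numericLongFields.contains(field)) {
          return terms;
        }
        return new FilterTerms(terms) {
          @Override
          public TermsEnum iterator(TermsEnum reuse) throws IOException {
            // startWithSeek=false: plain filtering iteration, no initial seek.
            return new FilteredTermsEnum(in.iterator(reuse), false) {
              @Override
              protected AcceptStatus accept(BytesRef term) {
                // Full-precision long terms start with SHIFT_START_LONG + 0;
                // any other leading byte marks a trie prefix term to drop.
                return (term.bytes[term.offset] & 0xff) == NumericUtils.SHIFT_START_LONG
                    ? AcceptStatus.YES : AcceptStatus.NO;
              }
            };
          }
        };
      }
    };
  }
}
{code}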



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5965) Make CorruptIndexException require 'resource' like TooOld/TooNew do

2014-09-19 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5965:
---

 Summary: Make CorruptIndexException require 'resource' like 
TooOld/TooNew do
 Key: LUCENE-5965
 URL: https://issues.apache.org/jira/browse/LUCENE-5965
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


and review all of these to ensure other pertinent information is included.
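
The issue text doesn't spell out a signature; presumably it ends up something 
like this sketch (hypothetical, modeled on what 
IndexFormatTooOldException/TooNewException already do):

{code}
import java.io.IOException;

// Hypothetical sketch only; the actual patch may shape the message differently.
public class CorruptIndexException extends IOException {
  public CorruptIndexException(String message, String resource) {
    this(message, resource, null);
  }
  public CorruptIndexException(String message, String resource, Throwable cause) {
    // Always bake the resource (e.g. the file or slice description) into the
    // exception text so it can never be omitted.
    super(message + " (resource=" + resource + ")", cause);
  }
}
{code}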



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 626 - Still Failing

2014-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/626/

3 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: null

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: null
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.eva

[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14141048#comment-14141048
 ] 

Michael McCandless commented on LUCENE-5879:


I like that plan Rob, except the term encoding that numeric fields use is 
somewhat wasteful: 1 byte is used to encode the "shift", and then only 7 of 8 
bits are used in each subsequent byte ... but maybe we just live with that 
since it simplifies migration (both old and new can co-exist in one index).

I'll open an issue for this.

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5964) Update READ_BEFORE_REGENERATING.txt

2014-09-19 Thread Torsten Krah (JIRA)
Torsten Krah created LUCENE-5964:


 Summary: Update READ_BEFORE_REGENERATING.txt
 Key: LUCENE-5964
 URL: https://issues.apache.org/jira/browse/LUCENE-5964
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.10
Reporter: Torsten Krah
Priority: Trivial


Reading the file READ_BEFORE_REGENERATING.txt from 
analysis/common/src/java/org/apache/lucene/analysis/standard tells me to use 
jflex trunk.
{{ant regenerate}} already uses ivy to get current jflex (1.6) which should be 
used - does the text still apply or is it obsolete?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-09-19 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140974#comment-14140974
 ] 

Timothy Potter commented on SOLR-6511:
--

I now have a test case that duplicates Alan's scenario exactly, which is good. 
In devising a fix, the following problem has come up: the request has been 
accepted locally on the used-to-be leader and is failing on one of the replicas 
because of the leader change ("Request says it is coming from leader, but we 
are the leader").

So does the old leader (the one receiving the error back from the new leader) 
try to be clever and forward the request to the new leader, as any replica 
would do under normal circumstances? Keep in mind that this request has already 
been accepted locally and possibly on other replicas. Or does this old leader 
just propagate the failure back to the client and let it decide what to do? I 
guess it comes down to whether we think it's safe to just re-process a request? 
Seems like it would be, but I wanted feedback before assuming that.
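
For reference, the off-by-one from the quoted description below, reduced to a 
runnable sketch:

{code}
public class FencepostDemo {
  public static void main(String[] args) {
    int maxTries = 1;  // what DistributedUpdateProcessor can pass
    int tries = 0;
    // Original condition: ++tries < maxTries  ->  1 < 1, the body never runs.
    // Fixed condition:    ++tries <= maxTries ->  1 <= 1, runs exactly once.
    while (++tries <= maxTries) {
      System.out.println("try #" + tries);
    }
  }
}
{code}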

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Attachments: SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140942#comment-14140942
 ] 

David Smiley commented on LUCENE-5879:
--

bq. Is the auto-prefixing done on a per-segment basis or is it something 
different that has to do with Codec internals? It appears to be the latter.

I asked that in a confusing way.  I mean, are the intervals that are computed 
from the data determined and fixed within a given segment, or is it variable 
throughout the segment?

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6425) If you are using the new global hdfs block cache option, you can end up reading corrupt files on file name reuse.

2014-09-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6425.
---
   Resolution: Fixed
Fix Version/s: 4.10.1

> If you are using the new global hdfs block cache option, you can end up reading 
> corrupt files on file name reuse.
> -
>
> Key: SOLR-6425
> URL: https://issues.apache.org/jira/browse/SOLR-6425
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.10.1, 5.0, Trunk
>
> Attachments: SOLR-6425.patch
>
>
> Revealed by 'HdfsBasicDistributedZkTest frequently fails'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140925#comment-14140925
 ] 

David Smiley commented on LUCENE-5879:
--

Wow, awesome work Mike!  And fantastic idea Adrien!

I just read through the comments but don't have time to dig into the source yet.
* Is the auto-prefixing done on a per-segment basis or is it something 
different that has to do with Codec internals?  It appears to be the latter.
* Is this applicable to variable-length String fields that you might want to do 
range queries on for whatever reason? Such as... A*, B*, C*   or A-G, H-P, ... 
etc. ?  It appears this is applicable.
* Would any CompiledAutomaton (e.g. a wildcard query) that has a leading prefix 
benefit from this or is it strictly Prefix & Range queries?  Mike's comments 
suggest it will sometime but not yet. Can you create an issue for it, Mike?  
This would be especially useful in Lucene-spatial; I'm excited at the prospects!
* When you iterate a TermsEnum, will the prefix terms be exposed or is it 
internal to the Codec?

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140916#comment-14140916
 ] 

Robert Muir commented on LUCENE-5879:
-

We should think about a migration plan for numerics? 

This should be a followup issue.

Here are some thoughts.
1. keep the current trie encoding for terms; apps just use precisionStep=Inf 
and let the term dictionary insert prefixes automatically.
2. create a FilterAtomicReader that, for a previously trie-encoded field, 
removes the "fake" terms on merge.

Users could continue to use NumericRangeQuery just with the infinite precision 
step, and it will always work, just executing more slowly for old segments 
since it doesn't take advantage of the trie terms that have not yet been merged 
away.

One issue with making this really nice is that Lucene doesn't know for sure 
that a field is numeric, so it cannot be "full-auto". Apps would have to use 
their schema or whatever to wrap with this reader in their merge policy.

Maybe we could provide some sugar for this, such as a wrapping merge policy 
that takes a list of field names that are numeric, or sugar to pass this to IWC 
in IndexUpgrader to force it, and so on.

I think it's complicated enough for a followup issue though.

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it should be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1800 - Still Failing!

2014-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1800/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrSchemalessExampleTest.testAddDelete

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:57127/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:57127/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([50B839B6A66F747E:985844BC57C7A7A8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:563)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
at 
org.apache.solr.client.solrj.SolrExampleTestsBase.testAddDelete(SolrExampleTestsBase.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apach

[jira] [Comment Edited] (SOLR-6266) Couchbase plug-in for Solr

2014-09-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140907#comment-14140907
 ] 

Joel Bernstein edited comment on SOLR-6266 at 9/19/14 5:20 PM:
---

I reviewed Karol's contribution today, it looks great. Let's use this as our 
base implementation.

It looks like Karol has worked out a lot of details of how to embed the 
Couchbase APIs and handle documents. This is excellent.

I think we need to take a step back and do some planning around two areas 
before iterating on what's here.

1) SolrCloud architecture. Some questions to think about:

How does the plugin work in the context of a single collection?  Should it run 
in all replicas or just leaders?

How does the plugin work in the context of multiple collections sharing the 
same Solr nodes? Should there be a different CAPIServer running for each 
collection? Or should there be a CAPIServer per Solr node?

2) Error handling. We'll need to understand the different failure scenarios and 
have strategies for handling them. And we'll need to fully understand how the 
Couchbase APIs account for failure scenarios.

I'll need to catch up on the Couchbase APIs before I can weigh in on these 
issues. I should have time to review the APIs next week. In the meantime, if 
anyone has any thoughts, fire away.


was (Author: joel.bernstein):
I reviewed Karol's contribution today, it looks great. Let's use this as our 
base implementation.

It looks like Karol has worked out a lot of details of how to embed the 
Couchbase API's and handle documents. This is excellent.

I think we need to take a step back and do some planning around two areas 
before iterating on what's here.

1) SolrCloud architecture. Some questions to think about:

How does the plugin work in the context of single collection?  Should it run in 
all replicas or just leaders?

How does the plugin work in the context of multiple collections sharing the 
same Solr nodes? Should there be a different CAPIServer running for each 
collection? Or should there be a CAPIServer per Solr node?

2) Error handling. We'll need to understand the different failure scenarios and 
have strategies for handling them. And we'll need to fully understand how the 
Couchbases API's account for failure scenarios.

I'll need to catch up on the Couchbase API's before I can weigh-in on these 
issue. I should have time to review the API's next week. In the meantime if 
anyone has any thoughts fire away.

> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> updates into the normal Solr update process. 
> Instead of marshaling couchbase updates into the normal Solr update process, 
> we could also embed a SolrJ client to relay the request through the http 
> interfaces. This may be necessary if we have to handle mapping couchbase 
> "buckets" to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr

2014-09-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140907#comment-14140907
 ] 

Joel Bernstein commented on SOLR-6266:
--

I reviewed Karol's contribution today, it looks great. Let's use this as our 
base implementation.

It looks like Karol has worked out a lot of details of how to embed the 
Couchbase APIs and handle documents. This is excellent.

I think we need to take a step back and do some planning around two areas 
before iterating on what's here.

1) SolrCloud architecture. Some questions to think about:

How does the plugin work in the context of single collection?  Should it run in 
all replicas or just leaders?

How does the plugin work in the context of multiple collections sharing the 
same Solr nodes? Should there be a different CAPIServer running for each 
collection? Or should there be a CAPIServer per Solr node?

2) Error handling. We'll need to understand the different failure scenarios and 
have strategies for handling them. And we'll need to fully understand how the 
Couchbase APIs account for failure scenarios.

I'll need to catch up on the Couchbase APIs before I can weigh in on these 
issues. I should have time to review the APIs next week. In the meantime, if 
anyone has any thoughts, fire away.

> Couchbase plug-in for Solr
> --
>
> Key: SOLR-6266
> URL: https://issues.apache.org/jira/browse/SOLR-6266
> Project: Solr
>  Issue Type: New Feature
>Reporter: Varun
>Assignee: Joel Bernstein
> Attachments: solr-couchbase-plugin.tar.gz
>
>
> It would be great if users could connect Couchbase and Solr so that updates 
> to Couchbase can automatically flow to Solr. Couchbase provides some very 
> nice API's which allow applications to mimic the behavior of a Couchbase 
> server so that it can receive updates via Couchbase's normal cross data 
> center replication (XDCR).
> One possible design for this is to create a CouchbaseLoader that extends 
> ContentStreamLoader. This new loader would embed the couchbase api's that 
> listen for incoming updates from couchbase, then marshal the couchbase 
> updates into the normal Solr update process. 
> Instead of marshaling couchbase updates into the normal Solr update process, 
> we could also embed a SolrJ client to relay the request through the http 
> interfaces. This may be necessary if we have to handle mapping couchbase 
> "buckets" to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140893#comment-14140893
 ] 

ASF subversion and git services commented on LUCENE-5944:
-

Commit 1626277 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626277 ]

LUCENE-5944: Fix building TAR with solr WAR (merge error)

> move trunk to 6.x, create branch_5x
> ---
>
> Key: LUCENE-5944
> URL: https://issues.apache.org/jira/browse/LUCENE-5944
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0
>
>
> In order to actually add real features (as opposed to just spending 24/7 
> fixing bugs and back compat), I need a trunk that doesn't have the back 
> compat handcuffs.
> In the meantime, we should rename the current trunk (which is totally tied 
> down in back compat already, without even a single release!) to branch_5x 
> while you guys (I won't be doing any back compat anymore) figure out what you 
> want to do with the back compat policy.
> Here is the proposal what to do in this issue: 
> [http://mail-archives.apache.org/mod_mbox/lucene-dev/201409.mbox/%3CCAOdYfZUpAbYp-omdw=ngjsdzbkvhn2zydobzvj1gdxk+lrt...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Help with `ant beast`

2014-09-19 Thread Ryan Ernst
On Fri, Sep 19, 2014 at 9:55 AM, Chris Hostetter 
wrote:

> can we just add something like this inside the "beast" target?...
>
> <fail message="The beast target must be run from a module's directory (e.g. lucene/core)">
>   <condition>
>     <not><isreference refid="junit.classpath"/></not>
>   </condition>
> </fail>
>

+1


[jira] [Updated] (SOLR-6535) Adding new Inkstall Solr book "Mastering Apache Solr" to official Solr website book section and news

2014-09-19 Thread Mathieu N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mathieu N updated SOLR-6535:

Labels: documentation easyfix patch  (was: )

> Adding new Inkstall Solr book "Mastering Apache Solr" to official Solr 
> website book section and news
> 
>
> Key: SOLR-6535
> URL: https://issues.apache.org/jira/browse/SOLR-6535
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Mathieu N
>Priority: Minor
>  Labels: documentation, easyfix, patch
> Attachments: book_mas.jpg, books.mdtext
>
>
> Mathieu Nayrolles and Inkstall Publications are proud to announce their 
> latest book —Mastering Apache Solr.  This book will empower you to provide a 
> world-class search experience to your end users through the discovery of the 
> powerful mechanisms presented in it.
>  
> Mastering Apache Solr is a short, focused, practical, hands-on guide 
> containing crisp, relevant, systematically arranged, progressive chapters. 
> These chapters contain a wealth of information presented in a direct and 
> easy-to-understand manner. Highlighting Solr's supremacy over classical 
> databases in full-text search, this book covers key technical concepts which 
> will help you accelerate your progress in the Solr world.
>  
> Mastering Apache Solr starts with an introduction to Apache Solr, its 
> underlying technologies, the main differences between the classical database 
> engines, and gradually moves to more advanced topics such as boosting 
> performance. In this book, we will look under the hood of a large number of 
> topics and discuss answers to pertinent questions such as why denormalize 
> data, how to import classical databases' data inside Apache Solr, how to 
> serve Solr through five different web servers, and how to optimize them to serve 
> Solr even faster. An important and major topic covered in this book is Solr's 
> querying mechanism, which will prove to be a strong ally in our journey 
> through this book. We then look at boosting performance and deploying Solr 
> using several servlet servers. Finally, we cover how to communicate with Solr 
> using different programming languages, before deploying it in a cloud-based 
> environment.
>  
> Mastering Apache Solr is written lucidly and has a clear simple approach. 
> From the first page to the last, the book remains practical and focuses on 
> the most important topics used in the world of Apache Solr without neglecting 
> important theoretical fundamentals that help you build a strong foundation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Help with `ant beast`

2014-09-19 Thread Chris Hostetter
: 
: This is correct! Maybe we can improve the error message, but this is not 
: so easy... What is the best way to detect if a build file is a parent 
: one? Of course we could add a dummy target to all parent build files - 
: "ant test" has this to delegate to subant builds, but we don’t want to 
: do this here.

can we just add something like this inside the "beast" target?...

<fail message="The Beast only works inside of individual modules">
  <condition>
    <not><isreference refid="junit.classpath"/></not>
  </condition>
</fail>


: 
: Uwe
: 
: -
: Uwe Schindler
: H.-H.-Meier-Allee 63, D-28213 Bremen
: http://www.thetaphi.de
: eMail: u...@thetaphi.de
: 
: 
: > -Original Message-
: > From: Steve Rowe [mailto:sar...@gmail.com]
: > Sent: Friday, September 19, 2014 5:54 PM
: > To: dev@lucene.apache.org
: > Subject: Re: Help with `ant beast`
: > 
: > I think ‘ant beast’ only works in the directory of the module containing the
: > test, not at a higher level.
: > 
: > On Sep 19, 2014, at 11:45 AM, Ramkumar R. Aiyengar
: >  wrote:
: > 
: > > I am trying to use `ant beast` on trunk (per the recommendation in test-
: > help) and getting this error:
: > >
: > > ~/lucene-solr/lucene> ant beast -Dbeast.iters=10 -Dtests.dups=6
: > > -Dtestcase=TestBytesStore
: > >
: > > -beast:
: > >   [beaster] Beast round: 1
: > >
: > > BUILD FAILED
: > > ~/lucene-solr/lucene/common-build.xml:1363: The following error
: > occurred while executing this line:
: > > ~/lucene-solr/lucene/common-build.xml:1358: The following error
: > occurred while executing this line:
: > > ~/lucene-solr/lucene/common-build.xml:961: Reference junit.classpath
: > not found.
: > >
: > > `ant test` works just fine. Any idea where the problem might be?
: > 
: > 
: > -
: > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
: > commands, e-mail: dev-h...@lucene.apache.org
: 
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140858#comment-14140858
 ] 

Uwe Schindler commented on LUCENE-5879:
---

And how about NRQ? The same? Currently it's unsupported, because we have to 
figure out how to auto-disable trie terms and how to make NRQ auto-rewrite to a 
simple TermRange if no trie terms are available and only auto-prefix terms exist instead...

> Add auto-prefix terms to block tree terms dict
> --
>
> Key: LUCENE-5879
> URL: https://issues.apache.org/jira/browse/LUCENE-5879
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
> LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch
>
>
> This cool idea to generalize numeric/trie fields came from Adrien:
> Today, when we index a numeric field (LongField, etc.) we pre-compute
> (via NumericTokenStream) outside of indexer/codec which prefix terms
> should be indexed.
> But this can be inefficient: you set a static precisionStep, and
> always add those prefix terms regardless of how the terms in the field
> are actually distributed.  Yet typically in real world applications
> the terms have a non-random distribution.
> So, it would be better if instead the terms dict decides where it
> makes sense to insert prefix terms, based on how dense the terms are
> in each region of term space.
> This way we can speed up query time for both term (e.g. infix
> suggester) and numeric ranges, and it should let us use less index
> space and get faster range queries.
>  
> This would also mean that min/maxTerm for a numeric field would now be
> correct, vs today where the externally computed prefix terms are
> placed after the full precision terms, causing hairy code like
> NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
> feasible.
> The terms dict can also do tricks not possible if you must live on top
> of its APIs, e.g. to handle the adversary/over-constrained case when a
> given prefix has too many terms following it but finer prefixes
> have too few (what block tree calls "floor term blocks").
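
As a rough sketch of the density-based idea (illustrative only; the names and 
threshold are invented, not the actual block tree code), an auto-prefixing pass 
over a sorted term list might look like this:

{code}
// Toy sketch: insert an auto-prefix term wherever at least MIN_TERMS sorted
// terms share a prefix, so a range query can visit one synthetic prefix term
// instead of many individual terms. A real implementation would also prune
// nested prefixes (here f*, fo* and foo* are all emitted).
import java.util.*;

class AutoPrefixSketch {
  static final int MIN_TERMS = 4; // invented density threshold

  static List<String> withAutoPrefixes(SortedSet<String> terms) {
    Map<String, Integer> prefixCounts = new HashMap<>();
    for (String t : terms) {
      for (int i = 1; i < t.length(); i++) {
        prefixCounts.merge(t.substring(0, i), 1, Integer::sum);
      }
    }
    List<String> out = new ArrayList<>(terms);
    for (Map.Entry<String, Integer> e : prefixCounts.entrySet()) {
      if (e.getValue() >= MIN_TERMS) {
        out.add(e.getKey() + "*"); // stand-in marker for an auto-prefix term
      }
    }
    return out;
  }

  public static void main(String[] args) {
    SortedSet<String> terms =
        new TreeSet<>(Arrays.asList("foo1", "foo2", "foo3", "foo4", "zap"));
    System.out.println(withAutoPrefixes(terms));
  }
}
{code}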



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Help with `ant beast`

2014-09-19 Thread Steve Rowe
A beast target in parent build files (of which there aren’t many) could be 
where the message is printed.
 
On Sep 19, 2014, at 12:41 PM, Uwe Schindler  wrote:

> Hi Steve,
> 
> This is correct!
> Maybe we can improve the error message, but this is not so easy... What is 
> the best way to detect if a build file is a parent one? Of course we could 
> add a dummy target to all parent build files - "ant test" has this to 
> delegate to subant builds, but we don’t want to do this here.
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> 
>> -Original Message-
>> From: Steve Rowe [mailto:sar...@gmail.com]
>> Sent: Friday, September 19, 2014 5:54 PM
>> To: dev@lucene.apache.org
>> Subject: Re: Help with `ant beast`
>> 
>> I think ‘ant beast’ only works in the directory of the module containing the
>> test, not at a higher level.
>> 
>> On Sep 19, 2014, at 11:45 AM, Ramkumar R. Aiyengar
>>  wrote:
>> 
>>> I am trying to use `ant beast` on trunk (per the recommendation in test-
>> help) and getting this error:
>>> 
>>> ~/lucene-solr/lucene> ant beast -Dbeast.iters=10 -Dtests.dups=6
>>> -Dtestcase=TestBytesStore
>>> 
>>> -beast:
>>>  [beaster] Beast round: 1
>>> 
>>> BUILD FAILED
>>> ~/lucene-solr/lucene/common-build.xml:1363: The following error
>> occurred while executing this line:
>>> ~/lucene-solr/lucene/common-build.xml:1358: The following error
>> occurred while executing this line:
>>> ~/lucene-solr/lucene/common-build.xml:961: Reference junit.classpath
>> not found.
>>> 
>>> `ant test` works just fine. Any idea where the problem might be?
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
>> commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6520) Documentation web page is missing link to live Solr Reference Guide

2014-09-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140843#comment-14140843
 ] 

Hoss Man commented on SOLR-6520:


bq. I could argue about this, except you are linking to the WIKI from those 
same pages already. So, all you are doing is giving preferential treatment 
to the WIKI instead. 

yeah .. from the standpoint of "live links" you're right ... i'm convinced we 
should link to it ... but not convinced it should be as front and center as what 
you proposed.  The primary link for users should be to the *released* copy.

What do you think about adding this paragraph at the end of the existing "The 
Apache Solr Reference Guide" section of documentation.html? ...

{noformat}
Comments & suggestions for improving this documentation can be made on the 
[live editing version of the 
documentation](https://cwiki.apache.org/confluence/display/SOLR/), 
which is a browsable Confluence Space that always reflects the content intended 
for the _next_ release of the Reference Guide.
{noformat}

> Documentation web page is missing link to live Solr Reference Guide
> ---
>
> Key: SOLR-6520
> URL: https://issues.apache.org/jira/browse/SOLR-6520
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 4.10
> Environment: web
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>  Labels: documentation, website
>
> The [official document page for 
> Solr|https://lucene.apache.org/solr/documentation.html] is missing the link 
> to the live Solr Reference Guide. Only the link to the PDF is there. In fact, 
> one has to go to the WIKI, it seems, to find the link. 
> It is also not linked from [the release-specific documentation 
> page|https://lucene.apache.org/solr/4_10_0/index.html] either.
> This means the search engines do not easily discover the new content, and it 
> does not show up in searches when people look for information. It also 
> means people may hesitate to look at it if they have to download the whole 
> PDF first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Help with `ant beast`

2014-09-19 Thread Uwe Schindler
Hi Steve,

This is correct!
Maybe we can improve the error message, but this is not so easy... What is the 
best way to detect if a build file is a parent one? Of course we could add a 
dummy target to all parent build files - "ant test" has this to delegate to 
subant builds, but we don’t want to do this here.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Steve Rowe [mailto:sar...@gmail.com]
> Sent: Friday, September 19, 2014 5:54 PM
> To: dev@lucene.apache.org
> Subject: Re: Help with `ant beast`
> 
> I think ‘ant beast’ only works in the directory of the module containing the
> test, not at a higher level.
> 
> On Sep 19, 2014, at 11:45 AM, Ramkumar R. Aiyengar
>  wrote:
> 
> > I am trying to use `ant beast` on trunk (per the recommendation in test-
> help) and getting this error:
> >
> > ~/lucene-solr/lucene> ant beast -Dbeast.iters=10 -Dtests.dups=6
> > -Dtestcase=TestBytesStore
> >
> > -beast:
> >   [beaster] Beast round: 1
> >
> > BUILD FAILED
> > ~/lucene-solr/lucene/common-build.xml:1363: The following error
> occurred while executing this line:
> > ~/lucene-solr/lucene/common-build.xml:1358: The following error
> occurred while executing this line:
> > ~/lucene-solr/lucene/common-build.xml:961: Reference junit.classpath
> not found.
> >
> > `ant test` works just fine. Any idea where the problem might be?
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6370) Allow tests to report/fail on many ZK watches being parallelly requested on the same data

2014-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140827#comment-14140827
 ] 

ASF GitHub Bot commented on SOLR-6370:
--

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/84


> Allow tests to report/fail on many ZK watches being parallelly requested on 
> the same data
> -
>
> Key: SOLR-6370
> URL: https://issues.apache.org/jira/browse/SOLR-6370
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Priority: Minor
>
> Issues like SOLR-6336 uncovered cases where we were using too many ZK 
> watches. Watches are costly and we should fix such places, but there's no good 
> way for tests to find out about them.
> This issue is for a mechanism for tests to report or fail on watches being 
> redundantly set on data. This would also allow specific tests to 
> configure whether there's a valid case for such a thing happening.
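
A toy sketch of the accounting the issue asks for (all names invented; not the 
actual patch): count watch registrations per ZK path so a test harness can 
report or fail on redundant ones:

{code}
// Illustrative only: track how many watches are set on each ZK path and
// surface redundant registrations on the same data.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class WatchAccountingSketch {
  private final ConcurrentHashMap<String, AtomicInteger> watchesPerPath =
      new ConcurrentHashMap<>();

  void onWatchSet(String zkPath) {
    int n = watchesPerPath
        .computeIfAbsent(zkPath, p -> new AtomicInteger())
        .incrementAndGet();
    if (n > 1) {
      // a test could throw here instead of just logging
      System.err.println("Redundant watch #" + n + " on " + zkPath);
    }
  }

  public static void main(String[] args) {
    WatchAccountingSketch w = new WatchAccountingSketch();
    w.onWatchSet("/collections/c1/state.json");
    w.onWatchSet("/collections/c1/state.json"); // flagged as redundant
  }
}
{code}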



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Allow tests to report/fail on many ZK wa...

2014-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/84


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6520) Documentation web page is missing link to live Solr Reference Guide

2014-09-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140818#comment-14140818
 ] 

Alexandre Rafalovitch edited comment on SOLR-6520 at 9/19/14 4:24 PM:
--

I do not want to confuse two issues: the current status of the WIKI vs. the 
Reference Guide, and the fact that people - right now - are not finding the 
Reference Guide. I do know of the year-long TODO and participated in some of 
the discussions on the mailing list.

All I am saying is that the Reference Guide is great, yet is not easily 
discoverable. That is a shame and a disservice to the community after so much 
work was and is put into it. So, I am proposing an easy solution in the 
meantime, while the WIKI issue is being sorted out.

I think it is important to separate us - people who have worked with Solr for a 
while - from new Solr users. They will not know or want to know the legacy 
history. They just want to be pointed to the most up-to-date information. The 
rest is our internal ongoing work.

As to the proposed link, it could be just:
{panel}
(Solr Reference Guide). Note: the live version of the Guide reflects the 
latest version of Solr. Versions of the guide reflecting released versions of 
Solr are available as a (PDF version).
{panel}


was (Author: arafalov):
I do not want to confuse two issues: the current status of the WIKI vs. the 
Reference Guide, and the fact that people - right now - are not finding the 
Reference Guide. I do know of the year-long TODO and participated in some of 
the discussions on the mailing list.

All I am saying is that the Reference Guide is great, yet is not easily 
discoverable. That is a shame and a disservice to the community after so much 
work was and is put into it. So, I am proposing an easy solution in the 
meantime, while the WIKI issue is being sorted out.

I think it is important to separate us - people who have worked with Solr for a 
while - from new Solr users. They will not know or want to know the legacy 
history. They just want to be pointed to the most up-to-date information. The 
rest is our internal ongoing work.

As to the proposed link, it could be just:
```
[URL](Solr Reference Guide). Note: the live version of the Guide reflects the 
latest version of Solr. Versions of the guide reflecting released versions of 
Solr are available as a [URL](PDF version).
```

> Documentation web page is missing link to live Solr Reference Guide
> ---
>
> Key: SOLR-6520
> URL: https://issues.apache.org/jira/browse/SOLR-6520
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 4.10
> Environment: web
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>  Labels: documentation, website
>
> The [official document page for 
> Solr|https://lucene.apache.org/solr/documentation.html] is missing the link 
> to the live Solr Reference Guide. Only the link to the PDF is there. In fact, 
> one has to go to the WIKI, it seems, to find the link. 
> It is also not linked from [the release-specific documentation 
> page|https://lucene.apache.org/solr/4_10_0/index.html] either.
> This means the search engines do not easily discover the new content, and it 
> does not show up in searches when people look for information. It also 
> means people may hesitate to look at it if they have to download the whole 
> PDF first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6520) Documentation web page is missing link to live Solr Reference Guide

2014-09-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140818#comment-14140818
 ] 

Alexandre Rafalovitch commented on SOLR-6520:
-

I do not want to confuse two issues: the current status of the WIKI vs. the 
Reference Guide, and the fact that people - right now - are not finding the 
Reference Guide. I do know of the year-long TODO and participated in some of 
the discussions on the mailing list.

All I am saying is that the Reference Guide is great, yet is not easily 
discoverable. That is a shame and a disservice to the community after so much 
work was and is put into it. So, I am proposing an easy solution in the 
meantime, while the WIKI issue is being sorted out.

I think it is important to separate us - people who have worked with Solr for a 
while - from new Solr users. They will not know or want to know the legacy 
history. They just want to be pointed to the most up-to-date information. The 
rest is our internal ongoing work.

As to the proposed link, it could be just:
```
[URL](Solr Reference Guide). Note: the live version of the Guide reflects the 
latest version of Solr. Versions of the guide reflecting released versions of 
Solr are available as a [URL](PDF version).
```

> Documentation web page is missing link to live Solr Reference Guide
> ---
>
> Key: SOLR-6520
> URL: https://issues.apache.org/jira/browse/SOLR-6520
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 4.10
> Environment: web
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>  Labels: documentation, website
>
> The [official document page for 
> Solr|https://lucene.apache.org/solr/documentation.html] is missing the link 
> to the live Solr Reference Guide. Only the link to the PDF is there. In fact, 
> one has to go to the WIKI, it seems, to find the link. 
> It is also not linked from [the release-specific documentation 
> page|https://lucene.apache.org/solr/4_10_0/index.html] either.
> This means the search engines do not easily discover the new content, and it 
> does not show up in searches when people look for information. It also 
> means people may hesitate to look at it if they have to download the whole 
> PDF first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5936) Add BWC checks to verify what is tested matches what versions we know about

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140817#comment-14140817
 ] 

ASF subversion and git services commented on LUCENE-5936:
-

Commit 1626262 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1626262 ]

LUCENE-5936: Remove test code only needed for trunk

> Add BWC checks to verify what is tested matches what versions we know about
> ---
>
> Key: LUCENE-5936
> URL: https://issues.apache.org/jira/browse/LUCENE-5936
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 4.10.1, 5.0, 6.0
>
> Attachments: LUCENE-5936.patch, LUCENE-5936.patch
>
>
> This is a follow up from LUCENE-5934.  Mike already has something like 
> this for the smoke tester, but here I am suggesting a test within the test 
> (similar to other Version tests we have which check things like deprecation 
> status of old versions).
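
A rough sketch of the kind of check being suggested (invented names; not the 
actual patch): diff the set of versions covered by BWC test indexes against the 
versions the code knows about, and fail on any mismatch:

{code}
// Illustrative only: fail if a known version has no BWC test index,
// or a BWC test index exists for a version the code does not know.
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

class BwcCoverageCheckSketch {
  static void assertCoverage(Set<String> knownVersions, Set<String> testedVersions) {
    Set<String> untested = new TreeSet<>(knownVersions);
    untested.removeAll(testedVersions);
    if (!untested.isEmpty()) {
      throw new AssertionError("No BWC test index for versions: " + untested);
    }
    Set<String> unknown = new TreeSet<>(testedVersions);
    unknown.removeAll(knownVersions);
    if (!unknown.isEmpty()) {
      throw new AssertionError("BWC test index for unknown versions: " + unknown);
    }
  }

  public static void main(String[] args) {
    assertCoverage(new TreeSet<>(Arrays.asList("4.9.0", "4.10.0")),
                   new TreeSet<>(Arrays.asList("4.9.0")));
    // -> AssertionError: No BWC test index for versions: [4.10.0]
  }
}
{code}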



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6542) Method setConnectionTimeout throws Exception when using HttpSolrServer with a proxy

2014-09-19 Thread Jakob Furrer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-6542:
---
Description: 
I am trying to get an HttpSolrServer object with the following non-standard 
functionality:
* a proxy is used for the connection
* the connection timeout is set

An HttpClient object is required for setting the proxy (see code listing 1).

The timeout can either be defined on the internal httpClient object (see option 
a) or later on the created httpSolrServer object (see option b).

Question one:
_Is there a difference in behaviour of the HttpSolrServer instance when I set 
the connection timeout directly in my own HttpClient object, compared to 
using the method HttpSolrServer#setConnectionTimeout()?_
I would expect that there is no difference.

Moving from Solr 4.6 to Solr 4.9, I also upgraded HttpClient to the same 
version Solr is using (httpclient-4.2.6 was used in solr-4.6; now 
httpclient-4.3.1 is used in solr-4.9).

The newer HttpClient version deprecates a number of methods used in my code, 
so I was looking for a way to modify it according to the new API (see 
code listing 2).

I now get a java.lang.UnsupportedOperationException when using the method 
HttpSolrServer#setConnectionTimeout():

{noformat}
java.lang.UnsupportedOperationException
at 
org.apache.http.impl.client.InternalHttpClient.getParams(InternalHttpClient.java:204)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.setConnectionTimeout(HttpClientUtil.java:249)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.setConnectionTimeout(HttpSolrServer.java:634)
at 
test.HttpSolrServerTest.newStyle_httpclient_4_3_1(HttpSolrServerTest.java:89)
{noformat}

It seems that since the switch of the HttpClient library, something internally 
clashes in the HttpSolrServer object.

Question two:
_Is this something that has been overlooked when the library within SolrJ was 
changed to the newer version, or am I trying something that must not be done?_
I would expect that the method HttpSolrServer#setConnectionTimeout() can be 
used independently of the way I chose to create that object.

Bonus question:
_Am I using an acceptable way of accessing Solr over a proxy, or are there 
better methods?_

{code:title=code listing 1|borderStyle=solid}
/**
 * requires the following libraries to run
 * httpclient-4.2.6.jar
 * httpcore-4.2.5.jar
 * solr-solrj-4.6.0.jar
 *
 * --> shows lots of deprecated methods when using httpclient-4.3.1.jar
 */
@Test
public void oldStyle_httpclient_4_2_6() throws Exception {
    String solrUrlForPing = "http://localhost:8983/solr/collection1";
    String proxyHost = "127.0.0.1";
    int proxyPort = 8888; // Using "Fiddler" as dummy proxy (8888 = Fiddler's default port, assumed)
    int maxTimeout = 10000; // 10 seconds

    final HttpParams httpParams = new BasicHttpParams();

    // option a) timeout can be set as a parameter of the httpClient
    HttpConnectionParams.setConnectionTimeout(httpParams, maxTimeout);
    HttpConnectionParams.setSoTimeout(httpParams, maxTimeout);
    ClientConnectionManager connMgr = new PoolingClientConnectionManager();
    HttpClient httpClient = new DefaultHttpClient(connMgr, httpParams);

    HttpHost httpProxy = new HttpHost(proxyHost, proxyPort);
    httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, httpProxy);

    HttpSolrServer httpSolrServer = new HttpSolrServer(solrUrlForPing, httpClient);

    // option b) timeout can be set on the httpSolrServer object
    httpSolrServer.setConnectionTimeout(maxTimeout);
    httpSolrServer.setSoTimeout(maxTimeout);

    httpSolrServer.ping();
}
{code}

{code:title=code listing 2|borderStyle=solid}
/**
 * requires the following libraries to run
 * httpclient-4.3.1.jar
 * httpcore-4.3.jar
 * solr-solrj-4.9.0.jar
 */
@Test
public void newStyle_httpclient_4_3_1() throws Exception {
    String solrUrlForPing = "http://localhost:8983/solr/collection1";
    String proxyHost = "127.0.0.1";
    int proxyPort = 8888; // Using "Fiddler" as dummy proxy (8888 = Fiddler's default port, assumed)
    int maxTimeout = 10000; // 10 seconds

    HttpClientBuilder hcBuilder = HttpClients.custom();

    // setting the maximum allowed timeout
    RequestConfig config = RequestConfig.custom()
            .setSocketTimeout(maxTimeout)
            .setConnectTimeout(maxTimeout)
            .build();
    hcBuilder.setDefaultRequestConfig(config);
{code}
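
The message is truncated at this point in the archive. A minimal sketch of where 
the listing was headed (assuming HttpClient 4.3.x and SolrJ 4.9; the URL, proxy 
address, and timeout values are placeholders) finishes wiring the proxy into the 
builder and hands the built client to HttpSolrServer:

{code}
// Sketch, not the reporter's exact code: configure both timeouts and the proxy
// on the HttpClient 4.3 builder, pass the finished client to HttpSolrServer,
// and skip httpSolrServer.setConnectionTimeout() afterwards -- that call walks
// through the legacy getParams() API, which builder-created clients reject
// with UnsupportedOperationException (matching the stack trace above).
import org.apache.http.HttpHost;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class ProxySolrPingSketch {
    public static void main(String[] args) throws Exception {
        String solrUrlForPing = "http://localhost:8983/solr/collection1"; // placeholder
        HttpHost proxy = new HttpHost("127.0.0.1", 8888);                 // placeholder proxy
        int maxTimeout = 10000;                                           // 10 seconds

        RequestConfig config = RequestConfig.custom()
                .setSocketTimeout(maxTimeout)
                .setConnectTimeout(maxTimeout)
                .build();
        CloseableHttpClient httpClient = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .setProxy(proxy)
                .build();

        HttpSolrServer httpSolrServer = new HttpSolrServer(solrUrlForPing, httpClient);
        httpSolrServer.ping(); // goes through the proxy with the configured timeouts
    }
}
{code}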

[jira] [Commented] (LUCENE-5936) Add BWC checks to verify what is tested matches what versions we know about

2014-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140812#comment-14140812
 ] 

ASF subversion and git services commented on LUCENE-5936:
-

Commit 1626258 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1626258 ]

LUCENE-5936: Tweak test to isolate trunk only code

> Add BWC checks to verify what is tested matches what versions we know about
> ---
>
> Key: LUCENE-5936
> URL: https://issues.apache.org/jira/browse/LUCENE-5936
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 4.10.1, 5.0, 6.0
>
> Attachments: LUCENE-5936.patch, LUCENE-5936.patch
>
>
> This is a follow up from LUCENE-5934.  Mike already has something like 
> this for the smoke tester, but here I am suggesting a test within the test 
> (similar to other Version tests we have which check things like deprecation 
> status of old versions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6542) Method setConnectionTimeout throws Exception when using HttpSolrServer with a proxy

2014-09-19 Thread Jakob Furrer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-6542:
---
Description: 
I am trying to get an HttpSolrServer object with the following non-standard 
functionality:
* a proxy is used for the connection
* the connection timeout is set

An HttpClient object is required for setting the proxy (see code listing 1).

The timeout can either be defined on the internal httpClient object (see option 
a) or later on the created httpSolrServer object (see option b).

Question one:
_Is there a difference in behaviour of the HttpSolrServer instance when I set 
the connection timeout directly in my own HttpClient object, compared to 
using the method HttpSolrServer#setConnectionTimeout()?_
I would expect that there is no difference.

Moving from Solr 4.6 to Solr 4.9, I also upgraded HttpClient to the same 
version Solr is using (httpclient-4.2.6 was used in solr-4.6; now 
httpclient-4.3.1 is used in solr-4.9).

The newer HttpClient version deprecates a number of methods used in my code, 
so I was looking for a way to modify it according to the new API (see 
code listing 2).

I now get a java.lang.UnsupportedOperationException when using the method 
HttpSolrServer#setConnectionTimeout():

{noformat}
java.lang.UnsupportedOperationException
at 
org.apache.http.impl.client.InternalHttpClient.getParams(InternalHttpClient.java:204)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.setConnectionTimeout(HttpClientUtil.java:249)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.setConnectionTimeout(HttpSolrServer.java:634)
at 
test.HttpSolrServerTest.newStyle_httpclient_4_3_1(HttpSolrServerTest.java:89)
{noformat}

It seems that since the switch of the HttpClient library, something internally 
clashes in the HttpSolrServer object.

Question two:
_Is this something that has been overlooked when the library within SolrJ was 
changed to the newer version, or am I trying something that must not be done?_
I would expect that the method HttpSolrServer#setConnectionTimeout() can be 
used independently of the way I chose to create that object.

Bonus question:
_Am I using an acceptable way of accessing Solr over a proxy, or are there 
better methods?_

{code:title=code listing 1|borderStyle=solid}
/**
 * requires the following libraries to run
 * httpclient-4.2.6.jar
 * httpcore-4.2.5.jar
 * solr-solrj-4.6.0.jar
 *
 * --> shows lots of deprecated methods when using httpclient-4.3.1.jar
 */
@Test
public void oldStyle_httpclient_4_2_6() throws Exception {
    String solrUrlForPing = "http://localhost:8060/FTS-Index/WebOffice";
    String proxyHost = "127.0.0.1";
    int proxyPort = 8888; // Using "Fiddler" as dummy proxy (8888 = Fiddler's default port, assumed)
    int maxTimeout = 10000; // 10 seconds

    final HttpParams httpParams = new BasicHttpParams();

    // option a) timeout can be set as a parameter of the httpClient
    HttpConnectionParams.setConnectionTimeout(httpParams, maxTimeout);
    HttpConnectionParams.setSoTimeout(httpParams, maxTimeout);
    ClientConnectionManager connMgr = new PoolingClientConnectionManager();
    HttpClient httpClient = new DefaultHttpClient(connMgr, httpParams);

    HttpHost httpProxy = new HttpHost(proxyHost, proxyPort);
    httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, httpProxy);

    HttpSolrServer httpSolrServer = new HttpSolrServer(solrUrlForPing, httpClient);

    // option b) timeout can be set on the httpSolrServer object
    httpSolrServer.setConnectionTimeout(maxTimeout);
    httpSolrServer.setSoTimeout(maxTimeout);

    httpSolrServer.ping();
}
{code}

{code:title=code listing 2|borderStyle=solid}
/**
 * requires the following libraries to run
 * httpclient-4.3.1.jar
 * httpcore-4.3.jar
 * solr-solrj-4.9.0.jar
 */
@Test
public void newStyle_httpclient_4_3_1() throws Exception {
    String solrUrlForPing = "http://localhost:8060/FTS-Index/WebOffice";
    String proxyHost = "127.0.0.1";
    int proxyPort = 8888; // Using "Fiddler" as dummy proxy (8888 = Fiddler's default port, assumed)
    int maxTimeout = 10000; // 10 seconds

    HttpClientBuilder hcBuilder = HttpClients.custom();

    // setting the maximum allowed timeout
    RequestConfig config = RequestConfig.custom()
            .setSocketTimeout(maxTimeout)
            .setConnectTimeout(maxTimeout)
            .build();
    hcBuilder.setDefaultRequestConfig(config);
{code}

Re: Help with `ant beast`

2014-09-19 Thread Ramkumar R. Aiyengar
Aha.. That worked, thanks!

On Fri, Sep 19, 2014 at 4:53 PM, Steve Rowe  wrote:

> I think ‘ant beast’ only works in the directory of the module containing
> the test, not at a higher level.
>
> On Sep 19, 2014, at 11:45 AM, Ramkumar R. Aiyengar <
> andyetitmo...@gmail.com> wrote:
>
> > I am trying to use `ant beast` on trunk (per the recommendation in
> test-help) and getting this error:
> >
> > ~/lucene-solr/lucene> ant beast -Dbeast.iters=10 -Dtests.dups=6
> -Dtestcase=TestBytesStore
> >
> > -beast:
> >   [beaster] Beast round: 1
> >
> > BUILD FAILED
> > ~/lucene-solr/lucene/common-build.xml:1363: The following error occurred
> while executing this line:
> > ~/lucene-solr/lucene/common-build.xml:1358: The following error occurred
> while executing this line:
> > ~/lucene-solr/lucene/common-build.xml:961: Reference junit.classpath not
> found.
> >
> > `ant test` works just fine. Any idea where the problem might be?
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Not sent from my iPhone or my Blackberry or anyone else's


[jira] [Created] (SOLR-6542) Method setConnectionTimeout throws Exception when using HttpSolrServer with a proxy

2014-09-19 Thread Jakob Furrer (JIRA)
Jakob Furrer created SOLR-6542:
--

 Summary: Method setConnectionTimeout throws Exception when using 
HttpSolrServer with a proxy
 Key: SOLR-6542
 URL: https://issues.apache.org/jira/browse/SOLR-6542
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10, 4.9
Reporter: Jakob Furrer
Priority: Minor


I am trying to get an HttpSolrServer object with the following non-standard 
functionality:
a) a proxy is used for the connection
b) the connection timeout is set

An HttpClient object is required for setting the proxy (see code listing [1]).

The timeout can either be defined on the internal httpClient object (see option 
[a]) or later on the created httpSolrServer object (see option [b]).

Question one:
Is there a difference in behaviour of the HttpSolrServer instance when I set 
the connection timeout directly in my own HttpClient object, compared to 
using the method HttpSolrServer#setConnectionTimeout()?
I would expect that there is no difference.

Moving from Solr 4.6 to Solr 4.9, I also upgraded HttpClient to the same 
version Solr is using (httpclient-4.2.6 was used in solr-4.6; now 
httpclient-4.3.1 is used in solr-4.9).

The newer HttpClient version deprecates a number of methods used in my code, 
so I was looking for a way to modify it according to the new API (see 
code listing [2]).

I now get a java.lang.UnsupportedOperationException when using the method 
HttpSolrServer#setConnectionTimeout():

java.lang.UnsupportedOperationException
at 
org.apache.http.impl.client.InternalHttpClient.getParams(InternalHttpClient.java:204)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.setConnectionTimeout(HttpClientUtil.java:249)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.setConnectionTimeout(HttpSolrServer.java:634)
at 
test.HttpSolrServerTest.newStyle_httpclient_4_3_1(HttpSolrServerTest.java:89)

It seems that since the switch of the HttpClient library, something internally 
clashes in the HttpSolrServer object.

Question two:
Is this something that has been overlooked when the library within SolrJ was 
changed to the newer version, or am I trying something that must not be done?
I would expect that the method HttpSolrServer#setConnectionTimeout() can be 
used independently of the way I chose to create that object.

Bonus question:
Am I using an acceptable way of accessing Solr over a proxy, or are there 
better methods?

/**
 * code listing [1]:
 *
 * requires the following libraries to run
 * httpclient-4.2.6.jar
 * httpcore-4.2.5.jar
 * solr-solrj-4.6.0.jar
 *
 * --> shows lots of deprecated methods when using httpclient-4.3.1.jar
 */
@Test
public void oldStyle_httpclient_4_2_6() throws Exception {
    String solrUrlForPing = "http://localhost:8060/FTS-Index/WebOffice";
    String proxyHost = "127.0.0.1";
    int proxyPort = 8888; // Using "Fiddler" as dummy proxy (8888 = Fiddler's default port, assumed)
    int maxTimeout = 10000; // 10 seconds

    final HttpParams httpParams = new BasicHttpParams();

    // option a) timeout can be set as a parameter of the httpClient
    HttpConnectionParams.setConnectionTimeout(httpParams, maxTimeout);
    HttpConnectionParams.setSoTimeout(httpParams, maxTimeout);
    ClientConnectionManager connMgr = new PoolingClientConnectionManager();
    HttpClient httpClient = new DefaultHttpClient(connMgr, httpParams);

    HttpHost httpProxy = new HttpHost(proxyHost, proxyPort);
    httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, httpProxy);

    HttpSolrServer httpSolrServer = new HttpSolrServer(solrUrlForPing, httpClient);

    // option b) timeout can be set on the httpSolrServer object
    httpSolrServer.setConnectionTimeout(maxTimeout);
    httpSolrServer.setSoTimeout(maxTimeout);

    httpSolrServer.ping();
}

/**
 * code listing [2]:
 *
 * requires the following libraries to run
 * httpclient-4.3.1.jar
 * httpcore-4.3.jar
 * solr-solrj-4.9.0.jar
 */
@Test
public void newStyle_httpclient_4_3_1() throws Exception {
    String solrUrlForPing = "http://localhost:8060/FTS-Index/WebOffice";
    String proxyHost = "127.0.0.1";
    int proxyPort = 8888; // Using "Fiddler" as dummy proxy (8888 = Fiddler's default port, assumed)
    int maxTimeout = 10000; // 10 seconds

    HttpClientBuilder hcBuilder = HttpClients.custom();

    // setting the maximum allowed timeout
    RequestConfig config = RequestConfig.custom()
            .setSocketTimeout(maxTimeout)
            .setConnectTimeout(maxTimeout)
            .build();
    hcBuilder.setDefaultRequestConfig(config);

[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-19 Thread Kun Xi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140787#comment-14140787
 ] 

Kun Xi commented on SOLR-6307:
--

[~anuragsharma]

Here is how I reproduce the bug:

1. create a document schema with two fields:
 - birth_year_is: multi-valued int field
 - reservation_dts: multi-valued datetime field
2. create a document with dummy data:
 - birth_year_is: [1960, 1970]
 - reservation_dts: ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z"]
3. try to remove 1970 from birth_year_is:
{code}
curl http://localhost:8080/update\?commit\=true -H 'Content-type:application/json' \
  -d '[{ "birth_year_is": { "remove": [1970]}, "id": 1}]'
{code}
4. try to remove 2014-07-16T12:00:00Z from reservation_dts:
{code}
curl http://localhost:8080/update\?commit\=true -H 'Content-type:application/json' \
  -d '[{ "reservation_dts": { "remove": ["2014-07-16T12:00:00Z"]}, "id": 1}]'
{code}
5. go to the Solr console and verify that the two fields are *NOT* updated.
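
For anyone reproducing this from Java instead of curl, a sketch of the 
equivalent SolrJ calls (the server URL is a placeholder assumption; the field 
names and values mirror the steps above):

{code}
// Sketch of the same atomic "remove" repro through the SolrJ 4.9-era API.
import java.util.Arrays;
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicRemoveRepro {
    public static void main(String[] args) throws Exception {
        // placeholder URL; point it at the core that holds the test document
        HttpSolrServer server = new HttpSolrServer("http://localhost:8080/solr/collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        // atomic update syntax: a map of operation -> value(s)
        doc.addField("birth_year_is",
            Collections.singletonMap("remove", Arrays.asList(1970)));
        doc.addField("reservation_dts",
            Collections.singletonMap("remove", Arrays.asList("2014-07-16T12:00:00Z")));

        server.add(doc);
        server.commit();
        // per this issue, the int and date removes are silently ignored
    }
}
{code}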


> Atomic update remove does not work for int array or date array
> --
>
> Key: SOLR-6307
> URL: https://issues.apache.org/jira/browse/SOLR-6307
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.9
>Reporter: Kun Xi
>  Labels: atomic, difficulty-medium, impact-medium
>
> Try to remove an element in the string array with curl:
> {code}
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
> [1960]},  "id": 1098}]'
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
> ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
> "2014-02-21T12:00:00Z"]}, "id": 1098}]'
> {code}
> Neither of them works.
> The set and add operations for int arrays work. 
> The set, remove, and add operations for string arrays work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


