[jira] [Commented] (SOLR-6793) ReplicationHandler does not destroy all of it's created SnapPullers

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270264#comment-14270264
 ] 

Mark Miller commented on SOLR-6793:
---

bq.  shouldn't the finally block have the same cleanup?

No, the temp puller's close is ultimately handled in the close hook. If it is 
replaced first, then we close the previous one. As mentioned above, the temp 
puller is currently kept around for stat calls.

While we might improve some of that, I think this is correct for the current 
code.
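To illustrate the lifecycle being described, here is a minimal, self-contained sketch: the current temp puller is destroyed when it is replaced, and whichever puller is still current at shutdown is destroyed by a close hook. All class and method names here are illustrative stand-ins, not Solr's actual ReplicationHandler/SnapPuller API.

```java
import java.util.ArrayList;
import java.util.List;

public class SnapPullerLifecycle {

    static class SnapPuller {
        final String name;
        boolean destroyed;
        SnapPuller(String name) { this.name = name; }
        void destroy() { destroyed = true; }
    }

    private SnapPuller tempPuller;               // kept around for stat calls
    private final List<Runnable> closeHooks = new ArrayList<>();

    SnapPullerLifecycle() {
        // The close hook destroys whatever temp puller is current at close time.
        closeHooks.add(() -> { if (tempPuller != null) tempPuller.destroy(); });
    }

    void newTempPuller(String name) {
        SnapPuller previous = tempPuller;
        tempPuller = new SnapPuller(name);
        // "If it is changed first, then we close the previous one."
        if (previous != null) previous.destroy();
    }

    void close() {
        closeHooks.forEach(Runnable::run);
    }

    public static void main(String[] args) {
        SnapPullerLifecycle handler = new SnapPullerLifecycle();
        handler.newTempPuller("pull-1");
        SnapPuller first = handler.tempPuller;
        handler.newTempPuller("pull-2");   // replaces pull-1, destroying it
        SnapPuller second = handler.tempPuller;
        System.out.println("first destroyed after swap: " + first.destroyed);
        handler.close();                   // close hook destroys the current puller
        System.out.println("second destroyed after close: " + second.destroyed);
    }
}
```

Under this model no puller leaks: every created puller is destroyed either at replacement time or by the close hook.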

 ReplicationHandler does not destroy all of it's created SnapPullers
 ---

 Key: SOLR-6793
 URL: https://issues.apache.org/jira/browse/SOLR-6793
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6793.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Possibly spoofed] Re: Anybody having troubles building trunk?

2015-01-08 Thread Alexandre Rafalovitch
The latest trunk seems to work after blowing away the ivy cache
folders. So it must have been metadata poisoning.

Thank you for the suggestions; I had not considered anything outside
the actual Subversion folder as a contributing factor.

Regards,
   Alex.

Sign up for my Solr resources newsletter at http://www.solr-start.com/


On 8 January 2015 at 04:52, Vanlerberghe, Luc
luc.vanlerber...@bvdinfo.com wrote:
 I had exactly the same issue building Solr, but using the tips here I managed 
 to get everything working again:

 I deleted the .ant and .ivy2 folders in my user directory and edited 
 lucene\ivy-settings.xml to comment out the <ibiblio name="cloudera" ... /> and 
 <resolver ref="cloudera"/> elements (leave the elements for 
 releases.cloudera.com!)

 After that, "ant ivy-bootstrap" and "ant resolve" ran successfully (taking 
 about 5 minutes to download all dependencies).

 I guess that one of the artifacts loaded from cloudera conflicts with one 
 of the official ones from releases.cloudera.com (perhaps the order in the 
 resolver chain should be reversed?)

 Side note: For releases.cloudera.com, Mark Miller changed https to http on 
 14/3/2014 to work around an expired SSL certificate.
 I checked the certificate on the site and switched back to using https and it 
 seems to be fine now...

 Regards,

 Luc

 -Original Message-
 From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
 Sent: donderdag 8 januari 2015 6:32
 To: dev@lucene.apache.org
 Subject: [Possibly spoofed] Re: Anybody having troubles building trunk?

 Similar but different? I got rid of the cloudera references altogether,
 did "ant clean", and it is still the same error.

 The build line that failed is:

  <ivy:retrieve conf="compile,compile.hadoop" type="jar,bundle"
  sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"/>

 in trunk/solr/core/build.xml:65

 Regards,
Alex.
 
 Sign up for my Solr resources newsletter at http://www.solr-start.com/


 On 8 January 2015 at 00:12, Steve Rowe sar...@gmail.com wrote:
 I had the same issue earlier today, and identified the problem here, along
 with a workaround:
 https://issues.apache.org/jira/browse/SOLR-4839?focusedCommentId=14268311&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14268311

 On Wed, Jan 7, 2015 at 10:36 PM, Alexandre Rafalovitch arafa...@gmail.com
 wrote:

 I am having dependencies issues even if I blow away everything, check
 it out again and do 'ant resolve':
 resolve:
 [ivy:retrieve]
 [ivy:retrieve] :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve] ::
 [ivy:retrieve] ::  UNRESOLVED DEPENDENCIES ::
 [ivy:retrieve] ::
 [ivy:retrieve] ::
 org.restlet.jee#org.restlet.ext.servlet;2.3.0: configuration not found
 in org.restlet.jee#org.restlet.ext.servlet;2.3.0: 'master'. It was
 required from org.apache.solr#core;working@Alexs-MacBook-Pro.local
 compile
 [ivy:retrieve] ::
 [ivy:retrieve]
 [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

 BUILD FAILED

 Regards,
Alex.

 
 Sign up for my Solr resources newsletter at http://www.solr-start.com/




[jira] [Created] (SOLR-6937) In schemaless mode, field names with spaces should be converted

2015-01-08 Thread Grant Ingersoll (JIRA)
Grant Ingersoll created SOLR-6937:
-

 Summary: In schemaless mode, field names with spaces should be 
converted
 Key: SOLR-6937
 URL: https://issues.apache.org/jira/browse/SOLR-6937
 Project: Solr
  Issue Type: Bug
Reporter: Grant Ingersoll


Assuming spaces in field names are still a problem, we should automatically 
convert them to have no spaces. For instance, I indexed the Citibike public 
data set, which has: 
{quote}
tripduration,starttime,stoptime,"start station id","start station 
name","start station latitude","start station longitude","end station id","end 
station name","end station latitude","end station 
longitude",bikeid,usertype,"birth year",gender{quote}

My vote would be to replace spaces w/ underscores.
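A sketch of the suggested conversion, replacing runs of whitespace with underscores. The class and method names are hypothetical for illustration; this is not Solr's actual update-processor API.

```java
public class FieldNameSanitizer {

    // Replace any run of whitespace in a field name with a single underscore.
    static String sanitize(String fieldName) {
        return fieldName.trim().replaceAll("\\s+", "_");
    }

    public static void main(String[] args) {
        String[] header = { "tripduration", "start station id", "birth year" };
        for (String h : header) {
            System.out.println(h + " -> " + sanitize(h));
        }
    }
}
```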






[jira] [Commented] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-08 Thread Boon Low (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270201#comment-14270201
 ] 

Boon Low commented on SOLR-6648:


Hi Tomás, I will look into the test case over the weekend; my lucene-solr trunk 
wouldn't compile for some reason: 'ivy:retrieve' dependency problems.

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 solrconfig additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 schema changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl "http://localhost:8983/solr/update/json?commit=true" -H 
 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" 
 : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester, line 200, where we call lookup.lookup(). 
 This call does not take allTermsRequired and doHighlight, since those are 
 tunable only for AnalyzingInfixSuggester and not for the other lookup 
 implementations.
 If different Lookup implementations take different params in their 
 constructors, these sorts of issues will keep happening. Maybe we should not 
 keep it generic, and instead do instanceof checks and set params accordingly?
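The instanceof approach floated above could be sketched roughly like this. The class names and setters below are illustrative stand-ins, not the real Lucene/Solr API: the point is only that implementation-specific knobs get applied when, and only when, the lookup actually is the infix suggester.

```java
public class SuggesterConfigSketch {

    interface Lookup { }

    // Stand-in for the one implementation that supports these options.
    static class AnalyzingInfixSuggester implements Lookup {
        boolean allTermsRequired = true;
        boolean doHighlight = true;
    }

    // Stand-in for any other lookup implementation with no such knobs.
    static class FstLookup implements Lookup { }

    static void configure(Lookup lookup, boolean allTermsRequired, boolean doHighlight) {
        if (lookup instanceof AnalyzingInfixSuggester) {
            AnalyzingInfixSuggester infix = (AnalyzingInfixSuggester) lookup;
            infix.allTermsRequired = allTermsRequired;
            infix.doHighlight = doHighlight;
        }
        // Other implementations: nothing to set, so the params are ignored.
    }

    public static void main(String[] args) {
        AnalyzingInfixSuggester infix = new AnalyzingInfixSuggester();
        configure(infix, false, false);
        configure(new FstLookup(), false, false); // harmless no-op
        System.out.println("doHighlight=" + infix.doHighlight
                + " allTermsRequired=" + infix.allTermsRequired);
    }
}
```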






[jira] [Commented] (SOLR-6839) Direct routing with CloudSolrServer will ignore the Overwrite document option.

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270236#comment-14270236
 ] 

ASF subversion and git services commented on SOLR-6839:
---

Commit 1650422 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1650422 ]

SOLR-6839: Direct routing with CloudSolrServer will ignore the Overwrite 
document option.

 Direct routing with CloudSolrServer will ignore the Overwrite document option.
 --

 Key: SOLR-6839
 URL: https://issues.apache.org/jira/browse/SOLR-6839
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6839.patch









RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 11546 - Still Failing!

2015-01-08 Thread Uwe Schindler
Seems to still have problems, but this time it looks more like a wrong config. 
I nuked the Ivy cache; let's see if it helps:

resolve:
[ivy:retrieve] 
[ivy:retrieve] :: problems summary ::
[ivy:retrieve]  WARNINGS
[ivy:retrieve]  ::
[ivy:retrieve]  ::  UNRESOLVED DEPENDENCIES ::
[ivy:retrieve]  ::
[ivy:retrieve]  :: org.restlet.jee#org.restlet.ext.servlet;2.3.0: 
configuration not found in org.restlet.jee#org.restlet.ext.servlet;2.3.0: 
'master'. It was required from 
org.apache.solr#core;work...@serv1.sd-datasolutions.de compile
[ivy:retrieve]  ::
[ivy:retrieve]  ERRORS
[ivy:retrieve]  unknown resolver cloudera
[ivy:retrieve]  unknown resolver cloudera
[ivy:retrieve] 
[ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

To me the message 
[ivy:retrieve] unknown resolver cloudera
is strange. Wasn't that one removed?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Friday, January 09, 2015 12:51 AM
 To: u...@thetaphi.de; dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b20) -
 Build # 11546 - Still Failing!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11546/
 Java: 32bit/jdk1.8.0_40-ea-b20 -server -XX:+UseConcMarkSweepGC
 
 All tests passed
 
 Build Log:
 [...truncated 7976 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:519: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:61: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:39: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:187: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:510: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:463: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:376: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/build.xml:65: impossible to resolve dependencies:
   resolve failed - see output for details
 
 Total time: 32 minutes 38 seconds
 Build step 'Invoke Ant' marked build as failure
 [description-setter] Description set: Java: 32bit/jdk1.8.0_40-ea-b20 -server -XX:+UseConcMarkSweepGC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any
 






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b20) - Build # 11547 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11547/
Java: 64bit/jdk1.8.0_40-ea-b20 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([2E00F5D6B0ABCC8B:AFE67BCEC7F4ACB7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:750)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:199)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6908) SimplePostTool's help message is incorrect -Durl parameter

2015-01-08 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270512#comment-14270512
 ] 

Alexandre Rafalovitch commented on SOLR-6908:
-

Also, the README.txt has the reversed version of the usage:
{quote}
java -jar -Dc=collection_name post.jar *.xml
{quote}

According to *java -?*, the *-D* params need to come before *-jar*. Strangely, 
the example actually works as given on Java 8/Mac, but I think it should still 
be consistent with the other locations.

 SimplePostTool's help message is incorrect -Durl parameter
 --

 Key: SOLR-6908
 URL: https://issues.apache.org/jira/browse/SOLR-6908
 Project: Solr
  Issue Type: Bug
  Components: documentation
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk


 {quote}
 java -jar post.jar -h
 ...
 java -Durl=http://localhost:8983/solr/update/extract -Dparams=literal.id=a 
 -Dtype=application/pdf -jar post.jar a.pdf
 ...
 {quote}
 The example is the only one for -Durl and is not correct as it is missing the 
 collection name. Also, even though this is an example, *a.pdf* does not 
 exist, but we do have *solr-word.pdf* now.
 So, this should probably say:
 {quote}
 java -Durl=http://localhost:8983/solr/techproducts/update/extract 
 -Dparams=literal.id=pdf1 -Dtype=application/pdf -jar post.jar solr-word.pdf
 {quote}
 Also, it is worth mentioning (if true) that specifying *-Durl* overrides 
 *-Dc*.






[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-08 Thread Boon Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boon Low updated SOLR-6648:
---
Attachment: (was: SOLR-6648.patch)

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 solrconfig additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 schema changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl "http://localhost:8983/solr/update/json?commit=true" -H 
 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" 
 : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]
 {code}
 The problem is in SolrSuggester, line 200, where we call lookup.lookup(). 
 This call does not take allTermsRequired and doHighlight, since those are 
 tunable only for AnalyzingInfixSuggester and not for the other lookup 
 implementations.
 If different Lookup implementations take different params in their 
 constructors, these sorts of issues will keep happening. Maybe we should not 
 keep it generic, and instead do instanceof checks and set params accordingly?






[jira] [Commented] (SOLR-6872) Starting techproduct example fails on Trunk with Version is too old for PackedInts

2015-01-08 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270463#comment-14270463
 ] 

Alexandre Rafalovitch commented on SOLR-6872:
-

I cannot reproduce it with the latest build. It's safe to close.

 Starting techproduct example fails on Trunk with Version is too old for 
 PackedInts
 

 Key: SOLR-6872
 URL: https://issues.apache.org/jira/browse/SOLR-6872
 Project: Solr
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Alexandre Rafalovitch
Priority: Blocker
 Fix For: Trunk


 {quote}
 bin/solr -e techproducts
 {quote}
 causes:
 {quote}
 ...
 Caused by: java.lang.ExceptionInInitializerError
   at 
 org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.<init>(Lucene50PostingsWriter.java:111)
   at 
 org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsConsumer(Lucene50PostingsFormat.java:429)
   at 
 org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:196)
   at 
 org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:107)
   at 
 org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:112)
   at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:420)
   at 
 org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:504)
   at 
 org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:614)
   at 
 org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2714)
 
 Caused by: java.lang.IllegalArgumentException: Version is too old, should be 
 at least 2 (got 0)
   at 
 org.apache.lucene.util.packed.PackedInts.checkVersion(PackedInts.java:77)
   at 
 org.apache.lucene.util.packed.PackedInts.getDecoder(PackedInts.java:742)
 {quote}






[jira] [Commented] (SOLR-6887) SolrResourceLoader does not canonicalise the path

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270262#comment-14270262
 ] 

Mark Miller commented on SOLR-6887:
---

Cool.

This is a crazy little bug. We should plug in this canonicalisation for 5.0 
anyway. Something similar is also done by Lucene's FSDirectory.
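For illustration, `java.io.File.getCanonicalFile()` is one way to do this kind of canonicalisation: it collapses the `../..` segments (and resolves symlinks), so a classloader path check sees the real directory rather than the raw relative form reported in the warnings below. The paths here are taken from the reproduction log; this is a sketch, not the SolrResourceLoader fix itself.

```java
import java.io.File;
import java.io.IOException;

public class CanonicalPathDemo {
    public static void main(String[] args) throws IOException {
        // Instance dir and <lib> dir from the reproduction below.
        File instanceDir = new File("/Users/mak/solr-4.10.2/node1/solr/collection2_shard1_replica1");
        File libDir = new File(instanceDir, "../../../contrib/extraction/lib");

        // Raw form, as it appears in the "Can't find (or read) directory" warning.
        System.out.println("raw:       " + libDir.getPath());
        // Canonical form, with the ".." segments collapsed.
        System.out.println("canonical: " + libDir.getCanonicalPath());
    }
}
```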

 SolrResourceLoader does not canonicalise the path
 -

 Key: SOLR-6887
 URL: https://issues.apache.org/jira/browse/SOLR-6887
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2
Reporter: Martijn Koster
Priority: Minor

 I get 
 {quote}
 Can't find (or read) directory to add to classloader
 {quote}
 errors for valid config files.
 To reproduce:
 Step 1: run up a Solr with a zookeeper (default collection, 1 node, 1 shard):
 {noformat}
 tar xvf ~/Downloads/solr-4.10.2.tgz 
 cd solr-4.10.2/
 ./bin/solr -e cloud
 Welcome to the SolrCloud example!
 This interactive session will help you launch a SolrCloud cluster on your 
 local workstation.
 To begin, how many Solr nodes would you like to run in your local cluster? 
 (specify 1-4 nodes) [2] 1
 Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
 Please enter the port for node1 [8983] 
 8983
 Cloning /Users/mak/solr-4.10.2/example into /Users/mak/solr-4.10.2/node1
 Starting up SolrCloud node1 on port 8983 using command:
 solr start -cloud -d node1 -p 8983   
 Waiting to see Solr listening on port 8983 [/]  
 Started Solr server on port 8983 (pid=14245). Happy searching!
 Now let's create a new collection for indexing documents in your 1-node 
 cluster.
 Please provide a name for your new collection: [gettingstarted] 
 gettingstarted
 How many shards would you like to split gettingstarted into? [2] 1
 1
 How many replicas per shard would you like to create? [2] 1
 1
 Please choose a configuration for the gettingstarted collection, available 
 options are: default or schemaless [default] 
 default
 Deploying default Solr configuration files to embedded ZooKeeper using 
 command:
 /Users/mak/solr-4.10.2/example/scripts/cloud-scripts/zkcli.sh -zkhost 
 localhost:9983 -cmd upconfig -confdir 
 /Users/mak/solr-4.10.2/example/solr/collection1/conf -confname default
 Successfully deployed the 
 /Users/mak/solr-4.10.2/example/solr/collection1/conf configuration directory 
 to ZooKeeper as default
 Creating new collection gettingstarted with 1 shards and replication factor 1 
 using Collections API command:
 http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&replicationFactor=1&numShards=1&collection.configName=default&maxShardsPerNode=1&wt=json&indent=2
 For more information about the Collections API, please see: 
 https://cwiki.apache.org/confluence/display/solr/Collections+API
 {
   "responseHeader":{
     "status":0,
     "QTime":2139},
   "success":{
     "":{
       "responseHeader":{
         "status":0,
         "QTime":1906},
       "core":"gettingstarted_shard1_replica1"}}}
 {noformat}
 Verify the server is running on http://localhost:8983/solr/#/
 Step 2: duplicate the zookeeper config:
 {noformat}
 mkdir zkshell
 cd zkshell/
 virtualenv venv
 source venv/bin/activate
 pip install zk_shell
 zk-shell localhost:9983
 Welcome to zk-shell (0.99.05)
 (CONNECTING) / 
 (CONNECTED) / 
 (CONNECTED) / cd configs
 (CONNECTED) /configs cp myconf myconf2 true
 (CONNECTED) /configs cd myconf
 (CONNECTED) /configs/myconf get solrconfig.xml
 (CONNECTED) /configs quit
 {noformat}
 admire the config file, and note the {{<lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" />}}. That 
 configuration comes from [somewhere like 
 this|https://github.com/apache/lucene-solr/blob/lucene_solr_4_10_2/solr/example/solr/collection1/conf/solrconfig.xml#L75].
 Step 3: create a collection with the new config:
 {noformat}
 curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=collection2&collection.configName=myconf2&numShards=1'
 {noformat}
 Step 4: check the logs:
 {noformat}
 grep org.apache.solr.core.SolrResourceLoader ./node1/logs/solr.log
 ...
 INFO  - 2014-12-23 18:32:55.165; org.apache.solr.core.SolrResourceLoader; new 
 SolrResourceLoader for directory: 
 '/Users/mak/solr-4.10.2/node1/solr/collection2_shard1_replica1/'
 WARN  - 2014-12-23 18:32:55.218; org.apache.solr.core.SolrResourceLoader; 
 Can't find (or read) directory to add to classloader: 
 ../../../contrib/extraction/lib (resolved as: 
 /Users/mak/solr-4.10.2/node1/solr/collection2_shard1_replica1/../../../contrib/extraction/lib).
 WARN  - 2014-12-23 18:32:55.218; org.apache.solr.core.SolrResourceLoader; 
 Can't find (or read) directory to add to classloader: ../../../dist/ 
 (resolved as: 
 /Users/mak/solr-4.10.2/node1/solr/collection2_shard1_replica1/../../../dist).
 {noformat}
 Note the error for 
 

[jira] [Commented] (SOLR-6930) Provide Circuit Breakers For Expensive Solr Queries

2015-01-08 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270351#comment-14270351
 ] 

Anshum Gupta commented on SOLR-6930:


Sure, nothing against having this at all. My main point was that this issue is 
different from the other one: they are related and overlapping, but they aim 
to solve different problems.

 Provide Circuit Breakers For Expensive Solr Queries
 -

 Key: SOLR-6930
 URL: https://issues.apache.org/jira/browse/SOLR-6930
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Mike Drob

 Ref: 
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
 ES currently allows operators to configure circuit breakers that preemptively 
 fail queries whose estimated cost is too large, rather than allowing an OOM 
 Exception to happen. We might be able to do the same thing.
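A very rough sketch of the idea: before running a query, compare an estimated memory cost against a configured limit and fail fast instead of risking an OOM. The class, method, and numbers below are made up for illustration; no such API exists in Solr today, and real cost estimation is the hard part.

```java
public class QueryCircuitBreaker {

    static class CircuitBreakerException extends RuntimeException {
        CircuitBreakerException(String msg) { super(msg); }
    }

    private final long limitBytes;

    QueryCircuitBreaker(long limitBytes) { this.limitBytes = limitBytes; }

    // Reject up front if the estimated cost exceeds the configured budget.
    void checkOrTrip(long estimatedBytes) {
        if (estimatedBytes > limitBytes) {
            throw new CircuitBreakerException(
                "estimated " + estimatedBytes + " bytes exceeds limit " + limitBytes);
        }
    }

    public static void main(String[] args) {
        QueryCircuitBreaker breaker = new QueryCircuitBreaker(1024 * 1024); // 1 MB budget
        breaker.checkOrTrip(512 * 1024);              // small query: allowed
        System.out.println("small query allowed");
        try {
            breaker.checkOrTrip(8L * 1024 * 1024);    // big query: rejected up front
        } catch (CircuitBreakerException e) {
            System.out.println("tripped: " + e.getMessage());
        }
    }
}
```

The value over a plain OOM is that the failure is a per-query error response instead of a JVM-wide crash.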






[jira] [Updated] (SOLR-6937) In schemaless mode, field names with spaces should be converted

2015-01-08 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll updated SOLR-6937:
--
  Component/s: Schema and Analysis
Fix Version/s: 5.0

 In schemaless mode, field names with spaces should be converted
 ---

 Key: SOLR-6937
 URL: https://issues.apache.org/jira/browse/SOLR-6937
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Grant Ingersoll
 Fix For: 5.0


 Assuming spaces in field names are still bad, we should automatically convert 
 them to not have spaces.  For instance, I indexed the Citibike public data set, 
 which has: 
 {quote}
 tripduration,starttime,stoptime,start station id,start station 
 name,start station latitude,start station longitude,end station 
 id,end station name,end station latitude,end station 
 longitude,bikeid,usertype,birth year,gender{quote}
 My vote would be to replace spaces w/ underscores.
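The underscore replacement proposed above is essentially a one-liner; a minimal sketch (class and method names are made up, this is not Solr's actual schemaless code path):

```java
// Illustrative only: normalize incoming field names before schemaless
// field guessing sees them. Not Solr's actual implementation.
public class FieldNameSanitizer {
    /** Replace each run of whitespace with a single underscore. */
    public static String sanitize(String fieldName) {
        return fieldName.trim().replaceAll("\\s+", "_");
    }

    public static void main(String[] args) {
        String[] names = {"start station id", "birth year", "bikeid"};
        for (String name : names) {
            System.out.println(name + " -> " + sanitize(name));
        }
    }
}
```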






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_25) - Build # 4402 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4402/
Java: 32bit/jdk1.8.0_25 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 8029 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:519: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:467: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:61: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:39: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:187: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:510:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:463:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:376:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\build.xml:65:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 36 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_25 -client 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-01-08 Thread Boon Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boon Low updated SOLR-6648:
---
Attachment: SOLR-6648.patch

Here is a patch w.r.t. trunk 08/01/15

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
Assignee: Tomás Fernández Löbbe
  Labels: suggester
 Fix For: 5.0, Trunk

 Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch, SOLR-6648.patch


 When using AnalyzingInfixLookupFactory suggestions always return with the 
 match term as highlighted and 'allTermsRequired' is always set to true.
 We should be able to configure those.
 Steps to reproduce - 
 solrconfig additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 schema changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl "http://localhost:8983/solr/update/json?commit=true" -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
       "bass":{
         "numFound":3,
         "suggestions":[{
             "term":"<b>bass</b> fishing",
             "weight":0,
             "payload":""},
           {
             "term":"sea <b>bass</b>",
             "weight":0,
             "payload":""},
           {
             "term":"sea <b>bass</b> fishing",
             "weight":0,
             "payload":""}]
 {code}
 The problem is in SolrSuggester line 200 where we call lookup.lookup().
 This call does not pass allTermsRequired and doHighlight since they are 
 only tunable for AnalyzingInfixSuggester and not the other lookup 
 implementations.
 If different Lookup implementations take different params in their 
 constructors, these sorts of issues will keep happening. Maybe we 
 should not keep it generic, and instead do instanceof checks and set params 
 accordingly?
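A toy illustration of that instanceof-based dispatch (all class and method names here are hypothetical; the real Lucene/Solr signatures differ):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical types sketching the instanceof dispatch proposed above;
// the real Solr/Lucene classes and signatures differ.
public class SuggesterDispatchDemo {
    public interface Lookup {
        List<String> lookup(String key, int num);
    }

    // Only this implementation understands the extra tunables.
    public static class AnalyzingInfixLookup implements Lookup {
        private final List<String> entries;
        public AnalyzingInfixLookup(List<String> entries) { this.entries = entries; }

        public List<String> lookup(String key, int num) {
            // today's hard-coded behavior: always highlight, all terms required
            return lookup(key, num, true, true);
        }

        public List<String> lookup(String key, int num,
                                   boolean allTermsRequired, boolean doHighlight) {
            // allTermsRequired is ignored by this toy single-term matcher
            List<String> out = new ArrayList<>();
            for (String e : entries) {
                if (out.size() >= num) break;
                if (!e.contains(key)) continue;
                out.add(doHighlight ? e.replace(key, "<b>" + key + "</b>") : e);
            }
            return out;
        }
    }

    // The caller (SolrSuggester, in this ticket) branches on the impl type.
    public static List<String> lookup(Lookup l, String key, int num,
                                      boolean allTermsRequired, boolean doHighlight) {
        if (l instanceof AnalyzingInfixLookup) {
            return ((AnalyzingInfixLookup) l).lookup(key, num, allTermsRequired, doHighlight);
        }
        return l.lookup(key, num); // other impls have no such knobs
    }

    public static void main(String[] args) {
        Lookup l = new AnalyzingInfixLookup(List.of("bass fishing", "sea bass"));
        System.out.println(lookup(l, "bass", 10, true, false)); // no <b> markup
        System.out.println(lookup(l, "bass", 10, true, true));  // highlighted
    }
}
```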






[jira] [Commented] (SOLR-6933) bin/solr script should just have a single create action that creates a core or collection depending on the mode solr is running in

2015-01-08 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270479#comment-14270479
 ] 

Grant Ingersoll commented on SOLR-6933:
---

bq. and instead adding a new create that delegates to them as needed.

+1

bq.  First, what does the usage (bin/solr create -help) say? Options like 
-shards, -maxShardsPerNode, -replicationFactor don't apply when creating a 
core. 

If we are just delegating, then can we delegate to the underlying help too?

bq. I think the script should error out and tell the user that option is only 
when running in cloud mode.

 +1

bq. So if we think there's real benefit to having a create alias 

+1

 bin/solr script should just have a single create action that creates a core 
 or collection depending on the mode solr is running in
 --

 Key: SOLR-6933
 URL: https://issues.apache.org/jira/browse/SOLR-6933
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter

 instead of create_core and create_collection, just have create that creates a 
 core or a collection based on which mode Solr is running in.






[jira] [Commented] (SOLR-6908) SimplePostTool's help message is incorrect -Durl parameter

2015-01-08 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270507#comment-14270507
 ] 

Alexandre Rafalovitch commented on SOLR-6908:
-

Just realized that the very first help example has the same 
problem:
{quote}
java -jar post.jar *.xml
{quote}

Can we just update it with this JIRA or do we need a new one?

 SimplePostTool's help message is incorrect -Durl parameter
 --

 Key: SOLR-6908
 URL: https://issues.apache.org/jira/browse/SOLR-6908
 Project: Solr
  Issue Type: Bug
  Components: documentation
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk


 {quote}
 java -jar post.jar -h
 ...
 java -Durl=http://localhost:8983/solr/update/extract -Dparams=literal.id=a 
 -Dtype=application/pdf -jar post.jar a.pdf
 ...
 {quote}
 The example is the only one for -Durl and is not correct as it is missing the 
 collection name. Also, even though this is an example, *a.pdf* does not 
 exist, but we do have *solr-word.pdf* now.
 So, this should probably say:
 {quote}
 java -Durl=http://localhost:8983/solr/techproducts/update/extract 
 -Dparams=literal.id=pdf1 -Dtype=application/pdf -jar post.jar solr-word.pdf
 {quote}
 Also, it is worth mentioning (if true) that specifying *-Durl* overrides 
 *-Dc*.






[jira] [Updated] (SOLR-6909) Allow pluggable atomic update merging logic

2015-01-08 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6909:
---
Attachment: SOLR-6909.patch

Updated patch to add 'doSet' and 'doAdd' methods, which allow clients to 
override the specific implementation of any atomic update command.

 Allow pluggable atomic update merging logic
 ---

 Key: SOLR-6909
 URL: https://issues.apache.org/jira/browse/SOLR-6909
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Davids
 Fix For: 5.0, Trunk

 Attachments: SOLR-6909.patch, SOLR-6909.patch


 Clients should be able to introduce their own specific merging logic by 
 implementing a new class that will be used by the DistributedUpdateProcessor. 
 This is particularly useful if you require a custom hook to interrogate the 
 incoming document with the document that is already resident in the index as 
 there isn't the ability to perform that operation nor can you currently 
 extend the DistributedUpdateProcessor to provide the modifications.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1913 - Failure!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1913/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([31D4E83971D9B4A5:B03266210686D499]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:222)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6936) Bitwise operator

2015-01-08 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270267#comment-14270267
 ] 

Ahmet Arslan commented on SOLR-6936:


Hi [~guipulsar], please ask this kind of questions on solr user mailing list : 
https://lucene.apache.org/solr/discussion.html
You will get a lot of answers there.

 Bitwise operator 
 -

 Key: SOLR-6936
 URL: https://issues.apache.org/jira/browse/SOLR-6936
 Project: Solr
  Issue Type: Improvement
Reporter: jean claude
  Labels: cql, features, newbie, performance

 Hi,
 I am just new to Solr; I come from SQL and would like to switch to NoSQL/Solr 
 (perhaps with Riak 2).
 My whole app is designed to search on numbers like this: 
 where bob & 6 and foo & 1 and manymanyfields & 134, etc.
 I insist on the fact that I have many fields. The logic is that all input 
 values are numbers (1, 2, 4, 8, 16, etc.) and all inputs are capped at 50 choices 
 max per input; it's an app for finding common interests between people. Coming 
 from SQL, the bitwise operator was perfect because it avoids putting a ton of 
 keyword strings in a big post... many benchmarks were made in the past and we 
 designed the application like this for performance reasons.
 That's all; that's why it's very powerful to search with a bitwise operator 
 rather than a list of strings.
 I just read all the docs and didn't find any reference, so I guess this option 
 doesn't exist in Solr?! 
 If so, may I ask why, and is there any chance of seeing this feature arrive 
 soon?
 Excuse my poor English, I'm too French... :)
 Any help or advice is welcome,
 gui
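For reference, the bitwise matching described here is easy to do client-side; Solr has no built-in bitwise query operator out of the box. A small sketch (field semantics are made up for illustration):

```java
// Client-side illustration of the bitwise-flag matching described above.
// Each bit encodes one choice (1, 2, 4, 8, ...); a long holds up to 64 flags.
public class BitwiseMatchDemo {
    /** True if every bit in queryMask is set in userFlags. */
    public static boolean matchesAll(long userFlags, long queryMask) {
        return (userFlags & queryMask) == queryMask;
    }

    /** True if at least one bit in queryMask is set in userFlags. */
    public static boolean matchesAny(long userFlags, long queryMask) {
        return (userFlags & queryMask) != 0;
    }

    public static void main(String[] args) {
        long bob = 1 | 4;                          // choices 1 and 4 selected
        System.out.println(matchesAny(bob, 6));    // mask 6 = 2|4: shares bit 4 -> true
        System.out.println(matchesAll(bob, 6));    // bit 2 missing -> false
    }
}
```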






[jira] [Commented] (SOLR-6909) Allow pluggable atomic update merging logic

2015-01-08 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270431#comment-14270431
 ] 

Steve Davids commented on SOLR-6909:


The javascript approach is interesting but seems overly complex when you always 
want the merging logic to work the same way. Additionally, I have a use case 
where I download a document in an update processor, extract fields from the 
downloaded content, and index that document. The interesting thing here is that 
if I can't download the document I set the doc's status to error, but this is 
only valid if a good document already exists in the index; so if an error doc is 
trying to be merged, an exception is thrown and it won't clobber the good 
document. As you can see, the approach taken in this ticket gives you that added 
flexibility with a customizable AtomicUpdateDocumentMerger.

Another added benefit is that it cleans up the DistributedUpdateProcessor a 
little. One modification I might want to make to the attached patch is to add a 
`doSet` and `doAdd`, which would allow overrides of each specific merge type.
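The doSet/doAdd idea can be sketched as follows (a toy stand-in; the actual AtomicUpdateDocumentMerger in the patch has different types and signatures):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the pluggable-merger idea: a base class with per-operation
// hooks (doSet/doAdd) that subclasses override. Names and types are stand-ins,
// not the actual patch's AtomicUpdateDocumentMerger API.
public class AtomicMergerDemo {
    public static class DocumentMerger {
        /** "set" replaces the stored value. */
        public void doSet(Map<String, Object> doc, String field, Object value) {
            doc.put(field, value);
        }
        /** "add" appends to the stored value. */
        public void doAdd(Map<String, Object> doc, String field, Object value) {
            Object old = doc.get(field);
            doc.put(field, old == null ? value : old + "," + value);
        }
    }

    // Custom merger mirroring the use case above: refuse to clobber a good
    // document with an error marker.
    public static class GuardedMerger extends DocumentMerger {
        @Override
        public void doSet(Map<String, Object> doc, String field, Object value) {
            if ("status".equals(field) && "error".equals(value) && doc.containsKey("content")) {
                throw new IllegalStateException("refusing to overwrite a good document");
            }
            super.doSet(doc, field, value);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("content", "downloaded text");
        DocumentMerger merger = new GuardedMerger();
        merger.doSet(doc, "title", "hello");          // normal set succeeds
        try {
            merger.doSet(doc, "status", "error");     // guarded set is rejected
        } catch (IllegalStateException e) {
            System.out.println("merge rejected: " + e.getMessage());
        }
    }
}
```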

 Allow pluggable atomic update merging logic
 ---

 Key: SOLR-6909
 URL: https://issues.apache.org/jira/browse/SOLR-6909
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Davids
 Fix For: 5.0, Trunk

 Attachments: SOLR-6909.patch


 Clients should be able to introduce their own specific merging logic by 
 implementing a new class that will be used by the DistributedUpdateProcessor. 
 This is particularly useful if you require a custom hook to interrogate the 
 incoming document with the document that is already resident in the index as 
 there isn't the ability to perform that operation nor can you currently 
 extend the DistributedUpdateProcessor to provide the modifications.






[jira] [Resolved] (SOLR-6839) Direct routing with CloudSolrServer will ignore the Overwrite document option.

2015-01-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6839.
---
Resolution: Fixed

 Direct routing with CloudSolrServer will ignore the Overwrite document option.
 --

 Key: SOLR-6839
 URL: https://issues.apache.org/jira/browse/SOLR-6839
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6839.patch









[jira] [Commented] (SOLR-6839) Direct routing with CloudSolrServer will ignore the Overwrite document option.

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270237#comment-14270237
 ] 

ASF subversion and git services commented on SOLR-6839:
---

Commit 1650423 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650423 ]

SOLR-6839: Direct routing with CloudSolrServer will ignore the Overwrite 
document option.

 Direct routing with CloudSolrServer will ignore the Overwrite document option.
 --

 Key: SOLR-6839
 URL: https://issues.apache.org/jira/browse/SOLR-6839
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6839.patch









[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 11546 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11546/
Java: 32bit/jdk1.8.0_40-ea-b20 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 7976 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:519: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:61: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:187: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:510: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:463: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:376: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/build.xml:65: 
impossible to resolve dependencies:
resolve failed - see output for details

Total time: 32 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_40-ea-b20 -server 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2450 - Still Failing

2015-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2450/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:63613/sks/yn/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:63613/sks/yn/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([20440D012D3027F3:A1A283195A6F47CF]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b44) - Build # 11548 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11548/
Java: 64bit/jdk1.9.0-ea-b44 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch

Error Message:
commitWithin did not work on node: http://127.0.0.1:42036/collection1 
expected:68 but was:67

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:42036/collection1 expected:68 but was:67
at 
__randomizedtesting.SeedInfo.seed([83EFE31591829766:2096D0DE6DDF75A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.doTest(BasicDistributedZkTest.java:345)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Created] (SOLR-6938) Implicit configuration of Update handlers does not match previous explicit one

2015-01-08 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-6938:
---

 Summary: Implicit configuration of Update handlers does not match 
previous explicit one
 Key: SOLR-6938
 URL: https://issues.apache.org/jira/browse/SOLR-6938
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch


There seems to be confusion/inconsistency between the *contentType* parameter 
prefix (stream vs. update) as defined in the commented-out update handler and 
the new implicit implementation.

Specifically, in (current 5 build's) techproduct's solrconfig.xml, it says:
{quote}
  <!-- The following are implicitly added
  <requestHandler name="/update/json" class="solr.UpdateRequestHandler">
    <lst name="defaults">
      <str name="stream.contentType">application/json</str>
    </lst>
  </requestHandler>
  <requestHandler name="/update/csv" class="solr.UpdateRequestHandler">
    <lst name="defaults">
      <str name="stream.contentType">application/csv</str>
    </lst>
  </requestHandler>
  -->
{quote}

The documentation also says to use *stream.contentType* at: 
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers

However, the http://localhost:8983/solr/techproducts/config says instead:
{quote}
  "/update/json":{
    "name":"/update/json",
    "class":"org.apache.solr.handler.UpdateRequestHandler",
    "defaults":{"update.contentType":"application/json"}},
  "/update/csv":{
    "name":"/update/csv",
    "class":"org.apache.solr.handler.UpdateRequestHandler",
    "defaults":{"update.contentType":"application/csv"}},
{quote}

Seems to be pure inconsistency, since Reference Guide does not mention 
*update.contentType*.

Yet earlier in the same *solrconfig.xml* it says:
{quote}
To override the request content type and force a specific
Content-type, use the request parameter:
  ?update.contentType=text/csv
{quote}

Are these different or the same? They should definitely be consistent between 
code and comment, but it seems there is a bit of extra confusion on top.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6938) Implicit configuration of Update handlers does not match previous explicit one

2015-01-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270651#comment-14270651
 ] 

Noble Paul commented on SOLR-6938:
--

is stream.contentType used anywhere in the code? was it some legacy stuff?

 Implicit configuration of Update handlers does not match previous explicit one
 --

 Key: SOLR-6938
 URL: https://issues.apache.org/jira/browse/SOLR-6938
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch

 There seems to be confusion/inconsistency between the *contentType* parameter 
 prefix (stream vs. update) as defined in the commented-out update handler and 
 the new implicit implementation.
 Specifically, in (current 5 build's) techproduct's solrconfig.xml, it says:
 {quote}
   <!-- The following are implicitly added
   <requestHandler name="/update/json" class="solr.UpdateRequestHandler">
     <lst name="defaults">
       <str name="stream.contentType">application/json</str>
     </lst>
   </requestHandler>
   <requestHandler name="/update/csv" class="solr.UpdateRequestHandler">
     <lst name="defaults">
       <str name="stream.contentType">application/csv</str>
     </lst>
   </requestHandler>
   -->
 {quote}
 The documentation also says to use *stream.contentType* at: 
 https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers
 However, the http://localhost:8983/solr/techproducts/config says instead:
 {quote}
   "/update/json":{
     "name":"/update/json",
     "class":"org.apache.solr.handler.UpdateRequestHandler",
     "defaults":{"update.contentType":"application/json"}},
   "/update/csv":{
     "name":"/update/csv",
     "class":"org.apache.solr.handler.UpdateRequestHandler",
     "defaults":{"update.contentType":"application/csv"}},
 {quote}
 Seems to be pure inconsistency, since Reference Guide does not mention 
 *update.contentType*.
 Yet earlier in the same *solrconfig.xml* it says:
 {quote}
 To override the request content type and force a specific
 Content-type, use the request parameter:
   ?update.contentType=text/csv
 {quote}
 Are these different or the same? They should definitely be consistent between 
 code and comment, but it seems there is a bit of extra confusion on top.






[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270666#comment-14270666
 ] 

ASF subversion and git services commented on SOLR-6787:
---

Commit 1650449 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650449 ]

SOLR-6787 A simple class to mask a handler defined in same path

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created . numShards for this 
 collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 #  GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob 
 # GET on the end point with jar name would give  details of various versions 
 of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with a wt=filestream to get 
 the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > 
 mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest 
 version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > 
 mycomponent.jar
 {code}
 Please note that the jars are never deleted; a new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.






[jira] [Updated] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-08 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6496:
---
Attachment: SOLR-6496.patch

The correct attachment.

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
 SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, you should no longer perform retries and allow the exception to 
 bubble up and allow the request to either error out or return partial results 
 per the shards.tolerant request parameter.
 For a little more context on how this can be extremely problematic, please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)
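The behavior being requested here can be sketched outside of SolrJ. The toy Python version below is an illustration only (the function and parameter names are invented for this example, not LBHttpSolrServer's actual API): never start another server retry once the elapsed time has passed the timeAllowed budget, and let the last failure bubble up.

```python
# Toy sketch (not SolrJ code) of the requested fix: stop retrying across
# servers once the elapsed time exceeds the request's timeAllowed budget.
# All names here are illustrative, not the LBHttpSolrServer API.
import time

def request_with_time_budget(servers, send, time_allowed_ms):
    """Try each server in turn, but never start a retry past the budget."""
    start = time.monotonic()
    last_error = None
    for server in servers:
        elapsed_ms = (time.monotonic() - start) * 1000
        if elapsed_ms >= time_allowed_ms:
            break  # budget exhausted: let the failure bubble up
        try:
            return send(server)
        except IOError as e:
            last_error = e  # remember it and fall through to the next server
    raise last_error or TimeoutError("timeAllowed exceeded before any attempt")
```

With shards.tolerant, the caller would catch this and return partial results instead of erroring out.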






[jira] [Commented] (SOLR-6937) In schemaless mode, field names with spaces should be converted

2015-01-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270662#comment-14270662
 ] 

Hoss Man commented on SOLR-6937:


bq. My vote would be to replace spaces w/ underscores.

Could probably be solved with a ~6 line subclass of FieldMutatingUpdateProcessor
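That subclass is not written here; as a rough illustration of the mapping it would perform (the exact rule, collapsing any whitespace run to one underscore, is an assumption for this sketch, not an agreed design), in Python:

```python
# Illustrative sketch of the space-to-underscore field-name mapping
# discussed above. In Solr itself this would be a small Java subclass of
# FieldMutatingUpdateProcessor; the rule assumed here (collapse any
# whitespace run to a single underscore) is an example, not the final design.

def sanitize_field_name(name: str) -> str:
    """Replace whitespace runs in a field name with single underscores."""
    return "_".join(name.split())

def sanitize_doc(doc: dict) -> dict:
    """Return a copy of the document with sanitized field names."""
    return {sanitize_field_name(k): v for k, v in doc.items()}

# Field names taken from the Citibike header quoted in the issue:
doc = {"start station id": 72, "end station name": "W 52 St & 11 Ave"}
print(sanitize_doc(doc))
# {'start_station_id': 72, 'end_station_name': 'W 52 St & 11 Ave'}
```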

 In schemaless mode, field names with spaces should be converted
 ---

 Key: SOLR-6937
 URL: https://issues.apache.org/jira/browse/SOLR-6937
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Grant Ingersoll
 Fix For: 5.0


 Assuming spaces in field names are still bad, we should automatically convert 
 them to not have spaces.  For instance, I indexed Citibike public data set 
 which has: 
 {quote}
 tripduration,starttime,stoptime,start station id,start station 
 name,start station latitude,start station longitude,end station 
 id,end station name,end station latitude,end station 
 longitude,bikeid,usertype,birth year,gender{quote}
 My vote would be to replace spaces w/ underscores.






[jira] [Updated] (SOLR-6764) Can't index exampledocs/*.xml into collection based on the data_driven_schema_configs configset

2015-01-08 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6764:
-
Attachment: SOLR-6764.patch

I tracked this issue down to some components, such as StopFilterFactory, not 
being informed of the SolrResourceLoader. The attached patch extends the 
reloadFields function to also inform objects on fieldTypes, code that was 
previously only used when adding fieldTypes via the REST API. This definitely 
solves the problem and doesn't seem to introduce any regression, but it needs a 
review to make sure it isn't introducing side-effects.

 Can't index exampledocs/*.xml into collection based on the 
 data_driven_schema_configs configset
 ---

 Key: SOLR-6764
 URL: https://issues.apache.org/jira/browse/SOLR-6764
 Project: Solr
  Issue Type: Bug
Reporter: Timothy Potter
Assignee: Timothy Potter
 Attachments: SOLR-6764.patch


 This is exactly what we don't want ;-) Fire up a collection that uses the 
 data_driven_schema_configs (such as by doing: bin/solr -e cloud -noprompt) 
 and then try to index our example docs using:
 $ java -Durl=http://localhost:8983/solr/gettingstarted/update -jar post.jar 
 *.xml
 Here goes the spew ...
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/gettingstarted/update 
 using content-type application/xml..
 POSTing file gb18030-example.xml
 POSTing file hd.xml
 SimplePostTool: WARNING: Solr returned an error #500 (Server Error) for url: 
 http://localhost:8983/solr/gettingstarted/update
 SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int 
 name="QTime">19</int></lst><lst name="error"><str name="msg">Server Error
 request: 
 http://192.168.1.2:8983/solr/gettingstarted_shard2_replica2/update?update.chain=add-unknown-fields-to-the-schema&amp;update.distrib=TOLEADER&amp;distrib.from=http%3A%2F%2F192.168.1.2%3A8983%2Fsolr%2Fgettingstarted_shard1_replica2%2F&amp;wt=javabin&amp;version=2</str><str 
 name="trace">org.apache.solr.common.SolrException: Server Error
 request: 
 http://192.168.1.2:8983/solr/gettingstarted_shard2_replica2/update?update.chain=add-unknown-fields-to-the-schema&amp;update.distrib=TOLEADER&amp;distrib.from=http%3A%2F%2F192.168.1.2%3A8983%2Fsolr%2Fgettingstarted_shard1_replica2%2F&amp;wt=javabin&amp;version=2
   at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:241)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 </str><int name="code">500</int></lst>
 </response>
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 500 for URL: 
 http://localhost:8983/solr/gettingstarted/update
 POSTing file ipod_other.xml
 SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url: 
 http://localhost:8983/solr/gettingstarted/update
 SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">630</int></lst><lst name="error"><str name="msg">ERROR: 
 [doc=IW-02] Error adding field 'price'='11.50' msg=For input string: 
 "11.50"</str><int name="code">400</int></lst>
 </response>
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/gettingstarted/update
 POSTing file ipod_video.xml
 SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url: 
 http://localhost:8983/solr/gettingstarted/update
 SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">5</int></lst><lst name="error"><str name="msg">ERROR: 
 [doc=MA147LL/A] Error adding field 'weight'='5.5' msg=For input string: 
 "5.5"</str><int name="code">400</int></lst>
 </response>
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/gettingstarted/update
 POSTing file manufacturers.xml
 SimplePostTool: WARNING: Solr returned an error #500 (Server Error) for url: 
 http://localhost:8983/solr/gettingstarted/update
 SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int 
 name="QTime">2</int></lst><lst name="error"><str name="msg">Exception writing 
 document id "adata" to the index; possible analysis error.</str><str 
 name="trace">org.apache.solr.common.SolrException: Exception writing document 
 id "adata" to the index; possible analysis error.

[jira] [Commented] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2015-01-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270652#comment-14270652
 ] 

Hoss Man commented on SOLR-6016:


bq. This sounds like an awesome and doable idea. Has JIRA been created for 
this? If not, we should.

SOLR-6939

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
Assignee: Erik Hatcher
 Attachments: SOLR-6016.patch, SOLR-6016.patch, SOLR-6016.patch, 
 SOLR-6016.patch, solr.log


 Steps to reproduce:
 # cd example; java -Dsolr.solr.home=example-schemaless/solr -jar start.jar
 # cd exampledocs; java -jar post.jar *.xml
 Output from post.jar
 {code}
 Posting files to base url http://localhost:8983/solr/update using 
 content-type application/xml..
 POSTing file gb18030-example.xml
 POSTing file hd.xml
 POSTing file ipod_other.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file ipod_video.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file manufacturers.xml
 POSTing file mem.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file money.xml
 POSTing file monitor2.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file monitor.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file mp500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file sd500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file solr.xml
 POSTing file utf8-example.xml
 POSTing file vidcard.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 14 files indexed.
 COMMITting Solr index changes to http://localhost:8983/solr/update..
 Time spent: 0:00:00.401
 {code}
 Exceptions in Solr (I am pasting just one of them):
 {code}
 5105 [qtp697879466-14] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: ERROR: [doc=EN7800GTX/2DHTV/256M] Error 
 adding field 'price'='479.95' msg=For input string: "479.95"
   at 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
   at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 ..
 Caused by: java.lang.NumberFormatException: For input string: "479.95"
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
   at java.lang.Long.parseLong(Long.java:483)
   at org.apache.solr.schema.TrieField.createField(TrieField.java:609)
   at org.apache.solr.schema.TrieField.createFields(TrieField.java:660)
 {code}
 The full solr.log is attached.
 I understand why these errors occur but since we ship example data with Solr 
 to demonstrate our core features, I expect that 

[jira] [Created] (SOLR-6939) UpdateProcessor to buffer sample documents and then batch create necessary fields

2015-01-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6939:
--

 Summary: UpdateProcessor to buffer sample documents and then 
batch create necessary fields
 Key: SOLR-6939
 URL: https://issues.apache.org/jira/browse/SOLR-6939
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man


spun off of an idea in SOLR-6016...

{quote}
bq. We could add a SchemaGeneratorHandler which would generate the best 
schema.

You wouldn't need/want a handler for this – you'd just need an 
UpdateProcessorFactory to use in place of RunUpdateProcessorFactory that would 
look at the datatypes of the fields in each document w/o doing any indexing and 
pick the least common denominator.

So then you'd have a chain with all of your normal update processors including 
the TypeMapping processors configured with the precedence orders and locales 
and format strings you want – and at the end you'd have your 
BestFitSchemaGeneratorUpdateProcessorFactory that would look at all those docs, 
study their values, and throw them away – until a commit comes along, at which 
point it does all the under the hood schema field addition calls.

So to learn, you'd send docs using whatever handler/format you want (json, xml, 
extraction, etc...) with an update.chain=my.datatype.learning.processor.chain 
request param ... and once you've sent a bunch and given it a lot of variety 
to see, then you send a commit so it creates the schema and then you re-index 
your docs for real w/o that special chain.
{quote}

...not mentioned originally: this factory could also default to assuming fields 
should be single valued, unless/until it sees multiple values in a doc that it 
samples.
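The sampling idea above can be sketched in a few lines. This is a toy Python illustration, not Solr code; the type names and the widening order are assumptions made for the example:

```python
# Illustrative sketch (not Solr code) of the "least common denominator"
# idea: buffer sample documents, then pick the widest type each field
# needs, defaulting to single-valued until a multi-valued field is seen.
# The type names and widening order are assumptions for this example.

WIDENING = ["long", "double", "string"]  # narrowest to widest

def value_type(v):
    if isinstance(v, bool):
        return "string"
    if isinstance(v, int):
        return "long"
    if isinstance(v, float):
        return "double"
    return "string"

def infer_schema(sample_docs):
    """Return {field: (type, multivalued)} from a list of sample docs."""
    schema = {}
    for doc in sample_docs:
        for field, value in doc.items():
            values = value if isinstance(value, list) else [value]
            multi = isinstance(value, list) and len(value) > 1
            for v in values:
                t = value_type(v)
                prev_t, prev_multi = schema.get(field, (WIDENING[0], False))
                # widen: keep whichever type is later in the widening order
                widest = max(prev_t, t, key=WIDENING.index)
                schema[field] = (widest, prev_multi or multi)
    return schema

docs = [{"price": 479, "name": "gpu"}, {"price": 479.95, "tags": ["a", "b"]}]
print(infer_schema(docs))
# {'price': ('double', False), 'name': ('string', False), 'tags': ('string', True)}
```

On commit, the real processor would translate such a map into the under-the-hood schema field addition calls and discard the buffered docs.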






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_25) - Build # 11549 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11549/
Java: 64bit/jdk1.8.0_25 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Error from server at https://127.0.0.1:44277: CLUSTERSTATUS the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:44277: CLUSTERSTATUS the collection time 
out:180s
at 
__randomizedtesting.SeedInfo.seed([348F48E0FC0297BF:B569C6F88B5DF783]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testLongAndShortRunningParallelApiCalls(MultiThreadedOCPTest.java:249)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:75)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_25) - Build # 4299 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4299/
Java: 32bit/jdk1.8.0_25 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64244/repfacttest_c8n_1x3_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64244/repfacttest_c8n_1x3_shard1_replica2
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at 

Any way to tell number of documents commited but not visible?

2015-01-08 Thread Alexandre Rafalovitch
Hi,

I am trying to check if there are any documents in Solr but they are
not visible yet.

If there were no commit at all, I know I can see it in the stats for
UpdateHandler under @docsPending.

But if there was a hard commit with openSearcher=false (as per example
configuration), then that number resets to 0 on commit.

Is there a different property somewhere showing what's not yet visible
even if it was committed at the Lucene level?

Regards,
Alex.

Sign up for my Solr resources newsletter at http://www.solr-start.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270657#comment-14270657
 ] 

ASF subversion and git services commented on SOLR-6787:
---

Commit 1650448 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1650448 ]

SOLR-6787 A simple class to mask a handler defined in same path

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection needs to be 
 automatically supplied by the system so that there are no errors.
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created. numShards for this 
 collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob 
 # GET on the end point with jar name would give details of various versions 
 of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with a wt=filestream to get 
 the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > 
 mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest 
 version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > 
 mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-08 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270712#comment-14270712
 ] 

Anshum Gupta commented on SOLR-6496:


Working on fixing a failing test.

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
 SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, you should no longer perform retries and allow the exception to 
 bubble up and allow the request to either error out or return partial results 
 per the shards.tolerant request parameter.
 For a little more context on how this can be extremely problematic please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)
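The fix described above amounts to tracking a deadline across retries. A minimal sketch of that pattern follows; the class and method names are hypothetical illustrations, not the actual LBHttpSolrServer code:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of honoring a timeAllowed budget across server retries;
// names are illustrative, not the actual LBHttpSolrServer API.
public final class RetryBudget {
    private final long deadlineNanos;

    public RetryBudget(long timeAllowedMs) {
        // Compute an absolute deadline once, up front.
        this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeAllowedMs);
    }

    // true while there is time left to try the next server in the list
    public boolean mayRetry() {
        return System.nanoTime() < deadlineNanos;
    }

    public static void main(String[] args) {
        RetryBudget generous = new RetryBudget(60_000);
        RetryBudget exhausted = new RetryBudget(0);
        System.out.println(generous.mayRetry() + " / " + exhausted.mayRetry());
    }
}
```

The retry loop would consult `mayRetry()` before each attempt and let the exception bubble up once the budget is spent.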






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2451 - Still Failing

2015-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2451/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: org.apache.http.NoHttpResponseException: The target server failed to respond
    at __randomizedtesting.SeedInfo.seed([9F021B8F078C7D19:1EE4959770D31D25]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
    at org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
    at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
    at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1914 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1914/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

Error Message:
startOffset 442 expected:<2698> but was:<2699>

Stack Trace:
java.lang.AssertionError: startOffset 442 expected:<2698> but was:<2699>
    at __randomizedtesting.SeedInfo.seed([207270E5760248DA:A8FB705BD5061FEF]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:472)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:182)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:295)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:299)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:812)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:611)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:509)
    at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:433)
    at org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:189)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at 

[jira] [Updated] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-08 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6496:
---
Attachment: (was: SOLR-6496.patch)

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, you should no longer perform retries and allow the exception to 
 bubble up and allow the request to either error out or return partial results 
 per the shards.tolerant request parameter.
 For a little more context on how this can be extremely problematic please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)






[jira] [Updated] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-08 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6496:
---
Attachment: SOLR-6496.patch

Patch updated to be in sync with the LBHttpSolrServer -> LBHttpSolrClient changes. 
Will just run a few more rounds of tests and commit.
Will also create another issue for adding tests.

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, you should no longer perform retries and allow the exception to 
 bubble up and allow the request to either error out or return partial results 
 per the shards.tolerant request parameter.
 For a little more context on how this can be extremely problematic please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)






[jira] [Comment Edited] (SOLR-6909) Allow pluggable atomic update merging logic

2015-01-08 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270431#comment-14270431
 ] 

Steve Davids edited comment on SOLR-6909 at 1/9/15 2:28 AM:


The javascript approach is interesting but would seem overly complex when you 
always want the merging logic to work a specific way all the time. 
Additionally, I have a use case where I download a document in an update 
processor, extract fields from the downloaded content, and index that document. The 
interesting thing here is that if I can't download the document I set the doc's 
status to error, though this is only valid if a good document *doesn't* already 
exist in the index, so if an error doc is trying to be merged on top of an 
existing document an exception is thrown and won't clobber the good document. 
As you can see, the approach taken in this ticket gives you the added 
flexibility of a customizable AtomicUpdateDocumentMerger.

Another added benefit is that it cleans up the DistributedUpdateProcessor a 
little. One modification I might want to make to the attached patch is to 
add a `doSet` and `doAdd`, which would allow overrides of each specific 
merge type.
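The `doSet`/`doAdd` split mentioned above could look roughly like this. This is a hypothetical sketch on plain maps; the real AtomicUpdateDocumentMerger operates on SolrInputDocuments and its API may differ:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a pluggable atomic-update merger with overridable
// per-operation hooks; not the actual AtomicUpdateDocumentMerger API.
public class SimpleDocumentMerger {

    // Merge one atomic-update operation into a copy of the existing document.
    public Map<String, Object> merge(Map<String, Object> existing,
                                     String op, String field, Object value) {
        Map<String, Object> merged = new HashMap<>(existing);
        switch (op) {
            case "set": doSet(merged, field, value); break;
            case "add": doAdd(merged, field, value); break;
            default: throw new IllegalArgumentException("unsupported op: " + op);
        }
        return merged;
    }

    // Subclasses override this to veto or transform "set" operations,
    // e.g. refusing to clobber a good document with an error document.
    protected void doSet(Map<String, Object> doc, String field, Object value) {
        doc.put(field, value);
    }

    // Subclasses override this to customize "add" (multi-valued append).
    // Assumes the field is absent or already holds a List.
    @SuppressWarnings("unchecked")
    protected void doAdd(Map<String, Object> doc, String field, Object value) {
        List<Object> values = (List<Object>) doc.computeIfAbsent(field, k -> new ArrayList<>());
        values.add(value);
    }

    public static void main(String[] args) {
        SimpleDocumentMerger merger = new SimpleDocumentMerger();
        Map<String, Object> existing = new HashMap<>();
        existing.put("status", "ok");
        System.out.println(merger.merge(existing, "set", "status", "error").get("status")); // prints "error"
    }
}
```

A custom merger would subclass this and override only the hook it cares about.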


was (Author: sdavids):
The javascript approach is interesting but would seem overly complex when you 
always want the merging logic to work a specific way all the time. 
Additionally, I have a use case where I download a document in an update 
processor, extract fields from the downloaded content, and index that document. The 
interesting thing here is that if I can't download the document I set the doc's 
status to error, though this is only valid if a good document already exists in 
the index, so if an error doc is trying to be merged an exception is thrown and 
won't clobber the good document. As you can see, the approach taken in this 
ticket gives you the added flexibility of a customizable 
AtomicUpdateDocumentMerger.

Another added benefit is that it cleans up the DistributedUpdateProcessor a 
little. One modification I might want to make to the attached patch is to 
add a `doSet` and `doAdd`, which would allow overrides of each specific 
merge type.

 Allow pluggable atomic update merging logic
 ---

 Key: SOLR-6909
 URL: https://issues.apache.org/jira/browse/SOLR-6909
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Davids
 Fix For: 5.0, Trunk

 Attachments: SOLR-6909.patch


 Clients should be able to introduce their own specific merging logic by 
 implementing a new class that will be used by the DistributedUpdateProcessor. 
 This is particularly useful if you require a custom hook to interrogate the 
 incoming document against the document that is already resident in the index, 
 as there currently isn't the ability to perform that operation, nor can you 
 extend the DistributedUpdateProcessor to provide the modifications.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b44) - Build # 11545 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11545/
Java: 32bit/jdk1.9.0-ea-b44 -client -XX:+UseParallelGC

30 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestSearcherManager

Error Message:
8 threads leaked from SUITE scope at org.apache.lucene.search.TestSearcherManager:
   1) Thread[id=405, name=TestSearcherManager-1-thread-6, state=TIMED_WAITING, group=TGRP-TestSearcherManager]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=401, name=TestSearcherManager-1-thread-2, state=TIMED_WAITING, group=TGRP-TestSearcherManager]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=406, name=TestSearcherManager-1-thread-7, state=TIMED_WAITING, group=TGRP-TestSearcherManager]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   4) Thread[id=400, name=TestSearcherManager-1-thread-1, state=TIMED_WAITING, group=TGRP-TestSearcherManager]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   5) Thread[id=402, name=TestSearcherManager-1-thread-3, state=TIMED_WAITING, group=TGRP-TestSearcherManager]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   6) Thread[id=404, name=TestSearcherManager-1-thread-5, state=TIMED_WAITING, group=TGRP-TestSearcherManager]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at 

[jira] [Commented] (LUCENE-6169) Recent Java 9 commit breaks fsync on directory

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270107#comment-14270107
 ] 

ASF subversion and git services commented on LUCENE-6169:
-

Commit 1650390 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1650390 ]

LUCENE-6169: Disable the fsync on directory assert for Java 9+, because in Java 
9 opening a FileChannel on directory no longer works

 Recent Java 9 commit breaks fsync on directory
 --

 Key: LUCENE-6169
 URL: https://issues.apache.org/jira/browse/LUCENE-6169
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Uwe Schindler
  Labels: Java9

 I open this issue to keep track of the communication with Oracle and OpenJDK 
 about this:
 Basically, what happens: In LUCENE-5588 we added support to FSDirectory to be 
 able to sync on directory metadata changes (meaning the contents of the 
 directory itself). This is very important on Unix systems (maybe also on 
 Windows), because fsyncing a single file does not necessarily write the 
 directory's contents to disk. Lucene uses this for commits. We first do an 
 atomic rename of the segments file (to make the commit public), but we have 
 to be sure that the rename operation is written to disk. Because of that we 
 must fsync the directory.
 To enforce this, you open a directory for read and then call fsync. In Java 
 this can be done by opening a FileChannel on the directory (for read) and calling 
 fc.force() on it.
 Unfortunately the commit 
 http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45 in OpenJDK 9 breaks 
 this. The corresponding issue is 
 https://bugs.openjdk.java.net/browse/JDK-8066915. The JDK now explicitly 
 checks if a file is a directory and disallows opening a FileChannel on it. 
 This breaks our commit safety.
 Because this behaviour is undocumented (not even POSIX has explicit semantics 
 for syncing directories), we know that it worked at least on MacOSX and 
 Linux. The code in IOUtils is currently written in a way that it tries to 
 sync the directory, but swallows any Exception. So this change does not break 
 Lucene, but it breaks our commit safety. During testing we assert that the 
 fsync actually works on Linux and MacOSX; in production code the user will 
 notice nothing.
 We should take action and contact Alan Bateman about his commit and this 
 issue on the mailing list, possibly through Rory O'Donnell.
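The swallow-the-exception behaviour described for IOUtils can be sketched as follows. This is a simplified stand-in, not the actual Lucene IOUtils code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Simplified stand-in for the directory-fsync approach described above;
// not the actual Lucene IOUtils implementation.
public final class DirectoryFsync {

    // Best effort: open a FileChannel on the directory for read and force() it.
    // Returns false instead of throwing, mirroring the "swallow any Exception"
    // behaviour -- which is why the Java 9 change (JDK-8066915) goes unnoticed
    // in production: open() now fails on directories and this just returns false.
    public static boolean trySyncDirectory(Path dir) {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true); // flush directory metadata to disk
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Path tmp = Paths.get(System.getProperty("java.io.tmpdir"));
        // Result depends on OS and JDK version (true on Java 8/Linux, false on Java 9+).
        System.out.println("directory fsync worked: " + trySyncDirectory(tmp));
    }
}
```

Asserting that `trySyncDirectory` returns true in tests (but not in production) is exactly the pattern the issue describes.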






[jira] [Commented] (LUCENE-6169) Recent Java 9 commit breaks fsync on directory

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270109#comment-14270109
 ] 

ASF subversion and git services commented on LUCENE-6169:
-

Commit 1650391 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650391 ]

Merged revision(s) 1650390 from lucene/dev/trunk:
LUCENE-6169: Disable the fsync on directory assert for Java 9+, because in Java 
9 opening a FileChannel on directory no longer works

 Recent Java 9 commit breaks fsync on directory
 --

 Key: LUCENE-6169
 URL: https://issues.apache.org/jira/browse/LUCENE-6169
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Uwe Schindler
  Labels: Java9

 I open this issue to keep track of the communication with Oracle and OpenJDK 
 about this:
 Basically, what happens: In LUCENE-5588 we added support to FSDirectory to be 
 able to sync on directory metadata changes (meaning the contents of the 
 directory itself). This is very important on Unix systems (maybe also on 
 Windows), because fsyncing a single file does not necessarily write the 
 directory's contents to disk. Lucene uses this for commits. We first do an 
 atomic rename of the segments file (to make the commit public), but we have 
 to be sure that the rename operation is written to disk. Because of that we 
 must fsync the directory.
 To enforce this, you open a directory for read and then call fsync. In Java 
 this can be done by opening a FileChannel on the directory (for read) and calling 
 fc.force() on it.
 Unfortunately the commit 
 http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45 in OpenJDK 9 breaks 
 this. The corresponding issue is 
 https://bugs.openjdk.java.net/browse/JDK-8066915. The JDK now explicitly 
 checks if a file is a directory and disallows opening a FileChannel on it. 
 This breaks our commit safety.
 Because this behaviour is undocumented (not even POSIX has explicit semantics 
 for syncing directories), we know that it worked at least on MacOSX and 
 Linux. The code in IOUtils is currently written in a way that it tries to 
 sync the directory, but swallows any Exception. So this change does not break 
 Lucene, but it breaks our commit safety. During testing we assert that the 
 fsync actually works on Linux and MacOSX; in production code the user will 
 notice nothing.
 We should take action and contact Alan Bateman about his commit and this 
 issue on the mailing list, possibly through Rory O'Donnell.






[jira] [Commented] (SOLR-6930) Provide Circuit Breakers For Expensive Solr Queries

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270122#comment-14270122
 ] 

Mark Miller commented on SOLR-6930:
---

Does SOLR-5986 actually use fieldcache sizing estimates to prevent OOMs or 
asking for too many rows back? I thought it was more about timing out bad 
queries... they overlap, but with large differences.

 Provide Circuit Breakers For Expensive Solr Queries
 -

 Key: SOLR-6930
 URL: https://issues.apache.org/jira/browse/SOLR-6930
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Mike Drob

 Ref: 
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
 ES currently allows operators to configure circuit breakers to preemptively 
 fail queries that are estimated too large rather than allowing an OOM 
 Exception to happen. We might be able to do the same thing.
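 As a rough illustration of the preemptive-failure idea referenced above (all
 names here are hypothetical, not Solr or Elasticsearch API): track an
 estimated memory total and reject a request up front when it would push the
 total past a configured limit, instead of letting the JVM hit an
 OutOfMemoryError mid-query.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal circuit-breaker sketch (hypothetical names): callers register a
// memory estimate before doing expensive work; if the running total would
// exceed the limit, the request fails fast instead of risking an OOM.
public class MemoryCircuitBreaker {
    private final long limitBytes;
    private final AtomicLong used = new AtomicLong();

    MemoryCircuitBreaker(long limitBytes) { this.limitBytes = limitBytes; }

    void addEstimate(long bytes) {
        long total = used.addAndGet(bytes);
        if (total > limitBytes) {
            used.addAndGet(-bytes); // roll back the estimate before failing
            throw new IllegalStateException(
                "circuit breaker tripped: " + total + " > " + limitBytes);
        }
    }

    void release(long bytes) { used.addAndGet(-bytes); }

    public static void main(String[] args) {
        MemoryCircuitBreaker cb = new MemoryCircuitBreaker(1000);
        cb.addEstimate(600);       // fits within the limit
        try {
            cb.addEstimate(600);   // 1200 > 1000: rejected preemptively
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
        cb.release(600);           // work finished, give the budget back
    }
}
```

 Operators could tune the limit (or disable the breaker entirely) per
 deployment, which is the trade-off discussed in the comments below.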






[jira] [Commented] (SOLR-6930) Provide Circuit Breakers For Expensive Solr Queries

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270130#comment-14270130
 ] 

Mark Miller commented on SOLR-6930:
---

bq. I'm not a big fan of estimation and pre-emptive termination based on 
estimates.

You would be if you operated a SolrCloud cluster with various clients doing 
different things. You need to be able to lock things down to prevent OOM's 
(which dictate restarts). For someone like you, you can turn off the circuit 
breakers or not enable them, but no doubt they are of great value for anyone 
running SolrCloud as a service (a common use case).

 Provide Circuit Breakers For Expensive Solr Queries
 -

 Key: SOLR-6930
 URL: https://issues.apache.org/jira/browse/SOLR-6930
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Mike Drob

 Ref: 
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
 ES currently allows operators to configure circuit breakers to preemptively 
 fail queries that are estimated too large rather than allowing an OOM 
 Exception to happen. We might be able to do the same thing.






[jira] [Comment Edited] (SOLR-6930) Provide Circuit Breakers For Expensive Solr Queries

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270130#comment-14270130
 ] 

Mark Miller edited comment on SOLR-6930 at 1/8/15 10:16 PM:


bq. I'm not a big fan of estimation and pre-emptive termination based on 
estimates.

You would be if you operated a SolrCloud cluster with various clients doing 
different things. You need to be able to lock things down to prevent OOM's 
(which dictate restarts). For someone like you, you can turn off the circuit 
breakers or not enable them, but no doubt they are of great value for anyone 
running SolrCloud as a service (a common use case).


was (Author: markrmil...@gmail.com):
bq. I'm not a big fan of estimation and pre-emptive termination based on 
estimates.

You would be if you operated a SolrCloud cluster with various clients doing 
different things. You need to be able to lock things down to prevent OOM's 
(which dictate restarts). For someone like you, you can turn off the circuit 
breakers or not enable them, but not doubt they are of great value for anyone 
running SolrCloud as a service (a common use case).

 Provide Circuit Breakers For Expensive Solr Queries
 -

 Key: SOLR-6930
 URL: https://issues.apache.org/jira/browse/SOLR-6930
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Mike Drob

 Ref: 
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
 ES currently allows operators to configure circuit breakers to preemptively 
 fail queries that are estimated too large rather than allowing an OOM 
 Exception to happen. We might be able to do the same thing.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_72) - Build # 11384 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11384/
Java: 32bit/jdk1.7.0_72 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
{   responseHeader:{ status:404, QTime:3},   error:{ 
msg:no such blob or version available: test/1, code:404}}

Stack Trace:
java.lang.AssertionError: {
  responseHeader:{
status:404,
QTime:3},
  error:{
msg:no such blob or version available: test/1,
code:404}}
at 
__randomizedtesting.SeedInfo.seed([A5CA2100C118E8DA:242CAF18B64788E6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:108)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (LUCENE-6169) Recent Java 9 commit breaks fsync on directory

2015-01-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270092#comment-14270092
 ] 

Uwe Schindler commented on LUCENE-6169:
---

As a quick fix so we can still run the tests on Java 9, I will commit a 
workaround and disable the assert on Java 9.

I will keep this issue open to keep track and revert the workaround once Oracle 
fixes the problem in some way.

 Recent Java 9 commit breaks fsync on directory
 --

 Key: LUCENE-6169
 URL: https://issues.apache.org/jira/browse/LUCENE-6169
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Uwe Schindler
  Labels: Java9

 I open this issue to keep track of the communication with Oracle and OpenJDK 
 about this:
 Basically, what happens: In LUCENE-5588 we added support to FSDirectory to be 
 able to sync on directory metadata changes (meaning the contents of the 
 directory itself). This is very important on Unix systems (maybe also on 
 Windows), because fsyncing a single file does not necessarily write the 
 directory's contents to disk. Lucene uses this for commits. We first do an 
 atomic rename of the segments file (to make the commit public), but we have 
 to be sure that the rename operation is written to disk. Because of that we 
 must fsync the directory.
 To enforce this, you open a directory for read and then call fsync. In Java 
 this can be done by opening a FileChannel on the directory (for read) and 
 calling fc.force() on it.
 Unfortunately the commit 
 http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45 in OpenJDK 9 breaks 
 this. The corresponding issue is 
 https://bugs.openjdk.java.net/browse/JDK-8066915. The JDK now explicitly 
 checks if a file is a directory and disallows opening a FileChannel on it. 
 This breaks our commit safety.
 Because this behaviour is undocumented (not even POSIX has explicit semantics 
 for syncing directories), we only know that it worked at least on MacOSX and 
 Linux. The code in IOUtils is currently written in a way that it tries to 
 sync the directory, but swallows any Exception. So this change does not break 
 Lucene, but it breaks our commit safety. During testing we assert that the 
 fsync actually works on Linux and MacOSX; in production code the user will 
 notice nothing.
 We should take action and contact Alan Bateman about his commit and this 
 issue on the mailing list, possibly through Rory O'Donnell.






[jira] [Created] (LUCENE-6169) Recent Java 9 commit breaks fsync on directory

2015-01-08 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-6169:
-

 Summary: Recent Java 9 commit breaks fsync on directory
 Key: LUCENE-6169
 URL: https://issues.apache.org/jira/browse/LUCENE-6169
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Uwe Schindler


I open this issue to keep track of the communication with Oracle and OpenJDK 
about this:

Basically, what happens: In LUCENE-5588 we added support to FSDirectory to be 
able to sync on directory metadata changes (meaning the contents of the 
directory itself). This is very important on Unix systems (maybe also on 
Windows), because fsyncing a single file does not necessarily write the 
directory's contents to disk. Lucene uses this for commits. We first do an 
atomic rename of the segments file (to make the commit public), but we have to 
be sure that the rename operation is written to disk. Because of that we must 
fsync the directory.

To enforce this, you open a directory for read and then call fsync. In Java 
this can be done by opening a FileChannel on the directory (for read) and 
calling fc.force() on it.

Unfortunately the commit 
http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45 in OpenJDK 9 breaks 
this. The corresponding issue is 
https://bugs.openjdk.java.net/browse/JDK-8066915. The JDK now explicitly checks 
if a file is a directory and disallows opening a FileChannel on it. This breaks 
our commit safety.

Because this behaviour is undocumented (not even POSIX has explicit semantics 
for syncing directories), we only know that it worked at least on MacOSX and 
Linux. The code in IOUtils is currently written in a way that it tries to sync 
the directory, but swallows any Exception. So this change does not break 
Lucene, but it breaks our commit safety. During testing we assert that the 
fsync actually works on Linux and MacOSX; in production code the user will 
notice nothing.

We should take action and contact Alan Bateman about his commit and this issue 
on the mailing list, possibly through Rory O'Donnell.
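The directory-fsync procedure described above can be sketched in Java. This is
a best-effort illustration (the class and method names are mine, not Lucene's):
like the IOUtils code mentioned in the description, it swallows the
IOException, so on a JDK with the new directory check it silently degrades
instead of failing the commit.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectoryFsync {
    // Best-effort directory fsync: open the directory for read, force() the
    // channel so directory metadata (e.g. a rename) reaches disk, and swallow
    // IOExceptions because POSIX gives no guarantees here. On the Java 9
    // builds discussed in this issue, open() on a directory throws and this
    // returns false.
    static boolean trySyncDirectory(Path dir) {
        try (FileChannel fc = FileChannel.open(dir, StandardOpenOption.READ)) {
            fc.force(true);
            return true;
        } catch (IOException e) {
            return false; // undocumented behaviour; worked on Linux/MacOSX
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("fsync-demo");
        System.out.println("synced=" + trySyncDirectory(dir));
    }
}
```

Whether this prints synced=true or synced=false depends on the OS and JDK
build, which is exactly the portability problem the issue describes.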






[jira] [Commented] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270120#comment-14270120
 ] 

Mark Miller commented on SOLR-6496:
---

This is an important one to get into 5 - we should get it committed even if we 
have to make a new JIRA to work on testing in this area.

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, you should no longer perform retries and allow the exception to 
 bubble up and allow the request to either error out or return partial results 
 per the shards.tolerant request parameter.
 For a little more context on how this is can be extremely problematic please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)
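 The fix under discussion can be sketched as follows (hypothetical names, not
 the actual LBHttpSolrServer API): check the elapsed time against the
 timeAllowed budget before each retry, and let the failure bubble up once the
 budget is spent instead of walking every remaining server.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Retry-with-deadline sketch: try servers in order, but stop retrying once
// the caller's timeAllowed budget is exhausted.
public class TimeBoundedRetry {
    static <T> T requestWithRetries(List<String> servers,
                                    Function<String, T> send,
                                    long timeAllowedMs) {
        long start = System.nanoTime();
        RuntimeException last = null;
        for (String server : servers) {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs >= timeAllowedMs) {
                // Budget exhausted: surface the failure rather than retrying.
                throw new RuntimeException(
                    "timeAllowed exceeded after " + elapsedMs + "ms", last);
            }
            try {
                return send.apply(server);
            } catch (RuntimeException e) {
                last = e; // try the next server, budget permitting
            }
        }
        throw new RuntimeException("all servers failed", last);
    }

    public static void main(String[] args) {
        String r = requestWithRetries(
            Arrays.asList("bad:8983", "good:8983"),
            s -> {
                if (s.startsWith("bad")) throw new RuntimeException("down");
                return "ok from " + s;
            },
            5000);
        System.out.println(r);
    }
}
```

 With shards.tolerant, the surfaced failure could instead become a partial
 result, as the description notes.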






[jira] [Updated] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0

2015-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Attachment: SOLR-6581.patch

Patch with all updated trunk code and all tests passing.

 Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
 ---

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, renames.diff


 *Background*
 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
 FieldCache is no longer in regular use. Instead all top level caches are 
 accessed through MultiDocValues. 
 There are some major advantages to using the MultiDocValues rather than a top 
 level FieldCache. But there is one disadvantage: the lookup from docId to 
 top-level ordinals is slower using MultiDocValues.
 My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
 to use MultiDocValues, the performance drop is around 100%. For some use 
 cases this performance drop is a blocker.
 *What About Faceting?*
 String faceting also relies on the top level ordinals. Is faceting 
 performance affected also? My testing has shown that faceting performance 
 is affected much less than collapsing. 
 One possible reason for this may be that field collapsing is memory bound and 
 faceting is not. So the additional memory accesses needed for MultiDocValues 
 affect field collapsing much more than faceting.
 *Proposed Solution*
 The proposed solution is to have the default Collapse and Expand algorithm 
 use MultiDocValues, but to provide an option to use a top level FieldCache if 
 the performance of MultiDocValues is a blocker.
 The proposed mechanism for switching to the FieldCache would be a new hint 
 parameter. If the hint parameter is set to FAST_QUERY then the top-level 
 FieldCache would be used for both Collapse and Expand.
 Example syntax:
 {code}
 fq={!collapse field=x hint=FAST_QUERY}
 {code}
  
  






[jira] [Updated] (SOLR-6367) empty tolg on HDFS when hard crash - no docs to replay on recovery

2015-01-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6367:
--
Fix Version/s: Trunk
   5.0

 empty tolg on HDFS when hard crash - no docs to replay on recovery
 --

 Key: SOLR-6367
 URL: https://issues.apache.org/jira/browse/SOLR-6367
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
 Fix For: 5.0, Trunk


 Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
 Jul 2014)...
 {panel}
 Reproduce steps:
 1) Setup Solr to run on HDFS like this:
 {noformat}
 java -Dsolr.directoryFactory=HdfsDirectoryFactory
  -Dsolr.lock.type=hdfs
  -Dsolr.hdfs.home=hdfs://host:port/path
 {noformat}
 For the purpose of this testing, turn off the default auto commit in 
 solrconfig.xml, i.e. comment out autoCommit like this:
 {code}
 <!--
 <autoCommit>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>false</openSearcher>
 </autoCommit>
 -->
 {code}
 2) Add a document without commit:
 {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
 "Content-type:text/xml; charset=utf-8" --data-binary @solr.xml}}
 3) Solr generates empty tlog files (0 file size; the last one ends with 6):
 {noformat}
 [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
 /path/collection1/core_node1/data/tlog
 Found 5 items
 -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.001
 -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.003
 -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.004
 -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
 /path/collection1/core_node1/data/tlog/tlog.005
 -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
 /path/collection1/core_node1/data/tlog/tlog.006
 {noformat}
 4) Simulate a Solr crash by killing the process with the -9 option.
 5) Restart the Solr process. The observation is that uncommitted documents
 are not replayed and the files in the tlog directory are cleaned up. Hence
 uncommitted document(s) are lost.
 Am I missing anything, or is this a bug?
 BTW, additional observations:
 a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
 non-empty tlog file is generated and, after restarting Solr, the uncommitted
 document is replayed as expected.
 b) If Solr doesn't run on HDFS (i.e. runs on the local file system), this
 issue is not observed either.
 {panel}






[jira] [Updated] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6367:
--
Summary: empty tlog on HDFS when hard crash - no docs to replay on recovery 
 (was: empty tolg on HDFS when hard crash - no docs to replay on recovery)

 empty tlog on HDFS when hard crash - no docs to replay on recovery
 --

 Key: SOLR-6367
 URL: https://issues.apache.org/jira/browse/SOLR-6367
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
 Fix For: 5.0, Trunk


 Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
 Jul 2014)...
 {panel}
 Reproduce steps:
 1) Setup Solr to run on HDFS like this:
 {noformat}
 java -Dsolr.directoryFactory=HdfsDirectoryFactory
  -Dsolr.lock.type=hdfs
  -Dsolr.hdfs.home=hdfs://host:port/path
 {noformat}
 For the purpose of this testing, turn off the default auto commit in 
 solrconfig.xml, i.e. comment out autoCommit like this:
 {code}
 <!--
 <autoCommit>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>false</openSearcher>
 </autoCommit>
 -->
 {code}
 2) Add a document without commit:
 {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
 "Content-type:text/xml; charset=utf-8" --data-binary @solr.xml}}
 3) Solr generates empty tlog files (0 file size; the last one ends with 6):
 {noformat}
 [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
 /path/collection1/core_node1/data/tlog
 Found 5 items
 -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.001
 -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.003
 -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
 /path/collection1/core_node1/data/tlog/tlog.004
 -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
 /path/collection1/core_node1/data/tlog/tlog.005
 -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
 /path/collection1/core_node1/data/tlog/tlog.006
 {noformat}
 4) Simulate a Solr crash by killing the process with the -9 option.
 5) Restart the Solr process. The observation is that uncommitted documents
 are not replayed and the files in the tlog directory are cleaned up. Hence
 uncommitted document(s) are lost.
 Am I missing anything, or is this a bug?
 BTW, additional observations:
 a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
 non-empty tlog file is generated and, after restarting Solr, the uncommitted
 document is replayed as expected.
 b) If Solr doesn't run on HDFS (i.e. runs on the local file system), this
 issue is not observed either.
 {panel}






[jira] [Created] (SOLR-6936) Bitwise operator

2015-01-08 Thread jean claude (JIRA)
jean claude created SOLR-6936:
-

 Summary: Bitwise operator 
 Key: SOLR-6936
 URL: https://issues.apache.org/jira/browse/SOLR-6936
 Project: Solr
  Issue Type: Improvement
Reporter: jean claude


Hi,
I am new to Solr; I come from SQL and would like to switch to NoSQL/Solr 
(with Riak 2 perhaps).
My whole app is designed to search on numbers, like this: 
where bob & 6 and foo & 1 and manymanyfields & 134, etc.

I stress that I have many fields. The logic is that every input value is a 
number (1, 2, 4, 8, 16, etc.) and each input is capped at 50 choices max, so 
it is an app for finding common interests between people. Coming from SQL, 
the bitwise operator was perfect because it avoids putting a ton of keyword 
strings into a big POST body. Many benchmarks were done in the past and we 
designed the application like this for performance reasons.

That is the whole point: it is much more powerful to search with a bitwise 
operator rather than a list of strings.
I just read all the docs and didn't find any reference, so I guess this 
option doesn't exist in Solr?

If so, may I ask why, and is there any chance of seeing this feature arrive 
soon?

Excuse my poor English, I'm French. :)
Any help or advice is welcome,
gui
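For illustration, the bitmask scheme this request describes (each choice
encoded as a power of two, "common interest" matched with a bitwise AND) can
be sketched as follows; all names here are hypothetical:

```java
// Bitmask-matching sketch: each user's choices are packed into an int mask,
// and two users share an interest when the AND of their masks is non-zero.
// This is the SQL-style "field & value" filter the request asks Solr for.
public class BitmaskFilter {
    static boolean sharesInterest(int maskA, int maskB) {
        return (maskA & maskB) != 0;
    }

    public static void main(String[] args) {
        int bob = 1 | 4;    // bob picked choices with bits 1 and 4
        int alice = 4 | 16; // alice picked choices with bits 4 and 16
        System.out.println(sharesInterest(bob, alice)); // both have bit 4
    }
}
```

The appeal is that one integer comparison replaces matching against a long
list of keyword strings, which is the performance argument made above.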






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2447 - Still Failing

2015-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2447/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64140/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64140/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([422B378AB4A05C65:C3CDB992C3FF3C59]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1999 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1999/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: 
true)

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob {   responseHeader:{ status:0, 
QTime:0},   response:{ numFound:1, start:0, docs:[{   
  id:test/1, md5:55617a38654319522ea3b9547a95f631, 
blobName:test, version:1, 
timestamp:2015-01-08T08:21:35.603Z, size:5222}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob {
  responseHeader:{
status:0,
QTime:0},
  response:{
numFound:1,
start:0,
docs:[{
id:test/1,
md5:55617a38654319522ea3b9547a95f631,
blobName:test,
version:1,
timestamp:2015-01-08T08:21:35.603Z,
size:5222}]}}
at 
__randomizedtesting.SeedInfo.seed([5737A1E1DF90297E:D6D12FF9A8CF4942]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:146)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:111)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_20) - Build # 11702 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11702/
Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseParallelGC (asserts: 
true)

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob {"responseHeader":{"status":0,"QTime":1},"response":{"numFound":1,"start":0,"docs":[{"id":"test/1","md5":"7e43b99f34490c4c331bdf25a74b6f4d","blobName":"test","version":1,"timestamp":"2015-01-08T08:38:03.291Z","size":4566}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob {
  "responseHeader":{
    "status":0,
    "QTime":1},
  "response":{
    "numFound":1,
    "start":0,
    "docs":[{
        "id":"test/1",
        "md5":"7e43b99f34490c4c331bdf25a74b6f4d",
        "blobName":"test",
        "version":1,
        "timestamp":"2015-01-08T08:38:03.291Z",
        "size":4566}]}}
at 
__randomizedtesting.SeedInfo.seed([1B783CBC186C8EC9:9A9EB2A46F33EEF5]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:146)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:111)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
   

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2446 - Still Failing

2015-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2446/

5 tests failed.
REGRESSION:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob {"responseHeader":{"status":0,"QTime":1},"response":{"numFound":1,"start":0,"docs":[{"id":"test/1","md5":"766fbc1d5ab447ada08889b298fca1a5","blobName":"test","version":1,"timestamp":"2015-01-08T08:34:45.562Z","size":4597}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob {
  "responseHeader":{
    "status":0,
    "QTime":1},
  "response":{
    "numFound":1,
    "start":0,
    "docs":[{
        "id":"test/1",
        "md5":"766fbc1d5ab447ada08889b298fca1a5",
        "blobName":"test",
        "version":1,
        "timestamp":"2015-01-08T08:34:45.562Z",
        "size":4597}]}}
at 
__randomizedtesting.SeedInfo.seed([C925EAFA42EA8E7B:48C364E235B5EE47]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:146)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:111)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Created] (SOLR-6928) solr.cmd stop works only in english

2015-01-08 Thread john.work (JIRA)
john.work created SOLR-6928:
---

 Summary: solr.cmd stop works only in english
 Key: SOLR-6928
 URL: https://issues.apache.org/jira/browse/SOLR-6928
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.3
 Environment: german windows 7
Reporter: john.work
Priority: Minor


in solr.cmd the stop doesn't work while executing 'netstat -nao ^| find /i 
"listening" ^| find ":%SOLR_PORT%"', so "listening" is not found.

e.g. in German cmd.exe, netstat -nao prints the following output:
  Proto  Lokale Adresse   Remoteadresse   Status    PID
  TCP    0.0.0.0:80       0.0.0.0:0       ABHÖREN   4
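
The state column text is localized ("ABHÖREN" instead of "LISTENING"), so matching the literal word fails outside English. One locale-independent alternative is to probe the port directly instead of parsing netstat output; a minimal Python sketch of the idea (illustrative only, not the actual solr.cmd fix):

```python
import socket

def port_is_listening(port, host="127.0.0.1"):
    """Return True if something accepts TCP connections on host:port.

    Unlike parsing netstat output, this does not depend on the
    localized text of the connection-state column (LISTENING/ABHOEREN/...).
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0

# Example: open a listening socket, then verify the probe sees it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # OS picks a free port
server.listen(1)
port = server.getsockname()[1]
print(port_is_listening(port))  # True while the server socket is open
server.close()
```

The same connect-based check could be done from a batch script (e.g. via PowerShell's Test-NetConnection), avoiding the localized netstat text entirely.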





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11871 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11871/
Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false)

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
{"responseHeader":{"status":404,"QTime":2},"error":{"msg":"no such blob or version available: test/1","code":404}}

Stack Trace:
java.lang.AssertionError: {
  "responseHeader":{
    "status":404,
    "QTime":2},
  "error":{
    "msg":"no such blob or version available: test/1",
    "code":404}}
at 
__randomizedtesting.SeedInfo.seed([F8787F64D19AC088:799EF17CA6C5A0B4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:108)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)

RE: [Possibly spoofed] Re: Anybody having troubles building trunk?

2015-01-08 Thread Vanlerberghe, Luc
I had exactly the same issue building Solr, but using the tips here I managed 
to get everything working again:

I deleted the .ant and .ivy2 folders in my user directory, edited 
lucene\ivy-settings.xml to comment out the <ibiblio name="cloudera" ... /> and 
<resolver ref="cloudera"/> elements (leave the elements for 
releases.cloudera.com!)

After that ant ivy-bootstrap and ant resolve ran successfully (taking about 
5 minutes to download all dependencies)

I guess that one of the artifacts loaded from cloudera conflicts with one of 
the official ones from releases.cloudera.com (perhaps the order in the 
resolver chain should be reversed?)

Side note: For releases.cloudera.com, Mark Miller changed https to http on 
14/3/2014 to work around an expired SSL certificate.
I checked the certificate on the site and switched back to using https and it 
seems to be fine now...

Regards,

Luc

-Original Message-
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com] 
Sent: donderdag 8 januari 2015 6:32
To: dev@lucene.apache.org
Subject: [Possibly spoofed] Re: Anybody having troubles building trunk?

Similar but different? I got rid of the cloudera references altogether,
did ant clean and it is still the same error.

The build line that failed is:

<ivy:retrieve conf="compile,compile.hadoop" type="jar,bundle"
sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"/>

in trunk/solr/core/build.xml:65

Regards,
   Alex.

Sign up for my Solr resources newsletter at http://www.solr-start.com/


On 8 January 2015 at 00:12, Steve Rowe sar...@gmail.com wrote:
 I had the same issue earlier today, and identified the problem here, along
 with a workaround:
 https://issues.apache.org/jira/browse/SOLR-4839?focusedCommentId=14268311&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14268311

 On Wed, Jan 7, 2015 at 10:36 PM, Alexandre Rafalovitch arafa...@gmail.com
 wrote:

 I am having dependencies issues even if I blow away everything, check
 it out again and do 'ant resolve':
 resolve:
 [ivy:retrieve]
 [ivy:retrieve] :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve] ::
 [ivy:retrieve] ::  UNRESOLVED DEPENDENCIES ::
 [ivy:retrieve] ::
 [ivy:retrieve] ::
 org.restlet.jee#org.restlet.ext.servlet;2.3.0: configuration not found
 in org.restlet.jee#org.restlet.ext.servlet;2.3.0: 'master'. It was
 required from org.apache.solr#core;working@Alexs-MacBook-Pro.local
 compile
 [ivy:retrieve] ::
 [ivy:retrieve]
 [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

 BUILD FAILED

 Regards,
Alex.

 
 Sign up for my Solr resources newsletter at http://www.solr-start.com/




[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #813: POMs out of sync

2015-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/813/

No tests ran.

Build Log:
[...truncated 40083 lines...]



[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269195#comment-14269195
 ] 

ASF subversion and git services commented on SOLR-6787:
---

Commit 1650251 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1650251 ]

SOLR-6787 more logging

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created . numShards for this 
 collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 #  GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob 
 # GET on the end point with jar name would give  details of various versions 
 of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with a wt=filestream to get 
 the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > 
 mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest 
 version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > 
 mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.
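
For scripting against this API, the curl commands above translate directly into URL patterns. A small Python sketch building those endpoints and the md5 the listing reports (host, collection, and blob names follow the examples above; this is illustrative, not an official client):

```python
import hashlib

# Endpoint layout taken from the curl examples above; host/port are the
# stock Solr defaults and illustrative only.
BASE = "http://localhost:8983/solr/.system/blob"

def upload_url(name):
    # POST the jar bytes here with Content-Type: application/octet-stream
    return f"{BASE}/{name}"

def download_url(name, version=None):
    # wt=filestream streams the raw jar; omit version to get the latest
    path = f"{BASE}/{name}" if version is None else f"{BASE}/{name}/{version}"
    return f"{path}?wt=filestream"

def blob_md5(data):
    # The .system collection reports an md5 for each stored blob; computing
    # it locally lets a client verify what the listing endpoint returns.
    return hashlib.md5(data).hexdigest()

print(upload_url("mycomponent"))
print(download_url("mycomponent", 1))
print(blob_md5(b"jar bytes"))
```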






[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269197#comment-14269197
 ] 

ASF subversion and git services commented on SOLR-6787:
---

Commit 1650252 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650252 ]

SOLR-6787 more logging

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created . numShards for this 
 collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 #  GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob 
 # GET on the end point with jar name would give  details of various versions 
 of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with a wt=filestream to get 
 the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > 
 mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest 
 version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > 
 mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.






[jira] [Commented] (LUCENE-6166) Deletions alone never trigger merges

2015-01-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269160#comment-14269160
 ] 

Uwe Schindler commented on LUCENE-6166:
---

Nice :-) I think that reaches back to the days when MergePolicy never took 
deletes into account. Thanks for fixing!

 Deletions alone never trigger merges
 

 Key: LUCENE-6166
 URL: https://issues.apache.org/jira/browse/LUCENE-6166
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.x

 Attachments: LUCENE-6166.patch


 If an app has an old index and only does deletions against it, we seem to 
 never trigger a merge, so deletions are never reclaimed in this case.






[jira] [Commented] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader

2015-01-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269174#comment-14269174
 ] 

Uwe Schindler commented on LUCENE-6165:
---

Hi, I think this is a clean approach. People can still reuse their old, slow 
code with conventional filterReaders, but people who care about speed can use 
CodecReader.

I just wonder why - in contrast to SlowCompositeReaderWrapper - the 
SlowCodecReaderWrapper extends Object and does not implement CodecReader 
directly. Instead the impl is an anonymous subclass in wrap(). I think we should 
make the wrapper itself implement CodecReader, but still with a private ctor. 
This would also avoid the synthetic access$XX methods that work around the 
private methods. In addition, this would allow checking whether a CodecReader is 
slow via instanceof.
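
The suggested refactoring — a concrete wrapper type constructed only through a static wrap() factory, so callers can test for it with instanceof — can be sketched generically. A Python illustration of the pattern (the real classes are Lucene's Java CodecReader and SlowCodecReaderWrapper; names here are stand-ins):

```python
class CodecReader:
    """Stand-in for the abstract reader interface."""
    def postings(self):
        raise NotImplementedError

class SlowCodecReaderWrapper(CodecReader):
    """Concrete wrapper type: callers can detect slow readers with
    isinstance() instead of getting an opaque anonymous class."""
    def __init__(self, delegate):
        # convention: construct only via wrap(), never directly
        self._delegate = delegate

    @staticmethod
    def wrap(reader):
        # already a CodecReader: nothing to wrap
        if isinstance(reader, CodecReader):
            return reader
        return SlowCodecReaderWrapper(reader)

    def postings(self):
        return self._delegate.postings()

class PlainLeafReader:
    """A reader that does not implement the codec interface."""
    def postings(self):
        return ["term1", "term2"]

wrapped = SlowCodecReaderWrapper.wrap(PlainLeafReader())
print(isinstance(wrapped, SlowCodecReaderWrapper))  # True: slowness is detectable
```

The factory short-circuits for readers that already implement the interface, so wrapping is idempotent — the same behavior wrap() would need either way.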

 Change merging APIs to work on CodecReader instead of LeafReader
 

 Key: LUCENE-6165
 URL: https://issues.apache.org/jira/browse/LUCENE-6165
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6165.patch


 Patch factors out reader based on codec apis and changes all merge 
 policy/addIndexes apis to use this. 
 If you want to do slow wrapping, you can still do it, just use 
 SlowCodecReaderWrapper.wrap(LeafReader) yourself (versus SegmentMerger doing 
 it always if its not a SegmentReader).
 Also adds FilterCodecReader, to make it easier to start efficiently filtering 
 on merge. I cutover all the index splitters to this. This means they should 
 be much much faster with this patch, they just change the deletes as you 
 expect, and the merge is as optimal as a normal one.
 In other places, for now I think we should just do a rote conversion with 
 SlowCodecReaderWrapper.wrap. Its no slower than today, just explicit, and we 
 can incrementally fix them to do the right thing in the future rather than 
 all at once.






[jira] [Commented] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader

2015-01-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269329#comment-14269329
 ] 

Robert Muir commented on LUCENE-6165:
-

{quote}
I just wonder why - in contrast to SlowCompositeReaderWrapper - the 
SlowCodecReaderWrapper extends Object and does not implement CodecReader 
directly. Instead the impl is an anonymous subclass in wrap(). I think we should 
make the wrapper itself implement CodecReader, but still with a private ctor. 
This would also avoid the synthetic access$XX methods that work around the 
private methods. In addition, this would allow checking whether a CodecReader is 
slow via instanceof.
{quote}

Because it is not here to stay. We need to remove it, but I cannot do this shit 
all in one patch. I don't think we need to put such investments into the slow 
wrapper for that reason.

 Change merging APIs to work on CodecReader instead of LeafReader
 

 Key: LUCENE-6165
 URL: https://issues.apache.org/jira/browse/LUCENE-6165
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6165.patch


 Patch factors out reader based on codec apis and changes all merge 
 policy/addIndexes apis to use this. 
 If you want to do slow wrapping, you can still do it, just use 
 SlowCodecReaderWrapper.wrap(LeafReader) yourself (versus SegmentMerger doing 
 it always if its not a SegmentReader).
 Also adds FilterCodecReader, to make it easier to start efficiently filtering 
 on merge. I cutover all the index splitters to this. This means they should 
 be much much faster with this patch, they just change the deletes as you 
 expect, and the merge is as optimal as a normal one.
 In other places, for now I think we should just do a rote conversion with 
 SlowCodecReaderWrapper.wrap. Its no slower than today, just explicit, and we 
 can incrementally fix them to do the right thing in the future rather than 
 all at once.






[jira] [Commented] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader

2015-01-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269330#comment-14269330
 ] 

Robert Muir commented on LUCENE-6165:
-

It also has no state (unlike slowwrapper). No need for overengineering here.




[jira] [Commented] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader

2015-01-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269336#comment-14269336
 ] 

Robert Muir commented on LUCENE-6165:
-

I'm committing this as-is. Uwe, if you want to refactor that reader, I have no 
problem with it.

The current code is simply moved out of SegmentMerger and is the safe approach.




[jira] [Commented] (SOLR-6930) Provide Circuit Breakers For Expensive Solr Queries

2015-01-08 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269695#comment-14269695
 ] 

Mike Drob commented on SOLR-6930:
-

The tricky part here is, of course, in estimating how much memory a query will 
require to complete before actually executing it. The ES page hints that 
introspecting the query to get information about the field data and then 
computing size from there is one approach.

I wonder if we can reuse some existing parsing logic to make that process much 
easier...

Getting the total heap size and the amount currently used by the field cache 
should be fairly straightforward, but ES warns that the latter may be inaccurate 
due to stale references.

Any ideas?
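As an illustration only -- the class and method names below are hypothetical, not 
existing Solr or ES APIs, and producing estimatedQueryBytes is exactly the hard 
estimation step discussed above -- the shape of such a pre-query check might be:

```java
// Hypothetical sketch of a heap-based circuit breaker check; not a Solr API.
public class HeapCircuitBreaker {

    /**
     * Reject a query whose estimated memory need, on top of what the field
     * cache already holds, would push usage past a fraction of the max heap.
     */
    public static boolean shouldReject(long estimatedQueryBytes,
                                       long fieldCacheBytes,
                                       long maxHeapBytes,
                                       double limitFraction) {
        long limit = (long) (maxHeapBytes * limitFraction);
        return fieldCacheBytes + estimatedQueryBytes > limit;
    }

    public static void main(String[] args) {
        // The max heap itself is straightforward to obtain:
        long maxHeap = Runtime.getRuntime().maxMemory();
        // A 100MB query estimate against an empty field cache and a 60% limit:
        System.out.println(shouldReject(100L << 20, 0L, maxHeap, 0.60));
    }
}
```

Everything interesting lives in how estimatedQueryBytes gets computed; the 
comparison itself is trivial.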

 Provide Circuit Breakers For Expensive Solr Queries
 -

 Key: SOLR-6930
 URL: https://issues.apache.org/jira/browse/SOLR-6930
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Mike Drob

 Ref: 
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
 ES currently allows operators to configure circuit breakers to preemptively 
 fail queries that are estimated too large rather than allowing an OOM 
 Exception to happen. We might be able to do the same thing.






[jira] [Created] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-08 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6931:
-

 Summary: We should do a limited retry when using HttpClient.
 Key: SOLR-6931
 URL: https://issues.apache.org/jira/browse/SOLR-6931
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Commented] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269729#comment-14269729
 ] 

Mark Miller commented on SOLR-6931:
---

See SOLR-4509. HttpClient uses a stale connection check to avoid using a bad 
pooled connection. This check has a race, and we can use a bad connection 
sometimes. In most of these cases, it is actually safe for us to retry. We 
can't use the default retry handler because it attempts to detect idempotent 
updates and Solr allows update type requests via GET requests. If we turn off 
the idempotent detection, the retry is safe and we can avoid some very 
problematic failures like 'connection reset' exceptions. On a heavily loaded 
SolrCloud cluster, even a rare response like this from a replica can cause a 
recovery and heavy cluster disruption.
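To illustrate the control flow only (the actual patch plugs a custom retry 
handler into HttpClient rather than wrapping calls like this), a bounded retry 
on connection-level IOExceptions looks roughly like:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Illustrative only: the real fix supplies a retry handler to HttpClient;
// this just shows the bounded-retry control flow described above.
public class LimitedRetry {

    public static <T> T withRetry(Callable<T> request, int maxRetries) throws Exception {
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return request.call();
            } catch (IOException e) {
                last = e; // e.g. 'connection reset' from a stale pooled connection
            }
        }
        throw last; // give up after maxRetries re-attempts
    }
}
```

Note the sketch deliberately skips any idempotency guessing, mirroring the point 
above that the default handler's idempotent detection is what makes it unusable 
for Solr.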




[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269731#comment-14269731
 ] 

Mark Miller commented on SOLR-4509:
---

SOLR-6931 We should do a limited retry when using HttpClient.

 Disable HttpClient stale check for performance and fewer spurious connection 
 errors.
 

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and a latency reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2448 - Still Failing

2015-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2448/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([F647F9CF7563C6C4:77A177D7023CA6F8]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

RE: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40-ea-b20) - Build # 4400 - Still Failing!

2015-01-08 Thread Uwe Schindler
Sorry, my fault!

The JVM version was no longer available, so it fell back to the platform default.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Thursday, January 08, 2015 6:12 PM
 To: er...@apache.org; ehatc...@apache.org; no...@apache.org;
 sar...@gmail.com; u...@thetaphi.de; jbern...@apache.org;
 jan...@apache.org; hoss...@apache.org; steff...@apache.org;
 tomm...@apache.org; thelabd...@apache.org; k...@apache.org;
 rm...@apache.org; yo...@apache.org; mikemcc...@apache.org;
 jpou...@apache.org; tflo...@apache.org; rjer...@apache.org;
 romseyg...@apache.org; markrmil...@apache.org; ans...@apache.org;
 gcha...@apache.org; dsmi...@apache.org; a...@apache.org;
 sha...@apache.org; sh...@apache.org; dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40-ea-b20) -
 Build # 4400 - Still Failing!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4400/
 Java: 64bit/jdk1.8.0_40-ea-b20 -XX:+UseCompressedOops -
 XX:+UseParallelGC
 
 No tests ran.
 
 Build Log:
 [...truncated 9767 lines...]
 BUILD FAILED
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-
 Windows\build.xml:519: The following error occurred while executing this
 line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-
 Windows\build.xml:351: The following error occurred while executing this
 line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-
 Windows\lucene\build.xml:23: The following error occurred while executing
 this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-
 Windows\lucene\common-build.xml:307: Minimum supported Java version
 is 1.8.
 
 Total time: 1 second
 Build step 'Invoke Ant' marked build as failure [description-setter]
 Description set: Java: 64bit/jdk1.8.0_40-ea-b20 -XX:+UseCompressedOops -
 XX:+UseParallelGC Archiving artifacts Recording test results
 ERROR: Publisher hudson.tasks.junit.JUnitResultArchiver aborted due to
 exception
 hudson.AbortException: No test report files were found. Configuration
 error?
   at
 hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:
 116)
   at
 hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:
 93)
   at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2677)
   at hudson.remoting.UserRequest.perform(UserRequest.java:121)
   at hudson.remoting.UserRequest.perform(UserRequest.java:49)
   at hudson.remoting.Request$2.run(Request.java:324)
   at
 hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorSe
 rvice.java:68)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
 Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
 Source)
   at java.lang.Thread.run(Unknown Source)
   at ..remote call to Windows VBOX(Native Method)
   at
 hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1356)
   at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
   at hudson.remoting.Channel.call(Channel.java:752)
   at hudson.FilePath.act(FilePath.java:970)
   at hudson.FilePath.act(FilePath.java:959)
   at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:90)
   at
 hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:120)
   at
 hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:13
 7)
   at
 hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLa
 yer.java:74)
   at
 hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
   at
 hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.
 java:770)
   at
 hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(A
 bstractBuild.java:734)
   at hudson.model.Build$BuildExecution.post2(Build.java:183)
   at
 hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java
 :683)
   at hudson.model.Run.execute(Run.java:1784)
   at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
   at
 hudson.model.ResourceController.execute(ResourceController.java:89)
   at hudson.model.Executor.run(Executor.java:240)
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any
 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4509) Disable HttpClient stale check for performance.

2015-01-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4509:
--
Summary: Disable HttpClient stale check for performance.  (was: Disable 
HttpClient stale check for performance and fewer spurious connection errors.)




[jira] [Created] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-08 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6932:
-

 Summary: All HttpClient ConnectionManagers and SolrJ clients 
should always be shutdown in tests and regular code.
 Key: SOLR-6932
 URL: https://issues.apache.org/jira/browse/SOLR-6932
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2015-01-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269741#comment-14269741
 ] 

Hoss Man commented on SOLR-6840:


bq. Should I be setting it up so that the control Jetty doesn't join the 
cluster at all? 

my understanding for the longest time was that the control jetty/collection1 
was supposed to exist for exactly this purpose -- to be a standalone, 
single-core equivalent of the distributed collection1 for the purpose of 
comparisons.

but then in SOLR-2894 and SOLR-6379 miller made comments about how the control 
jetty was supposed to work in cloud-based tests that confused me and still 
confuse me, and i defer you to those comments rather than trying to explain them.

[~markrmil...@gmail.com]: can you help clarify things for Alan so he can get 
these tests working w/o the legacy solr.xml support in there?

 Remove legacy solr.xml mode
 ---

 Key: SOLR-6840
 URL: https://issues.apache.org/jira/browse/SOLR-6840
 Project: Solr
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-6840.patch, SOLR-6840.patch, SOLR-6840.patch


 On the [Solr Cores and solr.xml 
 page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
  the Solr Reference Guide says:
 {quote}
 Starting in Solr 4.3, Solr will maintain two distinct formats for 
 {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
 have become accustomed to in which all of the cores one wishes to define in a 
 Solr instance are defined in {{solr.xml}} in 
 {{<cores><core/>...<core/></cores>}} tags. This format will continue to be 
 supported through the entire 4.x code line.
 As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
 Solr will support _core discovery_. [...]
 The new core discovery mode structure for solr.xml will become mandatory as 
 of Solr 5.0, see: Format of solr.xml.
 {quote}
 AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
 trunk.
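For comparison, an abridged sketch of what a discovery-mode solr.xml looks like 
(the values here are illustrative defaults, not a complete file); cores are then 
found by scanning for core.properties files rather than being listed:

```xml
<!-- Discovery mode: no <cores>/<core> elements in solr.xml. -->
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>
```

Each core instead gets a small core.properties file (often just a name=... line) 
in its instance directory.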






[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2015-01-08 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269740#comment-14269740
 ] 

Grant Ingersoll commented on SOLR-3619:
---

My pref would be:

bin/solr create <name> and we handle the cloud logic behind the scenes.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml









[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269766#comment-14269766
 ] 

Mark Miller commented on SOLR-6840:
---

We would prefer the control to be a single-replica, single-shard collection in its 
own SolrCloud cluster, e.g. its own ZooKeeper chroot, not part of the test cluster.

Due to some history and timing and ease and complications, etc., there was some 
bleed-over. It's probably best to tackle unbleeding that here.




[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2015-01-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269763#comment-14269763
 ] 

David Smiley commented on SOLR-3619:


bq. IMO we shouldn't talk about cores at all, or at least mark them all as 
expert and put them in a relatively obscure place. I've seen a lot of 
confusion on the lists and on site about the relationship between 
cores/collections/shards/replicas/whatever.

That point resonates well with me too.  It may be difficult not to talk about 
cores, but we should try to avoid it, I guess.  It may be easier to avoid in 
SolrCloud mode... but then not everyone is using SolrCloud.




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_25) - Build # 11542 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11542/
Java: 32bit/jdk1.8.0_25 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8046 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:519: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:61: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:187: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:510: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:463: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:376: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/build.xml:65: 
impossible to resolve dependencies:
resolve failed - see output for details

Total time: 36 minutes 30 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_25 -client 
-XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6931:
--
Attachment: SOLR-6931.patch




RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b44) - Build # 11541 - Failure!

2015-01-08 Thread Uwe Schindler
Hi,

This is something new in JDK 9. I have to investigate, but it looks like 
JDK 9 has some extra checks that prevent opening a FileChannel on a directory. 
This means we can no longer sync directories in Java 9. This would not affect 
people in production, because we only assert in tests that fsync on a 
directory actually works on Linux and MacOSX.
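For reference, the directory-fsync pattern in question -- sketched here with a 
hypothetical helper, hedged to return false where the JDK refuses it -- opens a 
read-only FileChannel on the directory and forces it:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the directory-fsync pattern used for index durability; on JDK 9
// the FileChannel.open call on a directory may now throw IOException.
public class DirectoryFsync {

    /** Returns true if the directory (or file) could be opened and fsynced. */
    public static boolean tryFsync(Path path) {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            ch.force(true); // flush metadata (created/renamed files) to disk
            return true;
        } catch (IOException e) {
            return false; // e.g. JDK 9 refusing a FileChannel on a directory
        }
    }
}
```

On a regular file this succeeds everywhere; whether it succeeds on a directory 
is the platform- and JDK-dependent part being tested.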

I will dig for the actual change.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Thursday, January 08, 2015 6:14 PM
 To: er...@apache.org; ehatc...@apache.org; no...@apache.org;
 sar...@gmail.com; u...@thetaphi.de; jbern...@apache.org;
 jan...@apache.org; hoss...@apache.org; steff...@apache.org;
 tomm...@apache.org; thelabd...@apache.org; k...@apache.org;
 rm...@apache.org; yo...@apache.org; mikemcc...@apache.org;
 jpou...@apache.org; tflo...@apache.org; rjer...@apache.org;
 romseyg...@apache.org; markrmil...@apache.org; ans...@apache.org;
 gcha...@apache.org; dsmi...@apache.org; a...@apache.org;
 sha...@apache.org; sh...@apache.org; dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b44) - Build #
 11541 - Failure!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11541/
 Java: 64bit/jdk1.9.0-ea-b44 -XX:-UseCompressedOops -
 XX:+UseConcMarkSweepGC
 
 86 tests failed.
 FAILED:
 junit.framework.TestSuite.org.apache.lucene.search.TestControlledRealTim
 eReopenThread
 
 Error Message:
 2 threads leaked from SUITE scope at
 org.apache.lucene.search.TestControlledRealTimeReopenThread: 1)
 Thread[id=2657, name=NRTNoDeletes Reopen Thread,
 state=TIMED_WAITING, group=TGRP-
 TestControlledRealTimeReopenThread] at sun.misc.Unsafe.park(Native
 Method) at
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.aw
 aitNanos(AbstractQueuedSynchronizer.java:2078) at
 org.apache.lucene.search.ControlledRealTimeReopenThread.run(Controlled
 RealTimeReopenThread.java:223)2) Thread[id=2656, name=NRTDeletes
 Reopen Thread, state=TIMED_WAITING, group=TGRP-
 TestControlledRealTimeReopenThread] at sun.misc.Unsafe.park(Native
 Method) at
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.aw
 aitNanos(AbstractQueuedSynchronizer.java:2078) at
 org.apache.lucene.search.ControlledRealTimeReopenThread.run(Controlled
 RealTimeReopenThread.java:223)
 
 Stack Trace:
 com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked
 from SUITE scope at
 org.apache.lucene.search.TestControlledRealTimeReopenThread:
1) Thread[id=2657, name=NRTNoDeletes Reopen Thread,
 state=TIMED_WAITING, group=TGRP-
 TestControlledRealTimeReopenThread]
 at sun.misc.Unsafe.park(Native Method)
 at
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.aw
 aitNanos(AbstractQueuedSynchronizer.java:2078)
 at
 org.apache.lucene.search.ControlledRealTimeReopenThread.run(Controlled
 RealTimeReopenThread.java:223)
2) Thread[id=2656, name=NRTDeletes Reopen Thread,
 state=TIMED_WAITING, group=TGRP-
 TestControlledRealTimeReopenThread]
 at sun.misc.Unsafe.park(Native Method)
 at
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.aw
 aitNanos(AbstractQueuedSynchronizer.java:2078)
 at
 org.apache.lucene.search.ControlledRealTimeReopenThread.run(Controlled
 RealTimeReopenThread.java:223)
   at __randomizedtesting.SeedInfo.seed([C7A39F9C29494FB7]:0)
 
 
 FAILED:
 org.apache.lucene.codecs.compressing.TestCompressingStoredFieldsFormat
 .testMergeStability
 
 Error Message:
 On Linux and MacOSX fsyncing a directory should not throw IOException, we
 just don't want to rely on that in production (undocumented). Got:
 java.nio.file.FileSystemException: /mnt/ssd/jenkins/workspace/Lucene-Solr-
 trunk-
 Linux/lucene/build/core/test/J0/temp/lucene.codecs.compressing.TestCom
 pressingStoredFieldsFormat C7A39F9C29494FB7-001/index-MMapDirectory-
 001: Is a directory
 
 Stack Trace:
 java.lang.AssertionError: On Linux and MacOSX fsyncing a directory should
 not throw IOException, we just don't want to rely on that in production
 (undocumented). Got: java.nio.file.FileSystemException:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-
 Linux/lucene/build/core/test/J0/temp/lucene.codecs.compressing.TestCom
 pressingStoredFieldsFormat C7A39F9C29494FB7-001/index-MMapDirectory-
 001: Is a directory
   at
 __randomizedtesting.SeedInfo.seed([C7A39F9C29494FB7:B3EFD9B324A34D0
 1]:0)
   at 

[jira] [Updated] (SOLR-6931) We should do a limited retry when using HttpClient.

2015-01-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6931:
--
Fix Version/s: Trunk
   5.0

 We should do a limited retry when using HttpClient.
 ---

 Key: SOLR-6931
 URL: https://issues.apache.org/jira/browse/SOLR-6931
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6931.patch
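A limited-retry wrapper of the kind this issue proposes might look roughly like the following. The helper name, retry budget, and retried exception type are illustrative, not the actual patch:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class LimitedRetry {
    // Hypothetical helper: run a request, retrying at most maxRetries extra
    // times on transport-level failures, rethrowing the last failure once
    // the budget is exhausted.
    static <T> T withRetries(Callable<T> request, int maxRetries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return request.call();
            } catch (IOException e) { // only transport failures are retried
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds: with a budget of 3 retries this returns "ok".
        String result = withRetries(() -> {
            if (calls[0]++ < 2) throw new IOException("connection reset");
            return "ok";
        }, 3);
        System.out.println(result + " after " + calls[0] + " calls");
    }
}
```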






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2015-01-08 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269733#comment-14269733
 ] 

Grant Ingersoll commented on SOLR-3619:
---

First off, love the new stuff!

Coming a little late to the party, but just now finally checking this out.  So, 
I built the distro, copied it to a new directory and unpacked it.

I then went to the README which is the first thing I do for any new software 
(as I suspect most devs do) and here's what I see:

{quote}
This will launch a Solr server in the background of your shell, bound
to port 8983. After starting Solr, you can create a new core for indexing
your data by doing:

  bin/solr create_core -n name
{quote}

and then a few lines later:

{quote}
After starting Solr in cloud mode, you can create a new collection for indexing
your data by doing:

  bin/solr create_collection -n name
{quote}

You've already lost me (well, not me, literally, but noobs, I'm sure).  What 
the heck is the diff between a collection and a core and why should I care so 
early on?  Why should I have to know that distinction at this stage of the 
game?  I get that it relates to the Collections API and cloud mode, but I'm a 
new user and that distinction, in my estimation, is at least a day or two away 
(and hopefully is resolved at some point and becomes a non-issue) at which time 
it can be explained via the Docs in the ref guide.

Just my 2 cents.  

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2015-01-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269745#comment-14269745
 ] 

Erick Erickson commented on SOLR-3619:
--

OK, just saw Grant's comment and that sparked

IMO we shouldn't talk about cores at all, or at least mark them all as expert 
and put them in a relatively obscure place. I've seen a _lot_ of confusion on 
the lists and on site about the relationship between 
cores/collections/shards/replicas/whatever.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6933) bin/solr script should just have a single create action that creates a core or collection depending on the mode solr is running in

2015-01-08 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-6933:


 Summary: bin/solr script should just have a single create action 
that creates a core or collection depending on the mode solr is running in
 Key: SOLR-6933
 URL: https://issues.apache.org/jira/browse/SOLR-6933
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter


instead of create_core and create_collection, just have create that creates a 
core or a collection based on which mode Solr is running in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269751#comment-14269751
 ] 

Mark Miller commented on SOLR-4509:
---

SOLR-6932 All HttpClient ConnectionManagers and SolrJ clients should always be 
shutdown in tests and regular code.

 Disable HttpClient stale check for performance.
 ---

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and a latency reduction of over 100ms.  This patch was 
 made in the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6932) All HttpClient ConnectionManagers and SolrJ clients should always be shutdown in tests and regular code.

2015-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269747#comment-14269747
 ] 

Mark Miller commented on SOLR-6932:
---

Because we are not consistent with this, the current approach for SOLR-4509 
ends up being a problem. Threads can be started and never stopped and the test 
framework will rightly flip out. We should track and ensure proper cleanup of 
more of our closeable objects. I'll start with ConnectionManagers and SolrJ 
clients as I have already had to do this work for SOLR-4509. There are others 
we should look at as well though.
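The kind of tracking described above can be sketched as follows. `ReleaseTracker` and its methods are hypothetical stand-ins for illustration, not Solr's actual test-framework classes:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Map;

// Sketch: every closeable resource (connection manager, SolrJ client, ...)
// registers itself on creation, and a test-framework hook can flag anything
// that was never released at the end of a run.
public class ReleaseTracker {
    private static final Map<Object, String> OPEN =
        Collections.synchronizedMap(new IdentityHashMap<>());

    static <T extends Closeable> T track(T resource) {
        OPEN.put(resource, resource.getClass().getName());
        return resource;
    }

    static void release(Closeable resource) throws IOException {
        OPEN.remove(resource);
        resource.close();
    }

    static int openCount() { return OPEN.size(); }

    public static void main(String[] args) throws Exception {
        Closeable c = () -> {}; // trivial resource standing in for a real client
        track(c);
        System.out.println("open=" + openCount());
        release(c);
        System.out.println("open=" + openCount());
    }
}
```

A test framework would assert `openCount() == 0` after each suite, which is what makes leaked threads and connections visible.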

 All HttpClient ConnectionManagers and SolrJ clients should always be shutdown 
 in tests and regular code.
 

 Key: SOLR-6932
 URL: https://issues.apache.org/jira/browse/SOLR-6932
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_25) - Build # 4401 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4401/
Java: 32bit/jdk1.8.0_25 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 8213 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:519: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:467: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:61: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:39: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:187: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:510:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:463:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:376:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\build.xml:65:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 42 minutes 44 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_25 -server 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6930) Provide Circuit Breakers For Expensive Solr Queries

2015-01-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269772#comment-14269772
 ] 

Hoss Man commented on SOLR-6930:


isn't this a dup of SOLR-5986?

 Provide Circuit Breakers For Expensive Solr Queries
 -

 Key: SOLR-6930
 URL: https://issues.apache.org/jira/browse/SOLR-6930
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Mike Drob

 Ref: 
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
 ES currently allows operators to configure circuit breakers to preemptively 
 fail queries that are estimated too large rather than allowing an OOM 
 Exception to happen. We might be able to do the same thing.
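A minimal sketch of such a pre-flight breaker, assuming a caller can estimate a query's memory cost up front (all names hypothetical):

```java
// Sketch: fail a query fast when its estimated memory cost exceeds a
// configured budget, instead of letting it run and risk an OOM.
public class CircuitBreaker {
    private final long limitBytes;

    CircuitBreaker(long limitBytes) { this.limitBytes = limitBytes; }

    void checkOrTrip(long estimatedBytes) {
        if (estimatedBytes > limitBytes) {
            throw new IllegalStateException("estimated " + estimatedBytes
                + "b exceeds breaker limit " + limitBytes + "b");
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(1024);
        breaker.checkOrTrip(512); // within budget: proceeds
        System.out.println("small query allowed");
        try {
            breaker.checkOrTrip(4096); // over budget: fails fast instead of OOM
        } catch (IllegalStateException e) {
            System.out.println("tripped");
        }
    }
}
```

The hard part, as the ES docs note, is producing a useful estimate before execution; the breaker itself is the easy half.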



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b44) - Build # 11541 - Failure!

2015-01-08 Thread Uwe Schindler
Hi,

This is the problematic commit in JDK9:
http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/e5b66323ae45

I hope this one will not get backported! We should at least let them know that 
they removed the only way to fsync a directory file descriptor. For now, we 
should comment out the assert that checks that fsync on a directory works on 
OSX and Linux...

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Thursday, January 08, 2015 6:43 PM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b44) -
 Build # 11541 - Failure!
 
 Hi,
 
 This is now something new in JDK 9. I have to investigate, but it looks like 
 JDK
 9 has some extra checks that prevents opening a FileChannel on a directory.
 This means we can no longer sync directories in Java 9. This would not affect
 people in production, because we just assert this in tests that fsync on a
 directory actually works on Linux and MacOSX.
 
 I will dig for the actual change.
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 

[jira] [Commented] (SOLR-6933) bin/solr script should just have a single create action that creates a core or collection depending on the mode solr is running in

2015-01-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269777#comment-14269777
 ] 

Hoss Man commented on SOLR-6933:


bq. instead of create_core and create_collection...

I would strongly suggest leaving both of those commands alone, and instead 
*adding* a new create that delegates to them as needed.


 bin/solr script should just have a single create action that creates a core 
 or collection depending on the mode solr is running in
 --

 Key: SOLR-6933
 URL: https://issues.apache.org/jira/browse/SOLR-6933
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Timothy Potter
Assignee: Timothy Potter

 instead of create_core and create_collection, just have create that creates a 
 core or a collection based on which mode Solr is running in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2015-01-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269780#comment-14269780
 ] 

Hoss Man commented on SOLR-3619:


Tim already spun this idea off into its own issue -- please discuss there so 
as not to convolute and make this one any longer than it already is.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b44) - Build # 11382 - Still Failing!

2015-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11382/
Java: 32bit/jdk1.9.0-ea-b44 -client -XX:+UseG1GC

34 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestControlledRealTimeReopenThread

Error Message:
13 threads leaked from SUITE scope at 
org.apache.lucene.search.TestControlledRealTimeReopenThread: 1) 
Thread[id=854, name=TestControlledRealTimeReopenThread-3-thread-6, 
state=TIMED_WAITING, group=TGRP-TestControlledRealTimeReopenThread] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=850, 
name=TestControlledRealTimeReopenThread-3-thread-2, state=TIMED_WAITING, 
group=TGRP-TestControlledRealTimeReopenThread] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=844, 
name=NRTDeletes Reopen Thread, state=TIMED_WAITING, 
group=TGRP-TestControlledRealTimeReopenThread] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.apache.lucene.search.ControlledRealTimeReopenThread.run(ControlledRealTimeReopenThread.java:223)
4) Thread[id=856, name=TestControlledRealTimeReopenThread-3-thread-8, 
state=TIMED_WAITING, group=TGRP-TestControlledRealTimeReopenThread] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=857, 
name=TestControlledRealTimeReopenThread-3-thread-9, state=TIMED_WAITING, 
group=TGRP-TestControlledRealTimeReopenThread] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=855, 
name=TestControlledRealTimeReopenThread-3-thread-7, state=TIMED_WAITING, 
group=TGRP-TestControlledRealTimeReopenThread] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
