[jira] [Updated] (LUCENE-5344) Flexible StandardQueryParser behaves differently than ClassicQueryParser

2014-01-17 Thread Adriano Crestani (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adriano Crestani updated LUCENE-5344:
-

Fix Version/s: 4.6.1

> Flexible StandardQueryParser behaves differently than ClassicQueryParser
> 
>
> Key: LUCENE-5344
> URL: https://issues.apache.org/jira/browse/LUCENE-5344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.5
>Reporter: Krishna Keldec
>Assignee: Adriano Crestani
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: LUCENE-5344_adrianocrestani_2014-01-12.patch, 
> LUCENE-5344_adrianocrestani_2014-01-14.patch, 
> LUCENE-5344_adrianocrestani_2014-01-14_branch_4x.patch
>
>
> AnalyzerQueryNodeProcessor creates a BooleanQueryNode instead of a 
> MultiPhraseQueryNode under some circumstances.
> Classic query parser output: {{+content:a +content:320}}  *(correct)*
> {code:java}
> QueryParser classicQueryParser = new QueryParser(Version.LUCENE_45, "content", analyzer);
> classicQueryParser.setDefaultOperator(Operator.AND);
> classicQueryParser.parse("a320");
> {code}
> Flexible query parser output: {{content:a content:320}} *(wrong)*
> {code:java}
> StandardQueryParser flexibleQueryParser = new StandardQueryParser(analyzer);
> flexibleQueryParser.setDefaultOperator(Operator.AND);
> flexibleQueryParser.parse("a320", "content");
> {code}
> The used analyzer:
> {code:java}
> Analyzer analyzer = new Analyzer() {
>   @Override
>   protected TokenStreamComponents createComponents(String field, Reader in) {
>     Tokenizer   src = new WhitespaceTokenizer(Version.LUCENE_45, in);
>     TokenStream tok = new WordDelimiterFilter(src,
>         WordDelimiterFilter.SPLIT_ON_NUMERICS |
>         WordDelimiterFilter.GENERATE_WORD_PARTS |
>         WordDelimiterFilter.GENERATE_NUMBER_PARTS,
>         CharArraySet.EMPTY_SET);
>     return new TokenStreamComponents(src, tok);
>   }
> };
> {code}
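For readers without a Lucene checkout handy, here is a rough stdlib-only approximation (a hypothetical `SplitOnNumericsDemo` class, not Lucene code) of the one WordDelimiterFilter behavior that matters here: under SPLIT_ON_NUMERICS the term "a320" is split at the letter/digit boundary, so both parsers see the tokens ["a", "320"]; the parsers differ only in whether they combine those tokens with AND or OR.

```java
import java.util.ArrayList;
import java.util.List;

// Rough approximation of SPLIT_ON_NUMERICS: split a term at letter/digit
// boundaries. (The real WordDelimiterFilter also handles case changes,
// delimiter characters, catenation, and more.)
public class SplitOnNumericsDemo {
    static List<String> split(String term) {
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (char c : term.toCharArray()) {
            // start a new part whenever we cross a letter/digit boundary
            if (cur.length() > 0
                    && Character.isDigit(c) != Character.isDigit(cur.charAt(cur.length() - 1))) {
                parts.add(cur.toString());
                cur.setLength(0);
            }
            cur.append(c);
        }
        if (cur.length() > 0) {
            parts.add(cur.toString());
        }
        return parts;
    }

    public static void main(String[] args) {
        // Both parsers receive these two tokens; the bug is only in whether
        // they are combined with AND (classic) or OR (flexible).
        System.out.println(split("a320")); // [a, 320]
    }
}
```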



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b02) - Build # 9034 - Still Failing!

2014-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9034/
Java: 32bit/jdk1.7.0_60-ea-b02 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 50385 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:398: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:187: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/jackson-core-asl-1.7.4.jar.sha1
* ./solr/licenses/jackson-mapper-asl-1.7.4.jar.sha1
* ./solr/licenses/jersey-core-1.16.jar.sha1

Total time: 60 minutes 40 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_60-ea-b02 -server -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.6.0_45) - Build # 3609 - Still Failing!

2014-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3609/
Java: 64bit/jdk1.6.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 49596 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:459: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:398: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\extra-targets.xml:87: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\extra-targets.xml:187: 
Source checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/jackson-core-asl-1.7.4.jar.sha1
* ./solr/licenses/jackson-mapper-asl-1.7.4.jar.sha1
* ./solr/licenses/jersey-core-1.16.jar.sha1

Total time: 108 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.6.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #566: POMs out of sync

2014-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/566/

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerRolesTest.testDistribSearch

Error Message:
could not set the new overseer

Stack Trace:
java.lang.AssertionError: could not set the new overseer
at 
__randomizedtesting.SeedInfo.seed([DF3C47EECA4D4B4E:5EDAC9F6BD122B72]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.OverseerRolesTest.addOverseerRole2ExistingNodes(OverseerRolesTest.java:120)
at 
org.apache.solr.cloud.OverseerRolesTest.doTest(OverseerRolesTest.java:86)




Build Log:
[...truncated 52224 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:482: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:176: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/extra-targets.xml:77:
 Java returned: 1

Total time: 124 minutes 2 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Comment Edited] (SOLR-5130) Implement addReplica Collections API

2014-01-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875495#comment-13875495
 ] 

Noble Paul edited comment on SOLR-5130 at 1/18/14 2:39 AM:
---

Currently a user can control several core properties through the CoreAdmin CREATE command:

name: The name of the new core. Same as "name" on the <core> element.
instanceDir: The directory where files for this SolrCore should be stored. 
Same as instanceDir on the <core> element.
dataDir: (Optional) Name of the data directory relative to instanceDir.

This new API should support these as well; otherwise we would be taking away 
some features users need.


was (Author: noble.paul):
Currently a user can control several values such as:

name: The name of the new core. Same as "name" on the <core> element.
instanceDir: The directory where files for this SolrCore should be stored. 
Same as instanceDir on the <core> element.
dataDir: (Optional) Name of the data directory relative to instanceDir.

This new API should support these as well; otherwise we would be taking away 
some features users need.

> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> addReplica API will add a node to a given collection/shard.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Updated] (SOLR-5130) Implement addReplica Collections API

2014-01-17 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5130:
-

Description: 
addReplica API will add a node to a given collection/shard.

Parameters:
# node
# collection
# shard (optional)
# _route_ (optional) (see SOLR-4221)

If shard or _route_ is not specified then physical shards will be created on 
the node for the given collection using the persisted values of 
maxShardsPerNode and replicationFactor.

  was:
addReplica API will add a node to a given collection/shard.

Parameters:
# replica
# collection
# shard (optional)
# _route_ (optional) (see SOLR-4221)

If shard or _route_ is not specified then physical shards will be created on 
the node for the given collection using the persisted values of 
maxShardsPerNode and replicationFactor.


> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> addReplica API will add a node to a given collection/shard.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Commented] (SOLR-5130) Implement addReplica Collections API

2014-01-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875495#comment-13875495
 ] 

Noble Paul commented on SOLR-5130:
--

Currently a user can control several values such as:

name: The name of the new core. Same as "name" on the <core> element.
instanceDir: The directory where files for this SolrCore should be stored. 
Same as instanceDir on the <core> element.
dataDir: (Optional) Name of the data directory relative to instanceDir.

This new API should support these as well; otherwise we would be taking away 
some features users need.

> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> addReplica API will add a node to a given collection/shard.
> Parameters:
> # replica
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-01-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875493#comment-13875493
 ] 

Noble Paul commented on SOLR-5476:
--

It is not really about special H/W, but mostly about 'dedicated' H/W.

If the node acting as Overseer is also used for normal cores, there is a good 
chance that CPU-intensive operations or GC pauses on that node will delay 
Overseer operations. In large clusters this can cause messages to pile up.

> Overseer Role for nodes
> ---
>
> Key: SOLR-5476
> URL: https://issues.apache.org/jira/browse/SOLR-5476
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
> SOLR-5476.patch, SOLR-5476.patch
>
>
> In a very large cluster the Overseer is likely to be overloaded. If the same 
> node is also serving a few other shards, the Overseer can get slowed down by 
> GC pauses or simply by too much work. If the cluster is really large, it is 
> possible to dedicate high-end hardware to Overseers.
> It works as a new collection admin command:
> command=addrole&role=overseer&node=192.168.1.5:8983_solr
> This results in the creation of an entry in /roles.json in ZK, which would 
> look like the following:
> {code:javascript}
> {
> "overseer" : ["192.168.1.5:8983_solr"]
> }
> {code}
> If a node is designated for the overseer role, it gets preference over others 
> when the overseer election takes place. If no designated servers are 
> available, another random node becomes the Overseer.
> Later on, if one of the designated nodes is brought up, it takes over the 
> Overseer role from the current Overseer.
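The preference rule described above can be sketched in a few lines (this is NOT Solr's actual leader-election code; `electOverseer` and the node names are made up for illustration):

```java
import java.util.List;

// Sketch of the election preference: a designated overseer node wins if one
// is live; otherwise any live node can become the Overseer.
public class OverseerElectionSketch {
    static String electOverseer(List<String> liveNodes, List<String> designated) {
        for (String node : liveNodes) {
            if (designated.contains(node)) {
                return node; // designated nodes get preference
            }
        }
        return liveNodes.get(0); // fall back to any live node
    }

    public static void main(String[] args) {
        List<String> live = List.of("node1:8983_solr", "192.168.1.5:8983_solr");
        List<String> designated = List.of("192.168.1.5:8983_solr");
        System.out.println(electOverseer(live, designated)); // 192.168.1.5:8983_solr
    }
}
```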






[jira] [Commented] (LUCENE-5376) Add a demo search server

2014-01-17 Thread Areek Zillur (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875465#comment-13875465
 ] 

Areek Zillur commented on LUCENE-5376:
--

LUCENE-5404 adds support for counting the number of entries a lookup was built 
with (returned by the build command), and makes Dictionary use InputIterator 
instead of BytesRefIterator, among other things. I think it will get rid of 
some of the nocommits here?

> Add a demo search server
> 
>
> Key: LUCENE-5376
> URL: https://issues.apache.org/jira/browse/LUCENE-5376
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: lucene-demo-server.tgz
>
>
> I think it'd be useful to have a "demo" search server for Lucene.
> Rather than being fully featured, like Solr, it would be minimal, just 
> wrapping the existing Lucene modules to show how you can make use of these 
> features in a server setting.
> The purpose is to demonstrate how one can build a minimal search server on 
> top of APIs like SearchManager, SearcherLifetimeManager, etc.
> This is also useful for finding rough edges / issues in Lucene's APIs that 
> make building a server unnecessarily hard.
> I don't think it should have back compatibility promises (except Lucene's 
> index back compatibility), so it's free to improve as Lucene's APIs change.
> As a starting point, I'll post what I built for the "eating your own dog 
> food" search app for Lucene's & Solr's jira issues 
> http://jirasearch.mikemccandless.com (blog: 
> http://blog.mikemccandless.com/2013/05/eating-dog-food-with-lucene.html ). It 
> uses Netty to expose basic indexing & searching APIs via JSON, but it's very 
> rough (lots of nocommits).






[jira] [Updated] (LUCENE-5404) Add support to get number of entries a Suggester Lookup was built with and minor refactorings

2014-01-17 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-5404:
-

Description: 
It would be nice to be able to tell the number of entries a suggester lookup 
was built with. This would let components using lookups keep some stats 
about how many entries were used to build a lookup.
Additionally, Dictionary could use InputIterator rather than 
BytesRefIterator, as most of the implementations now use it.

  was:
It would be nice to be able to tell the number of entries a suggester lookup 
was built with. This would let components using lookups keep some stats 
about how many entries were used to build a lookup.
Additionally, some Dictionary implementations could use InputIterator rather 
than BytesRefIterator, as most of them now use it.


> Add support to get number of entries a Suggester Lookup was built with and 
> minor refactorings
> -
>
> Key: LUCENE-5404
> URL: https://issues.apache.org/jira/browse/LUCENE-5404
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Areek Zillur
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5404.patch
>
>
> It would be nice to be able to tell the number of entries a suggester lookup 
> was built with. This would let components using lookups keep some stats 
> about how many entries were used to build a lookup.
> Additionally, Dictionary could use InputIterator rather than 
> BytesRefIterator, as most of the implementations now use it.






[jira] [Updated] (LUCENE-5404) Add support to get number of entries a Suggester Lookup was built with and minor refactorings

2014-01-17 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-5404:
-

Attachment: LUCENE-5404.patch

Initial patch:
  - adds a count() method to InputIterator so that consuming Lookup 
implementations can get the number of entries
  - Dictionary now uses InputIterator rather than BytesRefIterator
  - renamed Dictionary.getWordsIterator() to Dictionary.getEntryIterator()
  - Lookup.build() now returns the number of entries used to build it
  - minor renaming
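The count() idea from the patch notes above can be sketched as follows (a hypothetical, simplified `CountingLookupSketch`; the real InputIterator also carries weights and payloads, and real Lookups index each entry):

```java
import java.util.Iterator;
import java.util.List;

// Simplified sketch: build() consumes an iterator of entries and reports how
// many entries it saw, mirroring "Lookup.build() returns the number of
// entries used to build it".
public class CountingLookupSketch {
    private long count;

    long build(Iterator<String> entries) {
        count = 0;
        while (entries.hasNext()) {
            entries.next(); // a real Lookup would add the entry to its structure here
            count++;
        }
        return count; // number of entries the lookup was built with
    }

    long getCount() {
        return count;
    }

    public static void main(String[] args) {
        CountingLookupSketch lookup = new CountingLookupSketch();
        long n = lookup.build(List.of("apple", "apply", "ape").iterator());
        System.out.println(n); // 3
    }
}
```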

> Add support to get number of entries a Suggester Lookup was built with and 
> minor refactorings
> -
>
> Key: LUCENE-5404
> URL: https://issues.apache.org/jira/browse/LUCENE-5404
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Areek Zillur
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5404.patch
>
>
> It would be nice to be able to tell the number of entries a suggester lookup 
> was built with. This would let components using lookups keep some stats 
> about how many entries were used to build a lookup.
> Additionally, some Dictionary implementations could use InputIterator rather 
> than BytesRefIterator, as most of them now use it.






[jira] [Created] (LUCENE-5404) Add support to get number of entries a Suggester Lookup was built with and minor refactorings

2014-01-17 Thread Areek Zillur (JIRA)
Areek Zillur created LUCENE-5404:


 Summary: Add support to get number of entries a Suggester Lookup 
was built with and minor refactorings
 Key: LUCENE-5404
 URL: https://issues.apache.org/jira/browse/LUCENE-5404
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Areek Zillur
 Fix For: 5.0, 4.7


It would be nice to be able to tell the number of entries a suggester lookup 
was built with. This would let components using lookups keep some stats 
about how many entries were used to build a lookup.
Additionally, some Dictionary implementations could use InputIterator rather 
than BytesRefIterator, as most of them now use it.






Re: [VOTE] Release Lucene/Solr 4.6.1 RC1

2014-01-17 Thread Steve Rowe
+1

Smoke tester says: SUCCESS! [1:03:14.565590]

Changes, docs and javadocs look good.

Steve

On Jan 17, 2014, at 9:13 AM, Mark Miller  wrote:

> Please vote to release the following artifacts:
> 
> http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/
> 
> Here is my +1.
> 
> --
> - Mark





[jira] [Updated] (LUCENE-5398) NormValueSource unable to read long field norm

2014-01-17 Thread Peng Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peng Cheng updated LUCENE-5398:
---

Attachment: NormValueSource.java

Removed.

Ran the TestValueSources JUnit tests without problems. The change is trivial 
and shouldn't require a test case for the non-byte situation.

> NormValueSource unable to read long field norm
> --
>
> Key: LUCENE-5398
> URL: https://issues.apache.org/jira/browse/LUCENE-5398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.6
> Environment: Ubuntu 12.04
>Reporter: Peng Cheng
>Priority: Trivial
> Fix For: 4.7
>
> Attachments: NormValueSource.java
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Previous versions of Lucene stored norms in memory, so float values were 
> encoded into a byte to avoid memory overflow.
> Recent releases no longer have this constraint; as a result, norm values are 
> generally encoded to/decoded from a long.
> But the legacy NormValueSource still casts any long encoding to a byte, as 
> seen at line 74 in the java file, making any TFIDFSimilarity that uses a more 
> accurate encoding useless.
> The cast should be removed for the greater good.
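The precision loss from such a cast can be shown with plain Java (the norm value below is made up; real encodings depend on the Similarity in use):

```java
// Casting a long-encoded norm down to byte keeps only the low 8 bits,
// which is what makes a more accurate long encoding useless downstream.
public class NormCastDemo {
    static byte lowByte(long encodedNorm) {
        return (byte) encodedNorm; // what the legacy cast effectively does
    }

    public static void main(String[] args) {
        long encodedNorm = 16256L;                 // hypothetical long-encoded norm (0x3F80)
        System.out.println(encodedNorm);           // 16256
        System.out.println(lowByte(encodedNorm));  // -128 (low byte only)
    }
}
```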






Re: [VOTE] Release Lucene/Solr 4.6.1 RC1

2014-01-17 Thread Jan Høydahl
+1 - passing smokeTestRelease

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

17. jan. 2014 kl. 18:13 skrev Mark Miller :

> Please vote to release the following artifacts:
> 
> http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/
> 
> Here is my +1.
> 
> --
> - Mark



[jira] [Commented] (SOLR-5639) Return type parameter 'wt' is completely ignored when url is http encoded

2014-01-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875376#comment-13875376
 ] 

Hoss Man commented on SOLR-5639:


What you are seeing is not "http encoding" of the URL. What you are describing 
is a URL that has been "xml escaped" (or "html escaped").

If you ask a server for the raw URL {{/foo?a=b&amp;x=y}}, the application will 
receive the following parameters:
* "a" = "b"
* "amp;x" = "y"

which means that if the server is expecting a parameter named "x" it will not 
get one -- this is what's happening in your case: you are sending Solr a 
parameter named "amp;wt" instead of one named "wt", and it's being ignored.
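The effect can be reproduced with a few lines of plain Java (a hypothetical `EscapedQueryDemo` with a deliberately naive CGI-style parser, no URL decoding):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Naive query-string parser showing why an XML/HTML-escaped URL produces a
// parameter literally named "amp;x" instead of "x".
public class EscapedQueryDemo {
    static Map<String, String> parse(String query) {
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            params.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return params;
    }

    public static void main(String[] args) {
        // The '&' was escaped to '&amp;' somewhere before the request was made.
        Map<String, String> params = parse("a=b&amp;x=y");
        System.out.println(params); // {a=b, amp;x=y}
    }
}
```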

You can see this fairly clearly using something like "echoParams" (which is 
configured by default in the solr example)...

{noformat}
$ curl 'http://localhost:8983/solr/collection1/select?indent=true&amp;wt=json'
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">0</int>
  <lst name="params">
    <str name="indent">true</str>
    <str name="amp;wt">json</str>
  </lst>
</lst>
...
</response>
{noformat}

You're going to see the same basic behavior from any server you send a CGI 
(aka: form data) request to...

{noformat}
hossman@frisbee:~$ curl -sS 
'http://www.tipjar.com/cgi-bin/test?q=bar&amp;wt=json' | grep 'json'
amp;wtjsonjson
hossman@frisbee:~$ curl -sS 'http://www.tipjar.com/cgi-bin/test?q=bar&wt=json' 
| grep 'json'
wtjsonjson
{noformat}

bq. This causes severe problem when solr is integrated with GWT client where 
embedded script often encode url as per http encoding and ends up failing with 
timeout exception. Specially noticeable solr JSONP queries.

I don't really know anything about GWT, but based on your description of the 
problem, it sounds like something somewhere in your client code is either 
incorrectly XML/HTML escaping the URL prior to fetching it, or the URL is 
correctly XML/HTML escaped for serialization but the code making the call does 
not know it is escaped and attempts to use it as a "real" URL.

I suggest that you either: 1) start a thread on solr-user with more detail 
about your GWT client code, to see if other Solr folks with GWT knowledge can 
help spot the problem; or 2) ask on a GWT forum about how/why/where URLs used 
in JSONP requests get escaped and unescaped.

There's really no bug here.

> Return type parameter 'wt' is completely ignored when url is http encoded
> -
>
> Key: SOLR-5639
> URL: https://issues.apache.org/jira/browse/SOLR-5639
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, SearchComponents - other
>Affects Versions: 4.5.1
> Environment: Ubuntu 13.04, 
> Browser Chrome 
>Reporter: Suneeta Mall
>
> Querying solr with 'wt' parameter formats the result type as requested which 
> works fine except when url is http encoded.
> For example: 
> http://localhost:8983/solr/suggest?q=Status:ac&wt=json&indent=true
> the response I get is :
> <response>
> <lst name="responseHeader">
>   <int name="status">0</int>
>   <int name="QTime">1</int>
> </lst>
> <lst name="spellcheck">
>   <lst name="suggestions">
>     <lst name="ac">
>       <int name="numFound">5</int>
>       <int name="startOffset">7</int>
>       <int name="endOffset">9</int>
>       <arr name="suggestion">
>         <str>acknowledged</str>
>         <str>ack</str>
>         <str>actual</str>
>         <str>actually</str>
>         <str>access</str>
>       </arr>
>     </lst>
>     <str name="collation">Status:acknowledged</str>
>   </lst>
> </lst>
> </response>
> whereas the correct response should be:
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":1},
>   "spellcheck":{
>     "suggestions":[
>       "ac",{
>         "numFound":5,
>         "startOffset":7,
>         "endOffset":9,
>         "suggestion":["acknowledged",
>           "ack",
>           "actual",
>           "actually",
>           "access"]},
>       "collation","Status:acknowledged"]}}
> This causes severe problem when solr is integrated with GWT client where 
> embedded script often encode url as per http encoding and ends up failing 
> with timeout exception. Specially noticeable solr JSONP queries. for example: 
> http://localhost:8983/solr/suggest?q=Status:ac&wt=json&indent=true&json.wrf=xyz
>  when it returns xml instead of json. 
>  Noticeably other arguments works perfectly fine for example: 
> http://localhost:8983/solr/suggest?q=Status:ac&wt=json&indent=true.






[jira] [Resolved] (SOLR-5639) Return type parameter 'wt' is completely ignored when url is http encoded

2014-01-17 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5639.


Resolution: Invalid

> Return type parameter 'wt' is completely ignored when url is http encoded
> -
>
> Key: SOLR-5639
> URL: https://issues.apache.org/jira/browse/SOLR-5639
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, SearchComponents - other
>Affects Versions: 4.5.1
> Environment: Ubuntu 13.04, 
> Browser Chrome 
>Reporter: Suneeta Mall
>
> Querying solr with 'wt' parameter formats the result type as requested which 
> works fine except when url is http encoded.
> For example: 
> http://localhost:8983/solr/suggest?q=Status:ac&wt=json&indent=true
> the response I get is :
> <response>
> <lst name="responseHeader">
>   <int name="status">0</int>
>   <int name="QTime">1</int>
> </lst>
> <lst name="spellcheck">
>   <lst name="suggestions">
>     <lst name="ac">
>       <int name="numFound">5</int>
>       <int name="startOffset">7</int>
>       <int name="endOffset">9</int>
>       <arr name="suggestion">
>         <str>acknowledged</str>
>         <str>ack</str>
>         <str>actual</str>
>         <str>actually</str>
>         <str>access</str>
>       </arr>
>     </lst>
>     <str name="collation">Status:acknowledged</str>
>   </lst>
> </lst>
> </response>
> whereas the correct response should be:
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":1},
>   "spellcheck":{
>     "suggestions":[
>       "ac",{
>         "numFound":5,
>         "startOffset":7,
>         "endOffset":9,
>         "suggestion":["acknowledged",
>           "ack",
>           "actual",
>           "actually",
>           "access"]},
>       "collation","Status:acknowledged"]}}
> This causes severe problem when solr is integrated with GWT client where 
> embedded script often encode url as per http encoding and ends up failing 
> with timeout exception. Specially noticeable solr JSONP queries. for example: 
> http://localhost:8983/solr/suggest?q=Status:ac&wt=json&indent=true&json.wrf=xyz
>  when it returns xml instead of json. 
>  Noticeably other arguments works perfectly fine for example: 
> http://localhost:8983/solr/suggest?q=Status:ac&wt=json&indent=true.






[jira] [Commented] (SOLR-5594) Enable using extended field types with prefix queries for non-default encoded strings

2014-01-17 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875349#comment-13875349
 ] 

Uwe Schindler commented on SOLR-5594:
-

bq. SolrQueryParserBase.newPrefixQuery shouldn't be removed completely (it's 
protected, subclasses might be using it) ... just update it to call the new 
method on the FieldType.

bq. Why are any of the changes in SimpleQParser necessary, except for changing 
new PrefixQuery(...) to sf.getType().getPrefixQuery()? It looks like all the 
other changes there are unnecessary structural changes. Since the IndexSchema 
is already available, it should be just a couple lines changed (180/183).

Yes, this would also be identical to range query behaviour: newRangeQuery also 
delegates to the field type, and the protected method is there for subclasses.

> Enable using extended field types with prefix queries for non-default encoded 
> strings
> -
>
> Key: SOLR-5594
> URL: https://issues.apache.org/jira/browse/SOLR-5594
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers, Schema and Analysis
>Affects Versions: 4.6
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-5594-branch_4x.patch, SOLR-5594.patch, 
> SOLR-5594.patch, SOLR-5594.patch, SOLR-5594.patch
>
>
> Enable users to be able to use prefix query with custom field types with 
> non-default encoding/decoding for queries more easily. e.g. having a custom 
> field work with base64 encoded query strings.
> Currently, the workaround for it is to have the override at getRewriteMethod 
> level. Perhaps having the prefixQuery also use the calling FieldType's 
> readableToIndexed method would work better.






Re: [VOTE] Release Lucene/Solr 4.6.1 RC1

2014-01-17 Thread Simon Willnauer
+1

SUCCESS! [1:11:35.695571]

upgraded Elasticsearch to 4.6.1 and all tests pass

simon

On Fri, Jan 17, 2014 at 10:04 PM, Michael McCandless
 wrote:
> +1
>
> SUCCESS! [0:48:35.505732]
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Jan 17, 2014 at 3:02 PM, Robert Muir  wrote:
>> +1
>>
>> SUCCESS! [1:08:56.292897]
>>
>>
>>
>> On Fri, Jan 17, 2014 at 9:13 AM, Mark Miller  wrote:
>>>
>>> Please vote to release the following artifacts:
>>>
>>> http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/
>>>
>>> Here is my +1.
>>>
>>> --
>>> - Mark
>>
>>
>




[jira] [Commented] (SOLR-5594) Enable using extended field types with prefix queries for non-default encoded strings

2014-01-17 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875282#comment-13875282
 ] 

Ryan Ernst commented on SOLR-5594:
--

{quote}
Fixed the reformatting; however, as things have moved (and there's been a level 
change: new inner classes, etc.) it still looks a little tricky, but yes, it's no 
longer just reformatted code in the patch.
{quote}

Why are any of the changes in SimpleQParser necessary, except for changing 
{{new PrefixQuery(...)}} to {{sf.getType().getPrefixQuery()}}?  It looks like 
all the other changes there are unnecessary structural changes.  Since the 
{{IndexSchema}} is already available, it should be just a couple lines changed 
(180/183).

Why not just make the necessary changes for this issue, and open another jira 
if you feel static inner classes would be better here (although I don't see why 
two are necessary)?

> Enable using extended field types with prefix queries for non-default encoded 
> strings
> -
>
> Key: SOLR-5594
> URL: https://issues.apache.org/jira/browse/SOLR-5594
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers, Schema and Analysis
>Affects Versions: 4.6
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-5594-branch_4x.patch, SOLR-5594.patch, 
> SOLR-5594.patch, SOLR-5594.patch, SOLR-5594.patch
>
>
> Enable users to use prefix queries with custom field types that use 
> non-default encoding/decoding for queries more easily, e.g. having a custom 
> field work with base64-encoded query strings.
> Currently, the workaround for it is to have the override at getRewriteMethod 
> level. Perhaps having the prefixQuery also use the calling FieldType's 
> readableToIndexed method would work better.






Re: [VOTE] Release Lucene/Solr 4.6.1 RC1

2014-01-17 Thread Michael McCandless
+1

SUCCESS! [0:48:35.505732]

Mike McCandless

http://blog.mikemccandless.com


On Fri, Jan 17, 2014 at 3:02 PM, Robert Muir  wrote:
> +1
>
> SUCCESS! [1:08:56.292897]
>
>
>
> On Fri, Jan 17, 2014 at 9:13 AM, Mark Miller  wrote:
>>
>> Please vote to release the following artifacts:
>>
>> http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/
>>
>> Here is my +1.
>>
>> --
>> - Mark
>
>




[jira] [Updated] (SOLR-5641) REST API to modify request handlers

2014-01-17 Thread Willy Solaligue (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Willy Solaligue updated SOLR-5641:
--

Attachment: SOLR-5641.patch

> REST API to modify request handlers
> ---
>
> Key: SOLR-5641
> URL: https://issues.apache.org/jira/browse/SOLR-5641
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Willy Solaligue
> Attachments: SOLR-5641.patch
>
>
> There should be a REST API that allows modifying request handlers.






[jira] [Created] (SOLR-5641) REST API to modify request handlers

2014-01-17 Thread Willy Solaligue (JIRA)
Willy Solaligue created SOLR-5641:
-

 Summary: REST API to modify request handlers
 Key: SOLR-5641
 URL: https://issues.apache.org/jira/browse/SOLR-5641
 Project: Solr
  Issue Type: Sub-task
Reporter: Willy Solaligue


There should be a REST API that allows modifying request handlers.






[jira] [Commented] (SOLR-5594) Enable using extended field types with prefix queries for non-default encoded strings

2014-01-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875225#comment-13875225
 ] 

Hoss Man commented on SOLR-5594:


Anshum: looks pretty good ... a few requests...

* SolrQueryParserBase.newPrefixQuery shouldn't be removed completely (it's 
protected, subclasses might be using it) ... just update it to call the new 
method on the FieldType.
* can you add some javadocs to the test classes (particularly the new field 
types) with an explanation of what they do and why they are special
* can we change the field names in your test schema (currently "customfield" 
and "customfield2") to names that make it clearer what they do (ie: "swap_foo_bar" 
and "int_prefix_as_range") so the (expected) wonky behavior is less confusing 
when reading the test?
* the tests should be updated to show that using {{\{!prefix\}}} and 
{{\{!simple\}}} also works with your custom field types
* ideally we want to show that {{\{!prefix\}}}, {{\{!simple\}}}, and 
{{\{!lucene\}}} all produce queries that are _equals()_ when using your custom 
field types (take a look at QueryEqualityTest for inspiration)



> Enable using extended field types with prefix queries for non-default encoded 
> strings
> -
>
> Key: SOLR-5594
> URL: https://issues.apache.org/jira/browse/SOLR-5594
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers, Schema and Analysis
>Affects Versions: 4.6
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-5594-branch_4x.patch, SOLR-5594.patch, 
> SOLR-5594.patch, SOLR-5594.patch, SOLR-5594.patch
>
>
> Enable users to use prefix queries with custom field types that use 
> non-default encoding/decoding for queries more easily, e.g. having a custom 
> field work with base64-encoded query strings.
> Currently, the workaround for it is to have the override at getRewriteMethod 
> level. Perhaps having the prefixQuery also use the calling FieldType's 
> readableToIndexed method would work better.






Re: [VOTE] Release Lucene/Solr 4.6.1 RC1

2014-01-17 Thread Robert Muir
+1

SUCCESS! [1:08:56.292897]



On Fri, Jan 17, 2014 at 9:13 AM, Mark Miller  wrote:

> Please vote to release the following artifacts:
>
> http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/
>
> Here is my +1.
>
> --
> - Mark
>


[jira] [Commented] (SOLR-5633) HttpShardHandlerFactory should make its http client available to subclasses

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875200#comment-13875200
 ] 

ASF subversion and git services commented on SOLR-5633:
---

Commit 1559238 from [~rjernst] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1559238 ]

SOLR-5633: HttpShardHandlerFactory should make its http client available to 
subclasses

> HttpShardHandlerFactory should make its http client available to subclasses
> ---
>
> Key: SOLR-5633
> URL: https://issues.apache.org/jira/browse/SOLR-5633
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Attachments: SOLR-5633.patch
>
>
> To save on doubling up on resources, the SHF should have its http client 
> protected (so subclasses can do things like custom status checks).
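The gist of the change can be sketched with plain Java (the class names below are illustrative stand-ins, not the actual HttpShardHandlerFactory or its client):

```java
// Minimal stand-in for an HTTP client; counts requests so the sketch can
// show that the subclass reuses the factory's client rather than its own.
class FakeHttpClient {
    int requests = 0;
    String get(String url) {
        requests++;
        return "200 OK";
    }
}

// The factory exposes its client as a protected field, mirroring the
// "available to subclasses" change described above.
class ShardHandlerFactory {
    protected final FakeHttpClient httpClient = new FakeHttpClient();
}

// A subclass piggybacks on the shared client for a custom status check
// instead of doubling up on resources with a second client.
class StatusCheckingFactory extends ShardHandlerFactory {
    String checkStatus(String nodeUrl) {
        return httpClient.get(nodeUrl + "/admin/ping");
    }
}

public class SharedClientSketch {
    public static void main(String[] args) {
        StatusCheckingFactory factory = new StatusCheckingFactory();
        System.out.println(factory.checkStatus("http://node1:8983/solr"));
        System.out.println("requests on shared client: " + factory.httpClient.requests);
    }
}
```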






[jira] [Assigned] (SOLR-5633) HttpShardHandlerFactory should make its http client available to subclasses

2014-01-17 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst reassigned SOLR-5633:


Assignee: Ryan Ernst

> HttpShardHandlerFactory should make its http client available to subclasses
> ---
>
> Key: SOLR-5633
> URL: https://issues.apache.org/jira/browse/SOLR-5633
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Attachments: SOLR-5633.patch
>
>
> To save on doubling up on resources, the SHF should have its http client 
> protected (so subclasses can do things like custom status checks).






[jira] [Commented] (SOLR-5633) HttpShardHandlerFactory should make its http client available to subclasses

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875184#comment-13875184
 ] 

ASF subversion and git services commented on SOLR-5633:
---

Commit 1559236 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1559236 ]

SOLR-5633: HttpShardHandlerFactory should make its http client available to 
subclasses

> HttpShardHandlerFactory should make its http client available to subclasses
> ---
>
> Key: SOLR-5633
> URL: https://issues.apache.org/jira/browse/SOLR-5633
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: SOLR-5633.patch
>
>
> To save on doubling up on resources, the SHF should have its http client 
> protected (so subclasses can do things like custom status checks).






[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-01-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875045#comment-13875045
 ] 

Hoss Man commented on SOLR-5623:


bq. In general what's happening here is not happening inside IndexWriter, it's 
happening in the analysis chain. I think Solr or other applications are the 
right place to add additional debugging information (such as a unique ID for 
the document) because only they have the additional context to know what is 
useful to get to the bottom of it.

Agreed ... but it would be nice if (in addition to application concepts like 
the uniqueKey of the doc) the exceptions could be annotated with information 
like which field name was associated with the runtime exception -- I don't think 
there's currently any way for code "above" IndexWriter to do that, is there?  
The flip side though is that having this kind of logic in IndexWriter (or 
DocInverterPerField, or wherever under the covers) to wrap any arbitrary 
Runtime exception (maybe IllegalArgumentEx, maybe ArrayOutOfBounds, etc...) 
with some kind of generic LuceneAnalysisRuntimeException that contains a 
"getField" method seems like a really bad idea since it would hide (via 
wrapping) the true underlying exception type.  We do this a lot in Solr since 
ultimately we're always going to need to propagate a SolrException with a 
status code to the remote client -- but i don't think anything else in Lucene 
Core wraps exceptions like this.

I don't know of any sane way to deal with this kind of problem -- just pointing 
out that knowing the field name that caused the problem seems just as important 
as knowing the uniqueKey (in case anybody else has any good ideas).

In any case, we can make progress on the fairly easy part: annotating with the 
uniqueKey in Solr...
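The wrapping trade-off above can be sketched in a few lines of plain Java (class and method names here are hypothetical, not Lucene's or Solr's actual API):

```java
// Hypothetical wrapper that annotates an analysis-time RuntimeException
// with document context (uniqueKey and field name) while keeping the
// original exception reachable via getCause().
class AnalysisContextException extends RuntimeException {
    AnalysisContextException(String docId, String field, RuntimeException cause) {
        super("Exception analyzing field '" + field + "' of doc '" + docId + "'", cause);
    }
}

public class WrapSketch {
    // Stand-in for an analysis chain that blows up on some input.
    static void analyze(String value) {
        throw new IllegalArgumentException("bad token: " + value);
    }

    public static void main(String[] args) {
        try {
            try {
                analyze("a320");
            } catch (RuntimeException e) {
                // Wrapping hides the concrete type at the catch site...
                throw new AnalysisContextException("doc42", "content", e);
            }
        } catch (AnalysisContextException e) {
            System.out.println(e.getMessage());
            // ...but the true underlying type is still recoverable.
            System.out.println(e.getCause().getClass().getSimpleName());
        }
    }
}
```

This is exactly the hiding-by-wrapping concern raised above: callers now catch the wrapper type, and only reach the real exception through getCause().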

Benson, comments on your current pull request:

* there are some cut/paste comments/javadocs in the test configs/classes that 
need to be corrected
* considering things like SOLR-4992, i don't think adding a "catch (Throwable 
t)" is a good idea ... i would constrain this to RuntimeException
* take a look at AddUpdateCommand.getPrintableId
* your try/catch/wrap block is only around one code path that calls 
IndexWriter.updateDocument\* ... there are others. The most 
straightforward/safe approach would probably be to refactor the entire 
{{addDoc(AddUpdateCommand)}} method along the lines of...{code}
  public int addDoc(AddUpdateCommand cmd) throws IOException {
    try { 
      return addDocInternal(cmd);
    } catch (...) {
       ...
    }
  }
  // nocommit: javadocs as to purpose
  private int addDocInternal(AddUpdateCommand cmd) throws IOException {
    ...
  }
{code}
* this recipe is a bit cleaner for the type of assertion you are doing...{code}
  try {
doSomethingThatShouldThrowAnException();
fail("didn't get expected exception");
  } catch (ExpectedExceptionType e) {
assertStuffAbout(e);
  }
{code}
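A self-contained, runnable version of that recipe (the validated method and its name are hypothetical; plain statements stand in for JUnit's fail/assert):

```java
public class ExpectedExceptionSketch {
    // Hypothetical method under test: rejects non-positive numbers.
    static int parsePositive(String s) {
        int n = Integer.parseInt(s);
        if (n <= 0) {
            throw new IllegalArgumentException("not positive: " + n);
        }
        return n;
    }

    public static void main(String[] args) {
        try {
            parsePositive("-5");
            // Reaching this line means the expected exception never fired.
            throw new AssertionError("didn't get expected exception");
        } catch (IllegalArgumentException e) {
            // Assert "stuff about" the exception, per the recipe.
            if (!e.getMessage().contains("-5")) {
                throw new AssertionError("unexpected message: " + e.getMessage());
            }
        }
        System.out.println("expected exception caught and verified");
    }
}
```

The fail-inside-try line is what distinguishes this pattern from a bare catch: a silently non-throwing call is reported as a failure instead of passing.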

> Better diagnosis of RuntimeExceptions in analysis
> -
>
> Key: SOLR-5623
> URL: https://issues.apache.org/jira/browse/SOLR-5623
> Project: Solr
>  Issue Type: Bug
>Reporter: Benson Margulies
>
> If an analysis component (tokenizer, filter, etc.) gets into a hissy 
> fit and throws a RuntimeException, the resulting log traffic is less than 
> informative, lacking any pointer to the doc under discussion (in the doc 
> case). It would be better if there were a try/catch shortstop that logged 
> this more informatively.






[jira] [Commented] (LUCENE-5399) PagingFieldCollector is very slow with String fields

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874977#comment-13874977
 ] 

ASF subversion and git services commented on LUCENE-5399:
-

Commit 1559196 from [~mikemccand] in branch 'dev/branches/lucene5376'
[ https://svn.apache.org/r1559196 ]

LUCENE-5376, LUCENE-5399: add missingLast support to lucene server

> PagingFieldCollector is very slow with String fields
> 
>
> Key: LUCENE-5399
> URL: https://issues.apache.org/jira/browse/LUCENE-5399
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Robert Muir
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5399.patch, LUCENE-5399.patch, LUCENE-5399.patch, 
> LUCENE-5399.patch, LUCENE-5399.patch, LUCENE-5399.patch, LUCENE-5399.patch, 
> LUCENE-5399.patch, LUCENE-5399.patch
>
>
> PagingFieldCollector (sort comparator) is significantly slower with string 
> fields, because of how its "seen on a previous page" check works: it calls 
> compareDocToValue(int doc, T t) first to check this (it's the only user of 
> this method).
> This is very slow with String, because no ordinals are used: each document 
> must look up the ord, then look up the bytes, then compare bytes.
> I think maybe we should replace this method with an 'after' slot, and just 
> have compareDocToAfter or something.
> Otherwise we could use a hack-patch like the one I will upload (I did this 
> just to test the performance, although tests do pass).
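The cost difference described in the report can be sketched with plain Java (a tiny sorted term dictionary stands in for Lucene's; all names are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class OrdCompareSketch {
    // Sorted term dictionary: ordinal order equals byte order.
    static final List<String> TERM_DICT = Arrays.asList("ant", "bee", "cat");

    // Slow path: resolve each doc's ord to its term bytes, then compare terms.
    static int compareByBytes(int ordA, int ordB) {
        return TERM_DICT.get(ordA).compareTo(TERM_DICT.get(ordB));
    }

    // Fast path: ordinals of a sorted dictionary compare directly,
    // skipping both lookups and the byte-by-byte comparison.
    static int compareByOrd(int ordA, int ordB) {
        return Integer.compare(ordA, ordB);
    }

    public static void main(String[] args) {
        // Both paths agree on the ordering; the ord path avoids the lookups.
        System.out.println(Integer.signum(compareByBytes(0, 2)));
        System.out.println(Integer.signum(compareByOrd(0, 2)));
    }
}
```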






[jira] [Commented] (LUCENE-5376) Add a demo search server

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874976#comment-13874976
 ] 

ASF subversion and git services commented on LUCENE-5376:
-

Commit 1559196 from [~mikemccand] in branch 'dev/branches/lucene5376'
[ https://svn.apache.org/r1559196 ]

LUCENE-5376, LUCENE-5399: add missingLast support to lucene server

> Add a demo search server
> 
>
> Key: LUCENE-5376
> URL: https://issues.apache.org/jira/browse/LUCENE-5376
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: lucene-demo-server.tgz
>
>
> I think it'd be useful to have a "demo" search server for Lucene.
> Rather than being fully featured, like Solr, it would be minimal, just 
> wrapping the existing Lucene modules to show how you can make use of these 
> features in a server setting.
> The purpose is to demonstrate how one can build a minimal search server on 
> top of APIs like SearcherManager, SearcherLifetimeManager, etc.
> This is also useful for finding rough edges / issues in Lucene's APIs that 
> make building a server unnecessarily hard.
> I don't think it should have back compatibility promises (except Lucene's 
> index back compatibility), so it's free to improve as Lucene's APIs change.
> As a starting point, I'll post what I built for the "eating your own dog 
> food" search app for Lucene's & Solr's jira issues 
> http://jirasearch.mikemccandless.com (blog: 
> http://blog.mikemccandless.com/2013/05/eating-dog-food-with-lucene.html ). It 
> uses Netty to expose basic indexing & searching APIs via JSON, but it's very 
> rough (lots of nocommits).






[VOTE] Release Lucene/Solr 4.6.1 RC1

2014-01-17 Thread Mark Miller
Please vote to release the following artifacts:

http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/

Here is my +1.

--
- Mark


[jira] [Updated] (SOLR-5610) Support cluster-wide properties with an API called CLUSTERPROP

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5610:


Issue Type: Sub-task  (was: Bug)
Parent: SOLR-5128

> Support cluster-wide properties with an API called CLUSTERPROP
> --
>
> Key: SOLR-5610
> URL: https://issues.apache.org/jira/browse/SOLR-5610
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5610.patch
>
>
> Add a collection admin API for cluster wide property management
> the new API would create an entry in the root as 
> /cluster-props.json
> {code:javascript}
> {
> "prop": "val"
> }
> {code}
> The API would work as
> /command=clusterprop&name=propName&value=propVal
> there will be a set of well-known properties which can be set or unset with 
> this command
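For illustration, a client-side helper that builds the proposed command string (the endpoint shape and parameter names are taken verbatim from the proposal above and may not match the final API):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ClusterPropSketch {
    // Builds the clusterprop command URL from the proposal above;
    // command/name/value follow the proposal text, not a released API.
    static String clusterPropCommand(String base, String name, String value) {
        return base + "/command=clusterprop"
                + "&name=" + URLEncoder.encode(name, StandardCharsets.UTF_8)
                + "&value=" + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(clusterPropCommand(
                "http://localhost:8983/solr", "urlScheme", "https"));
    }
}
```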






[jira] [Commented] (SOLR-5609) Don't let cores create slices/named replicas

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874900#comment-13874900
 ] 

Shalin Shekhar Mangar commented on SOLR-5609:
-

We should really be discussing this under SOLR-5096.

bq. Having legacyCloudMode (or whatever it's named) = false should allow the 
cluster to disable the ability for cores to create collections or remove them 
and turn on the features that let the cluster use zk as the truth.

+1 in general. This also brings up the question of which features are a must 
to use zk as the truth. I don't think we can have users invoke core admin 
CREATE and UNLOAD commands directly. Instead they should use collection APIs 
such as addReplica and deleteReplica exclusively. These APIs will invoke 
overseer commands (to assign a coreNodeName) and then invoke the core admin 
APIs.

bq. The other thing to note is that we don't have to have this new mode do 
everything that makes sense to do to make zk the truth initially. Its ability 
to make zk the truth can improve over time. We won't be limited by the history 
that comes with the legacy mode.

I think at a minimum we need to implement this mode plus SOLR-5130 (addReplica 
API) before we release this. The modifyCollection APIs and other things can 
come in later.

What do you think?

> Don't let cores create slices/named replicas
> 
>
> Key: SOLR-5609
> URL: https://issues.apache.org/jira/browse/SOLR-5609
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
> Fix For: 5.0, 4.7
>
>
> In SolrCloud, it is possible for a core to come up on any node and register 
> itself with an arbitrary slice/coreNodeName. This is a legacy requirement, and 
> we would like to make it possible only for the Overseer to initiate creation of 
> slices/replicas.
> We plan to introduce cluster level properties at the top level
> /cluster-props.json
> {code:javascript}
> {
> "noSliceOrReplicaByCores": true
> }
> {code}
> If this property is set to true, cores won't be able to send STATE commands 
> with an unknown slice/coreNodeName. Those commands will fail at the Overseer. 
> This is useful for SOLR-5310 / SOLR-5311, where a core/replica is deleted by a 
> command and then comes up later and tries to create a replica/slice.






[jira] [Updated] (SOLR-5130) Implement addReplica Collections API

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5130:


Description: 
addReplica API will add a node to a given collection/shard.

Parameters:
# replica
# collection
# shard (optional)
# _route_ (optional) (see SOLR-4221)

If shard or _route_ is not specified then physical shards will be created on 
the node for the given collection using the persisted values of 
maxShardsPerNode and replicationFactor.

  was:
addNode API will add a node to a given collection/shard.

Parameters:
# node
# collection
# shard (optional)
# _route_ (optional) (see SOLR-4221)

If shard or _route_ is not specified then physical shards will be created on 
the node for the given collection using the persisted values of 
maxShardsPerNode and replicationFactor.


> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> addReplica API will add a node to a given collection/shard.
> Parameters:
> # replica
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Updated] (SOLR-5130) Implement addReplica Collections API

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5130:


Summary: Implement addReplica Collections API  (was: Implement addNode 
Collections API)

> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> addNode API will add a node to a given collection/shard.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-01-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874881#comment-13874881
 ] 

Mark Miller commented on SOLR-5476:
---

Have you actually run into issues with this? Seems like premature 
optimization...the overseer simply fires http commands and does simple zk 
stuff...you really think 'special hardware' overseers are going to matter and 
not just complicate the code?

> Overseer Role for nodes
> ---
>
> Key: SOLR-5476
> URL: https://issues.apache.org/jira/browse/SOLR-5476
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
> SOLR-5476.patch, SOLR-5476.patch
>
>
> In a very large cluster the Overseer is likely to be overloaded. If the same 
> node is serving a few other shards, it can lead to the Overseer getting slowed 
> down due to GC pauses, or simply too much work. If the cluster is really 
> large, it is possible to dedicate high-end h/w for Overseers.
> It works as a new collection admin command
> command=addrole&role=overseer&node=192.168.1.5:8983_solr
> This results in the creation of an entry in /roles.json in ZK, which would 
> look like the following
> {code:javascript}
> {
> "overseer" : ["192.168.1.5:8983_solr"]
> }
> {code}
> If a node is designated for overseer, it gets preference over others when 
> overseer election takes place. If no designated servers are available, another 
> random node becomes the Overseer.
> Later on, if one of the designated nodes is brought up, it would take over 
> the Overseer role from the current Overseer to become the Overseer of the 
> system.






[jira] [Updated] (SOLR-5128) Improve SolrCloud cluster management capabilities

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5128:


Description: 
This is a master issue to track things that we need to do to make cluster 
management easier.

# Introduce a new mode to disallow creation of collections by configuration and 
use the collection API instead
# New addReplica API to add a replica to a collection/shard
# New modifyCollection API to change replicationFactor, maxShardsPerNode 
configurations
# Update Solr Admin UI for the new APIs
# Longer term, optionally enforce the replicationFactor to add/remove nodes 
automatically

Some features have already been committed as part of other issues:
# SOLR-4693 added a deleteShard API
# SOLR-5006 added a createShard API
# SOLR-4808 persisted replicationFactor and maxShardsPerNode parameters in 
cluster state
# SOLR-5310 added a deleteReplica API

  was:
This is a master issue to track things that we need to do to make cluster 
management easier.

# Introduce a new mode to disallow creation of collections by configuration and 
use the collection API instead
# New addNode/removeNode APIs to add/remove nodes to/from a collection/shard
# New modifyCollection API to change replicationFactor, maxShardsPerNode 
configurations
# Update Solr Admin UI for the new APIs
# Longer term, optionally enforce the replicationFactor to add/remove nodes 
automatically

Some features have already been committed as part of other issues:
# SOLR-4693 added a deleteShard API
# SOLR-5006 added a createShard API
# SOLR-4808 persisted replicationFactor and maxShardsPerNode parameters in 
cluster state


> Improve SolrCloud cluster management capabilities
> -
>
> Key: SOLR-5128
> URL: https://issues.apache.org/jira/browse/SOLR-5128
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> This is a master issue to track things that we need to do to make cluster 
> management easier.
> # Introduce a new mode to disallow creation of collections by configuration 
> and use the collection API instead
> # New addReplica API to add a replica to a collection/shard
> # New modifyCollection API to change replicationFactor, maxShardsPerNode 
> configurations
> # Update Solr Admin UI for the new APIs
> # Longer term, optionally enforce the replicationFactor to add/remove nodes 
> automatically
> Some features have already been committed as part of other issues:
> # SOLR-4693 added a deleteShard API
> # SOLR-5006 added a createShard API
> # SOLR-4808 persisted replicationFactor and maxShardsPerNode parameters in 
> cluster state
> # SOLR-5310 added a deleteReplica API






[jira] [Comment Edited] (SOLR-5131) Implement removeNode Collections API

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874842#comment-13874842
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5131 at 1/17/14 3:15 PM:
--

This has been implemented by SOLR-5310 by the name of deleteReplica instead of 
removeNode so this issue is no longer required.


was (Author: shalinmangar):
This has been implemented by SOLR-5310 by the name of removeReplica instead of 
removeNode so this issue is no longer required.

> Implement removeNode Collections API
> 
>
> Key: SOLR-5131
> URL: https://issues.apache.org/jira/browse/SOLR-5131
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> Add a removeNode Collections API to remove the given node from the collection 
> i.e. all indexes belonging to the specified collection/shard will be unloaded 
> from the given node.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional)






[jira] [Resolved] (SOLR-5131) Implement removeNode Collections API

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5131.
-

Resolution: Duplicate

This has been implemented by SOLR-5310 by the name of removeReplica instead of 
removeNode so this issue is no longer required.

> Implement removeNode Collections API
> 
>
> Key: SOLR-5131
> URL: https://issues.apache.org/jira/browse/SOLR-5131
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
>
> Add a removeNode Collections API to remove the given node from the collection, 
> i.e. all indexes belonging to the specified collection/shard will be unloaded 
> from the given node.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional)






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_51) - Build # 9027 - Still Failing!

2014-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9027/
Java: 32bit/jdk1.7.0_51 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 50407 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:398: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:187: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/jackson-core-asl-1.7.4.jar.sha1
* ./solr/licenses/jackson-mapper-asl-1.7.4.jar.sha1
* ./solr/licenses/jersey-core-1.16.jar.sha1

Total time: 58 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_51 -client -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-01-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874841#comment-13874841
 ] 

Mark Miller commented on SOLR-4260:
---

Thanks Shawn - fixed.

> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing it to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return a different 
> numDocs.
> We're running a 10-node test cluster with 10 shards and a replication factor 
> of two, and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.
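A minimal consistency check over per-core counts, a diagnostic sketch rather than anything from the issue itself, would flag the shards whose leader and replica disagree:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class NumDocsCheck {
    // Returns the names of shards whose leader and replica report a different
    // numDocs. The input maps shard name -> {leaderNumDocs, replicaNumDocs}.
    static List<String> inconsistentShards(Map<String, long[]> numDocsByShard) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, long[]> e : numDocsByShard.entrySet()) {
            long[] counts = e.getValue();
            if (counts[0] != counts[1]) {
                bad.add(e.getKey());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Map<String, long[]> m = new LinkedHashMap<>();
        m.put("shard1", new long[]{3300, 3300});
        m.put("shard2", new long[]{3300, 3285}); // small deviation, like the report
        System.out.println(inconsistentShards(m)); // [shard2]
    }
}
```

The numbers (a 10-20 document deviation) would only show up in such a check while no indexing is in flight, which matches how the report was taken.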






[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874839#comment-13874839
 ] 

ASF subversion and git services commented on SOLR-4260:
---

Commit 1559125 from [~markrmil...@gmail.com] in branch 
'dev/branches/lucene_solr_4_6'
[ https://svn.apache.org/r1559125 ]

SOLR-4260: Bring back import still used on 4.6 branch.

> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing it to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return a different 
> numDocs.
> We're running a 10-node test cluster with 10 shards and a replication factor 
> of two, and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.






[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-01-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874835#comment-13874835
 ] 

Joel Bernstein commented on SOLR-4260:
--

OK, just had two clean test runs with trunk. The NPE is no longer occurring and 
the leaders and replicas are in sync. I'm running through some more stress tests 
this morning, but so far so good.



> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing it to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return a different 
> numDocs.
> We're running a 10-node test cluster with 10 shards and a replication factor 
> of two, and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.






[jira] [Commented] (SOLR-5213) collections?action=SPLITSHARD parent vs. sub-shards numDocs

2014-01-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874834#comment-13874834
 ] 

Shalin Shekhar Mangar commented on SOLR-5213:
-

Yes, this can go in. I'll commit it.

> collections?action=SPLITSHARD parent vs. sub-shards numDocs
> ---
>
> Key: SOLR-5213
> URL: https://issues.apache.org/jira/browse/SOLR-5213
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5213.patch
>
>
> The problem we saw was that splitting a shard took a long time, and at the end 
> of it the sub-shards contained fewer documents than the original shard.
> The root cause was eventually tracked down to the disappearing documents not 
> falling into the hash ranges of the sub-shards.
> Could SolrIndexSplitter report per-segment numDocs for the parent and 
> sub-shards, with at least a warning logged for any discrepancies (documents 
> falling into none of the sub-shards or documents falling into several 
> sub-shards)?
> Additionally, could a case be made for erroring out when discrepancies are 
> detected, i.e. not proceeding with the shard split? Either always error, or 
> add an optional verifyNumDocs=true/false parameter for the SPLITSHARD 
> action.
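The proposed verification amounts to counting, for each document hash, how many sub-shard ranges claim it: a healthy split covers every document exactly once. A sketch under simplified assumptions (plain integer hashes and inclusive ranges; Solr's real hash ranges are 32-bit and the splitter works per segment):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SplitVerifier {
    static final class Range {
        final int min, max; // inclusive bounds
        Range(int min, int max) { this.min = min; this.max = max; }
        boolean contains(int hash) { return hash >= min && hash <= max; }
    }

    // For each document hash, count how many sub-shard ranges contain it.
    // 0 means the document would disappear; >1 means it would be duplicated.
    static Map<Integer, Integer> coverage(int[] hashes, List<Range> subShards) {
        Map<Integer, Integer> hits = new LinkedHashMap<>();
        for (int h : hashes) {
            int n = 0;
            for (Range r : subShards) {
                if (r.contains(h)) n++;
            }
            hits.put(h, n);
        }
        return hits;
    }

    public static void main(String[] args) {
        List<Range> subs = Arrays.asList(new Range(0, 4), new Range(5, 9));
        // Hash 12 falls outside both sub-shards: exactly the "disappearing
        // documents" symptom described in the issue.
        System.out.println(coverage(new int[]{3, 5, 12}, subs)); // {3=1, 5=1, 12=0}
    }
}
```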






[jira] [Updated] (LUCENE-5356) more generic lucene-morfologik integration

2014-01-17 Thread Michal Hlavac (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Hlavac updated LUCENE-5356:
--

Attachment:  LUCENE-5356.patch

This patch contains proposals from previous issue comments.

> more generic lucene-morfologik integration
> --
>
> Key: LUCENE-5356
> URL: https://issues.apache.org/jira/browse/LUCENE-5356
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.6
>Reporter: Michal Hlavac
>Assignee: Dawid Weiss
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 5.0, 4.7
>
> Attachments:  LUCENE-5356.patch, LUCENE-5356.patch, LUCENE-5356.patch
>
>
> I have a small proposal for the morfologik lucene module. The current module is 
> tightly coupled to the Polish DICTIONARY enumeration.
> But other people (like me) can build their own FSA dictionaries and use them 
> with Lucene. 
> You can find the proposal in the attachment, along with example usage in an 
> analyzer (SlovakLemmaAnalyzer).
> It uses the dictionary property as a String resource from the classpath, not an 
> enumeration.
> One change is that the dictionary variable must be set in MorfologikFilterFactory 
> (no default value).






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874785#comment-13874785
 ] 

ASF subversion and git services commented on SOLR-5476:
---

Commit 1559100 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1559100 ]

SOLR-5476 logging added

> Overseer Role for nodes
> ---
>
> Key: SOLR-5476
> URL: https://issues.apache.org/jira/browse/SOLR-5476
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
> SOLR-5476.patch, SOLR-5476.patch
>
>
> In a very large cluster the Overseer is likely to be overloaded. If the same 
> node is serving a few other shards, it can lead to the Overseer getting slowed 
> down due to GC pauses, or simply too much work. If the cluster is 
> really large, it is possible to dedicate high-end hardware to Overseers.
> It works as a new collection admin command:
> command=addrole&role=overseer&node=192.168.1.5:8983_solr
> This results in the creation of an entry in /roles.json in ZK which would 
> look like the following:
> {code:javascript}
> {
> "overseer" : ["192.168.1.5:8983_solr"]
> }
> {code}
> If a node is designated for overseer it gets preference over others when an 
> overseer election takes place. If no designated servers are available, another 
> random node becomes the Overseer.
> Later on, if one of the designated nodes is brought up, it would take over 
> the Overseer role from the current Overseer to become the Overseer of the 
> system.
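The preference rule described above can be sketched in a few lines. This is an illustrative reduction, not the patch's actual election code (which works over ZooKeeper election nodes):

```java
import java.util.Arrays;
import java.util.List;

public class OverseerPreference {
    // Given the designated overseer nodes (from /roles.json) and the currently
    // live nodes, prefer a live designated node; otherwise fall back to any
    // live node, mirroring the "another random node" behavior in the issue.
    static String electOverseer(List<String> designated, List<String> live) {
        for (String node : designated) {
            if (live.contains(node)) {
                return node;
            }
        }
        return live.isEmpty() ? null : live.get(0);
    }

    public static void main(String[] args) {
        List<String> designated = Arrays.asList("192.168.1.5:8983_solr");
        List<String> live = Arrays.asList("192.168.1.7:8983_solr", "192.168.1.5:8983_solr");
        System.out.println(electOverseer(designated, live)); // the designated node wins
    }
}
```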






[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-01-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874735#comment-13874735
 ] 

Erick Erickson commented on SOLR-5488:
--

[~sbower] I may have some cycles over the next couple of weeks to see if I can 
understand what's going on. On the surface, it looks like a race condition.

Before I poke around blindly in the code, can you point me to where you'd start 
trying to diagnose?

Thanks...


> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 4.7
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x






Re: (SOLR-5476) Overseer Role for nodes

2014-01-17 Thread Michael McCandless
On Fri, Jan 17, 2014 at 3:09 AM, ASF subversion and git services
(JIRA)  wrote:

> PLEASE RUN "ant precommit" (root) or alternatively the faster "ant 
> check-forbidden-apis" (in your module folder) before committing!

And also try to watch out for build failures for a few hours after you
commit ...

Mike McCandless

http://blog.mikemccandless.com




[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-01-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874686#comment-13874686
 ] 

Mikhail Khludnev commented on SOLR-4260:


What a great hunt, guys! Thanks a lot!

> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing it to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return a different 
> numDocs.
> We're running a 10-node test cluster with 10 shards and a replication factor 
> of two, and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.






[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2014-01-17 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874667#comment-13874667
 ] 

Markus Jelsma commented on SOLR-4260:
-

I believe the whole building now knows I cannot reproduce the problem!

> Inconsistent numDocs between leader and replica
> ---
>
> Key: SOLR-4260
> URL: https://issues.apache.org/jira/browse/SOLR-4260
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
> Environment: 5.0.0.2013.01.04.15.31.51
>Reporter: Markus Jelsma
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: 192.168.20.102-replica1.png, 
> 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
> demo_shard1_replicas_out_of_sync.tgz
>
>
> After wiping all cores and reindexing some 3.3 million docs from Nutch using 
> CloudSolrServer we see inconsistencies between the leader and replica for 
> some shards.
> Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
> a small deviation in the number of documents. The leader and slave deviate 
> by roughly 10-20 documents, not more.
> Results hopping ranks in the result set for identical queries got my 
> attention: there were small IDF differences for exactly the same record, 
> causing it to shift positions in the result set. During those tests no 
> records were indexed. Consecutive catch-all queries also return a different 
> numDocs.
> We're running a 10-node test cluster with 10 shards and a replication factor 
> of two, and frequently reindex using a fresh build from trunk. I've not seen 
> this issue for quite some time until a few days ago.






Prefix wildcard cost.

2014-01-17 Thread Dawid Weiss
From the ES documentation at:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax

"Allowing a wildcard at the beginning of a word (eg "*ing") is
particularly heavy, because all terms in the index need to be
examined, just in case they match. Leading wildcards can be disabled
by setting allow_leading_wildcard to false."

I wonder how much of a problem this is in practice (prefix matches are
cheap on a reversed FST of all terms). Perhaps a good topic for a
student project :)

Dawid
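The reversed-terms idea can be sketched with an ordinary sorted set standing in for the FST (purely illustrative; Solr's ReversedWildcardFilter takes a similar approach, and a real implementation would use the terms dictionary/FST machinery):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class ReversedTermIndex {
    // Sorted set of reversed terms, standing in for a reversed FST of all terms.
    private final TreeSet<String> reversed = new TreeSet<>();

    void add(String term) {
        reversed.add(new StringBuilder(term).reverse().toString());
    }

    // A leading wildcard like "*ing" becomes a cheap prefix scan for "gni"
    // over the reversed terms, instead of examining every term in the index.
    List<String> leadingWildcard(String suffix) {
        String prefix = new StringBuilder(suffix).reverse().toString();
        List<String> out = new ArrayList<>();
        for (String r : reversed.tailSet(prefix)) {
            if (!r.startsWith(prefix)) break; // past the matching prefix block
            out.add(new StringBuilder(r).reverse().toString());
        }
        return out;
    }

    public static void main(String[] args) {
        ReversedTermIndex idx = new ReversedTermIndex();
        idx.add("running");
        idx.add("walking");
        idx.add("walked");
        System.out.println(idx.leadingWildcard("ing")); // [walking, running]
    }
}
```

The trade-off is index size (the reversed terms must be stored alongside the normal ones), which is why it would make a nice measurement project.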




[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1233 - Still Failing!

2014-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1233/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 10485 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20140117_102023_419.syserr
   [junit4] >>> JVM J0: stderr (verbatim) 
   [junit4] java(206,0x145303000) malloc: *** error for object 0x1452f1f10: 
pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4] java(206,0x145509000) malloc: *** error for object 0x1454f8050: 
pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4] <<< JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=DAF6D0F76AF0C3F7 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -Dfile.encoding=ISO-8859-1 -classpath 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.0.13.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queries/lucene-queries-5.0-SNAPSHOT.ja
r:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queryparser/lucene-queryparser-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/join/lucene-join-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/antlr-runtime-3.5.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/asm-4.1.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/asm-commons-4.1.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-cli-1.2.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core

[jira] [Created] (SOLR-5640) Use Solrj to enable/disable replication and enable/disable polling

2014-01-17 Thread ESSOUSSI Jamel (JIRA)
ESSOUSSI Jamel created SOLR-5640:


 Summary: Use Solrj to enable/disable replication and 
enable/disable polling
 Key: SOLR-5640
 URL: https://issues.apache.org/jira/browse/SOLR-5640
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: ESSOUSSI Jamel
Priority: Minor
 Fix For: 5.0, 4.7, 4.6.1


Add the possibility to enable/disable replication and enable/disable polling 
using SolrJ.
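The ReplicationHandler already exposes these operations as HTTP commands (enablereplication/disablereplication on the master, enablepoll/disablepoll on the slave), so a SolrJ convenience could be a thin wrapper that issues those requests. A hypothetical sketch of what such a wrapper would send:

```java
// Hypothetical helper showing the request a SolrJ convenience method would
// issue. The command names mirror the existing ReplicationHandler commands;
// the class itself is illustrative and not part of SolrJ.
public class ReplicationControl {
    enum Command { ENABLEREPLICATION, DISABLEREPLICATION, ENABLEPOLL, DISABLEPOLL }

    static String requestUrl(String coreBaseUrl, Command cmd) {
        return coreBaseUrl + "/replication?command="
                + cmd.name().toLowerCase(java.util.Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(requestUrl("http://localhost:8983/solr/core1",
                Command.DISABLEPOLL));
    }
}
```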






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b123) - Build # 9125 - Still Failing!

2014-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9125/
Java: 64bit/jdk1.8.0-ea-b123 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.testArgsParserHelp

Error Message:
Conversion = '๑'

Stack Trace:
java.util.UnknownFormatConversionException: Conversion = '๑'
at 
__randomizedtesting.SeedInfo.seed([736183204E928FCC:EFB3295709FCE381]:0)
at java.util.Formatter.checkText(Formatter.java:2579)
at java.util.Formatter.parse(Formatter.java:2555)
at java.util.Formatter.format(Formatter.java:2501)
at java.io.PrintWriter.format(PrintWriter.java:905)
at 
net.sourceforge.argparse4j.helper.TextHelper.printHelp(TextHelper.java:206)
at 
net.sourceforge.argparse4j.internal.ArgumentImpl.printHelp(ArgumentImpl.java:247)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.printArgumentHelp(ArgumentParserImpl.java:253)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.printHelp(ArgumentParserImpl.java:279)
at 
org.apache.solr.hadoop.MapReduceIndexerTool$MyArgumentParser$1.run(MapReduceIndexerTool.java:187)
at 
net.sourceforge.argparse4j.internal.ArgumentImpl.run(ArgumentImpl.java:425)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.processArg(ArgumentParserImpl.java:913)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:810)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:683)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:580)
at 
net.sourceforge.argparse4j.internal.ArgumentParserImpl.parseArgs(ArgumentParserImpl.java:573)
at 
org.apache.solr.hadoop.MapReduceIndexerTool$MyArgumentParser.parseArgs(MapReduceIndexerTool.java:505)
at 
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.testArgsParserHelp(MapReduceIndexerToolArgumentParserTest.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at
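The exception suggests the argparse4j help text ended up containing '%' followed by a non-ASCII digit, here the Thai numeral '๑' (U+0E51), plausibly injected by the randomized Thai locale; java.util.Formatter only accepts its fixed set of ASCII conversion characters after '%'. A minimal reproduction sketch (a hypothetical helper, not taken from the test):

```java
// Demonstrates the failure mode: Formatter throws for '%' followed by a
// character that is not a valid conversion, e.g. the Thai digit '\u0E51'.
public class FormatConversionDemo {
    static boolean throwsUnknownConversion(String fmt, Object... args) {
        try {
            String.format(fmt, args);
            return false;
        } catch (java.util.UnknownFormatConversionException e) {
            return true; // message reads like: Conversion = '๑'
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsUnknownConversion("%\u0E51")); // true
        System.out.println(throwsUnknownConversion("%%"));      // false
        System.out.println(throwsUnknownConversion("%d", 1));   // false
    }
}
```

The usual defense is to never pass user- or locale-derived text as the format string itself; print it as an argument ("%s") instead.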

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1081: POMs out of sync

2014-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1081/

All tests passed

Build Log:
[...truncated 50480 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:476: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:176: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
 Java returned: 1

Total time: 68 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_60-ea-b02) - Build # 3683 - Still Failing!

2014-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3683/
Java: 64bit/jdk1.7.0_60-ea-b02 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 30522 lines...]
-check-forbidden-base:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.7
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.7
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
[forbidden-apis] Reading API signatures: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\tools\forbiddenApis\base.txt
[forbidden-apis] Reading API signatures: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\tools\forbiddenApis\servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] Forbidden method invocation: java.lang.String#toLowerCase() 
[Uses default locale]
[forbidden-apis]   in org.apache.solr.cloud.OverseerRolesTest 
(OverseerRolesTest.java:160)
[forbidden-apis] Scanned 2039 (and 1337 related) class file(s) for forbidden 
API invocations (in 7.05s), 1 error(s).

BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:453: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:64: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:271: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:474:
 Check for forbidden API calls failed, see log.

Total time: 87 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0_60-ea-b02 -XX:-UseCompressedOops 
-XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874555#comment-13874555
 ] 

ASF subversion and git services commented on SOLR-5476:
---

Commit 1559044 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1559044 ]

Merged revision(s) 1559043 from lucene/dev/trunk:
SOLR-5476: Fix forbidden.

PLEASE RUN "ant precommit" (root) or alternatively the faster "ant 
check-forbidden-apis" (in your module folder) before committing!
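The forbidden-apis failure above flags the no-argument `String#toLowerCase()` overload in OverseerRolesTest, whose result depends on the JVM's default locale. As a minimal sketch of the usual fix (the exact patched line in r1559044 is not shown here), an explicit locale is passed instead:

```java
import java.util.Locale;

public class LowerCaseFix {
    public static void main(String[] args) {
        String name = "OverseerRolesTest";
        // name.toLowerCase() is forbidden: its output varies with the default
        // locale (e.g. under a Turkish locale, 'I' lowercases to dotless 'ı').
        // Locale.ROOT makes the result locale-independent and satisfies the
        // jdk-unsafe signature check.
        String lower = name.toLowerCase(Locale.ROOT);
        System.out.println(lower);
    }
}
```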

> Overseer Role for nodes
> ---
>
> Key: SOLR-5476
> URL: https://issues.apache.org/jira/browse/SOLR-5476
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
> SOLR-5476.patch, SOLR-5476.patch
>
>
> In a very large cluster the Overseer is likely to be overloaded. If the same 
> node is also serving a few other shards, the Overseer can be slowed down by 
> GC pauses or simply too much work. If the cluster is really large, it is 
> possible to dedicate high-end hardware to Overseers.
> This works as a new collection admin command:
> command=addrole&role=overseer&node=192.168.1.5:8983_solr
> This results in the creation of an entry in /roles.json in ZK, which would 
> look like the following:
> {code:javascript}
> {
> "overseer" : ["192.168.1.5:8983_solr"]
> }
> {code}
> If a node is designated for overseer, it gets preference over others when an 
> overseer election takes place. If no designated servers are available, another 
> random node becomes the Overseer.
> Later on, if one of the designated nodes is brought up, it takes over the 
> Overseer role from the current Overseer.
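As a rough illustration of the admin command quoted above, the request could be assembled as follows. The host, port, and endpoint path here are placeholders, not taken from the patch; only the parameter string follows the description:

```java
public class AddRoleRequest {
    public static void main(String[] args) {
        // Base URL is an assumption for illustration; the query parameters
        // mirror the command string given in the issue description.
        String base = "http://localhost:8983/solr/admin/collections";
        String node = "192.168.1.5:8983_solr";
        String url = base + "?command=addrole&role=overseer&node=" + node;
        System.out.println(url);
    }
}
```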



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)




[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874550#comment-13874550
 ] 

ASF subversion and git services commented on SOLR-5476:
---

Commit 1559043 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1559043 ]

SOLR-5476: Fix forbidden.

PLEASE RUN "ant precommit" (root) or alternatively the faster "ant 
check-forbidden-apis" (in your module folder) before committing!




