[jira] [Commented] (SOLR-2412) Multipath hierarchical faceting

2014-04-01 Thread Toke Eskildsen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957371#comment-13957371
 ] 

Toke Eskildsen commented on SOLR-2412:
--

If the README in solr/contrib/exposed/ does not help, I will be happy to answer 
any questions and try to explain it better.

> Multipath hierarchical faceting
> ---
>
> Key: SOLR-2412
> URL: https://issues.apache.org/jira/browse/SOLR-2412
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Affects Versions: 4.0
> Environment: Fast IO when huge hierarchies are used
>Reporter: Toke Eskildsen
>  Labels: contrib, patch
> Attachments: SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, 
> SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch
>
>
> Hierarchical faceting with slow startup, low memory overhead and fast 
> response. Distinguishing features as compared to SOLR-64 and SOLR-792 are
>   * Multiple paths per document
>   * Query-time analysis of the facet-field; no special requirements for 
> indexing besides retaining separator characters in the terms used for faceting
>   * Optional custom sorting of tag values
>   * Recursive counting of references to tags at all levels of the output
> This is a shell around LUCENE-2369, making it work with the Solr API. The 
> underlying principle is to reference terms by their ordinals and create an 
> index-wide documents-to-tags map, augmented with a compressed representation 
> of hierarchical levels.
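
For readers trying to picture the approach before opening the README: below is a 
minimal, hypothetical Java sketch of the core idea (per-document facet paths, 
split at query time, with counts rolled up recursively per hierarchy level). All 
names are illustrative; the real ordinal-based implementation lives in 
LUCENE-2369 and solr/contrib/exposed.

{code}
// Hedged sketch of recursive multipath counting; not the contrib code.
// Paths are indexed as plain terms that keep their separator ("music/rock")
// and are only split at query time.
import java.util.*;

public class MultipathFacetSketch {
  public static void main(String[] args) {
    // docID -> facet paths stored for that document (multiple paths per doc)
    Map<Integer, List<String>> docToPaths = Map.of(
        0, List.of("music/rock", "year/1999"),
        1, List.of("music/rock/indie"));

    // Recursive counting: a reference to "music/rock/indie" also counts
    // "music" and "music/rock".
    Map<String, Integer> counts = new TreeMap<>();
    for (List<String> paths : docToPaths.values()) {
      for (String path : paths) {
        StringBuilder prefix = new StringBuilder();
        for (String part : path.split("/")) {
          if (prefix.length() > 0) prefix.append('/');
          prefix.append(part);
          counts.merge(prefix.toString(), 1, Integer::sum);
        }
      }
    }
    // music=2, music/rock=2, music/rock/indie=1, year=1, year/1999=1
    counts.forEach((tag, c) -> System.out.println(tag + " = " + c));
  }
}
{code}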



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5946) Need clear docs on how to "backup" and "restore" indexes

2014-04-01 Thread Maxim Novikov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957323#comment-13957323
 ] 

Maxim Novikov commented on SOLR-5946:
-

That was the point, actually: if it is possible (even if very difficult), please 
explain in the docs what strategy we could use.

> Need clear docs on how to "backup" and "restore" indexes
> 
>
> Key: SOLR-5946
> URL: https://issues.apache.org/jira/browse/SOLR-5946
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> As pointed out by Maxim Novikov in the comments of the Solr Ref Guide: the 
> instructions on creating index backups are very sparse, and give no clear 
> instructions on how to _restore_ from a backup.
> This page really needs to be beefed up with more complete instructions...
> https://cwiki.apache.org/confluence/display/solr/Backing+Up



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5946) Need clear docs on how to "backup" and "restore" indexes

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957309#comment-13957309
 ] 

Shalin Shekhar Mangar commented on SOLR-5946:
-

Backup and restore in SolrCloud are possible but currently very difficult. 
There's SOLR-5750 to make it easier.

> Need clear docs on how to "backup" and "restore" indexes
> 
>
> Key: SOLR-5946
> URL: https://issues.apache.org/jira/browse/SOLR-5946
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> As pointed out by Maxim Novikov in the comments of the Solr Ref Guide: the 
> instructions on creating index backups are very sparse, and give no clear 
> instructions on how to _restore_ from a backup.
> This page really needs to be beefed up with more complete instructions...
> https://cwiki.apache.org/confluence/display/solr/Backing+Up



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-2446.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.8

I will open follow-ups for some of the ideas here.

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].
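
As a hedged illustration of the footer API this issue introduces 
(CodecUtil.writeFooter / CodecUtil.checkFooter in 4.8+), here is a minimal 
sketch of writing a file with a checksum footer and verifying it on read. The 
file name and payload are made up; only the CodecUtil/Directory calls are from 
the actual API.

{code}
// Sketch: write data plus a checksum footer, then verify it while reading.
import org.apache.lucene.codecs.CodecUtil;
import org.apache.lucene.store.*;

public class ChecksumFooterSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new RAMDirectory()) {
      // Write the payload, then append the magic + checksum footer.
      try (IndexOutput out = dir.createOutput("demo.dat", IOContext.DEFAULT)) {
        out.writeString("some payload");
        CodecUtil.writeFooter(out);
      }
      // Read back through a checksumming wrapper; checkFooter throws
      // CorruptIndexException if the recomputed checksum does not match.
      try (ChecksumIndexInput in =
               dir.openChecksumInput("demo.dat", IOContext.READONCE)) {
        in.readString();
        CodecUtil.checkFooter(in);
      }
    }
  }
}
{code}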



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957295#comment-13957295
 ] 

ASF subversion and git services commented on LUCENE-2446:
-

Commit 1583863 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583863 ]

LUCENE-2446: add checksums to index files

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5947) solr doesnt work

2014-04-01 Thread Ian Ding (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957292#comment-13957292
 ] 

Ian Ding commented on SOLR-5947:


PS: No WARN or ERROR entries in the logs, either...

> solr doesnt work
> 
>
> Key: SOLR-5947
> URL: https://issues.apache.org/jira/browse/SOLR-5947
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 4.7
> Environment: Win7 x64
> Tomcat 8
>Reporter: Ian Ding
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I can see the Solr icon (the "sunny" icon), but nothing is working, 
> including the dashboard. 
> If it were working, I should at least be able to read the system monitor on 
> the right side of the interface, but it doesn't show any numbers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5947) solr doesnt work

2014-04-01 Thread Ian Ding (JIRA)
Ian Ding created SOLR-5947:
--

 Summary: solr doesnt work
 Key: SOLR-5947
 URL: https://issues.apache.org/jira/browse/SOLR-5947
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.7
 Environment: Win7 x64
Tomcat 8
Reporter: Ian Ding


I can see the Solr icon (the "sunny" icon), but nothing is working, 
including the dashboard. 
If it were working, I should at least be able to read the system monitor on 
the right side of the interface, but it doesn't show any numbers.





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3862) add "remove" as update option for atomically removing a value from a multivalued field

2014-04-01 Thread Alaknantha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957268#comment-13957268
 ] 

Alaknantha commented on SOLR-3862:
--

Erick: None of the existing patches support specifying multiple regexes 
for "remove" and "replace". Would you like me to code that, along with JUnit 
tests, and provide a patch?

> add "remove" as update option for atomically removing a value from a 
> multivalued field
> --
>
> Key: SOLR-3862
> URL: https://issues.apache.org/jira/browse/SOLR-3862
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.0-BETA
>Reporter: Jim Musil
>Assignee: Erick Erickson
> Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
> SOLR-3862.patch, SOLR-3862.patch
>
>
> Currently you can atomically "add" a value to a multivalued field. It would 
> be useful to be able to "remove" a value from a multivalued field. 
> When you "set" a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3862) add "remove" as update option for atomically removing a value from a multivalued field

2014-04-01 Thread Alaknantha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957268#comment-13957268
 ] 

Alaknantha edited comment on SOLR-3862 at 4/2/14 2:36 AM:
--

Erick: None of the existing patches support specifying multiple regexes 
for "remove" and "replace". Would you like me to code that, along with JUnit 
tests, and provide a patch?


was (Author: alaknantha):
Erick: None of the existing patches support multiple regex's to be specified 
for "remove" and "replace". Would you like to me code that along with Junits 
and provide a patch?  

> add "remove" as update option for atomically removing a value from a 
> multivalued field
> --
>
> Key: SOLR-3862
> URL: https://issues.apache.org/jira/browse/SOLR-3862
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.0-BETA
>Reporter: Jim Musil
>Assignee: Erick Erickson
> Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
> SOLR-3862.patch, SOLR-3862.patch
>
>
> Currently you can atomically "add" a value to a multivalued field. It would 
> be useful to be able to "remove" a value from a multivalued field. 
> When you "set" a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5946) Need clear docs on how to "backup" and "restore" indexes

2014-04-01 Thread Maxim Novikov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957257#comment-13957257
 ] 

Maxim Novikov commented on SOLR-5946:
-


Please also describe the strategy for restoring backups, specifically for 
SolrCloud. From your documentation it is still possible to grasp how to back 
up the index using HTTP commands, but it is absolutely unclear what to do 
with those backups if the cluster goes down one day for some reason. So far 
they are just a bunch of files/directories that no one knows how to use.

Would we need to delete the directory with the current index?
Should we copy the files from one of the snapshot directories to the index directory?
Would we have to do something with transaction logs (delete, modify, etc.)?
How do we let the other nodes in the cluster know that the index has been 
restored from an earlier version, and trigger the synchronization process?

Also, it does not seem to be explained anywhere, but it looks like fetching 
data from master to slave (triggering that process via HTTP) does not work in 
SolrCloud, as all the nodes are considered to be masters.

I believe all these questions (and related ones) should be addressed; for now 
SolrCloud is not a comprehensive solution, because you have to work around 
such things on your own. And it is not really reliable if Solr cannot handle 
them at all.

PS: My understanding is that currently backups are useless, as you cannot do 
anything with them (at least I have not found any info that would cover that). 
They may help only in the case of a complete, catastrophic failure, when all 
the nodes in the cluster blew up and you want to restore the index quickly on 
new ones, avoiding a full import from the data source from scratch. But that 
is hardly a real-life use case. More typical would be "I want to restore my 
data after I accidentally cleaned up the index in SolrCloud" or "I want to 
restore my data from a backed-up snapshot because the index got corrupted for 
some reason".


> Need clear docs on how to "backup" and "restore" indexes
> 
>
> Key: SOLR-5946
> URL: https://issues.apache.org/jira/browse/SOLR-5946
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> As pointed out by Maxim Novikov in the comments of the Solr Ref Guide: the 
> instructions on creating index backups are very sparse, and give no clear 
> instructions on how to _restore_ from a backup.
> This page really needs to be beefed up with more complete instructions...
> https://cwiki.apache.org/confluence/display/solr/Backing+Up



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 1356 - Failure!

2014-04-01 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/1356/

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([B8D360A503CE0A27:D288DFB45A802AD4]:0)
at java.util.Arrays.copyOfRange(Arrays.java:2694)
at java.lang.String.<init>(String.java:203)
at 
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.toString(CharTermAttributeImpl.java:267)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:703)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:920)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)




Build Log:
[...truncated 773 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=true text='structivas. No obstante, 
algunas formulaciones son confusas y deben elaborarse m\u00e1s o eliminarse. El 
Centro debe ser una herramie'
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2> tokenizer=
   [junit4]   2>   org.apache.lucene.analysis.ngram.NGramTokenizer(LUCENE_50, 
26, 53)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@4a79347b 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,
 75)
   [junit4]   2>   
org.apache.lucene.analysis.hu.HungarianLightStemFilter(ValidatingTokenFilter@2a9c1149
 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.reverse.ReverseStringFilter(LUCENE_50, 
ValidatingTokenFilter@177a26e 
term=,bytes=[],positionIncrement=1,positionLength=1,startOffset=0,endOffset=0,type=word,keyword=false)
   [junit4]   2> offsetsAreCorrect=true
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=B8D360A503CE0A27 
-Dtests.slow=true -Dtests.locale=iw_IL -Dtests.timez

[jira] [Created] (SOLR-5946) Need clear docs on how to "backup" and "restore" indexes

2014-04-01 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5946:
--

 Summary: Need clear docs on how to "backup" and "restore" indexes
 Key: SOLR-5946
 URL: https://issues.apache.org/jira/browse/SOLR-5946
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


As pointed out by Maxim Novikov in the comments of the Solr Ref Guide: the 
instructions on creating index backups are very sparse, and give no clear 
instructions on how to _restore_ from a backup.

This page really needs to be beefed up with more complete instructions...

https://cwiki.apache.org/confluence/display/solr/Backing+Up



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5829) Allow ExpandComponent to accept query and filter query parameters

2014-04-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-5829.
--

Resolution: Fixed

> Allow ExpandComponent to accept query and filter query parameters
> -
>
> Key: SOLR-5829
> URL: https://issues.apache.org/jira/browse/SOLR-5829
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.8
>
> Attachments: SOLR-5829.patch, SOLR-5829.patch, SOLR-5829.patch, 
> SOLR-5829.patch
>
>
> By default the ExpandComponent re-runs both the main query and filter queries 
> to expand the groups collapsed by the CollapsingQParserPlugin. This ticket 
> allows you to pass the main query and filter queries into the 
> ExpandComponent. It also allows you to pass in the expand field.
> This design allows the ExpandComponent to operate independently of the 
> CollapsingQParserPlugin and allows for modeling of parent/child 
> relationships. 
> For example:
> {code}
> q=*:*&fq=type:parent&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:child
> {code}
> In the query above, the main query returns all documents of 
> type:parent. The ExpandComponent then expands the groups by retrieving all 
> documents with type:child and grouping them by group_id.
> In other words, the main result set will be the parent documents and the 
> expanded result set will be the child documents.
> You could reverse this as well:
> {code}
> q=*:*&fq=type:child&fq={!collapse 
> field=group_id}&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:parent
> {code}
> In the query above the main query returns all documents with type:child and 
> collapses them on the group_id field. The ExpandComponent then expands the 
> groups by retrieving all documents with type:parent and groups them by 
> group_id. Since there is only one parent per collapsed child, each group will 
> have one document.
> In this case the main result set will be collapsed child documents and the 
> expanded results will be parent documents.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5561) NativeUnixDirectory is broken

2014-04-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5561:
---

Attachment: LUCENE-5561.patch

New patch, fixing nocommits, getting checksum working, and
conditionalizing TestNativeUnixDirectory to run only on unix.  I think
it's ready; we can later separately iterate on improving
BaseDirectoryTestCase and the TODOs to move away from our own JNI...


> NativeUnixDirectory is broken
> -
>
> Key: LUCENE-5561
> URL: https://issues.apache.org/jira/browse/LUCENE-5561
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5561.patch, LUCENE-5561.patch
>
>
> Several things:
>   * It assumed ByteBuffer.allocateDirect would be page-aligned, but
> that's no longer true in Java 1.7
> * It failed to throw FileNotFoundException if a file didn't exist (it threw 
> IOException instead)
>   * It didn't have a default ctor taking File (so it was hard to run
> all tests against it)
>   * It didn't have a test case
> * Some Javadoc problems
> * I cut it over to FilterDirectory
> I tried to cut over to BufferedIndexOutput, since this is essentially
> all that NativeUnixIO is doing ... but it's not simple, because BIO
> sometimes flushes non-full (non-aligned) buffers even before the end
> of the file (its writeBytes method).
> I also factored out a BaseDirectoryTestCase, and tried to fold in
> "generic" Directory tests, and added/cutover explicit tests for the
> core directory impls.
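
On the first bullet above: since Java 7, allocateDirect no longer returns 
page-aligned buffers, so direct-I/O code has to align manually. A hedged, 
self-contained sketch of the usual over-allocate-and-slice trick follows 
(page size hard-coded for illustration; alignmentOffset is Java 9+, and older 
JVMs need JNI or other tricks to learn the buffer address):

{code}
import java.nio.ByteBuffer;

public class AlignedBufferSketch {
  // Allocate size bytes whose start address is a multiple of align.
  static ByteBuffer allocateAligned(int size, int align) {
    ByteBuffer raw = ByteBuffer.allocateDirect(size + align);
    // Distance of the buffer's start address past the previous boundary.
    int off = raw.alignmentOffset(0, align);
    int skip = (off == 0) ? 0 : align - off;
    raw.position(skip);
    raw.limit(skip + size);
    return raw.slice();  // aligned view of the requested size
  }

  public static void main(String[] args) {
    ByteBuffer buf = allocateAligned(4096, 4096);
    System.out.println("aligned capacity = " + buf.capacity());
  }
}
{code}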



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5829) Allow ExpandComponent to accept query and filter query parameters

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957083#comment-13957083
 ] 

ASF subversion and git services commented on SOLR-5829:
---

Commit 1583806 from [~joel.bernstein] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583806 ]

SOLR-5829: Allow ExpandComponent to accept query and filter query parameters

> Allow ExpandComponent to accept query and filter query parameters
> -
>
> Key: SOLR-5829
> URL: https://issues.apache.org/jira/browse/SOLR-5829
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.8
>
> Attachments: SOLR-5829.patch, SOLR-5829.patch, SOLR-5829.patch, 
> SOLR-5829.patch
>
>
> By default the ExpandComponent re-runs both the main query and filter queries 
> to expand the groups collapsed by the CollapsingQParserPlugin. This ticket 
> allows you to pass the main query and filter queries into the 
> ExpandComponent. It also allows you to pass in the expand field.
> This design allows the ExpandComponent to operate independently of the 
> CollapsingQParserPlugin and allows for modeling of parent/child 
> relationships. 
> For example:
> {code}
> q=*:*&fq=type:parent&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:child
> {code}
> In the query above, the main query returns all documents of 
> type:parent. The ExpandComponent then expands the groups by retrieving all 
> documents with type:child and grouping them by group_id.
> In other words, the main result set will be the parent documents and the 
> expanded result set will be the child documents.
> You could reverse this as well:
> {code}
> q=*:*&fq=type:child&fq={!collapse 
> field=group_id}&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:parent
> {code}
> In the query above the main query returns all documents with type:child and 
> collapses them on the group_id field. The ExpandComponent then expands the 
> groups by retrieving all documents with type:parent and groups them by 
> group_id. Since there is only one parent per collapsed child, each group will 
> have one document.
> In this case the main result set will be collapsed child documents and the 
> expanded results will be parent documents.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5829) Allow ExpandComponent to accept query and filter query parameters

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957040#comment-13957040
 ] 

ASF subversion and git services commented on SOLR-5829:
---

Commit 1583802 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1583802 ]

SOLR-5829: Allow ExpandComponent to accept query and filter query parameters

> Allow ExpandComponent to accept query and filter query parameters
> -
>
> Key: SOLR-5829
> URL: https://issues.apache.org/jira/browse/SOLR-5829
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.8
>
> Attachments: SOLR-5829.patch, SOLR-5829.patch, SOLR-5829.patch, 
> SOLR-5829.patch
>
>
> By default the ExpandComponent re-runs both the main query and filter queries 
> to expand the groups collapsed by the CollapsingQParserPlugin. This ticket 
> allows you to pass the main query and filter queries into the 
> ExpandComponent. It also allows you to pass in the expand field.
> This design allows the ExpandComponent to operate independently of the 
> CollapsingQParserPlugin and allows for modeling of parent/child 
> relationships. 
> For example:
> {code}
> q=*:*&fq=type:parent&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:child
> {code}
> In the query above, the main query returns all documents of 
> type:parent. The ExpandComponent then expands the groups by retrieving all 
> documents with type:child and grouping them by group_id.
> In other words, the main result set will be the parent documents and the 
> expanded result set will be the child documents.
> You could reverse this as well:
> {code}
> q=*:*&fq=type:child&fq={!collapse 
> field=group_id}&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:parent
> {code}
> In the query above the main query returns all documents with type:child and 
> collapses them on the group_id field. The ExpandComponent then expands the 
> groups by retrieving all documents with type:parent and groups them by 
> group_id. Since there is only one parent per collapsed child, each group will 
> have one document.
> In this case the main result set will be collapsed child documents and the 
> expanded results will be parent documents.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5829) Allow ExpandComponent to accept query and filter query parameters

2014-04-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5829:
-

Attachment: SOLR-5829.patch

> Allow ExpandComponent to accept query and filter query parameters
> -
>
> Key: SOLR-5829
> URL: https://issues.apache.org/jira/browse/SOLR-5829
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.8
>
> Attachments: SOLR-5829.patch, SOLR-5829.patch, SOLR-5829.patch, 
> SOLR-5829.patch
>
>
> By default the ExpandComponent re-runs both the main query and filter queries 
> to expand the groups collapsed by the CollapsingQParserPlugin. This ticket 
> allows you to pass the main query and filter queries into the 
> ExpandComponent. It also allows you to pass in the expand field.
> This design allows the ExpandComponent to operate independently of the 
> CollapsingQParserPlugin and allows for modeling of parent/child 
> relationships. 
> For example:
> {code}
> q=*:*&fq=type:parent&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:child
> {code}
> In the query above, the main query returns all documents of 
> type:parent. The ExpandComponent then expands the groups by retrieving all 
> documents with type:child and grouping them by group_id.
> In other words, the main result set will be the parent documents and the 
> expanded result set will be the child documents.
> You could reverse this as well:
> {code}
> q=*:*&fq=type:child&fq={!collapse 
> field=group_id}&expand=true&expand.field=group_id&expand.q=*:*&expand.fq=type:parent
> {code}
> In the query above the main query returns all documents with type:child and 
> collapses them on the group_id field. The ExpandComponent then expands the 
> groups by retrieving all documents with type:parent and groups them by 
> group_id. Since there is only one parent per collapsed child, each group will 
> have one document.
> In this case the main result set will be collapsed child documents and the 
> expanded results will be parent documents.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5565) Remove String based encoding from SpatialPrefixTree/Cell API; just use bytes

2014-04-01 Thread David Smiley (JIRA)
David Smiley created LUCENE-5565:


 Summary: Remove String based encoding from SpatialPrefixTree/Cell 
API; just use bytes
 Key: LUCENE-5565
 URL: https://issues.apache.org/jira/browse/LUCENE-5565
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley


The SpatialPrefixTree/Cell API supports both bytes and String encoding/decoding.  
I want to remove the String side to keep the API simpler.  Included in this 
issue, I'd like to make some small refactorings to reduce the assumptions the 
filters make about the underlying encoding, so that future encodings can work 
in more varied ways with less impact on the filters.

String encode/decode will exist for the Geohash tree for now, since GeohashUtils 
works off of Strings, but Quad could change more easily.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-04-01 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956932#comment-13956932
 ] 

Tim Allison commented on LUCENE-5205:
-

Thank you, Robert!  Next steps: LUCENE-5470 and then LUCENE-5504?

> [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
> classic QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Fix For: 4.8
>
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs. Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs. jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5823) Add utility function for internal code to know if it is currently the overseer

2014-04-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5823.


Resolution: Incomplete

> Add utility function for internal code to know if it is currently the overseer
> --
>
> Key: SOLR-5823
> URL: https://issues.apache.org/jira/browse/SOLR-5823
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: SOLR-5823.patch, SOLR-5823.patch, SOLR-5823.patch, 
> SOLR-5823.patch, SOLR-5823.patch, SOLR-5823.patch
>
>
> It would be useful if there was some Overseer equivalent to 
> CloudDescriptor.isLeader() that plugins running in solr could use to know "At 
> this moment, am i the leader?" 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5794) DistribCursorPagingTest chewing up too much ram in nightly mode? (OutOfMemoryError: GC overhead limit exceeded)

2014-04-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5794:
---

Fix Version/s: 4.7.2

> DistribCursorPagingTest chewing up too much ram in nightly mode? 
> (OutOfMemoryError: GC overhead limit exceeded)
> ---
>
> Key: SOLR-5794
> URL: https://issues.apache.org/jira/browse/SOLR-5794
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.8, 5.0, 4.7.2
>
>
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/524/consoleText
> Looks like nightly + some unlucky seeds are causing some of the constants 
> used in picking how many random docs to include in the index, and how many 
> random sort criteria to use, to get a bit extreme...
> {noformat}
>[junit4]  says ?! Master seed: 4B5478AA7B4E2CC1
> ...
>[junit4]   2> 4135075 T11364 C14961 P12490 oasup.LogUpdateProcessor.finish 
> [collection1] webapp=/_zs path=/update params={wt=javabin&version=2} 
> {add=[27357 (1461190022826819584)]} 0 3
> ...
>[junit4]   2> 4156004 T10702 C15063 P12510 oasc.SolrCore.execute 
> [collection1] webapp=/_zs path=/select 
> params={sort=int_dv_first+desc,+double+desc,+str_dv_last+desc,+long_first+desc,+if(exists(bin_dv_first),47,83)+desc,+long_dv_last+desc,+score+asc,+double_first+asc,+int_last+asc,+bin_last+desc,+id+asc&distrib=false&wt=javabin&rows=72&version=2&fl=id,score&shard.url=https://127.0.0.1:12496/_zs/collection1/|https://127.0.0.1:12510/_zs/collection1/&NOW=1393499417081&start=0&q=*:*&cursorMark=AotbsLboLAXB19jMq44ZMT8I4ZyF4ZyO4ZyS4ZyY4ZyC4ZyR4ZyR4ZyR4ZyQ4Zyf4ZyH4ZyF4ZyOB2un04dFfJrMBUBHgAAAB2un04dFfJrMCD%2BFwdfYzKuOGTFbsLboLABRgA0%3D&isShard=true&fsv=true}
>  hits=13498 status=0 QTime=8 
>[junit4]   2> 4203004 T9432 oaz.ClientCnxn$SendThread.run WARN Session 
> 0x1447301ec160005 for server localhost/127.0.0.1:12478, unexpected error, 
> closing socket connection and attempting reconnect 
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>[junit4]   2> 
>[junit4]   2> 557628 T9402 ccr.RunnerThreadGroup.uncaughtException SEVERE 
> RunnerThreadGroup's sub thread should always have a context and it didn't 
> have any? java.lang.OutOfMemoryError: GC overhead limit exceeded
>[junit4]   2>  at 
> __randomizedtesting.SeedInfo.seed([4B5478AA7B4E2CC1]:0)
> {noformat}
> ...i'm probably using "atLeast()" in a few too many places ... i'll dial this 
> back
> (FWIW: I can't reproduce the OOM on my machine, but with just that seed, the 
> test takes ~2min; that seed + -Dnightly=true is 5min+)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5794) DistribCursorPagingTest chewing up too much ram in nightly mode? (OutOfMemoryError: GC overhead limit exceeded)

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956904#comment-13956904
 ] 

ASF subversion and git services commented on SOLR-5794:
---

Commit 1583752 from hoss...@apache.org in branch 'dev/branches/lucene_solr_4_7'
[ https://svn.apache.org/r1583752 ]

SOLR-5794: merge r1572775 to 4.7 branch to prevent random multiplier explosion

> DistribCursorPagingTest chewing up too much ram in nightly mode? 
> (OutOfMemoryError: GC overhead limit exceeded)
> ---
>
> Key: SOLR-5794
> URL: https://issues.apache.org/jira/browse/SOLR-5794
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.8, 5.0
>
>
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/524/consoleText
> Looks like nightly + some unlucky seeds are causing some of the constants 
> used in picking how many random docs to include in the index, and how many 
> random sort criteria to use, to get a bit extreme...
> {noformat}
>[junit4]  says ?! Master seed: 4B5478AA7B4E2CC1
> ...
>[junit4]   2> 4135075 T11364 C14961 P12490 oasup.LogUpdateProcessor.finish 
> [collection1] webapp=/_zs path=/update params={wt=javabin&version=2} 
> {add=[27357 (1461190022826819584)]} 0 3
> ...
>[junit4]   2> 4156004 T10702 C15063 P12510 oasc.SolrCore.execute 
> [collection1] webapp=/_zs path=/select 
> params={sort=int_dv_first+desc,+double+desc,+str_dv_last+desc,+long_first+desc,+if(exists(bin_dv_first),47,83)+desc,+long_dv_last+desc,+score+asc,+double_first+asc,+int_last+asc,+bin_last+desc,+id+asc&distrib=false&wt=javabin&rows=72&version=2&fl=id,score&shard.url=https://127.0.0.1:12496/_zs/collection1/|https://127.0.0.1:12510/_zs/collection1/&NOW=1393499417081&start=0&q=*:*&cursorMark=AotbsLboLAXB19jMq44ZMT8I4ZyF4ZyO4ZyS4ZyY4ZyC4ZyR4ZyR4ZyR4ZyQ4Zyf4ZyH4ZyF4ZyOB2un04dFfJrMBUBHgAAAB2un04dFfJrMCD%2BFwdfYzKuOGTFbsLboLABRgA0%3D&isShard=true&fsv=true}
>  hits=13498 status=0 QTime=8 
>[junit4]   2> 4203004 T9432 oaz.ClientCnxn$SendThread.run WARN Session 
> 0x1447301ec160005 for server localhost/127.0.0.1:12478, unexpected error, 
> closing socket connection and attempting reconnect 
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>[junit4]   2> 
>[junit4]   2> 557628 T9402 ccr.RunnerThreadGroup.uncaughtException SEVERE 
> RunnerThreadGroup's sub thread should always have a context and it didn't 
> have any? java.lang.OutOfMemoryError: GC overhead limit exceeded
>[junit4]   2>  at 
> __randomizedtesting.SeedInfo.seed([4B5478AA7B4E2CC1]:0)
> {noformat}
> ...i'm probably using "atLeast()" in a few too many places ... i'll dial this 
> back
> (FWIW: I can't reproduce the OOM on my machine, but with just that seed, the 
> test takes ~2min; that seed + -Dnightly=true is 5min+)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: I am gettin this error, need help

2014-04-01 Thread Shawn Heisey

On 4/1/2014 11:58 AM, Narendra Meena wrote:


2014-04-01 23:21:30,811 WARN mapred.LocalJobRunner - job_local_0019

org.apache.solr.common.SolrException: Bad Request


Bad Request


request: http://localhost:8080/solr/update?wt=javabin&version=2

at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430)


at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)


at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)


at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:49)

at org.apache.nutch.indexer.solr.SolrWriter.close(SolrWriter.java:93)

at 
org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48)


at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)

at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)


2014-04-01 23:21:31,099 ERROR solr.SolrIndexer - java.io.IOException: 
Job failed!


2014-04-01 23:21:31,101 INFO solr.SolrDeleteDuplicates - 
SolrDeleteDuplicates: starting at 2014-04-01 23:21:31


2014-04-01 23:21:31,109 INFO solr.SolrDeleteDuplicates - 
SolrDeleteDuplicates: Solr url: http://localhost:8080/solr/


2014-04-01 23:21:31,480 WARN mapred.LocalJobRunner - job_local_0020

java.lang.NullPointerException

at org.apache.hadoop.io.Text.encode(Text.java:388)

at org.apache.hadoop.io.Text.set(Text.java:178)

at 
org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat$1.next(SolrDeleteDuplicates.java:270)


at 
org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat$1.next(SolrDeleteDuplicates.java:241)


at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:192)


at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:176)


at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)

at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)

at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)

at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)




This mailing list is for discussing the development of Solr and Lucene.  
The solr-user mailing list is appropriate for discussions and help 
requests with *using* Solr.


http://lucene.apache.org/solr/discussion.html

The error messages appear to be a problem with an application using 
nutch and hadoop, connecting to a Solr server using the SolrJ support in 
nutch.  There's not enough information provided here to figure out what 
is going wrong, at least to someone like me who knows Solr but doesn't 
know anything about nutch.


There's a good chance that you're actually going to need help with 
Nutch.  Nutch is a completely separate Apache project.  They use client 
code provided by the Solr project, but they are separate.


If you do send a message to solr-user, be sure to include corresponding 
error messages from the Solr server log.  Be prepared to share your 
config, schema, and other details.


One possible problem: the Solr URL does not include a core name.  The 
newest versions of Solr will not work without one, unless the core is 
actually named "collection1", which is what you can find in the example.
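
For example (with a hypothetical core name), a per-core update URL would look 
like this instead of the bare /solr/update form in the log above:

http://localhost:8080/solr/collection1/update?wt=javabin&version=2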


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2014-04-01 Thread Tomás Fernández Löbbe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956848#comment-13956848
 ] 

Tomás Fernández Löbbe commented on SOLR-445:


I see. Maybe I could then add just "numSucceed", as a confirmation that 
the rest of the docs made it in?
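
A hedged sketch of the kind of response that suggestion points at; the element 
names are invented here, not committed syntax:

{code}
<response>
  <lst name="responseHeader"><int name="status">400</int></lst>
  <int name="numSucceeded">2</int>
  <lst name="errors">
    <str name="2">Invalid Date String: 'I_AM_A_BAD_DATE'</str>
  </lst>
</response>
{code}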

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures 
> mid-batch?  I.e.:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="myDateField">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now Solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch, or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory, while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this, but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5945) Add retry for zookeeper reconnect failure

2014-04-01 Thread Jessica Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Cheng updated SOLR-5945:


Description: 
We had some network issue where we temporarily lost connection and DNS. The 
ZooKeeper client properly triggered the watcher. However, when trying to 
reconnect, the following exception is thrown:

2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 121) 
:java.net.UnknownHostException: : Name or service not 
known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at java.net.InetAddress.getAllByName(InetAddress.java:1063)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:60)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)

I tried to look at the code and it seems that there'd be no further retries to 
connect to Zookeeper, and the node is basically left in a bad state and will 
not recover on its own. (Please correct me if I'm reading this wrong.) Thinking 
about it, this is probably fair, since normally you wouldn't expect retries to 
fix an "unknown host" issue (even though in our case it would have) but I'm 
wondering what we should do to handle this situation if it happens again in the 
future.

Any advice is appreciated.



From Mark Miller:
We don’t currently retry, but I don’t think it would hurt much if we did - at 
least briefly.

If you want to file a JIRA issue, that would be the best way to get it in a 
future release.

  was:
We had some network issue where we temporarily lost connection and DNS. The 
ZooKeeper client properly triggered the watcher. However, when trying to 
reconnect, the following exception is thrown:

2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 121) 
:java.net.UnknownHostException: : Name or service not 
known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at java.net.InetAddress.getAllByName(InetAddress.java:1063)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:60)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)

I tried to look at the code and it seems that there'd be no further retries to 
connect to Zookeeper, and the node is basically left in a bad state and will 
not recover on its own. (Please correct me if I'm reading this wrong.) Thinking 
about it, this is probably fair, since normally you wouldn't expect retries to 
fix an "unknown host" issue--even though in our case it would have--but I'm 
wondering what we should do to handle this situation if it happens again in the 
future.

Any advice is appreciated.



From Mark Miller:
We don’t currently retry, but I don’t think it would hurt much if we did - at 
least briefly.

If you want to file a JIRA issue, that would be the best way to get it in a 
future release.


> Add retry for zookeeper reconnect failure
> -
>
> Key: SOLR-5945
> URL: https://issues.apache.org/jira/browse/SOLR-5945
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
>Reporter: Jessica Cheng
>  Labels: solrcloud, zookeeper
>
> We had some network issue where we tempor

[jira] [Created] (SOLR-5945) Add retry for zookeeper reconnect failure

2014-04-01 Thread Jessica Cheng (JIRA)
Jessica Cheng created SOLR-5945:
---

 Summary: Add retry for zookeeper reconnect failure
 Key: SOLR-5945
 URL: https://issues.apache.org/jira/browse/SOLR-5945
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.7
Reporter: Jessica Cheng


We had some network issue where we temporarily lost connection and DNS. The 
ZooKeeper client properly triggered the watcher. However, when trying to 
reconnect, the following exception is thrown:

2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 121) 
:java.net.UnknownHostException: : Name or service not 
known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at java.net.InetAddress.getAllByName(InetAddress.java:1063)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:60)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)

I tried to look at the code and it seems that there'd be no further retries to 
connect to Zookeeper, and the node is basically left in a bad state and will 
not recover on its own. (Please correct me if I'm reading this wrong.) Thinking 
about it, this is probably fair, since normally you wouldn't expect retries to 
fix an "unknown host" issue--even though in our case it would have--but I'm 
wondering what we should do to handle this situation if it happens again in the 
future.

Any advice is appreciated.



>From Mark Miller:
We don’t currently retry, but I don’t think it would hurt much if we did - at 
least briefly.

If you want to file a JIRA issue, that would be the best way to get it in a 
future release.
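
For concreteness, the kind of brief retry Mark describes might look like this 
minimal sketch (the method shape, the updateKeeper() hook, and the retry 
constants are all illustrative, not existing Solr code):

{code}
// Illustrative only: a bounded reconnect retry with linear backoff.
private static final int MAX_RETRIES = 3;
private static final long BASE_DELAY_MS = 1000;

public void reconnect(String serverAddress, int zkClientTimeout, Watcher watcher)
    throws Exception {
  Exception last = null;
  for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      // A transient UnknownHostException (e.g. a brief DNS outage) is retried here.
      updateKeeper(new SolrZooKeeper(serverAddress, zkClientTimeout, watcher));
      return;
    } catch (Exception e) {
      last = e;
      Thread.sleep(BASE_DELAY_MS * attempt); // back off a little more each time
    }
  }
  throw last; // still give up eventually, but only after retrying briefly
}
{code}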



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81195 - Still Failing!

2014-04-01 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81195/

No tests ran.

Build Log:
[...truncated 6 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:914)
at hudson.FilePath.act(FilePath.java:887)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1414)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:652)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:561)
at hudson.model.Run.execute(Run.java:1678)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 38 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:777)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 37 more
Caused by: svn: E175002: OPTIONS request failed on '/repos/asf/lucene/dev/trunk'
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:775)
... 38 more
Caused by: svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)

I am getting this error, need help

2014-04-01 Thread Narendra Meena
2014-04-01 23:21:30,811 WARN  mapred.LocalJobRunner - job_local_0019

org.apache.solr.common.SolrException: Bad Request


Bad Request


request: http://localhost:8080/solr/update?wt=javabin&version=2

at
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430)

at
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)

at
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)

at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:49)

at org.apache.nutch.indexer.solr.SolrWriter.close(SolrWriter.java:93)

at
org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48)

at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)

at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)

2014-04-01 23:21:31,099 ERROR solr.SolrIndexer - java.io.IOException: Job
failed!

2014-04-01 23:21:31,101 INFO  solr.SolrDeleteDuplicates -
SolrDeleteDuplicates: starting at 2014-04-01 23:21:31

2014-04-01 23:21:31,109 INFO  solr.SolrDeleteDuplicates -
SolrDeleteDuplicates: Solr url: http://localhost:8080/solr/

2014-04-01 23:21:31,480 WARN  mapred.LocalJobRunner - job_local_0020

java.lang.NullPointerException

at org.apache.hadoop.io.Text.encode(Text.java:388)

at org.apache.hadoop.io.Text.set(Text.java:178)

at
org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat$1.next(SolrDeleteDuplicates.java:270)

at
org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat$1.next(SolrDeleteDuplicates.java:241)

at
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:192)

at
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:176)

at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)

at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)

at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)

at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)


[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 1336 - Failure!

2014-04-01 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/1336/

No tests ran.

Build Log:
[...truncated 12 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:914)
at hudson.FilePath.act(FilePath.java:887)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1414)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:652)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:561)
at hudson.model.Run.execute(Run.java:1678)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 38 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:777)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 37 more
Caused by: svn: E175002: OPTIONS request failed on '/repos/asf/lucene/dev/trunk'
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:775)
... 38 more
Caused by: svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)

[JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 17517 - Failure!

2014-04-01 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/17517/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update 
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/branches/branch_4x failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:914)
at hudson.FilePath.act(FilePath.java:887)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1414)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:652)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:561)
at hudson.model.Run.execute(Run.java:1678)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/branches/branch_4x failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 38 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS request 
failed on '/repos/asf/lucene/dev/branches/branch_4x'
svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:777)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 37 more
Caused by: svn: E175002: OPTIONS request failed on 
'/repos/asf/lucene/dev/branches/branch_4x'
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:775)
... 38 more
Caused by: svn: E175002: timed out waiting for server
at 
org.tm

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81194 - Failure!

2014-04-01 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81194/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:914)
at hudson.FilePath.act(FilePath.java:887)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1414)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:652)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:561)
at hudson.model.Run.execute(Run.java:1678)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 38 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: Connection reset
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:777)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 37 more
Caused by: svn: E175002: OPTIONS request failed on '/repos/asf/lucene/dev/trunk'
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:775)
... 38 more
Caused by: svn: E175002: Connection reset
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:109)
at 
org.tmates

[jira] [Commented] (SOLR-2412) Multipath hierarchical faceting

2014-04-01 Thread J.L. Hill (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956814#comment-13956814
 ] 

J.L. Hill commented on SOLR-2412:
-

Thank you - that worked.
I appreciate the effort. Now I just have to try and understand/test it. 


> Multipath hierarchical faceting
> ---
>
> Key: SOLR-2412
> URL: https://issues.apache.org/jira/browse/SOLR-2412
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Affects Versions: 4.0
> Environment: Fast IO when huge hierarchies are used
>Reporter: Toke Eskildsen
>  Labels: contrib, patch
> Attachments: SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, 
> SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch
>
>
> Hierarchical faceting with slow startup, low memory overhead and fast 
> response. Distinguishing features as compared to SOLR-64 and SOLR-792 are
>   * Multiple paths per document
>   * Query-time analysis of the facet-field; no special requirements for 
> indexing besides retaining separator characters in the terms used for faceting
>   * Optional custom sorting of tag values
>   * Recursive counting of references to tags at all levels of the output
> This is a shell around LUCENE-2369, making it work with the Solr API. The 
> underlying principle is to reference terms by their ordinals and create an 
> index wide documents to tags map, augmented with a compressed representation 
> of hierarchical levels.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5564) Currency characters are not tokenized

2014-04-01 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-5564.


Resolution: Invalid

First of all, please raise this kind of issue on the user's list first so 
others have a chance to comment and you have some assurance that what you 
expect is reasonable.

In this case, the analyzer isn't much good if it can't compare numbers for 
currency. If a value has the Euro or US dollar sign attached, it isn't a number 
any more, and it's compared lexically. So, for instance, $100 and $20 would 
sort (ascending, and this affects range queries etc.) as
$100
$20

which is clearly wrong. The symbol _will_ be _stored_ if you set the field to 
stored, so you can get it back; it just won't be part of the token in the index.
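
As a short illustration of that lexical ordering:

{code}
// "$100" < "$20" lexically because '1' < '2' at the second character.
String[] values = { "$20", "$100" };
java.util.Arrays.sort(values);
System.out.println(java.util.Arrays.toString(values)); // prints [$100, $20]
{code}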

What is it you really want? This seems like an XY problem; you're asking for a 
solution without clearly defining the problem.

Feel free to reopen this if, through discussion on the user's list, you truly 
find that this behavior is unexpected.

> Currency characters are not tokenized
> -
>
> Key: LUCENE-5564
> URL: https://issues.apache.org/jira/browse/LUCENE-5564
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.6.2
>Reporter: Jerome Lanneluc
>
> It is not possible to have the SmartChineseAnalyzer (nor the StandardAnalyzer) 
> include the currency characters (e.g. $ or €) in the token stream.
> For example, the following will output 100 200. I would expect a way to 
> configure the analyzers to output 100$ 200€ instead.
> import java.io.StringReader;
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.analysis.TokenStream;
> import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
> import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
> import org.apache.lucene.util.Version;
> public class Test {
>   public static void main(String[] args) throws Exception {
>     Analyzer analyzer = new SmartChineseAnalyzer(Version.LUCENE_36); // new StandardAnalyzer(Version.LUCENE_36);
>     TokenStream stream = analyzer.tokenStream(null, new StringReader("100$ 200€"));
>     while (stream.incrementToken()) {
>       CharTermAttribute attr = stream.getAttribute(CharTermAttribute.class);
>       System.out.print(new String(attr.buffer(), 0, attr.length()));
>       System.out.print(' ');
>     }
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2014-04-01 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956798#comment-13956798
 ] 

Yonik Seeley commented on SOLR-445:
---

bq. Do you mean including the ids of good docs in the response too? I don't 
think that would be that big. Should be much smaller than the request

Some people (including myself) send/load millions of docs per request - it's 
very unfriendly to get back megabytes of responses unless you explicitly ask.
If this processor is not in the default chain, then I guess it doesn't matter 
much.  But I could see adding this ability by default (regardless of whether 
it's a separate processor or not) via a parameter like maxErrors or something.
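
As a rough sketch of what such a maxErrors-style processor might look like 
(the class and everything in it is hypothetical, not an existing Solr API):

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical tolerant processor honoring a maxErrors cap; illustrative only.
public class TolerantUpdateProcessor extends UpdateRequestProcessor {
  private final int maxErrors;
  private final List<String> errors = new ArrayList<String>();

  public TolerantUpdateProcessor(UpdateRequestProcessor next, int maxErrors) {
    super(next);
    this.maxErrors = maxErrors;
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    try {
      super.processAdd(cmd);  // delegate to the rest of the chain
    } catch (Exception e) {
      if (errors.size() >= maxErrors) {
        throw new IOException(e);  // past the cap: abort as Solr does today
      }
      errors.add("ERROR: [doc=" + cmd.getPrintableId() + "] " + e.getMessage());
    }
  }
}
{code}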

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="myDateField">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5795) Option to periodically delete docs based on an expiration field -- or ttl specified when indexed.

2014-04-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5795.


   Resolution: Fixed
Fix Version/s: 5.0
   4.8

> Option to periodically delete docs based on an expiration field -- or ttl 
> specified when indexed.
> -
>
> Key: SOLR-5795
> URL: https://issues.apache.org/jira/browse/SOLR-5795
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5795.patch, SOLR-5795.patch, SOLR-5795.patch, 
> SOLR-5795.patch, SOLR-5795.patch, SOLR-5795.patch
>
>
> A question I get periodically from people is how to automatically remove 
> documents from a collection at a certain time (or after a certain amount of 
> time).  
> Excluding them from search results using a filter query on a date field is 
> trivial, but you still have to periodically send a deleteByQuery to clean up 
> those older "expired" documents.  And in the case where you want all 
> documents to auto-expire some fixed amount of time after they were indexed, 
> you still have to set up a simple UpdateProcessor to set that expiration date.  
> So I've been thinking it would be nice if there was a simple way to configure 
> Solr to do it all for you.
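
For reference, the manual recipe described above boils down to something like 
the following (the expire_at_dt field name is illustrative):

{code}
fq=-expire_at_dt:[* TO NOW]                                (hide expired docs at query time)
<delete><query>expire_at_dt:[* TO NOW]</query></delete>   (the periodic deleteByQuery cleanup)
{code}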



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5795) Option to periodically delete docs based on an expiration field -- or ttl specified when indexed.

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956795#comment-13956795
 ] 

ASF subversion and git services commented on SOLR-5795:
---

Commit 1583741 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583741 ]

SOLR-5795: New DocExpirationUpdateProcessorFactory supports computing an 
expiration date for documents from the TTL expression, as well as automatically 
deleting expired documents on a periodic basis (merge r1583734)

> Option to periodically delete docs based on an expiration field -- or ttl 
> specified when indexed.
> -
>
> Key: SOLR-5795
> URL: https://issues.apache.org/jira/browse/SOLR-5795
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5795.patch, SOLR-5795.patch, SOLR-5795.patch, 
> SOLR-5795.patch, SOLR-5795.patch, SOLR-5795.patch
>
>
> A question I get periodically from people is how to automatically remove 
> documents from a collection at a certain time (or after a certain amount of 
> time).  
> Excluding them from search results using a filter query on a date field is 
> trivial, but you still have to periodically send a deleteByQuery to clean up 
> those older "expired" documents.  And in the case where you want all 
> documents to auto-expire some fixed amount of time after they were indexed, 
> you still have to set up a simple UpdateProcessor to set that expiration date.  
> So I've been thinking it would be nice if there was a simple way to configure 
> Solr to do it all for you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2014-04-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956790#comment-13956790
 ] 

Tomás Fernández Löbbe commented on SOLR-445:


bq. I think instead the notion of not having a uniqueKey should essentially be 
deprecated.
+1

bq. That would lead to some huge responses.
Do you mean including the ids of good docs in the response too? I don't think 
that would be that big. Should be much smaller than the request

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="myDateField">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956774#comment-13956774
 ] 

Shalin Shekhar Mangar commented on SOLR-5473:
-

Thanks Noble. All tests pass with your patch. I am working on enabling external 
collections for more cloud tests.

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node
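
Concretely, the layout moves from one shared node to per-collection nodes:

{code}
/clusterstate.json                     (before: one shared state for all collections)
/collections/collection1/state.json    (after: one state.json per collection)
/collections/collection2/state.json
{code}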



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5944) Support updates of numeric DocValues

2014-04-01 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-5944:
--

 Summary: Support updates of numeric DocValues
 Key: SOLR-5944
 URL: https://issues.apache.org/jira/browse/SOLR-5944
 Project: Solr
  Issue Type: New Feature
Reporter: Ishan Chattopadhyaya


LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
really nice to have Solr support this.
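
For context, the Lucene-level API from LUCENE-5189 that Solr would build on 
(usage sketch; the writer setup, dir, analyzer, and field names are illustrative):

{code}
// LUCENE-5189: update a numeric docvalues field for every document matching
// a term, without re-indexing the documents themselves.
IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_47, analyzer));
writer.updateNumericDocValue(new Term("id", "doc42"), "popularity", 100L);
writer.commit();
writer.close();
{code}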



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5941:
---

Assignee: Shalin Shekhar Mangar

> CommitTracker should use the default UpdateProcessingChain for autocommit
> -
>
> Key: SOLR-5941
> URL: https://issues.apache.org/jira/browse/SOLR-5941
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.6, 4.7
>Reporter: ludovic Boutros
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5941.patch
>
>
> Currently, the CommitTracker class uses the UpdateHandler directly for 
> autocommit.
> If a custom update processor is configured with a commit action, nothing is 
> done until an explicit commit is issued by the client.
> This can produce incoherent behavior.
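
One possible direction, sketched against the Solr 4.x API (core, req, rsp, and 
the commit flags are assumed to be in scope; this is not the committed fix):

{code}
// Sketch: run the autocommit through the default update processor chain
// instead of invoking the UpdateHandler directly, so custom processors see it.
UpdateRequestProcessorChain chain = core.getUpdateProcessingChain(null); // default chain
UpdateRequestProcessor processor = chain.createProcessor(req, rsp);
try {
  CommitUpdateCommand cmd = new CommitUpdateCommand(req, false /* optimize */);
  cmd.openSearcher = openSearcher;
  cmd.softCommit = softCommit;
  processor.processCommit(cmd);
} finally {
  processor.finish();
}
{code}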



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5943) SolrCmdDistributor does not distribute the openSearcher parameter

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5943:
---

Assignee: Shalin Shekhar Mangar

> SolrCmdDistributor does not distribute the openSearcher parameter
> -
>
> Key: SOLR-5943
> URL: https://issues.apache.org/jira/browse/SOLR-5943
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.6.1, 4.7
>Reporter: ludovic Boutros
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.8, 5.0
>
>
> The openSearcher parameter in a commit command is totally ignored by the 
> SolrCmdDistributor :
> {code:title=SolrCmdDistributor.java|borderStyle=solid}
>   void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
>     if (cmd == null) return;
>     ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
>         : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher,
>         cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
>   }{code}
> I think the SolrJ API should take this parameter into account as well.
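
One possible shape of a fix, sketched against the snippet above (the exact 
plumbing is illustrative; UpdateParams.OPEN_SEARCHER is the existing constant 
for the parameter):

{code}
void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
  if (cmd == null) return;
  ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
      : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher,
      cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
  // Forward openSearcher alongside the other commit flags (illustrative fix).
  ureq.getParams().set(UpdateParams.OPEN_SEARCHER, String.valueOf(cmd.openSearcher));
}
{code}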



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956767#comment-13956767
 ] 

Shalin Shekhar Mangar commented on SOLR-445:


bq. That would lead to some huge responses. I think instead the notion of not 
having a uniqueKey should essentially be deprecated.

+1

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="myDateField">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5795) Option to periodically delete docs based on an expiration field -- or ttl specified when indexed.

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956750#comment-13956750
 ] 

ASF subversion and git services commented on SOLR-5795:
---

Commit 1583734 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1583734 ]

SOLR-5795: New DocExpirationUpdateProcessorFactory supports computing an 
expiration date for documents from the TTL expression, as well as automatically 
deleting expired documents on a periodic basis

> Option to periodically delete docs based on an expiration field -- or ttl 
> specified when indexed.
> -
>
> Key: SOLR-5795
> URL: https://issues.apache.org/jira/browse/SOLR-5795
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5795.patch, SOLR-5795.patch, SOLR-5795.patch, 
> SOLR-5795.patch, SOLR-5795.patch, SOLR-5795.patch
>
>
> A question I get periodically from people is how to automatically remove 
> documents from a collection at a certain time (or after a certain amount of 
> time).  
> Excluding them from search results using a filter query on a date field is 
> trivial, but you still have to periodically send a deleteByQuery to clean up 
> those older "expired" documents.  And in the case where you want all 
> documents to auto-expire some fixed amount of time after they were indexed, 
> you still have to set up a simple UpdateProcessor to set that expiration date.  
> So I've been thinking it would be nice if there was a simple way to configure 
> Solr to do it all for you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2014-04-01 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956748#comment-13956748
 ] 

Yonik Seeley commented on SOLR-445:
---

bq. even if the schema doesn't use uniqueKey...

That would lead to some huge responses.  I think instead the notion of not 
having a uniqueKey should essentially be deprecated.

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="myDateField">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2014-04-01 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956720#comment-13956720
 ] 

Hoss Man commented on SOLR-445:
---

bq. The errors are managed by an UpdateRequestProcessor that must be added 
before other processors in the chain.

Off the cuff: this sounds like a great idea.

The one piece of feedback that occurred to me though would be to tweak the 
response format so that there is a 1-to-1 correspondence between documents in 
the initial request and statuses in the response -- even if the schema doesn't 
use uniqueKey...

{code}
<response>
  <!-- element names below are reconstructed; only the values survive in the archive -->
  <int name="numErrors">10</int>
  <arr name="statuses">
    <str/>
    <str>ERROR: [doc=1] Error adding field 'weight'='b' msg=For input string: "b"</str>
    <str/>
    <str>ERROR: [doc=3] Error adding field 'weight'='b' msg=For input string: "b"</str>
    ...
  </arr>
  <int name="status">0</int>
  <int name="QTime">17</int>
</response>
{code}

?

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> 
>   
> 1
>   
>   
> 2
> I_AM_A_BAD_DATE
>   
>   
> 3
>   
> 
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5773) CollapsingQParserPlugin should make elevated documents the group head

2014-04-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-5773.
--

Resolution: Fixed

> CollapsingQParserPlugin should make elevated documents the group head
> -
>
> Key: SOLR-5773
> URL: https://issues.apache.org/jira/browse/SOLR-5773
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.6.1
>Reporter: David Boychuck
>Assignee: Joel Bernstein
>  Labels: collapse, solr
> Fix For: 4.8
>
> Attachments: SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch, 
> SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> Hi Joel,
> I sent you an email but I'm not sure if you received it or not. I ran into a 
> bit of trouble using the CollapsingQParserPlugin with elevated documents. To 
> explain it simply, I want to exclude grouped documents when one of the 
> members of the group is contained in the elevated document set. I'm not sure 
> this is possible currently because as you explain above elevated documents 
> are added to the request context after the original query is constructed.
> To try to better illustrate the problem. If I have 2 documents docid=1 and 
> docid=2 and both have a groupid of 'a'. If a grouped query scores docid 2 
> first in the results but I have elevated docid 1 then both documents are 
> shown in the results when I really only want the elevated document to be 
> shown in the results.
> Is this something that would be difficult to implement? Any help is 
> appreciated.
> I think the solution would be to remove the documents from liveDocs that 
> share the same groupid in the getBoostDocs() function. Let me know if this 
> makes any sense. I'll continue working towards a solution in the meantime.
> {code}
> private IntOpenHashSet getBoostDocs(SolrIndexSearcher indexSearcher, Set<String> boosted) throws IOException {
>   IntOpenHashSet boostDocs = null;
>   if(boosted != null) {
>     SchemaField idField = indexSearcher.getSchema().getUniqueKeyField();
>     String fieldName = idField.getName();
>     HashSet<BytesRef> localBoosts = new HashSet<BytesRef>(boosted.size()*2);
>     Iterator<String> boostedIt = boosted.iterator();
>     while(boostedIt.hasNext()) {
>       localBoosts.add(new BytesRef(boostedIt.next()));
>     }
>     boostDocs = new IntOpenHashSet(boosted.size()*2);
>     List<AtomicReaderContext> leaves = indexSearcher.getTopReaderContext().leaves();
>     TermsEnum termsEnum = null;
>     DocsEnum docsEnum = null;
>     for(AtomicReaderContext leaf : leaves) {
>       AtomicReader reader = leaf.reader();
>       int docBase = leaf.docBase;
>       Bits liveDocs = reader.getLiveDocs();
>       Terms terms = reader.terms(fieldName);
>       termsEnum = terms.iterator(termsEnum);
>       Iterator<BytesRef> it = localBoosts.iterator();
>       while(it.hasNext()) {
>         BytesRef ref = it.next();
>         if(termsEnum.seekExact(ref)) {
>           docsEnum = termsEnum.docs(liveDocs, docsEnum);
>           int doc = docsEnum.nextDoc();
>           if(doc != -1) {
>             //Found the document.
>             boostDocs.add(doc+docBase);
>             *// HERE REMOVE ANY DOCUMENTS THAT SHARE THE GROUPID NOT ONLY THE DOCID //*
>             it.remove();
>           }
>         }
>       }
>     }
>   }
>   return boostDocs;
> }
> {code}
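
For what it's worth, the direction of the fix can be sketched roughly as 
follows (not the committed patch; cmap, collapseKey, boostedGroups, and 
headScore() are all hypothetical names):

{code}
// Rough sketch: when selecting a group head, an elevated document wins
// its group unconditionally and cannot be displaced by a higher score.
int globalDoc = docBase + contextDoc;
if (boostDocs.contains(globalDoc)) {
  cmap.put(collapseKey, globalDoc);        // elevated doc becomes the group head
  boostedGroups.add(collapseKey);
} else if (!boostedGroups.contains(collapseKey)
           && (!cmap.containsKey(collapseKey) || score > headScore(collapseKey))) {
  cmap.put(collapseKey, globalDoc);        // normal score-based head selection
}
{code}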



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5720) Add ExpandComponent to expand results collapsed by the CollapsingQParserPlugin

2014-04-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-5720.
--

Resolution: Fixed

> Add ExpandComponent to expand results collapsed by the CollapsingQParserPlugin
> --
>
> Key: SOLR-5720
> URL: https://issues.apache.org/jira/browse/SOLR-5720
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5720.patch, SOLR-5720.patch, SOLR-5720.patch, 
> SOLR-5720.patch, SOLR-5720.patch, SOLR-5720.patch, SOLR-5720.patch, 
> SOLR-5720.patch, SOLR-5720.patch
>
>
> This ticket introduces a new search component called the ExpandComponent. The 
> expand component expands a single page of results collapsed by the 
> CollapsingQParserPlugin.
> Sample syntax:
> {code}
> q=*:*&fq={!collapse 
> field=fieldA}&expand=true&expand.sort=fieldB+asc&expand.rows=10
> {code}
> In the above query the results are collapsed on "fieldA" with the 
> CollapsingQParserPlugin. The expand component expands the current page of 
> collapsed results.
> The initial implementation of the ExpandComponent takes three parameters:
> *expand=true* (Turns on the ExpandComponent)
> *expand.sort=fieldB+asc,fieldC+desc* (Sorts the documents based on a sort 
> spec. If none is specified the documents are sorted by relevance based on the 
> main query.)
> *expand.rows=10* (Sets the numbers of rows that groups are expanded to).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-04-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473-74.patch

The CloudSolrServer test was always firing an mbeans request to the default 
collection "collection1", so the test was failing. That is fixed now.

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5880) org.apache.solr.client.solrj.impl.CloudSolrServerTest is failing pretty much every time for a long time with an exception about not being able to connect to ZooKeeper wi

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956670#comment-13956670
 ] 

ASF subversion and git services commented on SOLR-5880:
---

Commit 1583721 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1583721 ]

SOLR-5880: Fix test failure when n=1. Make it at least 2.

> org.apache.solr.client.solrj.impl.CloudSolrServerTest is failing pretty much 
> every time for a long time with an exception about not being able to connect 
> to ZooKeeper within the timeout.
> --
>
> Key: SOLR-5880
> URL: https://issues.apache.org/jira/browse/SOLR-5880
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.8, 5.0
>
>
> This test is failing consistently, though currently only on Policeman Jenkins 
> servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5880) org.apache.solr.client.solrj.impl.CloudSolrServerTest is failing pretty much every time for a long time with an exception about not being able to connect to ZooKeeper wi

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956672#comment-13956672
 ] 

ASF subversion and git services commented on SOLR-5880:
---

Commit 1583722 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583722 ]

SOLR-5880: Fix test failure when n=1. Make it at least 2.

> org.apache.solr.client.solrj.impl.CloudSolrServerTest is failing pretty much 
> every time for a long time with an exception about not being able to connect 
> to ZooKeeper within the timeout.
> --
>
> Key: SOLR-5880
> URL: https://issues.apache.org/jira/browse/SOLR-5880
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.8, 5.0
>
>
> This test is failing consistently, though currently only on Policeman Jenkins 
> servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5831) Scale score PostFilter

2014-04-01 Thread Peter Keegan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Keegan updated SOLR-5831:
---

Attachment: SOLR-5831.patch

Thanks Joel. I found a bug in the Collector's 'finish()' method that wasn't 
obvious until I added a secondary Sort to a query. Patch updated.

> Scale score PostFilter
> --
>
> Key: SOLR-5831
> URL: https://issues.apache.org/jira/browse/SOLR-5831
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.7
>Reporter: Peter Keegan
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-5831.patch, SOLR-5831.patch, SOLR-5831.patch, 
> TestScaleScoreQParserPlugin.patch
>
>
> The ScaleScoreQParserPlugin is a PostFilter that performs score scaling.
> This is an alternative to using a function query wrapping a scale() wrapping 
> a query(). For example:
> select?qq={!edismax v='news' qf='title^2 
> body'}&scaledQ=scale(product(query($qq),1),0,1)&q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))&fq={!query
>  v=$qq}
> The problem with this query is that it has to scale every hit. Usually, only 
> the returned hits need to be scaled,
> but there may be use cases where the number of hits to be scaled is greater 
> than the returned hit count,
> but less than or equal to the total hit count.
> Sample syntax:
> fq={!scalescore+l=0.0 u=1.0 maxscalehits=1 
> func=sum(product(sscore(),0.75),product(field(myfield),0.25))}
> l=0.0 u=1.0   //Scale scores to values between 0-1, inclusive 
> maxscalehits=1//The maximum number of result scores to scale (-1 = 
> all hits, 0 = results 'page' size)
> func=...  //Apply the composite function to each hit. The 
> scaled score value is accessed by the 'score()' value source
> All parameters are optional. The defaults are:
> l=0.0 u=1.0
> maxscalehits=0 (result window size)
> func=(null)
>  
> Note: this patch is not complete, as it contains no test cases and may not 
> conform 
> to all the guidelines in http://wiki.apache.org/solr/HowToContribute. 
>  
> I would appreciate any feedback on the usability and implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5831) Scale score PostFilter

2014-04-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956630#comment-13956630
 ] 

Joel Bernstein edited comment on SOLR-5831 at 4/1/14 3:21 PM:
--

Hi Peter,

I haven't forgotten about this ticket. I've got one ticket ahead of this to 
finish for Solr 4.8 and then I'll work with you to try to get this ticket ready 
for Solr 4.9.

Joel


was (Author: joel.bernstein):
Hi Peter,

I haven't forgotten about this ticket. I've got one ticket ahead this to finish 
for Solr 4.8 and then I'll work with you to try to get this ticket ready for 
Solr 4.9.

Joel

> Scale score PostFilter
> --
>
> Key: SOLR-5831
> URL: https://issues.apache.org/jira/browse/SOLR-5831
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.7
>Reporter: Peter Keegan
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-5831.patch, SOLR-5831.patch, 
> TestScaleScoreQParserPlugin.patch
>
>
> The ScaleScoreQParserPlugin is a PostFilter that performs score scaling.
> This is an alternative to using a function query wrapping a scale() wrapping 
> a query(). For example:
> select?qq={!edismax v='news' qf='title^2 
> body'}&scaledQ=scale(product(query($qq),1),0,1)&q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))&fq={!query
>  v=$qq}
> The problem with this query is that it has to scale every hit. Usually, only 
> the returned hits need to be scaled,
> but there may be use cases where the number of hits to be scaled is greater 
> than the returned hit count,
> but less than or equal to the total hit count.
> Sample syntax:
> fq={!scalescore+l=0.0 u=1.0 maxscalehits=1 
> func=sum(product(sscore(),0.75),product(field(myfield),0.25))}
> l=0.0 u=1.0   //Scale scores to values between 0-1, inclusive 
> maxscalehits=1//The maximum number of result scores to scale (-1 = 
> all hits, 0 = results 'page' size)
> func=...  //Apply the composite function to each hit. The 
> scaled score value is accessed by the 'score()' value source
> All parameters are optional. The defaults are:
> l=0.0 u=1.0
> maxscalehits=0 (result window size)
> func=(null)
>  
> Note: this patch is not complete, as it contains no test cases and may not 
> conform 
> to all the guidelines in http://wiki.apache.org/solr/HowToContribute. 
>  
> I would appreciate any feedback on the usability and implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5831) Scale score PostFilter

2014-04-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956630#comment-13956630
 ] 

Joel Bernstein commented on SOLR-5831:
--

Hi Peter,

I haven't forgotten about this ticket. I've got one ticket ahead of this to finish 
for Solr 4.8 and then I'll work with you to try to get this ticket ready for 
Solr 4.9.

Joel

> Scale score PostFilter
> --
>
> Key: SOLR-5831
> URL: https://issues.apache.org/jira/browse/SOLR-5831
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.7
>Reporter: Peter Keegan
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-5831.patch, SOLR-5831.patch, 
> TestScaleScoreQParserPlugin.patch
>
>
> The ScaleScoreQParserPlugin is a PostFilter that performs score scaling.
> This is an alternative to using a function query wrapping a scale() wrapping 
> a query(). For example:
> select?qq={!edismax v='news' qf='title^2 
> body'}&scaledQ=scale(product(query($qq),1),0,1)&q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))&fq={!query
>  v=$qq}
> The problem with this query is that it has to scale every hit. Usually, only 
> the returned hits need to be scaled,
> but there may be use cases where the number of hits to be scaled is greater 
> than the returned hit count,
> but less than or equal to the total hit count.
> Sample syntax:
> fq={!scalescore+l=0.0 u=1.0 maxscalehits=1 
> func=sum(product(sscore(),0.75),product(field(myfield),0.25))}
> l=0.0 u=1.0   //Scale scores to values between 0-1, inclusive 
> maxscalehits=1//The maximum number of result scores to scale (-1 = 
> all hits, 0 = results 'page' size)
> func=...  //Apply the composite function to each hit. The 
> scaled score value is accessed by the 'score()' value source
> All parameters are optional. The defaults are:
> l=0.0 u=1.0
> maxscalehits=0 (result window size)
> func=(null)
>  
> Note: this patch is not complete, as it contains no test cases and may not 
> conform 
> to all the guidelines in http://wiki.apache.org/solr/HowToContribute. 
>  
> I would appreciate any feedback on the usability and implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [newdev] Which branch to commit to?

2014-04-01 Thread Uwe Schindler
Hi,

 

-  trunk is on Java 7, yes.

-  Branch_4x (the stable branch) is now also on Java 7. This will be 
Release 4.8, arriving soon

-  Lucene_solr_4_7 was released as 4.7.1 today, still on Java 6. But it 
is unlikely that there will be a new release

-  Lucene_solr_3_6 will not get any new releases (it is on Java 5)

 

So any patch has to be on Java 7 and should be against trunk, sorry.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Fermin Silva [mailto:silvafe...@gmail.com] 
Sent: Tuesday, April 01, 2014 4:14 PM
To: dev@lucene.apache.org
Subject: Re: [newdev] Which branch to commit to?

 

I see that trunk works on Java 1.7, which is kind of an impediment for me right 
now.

In which cases do fixes/features go to a branch rather than trunk?

Would this case be an example, e.g. working on branch lucene_solr_4_7 rather 
than trunk?

Thanks again

 

On Tue, Apr 1, 2014 at 10:44 AM, Shai Erera  wrote:

In general you should check out 'trunk' and fix/add stuff there. The community 
will then decide whether the feature/fix should also be backported to 4.x (very 
likely) and whether a 3.x release is also needed (very unlikely).

Shai

 

On Tue, Apr 1, 2014 at 4:39 PM, Fermin Silva  wrote:

Hi all,

I found a possible feature addition to the ReplicationHandler class plus some 
fixes to allow better extension/subclassing of this file.



I'd like to contribute but I'm unsure about the steps required.

Firstly, I should create a JIRA ticket, I'm OK with that.
Secondly, I should checkout a branch, do the fixes and post the patch.

Now, which branch should I check out?

The fix I'd like to contribute applies to SOLR 3.5.x, which is our production 
version, but I'm sure that's a bit obsolete.

Also, I'm not sure if SOLR 4.x branch still uses the same kind of replication, 
let alone trunk, which heads to SOLR 5.x.

So, which branch do you think I should check out? Should I also backport or 
'forwardport' to other branches?

Thanks in advance

-- 

Fermin Silva

 

 



Re: [newdev] Which branch to commit to?

2014-04-01 Thread Fermin Silva
I see that trunk works on Java 1.7, which is kind of an impediment for me
right now.

In which cases do fixes/features go to a branch rather than trunk?
Would this case be an example, e.g. working on branch lucene_solr_4_7
rather than trunk?

Thanks again


On Tue, Apr 1, 2014 at 10:44 AM, Shai Erera  wrote:

> In general you should check out 'trunk' and fix/add stuff there. The
> community will then decide whether the feature/fix should also be
> backported to 4.x (very likely) and whether a 3.x release is also needed
> (very unlikely).
>
> Shai
>
>
> On Tue, Apr 1, 2014 at 4:39 PM, Fermin Silva  wrote:
>
>> Hi all,
>>
>> I found a possible feature addition to the ReplicationHandler class plus
>> some fixes to allow better extension/subclassing of this file.
>>
>> I'd like to contribute but I'm unsure about the steps required.
>> Firstly, I should create a JIRA ticket, I'm OK with that.
>> Secondly, I should checkout a branch, do the fixes and post the patch.
>>
>> Now, which branch should I check out?
>> The fix I'd like to contribute applies to SOLR 3.5.x, which is our
>> production version, but I'm sure that's a bit obsolete.
>> Also, I'm not sure if SOLR 4.x branch still uses the same kind of
>> replication, let alone trunk, which heads to SOLR 5.x.
>>
>> So, which branch do you think I should check out? Should I also backport or
>> *'forwardport'* to other branches?
>>
>> Thanks in advance
>> --
>> Fermin Silva
>>
>
>


[jira] [Updated] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5941:


Fix Version/s: 5.0
   4.8

> CommitTracker should use the default UpdateProcessingChain for autocommit
> -
>
> Key: SOLR-5941
> URL: https://issues.apache.org/jira/browse/SOLR-5941
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.6, 4.7
>Reporter: ludovic Boutros
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5941.patch
>
>
> Currently, the CommitTracker class is using the UpdateHandler directly for 
> autocommit.
> If a custom update processor is configured with a commit action, nothing is 
> done until an explicit commit is done by the client.
> This can produce incoherent behaviors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [newdev] Which branch to commit to?

2014-04-01 Thread Shai Erera
In general you should check out 'trunk' and fix/add stuff there. The
community will then decide whether the feature/fix should also be
backported to 4.x (very likely) and whether a 3.x release is also needed
(very unlikely).

Shai


On Tue, Apr 1, 2014 at 4:39 PM, Fermin Silva  wrote:

> Hi all,
>
> I found a possible feature addition to the ReplicationHandler class plus
> some fixes to allow better extension/subclassing of this file.
>
> I'd like to contribute but I'm unsure about the steps required.
> Firstly, I should create a JIRA ticket, I'm OK with that.
> Secondly, I should checkout a branch, do the fixes and post the patch.
>
> Now, which branch should I check out?
> The fix I'd like to contribute applies to SOLR 3.5.x, which is our
> production version, but I'm sure that's a bit obsolete.
> Also, I'm not sure if SOLR 4.x branch still uses the same kind of
> replication, let alone trunk, which heads to SOLR 5.x.
>
> So, which branch do you think I should check out? Should I also backport or
> *'forwardport'* to other branches?
>
> Thanks in advance
> --
> Fermin Silva
>


[newdev] Which branch to commit to?

2014-04-01 Thread Fermin Silva
Hi all,

I found a possible feature addition to the ReplicationHandler class plus
some fixes to allow better extension/subclassing of this file.

I'd like to contribute but I'm unsure about the steps required.
Firstly, I should create a JIRA ticket, I'm OK with that.
Secondly, I should checkout a branch, do the fixes and post the patch.

Now, which branch should I check out?
The fix I'd like to contribute applies to SOLR 3.5.x, which is our
production version, but I'm sure that's a bit obsolete.
Also, I'm not sure if SOLR 4.x branch still uses the same kind of
replication, let alone trunk, which heads to SOLR 5.x.

So, which branch do you think I should check out? Should I also backport or
*'forwardport'* to other branches?

Thanks in advance
-- 
Fermin Silva


[RESULT][VOTE] Lucene / Solr 4.7.1 RC2

2014-04-01 Thread Steve Rowe
This vote has passed.

I’ll finish up the release.

Steve

On Apr 1, 2014, at 7:35 AM, Simon Willnauer  wrote:

> I guess that vote passed?
> 
> On Tue, Apr 1, 2014 at 11:02 AM, Tommaso Teofili
>  wrote:
>> +1 smoke tester is happy.
>> 
>> Tommaso
>> 
>> 
>> 2014-03-29 9:46 GMT+01:00 Steve Rowe :
>> 
>>> Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
>>> 
>>> Download it here:
>>> 
>>> 
>>> 
>>> Smoke tester cmdline (from the lucene_solr_4_7 branch):
>>> 
>>> python3.2 -u dev-tools/scripts/smokeTestRelease.py \
>>> 
>>> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
>>> \
>>> 1582953 4.7.1 /tmp/4.7.1-smoke
>>> 
>>> The smoke tester passed for me: SUCCESS! [0:50:29.936732]
>>> 
>>> My vote: +1
>>> 
>>> Steve
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5943) SolrCmdDistributor does not distribute the openSearcher parameter

2014-04-01 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5943:
--

Fix Version/s: 5.0
   4.8

> SolrCmdDistributor does not distribute the openSearcher parameter
> -
>
> Key: SOLR-5943
> URL: https://issues.apache.org/jira/browse/SOLR-5943
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.6.1, 4.7
>Reporter: ludovic Boutros
> Fix For: 4.8, 5.0
>
>
> The openSearcher parameter in a commit command is totally ignored by the 
> SolrCmdDistributor :
> {code:title=SolrCmdDistributor.java|borderStyle=solid}
>  void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
> if (cmd == null) return;
> ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
> : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher, 
> cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
>   }{code}
> I think the SolrJ API should take this parameter into account as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5943) SolrCmdDistributor does not distribute the openSearcher parameter

2014-04-01 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-5943:
-

 Summary: SolrCmdDistributor does not distribute the openSearcher 
parameter
 Key: SOLR-5943
 URL: https://issues.apache.org/jira/browse/SOLR-5943
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.7, 4.6.1
Reporter: ludovic Boutros


The openSearcher parameter in a commit command is totally ignored by the 
SolrCmdDistributor :

{code:title=SolrCmdDistributor.java|borderStyle=solid}
 void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
if (cmd == null) return;
ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
: AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher, 
cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
  }{code}

I think the SolrJ API should take this parameter into account as well.
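
A minimal sketch of the direction a fix could take, forwarding the flag explicitly as a request parameter (this assumes UpdateParams.OPEN_SEARCHER is the parameter name the receiving update handler reads; it is not a committed fix):

{code}
 void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
    if (cmd == null) return;
    ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
        : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher,
        cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
    // forward openSearcher too, otherwise the receiving node falls back to its default
    ureq.setParam(UpdateParams.OPEN_SEARCHER, Boolean.toString(cmd.openSearcher));
  }
{code}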





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5942) verify the login functionality of the solar admin

2014-04-01 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956473#comment-13956473
 ] 

Noble Paul commented on SOLR-5942:
--

Why is this a bug? When did Solr offer to have a login for the admin app?

> verify the login functionality of the solar admin 
> --
>
> Key: SOLR-5942
> URL: https://issues.apache.org/jira/browse/SOLR-5942
> Project: Solr
>  Issue Type: Bug
> Environment: windows 7+ Ie
>Reporter: ram
>
> verify the login functionality of the solar admin with credentials



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5564) Currency characters are not tokenized

2014-04-01 Thread Jerome Lanneluc (JIRA)
Jerome Lanneluc created LUCENE-5564:
---

 Summary: Currency characters are not tokenized
 Key: LUCENE-5564
 URL: https://issues.apache.org/jira/browse/LUCENE-5564
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.6.2
Reporter: Jerome Lanneluc


It is not possible to have the SmartChineseAnalyzer (nor the StandardAnalyzer) 
include the currency characters (e.g. $ or €) in the token stream.

For example, the following will output "100 200". I would expect a way to 
configure the analyzers to output "100$ 200€" instead.

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class Test {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new SmartChineseAnalyzer(Version.LUCENE_36); // or: new StandardAnalyzer(Version.LUCENE_36)
        TokenStream stream = analyzer.tokenStream(null, new StringReader("100$ 200€"));
        stream.reset(); // required by the TokenStream contract before incrementToken()
        while (stream.incrementToken()) {
            CharTermAttribute attr = stream.getAttribute(CharTermAttribute.class);
            System.out.print(new String(attr.buffer(), 0, attr.length()));
            System.out.print(' ');
        }
        stream.end();
        stream.close();
    }
}
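
A possible workaround on the application side (not a fix in the analyzers themselves) is to map the currency signs to placeholder tokens before tokenization with a MappingCharFilter. A sketch against the 3.6 API; the replacement strings are arbitrary choices:

{code}
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.MappingCharFilter;
import org.apache.lucene.analysis.NormalizeCharMap;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class CurrencyWorkaround {
    public static void main(String[] args) throws Exception {
        NormalizeCharMap map = new NormalizeCharMap();
        map.add("$", " dollar "); // replacement strings are arbitrary
        map.add("€", " euro ");
        Reader filtered = new MappingCharFilter(map, new StringReader("100$ 200€"));
        TokenStream stream = new StandardAnalyzer(Version.LUCENE_36).tokenStream(null, filtered);
        stream.reset();
        CharTermAttribute term = stream.getAttribute(CharTermAttribute.class);
        while (stream.incrementToken()) {
            System.out.println(term.toString()); // prints: 100, dollar, 200, euro
        }
        stream.end();
        stream.close();
    }
}
{code}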




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5922) Add Properties and other parameters to SolrJ Collection Admin Request calls

2014-04-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956462#comment-13956462
 ] 

Shalin Shekhar Mangar commented on SOLR-5922:
-

bq. Test is broken when it tries to verify if the properties are getting used 
correctly. Any ideas on what I am doing incorrectly?

Thanks Varun. I don't think initCore is the right way. Why don't you make a 
core admin status call against a replica of the newly created collection? 
(hint: use the new clusterstatus API to know the baseUrl of the replica)
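
Something along these lines, using the generic SolrJ request API (a sketch: the walk through the clusterstatus response is elided, and baseUrl / coreName are placeholders you would read out of it):

{code}
// Hypothetical test helper: look up the collection state, then query a
// replica's core admin STATUS to read back the core.properties values.
ModifiableSolrParams clusterParams = new ModifiableSolrParams();
clusterParams.set("action", "CLUSTERSTATUS");
clusterParams.set("collection", "testcollection");
QueryRequest clusterReq = new QueryRequest(clusterParams);
clusterReq.setPath("/admin/collections");
NamedList<Object> clusterRsp = cloudServer.request(clusterReq);
// ... dig the replica's base_url and core name out of clusterRsp (elided) ...

HttpSolrServer node = new HttpSolrServer(baseUrl);
ModifiableSolrParams statusParams = new ModifiableSolrParams();
statusParams.set("action", "STATUS");
statusParams.set("core", coreName);
QueryRequest statusReq = new QueryRequest(statusParams);
statusReq.setPath("/admin/cores");
NamedList<Object> statusRsp = node.request(statusReq);
{code}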

> Add Properties and other parameters to SolrJ Collection Admin Request calls
> ---
>
> Key: SOLR-5922
> URL: https://issues.apache.org/jira/browse/SOLR-5922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0
>
> Attachments: SOLR-5922.patch, SOLR-5922.patch
>
>
> SOLR-5208 added functionality for the setting of core.properties key/values 
> at create-time on Collections API.
> We should allow the same behaviour for SolrJ API calls as well.
> Also I want to add get and set methods to be able to add 'instanceDir', 
> 'dataDir', 'ulogDir' for a create collection call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5940) Make post.jar report back detailed error in case of 400 responses

2014-04-01 Thread Sameer Maggon (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sameer Maggon updated SOLR-5940:


Attachment: solr-5940.patch

Patch that adds the error returned by Solr and prints it out as a warning.
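
The gist of the change (an illustrative helper, assuming the tool talks plain HttpURLConnection as SimplePostTool does; not necessarily the attached patch verbatim):

{code}
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;

// Read the body of a failed response so the Solr "msg" can be included in the warning.
static String readErrorBody(HttpURLConnection conn) throws Exception {
  InputStream err = conn.getErrorStream(); // non-null for 4xx/5xx responses
  if (err == null) return "";
  BufferedReader reader = new BufferedReader(new InputStreamReader(err, "UTF-8"));
  StringBuilder body = new StringBuilder();
  for (String line; (line = reader.readLine()) != null; ) {
    body.append(line).append('\n');
  }
  reader.close();
  return body.toString();
}
{code}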

> Make post.jar report back detailed error in case of 400 responses
> -
>
> Key: SOLR-5940
> URL: https://issues.apache.org/jira/browse/SOLR-5940
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.7
>Reporter: Sameer Maggon
> Attachments: solr-5940.patch
>
>
> Currently post.jar does not print the detailed error message that is encountered 
> during indexing. In certain use cases, it's helpful to see the error message 
> so that clients can take appropriate actions.
> In 4.7, here's what gets shown if there is an error during indexing:
> SimplePostTool: WARNING: Solr returned an error #400 Bad Request
> SimplePostTool: WARNING: IOException while reading response: 
> java.io.IOException: Server returned HTTP response code: 400 for URL: 
> http://localhost:8983/solr/update
> It would be helpful to print out the "msg" that is returned from Solr.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5918) ant clean does not remove ZooKeeper data

2014-04-01 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956449#comment-13956449
 ] 

Varun Thacker commented on SOLR-5918:
-

With the latest checkout of trunk I don't get any errors related to 
initializing QueryElevationComponent. Maybe I was doing something wrong that 
time. Sorry for the noise there.


> ant clean does not remove ZooKeeper data
> 
>
> Key: SOLR-5918
> URL: https://issues.apache.org/jira/browse/SOLR-5918
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-5918.patch
>
>
> From the solr/ directory when I run 'ant clean' it cleans up all the 
> necessary compiled files etc.
> This also removes the indexes, rightly so, but fails to delete the ZooKeeper 
> data.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2412) Multipath hierarchical faceting

2014-04-01 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-2412:
-

Attachment: SOLR-2412.patch

Patch updated to Solr 4.6.1 and verified (patching, executing 'ant 
run-example', running the sample script, indexing the output and inspecting the 
result in a browser) on a clean SVN checkout.

The old patch did not have properly updated build scripts. My apologies to J.L. 
Hill and others who might have tried applying it.

> Multipath hierarchical faceting
> ---
>
> Key: SOLR-2412
> URL: https://issues.apache.org/jira/browse/SOLR-2412
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Affects Versions: 4.0
> Environment: Fast IO when huge hierarchies are used
>Reporter: Toke Eskildsen
>  Labels: contrib, patch
> Attachments: SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, 
> SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch
>
>
> Hierarchical faceting with slow startup, low memory overhead and fast 
> response. Distinguishing features as compared to SOLR-64 and SOLR-792 are
>   * Multiple paths per document
>   * Query-time analysis of the facet-field; no special requirements for 
> indexing besides retaining separator characters in the terms used for faceting
>   * Optional custom sorting of tag values
>   * Recursive counting of references to tags at all levels of the output
> This is a shell around LUCENE-2369, making it work with the Solr API. The 
> underlying principle is to reference terms by their ordinals and create an 
> index wide documents to tags map, augmented with a compressed representation 
> of hierarchical levels.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.7-Linux (64bit/jdk1.6.0_45) - Build # 65 - Failure!

2014-04-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.7-Linux/65/
Java: 64bit/jdk1.6.0_45 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:55561 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:55561 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([E986597FCA619E09:6860D767BD3EFE35]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:148)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:99)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:94)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:85)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFail

[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956419#comment-13956419
 ] 

ASF subversion and git services commented on SOLR-5488:
---

Commit 1583636 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1583636 ]

SOLR-5488: Fix up test failures for Analytics Component. Runs clean locally.

> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, eoe.errors
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5922) Add Properties and other parameters to SolrJ Collection Admin Request calls

2014-04-01 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-5922:


Attachment: SOLR-5922.patch

- New patch with a test case. Test is broken when it tries to verify if the 
properties are getting used correctly. Any ideas on what I am doing incorrectly?

- Fixed typo in CoreAdminCreateDiscoveryTest

> Add Properties and other parameters to SolrJ Collection Admin Request calls
> ---
>
> Key: SOLR-5922
> URL: https://issues.apache.org/jira/browse/SOLR-5922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0
>
> Attachments: SOLR-5922.patch, SOLR-5922.patch
>
>
> SOLR-5208 added functionality for the setting of core.properties key/values 
> at create-time on Collections API.
> We should allow the same behaviour for SolrJ API calls as well.
> Also I want to add get and set methods to be able to add 'instanceDir', 
> 'dataDir', 'ulogDir' for a create collection call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-04-01 Thread Simon Willnauer
I guess that vote passed?

On Tue, Apr 1, 2014 at 11:02 AM, Tommaso Teofili
 wrote:
> +1 smoke tester is happy.
>
> Tommaso
>
>
> 2014-03-29 9:46 GMT+01:00 Steve Rowe :
>
>> Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
>>
>> Download it here:
>>
>> 
>>
>> Smoke tester cmdline (from the lucene_solr_4_7 branch):
>>
>> python3.2 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
>> \
>> 1582953 4.7.1 /tmp/4.7.1-smoke
>>
>> The smoke tester passed for me: SUCCESS! [0:50:29.936732]
>>
>> My vote: +1
>>
>> Steve
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5942) verify the login functionality of the solar admin

2014-04-01 Thread Raja Nagendra Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956395#comment-13956395
 ] 

Raja Nagendra Kumar commented on SOLR-5942:
---

Ram, could you give more info on what you mean by "verify the login functionality"?

> verify the login functionality of the solar admin 
> --
>
> Key: SOLR-5942
> URL: https://issues.apache.org/jira/browse/SOLR-5942
> Project: Solr
>  Issue Type: Bug
> Environment: windows 7+ Ie
>Reporter: ram
>
> verify the login functionality of the solar admin with credentials



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-04-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956358#comment-13956358
 ] 

Rafał Kuć commented on SOLR-5935:
-

The test is done using JMeter with batch indexing, so no streaming here. The 
indexing application is not Java, so streaming is not possible in this setup. 
The test indexing procedure sends 10 documents per JMeter thread. The documents 
are not small.

And yes, we tried increasing the total number of connections at the shard 
handler factory level. Should we try increasing max connections total and max 
connections per host at the request handler level?
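
For what it's worth, these are the same knobs the shard handler factory passes when it builds its client, in SolrJ/HttpClientUtil terms (a sketch; the values are placeholders, not recommendations):

{code}
import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.common.params.ModifiableSolrParams;

public class PoolConfig {
  // Build an HttpClient with larger pool limits; HttpSolrServer and
  // LBHttpSolrServer both accept it as a constructor argument.
  public static HttpClient largerPool() {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 1024);         // placeholder value
    params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 256); // placeholder value
    return HttpClientUtil.createClient(params);
  }
}
{code}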



> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed on the mailing list - let's try to find out why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we run into an issue - SolrCloud hangs
> when querying and indexing is run at the same time. First we see a
> normal load on the machines, then the load starts to drop, and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() 

[jira] [Updated] (SOLR-1632) Distributed IDF

2014-04-01 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1632:
---

Attachment: SOLR-5488.patch

- Fixed global stats distribution
- Added an assert on query explain (docNum, weight and idf should be the same in 
distributed tests); this assert is valid on the 2nd query only, since global stats 
are merged at the end of the 1st query.

> Distributed IDF
> ---
>
> Key: SOLR-1632
> URL: https://issues.apache.org/jira/browse/SOLR-1632
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 1.5
>Reporter: Andrzej Bialecki 
>Assignee: Mark Miller
> Fix For: 4.8, 5.0
>
> Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
> SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
> SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
> SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-5488.patch, 
> distrib-2.patch, distrib.patch
>
>
> Distributed IDF is a valuable enhancement for distributed search across 
> non-uniform shards. This issue tracks the proposed implementation of an API 
> to support this functionality in Solr.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5942) verify the login functionality of the solar admin

2014-04-01 Thread ram (JIRA)
ram created SOLR-5942:
-

 Summary: verify the login functionality of the solar admin 
 Key: SOLR-5942
 URL: https://issues.apache.org/jira/browse/SOLR-5942
 Project: Solr
  Issue Type: Bug
 Environment: windows 7+ Ie
Reporter: ram


verify the login functionality of the solar admin with credentials



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-01 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-5941:
--

Attachment: SOLR-5941.patch

A small starting patch.

> CommitTracker should use the default UpdateProcessingChain for autocommit
> -
>
> Key: SOLR-5941
> URL: https://issues.apache.org/jira/browse/SOLR-5941
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.6, 4.7
>Reporter: ludovic Boutros
> Attachments: SOLR-5941.patch
>
>
> Currently, the CommitTracker class is using the UpdateHandler directly for 
> autocommit.
> If a custom update processor is configured with a commit action, nothing is 
> done until an explicit commit is done by the client.
> This can produce incoherent behaviors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-01 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-5941:
-

 Summary: CommitTracker should use the default 
UpdateProcessingChain for autocommit
 Key: SOLR-5941
 URL: https://issues.apache.org/jira/browse/SOLR-5941
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.7, 4.6
Reporter: ludovic Boutros


Currently, the CommitTracker class is using the UpdateHandler directly for 
autocommit.

If a custom update processor is configured with a commit action, nothing is 
done until an explicit commit is done by the client.

This can produce incoherent behaviors.
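
For reference, a minimal sketch of the direction a fix could take (running the auto-commit through the core's default chain so processCommit hooks fire; core, openSearcher and softCommit stand in for the tracker's state, and this is not an attached patch):

{code}
// Illustrative only: run the auto-commit through the default update processor
// chain instead of calling the UpdateHandler directly.
private void commitThroughChain(SolrCore core, boolean openSearcher, boolean softCommit) throws Exception {
  SolrQueryRequest req = new LocalSolrQueryRequest(core, new ModifiableSolrParams());
  try {
    UpdateRequestProcessorChain chain = core.getUpdateProcessingChain(null); // null = default chain
    UpdateRequestProcessor proc = chain.createProcessor(req, new SolrQueryResponse());
    CommitUpdateCommand cmd = new CommitUpdateCommand(req, false);
    cmd.openSearcher = openSearcher;
    cmd.softCommit = softCommit;
    proc.processCommit(cmd);
    proc.finish();
  } finally {
    req.close();
  }
}
{code}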



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-04-01 Thread Tommaso Teofili
+1 smoke tester is happy.

Tommaso


2014-03-29 9:46 GMT+01:00 Steve Rowe :

> Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
>
> Download it here:
> <
> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
> >
>
> Smoke tester cmdline (from the lucene_solr_4_7 branch):
>
> python3.2 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/\
> 1582953 4.7.1 /tmp/4.7.1-smoke
>
> The smoke tester passed for me: SUCCESS! [0:50:29.936732]
>
> My vote: +1
>
> Steve
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-5563) Remove sep layout

2014-04-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956232#comment-13956232
 ] 

Michael McCandless commented on LUCENE-5563:


+1

> Remove sep layout
> -
>
> Key: LUCENE-5563
> URL: https://issues.apache.org/jira/browse/LUCENE-5563
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>
> This has fallen behind feature-wise, and isn't really performant (the 4.1 
> block format is a great improvement).
> I think we should remove it; it's served its purpose, but it's time to retire...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956228#comment-13956228
 ] 

ASF subversion and git services commented on LUCENE-2446:
-

Commit 1583565 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1583565 ]

LUCENE-2446: ensure we close file if we hit exception writing codec header

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1457 - Failure!

2014-04-01 Thread Robert Muir
I'll commit a fix. MemoryPostings didn't have a codec header before, and
I added one. But we need a try/catch in the ctor in case it hits an
exception.
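
The usual pattern, roughly (a sketch from memory, not the actual commit; the
constant names are assumptions):

IndexOutput out = dir.createOutput(fileName, context);
boolean success = false;
try {
  CodecUtil.writeHeader(out, CODEC_NAME, VERSION_CURRENT); // may throw
  success = true;
} finally {
  if (!success) {
    IOUtils.closeWhileHandlingException(out); // don't leak the open file
  }
}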

On Tue, Apr 1, 2014 at 4:15 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1457/
> Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC 
> -XX:-UseSuperWord
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.index.TestIndexWriterExceptions.testForceMergeExceptions
>
> Error Message:
> MockDirectoryWrapper: cannot close: there are still open files: 
> {_e_Memory_0.ram=1}
>
> Stack Trace:
> java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are 
> still open files: {_e_Memory_0.ram=1}
> at 
> __randomizedtesting.SeedInfo.seed([79454932DFC46C93:3F6BE8A849A77D0]:0)
> at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:662)
> at 
> org.apache.lucene.index.TestIndexWriterExceptions.testForceMergeExceptions(TestIndexWriterExceptions.java:973)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.RuntimeException: unclosed IndexOut

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1457 - Failure!

2014-04-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1457/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC -XX:-UseSuperWord

1 tests failed.
REGRESSION:  
org.apache.lucene.index.TestIndexWriterExceptions.testForceMergeExceptions

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: 
{_e_Memory_0.ram=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_e_Memory_0.ram=1}
at 
__randomizedtesting.SeedInfo.seed([79454932DFC46C93:3F6BE8A849A77D0]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:662)
at 
org.apache.lucene.index.TestIndexWriterExceptions.testForceMergeExceptions(TestIndexWriterExceptions.java:973)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.RuntimeException: unclosed IndexOutput: _e_Memory_0.ram
at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:551)
at 
org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:523)
at 
org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:44)
at 
org.apache.lucene.codecs.memory.MemoryPostingsF

[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956218#comment-13956218
 ] 

Robert Muir commented on LUCENE-2446:
-

I don't think it's true that it "only applies to disk-based indexes".

FilterReader/SlowWrapper etc. pass this down to their underlying readers, so you 
still get the check on the underlying data, e.g. if you are using 
SortingMergePolicy.

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].
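
To illustrate the copy-verification use case in generic terms (a hedged sketch using plain java.util.zip, not the API from the attached patch):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

class ChecksumExample {
  // Compute a CRC32 over an index file; comparing the value before and
  // after a copy is the kind of corruption check the issue asks the
  // index format itself to support.
  static long crc32Of(Path file) throws IOException {
    CRC32 crc = new CRC32();
    byte[] buf = new byte[8192];
    try (InputStream in = Files.newInputStream(file)) {
      int n;
      while ((n = in.read(buf)) != -1) {
        crc.update(buf, 0, n);
      }
    }
    return crc.getValue();
  }
}
{code}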



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956206#comment-13956206
 ] 

Uwe Schindler commented on LUCENE-2446:
---

One idea to make the whole thing more hidden from the user:
I am not sure we need the abstract checkIntegrity() on the public 
AtomicReader API, because it only applies to disk-based indexes. Would it not 
be enough to have it on SegmentReader? I know we might need some instanceof 
checks while merging, but I think we do those already. That way it would not 
be public and we could hide it from the user. We would then simply not 
validate FilterAtomicReaders when merging (index splitting/sorting).

Any opinion about this?
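
A hedged sketch of that instanceof approach (class and method names hypothetical; assumes checkIntegrity() lives only on SegmentReader):

{code:java}
import java.io.IOException;
import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.SegmentReader;

class MergeIntegrityCheck {
  // Only disk-backed SegmentReaders are validated; FilterAtomicReader
  // wrappers (index splitting/sorting) would silently skip the check,
  // which is the trade-off noted above.
  static void maybeCheckIntegrity(AtomicReader reader) throws IOException {
    if (reader instanceof SegmentReader) {
      ((SegmentReader) reader).checkIntegrity();
    }
  }
}
{code}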

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956202#comment-13956202
 ] 

Uwe Schindler commented on LUCENE-2446:
---

Thanks Robert!
{{checkIntegrity()}} is, in my opinion, the best we can have: most generic, but 
not too easy to misunderstand and run after every commit. I was just afraid of 
the {{optimize()}}-loving people! 
Uwe

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5563) Remove sep layout

2014-04-01 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5563:
---

 Summary: Remove sep layout
 Key: LUCENE-5563
 URL: https://issues.apache.org/jira/browse/LUCENE-5563
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


This has fallen behind feature-wise, and isn't really performant (the 4.1 block 
format is a great improvement).

I think we should remove it; it's served its purpose, but it's time to retire it...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956189#comment-13956189
 ] 

ASF subversion and git services commented on LUCENE-2446:
-

Commit 1583550 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1583550 ]

LUCENE-2446: add checksums to index files

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-04-01 Thread Gary Yue (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956190#comment-13956190
 ] 

Gary Yue commented on SOLR-5931:


Thanks, Shawn. This should work for me!

> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
> <dataSource url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.
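
For reference, the substitution being tested looks like this (values hypothetical):

{code}
# solrcore.properties (hypothetical values)
dbhost=db1.example.com:5432
{code}

${dbhost} in db-data-config.xml resolves against this file, and per the report above the new value only takes effect after an unload/add or a full restart, not after a core reload.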



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-04-01 Thread Dawid Weiss
+1

SUCCESS! [1:37:39.680664]

On Mon, Mar 31, 2014 at 8:39 PM, Adrien Grand  wrote:
> +1
> SUCCESS! [1:30:20.918150]
>
> On Mon, Mar 31, 2014 at 5:40 PM, david.w.smi...@gmail.com
>  wrote:
>> +1
>>
>> SUCCESS! [1:51:37.952160]
>>
>>
>>
>> On Sat, Mar 29, 2014 at 4:46 AM, Steve Rowe  wrote:
>>>
>>> Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
>>>
>>> Download it here:
>>>
>>> 
>>>
>>> Smoke tester cmdline (from the lucene_solr_4_7 branch):
>>>
>>> python3.2 -u dev-tools/scripts/smokeTestRelease.py \
>>>
>>> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
>>> \
>>> 1582953 4.7.1 /tmp/4.7.1-smoke
>>>
>>> The smoke tester passed for me: SUCCESS! [0:50:29.936732]
>>>
>>> My vote: +1
>>>
>>> Steve
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>
>
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org