Re: Lucene tests killed one other SSD - Policeman Jenkins

2013-08-20 Thread Toke Eskildsen
On Mon, 2013-08-19 at 17:03 +0200, Uwe Schindler wrote:
 Finally the SSD device got unresponsive, and only after a power cycle
 was it responsive again. The error messages in dmesg look similar to
 other dying OCZ Vertex 2 drives.

The only statistics I could find were
http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-3.html
The article is a bit old and does not speak well for the Vertex 2 series.

 So just to conclude: Lucene kills SSDs :-)

I am an accomplice to murder!? Oh Noes!


- Toke Eskildsen, happily using an old 160GB Intel X25 SSD with 11TB
written and 3 reallocated sectors.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5171) SOLR Admin gui works in IE9, breaks in IE10. Workaround.

2013-08-20 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744796#comment-13744796
 ] 

Stefan Matheis (steffkes) commented on SOLR-5171:
-

that's already tracked in SOLR-4372 :)

  SOLR Admin gui works in IE9, breaks in IE10. Workaround. 
 --

 Key: SOLR-5171
 URL: https://issues.apache.org/jira/browse/SOLR-5171
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.4
 Environment: Windows7 and 8, IE10 Browser, Windows Server, Centos 
Reporter: Joseph L Howard
Assignee: Stefan Matheis (steffkes)
Priority: Minor
  Labels: patch
 Fix For: 4.5, 5.0

 Attachments: screenshot-1.jpg, screenshot-2.jpg, SOLR-5171.patch


 Solr Admin.html does not work in IE10. I thought it was unmatched tags; it's a 
 document mode issue. 
 Workaround for IE10: modify Admin.html by adding this meta tag after the doctype: 
 <meta http-equiv="x-ua-compatible" content="IE=9">
 I use Total Validator. 
 The line numbers refer to lines in the original source.
 Any with a line number of '0' are implicit tags added by Total Validator:
1   <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
2   W864 [WCAG 2.0 3.1.1 (A)] Use the 'lang' or 'xml:lang' attribute to denote
    the primary language of the document:
    <html>
   21   <head>
   23     <title>
   23       Solr Admin
   23     </title>
   25     <link rel="icon" type="image/ico" href="img/favicon.ico?_=4.4.0">
   27     <link rel="stylesheet" type="text/css" href="css/styles/common.css?_=4.4.0">
   28     <link rel="stylesheet" type="text/css" href="css/styles/analysis.css?_=4.4.0">
   29     <link rel="stylesheet" type="text/css" href="css/styles/cloud.css?_=4.4.0">
   30     <link rel="stylesheet" type="text/css" href="css/styles/cores.css?_=4.4.0">
   31     <link rel="stylesheet" type="text/css" href="css/styles/dashboard.css?_=4.4.0">
   32     <link rel="stylesheet" type="text/css" href="css/styles/dataimport.css?_=4.4.0">
   33     <link rel="stylesheet" type="text/css" href="css/styles/index.css?_=4.4.0">
   34     <link rel="stylesheet" type="text/css" href="css/styles/java-properties.css?_=4.4.0">
   35     <link rel="stylesheet" type="text/css" href="css/styles/logging.css?_=4.4.0">
   36     <link rel="stylesheet" type="text/css" href="css/styles/menu.css?_=4.4.0">
   37     <link rel="stylesheet" type="text/css" href="css/styles/plugins.css?_=4.4.0">
   38     <link rel="stylesheet" type="text/css" href="css/styles/documents.css?_=4.4.0">
   39     <link rel="stylesheet" type="text/css" href="css/styles/query.css?_=4.4.0">
   40     <link rel="stylesheet" type="text/css" href="css/styles/replication.css?_=4.4.0">
   41     <link rel="stylesheet" type="text/css" href="css/styles/schema-browser.css?_=4.4.0">
   42     <link rel="stylesheet" type="text/css" href="css/styles/threads.css?_=4.4.0">
   43     <link rel="stylesheet" type="text/css" href="css/chosen.css?_=4.4.0">
   45     <script type="text/javascript">
   52     </script>
   54   </head>
   55   <body>
   57     <div id="wrapper">
   59       <div id="header">
   61         <a href="./" id="solr">
   61           <span>
   61             Apache SOLR
   61           </span>
   61         </a>
   63         <p id="environment">
   63           &nbsp;
   63         </p>
   65       </div>
   67       <div id="main" class="clearfix">
   69         <div id="init-failures">
   71           P883 [WCAG 2.0 1.3.1 (A)] Nest headings properly (H1 > H2 > H3):
               <h2>
   71             SolrCore Initialization Failures
   71           </h2>
   72           P892 [WCAG 2.0 1.3.1 (A)] Use CSS for presentation effects:
               <ul>
   72           E610 One or more of the following tags are missing from within the enclosing tag: li
               </ul>
   73           <p>
   73             Please check your logs for more information
   73           </p>
   75         </div>
   77         <div id="content-wrapper">
   78           <div id="content">
   79             &nbsp;
   82           </div>
   83         </div>
   85         <div id="menu-wrapper">
   86           <div>
   88             <ul id="menu">
   90               <li id="index" class="global">
   90                 <p>
   90                   E049 No matching anchor name:
                       <a href="#/">
   90                     Dashboard
   90                   </a>
   90                 </p>
   90               </li>
   92               <li id="logging" class="global">
   92                 <p>
   92                   E049 No matching anchor name:
                       <a href="#/~logging">
   92                     Logging
   92                   </a>
   92                 </p>
   93                 <ul>
   94                   <li class="level">

[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2013-08-20 Thread William Harris (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744859#comment-13744859
 ] 

William Harris commented on SOLR-2894:
--

It's a pretty simple query: q=star:star&facet=on&facet.pivot=fieldA,fieldB, 
both regular single-valued solr.TextFields with solr.LowerCaseFilterFactory 
filters. All shards work well individually.
I'm looking at the logs but unfortunately I'm not seeing any other error 
messages there.

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.5

 Attachments: SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894-reworked.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (SOLR-2894) Implement distributed pivot faceting

2013-08-20 Thread William Harris (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744859#comment-13744859
 ] 

William Harris edited comment on SOLR-2894 at 8/20/13 11:12 AM:


It's a pretty simple query: q=star:star&facet=on&facet.pivot=fieldA,fieldB, 
both regular single-valued solr.TextFields with solr.LowerCaseFilterFactory 
filters. All shards work well individually.
I'm looking at the logs but unfortunately I'm not seeing any other error 
messages there.

It works as long as I use fewer than 6 shards. With 6 or more it fails with that 
error, regardless of which shards I use.

  was (Author: killscreen):
It's a pretty simple query: 
q=star:star&facet=on&facet.pivot=fieldA,fieldB, both regular single-valued 
solr.TextFields with solr.LowerCaseFilterFactory filters. All shards work well 
individually.
I'm looking at the logs but unfortunately I'm not seeing any other error 
messages there.
  
 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.5

 Attachments: SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894-reworked.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.




[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744870#comment-13744870
 ] 

Michael McCandless commented on LUCENE-5178:


+1, I reviewed the first patch.

We'll need to fix the facet module's dynamic range faceting to skip missing values; 
I can do this after you commit this patch...

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.
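The second option the issue describes, having DocValues fields track whether a value was indexed for each document, can be illustrated with a small self-contained sketch (plain Java, no Lucene dependency; the class and method names are hypothetical, not Lucene API):

```java
import java.util.BitSet;

// Hypothetical sketch: per-document values plus one bit per document
// recording whether a value was actually indexed, so callers can
// distinguish a stored 0 from "missing" (or substitute a configurable
// per-field default).
public class NumericDocValuesWithMissing {
    private final long[] values;
    private final BitSet docsWithValue = new BitSet();
    private final long missingDefault;

    NumericDocValuesWithMissing(int maxDoc, long missingDefault) {
        this.values = new long[maxDoc];
        this.missingDefault = missingDefault;
    }

    void set(int docId, long value) {
        values[docId] = value;
        docsWithValue.set(docId); // remember that this doc really had a value
    }

    boolean exists(int docId) {
        return docsWithValue.get(docId);
    }

    long get(int docId) {
        return exists(docId) ? values[docId] : missingDefault;
    }
}
```

A consumer such as range faceting could then call `exists` to skip missing documents instead of treating them as the default.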




[jira] [Commented] (SOLR-4372) Search text box in auto-complete/chooser extends outside of the dropdown pane on IE9 & FF 17+

2013-08-20 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744872#comment-13744872
 ] 

Jack Krupansky commented on SOLR-4372:
--

I am seeing this with IE10 as well, with Solr 4.5 nightly build as of August 
19, 2013.

 Search text box in auto-complete/chooser extends outside of the dropdown pane 
 on IE9 & FF 17+
 -

 Key: SOLR-4372
 URL: https://issues.apache.org/jira/browse/SOLR-4372
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.1, 4.2
 Environment: IE9
Reporter: Senthuran Sivananthan
Priority: Minor
 Attachments: chooser_ff17+.png, chooser_ie9.png


 This is an issue across all of the pages.
 The textbox in auto-complete/chooser extends outside of the dropdown page on 
 IE9 and FF17+.
 Looks like there's an explicit width of 130px being specified on the textbox:
 <input type="search" autocomplete="off" style="width: 130px;" tabindex="-1">




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 728 - Still Failing!

2013-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/728/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 9629 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=5F21763931E0D427 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.5 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.5-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dtests.disableHdfs=true -classpath 

[jira] [Updated] (SOLR-4688) add tests related to reporting core init failures and lazy loaded cores

2013-08-20 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4688:
-

Summary: add tests related to reporting core init failures and lazy loaded 
cores  (was: add tests realted to reporting core init failures and lazy loaded 
cores)

 add tests related to reporting core init failures and lazy loaded cores
 ---

 Key: SOLR-4688
 URL: https://issues.apache.org/jira/browse/SOLR-4688
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Erick Erickson

 Spin off of SOLR-4672 since Erick said he would worry about this so i don't 
 have to...
 we should have more tests that sanity check the behavior of lazy loaded 
 cores, and reporting back core init failures -- both as part of CoreAdmin 
 STATUS requests and in the error message returned when trying to use these 
 cores.




[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744879#comment-13744879
 ] 

Michael McCandless commented on LUCENE-3849:


I think because the facet module cut over from payloads to DV, many of the 
problematic TokenStreams disappeared?  But there was still one, inside 
DirectoryTaxonomyWriter, that I fixed in the patch ... it now calls 
clearAttributes and sets each att on incrementToken.

That's a good idea on end(); I'll do that and check all impls.

I don't see a better way than setting posInc to 0 in end ... and I agree this 
bug is bad.  It can also affect suggesters, e.g. if one uses ShingleFilter 
after StopFilter and the user's last word was a stop word.

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap of Analyzer.java (0), phrase queries won't work across pages if 
 one ends with stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in TokenStream.end(), subclasses of FilteringTokenFilter (e.g. 
 StopFilter) should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears the attributes for end() [they are dirty, except offset, which 
 is set by the tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().
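As a self-contained illustration of why the trailing holes matter (plain Java, no Lucene; a hypothetical model of the accounting, not the actual indexer code), the sketch below assigns positions across two field values with a position increment gap of 0. Only when the skipped positions left at the end of a value are added back, as the proposed end() fix would report them, does the next value start one position later, so a phrase query no longer matches across the boundary:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of position assignment: stop words are dropped but
// counted as skipped positions; skipped positions remaining at the end of a
// value (the "trailing holes") are what end() must report to the indexer.
public class TrailingHoleModel {
    static List<Integer> positions(List<List<String>> values, List<String> stopWords, int gap) {
        List<Integer> out = new ArrayList<>();
        int pos = -1;          // position of the previously emitted token
        for (List<String> value : values) {
            int skipped = 0;   // holes accumulated since the last emitted token
            for (String tok : value) {
                if (stopWords.contains(tok)) {
                    skipped++; // dropped token still occupies a position
                    continue;
                }
                pos += 1 + skipped;
                skipped = 0;
                out.add(pos);
            }
            pos += skipped;    // trailing holes: the correction from end()
            pos += gap;        // position increment gap between values
        }
        return out;
    }
}
```

With values `["fox", "over", "the"]` and `["lazy"]`, stop words `over`/`the`, and gap 0, `fox` lands at position 0 and `lazy` at 3; dropping the trailing-hole line would put `lazy` at 1, adjacent to `fox`.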




[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744880#comment-13744880
 ] 

Robert Muir commented on LUCENE-3849:
-

{quote}
That's a good idea on end(); I'll do that and check all impls.
{quote}

I did this: I don't think there are any issues (or you fixed them in the patch).

So I think when this issue is committed we should resolve LUCENE-4318. Thank 
you!

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap of Analyzer.java (0), phrase queries won't work across pages if 
 one ends with stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in TokenStream.end(), subclasses of FilteringTokenFilter (e.g. 
 StopFilter) should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears the attributes for end() [they are dirty, except offset, which 
 is set by the tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().




[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744909#comment-13744909
 ] 

Michael McCandless commented on LUCENE-3849:


bq. So I think when this issue is committed we should resolve LUCENE-4318.

OK, will do!

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap of Analyzer.java (0), phrase queries won't work across pages if 
 one ends with stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in TokenStream.end(), subclasses of FilteringTokenFilter (e.g. 
 StopFilter) should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears the attributes for end() [they are dirty, except offset, which 
 is set by the tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().




[jira] [Commented] (SOLR-5057) queryResultCache should not related with the order of fq's list

2013-08-20 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744917#comment-13744917
 ] 

Erick Erickson commented on SOLR-5057:
--

Hmmm, sorry it took so long. But it appears that your latest patch doesn't have 
any of Yonik's suggestions in it and still has the test in a completely new 
file rather than just another test case in QueryResultKeyTest.java as my patch 
had. Did you upload your first patch again by mistake? 

Please upload a new version and I _promise_ I'll get it committed soon.

 queryResultCache should not related with the order of fq's list
 ---

 Key: SOLR-5057
 URL: https://issues.apache.org/jira/browse/SOLR-5057
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0, 4.1, 4.2, 4.3
Reporter: Feihong Huang
Assignee: Erick Erickson
Priority: Minor
 Attachments: SOLR-5057.patch, SOLR-5057.patch, SOLR-5057.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 There are two queries with the same meaning below, but case2 can't use 
 the queryResultCache when case1 is executed.
 case1: q=*:*&fq=field1:value1&fq=field2:value2
 case2: q=*:*&fq=field2:value2&fq=field1:value1
 I think the queryResultCache should not be related to the order of the fq list.
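One way to make the cache lookup order-insensitive is to hash a sorted copy of the fq clauses, so that both requests above map to the same entry. A minimal self-contained sketch of the idea (a hypothetical helper, not the actual QueryResultKey code):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: derive an order-insensitive key component from the fq
// list by sorting a copy before hashing, so requests that differ only in
// filter order produce the same key.
public class FqKeySketch {
    static int filterKey(List<String> filterQueries) {
        String[] sorted = filterQueries.toArray(new String[0]);
        Arrays.sort(sorted);            // canonical order, independent of request order
        return Arrays.hashCode(sorted); // equal filter sets hash equally
    }
}
```

An equals check on the key would sort (or compare as sets) the same way, so equal filter sets compare equal, not just hash equal.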




[jira] [Commented] (SOLR-4408) Server hanging on startup

2013-08-20 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744929#comment-13744929
 ] 

Erick Erickson commented on SOLR-4408:
--

OK, I can't apply this patch cleanly to trunk; the underlying code has changed, 
see SOLR-5122.

Which leads me to ask whether this has been addressed by the other changes to 
SpellCheckCollator. Can anyone who can reproduce this try with a recent 4.x 
checkout but WITHOUT applying this patch?

If it's still a problem, I'd appreciate it if someone could update the patch 
and I promise I'll get to it this week/weekend.

 Server hanging on startup
 -

 Key: SOLR-4408
 URL: https://issues.apache.org/jira/browse/SOLR-4408
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
 Environment: OpenJDK 64-Bit Server VM (23.2-b09 mixed mode)
 Tomcat 7.0
 Eclipse Juno + WTP
Reporter: Francois-Xavier Bonnet
Assignee: Erick Erickson
 Fix For: 4.5, 5.0

 Attachments: patch-4408.txt


 While starting, the server hangs indefinitely. Everything works fine when I 
 first start the server with no index created yet but if I fill the index then 
 stop and start the server, it hangs. Could it be a lock that is never 
 released?
 Here is what I get in a full thread dump:
 2013-02-06 16:28:52
 Full thread dump OpenJDK 64-Bit Server VM (23.2-b09 mixed mode):
 searcherExecutor-4-thread-1 prio=10 tid=0x7fbdfc16a800 nid=0x42c6 in 
 Object.wait() [0x7fbe0ab1]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0xc34c1c48 (a java.lang.Object)
   at java.lang.Object.wait(Object.java:503)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1492)
   - locked 0xc34c1c48 (a java.lang.Object)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1312)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1247)
   at 
 org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:94)
   at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:213)
   at 
 org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:112)
   at 
 org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:203)
   at 
 org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:180)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
   at 
 org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:64)
   at org.apache.solr.core.SolrCore$5.call(SolrCore.java:1594)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 coreLoadExecutor-3-thread-1 prio=10 tid=0x7fbe04194000 nid=0x42c5 in 
 Object.wait() [0x7fbe0ac11000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0xc34c1c48 (a java.lang.Object)
   at java.lang.Object.wait(Object.java:503)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1492)
   - locked 0xc34c1c48 (a java.lang.Object)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1312)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1247)
   at 
 org.apache.solr.handler.ReplicationHandler.getIndexVersion(ReplicationHandler.java:495)
   at 
 org.apache.solr.handler.ReplicationHandler.getStatistics(ReplicationHandler.java:518)
   at 
 org.apache.solr.core.JmxMonitoredMap$SolrDynamicMBean.getMBeanInfo(JmxMonitoredMap.java:232)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:512)
   at org.apache.solr.core.JmxMonitoredMap.put(JmxMonitoredMap.java:140)
   at org.apache.solr.core.JmxMonitoredMap.put(JmxMonitoredMap.java:51)
   at 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:636)
   at 

[jira] [Updated] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2013-08-20 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-4280:


Attachment: SOLR-4280-trunk.patch

New patch. This patch now also works in a distributed environment.

 spellcheck.maxResultsForSuggest based on filter query results
 -

 Key: SOLR-4280
 URL: https://issues.apache.org/jira/browse/SOLR-4280
 Project: Solr
  Issue Type: Improvement
  Components: spellchecker
Reporter: Markus Jelsma
 Fix For: 4.5, 5.0

 Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
 SOLR-4280-trunk.patch


 spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
 able to take a ratio and calculate that against the maximum number of results 
 the filter queries return.
 At least in our case this would certainly add a lot of value. 99% of our 
 end-users search within one or more filters of which one is always unique. 
 The number of documents for each of those unique filters varies significantly 
 ranging from 300 to 3.000.000 documents in which they search. The 
 maxResultsForSuggest is set to a reasonably low value, so it kind of works 
 fine, but it sometimes leads to undesired suggestions for a large subcorpus that 
 has more misspellings.
 Spun off from SOLR-4278.
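Interpreted as a ratio, the setting would be resolved per request against the size of the filtered corpus. A minimal sketch of that arithmetic (hypothetical names, not the attached patch):

```java
// Hypothetical sketch: resolve maxResultsForSuggest as a ratio of the number
// of documents the filter queries match, instead of a fixed absolute count.
public class SuggestThresholdSketch {
    static long maxResultsForSuggest(double ratio, long filterQueryHits) {
        return (long) Math.floor(ratio * filterQueryHits);
    }
}
```

With a ratio of 0.01, a 3,000,000-document subcorpus gets a threshold of 30,000 while a 300-document one gets 3, which is the per-filter scaling the issue asks for.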




[jira] [Created] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-5182:
---

 Summary: FVH can end in very very long running recursion on phrase 
highlight
 Key: LUCENE-5182
 URL: https://issues.apache.org/jira/browse/LUCENE-5182
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.4, 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.5


due to the nature of the FVH extract logic, a simple phrase query can put the FVH 
into a super long running recursion. I had documents taking literally days to 
return from the extract phrases logic. I have a test that reproduces the problem 
and a possible fix. The reason for this is that the FVH never tries to early 
terminate if a phrase is already way beyond the slop coming from the phrase 
query. If there is a document with a lot of occurrences of two or more terms in 
the phrase, this literally tries to match all possible combinations of the terms 
in the doc. 
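The early-termination idea can be modeled with a self-contained sketch (plain Java, hypothetical; not the FVH code): once candidate positions are visited in sorted order, a match attempt can stop extending as soon as the position gap already exceeds the slop, instead of trying every combination.

```java
// Hypothetical model: count how many position pairs a two-term phrase match
// examines, with and without early termination on the slop bound.
// Both position arrays are assumed sorted in ascending order.
public class FvhEarlyTerminationModel {
    static int pairsExamined(int[] firstPositions, int[] secondPositions, int slop, boolean earlyTerminate) {
        int examined = 0;
        for (int p1 : firstPositions) {
            for (int p2 : secondPositions) {
                examined++;
                // Once p2 is more than slop past p1, no later p2 can match either.
                if (earlyTerminate && p2 - p1 > slop) break;
            }
        }
        return examined;
    }
}
```

For a document with many occurrences of both terms, the early-terminating variant examines only the pairs near each first-term position rather than the full cross product.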




[jira] [Updated] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5182:


Attachment: LUCENE-5182.patch

here is a patch and a test

 FVH can end in very very long running recursion on phrase highlight
 ---

 Key: LUCENE-5182
 URL: https://issues.apache.org/jira/browse/LUCENE-5182
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.5

 Attachments: LUCENE-5182.patch


 due to the nature of the FVH extract logic, a simple phrase query can put the 
 FVH into a super long running recursion. I had documents taking literally days 
 to return from the extract phrases logic. I have a test that reproduces the 
 problem and a possible fix. The reason for this is that the FVH never tries 
 to early terminate if a phrase is already way beyond the slop coming from the 
 phrase query. If there is a document with a lot of occurrences of two or more 
 terms in the phrase, this literally tries to match all possible combinations 
 of the terms in the doc. 




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744978#comment-13744978
 ] 

Simon Willnauer commented on LUCENE-5182:
-

this patch really doesn't fix the actual issue: this alg is freaking crazy, 
something like n! over all the positions. I am not even sure what the Big-O of 
this is, but the patch just tries to prevent this thing from going totally 
nuts. 




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744984#comment-13744984
 ] 

Robert Muir commented on LUCENE-5182:
-

It seems to me this patch will solve the issue for low slop values, but for 
higher slop values there might be the same trouble, right?

Maybe there can be a hard upper bound on this: is there some existing limit in 
this highlighter that can bound the slop (e.g. the maximum number of words 
that can be in a snippet, or something like that)? Failing that, maybe a 
separate configurable limit?




[jira] [Updated] (SOLR-5057) queryResultCache should not be related to the order of the fq list

2013-08-20 Thread Feihong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feihong Huang updated SOLR-5057:


Attachment: (was: SOLR-5057.patch)

 queryResultCache should not be related to the order of the fq list
 ---

 Key: SOLR-5057
 URL: https://issues.apache.org/jira/browse/SOLR-5057
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0, 4.1, 4.2, 4.3
Reporter: Feihong Huang
Assignee: Erick Erickson
Priority: Minor
 Attachments: SOLR-5057.patch, SOLR-5057.patch, SOLR-5057.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 There are two queries with the same meaning below, but case 2 can't use 
 the queryResultCache after case 1 is executed:
 case1: q=*:*&fq=field1:value1&fq=field2:value2
 case2: q=*:*&fq=field2:value2&fq=field1:value1
 I think the queryResultCache should not be related to the order of the fq list.
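One way to make the cache key order-insensitive is to normalize the fq list (e.g. into a frozenset, or a sorted tuple) before hashing. A minimal sketch of the idea, with hypothetical names, not Solr's actual implementation:

```python
def query_cache_key(q, fqs):
    """Build a queryResultCache-style key that ignores fq order.
    Hypothetical sketch of the idea, not Solr's implementation."""
    # frozenset is hashable and order-insensitive, so both fq
    # orderings below map to the same cache entry
    return (q, frozenset(fqs))

k1 = query_cache_key("*:*", ["field1:value1", "field2:value2"])
k2 = query_cache_key("*:*", ["field2:value2", "field1:value1"])
assert k1 == k2  # both orderings hit the same cache entry
```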




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744987#comment-13744987
 ] 

Simon Willnauer commented on LUCENE-5182:
-

I agree Robert, we don't really fix the problem for high slops. I am not sure 
what a good default is for that, but maybe it's enough to just make it 
configurable?




[jira] [Updated] (SOLR-5057) queryResultCache should not be related to the order of the fq list

2013-08-20 Thread Feihong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feihong Huang updated SOLR-5057:


Attachment: SOLR-5057.patch




[jira] [Commented] (SOLR-5057) queryResultCache should not be related to the order of the fq list

2013-08-20 Thread Feihong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744994#comment-13744994
 ] 

Feihong Huang commented on SOLR-5057:
-

hi, erick, patch attached again. I uploaded the wrong file by mistake last time. 
I'm so sorry.




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744999#comment-13744999
 ] 

Robert Muir commented on LUCENE-5182:
-

Yeah I'm not sure either: maybe just a Math.min and a default of 
Integer.MAX_VALUE. Sure it's still trappy, but at least it's an improvement. 

Another idea (if the user is using the IDF-weighted fragments) might be to 
somehow not process terms where docFreq/maxDoc > foo%, realizing they won't 
contribute much to the score anyway.

But in general I feel like the problem will still exist without an algorithmic 
change.

Anyway, +1 to the patch
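The Math.min idea with an Integer.MAX_VALUE default would look roughly like this (a sketch under those assumptions; the parameter name is hypothetical):

```python
import sys

# stand-in for Java's Integer.MAX_VALUE default: no clamping unless configured
DEFAULT_MAX_PHRASE_WINDOW = sys.maxsize

def effective_window(slop, max_phrase_window=DEFAULT_MAX_PHRASE_WINDOW):
    """Clamp the phrase window so a huge slop value cannot blow up
    the phrase search (hypothetical sketch of the Math.min idea)."""
    return min(slop, max_phrase_window)

print(effective_window(10**9, max_phrase_window=256))  # 256
print(effective_window(2))                             # 2
```

As noted, this stays trappy at the default, but a user who hits the pathological case gets a knob to bound the work.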




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745008#comment-13745008
 ] 

Simon Willnauer commented on LUCENE-5182:
-

I kind of feel that we can make a lot of things configurable, but eventually we 
need to get rid of it. It's really a can of worms, and really fixing it means 
rewriting it, from my point of view.

I think I will go with what I have for now (the patch), which at least fixes 
the larger issue.




[jira] [Commented] (SOLR-5173) Test-only Hadoop dependencies are included in release artifacts

2013-08-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745054#comment-13745054
 ] 

Steve Rowe commented on SOLR-5173:
--

From a #solr-dev IRC conversation today:

{noformat}
10:15 markrmiller i think those 3 jars are the compile time dependencies
10:16   it's the test jars that are not
10:16   the hdfs common jar requires auth and annotations for 
whatever reason
10:16   would be nice if they did something like we do with solrj 
for the client
10:17   but they dont

10:22 sarowe  compile succeeds with only hadoop-common, leaving out -auth,
10:22   -annotations, and -hdfs -- there's just a compilation 
warning about annotations
10:22   about solrj, I guess you mean a reduced-footprint set of 
dependencies?
10:23   runtime deps I mean

10:25 markrmiller perhaps its runtime then
10:25   been many months since I added them
10:26   but I seem to remember those 3 came in before any test work

10:26 sarowe  ok, I'm confused - are you saying that the Ant setup as it 
is right now,
10:26   without my patch, is as it should be?
10:26   right now, the Ant/Ivy/dist setup for solr doesn't make a 
distinction
10:27   between compile-time and run-time dependencies

10:27 markrmiller not if it was shipping jetty 6
10:27   I've got to look though
10:27   the first draft that i committed
10:27   it was setup right
10:27   all the test stuff was in the test framework
10:27   and core only had the min of what it needed

10:27 sarowe  jetty 6 did not ship with 4.4

10:27 markrmiller i thought that might be the case - its simply a dependency 
thing?

10:28 sarowe  it's a *maven* dependency thing
10:28   the direct deps for Ivy and Maven are the same

10:28 markrmiller not much can be done about it - unless they make a sep 
client jar
10:28   maven sucks in the world and thats what you get

10:28 sarowe  but Maven also pulls in Jetty 6 as an indirect dep

10:28 markrmiller yeah, thats a bummer
10:28   but nothing we can do
10:29   thats how maven and the hadoop guys have things

10:29 sarowe  well, we can exclude jetty 6 as an indirect dep

10:29 markrmiller there is prob a lot you can exclude

10:29 sarowe  I did exclude a bunch of stuff
10:30   but I didn't revisit the exclusions when I moved the test 
stuff
10:30   from test-framework to solr-core unfortunately

10:29 markrmiller i found the min jars i needed

10:31 sarowe  so to confirm about the -hdfs, -auth, and -annotations jars:
10:31   these are required at run-time, right?

10:31 markrmiller as far as I know yes
10:31   one sec, let me look at an older build

10:32 sarowe  thanks - I'll change the patch to keep them as compile-time 
deps then,
10:32   since as I said the Ant/Ivy setup doesn't make a 
distinction between
10:32   compile-time and run-time deps
10:32   assuming you find that the older build did that

10:34 markrmiller what I added and need for compile/runtime was:
10:35   common, hdfs, annotations, auth, commons-configuration, 
protobuf-java,
10:35   perhaps concurrentlinkedhashmap unless it was already there
10:35   <dependency org="org.apache.hadoop" name="hadoop-common" rev="${hadoop.version}" transitive="false"/>
10:35   <dependency org="org.apache.hadoop" name="hadoop-hdfs" rev="${hadoop.version}" transitive="false"/>
10:35   <dependency org="org.apache.hadoop" name="hadoop-annotations" rev="${hadoop.version}" transitive="false"/>
10:35   <dependency org="org.apache.hadoop" name="hadoop-auth" rev="${hadoop.version}" transitive="false"/>
10:35   <dependency org="commons-configuration" name="commons-configuration" rev="1.6" transitive="false"/>
10:35   <dependency org="com.google.protobuf" name="protobuf-java" rev="2.4.0a" transitive="false"/>
10:35   <dependency org="com.googlecode.concurrentlinkedhashmap" name="concurrentlinkedhashmap-lru" rev="1.2" transitive="false"/>
{noformat}

 Test-only Hadoop dependencies are included in release artifacts
 ---

 Key: SOLR-5173
 URL: https://issues.apache.org/jira/browse/SOLR-5173
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.4
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 4.5

 Attachments: SOLR-5173.patch


 Chris Collins [reported on 
 solr-user|http://markmail.org/message/evhpcougs5ppafjk] that solr-core 4.4 
 has dependencies on 

[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745073#comment-13745073
 ] 

ASF subversion and git services commented on LUCENE-5182:
-

Commit 1515847 from [~simonw] in branch 'dev/trunk'
[ https://svn.apache.org/r1515847 ]

LUCENE-5182: Terminate phrase searches early if max phrase window is exceeded
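The committed guard boils down to abandoning a candidate phrase as soon as the next term occurrence would fall outside a maximum window. A rough Python sketch of that early-termination idea (structure and names are hypothetical, not the actual Java patch):

```python
def find_phrases(positions_per_term, max_window):
    """Collect (start, end) position pairs where all phrase terms occur
    within max_window positions of the start, abandoning candidates as
    soon as the window is exceeded. Illustrative sketch, not FVH code."""
    matches = []
    first, rest = positions_per_term[0], positions_per_term[1:]
    for start in first:
        end = start
        ok = True
        for positions in rest:
            # only look at occurrences still inside the window
            in_window = [p for p in positions if end < p <= start + max_window]
            if not in_window:
                ok = False  # early termination: window exceeded
                break
            end = min(in_window)
        if ok:
            matches.append((start, end))
    return matches

# second term at position 3 is inside the window; 90 is not reachable
print(find_phrases([[1, 50], [3, 90]], max_window=5))  # [(1, 3)]
```

The naive version would instead keep pairing every occurrence of every term, which is exactly the combinatorial recursion the issue describes.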




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745078#comment-13745078
 ] 

ASF subversion and git services commented on LUCENE-5182:
-

Commit 1515850 from [~simonw] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1515850 ]

LUCENE-5182: Terminate phrase searches early if max phrase window is exceeded




[jira] [Resolved] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-5182.
-

Resolution: Fixed

I committed to trunk and 4x. Really, I want to get LUCENE-2878 in soon (I will 
start working on it in the near future) and then revisit all the highlighters.




[jira] [Commented] (SOLR-5108) plugin loading should fail if more than one instance of a singleton plugin is found

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745090#comment-13745090
 ] 

ASF subversion and git services commented on SOLR-5108:
---

Commit 1515852 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1515852 ]

SOLR-5108: fail plugin info loading if multiple instances exist but only one is 
expected

 plugin loading should fail if more than one instance of a singleton plugin is 
 found
 --

 Key: SOLR-5108
 URL: https://issues.apache.org/jira/browse/SOLR-5108
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-5108.patch


 Continuing from the config parsing/validation work done in SOLR-4953, we 
 should improve SolrConfig so that parsing fails if multiple instances of a 
 plugin are found for types of plugins where only one is allowed to be used 
 at a time.
 At the moment, {{SolrConfig.loadPluginInfo}} happily initializes a 
 {{List<PluginInfo>}} for whatever xpath it's given, and then later code can 
 either call {{List<PluginInfo> getPluginInfos(String)}} or {{PluginInfo 
 getPluginInfo(String)}} (the latter just being shorthand for getting the first 
 item in the list).
 We could make {{getPluginInfo(String)}} throw an error if the list has 
 multiple items, but I think we should also change the signature of 
 {{loadPluginInfo}} to be explicit about how many instances we expect to find, 
 so we can error earlier and have a redundant check.
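The proposed behavior, failing loudly when the config declares more than one instance of a singleton plugin, can be sketched as follows (hypothetical names, not Solr's actual API):

```python
def get_plugin_info(plugin_infos, xpath):
    """Return the single plugin info for xpath, failing loudly if the
    config declares more than one. Sketch of the proposed behavior;
    names are hypothetical, not Solr's actual API."""
    infos = plugin_infos.get(xpath, [])
    if len(infos) > 1:
        raise ValueError(
            f"Found {len(infos)} configuration sections for {xpath}, "
            "but at most one is allowed")
    return infos[0] if infos else None

config = {"/config/requestDispatcher": ["a", "b"]}
try:
    get_plugin_info(config, "/config/requestDispatcher")
except ValueError as e:
    print(e)  # reports that 2 sections were found
```

Declaring the expected cardinality up front (as the issue suggests for {{loadPluginInfo}}) would move this failure even earlier, to config-parse time.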




[jira] [Commented] (SOLR-5108) plugin loading should fail if more than one instance of a singleton plugin is found

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745101#comment-13745101
 ] 

ASF subversion and git services commented on SOLR-5108:
---

Commit 1515859 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1515859 ]

SOLR-5108: fail plugin info loading if multiple instances exist but only one is 
expected (merge r1515852)




[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2013-08-20 Thread Andrew Muldowney (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745106#comment-13745106
 ] 

Andrew Muldowney commented on SOLR-2894:


It shouldn't be a shard-count issue; we're running this patch on a 50-shard 
cluster over several servers with solid results.
Try the shards.tolerant=true parameter on your distributed search? It 
supposedly includes error information if available. 


 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.5

 Attachments: SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894-reworked.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.




[jira] [Resolved] (SOLR-5108) plugin loading should fail if more than one instance of a singleton plugin is found

2013-08-20 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5108.


   Resolution: Fixed
Fix Version/s: 5.0
   4.5




[jira] [Updated] (SOLR-5173) Solr-core's Maven configuration includes test-only Hadoop dependencies as indirect compile-time dependencies

2013-08-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-5173:
-

Summary: Solr-core's Maven configuration includes test-only Hadoop 
dependencies as indirect compile-time dependencies  (was: Test-only Hadoop 
dependencies are included in release artifacts)

 Solr-core's Maven configuration includes test-only Hadoop dependencies as 
 indirect compile-time dependencies
 

 Key: SOLR-5173
 URL: https://issues.apache.org/jira/browse/SOLR-5173
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.4
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 4.5

 Attachments: SOLR-5173.patch


 Chris Collins [reported on 
 solr-user|http://markmail.org/message/evhpcougs5ppafjk] that solr-core 4.4 
 has dependencies on hadoop, and indirectly on jetty 6.
 As a workaround for Maven dependencies, the hadoop-hdfs, hadoop-auth, and 
 hadoop-annotations dependencies can be excluded, which will also exclude the 
 indirect jetty 6 dependency/ies.  hadoop-common is a compile-time dependency, 
 though, so I'm not sure if it's safe to exclude.
 The problems, as far as I can tell, are:
 1) The ivy configuration puts three test-only dependencies (hadoop-hdfs, 
 hadoop-auth, and hadoop-annotations) in solr/core/lib/, rather than where they 
 belong, in solr/core/test-lib/.  (hadoop-common is required for solr-core 
 compilation to succeed.)
 2) The Maven configuration makes the equivalent mistake in marking these 
 test-only hadoop dependencies as compile-scope rather than test-scope 
 dependencies.
 3) The Solr .war, which packages everything under solr/core/lib/, includes 
 these three test-only hadoop dependencies (though it does not include any 
 jetty 6 jars).
 4) The license files for jetty and jetty-util v6.1.26, but not the jar files 
 corresponding to them, are included in the Solr distribution.
 I have working (tests pass) local Ant and Maven configurations that treat the 
 three hadoop test-only dependencies properly; as result, the .war will no 
 longer contain them - this will cover problems #1-3 above.
 I think we can just remove the jetty and jetty-util 6.1.26 license files from 
 solr/licenses/, since we don't ship those jars.
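For consumers hit by this before a fixed release, the Maven workaround described above can be sketched in the consuming project's pom.xml (a hedged example: the artifact list follows the description above, and hadoop-common is deliberately not excluded since it is a compile-time dependency of solr-core):

```xml
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-core</artifactId>
  <version>4.4.0</version>
  <exclusions>
    <!-- test-only Hadoop artifacts pulled in by solr-core 4.4; excluding
         them also drops the indirect jetty 6 dependencies -->
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-auth</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```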




[jira] [Updated] (SOLR-5173) Solr-core's Maven configuration includes test-only Hadoop dependencies as indirect compile-time dependencies

2013-08-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-5173:
-

Description: 
Chris Collins [reported on 
solr-user|http://markmail.org/message/evhpcougs5ppafjk] that solr-core 4.4 has 
dependencies on hadoop, and indirectly on jetty 6.

As a workaround for Maven dependencies, the indirect jetty 6 dependency/ies can 
be excluded.

The Maven configuration should exclude any compile-time (and also run-time) 
dependencies that are not Ant/Ivy compile- and run-time dependencies, including 
jetty 6.

  was:
Chris Collins [reported on 
solr-user|http://markmail.org/message/evhpcougs5ppafjk] that solr-core 4.4 has 
dependencies on hadoop, and indirectly on jetty 6.

As a workaround for Maven dependencies, the hadoop-hdfs, hadoop-auth, and 
hadoop-annotations dependencies can be excluded, which will also exclude the 
indirect jetty 6 dependency/ies.  hadoop-common is a compile-time dependency, 
though, so I'm not sure if it's safe to exclude.

The problems, as far as I can tell, are:

1) The ivy configuration puts three test-only dependencies (hadoop-hdfs, 
hadoo-auth, and hadoop-annotations) in solr/core/lib/, rather than where they 
belong, in solr/core/test-lib/.  (hadoop-common is required for solr-core 
compilation to succeed.)

2) The Maven configuration makes the equivalent mistake in marking these 
test-only hadoop dependencies as compile-scope rather than test-scope 
dependencies.

3) The Solr .war, which packages everything under solr/core/lib/, includes 
these three test-only hadoop dependencies (though it does not include any jetty 
6 jars).

4) The license files for jetty and jetty-util v6.1.26, but not the jar files 
corresponding to them, are included in the Solr distribution.

I have working (tests pass) local Ant and Maven configurations that treat the 
three hadoop test-only dependencies properly; as result, the .war will no 
longer contain them - this will cover problems #1-3 above.

I think we can just remove the jetty and jetty-util 6.1.26 license files from 
solr/licenses/, since we don't ship those jars.



 Solr-core's Maven configuration includes test-only Hadoop dependencies as 
 indirect compile-time dependencies
 

 Key: SOLR-5173
 URL: https://issues.apache.org/jira/browse/SOLR-5173
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.4
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 4.5

 Attachments: SOLR-5173.patch


 Chris Collins [reported on 
 solr-user|http://markmail.org/message/evhpcougs5ppafjk] that solr-core 4.4 
 has dependencies on hadoop, and indirectly on jetty 6.
 As a workaround for Maven dependencies, the indirect jetty 6 dependency/ies 
 can be excluded.
 The Maven configuration should exclude any compile-time (and also run-time) 
 dependencies that are not Ant/Ivy compile- and run-time dependencies, 
 including jetty 6.




[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745142#comment-13745142
 ] 

ASF subversion and git services commented on LUCENE-3849:
-

Commit 1515887 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1515887 ]

LUCENE-3849: end() now sets position increment, so any trailing holes are 
counted

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap
 of analyzer.java (0), phrase queries won't work across pages if one ends with 
 stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in
 TokenStream.end(), subclasses of FilteringTokenFilter (e.g. stopfilter) 
 should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears
 the attributes for end() [they are dirty, except offset which is set by the 
 tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().
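 The {code} snippet above is the crux; as a dependency-free illustration of why the filter must track skipped positions for end() (a hedged sketch — class and method names here are invented, not Lucene's API):

```java
// A stop-word filter must remember how many positions it skipped after the
// last emitted token, so end() can report the trailing holes
// (the posIncAtt += skippedPositions pattern from the issue description).
class EndSketch {
    static boolean isStop(String token, String[] stopwords) {
        for (String s : stopwords) if (s.equals(token)) return true;
        return false;
    }

    // Returns {emittedTokenCount, trailingHoleCount}; the second value is
    // what a FilteringTokenFilter subclass would add to the position
    // increment in end().
    static int[] filter(String[] tokens, String[] stopwords) {
        int emitted = 0;
        int pendingHoles = 0; // positions skipped since the last emitted token
        for (String t : tokens) {
            if (isStop(t, stopwords)) {
                pendingHoles++;          // a positional hole
            } else {
                emitted++;
                pendingHoles = 0;        // interior holes, already accounted for
            }
        }
        return new int[] { emitted, pendingHoles }; // pendingHoles = trailing holes
    }

    public static void main(String[] args) {
        // A "page" ending in stopwords: unless the 2 trailing holes are
        // reported, a phrase query can falsely match across the page gap.
        int[] r = filter(new String[] { "end", "of", "the" },
                         new String[] { "of", "the" });
        System.out.println(r[0] + " token, " + r[1] + " trailing holes");
    }
}
```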




[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745225#comment-13745225
 ] 

ASF subversion and git services commented on LUCENE-3849:
-

Commit 1515909 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1515909 ]

LUCENE-3849: end() now sets position increment, so any trailing holes are 
counted

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap
 of analyzer.java (0), phrase queries won't work across pages if one ends with 
 stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in
 TokenStream.end(), subclasses of FilteringTokenFilter (e.g. stopfilter) 
 should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears
 the attributes for end() [they are dirty, except offset which is set by the 
 tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().




[jira] [Resolved] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-3849.


   Resolution: Fixed
Fix Version/s: 4.5, 5.0
     Assignee: Michael McCandless

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
Assignee: Michael McCandless
 Fix For: 5.0, 4.5

 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap
 of analyzer.java (0), phrase queries won't work across pages if one ends with 
 stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in
 TokenStream.end(), subclasses of FilteringTokenFilter (e.g. stopfilter) 
 should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears
 the attributes for end() [they are dirty, except offset which is set by the 
 tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().




[jira] [Resolved] (LUCENE-4318) facets module has many tokenstreams that never call clearAttributes

2013-08-20 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-4318.


   Resolution: Fixed
Fix Version/s: 4.5, 5.0

This was mostly fixed by moving the facets module from payloads to doc values 
... then the one remaining TokenStream was fixed in LUCENE-3849.

 facets module has many tokenstreams that never call clearAttributes
 ---

 Key: LUCENE-4318
 URL: https://issues.apache.org/jira/browse/LUCENE-4318
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, 4.5







Re: Test cases for Solr wiki/ref guide consolidation policy

2013-08-20 Thread Cassandra Targett
I've been meaning to reply to this for a couple weeks now...I've been
thinking about it quite a bit.

On Sat, Aug 3, 2013 at 4:11 PM, Jack Krupansky j...@basetechnology.com wrote:
 [Is there an active Jira issue where these comments belong?]

No, not really. Although SOLR-4618 is still open, that is really only
tracking issues with the original donation of the Ref Guide from
LucidWorks.

Going forward, I think it works well to have Jira issues for
individual items that need documentation (say, when it's getting time
to do a particular release and New Feature X has not been documented,
as Hoss did for the 4.4 release), or add comments to the Ref Guide
page if something needs to be improved, but open-ended questions
probably aren't really served well with never-ending Jira issues.

 Here are two great test cases for the issue of how to switch over from the
 old Solr wiki to the new Confluence-based Solr Reference Guide: the terms
 component and the term vector component.

 1. The ref guide is virtually identical to the wiki anyway for these two
 pages. No apparent need to move any info over or worry about loss of info.
 IOW, 100% overlap
 2. If anybody does want to update either of these two pages, it would be
 nice to be clear what the status of the wiki page is. Maybe it should have a
 banner indicating that it is historic/archive.
 3. Should the wiki be kept as is and simply add a “pointer” to the ref
 guide?
 4. Or should the wiki be “stubbed out” and point to the ref guide only?
 5. Or should the wiki page be deleted and any referencing pages in the wiki
 re-point to the ref guide. There may be non-Solr web pages that link to the
 wiki page. A 301 redirect would be nice.

All of these questions are actually addressed by this:
https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation#Internal-MaintainingDocumentation-Migrating%22Official%22DocumentationfromMoinMoin

It's high on my list to start work on that effort; part of the goal of
having the ref guide is to improve the available documentation and
this duplication between ref guide and wiki actually makes it
temporarily worse. The earliest authors of the Ref Guide did some
liberal copying from the wiki and for various reasons I wasn't ever
able to entirely erase it. Those pages are the easiest to migrate,
since, as with the cases you mention, they're nearly identical
already. If there aren't other priorities, then those are probably a
great place to start.


 There are three other interesting issues with these two test cases:

 1. They both have Javadoc which, as they say, “needs some love” – should the
 Javadoc be updated, maintained, encouraged, etc?

IMO, yes. But, don't read anything into that...I'm just saying yes, I
think Javadoc is important.

Javadoc is in the code, so it would require jira issues  patches,
etc., to fix anything...someday maybe I'll dip my toes into that but
definitely not in the short-term.

 2. Should a pointer to the ref guide be added to the Javadoc? There are
 plenty of cases in Solr where the Javadoc is spotty, missing or explicitly
 “TO DO”; a policy for dealing with it is needed – maybe simply linking to the
 closest ref guide page is the next step.

I hadn't thought about that before. My first instinct is that
something is better than nothing, so a link to the ref guide would
be better than "TO DO", but at the same time, that might not really
help another developer much...maybe more than nothing though.

 3. Both have additional DEFINITIVE reference documentation... in the Solr
 example solrconfig.xml. How much info should go into the Javadoc vs.
 solrconfig vs. ref guide? Some day we will finally have multiple example
 config/schemas; then solrconfig might not be the best place to use as the
 master for doc info.

I don't think this needs to be too complicated.

Javadoc obviously is the definitive Java API documentation. If you're
developing customizations for Lucene/Solr (or contributing patches to
Lucene/Solr), it will be helpful to you. It should help you understand
what the code is doing and how, and it's generally written by
developers for other developers.

The Ref Guide is meant to give deeper context of how each feature
works, and how it interacts with other features. More of a classical
manual. You won't learn details of the code itself from the Ref Guide.
It's meant for people trying to *use* the software, as opposed to
people who want to extend the software. Some may find both useful at
different times, depending on what they're doing.

The comments in solrconfig.xml are good, I think, mostly because they
provide inline context when you're making changes. However, IMO,
solrconfig.xml (or schema.xml, or any other config) should not be the
only place something is explained - where that is the case (and there
are a few) there should be a page in the Ref Guide that explains the
same thing (and maybe also the Javadoc...but that probably depends).

Ideally, all these things are in sync. 

[jira] [Created] (SOLR-5177) test covers overwrite true/false for block updates

2013-08-20 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-5177:
--

 Summary: test covers overwrite true/false for block updates 
 Key: SOLR-5177
 URL: https://issues.apache.org/jira/browse/SOLR-5177
 Project: Solr
  Issue Type: Test
Affects Versions: 4.5, 5.0
Reporter: Mikhail Khludnev


DUH2 uses the \_root_ field to support overwrite for block updates. I want to 
contribute this test, which asserts the current functionality.




[jira] [Updated] (SOLR-5177) test covers overwrite true/false for block updates

2013-08-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-5177:
---

Attachment: SOLR-5177.patch

 test covers overwrite true/false for block updates 
 ---

 Key: SOLR-5177
 URL: https://issues.apache.org/jira/browse/SOLR-5177
 Project: Solr
  Issue Type: Test
Affects Versions: 4.5, 5.0
Reporter: Mikhail Khludnev
  Labels: patch, test
 Attachments: SOLR-5177.patch


 DUH2 uses the \_root_ field to support overwrite for block updates. I want to 
 contribute this test, which asserts the current functionality.




[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745467#comment-13745467
 ] 

ASF subversion and git services commented on LUCENE-5178:
-

Commit 1515977 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1515977 ]

LUCENE-5178: add missing support for docvalues

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.




[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745469#comment-13745469
 ] 

Robert Muir commented on LUCENE-5178:
-

Thanks Mike: I added additional tests and ensured the 'missing' stuff in solr 
is fully functional. 

I'll let jenkins chomp on it for a while in trunk.

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.




[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 351 - Failure

2013-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/351/

1 tests failed.
REGRESSION:  org.apache.lucene.index.Test2BPostings.test

Error Message:
first position increment must be > 0 (got 0) for field 'field'

Stack Trace:
java.lang.IllegalArgumentException: first position increment must be > 0 (got 
0) for field 'field'
at 
__randomizedtesting.SeedInfo.seed([AA541EE7CB672416:2200213D659B49EE]:0)
at 
org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:125)
at 
org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:245)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:254)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:446)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1521)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1191)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1172)
at org.apache.lucene.index.Test2BPostings.test(Test2BPostings.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 

[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745513#comment-13745513
 ] 

ASF subversion and git services commented on LUCENE-5182:
-

Commit 1515986 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1515986 ]

LUCENE-5182: don't stack overflow jenkins

 FVH can end in very very long running recursion on phrase highlight
 ---

 Key: LUCENE-5182
 URL: https://issues.apache.org/jira/browse/LUCENE-5182
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.5

 Attachments: LUCENE-5182.patch


 due to the nature of the FVH extract logic, a simple phrase query can put the FVH 
 into a super long running recursion. I had documents taking literally days to 
 return from the extract phrases logic. I have a test that reproduces the 
 problem and a possible fix. The reason for this is that the FVH never tries 
 to early terminate if a phrase is already way beyond the slop coming from the 
 phrase query. If there is a document with lot of occurrences or two or more 
 terms in the phrase this literally tries to match all possible combinations 
 of the terms in the doc. 
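 The failure mode and the fix can be illustrated with a toy two-term matcher (a hedged sketch under invented names — the real FVH recursion is more involved, but the pruning principle is the one described above):

```java
// Toy illustration of early termination: match a two-term phrase over
// sorted occurrence positions. Without pruning, every occurrence pair is
// examined; pruning stops as soon as the candidate position is already
// beyond the slop, since all later positions are even further away.
class FvhSketch {
    static long visited = 0; // combinations examined, for comparison

    // Counts pairs (p0, p1) where the second term follows the first
    // within 'slop' positions; pos0/pos1 are sorted ascending.
    static int match(int[] pos0, int[] pos1, int slop, boolean prune) {
        int matches = 0;
        for (int p0 : pos0) {
            for (int p1 : pos1) {
                visited++;
                // early termination: p1 already beyond the slop window
                if (prune && p1 > p0 + slop + 1) break;
                if (p1 > p0 && p1 - p0 - 1 <= slop) matches++;
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // A document where both terms occur 500 times each.
        int[] a = new int[500], b = new int[500];
        for (int i = 0; i < 500; i++) { a[i] = 2 * i; b[i] = 2 * i + 1; }

        visited = 0;
        int naiveMatches = match(a, b, 0, false);
        long naiveWork = visited;

        visited = 0;
        int prunedMatches = match(a, b, 0, true);
        long prunedWork = visited;

        // Same result, much less work examined.
        System.out.println(naiveMatches == prunedMatches); // true
        System.out.println(prunedWork < naiveWork);        // true
    }
}
```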




[jira] [Commented] (LUCENE-5182) FVH can end in very very long running recursion on phrase highlight

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745518#comment-13745518
 ] 

ASF subversion and git services commented on LUCENE-5182:
-

Commit 1515988 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1515988 ]

LUCENE-5182: don't stack overflow jenkins

 FVH can end in very very long running recursion on phrase highlight
 ---

 Key: LUCENE-5182
 URL: https://issues.apache.org/jira/browse/LUCENE-5182
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.5

 Attachments: LUCENE-5182.patch


 due to the nature of the FVH extract logic, a simple phrase query can put the FVH 
 into a super long running recursion. I had documents taking literally days to 
 return from the extract phrases logic. I have a test that reproduces the 
 problem and a possible fix. The reason for this is that the FVH never tries 
 to early terminate if a phrase is already way beyond the slop coming from the 
 phrase query. If there is a document with lot of occurrences or two or more 
 terms in the phrase this literally tries to match all possible combinations 
 of the terms in the doc. 




[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745541#comment-13745541
 ] 

ASF subversion and git services commented on LUCENE-3849:
-

Commit 1515994 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1515994 ]

LUCENE-3849: fix some more test only TokenStreams

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
Assignee: Michael McCandless
 Fix For: 5.0, 4.5

 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap
 of analyzer.java (0), phrase queries won't work across pages if one ends with 
 stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in
 TokenStream.end(), subclasses of FilteringTokenFilter (e.g. stopfilter) 
 should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears
 the attributes for end() [they are dirty, except offset which is set by the 
 tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().




Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 351 - Failure

2013-08-20 Thread Michael McCandless
I committed a fix.

Mike McCandless

http://blog.mikemccandless.com


On Tue, Aug 20, 2013 at 5:59 PM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/351/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.Test2BPostings.test

 Error Message:
 first position increment must be > 0 (got 0) for field 'field'

 Stack Trace:
 java.lang.IllegalArgumentException: first position increment must be > 0 (got 
 0) for field 'field'
 at 
 __randomizedtesting.SeedInfo.seed([AA541EE7CB672416:2200213D659B49EE]:0)
 at 
 org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:125)
 at 
 org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:245)
 at 
 org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:254)
 at 
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:446)
 at 
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1521)
 at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1191)
 at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1172)
 at org.apache.lucene.index.Test2BPostings.test(Test2BPostings.java:76)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 

[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745561#comment-13745561
 ] 

ASF subversion and git services commented on LUCENE-5178:
-

Commit 1515999 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1515999 ]

LUCENE-5178: handle missing values in range facets

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.
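The second option above can be illustrated in plain Java. These are not Lucene's real DocValues classes (the names below are invented for the sketch); the point is keeping a per-field "docsWithField" bitset so a stored default such as 0 is distinguishable from "no value indexed".

```java
// Hedged sketch: illustrative names, not Lucene's DocValues API.
public class DocsWithFieldDemo {

    static class NumericField {
        final long[] values;
        final boolean[] docsWithField;  // which docs actually indexed a value

        NumericField(int maxDoc) {
            values = new long[maxDoc];
            docsWithField = new boolean[maxDoc];
        }

        void set(int doc, long value) {
            values[doc] = value;
            docsWithField[doc] = true;
        }

        /** Field value, or a caller-supplied default when the doc has none. */
        long get(int doc, long missingDefault) {
            return docsWithField[doc] ? values[doc] : missingDefault;
        }
    }

    static String demo() {
        NumericField f = new NumericField(3);
        f.set(0, 0);   // doc 0 genuinely holds the value 0
        f.set(2, 42);  // doc 1 never indexed a value
        return f.get(0, -1) + " " + f.get(1, -1) + " " + f.get(2, -1);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // doc 0 -> 0, doc 1 -> missing, doc 2 -> 42
    }
}
```

Without the bitset, a reader cannot tell doc 0 (real value 0) from doc 1 (missing), which is exactly the ambiguity a configurable per-field default would otherwise have to paper over.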

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3849) position increments should be implemented by TokenStream.end()

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745563#comment-13745563
 ] 

ASF subversion and git services commented on LUCENE-3849:
-

Commit 1516001 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1516001 ]

LUCENE-3849: fix some more test only TokenStreams

 position increments should be implemented by TokenStream.end()
 --

 Key: LUCENE-3849
 URL: https://issues.apache.org/jira/browse/LUCENE-3849
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6, 4.0-ALPHA
Reporter: Robert Muir
Assignee: Michael McCandless
 Fix For: 5.0, 4.5

 Attachments: LUCENE-3849.patch, LUCENE-3849.patch, LUCENE-3849.patch, 
 LUCENE-3849.patch


 if you have pages of a book as multivalued fields, with the default position 
 increment gap
 of Analyzer.java (0), phrase queries won't work across pages if one ends with 
 stopword(s).
 This is because the 'trailing holes' are not taken into account in end(). So 
 I think in
 TokenStream.end(), subclasses of FilteringTokenFilter (e.g. stopfilter) 
 should do:
 {code}
 super.end();
 posIncAtt += skippedPositions;
 {code}
 One problem is that these filters need to 'add' to the posinc, but currently 
 nothing clears
 the attributes for end() [they are dirty, except offset which is set by the 
 tokenizer].
 Also the indexer should be changed to pull posIncAtt from end().

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745571#comment-13745571
 ] 

ASF subversion and git services commented on LUCENE-5178:
-

Commit 1516003 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1516003 ]

LUCENE-5178: test requires codec that supports docsWithField

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745574#comment-13745574
 ] 

ASF subversion and git services commented on LUCENE-5178:
-

Commit 1516012 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1516012 ]

LUCENE-5178: add missing support for docvalues

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5165) Remove required default from DocValues fields

2013-08-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-5165.
---

   Resolution: Fixed
Fix Version/s: 5.0
   4.5

single-valued docvalues fields no longer require 'default' or 'required'.

 Remove required default from DocValues fields
 -

 Key: SOLR-5165
 URL: https://issues.apache.org/jira/browse/SOLR-5165
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley
 Fix For: 4.5, 5.0


 Solr DocValues fields currently require a default.  This doesn't really make 
 sense, harms usability, and should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-08-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5178.
-

   Resolution: Fixed
Fix Version/s: 4.5
   5.0

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Fix For: 5.0, 4.5

 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5163) Document fileformat of DiskDV.

2013-08-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5163.
-

   Resolution: Fixed
Fix Version/s: 4.5
   5.0

 Document fileformat of DiskDV.
 --

 Key: LUCENE-5163
 URL: https://issues.apache.org/jira/browse/LUCENE-5163
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Fix For: 5.0, 4.5


 This needs fileformat docs before being part of the index format.
 I'll take it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5124) fix+document+rename DiskDV to Lucene45

2013-08-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5124.
-

   Resolution: Fixed
Fix Version/s: 4.5
   5.0

 fix+document+rename DiskDV to Lucene45
 --

 Key: LUCENE-5124
 URL: https://issues.apache.org/jira/browse/LUCENE-5124
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Robert Muir
 Fix For: 5.0, 4.5


 The idea is that the default implementation should not hold everything in 
 memory, we can have a Memory impl for that. I think stuff being all in heap 
 memory is just a relic of FieldCache.
 In my benchmarking diskdv works well, and its much easier to manage (keep a 
 smaller heap, leave it to the OS, no OOMs etc from merging large FSTs, ...)
 If someone wants to optimize by forcing everything in memory, they can then 
 use the usual approach (e.g. just use FileSwitchDirectory, or pick Memory 
 for even more efficient stuff).
 Ill keep the issue here for a bit. If we decide to do this, ill work up file 
 format docs and so on. We should also fix a few things that are not great 
 about it (LUCENE-5122) before making it the default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5183) BinaryDocValues inconsistencies

2013-08-20 Thread Michael McCandless (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745586#comment-13745586
 ] 

Michael McCandless commented on LUCENE-5183:


+1

 BinaryDocValues inconsistencies
 ---

 Key: LUCENE-5183
 URL: https://issues.apache.org/jira/browse/LUCENE-5183
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 Some current inconsistencies:
 * Binary/SortedDocValues.EMPTY_BYTES should be removed (BytesRef.EMPTY_BYTES 
 should be used in its place): FieldCache.getDocsWithField should be used to 
 determine missing. Thats fine if FC wants to back its Bits by some special 
 placeholder value, but thats its impl detail not part of the API.
 * Sorting comparator of Binary should either be removed (is this REALLY 
 useful?) or should support missingValue(): and it should support this for 
 SortedDocValues in any case: solr does it, but lucene wont allow it accept 
 for numerics?!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5183) BinaryDocValues inconsistencies

2013-08-20 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5183:
---

 Summary: BinaryDocValues inconsistencies
 Key: LUCENE-5183
 URL: https://issues.apache.org/jira/browse/LUCENE-5183
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Some current inconsistencies:

* Binary/SortedDocValues.EMPTY_BYTES should be removed (BytesRef.EMPTY_BYTES 
should be used in its place): FieldCache.getDocsWithField should be used to 
determine missing. That's fine if FC wants to back its Bits by some special 
placeholder value, but that's its impl detail, not part of the API.
* Sorting comparator of Binary should either be removed (is this REALLY 
useful?) or should support missingValue(): and it should support this for 
SortedDocValues in any case: solr does it, but lucene won't allow it except for 
numerics?!
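One way the missingValue() idea could behave, sketched in plain Java rather than Lucene's FieldComparator API (the helper below is hypothetical, invented for illustration): docs without a value get a caller-chosen substitute, so they sort deterministically first or last instead of depending on an implementation placeholder.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class MissingValueSort {

    /** Sort doc ids by a string-valued field, substituting missingValue
     *  for docs that have no value (hypothetical helper, not Lucene API). */
    static List<Integer> sortByField(Map<Integer, String> values,
                                     List<Integer> docs,
                                     String missingValue) {
        List<Integer> sorted = new ArrayList<>(docs);
        sorted.sort(Comparator.comparing(
            (Integer d) -> values.getOrDefault(d, missingValue)));
        return sorted;
    }

    public static void main(String[] args) {
        Map<Integer, String> vals = Map.of(0, "banana", 2, "apple");
        // doc 1 has no value: with missingValue "" it sorts first;
        // a high sentinel like "\uffff" would push it last instead.
        System.out.println(sortByField(vals, List.of(0, 1, 2), ""));
    }
}
```

The design question in the issue is whether this substitution belongs in the sort comparator (as it already does for numerics) or should be removed for binary fields altogether.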


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2013-08-20 Thread ASF subversion and git services (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745708#comment-13745708
 ] 

ASF subversion and git services commented on LUCENE-2562:
-

Commit 1516057 from [~markrmil...@gmail.com]
[ https://svn.apache.org/r1516057 ]

LUCENE-2562: update to Lucene/Solr 4.4 and Pivot 2.0.2

 Make Luke a Lucene/Solr Module
 --

 Key: LUCENE-2562
 URL: https://issues.apache.org/jira/browse/LUCENE-2562
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller
  Labels: gsoc2013
 Attachments: luke1.jpg, luke2.jpg, luke3.jpg, Luke-ALE-1.png, 
 Luke-ALE-2.png, Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png


 see
 http://search.lucidimagination.com/search/document/ee0e048c6b56ee2/luke_in_need_of_maintainer
 http://search.lucidimagination.com/search/document/5e53136b7dcb609b/web_based_luke
 I think it would be great if there was a version of Luke that always worked 
 with trunk - and it would also be great if it was easier to match Luke jars 
 with Lucene versions.
 While I'd like to get GWT Luke into the mix as well, I think the easiest 
 starting point is to straight port Luke to another UI toolkit before 
 abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
 I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
 haven't/don't have a lot of time for this at the moment, but I've plugged 
 away here and there over the past week or two. There is still a *lot* to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2013-08-20 Thread Mark Miller (JIRA)

[ 
 https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745710#comment-13745710
 ] 

Mark Miller commented on LUCENE-2562:
-

At Ajay Bhat's request, I have updated to Lucene/Solr 4.4 and Pivot 2.0.2.

[~ajay bhat], feel free to comment on this JIRA issue regarding your current 
progress.

Also, it would be great to commit what you have so far when you are ready.


 Make Luke a Lucene/Solr Module
 --

 Key: LUCENE-2562
 URL: https://issues.apache.org/jira/browse/LUCENE-2562
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller
  Labels: gsoc2013
 Attachments: luke1.jpg, luke2.jpg, luke3.jpg, Luke-ALE-1.png, 
 Luke-ALE-2.png, Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png


 see
 http://search.lucidimagination.com/search/document/ee0e048c6b56ee2/luke_in_need_of_maintainer
 http://search.lucidimagination.com/search/document/5e53136b7dcb609b/web_based_luke
 I think it would be great if there was a version of Luke that always worked 
 with trunk - and it would also be great if it was easier to match Luke jars 
 with Lucene versions.
 While I'd like to get GWT Luke into the mix as well, I think the easiest 
 starting point is to straight port Luke to another UI toolkit before 
 abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
 I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
 haven't/don't have a lot of time for this at the moment, but I've plugged 
 away here and there over the past week or two. There is still a *lot* to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5184) CountFacetRequest does not seem to sum the totals of the subResult values.

2013-08-20 Thread Karl Nicholas (JIRA)
Karl Nicholas created LUCENE-5184:
-

 Summary: CountFacetRequest does not seem to sum the totals of the 
subResult values.
 Key: LUCENE-5184
 URL: https://issues.apache.org/jira/browse/LUCENE-5184
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.4
 Environment: Windows 7, Java 1.6 (64 bit), Eclipse
Reporter: Karl Nicholas


CountFacetRequest does not seem to sum the totals of the subResult values when 
the query searches in a facet hierarchy. Seemed to be better behaved in version 
4.0, and changed when I updated to version 4.4, though I did have to change 
code as well. I am using facets to create a hierarchy. Will attempt to upload 
sample code. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5184) CountFacetRequest does not seem to sum the totals of the subResult values.

2013-08-20 Thread Karl Nicholas (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Nicholas updated LUCENE-5184:
--

Attachment: FacetTest.java

Example of issue

 CountFacetRequest does not seem to sum the totals of the subResult values.
 --

 Key: LUCENE-5184
 URL: https://issues.apache.org/jira/browse/LUCENE-5184
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.4
 Environment: Windows 7, Java 1.6 (64 bit), Eclipse
Reporter: Karl Nicholas
  Labels: CountFacetRequest, facet
 Attachments: FacetTest.java

   Original Estimate: 24h
  Remaining Estimate: 24h

 CountFacetRequest does not seem to sum the totals of the subResult values 
 when the query searches in a facet hierarchy. Seemed to be better behaved in 
 version 4.0, and changed when I updated to version 4.4, though I did have to 
 change code as well. I am using facets to create a hierarchy. Will attempt to 
 upload sample code. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-5184) CountFacetRequest does not seem to sum the totals of the subResult values.

2013-08-20 Thread Karl Nicholas (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Nicholas updated LUCENE-5184:
--

Comment: was deleted

(was: Example of issue)

 CountFacetRequest does not seem to sum the totals of the subResult values.
 --

 Key: LUCENE-5184
 URL: https://issues.apache.org/jira/browse/LUCENE-5184
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.4
 Environment: Windows 7, Java 1.6 (64 bit), Eclipse
Reporter: Karl Nicholas
  Labels: CountFacetRequest, facet
 Attachments: FacetTest.java

   Original Estimate: 24h
  Remaining Estimate: 24h

 CountFacetRequest does not seem to sum the totals of the subResult values 
 when the query searches in a facet hierarchy. Seemed to be better behaved in 
 version 4.0, and changed when I updated to version 4.4, though I did have to 
 change code as well. I am using facets to create a hierarchy. Will attempt to 
 upload sample code. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org