[jira] [Commented] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614959#comment-13614959
 ] 

Steve Rowe commented on SOLR-4623:
--

Committed to trunk and branch_4x.

> Add REST API methods to get all remaining schema information, and also to 
> return the full live schema in json, xml, and schema.xml formats
> --
>
> Key: SOLR-4623
> URL: https://issues.apache.org/jira/browse/SOLR-4623
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Affects Versions: 4.2
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.3, 5.0
>
> Attachments: JSONResponseWriter.output.json, 
> SchemaXmlResponseWriter.output.xml, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, SOLR-4623.patch, 
> XMLResponseWriter.output.xml
>
>
> Each remaining schema component (after field types, fields, dynamic fields, 
> copy fields were added by SOLR-4503) should be available from the schema REST 
> API: name, version, default query operator, similarity, default search field, 
> and unique key.
> It should be possible to get the entire live schema back with a single 
> request, and schema.xml format should be one of the supported response 
> formats.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-4623.
--

   Resolution: Fixed
Fix Version/s: 5.0

> Add REST API methods to get all remaining schema information, and also to 
> return the full live schema in json, xml, and schema.xml formats
> --
>
> Key: SOLR-4623
> URL: https://issues.apache.org/jira/browse/SOLR-4623
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Affects Versions: 4.2
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.3, 5.0
>
> Attachments: JSONResponseWriter.output.json, 
> SchemaXmlResponseWriter.output.xml, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, SOLR-4623.patch, 
> XMLResponseWriter.output.xml
>
>
> Each remaining schema component (after field types, fields, dynamic fields, 
> copy fields were added by SOLR-4503) should be available from the schema REST 
> API: name, version, default query operator, similarity, default search field, 
> and unique key.
> It should be possible to get the entire live schema back with a single 
> request, and schema.xml format should be one of the supported response 
> formats.




[jira] [Assigned] (SOLR-3956) group.facet and facet.limit=-1 returns no facet counts

2013-03-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-3956:
--

Assignee: Hoss Man

> group.facet and facet.limit=-1 returns no facet counts
> --
>
> Key: SOLR-3956
> URL: https://issues.apache.org/jira/browse/SOLR-3956
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0
>Reporter: Mike Spencer
>Assignee: Hoss Man
> Attachments: SOLR-3956.patch, SOLR-3956.patch, SOLR-3956.patch, 
> SOLR-3956.patch
>
>
> Attempting to use group.facet=true and facet.limit=-1 to return all facets 
> from a grouped result ends up with the counts not being returned. Adjusting 
> the facet.limit to any number greater than 0 returns the facet counts as 
> expected.
> This does not appear limited to a specific field type, as I have tried on 
> (both multivalued and not) text, string, boolean, and double types.




[jira] [Updated] (SOLR-3956) group.facet and facet.limit=-1 returns no facet counts

2013-03-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-3956:
---

Attachment: SOLR-3956.patch

Hmmm... looking at the context of the fix a bit more, I realized it wasn't going 
to play nice with a non-0 offset, so in this updated patch I added tests 
for that case and fixed the code.

The Lucene-level test passes, but the Solr ones don't, suggesting that there is 
another aspect of this bug somewhere (or that I managed to screw up the 
Solr tests) ... need to look closer later

> group.facet and facet.limit=-1 returns no facet counts
> --
>
> Key: SOLR-3956
> URL: https://issues.apache.org/jira/browse/SOLR-3956
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0
>Reporter: Mike Spencer
> Attachments: SOLR-3956.patch, SOLR-3956.patch, SOLR-3956.patch, 
> SOLR-3956.patch
>
>
> Attempting to use group.facet=true and facet.limit=-1 to return all facets 
> from a grouped result ends up with the counts not being returned. Adjusting 
> the facet.limit to any number greater than 0 returns the facet counts as 
> expected.
> This does not appear limited to a specific field type, as I have tried on 
> (both multivalued and not) text, string, boolean, and double types.




[jira] [Updated] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-4623:
-

Attachment: SOLR-4623-fix-classname-shortening-part-deux.patch

Oops, the last patch didn't cover shortening of names of FieldType subclasses, 
which live under package org.apache.solr.schema, another member of the package 
prefix set that SolrResourceLoader.findClass() checks for.  Fortunately, a 
couple of schema REST API tests caught this problem.

This patch converts the qualification tests in getShortName() to a regex 
accepting the prefixes "org.apache.lucene.analysis.(whatever)", 
"org.apache.solr.analysis.", and "org.apache.solr.schema.".

Committing shortly.  For reals this time.
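As an illustrative sketch only (not the actual patch; the class name, method shape, and exact pattern below are assumptions), a getShortName()-style conversion over those three prefixes might look like:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of regex-based class-name shortening over the package prefixes
// named above; the pattern details are an assumption, not Solr's code.
public class ShortNameSketch {
    private static final Pattern SHORTENABLE = Pattern.compile(
        "^org\\.apache\\.(?:lucene\\.analysis(?:\\.[a-z0-9]+)*"
        + "|solr\\.(?:analysis|schema))\\.([^.]+)$");

    static String getShortName(String fullyQualifiedName) {
        Matcher m = SHORTENABLE.matcher(fullyQualifiedName);
        // Replace a recognized package prefix with the "solr." shorthand;
        // leave anything else untouched.
        return m.matches() ? "solr." + m.group(1) : fullyQualifiedName;
    }

    public static void main(String[] args) {
        System.out.println(getShortName("org.apache.solr.schema.TextField"));          // solr.TextField
        System.out.println(getShortName("org.apache.lucene.analysis.core.LowerCaseFilterFactory")); // solr.LowerCaseFilterFactory
        System.out.println(getShortName("com.example.MyFieldType"));                   // unchanged
    }
}
```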

> Add REST API methods to get all remaining schema information, and also to 
> return the full live schema in json, xml, and schema.xml formats
> --
>
> Key: SOLR-4623
> URL: https://issues.apache.org/jira/browse/SOLR-4623
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Affects Versions: 4.2
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.3
>
> Attachments: JSONResponseWriter.output.json, 
> SchemaXmlResponseWriter.output.xml, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, SOLR-4623.patch, 
> XMLResponseWriter.output.xml
>
>
> Each remaining schema component (after field types, fields, dynamic fields, 
> copy fields were added by SOLR-4503) should be available from the schema REST 
> API: name, version, default query operator, similarity, default search field, 
> and unique key.
> It should be possible to get the entire live schema back with a single 
> request, and schema.xml format should be one of the supported response 
> formats.




[jira] [Commented] (LUCENE-4644) Implement spatial WITHIN query for RecursivePrefixTree

2013-03-26 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614936#comment-13614936
 ] 

Ryan McKinley commented on LUCENE-4644:
---

+1

In a quick review all looks good. 

> Implement spatial WITHIN query for RecursivePrefixTree
> --
>
> Key: LUCENE-4644
> URL: https://issues.apache.org/jira/browse/LUCENE-4644
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 4.3
>
> Attachments: 
> LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch, 
> LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch
>
>





Re: Admin UI pluggability

2013-03-26 Thread Ryan Ernst
Thanks Stefan! I will try the menu top/bottom and await that patch (I use
Chrome).

Ryan

On Tuesday, March 26, 2013, Stefan Matheis wrote:

> While writing about it .. we have one open issue regarding this topic,
> i'll see if i can commit the attached patch this week:
> https://issues.apache.org/jira/browse/SOLR-4405
>
>
>
> On Tuesday, March 26, 2013 at 11:14 PM, Stefan Matheis wrote:
>
> > Hey Ryan
> >
> > In addition to the "admin-extra.html" file which is displayed on the
> > overview, we have "admin-extra.menu-top.html" and
> > "admin-extra.menu-bottom.html" which are displayed at the top / bottom of
> > the core menu, if they exist (and do not only contain a comment, like they
> > do in our sample configuration).
> >
> > So, a kind of quick hack would be: put some CSS definitions in there
> > which hide the existing options (they all have classes assigned, which you
> > would use for the CSS selector). The additional links you'd like to add can
> > be placed either at the top or at the bottom, wherever you like them more.
> >
> > To answer your final question: of course we can :) It mainly depends on
> > someone coming up with some suggestions, chatting about what is
> > doable/usable and what is not .. and then seeing what we can get out of that.
> >
> > That may either be some kind of configuration to hide different
> > (existing) options, or e.g. an additional stylesheet which would be loaded
> > after the ones we already have, so that you can overwrite the default styles.
> >
> > If you can elaborate a bit on what you'd like to change there, we may
> get other ideas as well?
> >
> > Stefan
> >
> >
> > On Tuesday, March 26, 2013 at 6:48 PM, Ryan Ernst wrote:
> >
> > > I would like to add some custom pages to the core menu for my setup,
> replace some existing (like ping) and also remove some others (like data
> import). From what I can tell, the existing hooks are very limited (like
> admin extra that appears in overview for the core). I've searched through
> JIRA for any issues regarding this, but can't find anything. Any thoughts
> on how this could be done? Can we make the admin UI more pluggable?
>
>
>
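Stefan's quick-hack suggestion (hiding existing core-menu entries with CSS placed in admin-extra.menu-top.html) might look roughly like the fragment below. The selectors and link target are assumptions about the class names the menu entries carry, so inspect the rendered core menu in your admin UI for the real ones:

```html
<!-- Hypothetical admin-extra.menu-top.html content: hide two stock
     core-menu entries and add a custom link. The .ping and .dataimport
     class names are guesses; check the actual markup. -->
<style>
  #menu .ping, #menu .dataimport { display: none; }
</style>
<li><a href="#/mycore/custom-page">My Custom Page</a></li>
```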


[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614931#comment-13614931
 ] 

Shai Erera commented on LUCENE-4886:


In the past, before LUCENE-3441, DirTaxoReader had refresh(), which refreshed 
its internal IndexReader. That was buggy, and didn't work if you opened 
DirTaxoWriter with OpenMode.CREATE (re-creating a taxonomy). Part of 
LUCENE-3441 (making it NRT) was to overcome this buggy behavior. So now, to 
reopen, your code looks exactly like IndexReader reopening:

{code}
TaxonomyReader existing = ...; // e.g. a DirectoryTaxonomyReader opened earlier
TaxonomyReader newone = TaxonomyReader.openIfChanged(existing);
if (newone != null) { // like DirectoryReader.openIfChanged, non-null means there were changes
  existing.close();
  existing = newone;
}
{code}

This works whether DTR was opened on Directory or DirTaxoWriter.

> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.<init>(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.<init>(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Comment Edited] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614928#comment-13614928
 ] 

Shai Erera edited comment on LUCENE-4882 at 3/27/13 4:50 AM:
-

bq. It turned out that StandardFacetsAccumulator inherited create from 
FacetsAccumulator.

Ahh, right. That's why in eclipse I mark these things as warnings because it's 
dangerous to call static methods in these cases. At any rate, this is just a 
temporary workaround until 4.3. Thanks for bringing closure!

  was (Author: shaie):
bq. It turned out that StandardFacetsAccumulator inherited create from 
FacetsAccumulator.

Ahh, right. That's why in eclipse I mark these things as warnings because it's 
dangerous to call static methods in these cases. At any rate, this is just a 
temporary workaround until 4.3. Thanks for bringing closure, I'll close the 
issue.
  
> FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
> CategoryPath.
> ---
>
> Key: LUCENE-4882
> URL: https://issues.apache.org/jira/browse/LUCENE-4882
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Assignee: Shai Erera
>Priority: Critical
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4882.patch
>
>
> When I wanted to count root categories, I used to pass "new CategoryPath(new 
> String[0])" to a CountFacetRequest.
> Since upgrading lucene from 4.1 to 4.2, that threw 
> ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
> CountFacetRequest instead, and this time I got NullPointerException.
> It all originates from FacetsAccumulator.java:185
> Does someone conspire to prevent others from counting root categories?




[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614928#comment-13614928
 ] 

Shai Erera commented on LUCENE-4882:


bq. It turned out that StandardFacetsAccumulator inherited create from 
FacetsAccumulator.

Ahh, right. That's why in eclipse I mark these things as warnings because it's 
dangerous to call static methods in these cases. At any rate, this is just a 
temporary workaround until 4.3. Thanks for bringing closure, I'll close the 
issue.

> FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
> CategoryPath.
> ---
>
> Key: LUCENE-4882
> URL: https://issues.apache.org/jira/browse/LUCENE-4882
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Assignee: Shai Erera
>Priority: Critical
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4882.patch
>
>
> When I wanted to count root categories, I used to pass "new CategoryPath(new 
> String[0])" to a CountFacetRequest.
> Since upgrading lucene from 4.1 to 4.2, that threw 
> ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
> CountFacetRequest instead, and this time I got NullPointerException.
> It all originates from FacetsAccumulator.java:185
> Does someone conspire to prevent others from counting root categories?




[jira] [Updated] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-4623:
-

Attachment: SOLR-4623-fix-classname-shortening-part-deux.patch

Patch with the fixes.

Committing shortly.

> Add REST API methods to get all remaining schema information, and also to 
> return the full live schema in json, xml, and schema.xml formats
> --
>
> Key: SOLR-4623
> URL: https://issues.apache.org/jira/browse/SOLR-4623
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Affects Versions: 4.2
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.3
>
> Attachments: JSONResponseWriter.output.json, 
> SchemaXmlResponseWriter.output.xml, 
> SOLR-4623-fix-classname-shortening-part-deux.patch, SOLR-4623.patch, 
> XMLResponseWriter.output.xml
>
>
> Each remaining schema component (after field types, fields, dynamic fields, 
> copy fields were added by SOLR-4503) should be available from the schema REST 
> API: name, version, default query operator, similarity, default search field, 
> and unique key.
> It should be possible to get the entire live schema back with a single 
> request, and schema.xml format should be one of the supported response 
> formats.




[jira] [Comment Edited] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614662#comment-13614662
 ] 

crocket edited comment on LUCENE-4886 at 3/27/13 4:41 AM:
--

I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but as 
far as I remember, "new DirectoryTaxonomyReader(Directory)" could see changes 
and new categories without reopening it on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?

  was (Author: crocket):
I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but 
I remember that "new DirectoryTaxonomyReader(Directory)" could see changes and 
new categories without reopening it on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?
  
> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.<init>(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.<init>(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Commented] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614918#comment-13614918
 ] 

Steve Rowe commented on SOLR-4623:
--

bq. we should review the other uses of the same code-dup'ed function in 
IndexSchema and ensure there are not similar bugs

The code-dup'ed function is in FieldType, not IndexSchema, and right now it's 
used to convert fully qualified class names of analyzers and analysis 
components to the short "solr."-prefixed form.

Looking at SolrResourceLoader.findClass(), where analysis component references 
in the "solr."-prefixed form are converted to Class references, I see 
that this is inappropriate for analyzer classes, since Lucene SPI doesn't cover 
them.  I'll stop shortening analyzer classnames.

I looked up the currently defined analysis factories in trunk, and all of them 
are under org.apache.lucene.analysis.** and org.apache.solr.analysis.**.  
Lucene analysis component factories are loaded via SPI, and Solr analysis 
factories are discovered by iteratively attempting Class.forName() using a 
fixed set of package prefixes, including "org.apache.solr.analysis.".

I'll change the acceptable prefixes to "org.apache.lucene.analysis." and 
"org.apache.solr.analysis.".  

Since SPI isn't used for Solr factories, I'll change the method name from 
normalizeSPIname() to getShortName(), since "shortname"/"short name" seems to 
be what these "solr."-prefixed names are called.  I would change 
SimilarityFactory.normalizeName() to getShortName() too, but I see it's only 
called the one time, so I'll inline it and get rid of the method. 

Patch coming shortly.
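The short-name resolution path described above (Class.forName() over a fixed prefix set, with SPI handling the Lucene factories) can be sketched roughly as follows. This is an illustration only; the class name, prefix list, and fallback behavior are assumptions, not SolrResourceLoader's actual code:

```java
// Rough sketch of prefix-iteration lookup, loosely modeled on the
// SolrResourceLoader.findClass() behavior described above. The prefix
// list and the null fallback are assumptions for illustration.
public class PrefixLookupSketch {
    private static final String[] PACKAGE_PREFIXES = {
        "org.apache.solr.analysis.",
        "org.apache.solr.schema.",
    };

    static Class<?> resolveShortName(String name) {
        // Strip the "solr." shorthand if present.
        String suffix = name.startsWith("solr.") ? name.substring("solr.".length()) : name;
        for (String prefix : PACKAGE_PREFIXES) {
            try {
                return Class.forName(prefix + suffix);
            } catch (ClassNotFoundException ignored) {
                // Not under this prefix; try the next one.
            }
        }
        return null; // a real loader would fall back to SPI or report the failure
    }

    public static void main(String[] args) {
        // Without the Solr jars on the classpath, nothing resolves:
        System.out.println(resolveShortName("solr.TextField"));
    }
}
```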


> Add REST API methods to get all remaining schema information, and also to 
> return the full live schema in json, xml, and schema.xml formats
> --
>
> Key: SOLR-4623
> URL: https://issues.apache.org/jira/browse/SOLR-4623
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Affects Versions: 4.2
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.3
>
> Attachments: JSONResponseWriter.output.json, 
> SchemaXmlResponseWriter.output.xml, SOLR-4623.patch, 
> XMLResponseWriter.output.xml
>
>
> Each remaining schema component (after field types, fields, dynamic fields, 
> copy fields were added by SOLR-4503) should be available from the schema REST 
> API: name, version, default query operator, similarity, default search field, 
> and unique key.
> It should be possible to get the entire live schema back with a single 
> request, and schema.xml format should be one of the supported response 
> formats.




[jira] [Comment Edited] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614662#comment-13614662
 ] 

crocket edited comment on LUCENE-4886 at 3/27/13 4:39 AM:
--

I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but I 
remember that "new DirectoryTaxonomyReader(Directory)" could see changes and 
new categories without reopening it on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?

  was (Author: crocket):
I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but 
"new DirectoryTaxonomyReader(Directory)" could see changes and new categories 
without reopening it on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?
  
> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.<init>(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.<init>(LuceneLoggerWorker.java:141) ~[na:na]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614662#comment-13614662
 ] 

crocket edited comment on LUCENE-4886 at 3/27/13 4:37 AM:
--

I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but 
"new DirectoryTaxonomyReader(Directory)" could see changes and new categories 
without being reopened on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?

  was (Author: crocket):
I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but 
"new DirectoryTaxonomyReader(Directory)" could see changes and new categories 
without being reopened on lucene 4.1
  
> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Comment Edited] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614662#comment-13614662
 ] 

crocket edited comment on LUCENE-4886 at 3/27/13 4:38 AM:
--

I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but 
"new DirectoryTaxonomyReader(Directory)" could see changes and new categories 
without reopening it on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?

  was (Author: crocket):
I haven't tested the behavior of DirectoryTaxonomyReader on lucene 4.2, but 
"new DirectoryTaxonomyReader(Directory)" could see changes and new categories 
without being reopened on lucene 4.1.

By the way, how do I reopen DirectoryTaxonomyReader(DirectoryTaxonomyWriter)?
Should I retrieve a new instance with "new 
DirectoryTaxonomyReader(DirectoryTaxonomyWriter)" whenever I want to reopen it?
  
> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614916#comment-13614916
 ] 

crocket commented on LUCENE-4882:
-

It turned out that StandardFacetsAccumulator inherited create from 
FacetsAccumulator.
Thus, when I invoked StandardFacetsAccumulator.create, FacetsAccumulator.create 
was actually called.

After replacing StandardFacetsAccumulator.create with new 
StandardFacetsAccumulator, it worked.

I'll replace StandardFacetsAccumulator with something else when 4.3 comes 
around.

I guess it is safe to close the issue.
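
The gotcha described above is general Java behavior, not specific to the facet module: a static method invoked through a subclass name resolves to the superclass implementation unless the subclass declares its own. A minimal sketch with hypothetical stand-in classes:

```java
public class StaticInheritDemo {
  static class FacetsAccumulatorLike {
    // Analogous to FacetsAccumulator.create in the report above.
    static String create() { return "parent"; }
  }

  static class StandardLike extends FacetsAccumulatorLike {
    // Declares no create() of its own, so "StandardLike.create()"
    // resolves to the parent's static method at compile time.
  }

  public static void main(String[] args) {
    // Calling create via the subclass still runs the parent implementation.
    System.out.println(StandardLike.create()); // prints "parent"
  }
}
```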

> FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
> CategoryPath.
> ---
>
> Key: LUCENE-4882
> URL: https://issues.apache.org/jira/browse/LUCENE-4882
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Assignee: Shai Erera
>Priority: Critical
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4882.patch
>
>
> When I wanted to count root categories, I used to pass "new CategoryPath(new 
> String[0])" to a CountFacetRequest.
> Since upgrading Lucene from 4.1 to 4.2, that threw 
> ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
> CountFacetRequest instead, and this time I got NullPointerException.
> It all originates from FacetsAccumulator.java:185
> Does someone conspire to prevent others from counting root categories?




[jira] [Comment Edited] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614916#comment-13614916
 ] 

crocket edited comment on LUCENE-4882 at 3/27/13 4:36 AM:
--

It turned out that StandardFacetsAccumulator inherited create from 
FacetsAccumulator.
Thus, when I invoked StandardFacetsAccumulator.create, FacetsAccumulator.create 
was actually called.

After replacing StandardFacetsAccumulator.create with new 
StandardFacetsAccumulator, it worked.

I'll replace StandardFacetsAccumulator with something else when 4.3 comes 
around.

I guess it is safe to close the issue.

  was (Author: crocket):
It turned out that StandardFacetsAccumulator inherited create from 
FacetsAccumulator.
Thus, when I invoked StandardFacetsAccumulator.create, FacetsAccumulator.create 
was called actually.

After replacing StandardFacetsAccumulator.create, it worked.

I'll replace StandardFacetsAccumulator with something else when 4.3 comes 
around.

I guess it is safe to close the issue.
  




[jira] [Updated] (LUCENE-4881) Add a set iterator to SentinalIntSet

2013-03-26 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4881:
-

 Priority: Minor  (was: Major)
Fix Version/s: 4.3
 Assignee: David Smiley

> Add a set iterator to SentinalIntSet
> 
>
> Key: LUCENE-4881
> URL: https://issues.apache.org/jira/browse/LUCENE-4881
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 4.3
>
>
> I'm working on code that needs a hash based int Set.  It will need to iterate 
> over the values, but SentinalIntSet doesn't have this utility feature.  It 
> should be pretty easy to add.
> FYI this is an out-growth of a question I posed to the dev list, examining 3 
> different int hash sets out there: SentinalIntSet, IntHashSet (in Lucene 
> facet module) and the 3rd party IntOpenHashSet (HPPC) -- see 
> http://lucene.472066.n3.nabble.com/IntHashSet-SentinelIntSet-SortedIntDocSet-td4037516.html
>   I decided to go for SentinalIntSet because it's already in Lucene-core, 
> adding the method I need should be easy, and it has a nice lean 
> implementation.
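
The iterator being requested is straightforward for a sentinel-based open-addressing set: every slot holding the sentinel is empty, so iteration simply skips those slots. A hedged sketch with a stand-in class (not the actual Lucene class or its API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of a sentinel-based open-addressing int set: slots equal to the
// sentinel are empty, so "iterating the set" means skipping sentinel slots.
public class SentinelIntSetSketch {
  final int[] keys;
  final int emptyVal; // sentinel marking unused slots; stored values must differ

  SentinelIntSetSketch(int capacity, int emptyVal) {
    this.emptyVal = emptyVal;
    keys = new int[capacity];
    Arrays.fill(keys, emptyVal);
  }

  void put(int v) {
    int slot = (v & 0x7fffffff) % keys.length;
    while (keys[slot] != emptyVal && keys[slot] != v) {
      slot = (slot + 1) % keys.length; // linear probing
    }
    keys[slot] = v;
  }

  // The requested utility: walk the stored values by skipping sentinels.
  List<Integer> values() {
    List<Integer> out = new ArrayList<>();
    for (int k : keys) {
      if (k != emptyVal) out.add(k);
    }
    return out;
  }

  public static void main(String[] args) {
    SentinelIntSetSketch set = new SentinelIntSetSketch(16, -1);
    set.put(3); set.put(42); set.put(3); // duplicate put is a no-op
    System.out.println(set.values().size()); // 2 distinct values
  }
}
```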




[jira] [Resolved] (LUCENE-4887) FSA NoOutputs should implement merge() allowing duplicate keys

2013-03-26 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved LUCENE-4887.
---

   Resolution: Fixed
Fix Version/s: 4.3
   5.0
 Assignee: Ryan McKinley
Lucene Fields:   (was: New)

Added in r1461409

I hit this issue trying to have the FST act as a Set

> FSA NoOutputs should implement merge() allowing duplicate keys
> --
>
> Key: LUCENE-4887
> URL: https://issues.apache.org/jira/browse/LUCENE-4887
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan McKinley
>Assignee: Ryan McKinley
>Priority: Trivial
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4887.patch
>
>
> The NoOutput Object throws NotImplemented if you try to add the same input 
> twice.  This can easily be implemented




[jira] [Updated] (SOLR-3956) group.facet and facet.limit=-1 returns no facet counts

2013-03-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-3956:
---

Attachment: SOLR-3956.patch

Patch updated to include testing directly from the grouping module as well.

> group.facet and facet.limit=-1 returns no facet counts
> --
>
> Key: SOLR-3956
> URL: https://issues.apache.org/jira/browse/SOLR-3956
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0
>Reporter: Mike Spencer
> Attachments: SOLR-3956.patch, SOLR-3956.patch, SOLR-3956.patch
>
>
> Attempting to use group.facet=true and facet.limit=-1 to return all facets 
> from a grouped result ends up with the counts not being returned. Adjusting 
> the facet.limit to any number greater than 0 returns the facet counts as 
> expected.
> This does not appear limited to a specific field type, as I have tried on 
> (both multivalued and not) text, string, boolean, and double types.




[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-26 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614774#comment-13614774
 ] 

Erick Erickson commented on SOLR-4632:
--

Oh my goodness lookit that. It fails. Thanks for the instructions!

BTW, you're in a heap o' trouble with that new core definition: it'll try to 
use the same index as "collection1", with disastrous results I'd expect, unless 
you specify a distinct data dir.

OK, now that I can reproduce this I'll fix it as part of SOLR-4615, thanks 
again!

> transientCacheSize is not retained when persisting solr.xml
> ---
>
> Key: SOLR-4632
> URL: https://issues.apache.org/jira/browse/SOLR-4632
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.2
>Reporter: dfdeshom
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 4.3
>
> Attachments: SOLR-4632.txt
>
>
> transientCacheSize is not persisted to solr.xml when creating a new core. I was 
> able to reproduce this using the following solr.xml file:
> {code:xml}
> <?xml version="1.0" encoding="UTF-8" ?>
> <solr persistent="true">
>   <cores adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}"
>          hostPort="8983" hostContext="solr">
>     <core name="collection1" instanceDir="collection1"/>
>   </cores>
> </solr>
> {code}
> I created a new core:
> {code} curl 
> "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"{code}
> The resulting solr.xml file has the new core added, but is missing the 
> transientCacheSize attribute.




[jira] [Commented] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614722#comment-13614722
 ] 

Steve Rowe commented on SOLR-4623:
--

bq. Robert's comment from the mailing list - I'll commit the patch shortly, as 
I agree about the bugs it fixes - thanks Robert: [...]

Patch committed to trunk and branch_4x.


> Add REST API methods to get all remaining schema information, and also to 
> return the full live schema in json, xml, and schema.xml formats
> --
>
> Key: SOLR-4623
> URL: https://issues.apache.org/jira/browse/SOLR-4623
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Affects Versions: 4.2
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.3
>
> Attachments: JSONResponseWriter.output.json, 
> SchemaXmlResponseWriter.output.xml, SOLR-4623.patch, 
> XMLResponseWriter.output.xml
>
>
> Each remaining schema component (after field types, fields, dynamic fields, 
> copy fields were added by SOLR-4503) should be available from the schema REST 
> API: name, version, default query operator, similarity, default search field, 
> and unique key.
> It should be possible to get the entire live schema back with a single 
> request, and schema.xml format should be one of the supported response 
> formats.




[jira] [Updated] (SOLR-3956) group.facet and facet.limit=-1 returns no facet counts

2013-03-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-3956:
---

Attachment: SOLR-3956.patch

Updated patch to include some test cases

> group.facet and facet.limit=-1 returns no facet counts
> --
>
> Key: SOLR-3956
> URL: https://issues.apache.org/jira/browse/SOLR-3956
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0
>Reporter: Mike Spencer
> Attachments: SOLR-3956.patch, SOLR-3956.patch
>
>
> Attempting to use group.facet=true and facet.limit=-1 to return all facets 
> from a grouped result ends up with the counts not being returned. Adjusting 
> the facet.limit to any number greater than 0 returns the facet counts as 
> expected.
> This does not appear limited to a specific field type, as I have tried on 
> (both multivalued and not) text, string, boolean, and double types.




[jira] [Updated] (LUCENE-4752) Merge segments to sort them

2013-03-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4752:
-

Attachment: LUCENE-4752-2.patch

Patch:
 - fixes the issue by allowing OneMerges to return a doc map that translates 
doc IDs to their new value so that IndexWriter can commit merged deletes,
 - TestSortingMergePolicy has been modified to make deletions more likely to 
happen concurrently with a merge.
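
The doc-map idea in the first bullet can be illustrated with plain arrays: a sorting merge produces a new-to-old ordering, and inverting it gives the old-to-new map through which late-arriving deletes must be routed. A toy sketch (not the actual Lucene API):

```java
import java.util.Arrays;
import java.util.Comparator;

public class DocMapSketch {
  // Build the old->new doc ID map for a merge that sorts docs by sortKeys.
  static int[] oldToNew(long[] sortKeys) {
    // newToOld[n] = the old doc ID that ends up at new position n.
    Integer[] newToOld = new Integer[sortKeys.length];
    for (int i = 0; i < newToOld.length; i++) newToOld[i] = i;
    Arrays.sort(newToOld, Comparator.comparingLong(d -> sortKeys[d]));
    // Invert to get the old->new translation the merge must expose.
    int[] map = new int[newToOld.length];
    for (int n = 0; n < newToOld.length; n++) map[newToOld[n]] = n;
    return map;
  }

  public static void main(String[] args) {
    // Old segment order: doc0 key=30, doc1 key=10, doc2 key=20.
    int[] map = oldToNew(new long[] {30L, 10L, 20L});
    // A delete recorded against old doc 0 must target new doc 2 after the
    // sorting merge, otherwise the wrong document would be deleted.
    System.out.println(map[0]); // 2
  }
}
```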

> Merge segments to sort them
> ---
>
> Key: LUCENE-4752
> URL: https://issues.apache.org/jira/browse/LUCENE-4752
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: David Smiley
>Assignee: Adrien Grand
> Attachments: LUCENE-4752-2.patch, LUCENE-4752.patch, 
> LUCENE-4752.patch, LUCENE-4752.patch, LUCENE-4752.patch, LUCENE-4752.patch, 
> LUCENE-4752.patch, LUCENE-4752.patch, natural_10M_ingestion.log, 
> sorting_10M_ingestion.log
>
>
> It would be awesome if Lucene could write the documents out in a segment 
> based on a configurable order.  This of course applies to merging segments 
> too.  The benefit is increased locality on disk of documents that are likely to 
> be accessed together.  This often applies to documents near each other in 
> time, but also spatially.




[jira] [Assigned] (SOLR-4405) Optional admin html inserts fail on Chrome and Safari (Mac): admin-extra.menu-top.html and admin-extra.menu-bottom.html

2013-03-26 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) reassigned SOLR-4405:
---

Assignee: Stefan Matheis (steffkes)

> Optional admin html inserts fail on Chrome and Safari (Mac): 
> admin-extra.menu-top.html and admin-extra.menu-bottom.html
> ---
>
> Key: SOLR-4405
> URL: https://issues.apache.org/jira/browse/SOLR-4405
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.1
>Reporter: Alexandre Rafalovitch
>Assignee: Stefan Matheis (steffkes)
>Priority: Minor
>  Labels: admin-interface,
> Fix For: 4.3
>
> Attachments: SOLR-4405.patch
>
>
> Including admin-extra.html file in conf directory works - shows up on that 
> core's admin page.
> Doing that for the other two files admin-extra.menu-top.html and 
> admin-extra.menu-bottom.html fails:
> Uncaught Error: HIERARCHY_REQUEST_ERR: DOM Exception 3 require.js:8424
> jQuery.extend.clean require.js:8424
> jQuery.buildFragment require.js:8176
> jQuery.fn.extend.domManip require.js:8003
> jQuery.fn.extend.prepend require.js:7822
> (anonymous function) dashboard.js:62
> fire require.js:3099
> self.fireWith require.js:3217
> done require.js:9454
> callback require.js:10235
> I tried file content with "" and with ...




Re: Admin UI pluggability

2013-03-26 Thread Stefan Matheis
While writing about it .. we have one open issue regarding this topic; I'll see 
if I can commit the attached patch this week: 
https://issues.apache.org/jira/browse/SOLR-4405



On Tuesday, March 26, 2013 at 11:14 PM, Stefan Matheis wrote:

> Hey Ryan
> 
> Additionally to the "admin-extra.html" file which is display on the overview, 
> we have "admin-extra.menu-top.html" and "admin-extra.menu-top.html" which are 
> display on top / at the bottom of core-menu, if they exist (and do not only 
> contain a comment, like they do in our sample-configuration)
> 
> So, a kind of quick hack would be: put some css definitions in there which 
> hide the existing options (they all have classes assigned, which you would 
> use for the css selector). The additional links you'd like to use can be 
> placed either at the top or at the bottom, wherever you like them more.
> 
> To answer your final question: of course we can :) It mainly depends on 
> someone to come up with some suggestions. Chat about what is doable/usable 
> and what is not .. and then see what we can get out of that.
> 
> That may either be some kind of configuration to hide different (existing) 
> options, or e.g. an additional stylesheet which would be loaded after the ones 
> we already have, so that you can overwrite the default styles.
> 
> If you can elaborate a bit on what you'd like to change there, we may get 
> other ideas as well?
> 
> Stefan 
> 
> 
> On Tuesday, March 26, 2013 at 6:48 PM, Ryan Ernst wrote:
> 
> > I would like to add some custom pages to the core menu for my setup, 
> > replace some existing (like ping) and also remove some others (like data 
> > import). From what I can tell, the existing hooks are very limited (like 
> > admin extra that appears in overview for the core). I've searched through 
> > JIRA for any issues regarding this, but can't find anything. Any thoughts 
> > on how this could be done? Can we make the admin UI more pluggable? 






[jira] [Commented] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614669#comment-13614669
 ] 

Steve Rowe commented on SOLR-4623:
--

Robert's comment from the mailing list - I'll commit the patch shortly, as I 
agree about the bugs it fixes - thanks Robert:

{quote}
Well there are several bugs, resulting from the over-aggressive
normalization combined with normalizing *always* despite this comment:

 // Only normalize factory names

So consider the case someone has

<similarity class="org.apache.lucene.search.similarities.BM25Similarity"/>

which is allowed (it uses the anonymous factory). In this case it's
bogusly normalized to "solr.BM25Similarity", which is invalid and won't
be loaded by IndexSchema, since it only looks for solr. in
org.apache.solr.search.similarities.

I think a patch like the following is a good start, but we should
review the other uses of the same code-dup'ed function in IndexSchema
and ensure there are not similar bugs:

I'm sorry if I came off terse or as a haiku; it's not a big deal, I
just want it to work correctly.

{noformat}
Index: solr/core/src/java/org/apache/solr/schema/SimilarityFactory.java
===================================================================
--- solr/core/src/java/org/apache/solr/schema/SimilarityFactory.java	(revision 1460952)
+++ solr/core/src/java/org/apache/solr/schema/SimilarityFactory.java	(working copy)
@@ -51,9 +51,9 @@
   public abstract Similarity getSimilarity();
 
-  private static String normalizeSPIname(String fullyQualifiedName) {
-    if (fullyQualifiedName.startsWith("org.apache.lucene.") || fullyQualifiedName.startsWith("org.apache.solr.")) {
-      return "solr" + fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf('.'));
+  private static String normalizeName(String fullyQualifiedName) {
+    if (fullyQualifiedName.startsWith("org.apache.solr.search.similarities.")) {
+      return "solr" + fullyQualifiedName.substring("org.apache.solr.search.similarities".length());
     }
     return fullyQualifiedName;
   }
@@ -66,10 +66,10 @@
       className = getSimilarity().getClass().getName();
     } else {
       // Only normalize factory names
-      className = normalizeSPIname(className);
+      className = normalizeName(className);
     }
     SimpleOrderedMap<Object> props = new SimpleOrderedMap<Object>();
-    props.add(CLASS_NAME, normalizeSPIname(className));
+    props.add(CLASS_NAME, className);
     if (null != params) {
       Iterator<String> iter = params.getParameterNamesIterator();
       while (iter.hasNext()) {
{noformat}
{quote}
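
The effect of the two normalization rules quoted above can be reproduced with plain string logic. The method bodies below are transcribed from the patch; the demo class itself is just scaffolding:

```java
public class NormalizeDemo {
  // Old, over-aggressive rule: shortens any org.apache.lucene.* or
  // org.apache.solr.* class name down to "solr.<SimpleName>".
  public static String normalizeSPIname(String name) {
    if (name.startsWith("org.apache.lucene.") || name.startsWith("org.apache.solr.")) {
      return "solr" + name.substring(name.lastIndexOf('.'));
    }
    return name;
  }

  // Fixed rule: shorten only names IndexSchema can actually resolve,
  // i.e. those under org.apache.solr.search.similarities.
  public static String normalizeName(String name) {
    if (name.startsWith("org.apache.solr.search.similarities.")) {
      return "solr" + name.substring("org.apache.solr.search.similarities".length());
    }
    return name;
  }

  public static void main(String[] args) {
    String luceneSim = "org.apache.lucene.search.similarities.BM25Similarity";
    // Old rule shortens a Lucene class to "solr.BM25Similarity", which
    // IndexSchema cannot load (it resolves the "solr." prefix only against
    // org.apache.solr.search.similarities).
    System.out.println(normalizeSPIname(luceneSim)); // solr.BM25Similarity
    // Fixed rule leaves the Lucene class fully qualified.
    System.out.println(normalizeName(luceneSim));    // unchanged

    String solrSim = "org.apache.solr.search.similarities.SchemaSimilarityFactory";
    System.out.println(normalizeName(solrSim));      // solr.SchemaSimilarityFactory
  }
}
```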





[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614662#comment-13614662
 ] 

crocket commented on LUCENE-4886:
-

I haven't tested the behavior of DirectoryTaxonomyReader on Lucene 4.2, but on 
Lucene 4.1, "new DirectoryTaxonomyReader(Directory)" could see changes and new 
categories without being reopened.

> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.<init>(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.<init>(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Commented] (LUCENE-4887) FSA NoOutputs should implement merge() allowing duplicate keys

2013-03-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614644#comment-13614644
 ] 

Michael McCandless commented on LUCENE-4887:


+1

This makes the FST act like a Set, i.e. adding the same input more than once is 
indistinguishable from adding that input only once.

> FSA NoOutputs should implement merge() allowing duplicate keys
> --
>
> Key: LUCENE-4887
> URL: https://issues.apache.org/jira/browse/LUCENE-4887
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan McKinley
>Priority: Trivial
> Attachments: LUCENE-4887.patch
>
>
> The NoOutputs object throws a not-implemented exception if you try to add the 
> same input twice. This can easily be implemented.




[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614621#comment-13614621
 ] 

Shai Erera commented on LUCENE-4882:


bq. I tried to use StandardFacetsAccumulator, but 
FacetsCollector.getFacetResults still throws 
"java.lang.ArrayIndexOutOfBoundsException: 0"

Strange, the test I added to TestFacetsCollector passed with 
StandardFacetsAccumulator. Also, I don't see that SFA has a static create() 
method -- are you sure you're using the right version of the code? Or perhaps 
it was a copy/paste bug?

> FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
> CategoryPath.
> ---
>
> Key: LUCENE-4882
> URL: https://issues.apache.org/jira/browse/LUCENE-4882
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Assignee: Shai Erera
>Priority: Critical
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4882.patch
>
>
> When I wanted to count root categories, I used to pass "new CategoryPath(new 
> String[0])" to a CountFacetRequest.
> Since upgrading Lucene from 4.1 to 4.2, that threw 
> ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
> CountFacetRequest instead, and this time I got NullPointerException.
> It all originates from FacetsAccumulator.java:185
> Does someone conspire to prevent others from counting root categories?




[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614619#comment-13614619
 ] 

Shai Erera commented on LUCENE-4886:


Basically it should be ok, as long as they are "reopened" together. We have 
LUCENE-3786 open to create a SearcherTaxoManager, so that you don't need to do 
this stuff yourself (i.e. make sure that the taxonomy index is always as good 
as the search index). Until then, if you reopen them together, then make sure 
to reopen the IndexReader first, and then the TaxonomyReader. It's ok if 
TaxoReader sees more categories than IndexReader, but not ok the other way 
around.
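Until SearcherTaxoManager exists, the reopen ordering above has to be done by hand. A hedged sketch against my recollection of the 4.2 API (the ReaderPair class and its field names are made up for illustration):

```java
import org.apache.lucene.facet.taxonomy.TaxonomyReader;
import org.apache.lucene.index.DirectoryReader;

// Refresh the search index first, then the taxonomy, so the TaxonomyReader
// is never behind the IndexReader it is used with.
final class ReaderPair {
  DirectoryReader indexReader;
  TaxonomyReader taxoReader;

  void refresh() throws java.io.IOException {
    DirectoryReader newIndexReader = DirectoryReader.openIfChanged(indexReader);
    if (newIndexReader != null) {
      indexReader.close();
      indexReader = newIndexReader;
    }
    // Reopened second: the taxonomy may end up seeing more categories than
    // the search index, which is fine; the reverse is not.
    TaxonomyReader newTaxoReader = TaxonomyReader.openIfChanged(taxoReader);
    if (newTaxoReader != null) {
      taxoReader.close();
      taxoReader = newTaxoReader;
    }
  }
}
```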





[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614616#comment-13614616
 ] 

Shai Erera commented on LUCENE-4882:


Ahh. Well, you can still do without modifying CLB, by specifying 
OrdinalPolicy.ALL_PARENTS for your category lists. That's a change we made in 
4.2: the root dimension ordinal is not indexed by default (== 
OP.ALL_BUT_DIMENSION) to save some space as well as CPU cycles. The downside 
(besides the bug!) is that you don't get the count of the dimension. 
Performance-wise, this improved things somewhat, but nothing critical. So you 
can choose between overriding FacetIndexingParams.getCategoryListParams() to 
always return a CLP which specifies OP.ALL_PARENTS for all categories, or 
extending CLB and applying the fix locally. In either case, I would put in the 
TODO :).
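As a sketch of the first option (written from memory of the 4.2 facet API; double-check the package and method names against your version):

```java
import org.apache.lucene.facet.params.CategoryListParams;
import org.apache.lucene.facet.params.CategoryListParams.OrdinalPolicy;
import org.apache.lucene.facet.params.FacetIndexingParams;
import org.apache.lucene.facet.taxonomy.CategoryPath;

// Indexing params whose category lists always use ALL_PARENTS, so the
// dimension (root) ordinal is indexed and dimension counts work again.
// TODO: remove when 4.3 is out.
final class AllParentsIndexingParams extends FacetIndexingParams {
  private final CategoryListParams clp = new CategoryListParams() {
    @Override
    public OrdinalPolicy getOrdinalPolicy(String dimension) {
      return OrdinalPolicy.ALL_PARENTS;
    }
  };

  @Override
  public CategoryListParams getCategoryListParams(CategoryPath category) {
    return clp;
  }
}
```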





[jira] [Comment Edited] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614607#comment-13614607
 ] 

crocket edited comment on LUCENE-4882 at 3/26/13 10:16 PM:
---

1) I have a facet "me" that doesn't have a subcategory. Does it mean I need to 
modify CountingListBuilder as well as FacetsAccumulator or just use 
StandardFacetsAccumulator?

2) I tried to use StandardFacetsAccumulator, but 
FacetsCollector.getFacetResults still throws 
"java.lang.ArrayIndexOutOfBoundsException: 0"
FacetsAccumulator sfa=StandardFacetsAccumulator.create(fsp, 
searcher.getIndexReader(), taxoReader);
FacetsCollector fc = FacetsCollector.create(sfa);

Does it mean I need to apply your patch to my project?

  was (Author: crocket):
I have a facet "me" that doesn't have a subcategory. Does it mean I need to 
modify CountingListBuilder as well as FacetsAccumulator or just use 
StandardFacetsAccumulator?
  




Re: Admin UI pluggability

2013-03-26 Thread Stefan Matheis
Hey Ryan

In addition to the "admin-extra.html" file, which is displayed on the overview, 
we have "admin-extra.menu-top.html" and "admin-extra.menu-bottom.html", which 
are displayed at the top / at the bottom of the core menu, if they exist (and do 
not only contain a comment, like they do in our sample configuration).

So, a quick hack would be: put some CSS definitions in there which hide the 
existing options (they all have classes assigned, which you would use for the 
CSS selectors). The additional links you'd like to use can be placed either at 
the top or at the bottom, wherever you like them more.
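For example (the .ping / .dataimport class names below are illustrative guesses, not taken from the actual admin UI markup; inspect the rendered core-menu HTML for the real ones), admin-extra.menu-top.html could contain something like:

```html
<!-- hide existing core-menu entries via their assigned classes
     (class names are guesses; check the actual markup) -->
<style>
  #menu .ping, #menu .dataimport { display: none; }
</style>
<!-- and add a custom link at the top of the menu -->
<li><a href="#/mycore/my-custom-page">My Custom Page</a></li>
```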

To answer your final question: of course we can :) It mainly depends on someone 
coming up with some suggestions, chatting about what is doable/usable and what 
is not .. and then seeing what we can get out of that.

That may either be some kind of configuration to hide different (existing) 
options, or e.g. an additional stylesheet which would be loaded after the ones 
we already have, so that you can overwrite the default styles.

If you can elaborate a bit on what you'd like to change there, we may get other 
ideas as well?

Stefan 


On Tuesday, March 26, 2013 at 6:48 PM, Ryan Ernst wrote:

> I would like to add some custom pages to the core menu for my setup, replace 
> some existing (like ping) and also remove some others (like data import). 
> From what I can tell, the existing hooks are very limited (like admin extra 
> that appears in overview for the core). I've searched through JIRA for any 
> issues regarding this, but can't find anything. Any thoughts on how this 
> could be done? Can we make the admin UI more pluggable?







[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614613#comment-13614613
 ] 

crocket commented on LUCENE-4886:
-

I now use "new DirectoryTaxonomyReader(DirectoryTaxonomyWriter)".
It is a near-real-time reader.

Is it ok to use that with StandardFacetsAccumulator.create(fsp, 
searcher.getIndexReader(), taxoReader); when the searcher is acquired by 
NRTManager.acquire?
searcher.getIndexReader() and taxoReader are both near-real-time, and there 
might be collisions between the two.





[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614607#comment-13614607
 ] 

crocket commented on LUCENE-4882:
-

I have a facet "me" that doesn't have a subcategory. Does it mean I need to 
modify CountingListBuilder as well as FacetsAccumulator or just use 
StandardFacetsAccumulator?





[jira] [Updated] (LUCENE-4888) SloppyPhraseScorer calls DocsAndPositionsEnum.advance with target = -1

2013-03-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4888:
-

Attachment: LUCENE-4888.patch

A patch that adds assertions to AssertingDocsAndPositionsEnum. You can 
reproduce the issue by applying this patch and running {{ant test 
-Dtestcase=TestSloppyPhraseQuery -Dtests.codec=Asserting}}.

> SloppyPhraseScorer calls DocsAndPositionsEnum.advance with target = -1
> --
>
> Key: LUCENE-4888
> URL: https://issues.apache.org/jira/browse/LUCENE-4888
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.2
>Reporter: Adrien Grand
> Attachments: LUCENE-4888.patch
>
>
> SloppyPhraseScorer calls DocsAndPositionsEnum.advance with target = -1 
> although the behavior of this method is undefined in such cases.




[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614604#comment-13614604
 ] 

Shai Erera commented on LUCENE-4882:


The fix that I added to CountingListBuilder is only for the case where you 
index facets such as "a", "b", which is done in tests only. Usually, your 
facets will look like dimension/level1[/level2/level3...], in which case you're 
not affected by the fix in CLB. I would just extend FacetsAccumulator, with a 
TODO "remove when 4.3 is out"...





[jira] [Created] (LUCENE-4888) SloppyPhraseScorer calls DocsAndPositionsEnum.advance with target = -1

2013-03-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-4888:


 Summary: SloppyPhraseScorer calls DocsAndPositionsEnum.advance 
with target = -1
 Key: LUCENE-4888
 URL: https://issues.apache.org/jira/browse/LUCENE-4888
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Adrien Grand


SloppyPhraseScorer calls DocsAndPositionsEnum.advance with target = -1 although 
the behavior of this method is undefined in such cases.
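The undefined-behavior contract can be made concrete with a small checking wrapper, in the spirit of the AssertingDocsAndPositionsEnum assertions in the attached patch. This is a self-contained illustration with made-up type names, not Lucene's actual code:

```java
// A minimal stand-in for the postings-enum API being checked.
interface Docs {
  int docID();
  int advance(int target);
}

// Wrapper that rejects the undefined advance(-1) call pattern: advance is
// only defined for non-negative targets beyond the current document.
final class CheckedDocs implements Docs {
  private final Docs in;

  CheckedDocs(Docs in) { this.in = in; }

  @Override public int docID() { return in.docID(); }

  @Override public int advance(int target) {
    if (target < 0) {
      throw new IllegalArgumentException("advance target must be >= 0, got " + target);
    }
    if (target <= in.docID()) {
      throw new IllegalStateException("advance target must be > current docID " + in.docID());
    }
    return in.advance(target);
  }
}
```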




[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614602#comment-13614602
 ] 

Shai Erera commented on LUCENE-4886:


In the past DTW committed its IW in the ctor, and it no longer does that (to 
follow IW's transactional logic). I think that this happened even before 4.1, 
but I don't remember, nor am I able to find the issue where I made this change. 
I found some trace of this change in LUCENE-3441 (look at the first 2-3 
comments).





[jira] [Comment Edited] (SOLR-3758) DirectSolrSpellChecker doesn't work when using group.

2013-03-26 Thread Alexander Kingson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614599#comment-13614599
 ] 

Alexander Kingson edited comment on SOLR-3758 at 3/26/13 9:56 PM:
--

Hi,

Commenting out
 if (!params.getBool(COMPONENT_NAME, false) || spellCheckers.isEmpty()) {
  return;
}

in SpellCheckComponent#process()

solves the issue, because when group=true params.getBool(COMPONENT_NAME, false) 
is false.

Which part constructs this params variable?

Thanks.
Alex.





  was (Author: alxksn):
Hi,

Commenting out
[code]
 if (!params.getBool(COMPONENT_NAME, false) || spellCheckers.isEmpty()) {
  return;
}
[/code]

in SpellCheckComponent#process()

solves the issue, because when group=true params.getBool(COMPONENT_NAME, false) 
is false.

Which part constructs this params variable?

Thanks.
Alex.




  
> DirectSolrSpellChecker doesn't work when using group.
> -
>
> Key: SOLR-3758
> URL: https://issues.apache.org/jira/browse/SOLR-3758
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, spellchecker
>Affects Versions: 4.0-BETA
> Environment: Linux Debian 6 / SolrCloud with 2 shards and 2 replicas.
>Reporter: Christian Johnsson
>  Labels: DirectSolrSpellChecker, bug, spellchecker, suggestions
>
> It seems like the spellchecker using solr.DirectSolrSpellChecker doesn't work 
> when grouping results.
> /select?q=mispeled
> gives me correct spelling suggestions,
> but...
> /select?q=mispeled&group=true&group.main=true&group.field=title
> doesn't give any suggestions.
> It worked in 3.5 with the index-based spellchecker.
> It seems like if I misspell something that returns 0 results I don't get any 
> suggestions. If I misspell something that generates a result I do get 
> suggestions on it.
> It should come up with proper suggestions even if there are no results to be 
> displayed (but there are things that should be suggested).
> Long story short: same functionality as in 3.5 :-)




[jira] [Commented] (SOLR-3758) DirectSolrSpellChecker doesn't work when using group.

2013-03-26 Thread Alexander Kingson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614599#comment-13614599
 ] 

Alexander Kingson commented on SOLR-3758:
-

Hi,

Commenting out
[code]
 if (!params.getBool(COMPONENT_NAME, false) || spellCheckers.isEmpty()) {
  return;
}
[/code]

in SpellCheckComponent#process()

solves the issue, because when group=true params.getBool(COMPONENT_NAME, false) 
is false.

Which part constructs this params variable?

Thanks.
Alex.









[jira] [Comment Edited] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614597#comment-13614597
 ] 

crocket edited comment on LUCENE-4882 at 3/26/13 9:55 PM:
--

I decided to stick to 4.2 for a while.

Should I override methods in both FacetsAccumulator.java and 
CountingListBuilder.java to make it work?

  was (Author: crocket):
Should I override methods in both FacetsAccumulator.java and 
CountingListBuilder.java to make it work?
  
> FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
> CategoryPath.
> ---
>
> Key: LUCENE-4882
> URL: https://issues.apache.org/jira/browse/LUCENE-4882
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Assignee: Shai Erera
>Priority: Critical
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4882.patch
>
>
> When I wanted to count root categories, I used to pass "new CategoryPath(new 
> String[0])" to a CountFacetRequest.
> Since upgrading Lucene from 4.1 to 4.2, that threw 
> ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
> CountFacetRequest instead, and this time I got NullPointerException.
> It all originates from FacetsAccumulator.java:185
> Does someone conspire to prevent others from counting root categories?




[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614597#comment-13614597
 ] 

crocket commented on LUCENE-4882:
-

Should I override methods in both FacetsAccumulator.java and 
CountingListBuilder.java to make it work?

> FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
> CategoryPath.
> ---
>
> Key: LUCENE-4882
> URL: https://issues.apache.org/jira/browse/LUCENE-4882
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Assignee: Shai Erera
>Priority: Critical
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-4882.patch
>
>
> When I wanted to count root categories, I used to pass "new CategoryPath(new 
> String[0])" to a CountFacetRequest.
> Since upgrading Lucene from 4.1 to 4.2, that threw 
> ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
> CountFacetRequest instead, and this time I got NullPointerException.
> It all originates from FacetsAccumulator.java:185
> Does someone conspire to prevent others from counting root categories?




[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-26 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614589#comment-13614589
 ] 

Erick Erickson commented on SOLR-4632:
--

OK, I'll give it a shot with those instructions.


> transientCacheSize is not retained when persisting solr.xml
> ---
>
> Key: SOLR-4632
> URL: https://issues.apache.org/jira/browse/SOLR-4632
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.2
>Reporter: dfdeshom
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 4.3
>
> Attachments: SOLR-4632.txt
>
>
> transientCacheSize is not persisted to solr.xml when creating a new core. I was 
> able to reproduce this using the following solr.xml file:
> {code:xml}
> 
> 
>adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}" 
> hostPort="8983" hostContext="solr">
> 
>   
> 
> {code}
> I created a new core:
> {code} curl 
> "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"{code}
> The resulting solr.xml file has the new core added, but is missing the 
> transientCacheSize attribute.




[jira] [Comment Edited] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614568#comment-13614568
 ] 

crocket edited comment on LUCENE-4886 at 3/26/13 9:49 PM:
--

SegmentInfos$FindSegmentsFile.run seems almost the same on both 4.1 and 4.2.

Why did it work fine on lucene 4.1?

  was (Author: crocket):
Why did it work fine on lucene 4.1?
  
> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Updated] (LUCENE-4887) FSA NoOutputs should implement merge() allowing duplicate keys

2013-03-26 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated LUCENE-4887:
--

Attachment: LUCENE-4887.patch

merge as:
{code:java}
  @Override
  public Object merge(Object first, Object second) {
assert first == NO_OUTPUT;
assert second == NO_OUTPUT;
return NO_OUTPUT;
  }
{code}
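As a toy illustration (using java.util.Map, not Lucene's FST Builder) of what implementing merge() buys: when every output is the same singleton, merging the outputs of a duplicate key is trivially that singleton again, so the duplicate can be absorbed instead of throwing.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration (not Lucene code): with a singleton "no output", merging
// two outputs for a duplicate key just returns the singleton, so duplicate
// inserts succeed instead of throwing.
public class NoOutputMergeSketch {
    public static final Object NO_OUTPUT = new Object();

    public static Object merge(Object first, Object second) {
        assert first == NO_OUTPUT && second == NO_OUTPUT;
        return NO_OUTPUT; // merging "nothing" with "nothing" is still "nothing"
    }

    public static void main(String[] args) {
        Map<String, Object> keys = new LinkedHashMap<>();
        for (String key : new String[] {"cat", "dog", "cat"}) { // "cat" added twice
            keys.merge(key, NO_OUTPUT, NoOutputMergeSketch::merge);
        }
        System.out.println(keys.size()); // 2 -- the duplicate is absorbed, no exception
    }
}
```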

> FSA NoOutputs should implement merge() allowing duplicate keys
> --
>
> Key: LUCENE-4887
> URL: https://issues.apache.org/jira/browse/LUCENE-4887
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan McKinley
>Priority: Trivial
> Attachments: LUCENE-4887.patch
>
>
> The NoOutputs object throws NotImplemented if you try to add the same input 
> twice. This can easily be implemented.




[jira] [Commented] (SOLR-3758) DirectSolrSpellChecker doesn't work when using group.

2013-03-26 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614570#comment-13614570
 ] 

James Dyer commented on SOLR-3758:
--

I looked a little at this, and it seems that when "group=true" the first 
stage request doesn't reach all the shards.  For the case I was testing, with 2 
shards, only 1 shard would get the request.  This would make the spellchecker 
work some of the time and fail at other times.  I haven't figured out for sure why this 
happens though.  Possibly the grouping logic short-circuits and doesn't bother 
sending requests to shards that are known not to contain the groups that will be 
returned?

> DirectSolrSpellChecker doesn't work when using group.
> -
>
> Key: SOLR-3758
> URL: https://issues.apache.org/jira/browse/SOLR-3758
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, spellchecker
>Affects Versions: 4.0-BETA
> Environment: Linux Debian 6 / SolrCloud with 2 shards and 2 replicas.
>Reporter: Christian Johnsson
>  Labels: DirectSolrSpellChecker, bug, spellchecker, suggestions
>
> It seems like spellchecker using solr.DirectSolrSpellChecker doesn't work 
> when grouping results.
> /select?q=mispeled
> gives me correct spelling suggestions,
> but
> /select?q=mispeled&group=true&group.main=true&group.field=title
> doesn't give any suggestions.
> It worked in 3.5 with the index-based spellchecker.
> It seems like if I misspell something that returns 0 results I don't get any 
> suggestions. If I misspell something that generates a result I get 
> suggestions on it.
> It should come up with proper suggestions even if there are no results to be 
> displayed (but there are things that should be suggested).
> Long story short: same functionality as in 3.5 :-)




[jira] [Created] (LUCENE-4887) FSA NoOutputs should implement merge() allowing duplicate keys

2013-03-26 Thread Ryan McKinley (JIRA)
Ryan McKinley created LUCENE-4887:
-

 Summary: FSA NoOutputs should implement merge() allowing duplicate 
keys
 Key: LUCENE-4887
 URL: https://issues.apache.org/jira/browse/LUCENE-4887
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Trivial


The NoOutputs object throws NotImplemented if you try to add the same input 
twice. This can easily be implemented.




[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614568#comment-13614568
 ] 

crocket commented on LUCENE-4886:
-

Why did it work fine on lucene 4.1?

> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]




Re: We have confluence wikis!

2013-03-26 Thread Mark Miller

On Mar 26, 2013, at 4:59 PM, Adrien Grand  wrote:

> Hi Mark,
> 
> On Thu, Mar 7, 2013 at 3:31 AM, Mark Miller  wrote:
>> I'm going to start like some fresh doc stuff. And with versioning. Though I 
>> don't yet know what I'm doing. So help appreciated. This is stuff that 
>> Hossman is a genius at.
> 
> Is the goal of these wikis to replaces the current ones or are they
> intended for different content?
> 
> --
> Adrien

IMO, the goal is to replace the current wikis. That's a fair bit of work, 
though; it is a big goal. Lucid has helped Solr a lot with their ref guide as a 
seed. Lucene won't have that same boost.

IMO though, MoinMoin is pretty weak and ugly compared to Confluence. I'd love 
to see everything of value move over myself.

I don't have any illusions that any of this will be fast or easy, though.

- Mark



[jira] [Commented] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614559#comment-13614559
 ] 

Michael McCandless commented on LUCENE-4886:


You need to call commit() before trying to open the DirectoryTaxonomyReader ... 
(same as with IndexWriter/IndexReader).

Or, alternatively, use near-real-time: open DirectoryTaxonomyReader passing in 
the DirectoryTaxonomyWriter.
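In code, the two options look roughly like this (an untested sketch against the 4.2 facet API):

```java
import java.io.File;

import org.apache.lucene.facet.taxonomy.TaxonomyReader;
import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader;
import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class TaxonomyOpenSketch {
  public static void main(String[] args) throws Exception {
    Directory categoryDir = FSDirectory.open(new File("category"));
    DirectoryTaxonomyWriter taxoWriter =
        new DirectoryTaxonomyWriter(categoryDir, OpenMode.CREATE_OR_APPEND);

    // Option 1: commit so a segments file exists on disk,
    // then open the reader over the Directory.
    taxoWriter.commit();
    TaxonomyReader committedReader = new DirectoryTaxonomyReader(categoryDir);

    // Option 2 (near-real-time): open the reader from the writer itself;
    // no commit is needed for the reader to see the taxonomy.
    TaxonomyReader nrtReader = new DirectoryTaxonomyReader(taxoWriter);

    nrtReader.close();
    committedReader.close();
    taxoWriter.close();
  }
}
```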

> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is thrown.
> Below is the exception stack trace.
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
> org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
> [write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
> _0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>  ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
> ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
>  ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
>   at 
> org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Updated] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

crocket updated LUCENE-4886:


Description: 
I made a taxonomy directory with
categoryDir=FSDirectory.open(new File("category"));
taxoWriter=new DirectoryTaxonomyWriter(categoryDir, OpenMode.CREATE_OR_APPEND);

Right after creating DirectoryTaxonomyWriter, I created a 
DirectoryTaxonomyReader with
taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
IndexNotFoundException. It used to work fine with lucene 4.1.
If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
taxonomy directory, no exception is thrown.

Below is the exception stack trace.

org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
[write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
_0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
 ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
 ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
 ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
at 
org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
 ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
at 
org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]

  was:
I made a taxonomy directory with
categoryDir=FSDirectory.open(new File("category"));
taxoWriter=new DirectoryTaxonomyWriter(categoryDir, OpenMode.CREATE_OR_APPEND);

Right after creating DirectoryTaxonomyWriter, I created a 
DirectoryTaxonomyReader with
taxoReader=new DirectoryTaxonomyRriter(categoryDir); which throws 
IndexNotFoundException. It used to work fine with lucene 4.1.
If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
taxonomy directory, no exception is thrown.

Below is the exception stack trace.

org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
[write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
_0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
 ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
 ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
 ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
at 
org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
 ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
at 
org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]


> "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no 
> segments* file found" on a new taxonomy directory
> -
>
> Key: LUCENE-4886
> URL: https://issues.apache.org/jira/browse/LUCENE-4886
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: 4.2
>Reporter: crocket
>Priority: Critical
>
> I made a taxonomy directory with
> categoryDir=FSDirectory.open(new File("category"));
> taxoWriter=new DirectoryTaxonomyWriter(categoryDir, 
> OpenMode.CREATE_OR_APPEND);
> Right after creating DirectoryTaxonomyWriter, I created a 
> DirectoryTaxonomyReader with
> taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
> IndexNotFoundException. It used to work fine with lucene 4.1.
> If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
> taxonomy directory, no exception is t

[jira] [Created] (LUCENE-4886) "new DirectoryTaxonomyReader(Directory)" throws "IndexNotFoundException: no segments* file found" on a new taxonomy directory

2013-03-26 Thread crocket (JIRA)
crocket created LUCENE-4886:
---

 Summary: "new DirectoryTaxonomyReader(Directory)" throws 
"IndexNotFoundException: no segments* file found" on a new taxonomy directory
 Key: LUCENE-4886
 URL: https://issues.apache.org/jira/browse/LUCENE-4886
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Priority: Critical


I made a taxonomy directory with
categoryDir=FSDirectory.open(new File("category"));
taxoWriter=new DirectoryTaxonomyWriter(categoryDir, OpenMode.CREATE_OR_APPEND);

Right after creating DirectoryTaxonomyWriter, I created a 
DirectoryTaxonomyReader with
taxoReader=new DirectoryTaxonomyReader(categoryDir); which throws 
IndexNotFoundException. It used to work fine with lucene 4.1.
If I invoke new DirectoryTaxonomyReader(DirectoryTaxonomyWriter) on a new 
taxonomy directory, no exception is thrown.

Below is the exception stack trace.

org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
org.apache.lucene.store.MMapDirectory@/home/elisa/repos/mine/ZeroIrcLog/irclog-category
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@373983f: files: 
[write.lock, _0.si, _0.fnm, _0.fdt, _0_Lucene41_0.tim, _0_Lucene41_0.pos, 
_0.fdx, _0_Lucene41_0.doc, _0_Lucene41_0.tip]
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
 ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
 ~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65) 
~[lucene-core-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:25:29]
at 
org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.openIndexReader(DirectoryTaxonomyReader.java:218)
 ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
at 
org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader.(DirectoryTaxonomyReader.java:99)
 ~[lucene-facet-4.2.0.jar:4.2.0 1453694 - rmuir - 2013-03-06 22:26:53]
at 
org.zeroirclog.LuceneLoggerWorker.(LuceneLoggerWorker.java:141) ~[na:na]




[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-26 Thread dfdeshom (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614552#comment-13614552
 ] 

dfdeshom commented on SOLR-4632:


I created a solr.xml file that looks like this:

{code:xml}


  

  

{code}

I started solr: java -jar start.jar

I issued a command to create another core:
{code} curl 
"http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&name=tmp5"{code}

I opened the modified solr.xml file. It contained the newly-created core, but 
transientCacheSize was not present anymore. I don't think transientCacheSize 
should be modifiable by the HTTP API either; I just don't think persisting 
should swallow it. Ideally it would work as you said, but it doesn't for me.
 

> transientCacheSize is not retained when persisting solr.xml
> ---
>
> Key: SOLR-4632
> URL: https://issues.apache.org/jira/browse/SOLR-4632
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.2
>Reporter: dfdeshom
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 4.3
>
> Attachments: SOLR-4632.txt
>
>
> transientCacheSize is not persisted to solr.xml when creating a new core. I was 
> able to reproduce this using the following solr.xml file:
> {code:xml}
> 
> 
>adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}" 
> hostPort="8983" hostContext="solr">
> 
>   
> 
> {code}
> I created a new core:
> {code} curl 
> "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"{code}
> The resulting solr.xml file has the new core added, but is missing the 
> transientCacheSize attribute.




[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-26 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614549#comment-13614549
 ] 

Erick Erickson commented on SOLR-4632:
--

Nope. Also, there's a test that's been in there from the beginning that 
verifies that if the value for transientCacheSize is other than 
Integer.MAX_VALUE, it's persisted. If it's not present, the default is 
Integer.MAX_VALUE. Reading that value is a bit opaque; it's easy to overlook.

But in looking at the code, it was misleading at best. There's no way to change 
the value of transientCacheSize outside of having it in the solr.xml file in 
the <cores> tag; you can't dynamically change it. None of the create process 
for a core changes the value in the <cores> tag. Nor should it, IMO.

So I don't understand what you're doing to test this. How are you changing the 
value for transientCacheSize?

BTW, all that is entirely separate from not being able to specify loadOnStartup 
and transient when creating cores; I've included that patch in what I'm working 
on now.
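The default behavior described above can be sketched standalone (illustration only, not Solr's actual config-reading code): an absent attribute and the Integer.MAX_VALUE default are indistinguishable once read, which is why only non-default values need to survive persistence.

```java
import java.util.Collections;
import java.util.Map;

// Standalone illustration (not Solr code): reading an optional
// transientCacheSize attribute that defaults to Integer.MAX_VALUE when absent.
public class TransientCacheSizeSketch {
    public static int transientCacheSize(Map<String, String> coresAttrs) {
        String v = coresAttrs.get("transientCacheSize");
        return v == null ? Integer.MAX_VALUE : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        // Absent attribute -> the unbounded default.
        System.out.println(transientCacheSize(Collections.emptyMap()) == Integer.MAX_VALUE); // true
        // Explicit attribute -> that value; this is the case persistence must preserve.
        System.out.println(transientCacheSize(Map.of("transientCacheSize", "128"))); // 128
    }
}
```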

> transientCacheSize is not retained when persisting solr.xml
> ---
>
> Key: SOLR-4632
> URL: https://issues.apache.org/jira/browse/SOLR-4632
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.2
>Reporter: dfdeshom
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 4.3
>
> Attachments: SOLR-4632.txt
>
>
> transientCacheSize is not persisted to solr.xml when creating a new core. I was 
> able to reproduce this using the following solr.xml file:
> {code:xml}
> 
> 
>adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}" 
> hostPort="8983" hostContext="solr">
> 
>   
> 
> {code}
> I created a new core:
> {code} curl 
> "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"{code}
> The resulting solr.xml file has the new core added, but is missing the 
> transientCacheSize attribute.




[jira] [Commented] (SOLR-4361) DIH request parameters with dots throws UnsupportedOperationException

2013-03-26 Thread Chris Eldredge (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614545#comment-13614545
 ] 

Chris Eldredge commented on SOLR-4361:
--

By the way, in case it isn't clear, we define {{server.prefix}} as a system 
property. It defaults to {{""}} in production, but it would be something 
like {{"test."}} in pre-production, producing a complete baseUrl like 
{{http://test.api.fool.com}}.
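A self-contained sketch of how a {{${name:default}}} placeholder resolves against system properties, as described above (illustrative code, not Solr's actual resolver):

```java
public class PlaceholderDemo {
    /** Resolve "${name:default}" against system properties (illustrative). */
    static String resolve(String placeholder) {
        // strip the leading "${" and trailing "}"
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        // an optional ":default" suffix supplies the fallback value
        String name = colon < 0 ? body : body.substring(0, colon);
        String def  = colon < 0 ? ""   : body.substring(colon + 1);
        return System.getProperty(name, def);
    }

    public static void main(String[] args) {
        System.setProperty("server.prefix", "test.");
        // pre-production: property set, prefix is inserted
        System.out.println("http://" + resolve("${server.prefix:}") + "api.fool.com");
        System.clearProperty("server.prefix");
        // production: property unset, the empty default applies
        System.out.println("http://" + resolve("${server.prefix:}") + "api.fool.com");
    }
}
```

The bug in SOLR-4361 is that DIH's variable resolution mishandles the dot in a name like {{server.prefix}}; plain system-property lookup, as above, has no such problem.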


> DIH request parameters with dots throws UnsupportedOperationException
> -
>
> Key: SOLR-4361
> URL: https://issues.apache.org/jira/browse/SOLR-4361
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 4.3, 5.0, 4.2.1
>
> Attachments: SOLR-4361.patch
>
>
> If the user puts placeholders for request parameters and these contain dots, 
> DIH fails.  Current workaround is to either use no dots or use the 4.0 DIH 
> jar.




Re: We have confluence wikis!

2013-03-26 Thread Adrien Grand
Hi Mark,

On Thu, Mar 7, 2013 at 3:31 AM, Mark Miller  wrote:
> I'm going to start like some fresh doc stuff. And with versioning. Though I 
> don't yet know what I'm doing. So help appreciated. This is stuff that 
> Hossman is a genius at.

Is the goal of these wikis to replace the current ones, or are they
intended for different content?

--
Adrien




[JENKINS] Lucene-Solr-4.x-Linux (32bit/jrockit-jdk1.6.0_33-R28.2.4-4.1.0) - Build # 4835 - Still Failing!

2013-03-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4835/
Java: 32bit/jrockit-jdk1.6.0_33-R28.2.4-4.1.0 -XnoOpt

All tests passed

Build Log:
[...truncated 20863 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/test-framework/lib/randomizedtesting-runner-2.0.8.jar
 [licenses] Scanned 95 JAR file(s) for licenses (in 1.34s.), 1 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:381: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:234: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/custom-tasks.xml:43:
 License check failed. Check the logs.

Total time: 87 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jrockit-jdk1.6.0_33-R28.2.4-4.1.0 -XnoOpt
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (LUCENE-4872) BooleanWeight should decide how to execute minNrShouldMatch

2013-03-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614514#comment-13614514
 ] 

Simon Willnauer commented on LUCENE-4872:
-

I often use min_should_match in practice. One example is searching titles or 
names, like POIs or metadata. Take YouTube as an example: you often get queries 
like "queen wembley live 1989" (which was in fact 1986, at least the concert I 
mean here). A pretty good pattern is to use a metric like "80% of the terms 
must match if there are >= 2 query terms", etc.
Another good example is shingles: a query like "queen wembley live 1989" 
produces lots of terms, and "wembley live" might be pretty common, so you want 
to make sure you are not returning results from other bands, but on the other 
hand a pure conjunction is not acceptable here either.

Hope that gives some insight?
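The "80% must match once there are >= 2 terms" heuristic above could be sketched like this (my own illustrative arithmetic, not code from Lucene):

```java
public class MinShouldMatchHeuristic {
    /** Require ~80% of the SHOULD terms to match once there are 2+ terms. */
    static int minShouldMatch(int numTerms) {
        if (numTerms < 2) {
            return numTerms;  // a lone term must simply match
        }
        // round 80% down, but never require fewer than one match
        return Math.max(1, (int) Math.floor(0.8 * numTerms));
    }

    public static void main(String[] args) {
        // "queen wembley live 1989" -> 4 terms -> 3 of the 4 must match
        System.out.println(minShouldMatch(4));  // prints 3
    }
}
```

The computed value would then be applied via BooleanQuery.setMinimumNumberShouldMatch(int) on a query whose clauses are all SHOULD.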

> BooleanWeight should decide how to execute minNrShouldMatch
> ---
>
> Key: LUCENE-4872
> URL: https://issues.apache.org/jira/browse/LUCENE-4872
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/search
>Reporter: Robert Muir
> Fix For: 5.0, 4.3
>
> Attachments: crazyMinShouldMatch.tasks
>
>
> LUCENE-4571 adds a dedicated document-at-time scorer for minNrShouldMatch 
> which can use advance() behind the scenes. 
> In cases where you have some really common terms and some rare ones this can 
> be a huge performance improvement.
> On the other hand BooleanScorer might still be faster in some cases.
> We should think about what the logic should be here: one simple thing to do 
> is to always use the new scorer when minShouldMatch is set: thats where i'm 
> leaning. 
> But maybe we could have a smarter heuristic too, perhaps based on cost()




[jira] [Updated] (SOLR-4620) CloudSolrServer has single point of failure

2013-03-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4620:
--

Fix Version/s: 5.0
   4.3
 Assignee: Mark Miller
 Priority: Minor  (was: Major)
   Issue Type: Improvement  (was: Bug)

> CloudSolrServer has single point of failure
> ---
>
> Key: SOLR-4620
> URL: https://issues.apache.org/jira/browse/SOLR-4620
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Hardik Upadhyay
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Fix For: 4.3, 5.0
>
>
> CloudSolrServer (solrj) has a single point of failure. If the zookeeper node 
> specified in the CloudSolrServer client is down, the solr client will fail.
> (The purpose of zookeeper is to avoid such failures and to provide high 
> availability.) This seems to be a valid bug, as it introduces a single point 
> of failure.
> Rather, CloudSolrServer should accept a list of zkHosts and should not fail 
> as long as a single zkHost is up.




[jira] [Commented] (SOLR-4620) CloudSolrServer has single point of failure

2013-03-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614512#comment-13614512
 ] 

Mark Miller commented on SOLR-4620:
---

Yup, we can update the javadocs.

> CloudSolrServer has single point of failure
> ---
>
> Key: SOLR-4620
> URL: https://issues.apache.org/jira/browse/SOLR-4620
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Hardik Upadhyay
>  Labels: features, patch
>
> CloudSolrServer (solrj) has a single point of failure. If the zookeeper node 
> specified in the CloudSolrServer client is down, the solr client will fail.
> (The purpose of zookeeper is to avoid such failures and to provide high 
> availability.) This seems to be a valid bug, as it introduces a single point 
> of failure.
> Rather, CloudSolrServer should accept a list of zkHosts and should not fail 
> as long as a single zkHost is up.




Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Mark Miller
Is that a +1 ? :)

On Tue, Mar 26, 2013 at 4:29 PM, Adrien Grand  wrote:
> smokeTestRelease ran successfully on my machine.
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
- Mark




Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Adrien Grand
smokeTestRelease ran successfully on my machine.

--
Adrien




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 340 - Failure!

2013-03-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/340/
Java: 64bit/jdk1.7.0 -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 21536 lines...]
check-licenses:
 [echo] License check under: 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr
 [licenses] MISSING sha1 checksum file for: 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/randomizedtesting-runner-2.0.8.jar
 [licenses] Scanned 95 JAR file(s) for licenses (in 2.50s.), 1 error(s).

BUILD FAILED
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/build.xml:381: 
The following error occurred while executing this line:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/build.xml:67: The 
following error occurred while executing this line:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build.xml:234:
 The following error occurred while executing this line:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/custom-tasks.xml:43:
 License check failed. Check the logs.

Total time: 94 minutes 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 64bit/jdk1.7.0 -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-4361) DIH request parameters with dots throws UnsupportedOperationException

2013-03-26 Thread Chris Eldredge (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614485#comment-13614485
 ] 

Chris Eldredge commented on SOLR-4361:
--

Snippet of our configuration that stopped working in 4.2:

{code:title=solrconfig.xml|borderStyle=solid}
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="server.prefix">${server.prefix:}</str>
  </lst>
</requestHandler>
{code}

{code:title=data-config.xml|borderStyle=solid}
<dataSource type="URLDataSource" 
baseUrl="http://${dataimporter.request.server.prefix}api.fool.com" />
{code}

Changing server.prefix to server-prefix makes it work again.


> DIH request parameters with dots throws UnsupportedOperationException
> -
>
> Key: SOLR-4361
> URL: https://issues.apache.org/jira/browse/SOLR-4361
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 4.3, 5.0, 4.2.1
>
> Attachments: SOLR-4361.patch
>
>
> If the user puts placeholders for request parameters and these contain dots, 
> DIH fails.  Current workaround is to either use no dots or use the 4.0 DIH 
> jar.




[jira] [Commented] (LUCENE-4872) BooleanWeight should decide how to execute minNrShouldMatch

2013-03-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614475#comment-13614475
 ] 

Michael McCandless commented on LUCENE-4872:


bq. What about your own great work 
(http://blog.mikemccandless.com/2013/02/drill-sideways-faceting-with-lucene.html)
 as a use-case to start with?

Thanks, Stefan :)  That's sort of a specialized (but very useful) use case I 
think ... and the minShouldMatch is always N-1.

bq. Maybe some consulting committers can also share some insight on how this is 
used in the wild.

+1, that'd be great to know!

> BooleanWeight should decide how to execute minNrShouldMatch
> ---
>
> Key: LUCENE-4872
> URL: https://issues.apache.org/jira/browse/LUCENE-4872
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/search
>Reporter: Robert Muir
> Fix For: 5.0, 4.3
>
> Attachments: crazyMinShouldMatch.tasks
>
>
> LUCENE-4571 adds a dedicated document-at-time scorer for minNrShouldMatch 
> which can use advance() behind the scenes. 
> In cases where you have some really common terms and some rare ones this can 
> be a huge performance improvement.
> On the other hand BooleanScorer might still be faster in some cases.
> We should think about what the logic should be here: one simple thing to do 
> is to always use the new scorer when minShouldMatch is set: thats where i'm 
> leaning. 
> But maybe we could have a smarter heuristic too, perhaps based on cost()




[jira] [Updated] (SOLR-4487) SolrException usage in solrj client code can't handle all possible http error codes returned by servers -- example "413 Request Entity Too Large"

2013-03-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4487:
---

Assignee: Hoss Man
 Summary: SolrException usage in solrj client code can't handle all 
possible http error codes returned by servers -- example "413 Request Entity 
Too Large"  (was: Add "413 Request Entity Too Large" to SolrException.ErrorCode)

Once upon a time (before SolrJ even existed), SolrException's constructor took 
in an arbitrary numeric code, and it was being used inconsistently, so the 
ErrorCode enum was defined to help ensure that when errors were thrown by solr, 
they were thrown using a consistent, finite subset of http status codes that 
could be propagated to the end user.

But once solrj was added, and we started using SolrException in clients to wrap 
any and all errors returned by Solr via HTTP, I think it's a mistake to 
continue limiting SolrException to this enumeration.  

Solr as a server should still have a finite, limited list of error codes that 
it throws, but if the SolrException is going to be used in the client, then it 
needs to be able to handle all of the possible status codes that might be 
returned by any arbitrary http server (or proxy) that the client is talking to.

The really ironic thing is that SolrException still tracks & exposes the status 
code as an arbitrary int (via the code() method) -- it's only the constructor 
that limits you to the ErrorCode enum.

So I propose we re-add a constructor to SolrException that accepts an arbitrary 
int error code, and start using it in client code like HttpSolrServer, where 
we are building up an exception from an arbitrary http server response.

The addition is trivial, but we should obviously add some javadocs explaining 
when/why to use each constructor...

{noformat}
  public SolrException(int code, String msg, Throwable th) {
super(msg, th);
this.code = code;
  }
{noformat}

...any objections?

alternative suggestion: add a package protected (or private?) subclass of 
SolrException to org.apache.solr.client.impl (or maybe even directly in 
HttpSolrServer) and put this constructor there.  

I actually think I kind of like this alternative idea better, because it would 
help mitigate the risk of people using the int constructor with bogus error 
code values in other solr code.
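The alternative could look roughly like this. This is a hypothetical sketch: the subclass name is my invention, and SolrException is stubbed here with just enough of its shape to compile standalone (the real class lives in Solr):

```java
// Minimal stand-in for org.apache.solr.common.SolrException, just enough
// to illustrate the proposal; the real class is not reproduced here.
class SolrException extends RuntimeException {
    private final int code;
    protected SolrException(int code, String msg, Throwable th) {
        super(msg, th);
        this.code = code;
    }
    public int code() { return code; }
}

// The alternative: a package-protected subclass kept next to the HTTP client,
// so only client code can build exceptions from arbitrary status codes and
// server-side code stays confined to the ErrorCode enum.
class RemoteSolrException extends SolrException {
    RemoteSolrException(int code, String msg, Throwable th) {
        super(code, msg, th);
    }
}

public class ErrorCodeDemo {
    public static void main(String[] args) {
        SolrException e =
                new RemoteSolrException(413, "Request Entity Too Large", null);
        System.out.println(e.code());  // prints 413
    }
}
```

Because the subclass has default (package) visibility, code outside the client package can still only construct SolrException through the enum-based path.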

> SolrException usage in solrj client code can't handle all possible http error 
> codes returned by servers -- example "413 Request Entity Too Large"
> -
>
> Key: SOLR-4487
> URL: https://issues.apache.org/jira/browse/SOLR-4487
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.0
>Reporter: Alexander Dietrich
>Assignee: Hoss Man
>
> Solr responds to excessively large queries with a 413 status code, but 
> HttpSolrServer.request() loses this information when it tries to look up the 
> code in SolrException.ErrorCode, resulting in a status code 0 in the thrown 
> exception.
> Being able to see this status code would be helpful.




[jira] [Reopened] (SOLR-4361) DIH request parameters with dots throws UnsupportedOperationException

2013-03-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer reopened SOLR-4361:
--


Re-opening to investigate the issue reported by Chris Eldredge.  

Chris, can you provide (at least) the line in your data-config.xml that has the 
variable that doesn't resolve, and also the URL you're using (or the section 
from solrconfig.xml that has the variable in "defaults")?

> DIH request parameters with dots throws UnsupportedOperationException
> -
>
> Key: SOLR-4361
> URL: https://issues.apache.org/jira/browse/SOLR-4361
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 4.3, 5.0, 4.2.1
>
> Attachments: SOLR-4361.patch
>
>
> If the user puts placeholders for request parameters and these contain dots, 
> DIH fails.  Current workaround is to either use no dots or use the 4.0 DIH 
> jar.




[jira] [Commented] (SOLR-4361) DIH request parameters with dots throws UnsupportedOperationException

2013-03-26 Thread Chris Eldredge (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614464#comment-13614464
 ] 

Chris Eldredge commented on SOLR-4361:
--

I don't have permission to reopen the issue, but I just confirmed that 
URLDataSource does not correctly replace variables that contain dots in its 
baseUrl. However, it substitutes an empty string instead of throwing 
UnsupportedOperationException.

Removing the dots still works around the issue.

I tested against 
https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_4_2@1460810


> DIH request parameters with dots throws UnsupportedOperationException
> -
>
> Key: SOLR-4361
> URL: https://issues.apache.org/jira/browse/SOLR-4361
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.1
>Reporter: James Dyer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 4.3, 5.0, 4.2.1
>
> Attachments: SOLR-4361.patch
>
>
> If the user puts placeholders for request parameters and these contain dots, 
> DIH fails.  Current workaround is to either use no dots or use the 4.0 DIH 
> jar.




Re: The JIRA commit tag bot.

2013-03-26 Thread Shawn Heisey

On 3/25/2013 1:15 PM, Mark Miller wrote:

> So the bot flooded the list on Friday. It was enough mail to turn me off of the 
> whole thing.
> 
> With some time gone by, I'm ready to start looking into bringing JIRA tags back 
> and what other options I have in terms of how to approach it as well as looking 
> into more limitations to prevent any bad behavior.
> 
> It will probably be a little while before I'm comfortable depending on the 
> solution chosen, but I will make sure we have some form of JIRA tagging again 
> before long.


Would it be possible for your app to track the last revision number it 
tagged, and if it somehow comes across a smaller revision number, drop a 
note somewhere you'll be sure to see it, and don't send the email?


I don't know how the bot works.  If it doesn't normally get the revision 
numbers in order, an alternate approach might be required:


Have the bot keep track of the last few thousand revision numbers that 
it used for tagging, and if one comes up again, note it and don't send 
the email.  If more than a few thousand can be stored compactly and 
checked quickly, make it more.  An additional check: if the revision is 
more than X hours/days old, don't send the email.  I don't know what the 
right value for X is.
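The two checks above (bounded memory of recently tagged revisions, plus an age cutoff) could be sketched like this. The capacity and cutoff values are arbitrary illustration choices, not anything from the actual bot:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagDeduper {
    private static final int MAX_TRACKED = 5000;                   // "a few thousand"
    private static final long MAX_AGE_MS = 7L * 24 * 60 * 60 * 1000; // X = 7 days

    // LinkedHashMap with removeEldestEntry gives a compact, bounded history
    // of revision -> time first seen, evicting the oldest entries.
    private final Map<Long, Long> seen = new LinkedHashMap<Long, Long>() {
        @Override protected boolean removeEldestEntry(Map.Entry<Long, Long> eldest) {
            return size() > MAX_TRACKED;
        }
    };

    /** Returns true only if the bot should send mail for this revision. */
    synchronized boolean shouldSend(long revision, long commitTimeMs, long nowMs) {
        if (seen.containsKey(revision)) return false;         // already tagged: skip
        if (nowMs - commitTimeMs > MAX_AGE_MS) return false;  // too old: skip
        seen.put(revision, nowMs);
        return true;
    }

    public static void main(String[] args) {
        TagDeduper d = new TagDeduper();
        long now = System.currentTimeMillis();
        System.out.println(d.shouldSend(1460810L, now, now));  // true
        System.out.println(d.shouldSend(1460810L, now, now));  // false (repeat)
    }
}
```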


Thanks,
Shawn





Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Simon Willnauer
Moved ES to the new RC and ran the smoke tester.

+1 - thanks mark

On Tue, Mar 26, 2013 at 7:44 PM, Michael McCandless
 wrote:
> +1
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Tue, Mar 26, 2013 at 9:25 AM, Mark Miller  wrote:
>> http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810_2/
>>
>> Thanks for voting!
>>
>> Smoke tester passes for me,
>>
>> +1.
>>
>> --
>> - Mark
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Updated] (LUCENE-4885) each FacetResult should return the facet equivalent of totalHits

2013-03-26 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4885:
---

Fix Version/s: 4.3
   5.0

> each FacetResult should return the facet equivalent of totalHits
> 
>
> Key: LUCENE-4885
> URL: https://issues.apache.org/jira/browse/LUCENE-4885
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, 4.3
>
>
> This is cheap to compute, since the TopKFRH already must visit all 
> non-zero-count ords under the FacetRequest.categoryPath.
> This can be useful to a front end, eg to know whether to present a "More..." 
> under that dimension or not, whether to use a suggester like LinkedIn's facet 
> UI, etc.




[jira] [Created] (LUCENE-4885) each FacetResult should return the facet equivalent of totalHits

2013-03-26 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4885:
--

 Summary: each FacetResult should return the facet equivalent of 
totalHits
 Key: LUCENE-4885
 URL: https://issues.apache.org/jira/browse/LUCENE-4885
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless


This is cheap to compute, since the TopKFRH already must visit all 
non-zero-count ords under the FacetRequest.categoryPath.

This can be useful to a front end, eg to know whether to present a "More..." 
under that dimension or not, whether to use a suggester like LinkedIn's facet 
UI, etc.




[jira] [Commented] (SOLR-4620) CloudSolrServer has single point of failure

2013-03-26 Thread Hardik Upadhyay (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614451#comment-13614451
 ] 

Hardik Upadhyay commented on SOLR-4620:
---

Mark, the CloudSolrServer javadocs say this:
"
public CloudSolrServer(String zkHost)
throws MalformedURLException

Parameters:
zkHost - The client endpoint of the zookeeper quorum containing the cloud 
state, in the form HOST:PORT.
"
The word "endpoint" here creates the impression of a single client endpoint 
only, which leads one to believe it accepts only a single host in the ensemble.
You made things clear. If possible, and if I am not wrong, can we please update 
the javadocs to mention that a comma-separated HOST:PORT list for the ensemble 
is accepted? Humble request!
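For what it's worth, the accepted format is the standard ZooKeeper connect string, i.e. a comma-separated HOST:PORT list. A small sketch (host names are illustrative; the CloudSolrServer call is shown in a comment since it needs SolrJ on the classpath):

```java
public class ZkEnsembleDemo {
    /** Join ZooKeeper HOST:PORT entries into a single connect string. */
    static String buildZkHost(String[] hosts) {
        StringBuilder sb = new StringBuilder();
        for (String h : hosts) {
            if (sb.length() > 0) sb.append(',');
            sb.append(h);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String zkHost = buildZkHost(new String[] {
                "zk1.example.com:2181",
                "zk2.example.com:2181",
                "zk3.example.com:2181"});
        System.out.println(zkHost);
        // With SolrJ on the classpath, this string is passed straight in:
        //   CloudSolrServer server = new CloudSolrServer(zkHost);
    }
}
```

With all three hosts listed, the client survives the loss of any single ZooKeeper node as long as a quorum remains.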

> CloudSolrServer has single point of failure
> ---
>
> Key: SOLR-4620
> URL: https://issues.apache.org/jira/browse/SOLR-4620
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Hardik Upadhyay
>  Labels: features, patch
>
> CloudSolrServer (solrj) has a single point of failure. If the zookeeper node 
> specified in the CloudSolrServer client is down, the solr client will fail.
> (The purpose of zookeeper is to avoid such failures and to provide high 
> availability.) This seems to be a valid bug, as it introduces a single point 
> of failure.
> Rather, CloudSolrServer should accept a list of zkHosts and should not fail 
> as long as a single zkHost is up.




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_15) - Build # 4881 - Still Failing!

2013-03-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/4881/
Java: 32bit/jdk1.7.0_15 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 21216 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/test-framework/lib/randomizedtesting-runner-2.0.8.jar
 [licenses] Scanned 95 JAR file(s) for licenses (in 0.66s.), 1 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:375: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:234: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:43:
 License check failed. Check the logs.

Total time: 46 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_15 -client -XX:+UseG1GC
Email was triggered for: Failure
Sending email for trigger: Failure




Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Michael McCandless
+1

Mike McCandless

http://blog.mikemccandless.com

On Tue, Mar 26, 2013 at 9:25 AM, Mark Miller  wrote:
> http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810_2/
>
> Thanks for voting!
>
> Smoke tester passes for me,
>
> +1.
>
> --
> - Mark
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Commented] (SOLR-4586) Increase default maxBooleanClauses

2013-03-26 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614407#comment-13614407
 ] 

Shawn Heisey commented on SOLR-4586:


Here's my proposal:

The current 4x patch goes in largely as it is now.  Changes:
1) Remove the solr.xml additions.
2) Log a deprecation warning when maxBooleanClauses is found in solrconfig.xml, 
but honor it.
2a) Should we make it possible to go lower than Lucene's default?  The current 
patch won't.
3) Make some tests to verify behavior.  I'm willing to do this, but I will need 
a little guidance.

With the current POST buffer default size of 2MiB, you could include just under 
2^20 boolean clauses, if each clause were only 1 byte, a highly contrived and 
illogical query.  For that reason, I think that 2^20 is a reasonable default 
value.  Also, I think that performance would become intolerable long before you 
reached that many clauses, and I think that will continue to be the case for 
the foreseeable future.

For 5.0, we remove the maxBooleanClauses config entirely.  If someone really 
did have a viable use case for more than 2^20 clauses, they would very likely 
have the expertise required to modify Solr code.

Would it be a good idea to file another issue to have Solr use a better 
solution than BooleanQuery when possible?
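Item 2 of the proposal (honor maxBooleanClauses from solrconfig.xml but log a deprecation warning) could look roughly like this. This is my own sketch, not the actual patch; names and the "negative means absent" convention are illustrative:

```java
import java.util.logging.Logger;

public class MaxClausesConfig {
    private static final Logger log = Logger.getLogger("solrconfig");
    static final int DEFAULT_MAX = 1 << 20;  // 2^20 = 1048576, as proposed

    /** configured < 0 means maxBooleanClauses was absent from solrconfig.xml. */
    static int effectiveMaxClauses(int configured) {
        if (configured < 0) {
            return DEFAULT_MAX;
        }
        // present: still honored, but flagged as deprecated
        log.warning("maxBooleanClauses in solrconfig.xml is deprecated; "
                + "the default is now " + DEFAULT_MAX);
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxClauses(-1));   // prints 1048576
        System.out.println(effectiveMaxClauses(4096)); // prints 4096, with a warning
    }
}
```

The resulting value would ultimately feed Lucene's static BooleanQuery.setMaxClauseCount(int), which is the knob the config option has historically mapped to.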


> Increase default maxBooleanClauses
> --
>
> Key: SOLR-4586
> URL: https://issues.apache.org/jira/browse/SOLR-4586
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.2
> Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
>Reporter: Shawn Heisey
> Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
> SOLR-4586.patch
>
>
> In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
> someone asking a question about queries.  Mark Miller told me that 
> maxBooleanClauses no longer applies, that the limitation was removed from 
> Lucene sometime in the 3.x series.  The config still shows up in the example 
> even in the just-released 4.2.
> Checking through the source code, I found that the config option is parsed 
> and the value stored in objects, but does not actually seem to be used by 
> anything.  I removed every trace of it that I could find, and all tests still 
> pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4872) BooleanWeight should decide how to execute minNrShouldMatch

2013-03-26 Thread Stefan Pohl (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614371#comment-13614371
 ] 

Stefan Pohl commented on LUCENE-4872:
-

{quote}
I really don't know what the typical/common use cases are for 
minShouldMatch.
{quote}
What about your own great work 
(http://blog.mikemccandless.com/2013/02/drill-sideways-faceting-with-lucene.html)
 as a use-case to start with?
Maybe some consulting committers can also share some insight on how this is 
used in the wild.

> BooleanWeight should decide how to execute minNrShouldMatch
> ---
>
> Key: LUCENE-4872
> URL: https://issues.apache.org/jira/browse/LUCENE-4872
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/search
>Reporter: Robert Muir
> Fix For: 5.0, 4.3
>
> Attachments: crazyMinShouldMatch.tasks
>
>
> LUCENE-4571 adds a dedicated document-at-time scorer for minNrShouldMatch 
> which can use advance() behind the scenes. 
> In cases where you have some really common terms and some rare ones this can 
> be a huge performance improvement.
> On the other hand BooleanScorer might still be faster in some cases.
> We should think about what the logic should be here: one simple thing to do 
> is to always use the new scorer when minShouldMatch is set: that's where I'm 
> leaning. 
> But maybe we could have a smarter heuristic too, perhaps based on cost()
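Whichever scorer wins the heuristic, the contract being implemented is simple to state. This sketch shows only the matching semantics, not Lucene's actual scorer code (LUCENE-4571's scorer honors the same contract while using advance() to leapfrog documents that cannot reach the threshold):

```java
import java.util.List;
import java.util.Set;

public class MinShouldMatchSketch {
    // A document matches only if at least minShouldMatch of the SHOULD clauses match it.
    static boolean matches(Set<String> docTerms, List<String> shouldClauses, int minShouldMatch) {
        int hits = 0;
        for (String clause : shouldClauses) {
            if (docTerms.contains(clause)) {
                hits++;
            }
        }
        return hits >= minShouldMatch;
    }

    public static void main(String[] args) {
        Set<String> doc = Set.of("fast", "cheap");
        List<String> clauses = List.of("fast", "cheap", "good");
        System.out.println(matches(doc, clauses, 2)); // true
        System.out.println(matches(doc, clauses, 3)); // false
    }
}
```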




Admin UI pluggability

2013-03-26 Thread Ryan Ernst
I would like to add some custom pages to the core menu for my setup,
replace some existing (like ping) and also remove some others (like data
import).  From what I can tell, the existing hooks are very limited (like
admin extra that appears in overview for the core).  I've searched through
JIRA for any issues regarding this, but can't find anything.  Any thoughts
on how this could be done?  Can we make the admin UI more pluggable?


[jira] [Commented] (LUCENE-4884) deleteAll() does not remove all TaxonomyWriter files

2013-03-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614350#comment-13614350
 ] 

Shai Erera commented on LUCENE-4884:


bq. I think for this issue we should just add DirectoryTaxonomyWriter.deleteAll

The taxonomy index is a special index with specific structure (e.g. it has the 
ROOT document, at doc=0). DTW.deleteAll() makes no sense, even if we try to 
implement it properly (by e.g. adding back doc=0). Rather, either open a DTW 
with OpenMode.CREATE, or do something like this:

{code}
Directory emptyTaxoDir = new RAMDirectory();
new DirectoryTaxonomyWriter(emptyTaxoDir).close(); // writes an empty taxonomy (just the ROOT category)
oldTaxoWriter.replaceTaxonomy(emptyTaxoDir);
{code}

I know that DirTaxoWriter.deleteAll() would have been simpler to the app, but I 
prefer that we don't expose it.

> deleteAll() does not remove all TaxonomyWriter files
> 
>
> Key: LUCENE-4884
> URL: https://issues.apache.org/jira/browse/LUCENE-4884
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
>Reporter: Rob Audenaerde
>Priority: Minor
>
> When calling deleteAll() on the IndexWriter, the documents are removed from 
> the index and from the taxonomy. When investigating what is happening after the 
> deleteAll() on the disk, I see that in the index-directory I end up with just 
> two files:
> Index-directory:
> * segments.gen
> * segments_2
> Taxonomy directory:
> * segments.gen 
> * segments_h 
> BUT also a lot of 'older' files, like 
> * _1_Lucene41_0.tip 
> * _1_Lucene41_0.tim
> etc. 
> It seems these files are never deleted. If you index a lot and call deleteAll 
> a lot, it will fill up your disk.




Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Erik Hatcher
+1

On Mar 26, 2013, at 9:25, Mark Miller  wrote:

> http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810_2/
> 
> Thanks for voting!
> 
> Smoke tester passes for me,
> 
> +1.
> 
> -- 
> - Mark
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 




Re: Opening up FieldCacheImpl

2013-03-26 Thread Alan Woodward
Separately from this, I'm playing with an ExternalDocValuesFilterReader that 
takes a list of abstract ExternalDocValuesProviders, as a kind of 
generalisation of FileFloatSource.  It's a bit rough at the moment, and it's 
for a Lucene application rather than for Solr, but it could work as a 
replacement for ExternalFileField with appropriate factories - I'll open a JIRA 
and put up a patch once it does anything useful.

Alan Woodward
www.flax.co.uk


On 26 Mar 2013, at 10:02, Alan Woodward wrote:

> I've opened https://issues.apache.org/jira/browse/LUCENE-4883 as a start.
> 
> Alan Woodward
> www.flax.co.uk
> 
> 
> On 26 Mar 2013, at 00:51, Robert Muir wrote:
> 
>> I don't think codec would be where you'd plugin for a filterreader that 
>> exposes external data as fake fields. That's because its all about what 
>> encoding indexwriter uses to write. I think solr has an indexreaderfactory 
>> if you want to e.g. wrap readers with filteratomicreaders.
>> 
>> On Mar 25, 2013 2:30 PM, "David Smiley (@MITRE.org)"  
>> wrote:
>> Interesting conversation. So if hypothetically Solr's FileFloatSource /
>> ExternalFileField didn't yet exist and we were instead talking about how to
>> implement such a thing on the latest 4.x code, then how basically might it
>> work?  I can see how to implement a Solr CodecFactory (a SchemaAware one),
>> then a DocValuesProducer.  The CodecFactory implements
>> NamedInitializedPlugin and can thus get its config info that way.  That's
>> one approach.  But it's not clear to me where one would wrap AtomicReader
>> with FilterAtomicReader to use that approach.
>> 
>> ~ David
>> 
>> 
>> Robert Muir wrote
>> > On Sat, Mar 23, 2013 at 7:25 AM, Alan Woodward <
>> 
>> > alan@.co
>> 
>> > > wrote:
>> >>> I think instead FieldCache should actually be completely package
>> >>> private and hidden behind a UninvertingFilterReader and accessible via
>> >>> the existing AtomicReader docValues methods.
>> >>
>> >> Aha, right, because SegmentCoreReaders already caches XXXDocValues
>> >> instances (without using WeakReferences or anything like that).
>> >>
>> >> I should explain my motivation here.  I want to store various scoring
>> >> factors externally to Lucene, but make them available via a ValueSource
>> >> to CustomScoreQueries - essentially a generalisation of FileFloatSource
>> >> to any external data source.  FFS already has a bunch of code copied from
>> >> FieldCache, which was why my first thought was to open it up a bit and
>> >> extend it, rather than copy and paste again.
>> >>
>> >> But it sounds as though a nicer way of doing this would be to create a
>> >> new DocValuesProducer that talks to the external data source, and then
>> >> access it through the AR docValues methods.  Does that sound plausible?
>> >> Is SPI going to make it difficult to pass parameters to a custom
>> >> DVProducer (data location, host/port, other DV fields to use as primary
>> >> key lookups, etc)?
>> >>
>> >
>> > its not involved if you implement via FilterAtomicReader. its only
>> > involved for reading things that are actually written into the index.
>> >
>> > -
>> > To unsubscribe, e-mail:
>> 
>> > dev-unsubscribe@.apache
>> 
>> > For additional commands, e-mail:
>> 
>> > dev-help@.apache
>> 
>> 
>> 
>> 
>> 
>> -
>>  Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 



Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Chris Hostetter

: http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810_2/

+1 VOTE for the files with the following sha1 checksums to be released as 
Lucene/Solr 4.2.1...

d5694d06dd2035949c7487f83772afc89afd3372 *lucene-4.2.1-src.tgz
ae9c8e3d0508aa1445acb6dd048bf7d6c706e882 *lucene-4.2.1.tgz
d5c4c357ac9ada58367b6ca3661456913fc89d15 *lucene-4.2.1.zip
e56c180896f9206212a417fc5c74cbf56341a4dc *solr-4.2.1-src.tgz
de73b88fe584c99dae85d15e4eedaf4c6bd3a946 *solr-4.2.1.tgz
63e635c28cf1d1780bec5497dd5879bba33b4e41 *solr-4.2.1.zip



-Hoss




[jira] [Commented] (LUCENE-4883) Hide FieldCache behind an UninvertingFilterReader

2013-03-26 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614316#comment-13614316
 ] 

Alan Woodward commented on LUCENE-4883:
---

bq. I think it needs to take this information up-front: a mapping of field 
names from the underlying fieldinfos to docvalues types

I was wondering about how to do this.  We could add an optional 
Map<String,DocValuesType> parameter to the UFR constructor - if it's absent, then you can 
uninvert any field you like, at the risk of fieldcache-insanity.  Otherwise 
you're restricted to just the fields in the map, but you know you're not going 
to uninvert the wrong type.  Applications like Solr or ES can manage the types 
outside of UFR using their own field type information.
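A rough shape of that map-guarded constructor idea (hypothetical names, not the actual Lucene API; the DocValues type is modeled as a plain String here):

```java
import java.util.Map;

public class UninvertingSketch {
    // The wrapper is handed an up-front mapping of field name -> docvalues type
    // and refuses to uninvert anything not declared in it.
    private final Map<String, String> allowedFields;

    UninvertingSketch(Map<String, String> allowedFields) {
        this.allowedFields = allowedFields;
    }

    String docValuesTypeFor(String field) {
        String type = allowedFields.get(field);
        if (type == null) {
            throw new IllegalArgumentException(
                "field '" + field + "' was not declared for uninversion");
        }
        return type;
    }

    public static void main(String[] args) {
        UninvertingSketch reader = new UninvertingSketch(Map.of("price", "NUMERIC"));
        System.out.println(reader.docValuesTypeFor("price")); // NUMERIC
    }
}
```

An application like Solr or ES would populate the map from its own schema, so a field can never be uninverted as the wrong type.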

bq. How can we expose "missing" for NumericDocValues

I was going to move the FieldCache#getDocsWithField() method to AtomicReader, 
but I see that this doesn't actually work in the same way with DocValues at the 
moment.

Maybe for the moment we should just get FieldCache moved into UFR and worry 
about passing CheckIndex in another issue?  Unless you think that we'll end up 
having to make major changes if we don't build this in from the beginning.  I'm 
new to a lot of this part of the codebase, so all advice is very welcome here 
:-)

> Hide FieldCache behind an UninvertingFilterReader
> -
>
> Key: LUCENE-4883
> URL: https://issues.apache.org/jira/browse/LUCENE-4883
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-4883.patch
>
>
> From a discussion on the mailing list:
> {{
> rmuir:
> I think instead FieldCache should actually be completely package
> private and hidden behind a UninvertingFilterReader and accessible via
> the existing AtomicReader docValues methods.
> }}




[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-26 Thread dfdeshom (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614303#comment-13614303
 ] 

dfdeshom commented on SOLR-4632:


> Looking more closely, transientCacheSize is persisted when its value is 
> other than Integer.MAX_VALUE. So I don't think this is a problem after all.

I have been able to reproduce the bug today using the example I gave 
previously: transientCacheSize is definitely not persisted regardless of its 
value. I am using the trunk branch of lucene-solr here: 
https://github.com/apache/lucene-solr/commits/trunk

Were you not able to reproduce this bug using the example I gave above?
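The behavior Erick describes, writing the attribute only when it differs from its Integer.MAX_VALUE default, would look roughly like this (an illustrative sketch, not the actual Solr persistence code):

```java
public class PersistSketch {
    // If the persistence code compares against the default before writing, a
    // value equal to the default silently vanishes from the rewritten solr.xml.
    static String persistCoresAttrs(int transientCacheSize) {
        StringBuilder attrs = new StringBuilder("adminPath=\"/admin/cores\"");
        if (transientCacheSize != Integer.MAX_VALUE) { // skipped when left at the default
            attrs.append(" transientCacheSize=\"").append(transientCacheSize).append('"');
        }
        return attrs.toString();
    }

    public static void main(String[] args) {
        System.out.println(persistCoresAttrs(4));                  // attribute kept
        System.out.println(persistCoresAttrs(Integer.MAX_VALUE));  // attribute dropped
    }
}
```

The report above says the attribute is dropped regardless of its value, so the actual bug is evidently not limited to this default-elision pattern.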

> transientCacheSize is not retained when persisting solr.xml
> ---
>
> Key: SOLR-4632
> URL: https://issues.apache.org/jira/browse/SOLR-4632
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.2
>Reporter: dfdeshom
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 4.3
>
> Attachments: SOLR-4632.txt
>
>
> transientCacheSize is not persisted to solr.xml when creating a new core. I was 
> able to reproduce this using the following solr.xml file:
> {code:xml}
> <solr persistent="true">
>   <cores transientCacheSize="…" adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}" 
>       hostPort="8983" hostContext="solr">
>   </cores>
> </solr>
> {code}
> I created a new core:
> {code} curl 
> "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"{code}
> The resulting solr.xml file has the new core added, but is missing the 
> transientCacheSize attribute.




Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Robert Muir
+1

On Tue, Mar 26, 2013 at 6:25 AM, Mark Miller  wrote:
> http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810_2/
>
> Thanks for voting!
>
> Smoke tester passes for me,
>
> +1.
>
> --
> - Mark
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Commented] (LUCENE-4884) deleteAll() does not remove all TaxonomyWriter files

2013-03-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614281#comment-13614281
 ] 

Michael McCandless commented on LUCENE-4884:


bq. I expect the deleteAll() on the IndexWriter would take care of cleaning the 
TaxonomyWriter,

Hmm that won't happen: your primary IndexWriter, and the TaxonomyWriter, are 
independent of one another (they don't know about each other, at least 
currently).  DirectoryTaxonomyWriter does have a private IndexWriter that it 
uses ... but it doesn't expose this to you.

bq. just as addDocument() needs only to be called on the IndexWriter and takes 
care of filling the Facets in the TaxonomyWriter.

Actually, it's FacetFields.addFields that takes care of interacting with the 
TaxonomyWriter (adding new label/ords to it).

I think for this issue we should just add DirectoryTaxonomyWriter.deleteAll.  
But seems like lowish priority since the workaround should work ...

> deleteAll() does not remove all TaxonomyWriter files
> 
>
> Key: LUCENE-4884
> URL: https://issues.apache.org/jira/browse/LUCENE-4884
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
>Reporter: Rob Audenaerde
>Priority: Minor
>
> When calling deleteAll() on the IndexWriter, the documents are removed from 
> the index and from the taxonomy. When investigating what is happening after the 
> deleteAll() on the disk, I see that in the index-directory I end up with just 
> two files:
> Index-directory:
> * segments.gen
> * segments_2
> Taxonomy directory:
> * segments.gen 
> * segments_h 
> BUT also a lot of 'older' files, like 
> * _1_Lucene41_0.tip 
> * _1_Lucene41_0.tim
> etc. 
> It seems these files are never deleted. If you index a lot and call deleteAll 
> a lot, it will fill up your disk.




[jira] [Commented] (SOLR-3755) shard splitting

2013-03-26 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614251#comment-13614251
 ] 

Anshum Gupta commented on SOLR-3755:


There are more changes on the branch, including a ChaosMonkey test for the 
feature. Any feedback on the design/strategy would be good.

Also, I'm working on adding some more documentation on the general strategy 
somewhere in the code/package and improving the javadoc for the same as well.

> shard splitting
> ---
>
> Key: SOLR-3755
> URL: https://issues.apache.org/jira/browse/SOLR-3755
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Yonik Seeley
> Attachments: SOLR-3755-combined.patch, 
> SOLR-3755-combinedWithReplication.patch, SOLR-3755-CoreAdmin.patch, 
> SOLR-3755.patch, SOLR-3755.patch, SOLR-3755.patch, SOLR-3755.patch, 
> SOLR-3755.patch, SOLR-3755.patch, SOLR-3755.patch, 
> SOLR-3755-testSplitter.patch, SOLR-3755-testSplitter.patch
>
>
> We can currently easily add replicas to handle increases in query volume, but 
> we should also add a way to add additional shards dynamically by splitting 
> existing shards.




[jira] [Commented] (SOLR-4645) Missing Adobe XMP library can abort DataImportHandler process

2013-03-26 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614160#comment-13614160
 ] 

Alexandre Rafalovitch commented on SOLR-4645:
-

Mar 26, 2013 11:58:44 AM org.apache.solr.common.SolrException log
SEVERE: Full Import failed:java.lang.RuntimeException: 
java.lang.RuntimeException: 
org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.lang.NoClassDefFoundError: com/adobe/xmp/XMPException
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:266)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:422)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:487)
at 
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468)
Caused by: java.lang.RuntimeException: 
org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.lang.NoClassDefFoundError: com/adobe/xmp/XMPException
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:406)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:319)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:227)
... 3 more
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.lang.NoClassDefFoundError: com/adobe/xmp/XMPException
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:535)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404)
... 5 more
Caused by: java.lang.NoClassDefFoundError: com/adobe/xmp/XMPException
at 
com.drew.imaging.jpeg.JpegMetadataReader.extractMetadataFromJpegSegmentReader(JpegMetadataReader.java:112)
at 
com.drew.imaging.jpeg.JpegMetadataReader.readMetadata(JpegMetadataReader.java:71)
at 
org.apache.tika.parser.image.ImageMetadataExtractor.parseJpeg(ImageMetadataExtractor.java:91)
at org.apache.tika.parser.jpeg.JpegParser.parse(JpegParser.java:56)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.tika.parser.DelegatingParser.parse(DelegatingParser.java:72)
at 
org.apache.tika.extractor.ParsingEmbeddedDocumentExtractor.parseEmbedded(ParsingEmbeddedDocumentExtractor.java:102)
at 
org.apache.tika.parser.microsoft.AbstractPOIFSExtractor.handleEmbeddedResource(AbstractPOIFSExtractor.java:104)
at 
org.apache.tika.parser.microsoft.WordExtractor.handlePictureCharacterRun(WordExtractor.java:427)
at 
org.apache.tika.parser.microsoft.WordExtractor.handleParagraph(WordExtractor.java:228)
at 
org.apache.tika.parser.microsoft.WordExtractor.parse(WordExtractor.java:99)
at 
org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:186)
at 
org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:161)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at org.apache.tika.Tika.parseToString(Tika.java:380)
at 
transformers.FullTextInjectorTransformer.transformRow(FullTextInjectorTransformer.java:175)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.applyTransformer(EntityProcessorWrapper.java:198)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:256)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:465)
... 6 more
Caused by: java.lang.ClassNotFoundException: com.adobe.xmp.XMPException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 29 more

Mar 26, 2013 11:58:44 AM org.apache.solr.update.DirectUpdateHandler2 rollback
INFO: start rollback{}


> Missing Adobe XMP library can abort DataImportHandler process
> -
>
> Key: SOLR-4645
> URL: https://issues.apache.org/jira/browse/SOLR-4645
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
> extraction)

[jira] [Updated] (SOLR-4645) Missing Adobe XMP library can abort DataImportHandler process

2013-03-26 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch updated SOLR-4645:


Description: The Solr distribution is missing the Adobe XMP library ( 
http://www.adobe.com/devnet/xmp.html ). In a particular code path, DIH+Tika tries 
to load XMPException and fails with ClassNotFoundException. The library is present in 
Tika's distribution.

> Missing Adobe XMP library can abort DataImportHandler process
> -
>
> Key: SOLR-4645
> URL: https://issues.apache.org/jira/browse/SOLR-4645
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
> extraction)
>Affects Versions: 4.2
>Reporter: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 5.0
>
>
> The Solr distribution is missing the Adobe XMP library ( 
> http://www.adobe.com/devnet/xmp.html ). In a particular code path, DIH+Tika 
> tries to load XMPException and fails with ClassNotFoundException. The library is 
> present in Tika's distribution.




[jira] [Created] (SOLR-4645) Missing Adobe XMP library can abort DataImportHandler process

2013-03-26 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-4645:
---

 Summary: Missing Adobe XMP library can abort DataImportHandler 
process
 Key: SOLR-4645
 URL: https://issues.apache.org/jira/browse/SOLR-4645
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
extraction)
Affects Versions: 4.2
Reporter: Alexandre Rafalovitch
Priority: Minor
 Fix For: 5.0







[JENKINS] Lucene-Solr-trunk-MacOSX ([[ Exception while replacing ENV. Please report this as a bug. ]] {{ java.lang.NullPointerException }}) - Build # 354 - Still Failing!

2013-03-26 Thread Policeman Jenkins Server

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/354/
Java: [[ Exception while replacing ENV. Please report this as a bug. ]]
{{ java.lang.NullPointerException }}

No tests ran.

Build Log:
[...truncated 29 lines...]
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected 
reader termination
hudson.remoting.RequestAbortedException: 
hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected reader 
termination
at hudson.remoting.Request.call(Request.java:174)
at hudson.remoting.Channel.call(Channel.java:672)
at hudson.FilePath.act(FilePath.java:854)
at hudson.FilePath.act(FilePath.java:838)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:843)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:781)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1364)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:670)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:575)
at hudson.model.Run.execute(Run.java:1575)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:237)
Caused by: hudson.remoting.RequestAbortedException: java.io.IOException: 
Unexpected reader termination
at hudson.remoting.Request.abort(Request.java:299)
at hudson.remoting.Channel.terminate(Channel.java:732)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:76)
Caused by: java.io.IOException: Unexpected reader termination
... 1 more
Caused by: java.lang.OutOfMemoryError: Java heap space






Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #812: POMs out of sync

2013-03-26 Thread Steve Rowe
From the log:

-
-validate-maven-dependencies:
[…]
 [licenses] MISSING sha1 checksum file for: 
/home/hudson/.m2/repository/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.0.8/randomizedtesting-runner-2.0.8.jar
 [licenses] Scanned 2 JAR file(s) for licenses (in 0.01s.), 1 error(s).

BUILD FAILED

[…]
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:534:
 License check failed. Check the logs.
-

This was caused by a missing 
solr/licenses/randomizedtesting-runner-2.0.9.jar.sha1, as well as a stale 
randomizedtesting-runner v2.0.8 version in the grandparent POM's 
<dependencyManagement> section.
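The digest half of that license check is ordinary SHA-1 hex over the jar's bytes, compared to the committed `.sha1` file (a generic sketch using java.security, not the actual checker's code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Sha1Check {
    // Compute the lowercase hex SHA-1 digest of a byte array, the value that a
    // committed solr/licenses/*.jar.sha1 file is expected to contain for its jar.
    static String sha1Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Well-known digest of the string "hello":
        System.out.println(sha1Hex("hello".getBytes(StandardCharsets.UTF_8)));
        // aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
    }
}
```

A MISSING checksum file therefore fails the build even when the jar itself is fine, which is exactly what the log above shows.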

I committed a fix to trunk and branch_4x.

Steve

On Mar 25, 2013, at 10:40 PM, Apache Jenkins Server  
wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/812/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 11720 lines...]





[jira] [Commented] (LUCENE-4880) Difference in offset handling between IndexReader created by MemoryIndex and one created by RAMDirectory

2013-03-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614086#comment-13614086
 ] 

Robert Muir commented on LUCENE-4880:
-

I also think it's stupid you get 0640 as a token by itself in any case. I don't 
agree with the Unicode property of "letter" for this character, as that doesn't 
make sense to me; in my opinion it should be "format". I sure hope there is 
some good reason for this, but to me it's crazy.

> Difference in offset handling between IndexReader created by MemoryIndex and 
> one created by RAMDirectory
> 
>
> Key: LUCENE-4880
> URL: https://issues.apache.org/jira/browse/LUCENE-4880
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
> Environment: Windows 7 (probably irrelevant)
>Reporter: Timothy Allison
> Attachments: MemoryIndexVsRamDirZeroLengthTermTest.java
>
>
> MemoryIndex skips tokens that have length == 0 when building the index; the 
> result is that it does not increment the token offset (nor does it store the 
> position offsets if that option is set) for tokens of length == 0.  A regular 
> index (via, say, RAMDirectory) does not appear to do this.
> When using the ICUFoldingFilter, it is possible to have a term of zero length 
> (the \u0640 character separated by spaces).  If that occurs in a document, 
> the offsets returned at search time differ between the MemoryIndex and a 
> regular index.  
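The divergence is easy to simulate outside Lucene. This toy helper (purely illustrative, not MemoryIndex code) shows how dropping an empty term without consuming a position shifts every later token relative to an indexer that assigns positions unconditionally:

```java
import java.util.ArrayList;
import java.util.List;

public class ZeroLengthTokenSketch {
    // skipEmpty=true models MemoryIndex (empty term dropped, position not advanced);
    // skipEmpty=false models a regular index (every token consumes a position).
    static List<String> positions(List<String> tokens, boolean skipEmpty) {
        List<String> out = new ArrayList<>();
        int pos = 0;
        for (String token : tokens) {
            if (skipEmpty && token.isEmpty()) {
                continue; // dropped without incrementing pos
            }
            if (!token.isEmpty()) {
                out.add(token + "@" + pos);
            }
            pos++;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> tokens = List.of("foo", "", "bar"); // "" = a folded-away \u0640 term
        System.out.println(positions(tokens, true));  // [foo@0, bar@1]
        System.out.println(positions(tokens, false)); // [foo@0, bar@2]
    }
}
```

The same drift applies to stored offsets, which is why search-time offsets differ between the two readers.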




Re: [VOTE] Lucene/Solr 4.2.1 RC2

2013-03-26 Thread Steve Rowe
And now with the correct URL, smoking SUCCESS!

+1 to release RC2

Steve

On Mar 26, 2013, at 9:55 AM, Steve Rowe  wrote:

> Crap, you're right, I did use the old URL.  Sorry for the noise. - Steve
> 
> On Mar 26, 2013, at 9:53 AM, Mark Miller  wrote:
> 
>> Hmm…I think you used the wrong URL perhaps? That's yesterday's date and I 
>> re-spun this morning.
>> 
>>> 4.2.1 1460908 - mark - 2013-03-25 21:05:08
>> 
>> 
>> Also, when I looked at MANIFEST.MF, I see:
>> 
>> Manifest-Version: 1.0
>> Ant-Version: Apache Ant 1.8.2
>> Created-By: 1.6.0_27-b27 (Sun Microsystems Inc.)
>> Extension-Name: org.apache.solr
>> Specification-Title: Apache Solr Search Server: solr-core
>> Specification-Version: 4.2.1.2013.03.26.08.29.00
>> Specification-Vendor: The Apache Software Foundation
>> Implementation-Title: org.apache.solr
>> Implementation-Version: 4.2.1 1461071 - mark - 2013-03-26 08:29:00
>> Implementation-Vendor: The Apache Software Foundation
>> X-Compile-Source-JDK: 1.6
>> X-Compile-Target-JDK: 1.6
>> 
>> 
>> - Mark
>> 
>> On Mar 26, 2013, at 9:40 AM, Steve Rowe  wrote:
>> 
>>> Here's MANIFEST.MF contents:
>>> 
>>> -
>>> Manifest-Version: 1.0
>>> Ant-Version: Apache Ant 1.8.2
>>> Created-By: 1.7.0_15-b20 (Oracle Corporation)
>>> Extension-Name: org.apache.lucene
>>> Specification-Title: Lucene Search Engine: analyzers-common
>>> Specification-Version: 4.2.1
>>> Specification-Vendor: The Apache Software Foundation
>>> Implementation-Title: org.apache.lucene
>>> Implementation-Version: 4.2.1 1460908 - mark - 2013-03-25 21:05:08
>>> Implementation-Vendor: The Apache Software Foundation
>>> X-Compile-Source-JDK: 1.6
>>> X-Compile-Target-JDK: 1.6
>>> -
>>> 
>>> 
>>> On Mar 26, 2013, at 9:37 AM, Steve Rowe  wrote:
>>> 
 Smoke tester (from branch_4x r1461125) says:
 
 RuntimeError: JAR file 
 "/Users/sarowe/temp/smokeTestTmpDir/unpack/lucene-4.2.1/analysis/common/lucene-analyzers-common-4.2.1.jar"
 is missing "Created-By: 1.6" inside its META-INF/MANIFEST.MF
 
 On Mar 26, 2013, at 9:25 AM, Mark Miller  wrote:
 
> http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810_2/
> 
> Thanks for voting!
> 
> Smoke tester passes for me,
> 
> +1.
> 
> -- 
> - Mark
> 
> 
 
>>> 
>>> 
>>> 
>> 
>> 
>> 
> 


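The smoke tester's manifest check discussed in the thread above can be approximated locally. The following is my own sketch using only the Python standard library (not the actual smokeTestRelease.py logic); it builds a tiny in-memory jar and inspects its Created-By header the way the smoke tester's JAR check does.

```python
# Minimal sketch approximating the smoke tester's manifest check
# (an illustration, not the actual smokeTestRelease.py code): read
# META-INF/MANIFEST.MF from a jar and inspect the Created-By value.
import io
import zipfile

def created_by(jar_bytes):
    """Return the Created-By value from a jar's manifest, or None."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")
    for line in manifest.splitlines():
        if line.startswith("Created-By:"):
            return line.split(":", 1)[1].strip()
    return None

# Build a tiny in-memory "jar" (a jar is just a zip) to exercise the check,
# using the Created-By value quoted earlier in the thread.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr(
        "META-INF/MANIFEST.MF",
        "Manifest-Version: 1.0\n"
        "Created-By: 1.7.0_15-b20 (Oracle Corporation)\n",
    )

value = created_by(buf.getvalue())
ok = value is not None and value.startswith("1.6")
print(value)
print(ok)  # False: mirrors the "missing Created-By: 1.6" failure above
```

With a jar built by JDK 1.7, the check fails just as in the smoke tester error quoted earlier; a jar built by JDK 1.6 would pass.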



[jira] [Comment Edited] (LUCENE-4880) Difference in offset handling between IndexReader created by MemoryIndex and one created by RAMDirectory

2013-03-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614080#comment-13614080
 ] 

Uwe Schindler edited comment on LUCENE-4880 at 3/26/13 2:52 PM:


Yes, this is a bug in MemoryIndex. In earlier Lucene versions I think we 
skipped empty terms in the standard IndexWriter, but that's no longer the 
case, so MemoryIndex must be consistent.

  was (Author: thetaphi):
Yes, I this is a bug in MemoryIndex. In earlier Lucene versions I think we 
skipped empty terms in standard IndexWriter, but thats no longer the case. So 
MemoryIndex must be consistent.
  



[jira] [Commented] (LUCENE-4880) Difference in offset handling between IndexReader created by MemoryIndex and one created by RAMDirectory

2013-03-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614080#comment-13614080
 ] 

Uwe Schindler commented on LUCENE-4880:
---

Yes, this is a bug in MemoryIndex. In earlier Lucene versions I think we 
skipped empty terms in the standard IndexWriter, but that's no longer the 
case, so MemoryIndex must be consistent.




[jira] [Commented] (LUCENE-4880) Difference in offset handling between IndexReader created by MemoryIndex and one created by RAMDirectory

2013-03-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614073#comment-13614073
 ] 

Robert Muir commented on LUCENE-4880:
-

Thanks for raising this Timothy. 

I think it's a bug in MemoryIndex: it shouldn't skip zero-length terms.



