[jira] Updated: (SOLR-646) Configuration properties enhancements in solr.xml

2008-09-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-646:
---

Affects Version/s: (was: 1.3)
   1.4
 Assignee: (was: Shalin Shekhar Mangar)

I may not be able to look at this for a week or two. Removing myself as the 
assignee so that I don't hold back others.

> Configuration properties enhancements in solr.xml
> -
>
> Key: SOLR-646
> URL: https://issues.apache.org/jira/browse/SOLR-646
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 1.4
>Reporter: Henri Biestro
> Fix For: 1.4
>
> Attachments: solr-646.patch, solr-646.patch, solr-646.patch, 
> SOLR-646.patch, solr-646.patch, solr-646.patch, solr-646.patch, 
> solr-646.patch, solr-646.patch
>
>
> This patch refers to 'generalized configuration properties' as specified by 
> [HossMan|https://issues.apache.org/jira/browse/SOLR-350?focusedCommentId=12562834#action_12562834]
> This means configuration & schema files can use expression based on 
> properties defined in *solr.xml*.
> h3. Use cases:
> Describe core data directories from solr.xml as properties.
> Share the same schema and/or config file between multiple cores.
> Share reusable fragments of schema & configuration between multiple cores.
> h3. Usage:
> h4. solr.xml
> This *solr.xml* will be used to illustrate using properties for different 
> purposes.
> {code:xml}
> <!-- illustrative reconstruction: element and property names follow the description below -->
> <solr persistent="true">
>   <property name="version" value="3.5"/>
>   <property name="en-cores" value="english,en"/>
>   <property name="fr-cores" value="french,fr"/>
>   <cores adminPath="/admin/cores">
>     <core name="${en-cores}" instanceDir="l10n/">
>       <property name="l10n" value="EN"/>
>     </core>
>     <core name="${fr-cores}" instanceDir="l10n/">
>       <property name="l10n" value="FR"/>
>     </core>
>   </cores>
> </solr>
> {code}
> {{version}}: if you update your solr.xml or your cores for various reasons, 
> it can be useful to keep track of a version. In this example, it is used to 
> define the {{dataDir}} for each core.
> {{en-cores}}, {{fr-cores}}: when a list of aliases is long or repetitive, 
> it can be convenient to define it as a property that is then used as the 
> Solr core name.
> {{instanceDir}}: note that both cores will use the same instance directory, 
> sharing their configuration and schema. The {{dataDir}} will be set for each 
> of them from the *solrconfig.xml*.
> h4. solrconfig.xml
> This is where our *solr.xml* properties are used to define the data directory 
> as a composition of, in our example, the language code {{l10n}} and the core 
> version stored in {{version}}.
> {code:xml}
> <!-- illustrative reconstruction -->
> <config>
>   <dataDir>${solr.solr.home}/data/${l10n}-${version}</dataDir>
>   ...
> </config>
> {code}
> h4. schema.xml
> The {{include}} element imports a file into the schema (or a solrconfig); 
> this can help de-clutter long schemas or reuse parts.
> The {{ctlField}} is just illustrating that a field & its type can be set 
> through properties as well; in our example, we will want the 'english' core 
> to refer to an 'english-configured' field and the 'french' core to a 
> 'french-configured' one. The type for the field is defined as {{text-EN}} or 
> {{text-FR}} after expansion.
> {code:xml}
> <!-- illustrative reconstruction; element names are approximate -->
> <schema name="l10n" version="1.1">
>   <types>
>     <include file="text-l10n.xml"/>
>     ...
>   </types>
>   <fields>
>     ...
>     <field name="${ctlField}" type="text-${l10n}" indexed="true" 
>            stored="true" multiValued="true"/>
>   </fields>
> </schema>
> {code}
> This schema imports the *text-l10n.xml* file, which is a *fragment*; the 
> fragment tag must be present and indicates that the file is to be included. Our 
> example only defines different stopwords for each language, but you could of 
> course extend this to stemmers, synonyms, etc.
> {code:xml}
> <!-- illustrative reconstruction -->
> <fragment>
>   <fieldType name="text-FR" class="solr.TextField" 
>              positionIncrementGap="100">
>     ...
>     <filter class="solr.StopFilterFactory" words="stopwords-fr.txt"/>
>     ...
>   </fieldType>
>   <fieldType name="text-EN" class="solr.TextField" 
>              positionIncrementGap="100">
>     ...
>     <filter class="solr.StopFilterFactory" words="stopwords-en.txt"/>
>     ...
>   </fieldType>
> </fragment>
> {code}
> Alternatively, one can use XML entities with the 'solr:' protocol to the 
> same end, as in:
> {code:xml}
> <!-- illustrative reconstruction -->
> <!DOCTYPE schema [
>   <!ENTITY textL10n SYSTEM "solr:text-l10n.xml">
> ]>
> <schema name="l10n" version="1.1">
>   <types>
>     <fieldType name="string" class="solr.StrField" 
>                omitNorms="true"/>
>     ...
>     &textL10n;
>   </types>
>   ...
> </schema>
> {code}
> h4. Technical specifications
> solr.xml can define properties at the multicore level and at each core level.
> Properties defined in the multicore scope can override system properties.
> Properties defined in a core scope can override multicore & system properties.
> Property definitions can use expressions to define their name & value; these 
> expressions are evaluated in their outer scope context.
> CoreContainer serialization keeps properties as defined; persistence is 
> idempotent (i.e., property expressions are written out, not their evaluations).
> The core descriptor properties are automatically defined in each core 
> context, namely:
> solr.core.instanceDir
> solr.core.name
> solr.core.configName
> solr.core.schemaName
> h3. Coding notes:
> - DOMUtil.java:
> cosmetic changes
toMapExcept systematically skips "xml:base" attributes (which may come from 
entity resolving)
> - CoreDescriptor.java:
> The core descriptor does not store properties as values but as expressions 
> (and all its members can be prop
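The scoped property lookup and expression expansion described in the technical specifications above can be sketched as follows. This is a minimal standalone illustration, not Solr's actual CoreContainer/CoreDescriptor code; the class and method names are invented.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch of scoped property lookup: a core-level scope overrides the
 *  container (multicore) scope, which in turn overrides system properties,
 *  and ${name} expressions are expanded against the enclosing scope. */
public class ScopedProperties {
    private static final Pattern EXPR = Pattern.compile("\\$\\{([^}]+)\\}");

    private final Map<String, String> props = new HashMap<>();
    private final ScopedProperties parent; // null => fall back to System properties

    public ScopedProperties(ScopedProperties parent) { this.parent = parent; }

    public void put(String name, String value) { props.put(name, value); }

    /** Look up a property, walking outward through enclosing scopes. */
    public String get(String name) {
        String v = props.get(name);
        if (v != null) return v;
        return parent != null ? parent.get(name) : System.getProperty(name);
    }

    /** Expand ${name} references in an expression against this scope. */
    public String expand(String expression) {
        Matcher m = EXPR.matcher(expression);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String v = get(m.group(1));
            m.appendReplacement(sb, Matcher.quoteReplacement(v == null ? "" : v));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        ScopedProperties container = new ScopedProperties(null);
        container.put("version", "3.5");
        ScopedProperties core = new ScopedProperties(container);
        core.put("l10n", "en");
        // The core scope sees its own props plus the container's.
        System.out.println(core.expand("data/${l10n}-${version}")); // data/en-3.5
    }
}
```

Note that persisting the descriptor would write back the literal `data/${l10n}-${version}` expression, not its expansion, matching the idempotent persistence described above.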

Re: 1.3.0 RC2

2008-09-05 Thread Koji Sekiguchi

> It looks like all of the "client" directory is now included...

This is true after SOLR-510 was resolved.

> including "python" which we don't even know works with trunk.  Also,

If we need to remove the Python client from the release,
we could reopen SOLR-510 and test the patch:

-includes="LICENSE.txt NOTICE.txt *.txt *.xml lib/** src/** example/** client/**"
+includes="LICENSE.txt NOTICE.txt *.txt *.xml lib/** src/** example/** client/java/** client/ruby/**"


> should "ruby" be included and is it in shape to do so (I assume Erik
> or Koji would need to verify that part of the release), and does its
> release structure look OK (not too much or too little included, etc)?

I verified that "rake test" under branch-1.3/client/ruby/solr-ruby
completes successfully, but I have not tested Flare.
Regarding the release structure, I don't have much time to look into the details...
I'd like to hear comments from Erik :-)

Koji



Re: 1.3.0 RC2

2008-09-05 Thread Yonik Seeley
We will need another build that incorporates SOLR-755, but people
should keep testing / evaluating this current release candidate.

Actually, it's not quite a release candidate, because we wouldn't
release it as is: it's built as a release candidate ("RC2" is in
all the metadata).  Perhaps in the future we should make release
candidates exactly as we would the final build and then actually ship
the last one?

Our simple "example" is filling up a little more... do we really need
the very special purpose exampleAnalysis directory?  Seems like
instructions in the documentation or on a wiki page would be more
appropriate.

CHANGES.txt starts with "Apache Solr Version 1.3-dev"... should be 1.3.0 or 1.3

It looks like all of the "client" directory is now included...
including "python" which we don't even know works with trunk.  Also,
should "ruby" be included and is it in shape to do so (I assume Erik
or Koji would need to verify that part of the release), and does its
release structure look OK (not too much or too little included, etc)?

-Yonik

On Thu, Sep 4, 2008 at 9:18 AM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> Is available at http://people.apache.org/~gsingers/solr/1.3-RC2/
>
> I've included the signatures this time.  If you want to download the whole
> directory, you can get solr.tar
>
> As always, let me know any issues, etc.  Later today I will let solr-user
> know about RC2.
>
> -Grant
>


[jira] Resolved: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-755.
---

   Resolution: Fixed
Fix Version/s: 1.3

ASL for this patch granted, and patch committed to trunk and 1.3 branch.

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 1.3
>
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-758) Enhance DisMaxQParserPlugin to support full-Solr syntax and to support alternate escaping strategies.

2008-09-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-758:
--

Attachment: AdvancedQParserPlugin.java
DisMaxQParserPlugin.java
UserQParser.java

I am contributing source files to this issue instead of patches because the 
code was significantly reworked.
Note that this contribution depends strongly on SOLR-756 and mildly on SOLR-757, 
which I've contributed separately.  They need to be applied for this to compile.  
Even if you don't apply those patches, you can read the source anyway to see 
what it does.

> Enhance DisMaxQParserPlugin to support full-Solr syntax and to support 
> alternate escaping strategies.
> -
>
> Key: SOLR-758
> URL: https://issues.apache.org/jira/browse/SOLR-758
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 1.3
>Reporter: David Smiley
> Attachments: AdvancedQParserPlugin.java, DisMaxQParserPlugin.java, 
> UserQParser.java
>
>
> The DisMaxQParserPlugin has a variety of nice features; chief among them is 
> that it uses the DisjunctionMaxQueryParser.  However, it imposes limitations 
> on the syntax.  
> I've enhanced the DisMax QParser plugin to use a pluggable query string 
> re-writer (via subclass extension) instead of hard-coding the logic currently 
> embedded within it (i.e. the escape-nearly-everything logic). Additionally, 
> I've given this QParser a notion of a "simple" syntax (the default) or 
> non-simple, in which case some of the logic in this QParser doesn't occur 
> because it's irrelevant (phrase boosting and min-should-match in particular). 
> As part of my work I significantly moved the code around to make it clearer 
> and more extensible.  I also chose to rename it to suggest its role as a 
> parser for user queries.
> Attachment to follow...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-758) Enhance DisMaxQParserPlugin to support full-Solr syntax and to support alternate escaping strategies.

2008-09-05 Thread David Smiley (JIRA)
Enhance DisMaxQParserPlugin to support full-Solr syntax and to support 
alternate escaping strategies.
-

 Key: SOLR-758
 URL: https://issues.apache.org/jira/browse/SOLR-758
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.3
Reporter: David Smiley


The DisMaxQParserPlugin has a variety of nice features; chief among them is 
that it uses the DisjunctionMaxQueryParser.  However, it imposes limitations on 
the syntax.  

I've enhanced the DisMax QParser plugin to use a pluggable query string 
re-writer (via subclass extension) instead of hard-coding the logic currently 
embedded within it (i.e. the escape-nearly-everything logic). Additionally, 
I've given this QParser a notion of a "simple" syntax (the default) or 
non-simple, in which case some of the logic in this QParser doesn't occur 
because it's irrelevant (phrase boosting and min-should-match in particular). As 
part of my work I significantly moved the code around to make it clearer and 
more extensible.  I also chose to rename it to suggest its role as a parser 
for user queries.

Attachment to follow...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-757) SolrQueryParser should support escaping of characters in lieu of analysis for prefix & wildcard & fuzzy searches.

2008-09-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-757:
--

Attachment: SolrQueryParser_wildcardescape.patch

Apply this patch to the directory src/java/org/apache/solr/search and it will 
modify just SolrQueryParser.java (I've tested this using TextMate's diff 
plugin).

Note that this patch also calls setLowercaseExpandedTerms(true) because I found 
that more to my liking.
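The character-stripping idea this patch pursues can be sketched roughly as follows. This is hypothetical standalone code, not the actual SolrQueryParser change; the class and method names are invented.

```java
import java.util.regex.Pattern;

/** Sketch: before building a prefix/wildcard/fuzzy query, lowercase the raw
 *  term and delete any characters matching a supplied regular expression,
 *  since such terms bypass the normal analyzer chain. */
public class WildcardTermCleanup {
    public static String cleanup(String rawTerm, Pattern deletePattern) {
        String t = rawTerm.toLowerCase(); // mirrors setLowercaseExpandedTerms(true)
        return deletePattern.matcher(t).replaceAll("");
    }

    public static void main(String[] args) {
        // Hypothetical policy: strip apostrophes and periods from wildcard terms.
        Pattern p = Pattern.compile("['.]");
        System.out.println(cleanup("O'Reil*", p)); // oreil*
    }
}
```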

> SolrQueryParser should support escaping of characters in lieu of analysis for 
> prefix & wildcard & fuzzy searches.
> -
>
> Key: SOLR-757
> URL: https://issues.apache.org/jira/browse/SOLR-757
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 1.3
>Reporter: David Smiley
> Attachments: SolrQueryParser_wildcardescape.patch
>
>
> In Lucene and Solr, query words that are prefix, wildcard, or fuzzy do not 
> go through analyzer processing.  This is for well-understood reasons.  
> However, for a given field in my data I might want certain processing to occur.  
> Lowercasing has already been identified in SOLR-219.  Another, which I address 
> in the attached patch file, is the ability to remove characters that match a 
> supplied regular expression.  I've implemented this as part of 
> SolrQueryParser, but there should probably be a more thorough plan, such as an 
> analyzer chain expressly for the purpose of being applied to prefix, 
> wildcard, and fuzzy queries.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Lars Kotthoff (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628728#action_12628728
 ] 

Lars Kotthoff commented on SOLR-755:


bq. So... the only question left is if this should go in 1.3 or not.

Since it's clearly a bug, +1 for getting it into 1.3.

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-757) SolrQueryParser should support escaping of characters in lieu of analysis for prefix & wildcard & fuzzy searches.

2008-09-05 Thread David Smiley (JIRA)
SolrQueryParser should support escaping of characters in lieu of analysis for 
prefix & wildcard & fuzzy searches.
-

 Key: SOLR-757
 URL: https://issues.apache.org/jira/browse/SOLR-757
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.3
Reporter: David Smiley


In Lucene and Solr, query words that are prefix, wildcard, or fuzzy do not go 
through analyzer processing.  This is for well-understood reasons.  
However, for a given field in my data I might want certain processing to occur.  
Lowercasing has already been identified in SOLR-219.  Another, which I address 
in the attached patch file, is the ability to remove characters that match a 
supplied regular expression.  I've implemented this as part of SolrQueryParser, 
but there should probably be a more thorough plan, such as an analyzer chain 
expressly for the purpose of being applied to prefix, wildcard, and fuzzy 
queries.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Lars Kotthoff (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628725#action_12628725
 ] 

Lars Kotthoff commented on SOLR-755:


bq. Back compatibility.

Right. What's going to break if this is changed? As far as I can see it's just 
code that relies on facet counts being returned in a particular order -- which, 
in this case, is no particular order ;)

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628722#action_12628722
 ] 

Yonik Seeley commented on SOLR-755:
---

So... the only question left is if this should go in 1.3 or not.
A client workaround is easy (specify a large number rather than -1), but it's 
probably not too much work to make another release candidate either.

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Lars Kotthoff (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628719#action_12628719
 ] 

Lars Kotthoff commented on SOLR-755:


Ah, good point. I was testing with facet.mincount=1. Just disregard my patch.

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628720#action_12628720
 ] 

Yonik Seeley commented on SOLR-755:
---

bq. Is there any reason why sorting is only turned on for limits > 0?

Back compatibility.  At one point there was no facet.sort... before I added it, 
facet.limit read as follows:
{quote}
 == facet.limit ==
This param indicates the maximum number of constraint counts that should be 
returned for the facet fields.  If a non-negative value is specified, the 
constraints (ie: Terms) will be sorted by the facet count (descending) and only 
the top N terms will be returned with their counts.  A negative value will 
result in every constraint being returned, in no particular order.
{quote}
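The quoted facet.limit contract can be sketched as follows. This is illustrative code, not Solr's implementation: a non-negative limit returns the top-N constraints sorted by count descending, while a negative limit returns every constraint, unordered.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the documented facet.limit behavior (not Solr's code). */
public class FacetLimit {
    public static List<Map.Entry<String, Integer>> apply(Map<String, Integer> counts, int limit) {
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(counts.entrySet());
        if (limit < 0) return entries;  // negative limit: everything, no sorting
        entries.sort(Map.Entry.<String, Integer>comparingByValue().reversed());
        return entries.subList(0, Math.min(limit, entries.size()));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("solr", 3); counts.put("lucene", 7); counts.put("jira", 5);
        System.out.println(apply(counts, 2));        // [lucene=7, jira=5]
        System.out.println(apply(counts, -1).size()); // 3
    }
}
```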

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628716#action_12628716
 ] 

Yonik Seeley commented on SOLR-755:
---

Whoops, you snuck in first... thanks Lars.
Although, I think your fix would cause an out-of-bounds exception?



> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-755:
--

Attachment: SOLR-755.patch

patch attached

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch, SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Lars Kotthoff (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Kotthoff updated SOLR-755:
---

Attachment: SOLR-755.patch

I could swear that this was working for me a while back before I abandoned it 
because it was too slow. Ah well.

The culprit is the facet paginating code. The end is determined with
{code}
int end = Math.min(dff.offset + dff.limit, counts.length);
{code}
which is going to be -1 if the limit is set to -1, i.e. no facet counts will be 
returned.

The attached patch adds a check for a negative value for end and sets it to 
Integer.MAX_VALUE. This fixes the behaviour when sorting is turned on.

Is there any reason why sorting is only turned on for limits > 0? I think it 
would make sense to turn it on for -1 as well, or probably even for everything 
as it's not going to make a difference when no facet counts are requested.
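The guard described above could look like the following sketch. Lars's patch sets a negative end to Integer.MAX_VALUE; here the bound is instead clamped to the array length, and the `dff` fields are replaced by plain parameters for illustration.

```java
/** Sketch of the pagination fix: when facet.limit is -1,
 *  offset + limit underflows, so treat a negative end as "no upper bound". */
public class FacetPagination {
    public static int endIndex(int offset, int limit, int countsLength) {
        int end = Math.min(offset + limit, countsLength);
        if (end < 0) end = countsLength;  // limit == -1: return all counts
        return end;
    }

    public static void main(String[] args) {
        System.out.println(endIndex(0, -1, 10)); // 10 instead of -1
        System.out.println(endIndex(0, 5, 10));  // 5
    }
}
```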

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-755.patch
>
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-756) Make DisjunctionMaxQueryParser generally useful by supporting all query types.

2008-09-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-756:
--

Attachment: SolrPluginUtilsDisMax.patch

The patch file is only to be applied to SolrPluginUtils.java.  The patch file 
might need to be modified to remove the header that indicates the path to this 
file, which is unique to my development system.
Testing this would be a little tricky because this parser is only used by one 
QParser, and it prohibits the types of queries that would exercise these 
changes.  I have tests on my system, but they involve other things that I have 
not contributed (yet, anyway).

> Make DisjunctionMaxQueryParser generally useful by supporting all query types.
> --
>
> Key: SOLR-756
> URL: https://issues.apache.org/jira/browse/SOLR-756
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
>Reporter: David Smiley
> Attachments: SolrPluginUtilsDisMax.patch
>
>
> This is an enhancement to the DisjunctionMaxQueryParser to work on all the 
> query variants, such as wildcard, prefix, and fuzzy queries, and to support 
> working in "AND" scenarios that are not processed by the min-should-match 
> DisMax QParser. This was not in Solr already because DisMax was only used for 
> a very limited syntax that didn't use those features. In my opinion, this 
> makes a more suitable base parser for general use because, unlike the 
> Lucene/Solr parser, this one supports multiple default fields, whereas other 
> ones (say, Yonik's {!prefix} one) can't do dismax. The notion of 
> a single default field is antiquated, a technical under-the-hood detail of 
> Lucene that I think Solr should shield the user from by using a 
> DisMax on the fly when multiple fields are used. 
> (patch to be attached soon)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-284) Parsing Rich Document Types

2008-09-05 Thread Chris Harris (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Harris updated SOLR-284:
--

Attachment: rich.patch

THIS IS A BREAKING CHANGE TO RICH.PATCH! CLIENT URLs NEED TO BE UPDATED!

All unit tests pass.

Changes:

* As suggested earlier, the "id" parameter is no longer treated as a special 
case; it is not required, and it does not need to be an int. If you *do* use a 
field called "id", you *must* now declare it in the fieldnames parameter, as 
you would any other field.

* Do updates with UpdateRequestProcessor and SolrInputDocument, rather 
than UpdateHandler and DocumentBuilder. (The latter pair appear to be obsolete.)

* Previously, if you declared a field in the fieldnames parameter but then did 
not specify a value for that field, you would get a 
NullPointerException. Now you can specify any nonnegative number of values for 
a declared field, including zero. (I've added a unit test for this.)

* In SolrPDFParser, properly close PDDocument when PDF parsing throws an 
exception

* Log the stream type in the solr log, rather than on the console

* Some not-very-thorough conversion of tabs to spaces

As an aside, I've noticed that I failed in my earlier efforts to incorporate 
Juri Kuehn's change to allow the id field to be non-integer. Sorry about that, 
Juri; that was not at all intentional.


> Parsing Rich Document Types
> ---
>
> Key: SOLR-284
> URL: https://issues.apache.org/jira/browse/SOLR-284
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: Eric Pugh
> Fix For: 1.4
>
> Attachments: libs.zip, rich.patch, rich.patch, rich.patch, 
> rich.patch, rich.patch, rich.patch, rich.patch, source.zip, test-files.zip, 
> test-files.zip, test.zip, un-hardcode-id.diff
>
>
> I have developed a RichDocumentRequestHandler based on the CSVRequestHandler 
> that supports streaming a PDF, Word, PowerPoint, or Excel document into 
> Solr.
> There is a wiki page with information here: 
> http://wiki.apache.org/solr/UpdateRichDocuments
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-756) Make DisjunctionMaxQueryParser generally useful by supporting all query types.

2008-09-05 Thread David Smiley (JIRA)
Make DisjunctionMaxQueryParser generally useful by supporting all query types.
--

 Key: SOLR-756
 URL: https://issues.apache.org/jira/browse/SOLR-756
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
Reporter: David Smiley


This is an enhancement to the DisjunctionMaxQueryParser to work on all the 
query variants, such as wildcard, prefix, and fuzzy queries, and to support 
working in "AND" scenarios that are not processed by the min-should-match 
DisMax QParser. This was not in Solr already because DisMax was only used for a 
very limited syntax that didn't use those features. In my opinion, this makes a 
more suitable base parser for general use because, unlike the Lucene/Solr 
parser, this one supports multiple default fields, whereas other ones (say, 
Yonik's {!prefix} one) can't do dismax. The notion of a single 
default field is antiquated, a technical under-the-hood detail of Lucene 
that I think Solr should shield the user from by using a DisMax on the fly 
when multiple fields are used. 
(patch to be attached soon)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628704#action_12628704
 ] 

Yonik Seeley commented on SOLR-755:
---

I also just found and fixed a bug in TestDistributedSearch that failed to detect 
some differences in responses... so a test of 
facet.limit=-1&facet.sort=true now fails, as it does in a manual test.

Note that one must add facet.sort=true in conjunction with facet.limit=-1, 
since it defaults to unsorted (or sorted-by-term), and that is currently 
unsupported by distributed search.

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Shalin Shekhar Mangar
On Fri, Sep 5, 2008 at 8:26 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:

>
> We all scratch our own itch in open-source... good, practical
> scalability to high levels is an interest of mine :-)
> Many will choose their solution based on its ability to scale to
> their most optimistic projections... so they may only use 10 or 20
> servers, but if it can't scale to 100 then they might start with
> something that easily can.  And I think this work would greatly
> benefit those with smaller clusters also.
>

Yonik, I agree with you that reaching that scale is what we should be aiming
for. What I'm really trying to say is that giving a real time search
solution *now* to our *current* users is the path that may give the most
value to Solr. We can start with that limited scope and proceed to manage
thousands of nodes in subsequent iterations.

I'm hopeful that this modest scope can be achieved with tweaks to the
current system by Solr 1.5 in less than a year (sorry Hoss :-) ). Working
towards a completely new architecture for the benefit of users that we don't
even have right now may distract us from our current users' needs.

I propose that we use the wiki to jot down all the goals that we want to
achieve. Let us scope them such that each goal is achievable in a quarter of
a year, re-architecting pieces as and when necessary to achieve each bit.
Backwards compatibility will be useful but not compulsory. Trivial loss of
compatibility may be OK -- if it becomes impossible to be backwards
compatible in a major way, we can bump the release to a major version. I'm
sure users won't mind doing a little work if we can give them compelling
features.

Thoughts?

-- 
Regards,
Shalin Shekhar Mangar.


[jira] Commented: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Wojtek Piaseczny (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628670#action_12628670
 ] 

Wojtek Piaseczny commented on SOLR-755:
---

Hi Yonik,

I submitted a patch for SOLR-748 
(https://issues.apache.org/jira/browse/SOLR-748) a few days ago. I'm guessing 
that whatever changes you make for this bug will break my patch. Could you 
review my patch while you're making these changes?


Thanks,


Wojtek


Quoted from: 
http://www.nabble.com/-jira--Created%3A-%28SOLR-755%29-facet.limit%3D-1-does-not-work-in-distributed-search-tp19334831p19334879.html



> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.




[jira] Assigned: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-755:
-

Assignee: Yonik Seeley

> facet.limit=-1 does not work in distributed search
> --
>
> Key: SOLR-755
> URL: https://issues.apache.org/jira/browse/SOLR-755
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
>
> If you specify facet.limit=-1 in distributed mode, no facet results are 
> returned.




[jira] Created: (SOLR-755) facet.limit=-1 does not work in distributed search

2008-09-05 Thread Yonik Seeley (JIRA)
facet.limit=-1 does not work in distributed search
--

 Key: SOLR-755
 URL: https://issues.apache.org/jira/browse/SOLR-755
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley
Priority: Minor


If you specify facet.limit=-1 in distributed mode, no facet results are 
returned.
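A plausible failure mode (an assumption, not confirmed in this thread) is that the coordinator pads the user's facet.limit before querying each shard, and the "unlimited" sentinel of -1 breaks that arithmetic. A minimal sketch of the fix shape, with a hypothetical method name and padding heuristic, not Solr's actual code:

```java
// Hypothetical sketch: computing the per-shard facet limit during
// distributed faceting. Coordinators typically over-request terms from
// each shard to reduce merge errors; a negative user limit means
// "unlimited" and must bypass that arithmetic instead of feeding into it.
public class ShardFacetLimit {
    public static int shardLimit(int userLimit) {
        if (userLimit < 0) {
            return -1; // propagate "unlimited" unchanged
        }
        // common over-request heuristic: pad the requested limit
        return (int) (userLimit * 1.5) + 10;
    }

    public static void main(String[] args) {
        System.out.println(shardLimit(10)); // padded to 25
        System.out.println(shardLimit(-1)); // stays -1 (unlimited)
    }
}
```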




NullPointerException with duplicate shards

2008-09-05 Thread wojtekpia

I've noticed a NullPointerException being thrown in the FacetComponent's
countFacets method when I specify duplicate shards in the URL (e.g.
...&shards=localhost:8983/solr,localhost:8983/solr...&facet=on). Should I
create a JIRA issue for this, or is it just user error?
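Whether or not it is declared user error, the fan-out code could defend itself by deduplicating the shards list before issuing requests. A minimal sketch under that assumption (hypothetical names, not actual Solr code):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;

// Hypothetical sketch: deduplicating the shards parameter before fanning
// out distributed requests, so a repeated shard URL cannot produce
// duplicate per-shard responses downstream.
public class ShardDedup {
    public static String[] dedup(String shardsParam) {
        // LinkedHashSet drops repeats while keeping first-seen order
        LinkedHashSet<String> unique =
            new LinkedHashSet<>(Arrays.asList(shardsParam.split(",")));
        return unique.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String shards = "localhost:8983/solr,localhost:8983/solr,localhost:7574/solr";
        System.out.println(Arrays.toString(dedup(shards)));
    }
}
```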
-- 
View this message in context: 
http://www.nabble.com/NullPointerException-with-duplicate-shards-tp19334437p19334437.html
Sent from the Solr - Dev mailing list archive at Nabble.com.



Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Mark Miller

Yonik Seeley wrote:

On Fri, Sep 5, 2008 at 10:45 AM, Shalin Shekhar Mangar
<[EMAIL PROTECTED]> wrote:
  

Cloud computing cluster having hundreds of servers is a niche area
limited to few (and they already have their systems in place without Solr).
Any efforts of integration must keep in mind our users and their needs.

Most Solr deployments have around 1-25 servers.



We all scratch our own itch in open-source... good, practical
scalability to high levels is an interest of mine :-)
Many will choose their solution based on its ability to scale to
their most optimistic projections... so they may only use 10 or 20
servers, but if it can't scale to 100 then they might start with
something that easily can.  And I think this work would greatly
benefit those with smaller clusters also.

-Yonik
  
Can't emphasize how much I agree with this. Search engines are often 
kept in place for 4 or 5 years; you don't want to switch too 
often. In this day and age, many of the users we might want to target Solr 
to (the 'enterprise') are going to want to be able to do capacity 
planning over the next 4 or 5 years. The rate at which some of these players 
are growing (or will grow) seems to mean we should be shooting for 100 servers with 
Solr, no problem. To put Solr in the class of FAST et al. (something I am 
personally interested in), we need almost arbitrary ease of scaling.


When FAST comes in, they give you a formula telling you how many servers 
with how much RAM you need to run a collection of n documents. Their formula 
probably breaks down eventually, but it seems to hold at least into the billions. A 
company looking at FAST knows it can throw servers at its great-to-have, 
fast-growing problem at almost any realistic scale. I think a 
company should know it can do the same thing with Solr. Part of my 
itch anyway.


My guess is that 10-15 million docs per decent machine today is optimal... so 
to get to a billion docs, that's... or 500 million...


It might be that few current users have those needs, but think of the 
users we will have in the coming years or could have now...and how our 
current users will benefit in ways they don't expect now...


Solr should rule the search server roost across the board, given enough 
time. Why not?


- Mark


[jira] Commented: (SOLR-42) Highlighting problems with HTMLStripWhitespaceTokenizerFactory

2008-09-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628650#action_12628650
 ] 

Francisco Sanmartín commented on SOLR-42:
-

Is the patch for Solr 1.3? 

I tried to apply it to Solr 1.2 but it fails :(

Can anybody fill in the "affected version" and "fix version" fields? Thanks!

> Highlighting problems with HTMLStripWhitespaceTokenizerFactory
> --
>
> Key: SOLR-42
> URL: https://issues.apache.org/jira/browse/SOLR-42
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Reporter: Andrew May
>Assignee: Grant Ingersoll
>Priority: Minor
> Attachments: htmlStripReaderTest.html, HTMLStripReaderTest.java, 
> HtmlStripReaderTestXmlProcessing.patch, 
> HtmlStripReaderTestXmlProcessing.patch, SOLR-42.patch, SOLR-42.patch, 
> SOLR-42.patch, SOLR-42.patch, TokenPrinter.java
>
>
> Indexing content that contains HTML markup, causes problems with highlighting 
> if the HTMLStripWhitespaceTokenizerFactory is used (to prevent the tag names 
> from being searchable).
> Example title field:
> 40Ar/39Ar laserprobe dating of mylonitic fabrics in a 
> polyorogenic terrane of NW Iberia
> Searching for title:fabrics with highlighting on, the highlighted version has 
> the  tags in the wrong place - 22 characters to the left of where they 
> should be (i.e. the sum of the lengths of the tags).
> Response from Yonik on the solr-user mailing-list:
> HTMLStripWhitespaceTokenizerFactory works in two phases...
> HTMLStripReader removes the HTML and passes the result to
> WhitespaceTokenizer... at that point, Tokens are generated, but the
> offsets will correspond to the text after HTML removal, not before.
> I did it this way so that HTMLStripReader  could go before any
> tokenizer (like StandardTokenizer).
> Can you open a JIRA bug for this?  The fix would be a special version
> of HTMLStripReader integrated with a WhitespaceTokenizer to keep
> offsets correct. 
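The 22-character shift described above can be reproduced directly. In this sketch (illustration only: a naive regex stripper stands in for HTMLStripReader, and the markup in the sample title is reconstructed), offsets computed against the stripped text land short of the original by the total length of the tags removed before the token:

```java
// Illustrates the offset drift: after tags are stripped, a tokenizer's
// offsets refer to the stripped text, so mapping them back onto the
// original markup lands them too far left by the total length of the
// tags removed before the token (here 22 characters for four <sup> tags).
public class OffsetDrift {
    // naive tag stripper, for illustration only
    public static String strip(String html) {
        return html.replaceAll("<[^>]*>", "");
    }

    public static void main(String[] args) {
        String original =
            "<sup>40</sup>Ar/<sup>39</sup>Ar laserprobe dating of mylonitic fabrics";
        String stripped = strip(original);
        // the gap equals the markup removed before the token
        System.out.println(original.indexOf("fabrics") - stripped.indexOf("fabrics")); // prints 22
    }
}
```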




Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Yonik Seeley
On Fri, Sep 5, 2008 at 10:45 AM, Shalin Shekhar Mangar
<[EMAIL PROTECTED]> wrote:
> Cloud computing cluster having hundreds of servers is a niche area
> limited to few (and they already have their systems in place without Solr).
> Any efforts of integration must keep in mind our users and their needs.
>
> Most Solr deployments have around 1-25 servers.

We all scratch our own itch in open-source... good, practical
scalability to high levels is an interest of mine :-)
Many will choose their solution based on its ability to scale to
their most optimistic projections... so they may only use 10 or 20
servers, but if it can't scale to 100 then they might start with
something that easily can.  And I think this work would greatly
benefit those with smaller clusters also.

-Yonik


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Shalin Shekhar Mangar
On Fri, Sep 5, 2008 at 7:59 PM, Mark Miller <[EMAIL PROTECTED]> wrote:

> These all sound like good ideas for solr2. The ability to handle changes on
> the fly easily across many machines would be an awesome place to reach.
> Dynamically changing field schema/stopword stuff is also a great feature to
> have.
>
> I think the key to reaching those goals is to really join the solr open
> source process and push at it a piece at a time. I think almost all of the
> features you have talked about make sense for the future of solr in one way
> or another. Take advantage of Solr's popularity and I think eventually we can
> have more people helping make these goals a reality. It might not happen as
> fast as you'd like (already having all this Ocean code), but I think the
> long run returns will be much higher.
>
> You already have a great start with the patches and wiki info, but I think
> I am suggesting leaving the Ocean part out a little more, and maybe tackling
> these issues more from a Lucene/Solr perspective, using the Ocean ideas and
> code as leverage to move forward faster.


Mark has hit the nail on the head. We must start with the features which are
most relevant to our current set of users. Realtime search is a need for
many. A cloud computing cluster with hundreds of servers is a niche area
limited to a few (and they already have their systems in place without Solr).
Any integration effort must keep our users and their needs in mind.

Most Solr deployments have around 1-25 servers. We must start with that
use-case while keeping the grander goal in mind. We should not be focusing on
building something that only the developers want to build and use two years down
the line.

-- 
Regards,
Shalin Shekhar Mangar.


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Mark Miller
These all sound like good ideas for solr2. The ability to handle changes 
on the fly easily across many machines would be an awesome place to 
reach. Dynamically changing field schema/stopword stuff is also a great 
feature to have.


I think the key to reaching those goals is to really join the solr open 
source process and push at it a piece at a time. I think almost all of 
the features you have talked about make sense for the future of solr in 
one way or another. Take advantage of Solr's popularity and I think 
eventually we can have more people helping make these goals a reality. 
It might not happen as fast as you'd like (already having all this Ocean 
code), but I think the long run returns will be much higher.


You already have a great start with the patches and wiki info, but I 
think I am suggesting leaving the Ocean part out a little more, and 
maybe tackling these issues more from a Lucene/Solr perspective, using 
the Ocean ideas and code as leverage to move forward faster.


- Mark

Jason Rutherglen wrote:

That may be the way to go, however there are many issues that you will
probably run into, the same ones I ran into that made the integration
difficult as mentioned.  I could be wrong however.  Most "realtime
search" is something on the order of a 5-10 second delay and there is
no transaction log.  I guess I made the goal of Ocean to be a
replacement for SQL databases, 0 second delay, transaction log
reliability, easy scalability, easy to add heavily customized queries
(span, payload, custom similarities, etc), and maximum uptime.  I
wanted to concentrate a lot on the grid computing part and leave the
rest of the system as generic as possible so that what I consider to be
application-specific code is not part of the search server.  For
example, I consider custom Analyzers to be application-specific code.
I want those dynamically loaded so I don't have to reboot 100 servers
if I make a change to one.  Add a stop word?  Then you don't have to
reboot 100 servers.  Synonyms?  The list goes on and on.  Search is
fluid and dynamic.

On Fri, Sep 5, 2008 at 10:07 AM, Noble Paul നോബിള്‍ नोब्ळ्
<[EMAIL PROTECTED]> wrote:
  

On Fri, Sep 5, 2008 at 7:23 PM, Mark Miller <[EMAIL PROTECTED]> wrote:


IMO, you are underestimating the difficulty of integrating Ocean with Solr's
current API's.
  

OK, you are right. Actually, I did not mean the Ocean integration; I
am mostly interested in the realtime search part.  If we take one baby
step at a time it may be easy for us.  We can add features one by
one, but why don't we start with realtime search? (This sounds like an
immediately useful feature to an average Solr user.)


Also, Jason has already mentioned that Ocean is much more than just realtime
search. Adding realtime search to something like solr 1.5 is a different
goal than possibly integrating the Ocean work that has been done / is
planned, which seems like a very large scope project and if done would
certainly seem to merit a 2.0 change in its own right.

Still seems large and nebulous to me at the moment...just like solr 2. They
go well together in my mind 

Noble Paul നോബിള്‍ नोब्ळ् wrote:
  

Postponing Ocean Integration towards 2.0 is not a good idea. First of
all we do not know when 2.0 is going to happen. delaying  such a good
feature till 2.0 is wasting time.

My assumption was that Actually realtime search may have nothing to do
with the core itself . It may be fine with a Pluggable
SolrIndexSearcherFactory/SolrIndexWriterFactory . Ocean can have a
unified reader-writer which may choose to implement both in one class.

A total rewrite has its own problems. Achieving consensus on how
things should change is time consuming. So it will keep getting
delayed.  If with a few changes we can start the integration, that is
the best way forward . Eventually , we can slowly ,  evolve to a
better design. But, the design need not be as important as the feature
itself.



On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:



On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
<[EMAIL PROTECTED]> wrote:

  

Ok, SOLR 2 can be a from the ground up rewrite?



Sort-of... I think that's up for discussion at this point, but enough
should change that keeping Java APIs back compatible is not a priority
(just my opinion of course).  Supporting the current main search and
update interfaces and migrating most of the handlers shouldn't be that
difficult.  We should be able to provide relatively painless back
compatibility for the 95% of Solr users that don't do any custom
Java and the others hopefully won't mind migrating their stuff to
get the cool new features :-)

As far as SolrCore goes... I agree it's probably best to not do
pluggability at that level.
The way that Lucene has evolved, and may evolve (and how we want Solr
to evolve), it seems like we want more of a combo
IndexReader/IndexWriter interface.  It also needs (optional)
optimistic concurrency... that was also assumed in the discussions
about bailey.

-Yonik

Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Jason Rutherglen
That may be the way to go, however there are many issues that you will
probably run into, the same ones I ran into that made the integration
difficult as mentioned.  I could be wrong however.  Most "realtime
search" is something on the order of a 5-10 second delay and there is
no transaction log.  I guess I made the goal of Ocean to be a
replacement for SQL databases, 0 second delay, transaction log
reliability, easy scalability, easy to add heavily customized queries
(span, payload, custom similarities, etc), and maximum uptime.  I
wanted to concentrate a lot on the grid computing part and leave the
rest of the system as generic as possible so that what I consider to be
application-specific code is not part of the search server.  For
example, I consider custom Analyzers to be application-specific code.
I want those dynamically loaded so I don't have to reboot 100 servers
if I make a change to one.  Add a stop word?  Then you don't have to
reboot 100 servers.  Synonyms?  The list goes on and on.  Search is
fluid and dynamic.

On Fri, Sep 5, 2008 at 10:07 AM, Noble Paul നോബിള്‍ नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> On Fri, Sep 5, 2008 at 7:23 PM, Mark Miller <[EMAIL PROTECTED]> wrote:
>> IMO, you are underestimating the difficulty of integrating Ocean with Solr's
>> current API's.
> OK, you are right. Actually, I did not mean the Ocean integration; I
> am mostly interested in the realtime search part.  If we take one baby
> step at a time it may be easy for us.  We can add features one by
> one, but why don't we start with realtime search? (This sounds like an
> immediately useful feature to an average Solr user.)
>>
>> Also, Jason has already mentioned that Ocean is much more than just realtime
>> search. Adding realtime search to something like solr 1.5 is a different
>> goal than possibly integrating the Ocean work that has been done / is
>> planned, which seems like a very large scope project and if done would
>> certainly seem to merit a 2.0 change in its own right.
>>
>> Still seems large and nebulous to me at the moment...just like solr 2. They
>> go well together in my mind 
>>
>> Noble Paul നോബിള്‍ नोब्ळ् wrote:
>>>
>>> Postponing Ocean Integration towards 2.0 is not a good idea. First of
>>> all we do not know when 2.0 is going to happen. delaying  such a good
>>> feature till 2.0 is wasting time.
>>>
>>> My assumption was that Actually realtime search may have nothing to do
>>> with the core itself . It may be fine with a Pluggable
>>> SolrIndexSearcherFactory/SolrIndexWriterFactory . Ocean can have a
>>> unified reader-writer which may choose to implement both in one class.
>>>
>>> A total rewrite has its own problems. Achieving consensus on how
>>> things should change is time consuming. So it will keep getting
>>> delayed.  If with a few changes we can start the integration, that is
>>> the best way forward . Eventually , we can slowly ,  evolve to a
>>> better design. But, the design need not be as important as the feature
>>> itself.
>>>
>>>
>>>
>>> On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
>>>

 On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
 <[EMAIL PROTECTED]> wrote:

>
> Ok, SOLR 2 can be a from the ground up rewrite?
>

 Sort-of... I think that's up for discussion at this point, but enough
 should change that keeping Java APIs back compatible is not a priority
 (just my opinion of course).  Supporting the current main search and
 update interfaces and migrating most of the handlers shouldn't be that
 difficult.  We should be able to provide relatively painless back
 compatibility for the 95% of Solr users that don't do any custom
 Java and the others hopefully won't mind migrating their stuff to
 get the cool new features :-)

 As far as SolrCore goes... I agree it's probably best to not do
 pluggability at that level.
 The way that Lucene has evolved, and may evolve (and how we want Solr
 to evolve), it seems like we want more of a combo
 IndexReader/IndexWriter interface.  It also needs (optional)
 optimistic concurrency... that was also assumed in the discussions
 about bailey.

 -Yonik


>>>
>>>
>>>
>>>
>>
>>
>
>
>
> --
> --Noble Paul
>


[jira] Updated: (SOLR-754) Solr logo on admin pages should always link back to main admin page

2008-09-05 Thread Sean Timm (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Timm updated SOLR-754:
---

Attachment: SOLR-754.patch

> Solr logo on admin pages should always link back to main admin page
> ---
>
> Key: SOLR-754
> URL: https://issues.apache.org/jira/browse/SOLR-754
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 1.2, 1.3
>Reporter: Sean Timm
>Priority: Trivial
> Fix For: 1.3
>
> Attachments: SOLR-754.patch
>
>
> The Solr logo on the threaddump, ping and registry admin pages do not link 
> back to the main admin page as the logos on the other pages do.  This patch 
> fixes the hyperlinks in the XSL files for those three admin pages.




[jira] Created: (SOLR-754) Solr logo on admin pages should always link back to main admin page

2008-09-05 Thread Sean Timm (JIRA)
Solr logo on admin pages should always link back to main admin page
---

 Key: SOLR-754
 URL: https://issues.apache.org/jira/browse/SOLR-754
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 1.2, 1.3
Reporter: Sean Timm
Priority: Trivial
 Fix For: 1.3


The Solr logo on the threaddump, ping and registry admin pages do not link back 
to the main admin page as the logos on the other pages do.  This patch fixes 
the hyperlinks in the XSL files for those three admin pages.




Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Noble Paul നോബിള്‍ नोब्ळ्
On Fri, Sep 5, 2008 at 7:23 PM, Mark Miller <[EMAIL PROTECTED]> wrote:
> IMO, you are underestimating the difficulty of integrating Ocean with Solr's
> current API's.
OK, you are right. Actually, I did not mean the Ocean integration; I
am mostly interested in the realtime search part.  If we take one baby
step at a time it may be easy for us.  We can add features one by
one, but why don't we start with realtime search? (This sounds like an
immediately useful feature to an average Solr user.)
>
> Also, Jason has already mentioned that Ocean is much more than just realtime
> search. Adding realtime search to something like solr 1.5 is a different
> goal than possibly integrating the Ocean work that has been done / is
> planned, which seems like a very large scope project and if done would
> certainly seem to merit a 2.0 change in its own right.
>
> Still seems large and nebulous to me at the moment...just like solr 2. They
> go well together in my mind 
>
> Noble Paul നോബിള്‍ नोब्ळ् wrote:
>>
>> Postponing Ocean Integration towards 2.0 is not a good idea. First of
>> all we do not know when 2.0 is going to happen. delaying  such a good
>> feature till 2.0 is wasting time.
>>
>> My assumption was that Actually realtime search may have nothing to do
>> with the core itself . It may be fine with a Pluggable
>> SolrIndexSearcherFactory/SolrIndexWriterFactory . Ocean can have a
>> unified reader-writer which may choose to implement both in one class.
>>
>> A total rewrite has its own problems. Achieving consensus on how
>> things should change is time consuming. So it will keep getting
>> delayed.  If with a few changes we can start the integration, that is
>> the best way forward . Eventually , we can slowly ,  evolve to a
>> better design. But, the design need not be as important as the feature
>> itself.
>>
>>
>>
>> On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
>>> <[EMAIL PROTECTED]> wrote:
>>>

 Ok, SOLR 2 can be a from the ground up rewrite?

>>>
>>> Sort-of... I think that's up for discussion at this point, but enough
>>> should change that keeping Java APIs back compatible is not a priority
>>> (just my opinion of course).  Supporting the current main search and
>>> update interfaces and migrating most of the handlers shouldn't be that
>>> difficult.  We should be able to provide relatively painless back
>>> compatibility for the 95% of Solr users that don't do any custom
>>> Java and the others hopefully won't mind migrating their stuff to
>>> get the cool new features :-)
>>>
>>> As far as SolrCore goes... I agree it's probably best to not do
>>> pluggability at that level.
>>> The way that Lucene has evolved, and may evolve (and how we want Solr
>>> to evolve), it seems like we want more of a combo
>>> IndexReader/IndexWriter interface.  It also needs (optional)
>>> optimistic concurrency... that was also assumed in the discussions
>>> about bailey.
>>>
>>> -Yonik
>>>
>>>
>>
>>
>>
>>
>
>



-- 
--Noble Paul


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Jason Rutherglen
Ok... Well I did try that. I think that can be done as well. IMO
schemas should be avoided with realtime; otherwise there is a
nightmare with schema versions. The current config files would not be
used. How do you propose the non-integration of those things? It
would seem to create a strange non-overlapping system within SOLR.

Then I begin to wonder what SOLR is being used for here. Is it the
RequestHandlers? But those don't support optimistic concurrency.
They do support optimize and commit, which would need to be turned
off. Is this ok to the user? I actually think the XML based
RequestHandlers (or even binary ones) are not as powerful as basic
object serialization. For example, at a previous company I wrote all
this code to do XML based span queries. That was pretty useless given
I should have just serialized the span queries and sent them into
SOLR. But then what was SOLR doing in that case? I would have needed
to write a request handler to handle serialized queries, but over
HTTP? HTTP doesn't scale in grid computing. So these are some of the
things I have thought about that are unclear right now.

Also Payloads: does one need to write a custom RequestHandler or
SearchComponent to handle custom Payloads? Using serialization I
could just write the code and it would be dynamically loaded by the
server, executed, and return a result as if the server were local.
All in 1/10 the time it would take to do some custom RequestHandler.
If the deployment had 100 servers, each RequestHandler I am testing
out would require a reboot of each server each time? That is
extremely inefficient.

Search server systems always grow larger, and my concern is that SOLR
is adding features on a level that is not scalable in grid computing,
meaning every little new feature delays releases, needs testing, and
is probably something 50% of the users don't need and will never use.
It would be better IMO to have a clean separation between the core
search server and everything else. This is the architecture I decided
to go with in Ocean: if I want new functionality, I write a class
that executes remotely on all the servers and returns any object I
want. The class directly accesses the individual IndexReader of each
index. I don't have to reboot anything, deploy a new WAR, do a bunch
of testing, etc.

The XML interface should be at the server that is performing the
distributed search, rather than at each server node, because this is
where the search results meet the real application. I guess I have
found the current model for SOLR to be somewhat flawed. It's not
anyone's fault, because SOLR is also a major step forward for Lucene.
However, a lot of the delay in new releases is because everyone is
adding anything and everything they want into it, which should not
really be the case if we want to move forward with new core features
such as realtime. I think the facet code is another example: it's
currently tied to receiving an HTTP call via SolrParams, which are
strings. That makes the code non-reusable in other projects. It could
be rewritten and used in another project, but then bug fixes would
need to be manually ported back, which makes things difficult. I am
not very familiar with other open source projects and am curious how
the Linux project handles these things. I guess it just seems at this
point there is not enough clean separation between the various parts
of SOLR, making its development somewhat less efficient for
production systems than it could be, to the detriment of the users.

On Fri, Sep 5, 2008 at 9:40 AM, Noble Paul നോബിള്‍ नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> Postponing Ocean Integration towards 2.0 is not a good idea. First of
> all we do not know when 2.0 is going to happen. delaying  such a good
> feature till 2.0 is wasting time.
>
> My assumption was that Actually realtime search may have nothing to do
> with the core itself . It may be fine with a Pluggable
> SolrIndexSearcherFactory/SolrIndexWriterFactory . Ocean can have a
> unified reader-writer which may choose to implement both in one class.
>
> A total rewrite has its own problems. Achieving consensus on how
> things should change is time consuming. So it will keep getting
> delayed.  If with a few changes we can start the integration, that is
> the best way forward . Eventually , we can slowly ,  evolve to a
> better design. But, the design need not be as important as the feature
> itself.
>
>
>
> On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
>> On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
>> <[EMAIL PROTECTED]> wrote:
>>> Ok, SOLR 2 can be a from the ground up rewrite?
>>
>> Sort-of... I think that's up for discussion at this point, but enough
>> should change that keeping Java APIs back compatible is not a priority
>> (just my opinion of course).  Supporting the current main search and
>> update interfaces and migrating most of the handlers shouldn't be that
>> difficult.  We should be able to provide relatively painless back
>> compatibility for the 95% of Solr users that don't do any custom
>> Java and the others hopefully won't mind migrating their stuff to
>> get the cool new features :-)

Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Mark Miller
IMO, you are underestimating the difficulty of integrating Ocean with 
Solr's current API's.


Also, Jason has already mentioned that Ocean is much more than just 
realtime search. Adding realtime search to something like solr 1.5 is a 
different goal than possibly integrating the Ocean work that has been 
done / is planned, which seems like a very large scope project and if 
done would certainly seem to merit a 2.0 change in its own right.


Still seems large and nebulous to me at the moment...just like solr 2. 
They go well together in my mind 


Noble Paul നോബിള്‍ नोब्ळ् wrote:

Postponing Ocean Integration towards 2.0 is not a good idea. First of
all we do not know when 2.0 is going to happen. delaying  such a good
feature till 2.0 is wasting time.

My assumption was that Actually realtime search may have nothing to do
with the core itself . It may be fine with a Pluggable
SolrIndexSearcherFactory/SolrIndexWriterFactory . Ocean can have a
unified reader-writer which may choose to implement both in one class.

A total rewrite has its own problems. Achieving consensus on how
things should change is time consuming. So it will keep getting
delayed.  If with a few changes we can start the integration, that is
the best way forward . Eventually , we can slowly ,  evolve to a
better design. But, the design need not be as important as the feature
itself.



On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
  

On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
<[EMAIL PROTECTED]> wrote:


Ok, SOLR 2 can be a from the ground up rewrite?
  

Sort-of... I think that's up for discussion at this point, but enough
should change that keeping Java APIs back compatible is not a priority
(just my opinion of course).  Supporting the current main search and
update interfaces and migrating most of the handlers shouldn't be that
difficult.  We should be able to provide relatively painless back
compatibility for the 95% of Solr users that don't do any custom
Java and the others hopefully won't mind migrating their stuff to
get the cool new features :-)

As far as SolrCore goes... I agree it's probably best to not do
pluggability at that level.
The way that Lucene has evolved, and may evolve (and how we want Solr
to evolve), it seems like we want more of a combo
IndexReader/IndexWriter interface.  It also needs (optional)
optimistic concurrency... that was also assumed in the discussions
about bailey.

-Yonik
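The "optional optimistic concurrency" Yonik mentions can be sketched as a version check at update time. This is a hypothetical illustration of the idea, not an actual Solr or Lucene API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of optimistic concurrency for a reader-writer store:
// each document carries a version; an update supplies the version it read,
// and fails if another writer committed in between.
public class VersionedStore {
    private final Map<String, String> docs = new HashMap<>();
    private final Map<String, Long> versions = new HashMap<>();

    public long version(String id) {
        return versions.getOrDefault(id, 0L);
    }

    // returns true iff the caller's expected version still matches
    public synchronized boolean update(String id, String value, long expectedVersion) {
        if (version(id) != expectedVersion) {
            return false; // conflicting write happened; caller must re-read and retry
        }
        docs.put(id, value);
        versions.put(id, expectedVersion + 1);
        return true;
    }
}
```

A caller that loses the race gets `false` back and retries against the new version, rather than silently clobbering the other writer's update.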






  




Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Yonik Seeley
On Fri, Sep 5, 2008 at 9:40 AM, Noble Paul നോബിള്‍ नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> If with a few changes we can start the integration, that is
> the best way forward.

That's a good point too... I think collecting all the requirements for
Solr2 will help from a design perspective, but if people figure out
how to put features in 1.x sooner, that's good too.

-Yonik


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Noble Paul നോബിള്‍ नोब्ळ्
Postponing Ocean integration towards 2.0 is not a good idea. First of
all, we do not know when 2.0 is going to happen. Delaying such a good
feature till 2.0 is wasting time.

My assumption was that realtime search may actually have nothing to do
with the core itself. It may be fine with a pluggable
SolrIndexSearcherFactory/SolrIndexWriterFactory. Ocean can have a
unified reader-writer which may choose to implement both in one class.

A total rewrite has its own problems. Achieving consensus on how
things should change is time consuming, so it will keep getting
delayed. If with a few changes we can start the integration, that is
the best way forward. Eventually, we can slowly evolve to a
better design. But the design need not be as important as the feature
itself.



On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
> <[EMAIL PROTECTED]> wrote:
>> Ok, SOLR 2 can be a from-the-ground-up rewrite?
>
> Sort-of... I think that's up for discussion at this point, but enough
> should change that keeping Java APIs back compatible is not a priority
> (just my opinion of course).  Supporting the current main search and
> update interfaces and migrating most of the handlers shouldn't be that
> difficult.  We should be able to provide relatively painless back
> compatibility for the 95% of Solr users that don't do any custom
> Java and the others hopefully won't mind migrating their stuff to
> get the cool new features :-)
>
> As far as SolrCore goes... I agree it's probably best to not do
> pluggability at that level.
> The way that Lucene has evolved, and may evolve (and how we want Solr
> to evolve), it seems like we want more of a combo
> IndexReader/IndexWriter interface.  It also needs (optional)
> optimistic concurrency... that was also assumed in the discussions
> about bailey.
>
> -Yonik
>



-- 
--Noble Paul
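The pluggable-factory idea floated above can be sketched in a few lines. Nothing below exists in Solr; the interface names, signatures, and the UnifiedReaderWriter class are all hypothetical, purely to illustrate "Ocean can have a unified reader-writer which may choose to implement both in one class":

```java
public class FactorySketch {

    // Hypothetical factory for the search side.
    interface SolrIndexSearcherFactory {
        Object newSearcher(String indexDir);
    }

    // Hypothetical factory for the update side.
    interface SolrIndexWriterFactory {
        Object newWriter(String indexDir);
    }

    // An Ocean-style engine could satisfy both roles with one object,
    // so searches and updates share the same underlying reader-writer.
    static class UnifiedReaderWriter
            implements SolrIndexSearcherFactory, SolrIndexWriterFactory {
        public Object newSearcher(String indexDir) { return this; }
        public Object newWriter(String indexDir)   { return this; }
    }

    public static void main(String[] args) {
        UnifiedReaderWriter rw = new UnifiedReaderWriter();
        // Both factory views hand back the same underlying instance.
        System.out.println(rw.newSearcher("data/index") == rw.newWriter("data/index"));
    }
}
```

With this shape, the rest of Solr would depend only on the two small interfaces, and an engine like Ocean could be swapped in without a pluggable SolrCore.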


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Yonik Seeley
On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
<[EMAIL PROTECTED]> wrote:
> Ok, SOLR 2 can be a from-the-ground-up rewrite?

Sort-of... I think that's up for discussion at this point, but enough
should change that keeping Java APIs back compatible is not a priority
(just my opinion of course).  Supporting the current main search and
update interfaces and migrating most of the handlers shouldn't be that
difficult.  We should be able to provide relatively painless back
compatibility for the 95% of Solr users that don't do any custom
Java and the others hopefully won't mind migrating their stuff to
get the cool new features :-)

As far as SolrCore goes... I agree it's probably best not to do
pluggability at that level.
The way that Lucene has evolved, and may evolve (and how we want Solr
to evolve), it seems like we want more of a combo
IndexReader/IndexWriter interface.  It also needs (optional)
optimistic concurrency... that was also assumed in the discussions
about bailey.

-Yonik


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Jason Rutherglen
Ok, SOLR 2 can be a from-the-ground-up rewrite?  That would make
things much easier.  Otherwise, Noble, you are correct: the integration
from my perspective is messy, and there are a lot of things
currently in SOLR that would be unnecessary.

On Fri, Sep 5, 2008 at 9:00 AM, Mark Miller <[EMAIL PROTECTED]> wrote:
> Ocean integration should really be targeted at solr 2 I think, so API
> compatibility shouldn't be a large barrier.
>
> Noble Paul (JIRA) wrote:
>>
>>[
>> https://issues.apache.org/jira/browse/SOLR-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628624#action_12628624
>> ]
>> Noble Paul commented on SOLR-567:
>> -
>>
>> As a general approach, a pluggable SolrCore does not look like a very
>> good idea. Too many components depend on too many methods on SolrCore, so
>> you will have to support all/most of them to be compatible.
>>
>> Correct me if I am wrong. You may only wish to plug in an IndexSearcher and
>> an IndexWriter. If we make the SolrIndexSearcher and SolrIndexWriter
>> pluggable, that can be easier.
>>
>>
>>>
>>> SolrCore Pluggable
>>> --
>>>
>>>Key: SOLR-567
>>>URL: https://issues.apache.org/jira/browse/SOLR-567
>>>Project: Solr
>>> Issue Type: Improvement
>>>   Affects Versions: 1.3
>>>   Reporter: Jason Rutherglen
>>>Attachments: solr-567.patch, solr-567.patch
>>>
>>>
>>> SolrCore needs to be an abstract class with the existing functionality in
>>> a subclass.  SolrIndexSearcher the same.  It seems that most of the Searcher
>>> methods in SolrIndexSearcher are not used.  The new abstract class need only
>>> have the methods used by the other Solr classes.  This will allow other
>>> indexing and search implementations to reuse the other parts of Solr.  Any
>>> other classes that have functionality specific to the Solr implementation of
>>> indexing and replication such as SolrConfig can be made abstract.
>>>
>>
>>
>
>


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Mark Miller
Ocean integration should really be targeted at solr 2 I think, so API 
compatibility shouldn't be a large barrier.


Noble Paul (JIRA) wrote:
> [ https://issues.apache.org/jira/browse/SOLR-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628624#action_12628624 ]
>
> Noble Paul commented on SOLR-567:
> -
>
> As a general approach, a pluggable SolrCore does not look like a very good
> idea. Too many components depend on too many methods on SolrCore, so you will
> have to support all/most of them to be compatible.
>
> Correct me if I am wrong. You may only wish to plug in an IndexSearcher and an
> IndexWriter. If we make the SolrIndexSearcher and SolrIndexWriter pluggable,
> that can be easier.
>
>> SolrCore Pluggable
>> --
>>
>> Key: SOLR-567
>> URL: https://issues.apache.org/jira/browse/SOLR-567
>> Project: Solr
>>  Issue Type: Improvement
>>Affects Versions: 1.3
>>Reporter: Jason Rutherglen
>> Attachments: solr-567.patch, solr-567.patch
>>
>>
>> SolrCore needs to be an abstract class with the existing functionality in a 
>> subclass.  SolrIndexSearcher the same.  It seems that most of the Searcher 
>> methods in SolrIndexSearcher are not used.  The new abstract class need only 
>> have the methods used by the other Solr classes.  This will allow other 
>> indexing and search implementations to reuse the other parts of Solr.  Any 
>> other classes that have functionality specific to the Solr implementation of 
>> indexing and replication such as SolrConfig can be made abstract.




[jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628624#action_12628624
 ] 

Noble Paul commented on SOLR-567:
-

As a general approach, a pluggable SolrCore does not look like a very good 
idea. Too many components depend on too many methods on SolrCore, so you will 
have to support all/most of them to be compatible.

Correct me if I am wrong. You may only wish to plug in an IndexSearcher and an 
IndexWriter. If we make the SolrIndexSearcher and SolrIndexWriter pluggable, 
that can be easier.

> SolrCore Pluggable
> --
>
> Key: SOLR-567
> URL: https://issues.apache.org/jira/browse/SOLR-567
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
>Reporter: Jason Rutherglen
> Attachments: solr-567.patch, solr-567.patch
>
>
> SolrCore needs to be an abstract class with the existing functionality in a 
> subclass.  SolrIndexSearcher the same.  It seems that most of the Searcher 
> methods in SolrIndexSearcher are not used.  The new abstract class need only 
> have the methods used by the other Solr classes.  This will allow other 
> indexing and search implementations to reuse the other parts of Solr.  Any 
> other classes that have functionality specific to the Solr implementation of 
> indexing and replication such as SolrConfig can be made abstract.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
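The refactoring proposed in SOLR-567 above can be sketched roughly as follows. This is a hedged illustration only: except for the idea of an abstract SolrCore/SolrIndexSearcher, every name below (AbstractCore, DefaultCore, OceanCore, the method set) is invented, and the real method list would be whatever the other Solr classes actually call:

```java
public class CoreSketch {

    // The abstract base exposes only what the rest of Solr needs.
    static abstract class AbstractCore {
        abstract String getName();
        abstract Object newSearcher();   // alternate engines plug in here
        abstract void close();
    }

    // Today's behavior would move into one concrete subclass...
    static class DefaultCore extends AbstractCore {
        private final String name;
        DefaultCore(String name) { this.name = name; }
        String getName() { return name; }
        Object newSearcher() { return "lucene-searcher"; }
        void close() { }
    }

    // ...and another engine (e.g. an Ocean-style one) would be a sibling
    // subclass, reusing the rest of Solr unchanged.
    static class OceanCore extends AbstractCore {
        String getName() { return "ocean"; }
        Object newSearcher() { return "ocean-searcher"; }
        void close() { }
    }

    public static void main(String[] args) {
        AbstractCore[] cores = { new DefaultCore("core0"), new OceanCore() };
        for (AbstractCore core : cores) {
            System.out.println(core.getName() + " -> " + core.newSearcher());
        }
    }
}
```

The design question raised in the comment above is exactly the size of AbstractCore: if it must mirror all of today's SolrCore methods, pluggability buys little, which is why the thread leans toward making only the searcher/writer pluggable instead.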



Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Jason Rutherglen
I would like to have a higher level discussion about the integration
before mucking about in the SOLR code again.  This way work invested
is work that will not have to be changed too much later on.  Do folks
have ideas about how they would want to do this?

On Fri, Sep 5, 2008 at 7:03 AM, Noble Paul നോബിള്‍ नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> MultiCore.java is renamed to CoreContainer
>
> and SolrCore is changed a lot
>
> On Fri, Sep 5, 2008 at 4:27 PM, Jason Rutherglen (JIRA) <[EMAIL PROTECTED]> 
> wrote:
>>
>>[ 
>> https://issues.apache.org/jira/browse/SOLR-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628613#action_12628613
>>  ]
>>
>> Jason Rutherglen commented on SOLR-567:
>> ---
>>
>> Without looking at the code, what has changed?
>>
>>> SolrCore Pluggable
>>> --
>>>
>>> Key: SOLR-567
>>> URL: https://issues.apache.org/jira/browse/SOLR-567
>>> Project: Solr
>>>  Issue Type: Improvement
>>>Affects Versions: 1.3
>>>Reporter: Jason Rutherglen
>>> Attachments: solr-567.patch, solr-567.patch
>>>
>>>
>>> SolrCore needs to be an abstract class with the existing functionality in a 
>>> subclass.  SolrIndexSearcher the same.  It seems that most of the Searcher 
>>> methods in SolrIndexSearcher are not used.  The new abstract class need 
>>> only have the methods used by the other Solr classes.  This will allow 
>>> other indexing and search implementations to reuse the other parts of Solr. 
>>>  Any other classes that have functionality specific to the Solr 
>>> implementation of indexing and replication such as SolrConfig can be made 
>>> abstract.
>>
>> --
>> This message is automatically generated by JIRA.
>> -
>> You can reply to this email to add a comment to the issue online.
>>
>>
>
>
>
> --
> --Noble Paul
>


Re: [jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Noble Paul നോബിള്‍ नोब्ळ्
MultiCore.java is renamed to CoreContainer

and SolrCore is changed a lot

On Fri, Sep 5, 2008 at 4:27 PM, Jason Rutherglen (JIRA) <[EMAIL PROTECTED]> 
wrote:
>
>[ 
> https://issues.apache.org/jira/browse/SOLR-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628613#action_12628613
>  ]
>
> Jason Rutherglen commented on SOLR-567:
> ---
>
> Without looking at the code, what has changed?
>
>> SolrCore Pluggable
>> --
>>
>> Key: SOLR-567
>> URL: https://issues.apache.org/jira/browse/SOLR-567
>> Project: Solr
>>  Issue Type: Improvement
>>Affects Versions: 1.3
>>Reporter: Jason Rutherglen
>> Attachments: solr-567.patch, solr-567.patch
>>
>>
>> SolrCore needs to be an abstract class with the existing functionality in a 
>> subclass.  SolrIndexSearcher the same.  It seems that most of the Searcher 
>> methods in SolrIndexSearcher are not used.  The new abstract class need only 
>> have the methods used by the other Solr classes.  This will allow other 
>> indexing and search implementations to reuse the other parts of Solr.  Any 
>> other classes that have functionality specific to the Solr implementation of 
>> indexing and replication such as SolrConfig can be made abstract.
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
>



-- 
--Noble Paul


[jira] Updated: (SOLR-753) CoreDescriptor holds wrong instance dir for single core

2008-09-05 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-753:


Attachment: SOLR-753.patch

the fix

> CoreDescriptor holds wrong instance dir for single core
> ---
>
> Key: SOLR-753
> URL: https://issues.apache.org/jira/browse/SOLR-753
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: Noble Paul
> Attachments: SOLR-753.patch
>
>
> For a single core, the CoreDescriptor is instantiated with the following code:
> {code:java}
> CoreDescriptor dcore = new CoreDescriptor(cores, "", 
> cfg.getResourceLoader().getInstanceDir());
> {code}
> and the reload in CoreContainer#create(CoreDescriptor dcore) has a snippet as 
> follows:
> {code:java}
> File idir = new File(loader.getInstanceDir(), dcore.getInstanceDir());
> {code}
> which will make the idir value something like "solr/solr/", which is wrong.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-753) CoreDescriptor holds wrong instance dir for single core

2008-09-05 Thread Noble Paul (JIRA)
CoreDescriptor holds wrong instance dir for single core
---

 Key: SOLR-753
 URL: https://issues.apache.org/jira/browse/SOLR-753
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Noble Paul


For a single core, the CoreDescriptor is instantiated with the following code:
{code:java}
CoreDescriptor dcore = new CoreDescriptor(cores, "", 
cfg.getResourceLoader().getInstanceDir());
{code}
and the reload in CoreContainer#create(CoreDescriptor dcore) has a snippet as 
follows:

{code:java}
File idir = new File(loader.getInstanceDir(), dcore.getInstanceDir());
{code}

which will make the idir value something like "solr/solr/", which is wrong.
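The path doubling can be reproduced in isolation. The snippet below uses hypothetical values ("solr" for both directories) standing in for what the resource loader and the descriptor would return in the single-core setup described above:

```java
import java.io.File;

public class InstanceDirBug {
    public static void main(String[] args) {
        // Stand-ins for the values described above: the descriptor was
        // constructed with the loader's own instance dir instead of a
        // path relative to it.
        String loaderInstanceDir = "solr";      // loader.getInstanceDir()
        String descriptorInstanceDir = "solr";  // dcore.getInstanceDir()

        // CoreContainer#create joins the two, nesting the path.
        File idir = new File(loaderInstanceDir, descriptorInstanceDir);
        System.out.println(idir.getPath());     // "solr/solr" on Unix
    }
}
```

Because File(parent, child) always resolves child under parent, passing the loader's instance dir as the descriptor's "instance dir" necessarily doubles it; the fix is for the single-core descriptor to carry a path relative to the loader (here, an empty or core-local one).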





-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-567) SolrCore Pluggable

2008-09-05 Thread Jason Rutherglen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628613#action_12628613
 ] 

Jason Rutherglen commented on SOLR-567:
---

Without looking at the code, what has changed?

> SolrCore Pluggable
> --
>
> Key: SOLR-567
> URL: https://issues.apache.org/jira/browse/SOLR-567
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
>Reporter: Jason Rutherglen
> Attachments: solr-567.patch, solr-567.patch
>
>
> SolrCore needs to be an abstract class with the existing functionality in a 
> subclass.  SolrIndexSearcher the same.  It seems that most of the Searcher 
> methods in SolrIndexSearcher are not used.  The new abstract class need only 
> have the methods used by the other Solr classes.  This will allow other 
> indexing and search implementations to reuse the other parts of Solr.  Any 
> other classes that have functionality specific to the Solr implementation of 
> indexing and replication such as SolrConfig can be made abstract.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hudson build is back to normal: Solr-trunk #558

2008-09-05 Thread Apache Hudson Server
See http://hudson.zones.apache.org/hudson/job/Solr-trunk/558/changes