[jira] Resolved: (SOLR-1229) deletedPkQuery feature does not work when pk and uniqueKey field do not have the same value

2009-08-04 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-1229.


Resolution: Fixed

> deletedPkQuery feature does not work when pk and uniqueKey field do not have 
> the same value
> ---
>
> Key: SOLR-1229
> URL: https://issues.apache.org/jira/browse/SOLR-1229
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: 1.4
>
> Attachments: SOLR-1229.patch, SOLR-1229.patch, SOLR-1229.patch, 
> SOLR-1229.patch, SOLR-1229.patch, SOLR-1229.patch, tests.patch
>
>
> Problem doing a delta-import such that records marked as "deleted" in the 
> database are removed from Solr using deletedPkQuery.
> Here's a config I'm using against a mocked test database:
> {code:xml}
> <dataConfig>
>   <document>
>     <entity pk="board_id"
>             transformer="TemplateTransformer"
>             deletedPkQuery="select board_id from boards where deleted = 'Y'"
>             query="select * from boards where deleted = 'N'"
>             deltaImportQuery="select * from boards where deleted = 'N'"
>             deltaQuery="select * from boards where deleted = 'N'"
>             preImportDeleteQuery="datasource:board">
>       ...
>     </entity>
>   </document>
> </dataConfig>
> {code}
> Note that the uniqueKey in Solr is the "id" field.  And its value is a 
> template board-.
> I noticed the javadoc comment in DocBuilder#collectDelta says "Note: In 
> our definition, unique key of Solr document is the primary key of the top 
> level entity".  This of course isn't really an appropriate assumption.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-839) XML Query Parser support

2009-08-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738940#action_12738940
 ] 

Shalin Shekhar Mangar commented on SOLR-839:


As Yonik mentioned, we should use a SolrQueryParser with the XML QParser. 
Currently, queries on numeric fields (both legacy and trie) and date fields do 
not work. The current patch just enables one to use the Lucene XML QParser with 
Solr. It is not integrated with Solr as well as other qparsers are.
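
For reference, the contrib parser consumes queries expressed as XML along these 
lines (a sketch based on the Lucene contrib xml-query-parser syntax; the field 
names and terms are made up):

{code:xml}
<BooleanQuery fieldName="contents">
  <Clause occurs="must">
    <TermQuery>apache</TermQuery>
  </Clause>
  <Clause occurs="should">
    <TermQuery fieldName="title">solr</TermQuery>
  </Clause>
</BooleanQuery>
{code}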

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 1.3
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: 1.4
>
> Attachments: lucene-xml-query-parser-2.4-dev.jar, SOLR-839.patch
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-839) XML Query Parser support

2009-08-04 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-839:
--

Fix Version/s: (was: 1.4)
   1.5
 Assignee: (was: Erik Hatcher)

Marking for 1.5 and unassigning.  I'll come back to this eventually, and 
integrate it fully.  Or someone else can take the lead.

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 1.3
>Reporter: Erik Hatcher
> Fix For: 1.5
>
> Attachments: lucene-xml-query-parser-2.4-dev.jar, SOLR-839.patch
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1275) Add expungeDeletes to DirectUpdateHandler2

2009-08-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738942#action_12738942
 ] 

Noble Paul commented on SOLR-1275:
--

json, is there a way to test if expungeDeletes is indeed called?
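
One way to check this at the Lucene level (a sketch against the Lucene 2.9-era API, 
not a Solr test; names are made up): after deleting documents, expungeDeletes should 
leave no deleted docs behind, which is observable by comparing maxDoc() and numDocs().

{code}
// sketch: verify expungeDeletes physically removed the deleted documents
// (inside e.g. a JUnit test method declared with "throws Exception")
RAMDirectory dir = new RAMDirectory();
IndexWriter w = new IndexWriter(dir, new WhitespaceAnalyzer(),
                                IndexWriter.MaxFieldLength.UNLIMITED);
for (int i = 0; i < 10; i++) {
  Document doc = new Document();
  doc.add(new Field("id", "doc" + i, Field.Store.YES, Field.Index.NOT_ANALYZED));
  w.addDocument(doc);
}
w.commit();
w.deleteDocuments(new Term("id", "doc3"));
w.commit();
w.expungeDeletes();   // merges away the segments that carry deletions
w.commit();

IndexReader r = IndexReader.open(dir, true);
assert r.maxDoc() == r.numDocs();  // holds only if the deletes were expunged
r.close();
w.close();
{code}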



> Add expungeDeletes to DirectUpdateHandler2
> --
>
> Key: SOLR-1275
> URL: https://issues.apache.org/jira/browse/SOLR-1275
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Jason Rutherglen
>Assignee: Noble Paul
>Priority: Trivial
> Fix For: 1.4
>
> Attachments: SOLR-1275.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> expungeDeletes is a useful IndexWriter method, somewhat like optimize, that 
> could be exposed through DirectUpdateHandler2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (SOLR-1275) Add expungeDeletes to DirectUpdateHandler2

2009-08-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738942#action_12738942
 ] 

Noble Paul edited comment on SOLR-1275 at 8/4/09 5:29 AM:
--

jason, is there a way to test if expungeDeletes is indeed called?



  was (Author: noble.paul):
json, is there a way to test if expungeDeletes is indeed called?


  
> Add expungeDeletes to DirectUpdateHandler2
> --
>
> Key: SOLR-1275
> URL: https://issues.apache.org/jira/browse/SOLR-1275
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Jason Rutherglen
>Assignee: Noble Paul
>Priority: Trivial
> Fix For: 1.4
>
> Attachments: SOLR-1275.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> expungeDeletes is a useful IndexWriter method, somewhat like optimize, that 
> could be exposed through DirectUpdateHandler2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1051) Support the merge of multiple indexes

2009-08-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738948#action_12738948
 ] 

Shalin Shekhar Mangar commented on SOLR-1051:
-

Merging cores is the part which is left. I think it needs more 
thought/discussion before it can be implemented. I'll close this one and open 
another issue for 1.5 about merging cores.

> Support the merge of multiple indexes
> -
>
> Key: SOLR-1051
> URL: https://issues.apache.org/jira/browse/SOLR-1051
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: Ning Li
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1051.patch, SOLR-1051.patch, SOLR-1051.patch, 
> SOLR-1051.patch
>
>
> This is to support the merge of multiple indexes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1331) Support merging multiple cores

2009-08-04 Thread Shalin Shekhar Mangar (JIRA)
Support merging multiple cores
--

 Key: SOLR-1331
 URL: https://issues.apache.org/jira/browse/SOLR-1331
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Shalin Shekhar Mangar
 Fix For: 1.5


There should be a provision to merge one core with another. It should be 
possible to create a core, add documents to it and then just merge it into the 
main core which is serving requests. This way, the user will not need to know 
the filesystem path, as is needed for SOLR-1051.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1051) Support the merge of multiple indexes

2009-08-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-1051.
-

Resolution: Fixed

I've opened SOLR-1331 for the missing piece.

> Support the merge of multiple indexes
> -
>
> Key: SOLR-1051
> URL: https://issues.apache.org/jira/browse/SOLR-1051
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: Ning Li
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1051.patch, SOLR-1051.patch, SOLR-1051.patch, 
> SOLR-1051.patch
>
>
> This is to support the merge of multiple indexes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1281) SignatureUpdateProcessorFactory does not use SolrResourceLoader

2009-08-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738970#action_12738970
 ] 

Shalin Shekhar Mangar commented on SOLR-1281:
-

We can remove the static loadClass method now. No need for back-compat here.

> SignatureUpdateProcessorFactory does not use SolrResourceLoader
> ---
>
> Key: SOLR-1281
> URL: https://issues.apache.org/jira/browse/SOLR-1281
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1281.patch
>
>
> can't load custom signature class from solr.home/lib, just webapp lib.
> Reported by Erik: 
> http://www.lucidimagination.com/search/document/a4ab6aa045c22d49/signatureupdateprocessorfactory_questions

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1319) Upgrade custom Solr Highlighter classes to new Lucene Highlighter API

2009-08-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738978#action_12738978
 ] 

Mark Miller commented on SOLR-1319:
---

The regex fragmenter has to be updated (I've got a patch for that), and some 
work has to be done now that SpanScorer is gone and the semantics/syntax for it 
are a bit different.

It probably makes sense to do SOLR-1221 with this one.

> Upgrade custom Solr Highlighter classes to new Lucene Highlighter API
> -
>
> Key: SOLR-1319
> URL: https://issues.apache.org/jira/browse/SOLR-1319
> Project: Solr
>  Issue Type: Task
>  Components: highlighter
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 1.4
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1322) range queries won't work for trie fields with precisionStep=0

2009-08-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738986#action_12738986
 ] 

Uwe Schindler commented on SOLR-1322:
-

bq. If trie fields are indexed in parts, NumericRangeQuery will produce invalid 
results for multiValued fields... that's a limitation of the trie encoding (not 
easily fixable at all). 

I do not really understand the problem with multi-valued fields. I had trie fields 
in my index in the past that had multiple trie values, and numeric range queries 
worked with them. What is the problem? You should be able to add more than one 
value to the index using separate Field instances.

A NumericRangeQuery on a multiValued field should return all documents for which 
at least *one* of the indexed values falls into the range. Correct me if I am 
wrong!

> range queries won't work for trie fields with precisionStep=0
> -
>
> Key: SOLR-1322
> URL: https://issues.apache.org/jira/browse/SOLR-1322
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 1.4
>
>
> range queries won't work for trie fields with precisionStep=0... a normal 
> range query should be used in this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1322) range queries won't work for trie fields with precisionStep=0

2009-08-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738989#action_12738989
 ] 

Yonik Seeley commented on SOLR-1322:


bq. A NumericRangeQuery on a MultiValued field should show results for all 
documents as soon as one of the indexed values fall into the range.

Ah, you're right, because NumericRangeQuery uses a pure disjunctive model (it 
only logically ORs the different terms).
Any type of query that did an AND in between terms would get messed up, but we 
don't have any of those!
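
A small illustration against the Lucene 2.9-era API (field name and values made up): 
a document with several values for the same numeric field matches as soon as any one 
value falls inside the range.

{code}
// (inside e.g. a test method declared with "throws Exception")
RAMDirectory dir = new RAMDirectory();
IndexWriter w = new IndexWriter(dir, new WhitespaceAnalyzer(),
                                IndexWriter.MaxFieldLength.UNLIMITED);
Document doc = new Document();
doc.add(new NumericField("price").setIntValue(3));
doc.add(new NumericField("price").setIntValue(70));
doc.add(new NumericField("price").setIntValue(900));
w.addDocument(doc);
w.close();

IndexSearcher s = new IndexSearcher(dir, true);
// 70 falls inside [50, 100], so the document matches even though 3 and 900 do not
Query q = NumericRangeQuery.newIntRange("price", 50, 100, true, true);
System.out.println(s.search(q, 10).totalHits);  // 1
s.close();
{code}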


> range queries won't work for trie fields with precisionStep=0
> -
>
> Key: SOLR-1322
> URL: https://issues.apache.org/jira/browse/SOLR-1322
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 1.4
>
>
> range queries won't work for trie fields with precisionStep=0... a normal 
> range query should be used in this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (SOLR-1322) range queries won't work for trie fields with precisionStep=0

2009-08-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reopened SOLR-1322:



reopening - tries can work for multiValued fields

> range queries won't work for trie fields with precisionStep=0
> -
>
> Key: SOLR-1322
> URL: https://issues.apache.org/jira/browse/SOLR-1322
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 1.4
>
>
> range queries won't work for trie fields with precisionStep=0... a normal 
> range query should be used in this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Upgrading Lucene

2009-08-04 Thread Grant Ingersoll


On Aug 3, 2009, at 8:21 PM, Mark Miller wrote:



4. You cannot instantiate MergePolicy with a no arg constructor  
anymore - so that fails now. I don't have a fix for this at the  
moment.



That sounds like a back compat break ;-)

This piece is starting to get a bit tricky.  We have a couple of  
places where I think we need to allow for user args to somehow be  
passed in on custom settings (the MergeScheduler comes to mind).  I  
think Jason R. opened a bug for one recently as well, but I'm not  
finding it at the moment.
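
For reference, the change in question looks roughly like this against the Lucene 
2.9 trunk API (just a sketch):

  // merge policies now take the writer in their constructor, so Solr can no
  // longer instantiate a configured class via a no-arg newInstance()
  static IndexWriter openWriter(Directory dir, Analyzer analyzer) throws IOException {
    IndexWriter writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);
    writer.setMergePolicy(new LogByteSizeMergePolicy(writer)); // was: new LogByteSizeMergePolicy()
    return writer;
  }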


[jira] Commented: (SOLR-1322) range queries won't work for trie fields with precisionStep=0

2009-08-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738993#action_12738993
 ] 

Uwe Schindler commented on SOLR-1322:
-

Correct, but it is good to think about it multiple times. I always fall into 
the same trap when thinking about it, but as soon as I have a picture of the 
indexed terms it gets clear again. I think I should write a test for this 
special case inside TestNumericRangeQueryXX (index multiple values and do some 
ranges with precStep=inf and a real precStep on the same field and compare 
results).

> range queries won't work for trie fields with precisionStep=0
> -
>
> Key: SOLR-1322
> URL: https://issues.apache.org/jira/browse/SOLR-1322
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 1.4
>
>
> range queries won't work for trie fields with precisionStep=0... a normal 
> range query should be used in this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Upgrading Lucene

2009-08-04 Thread Mark Miller

Grant Ingersoll wrote:


On Aug 3, 2009, at 8:21 PM, Mark Miller wrote:



4. You cannot instantiate MergePolicy with a no arg constructor 
anymore - so that fails now. I don't have a fix for this at the moment.



That sounds like a back compat break ;-)

It was - but they knew it would be and decided it was fine. The methods 
on the class were package private, so it appeared reasonable. The class 
was also labeled as expert and subject to sudden change. I guess it was 
fair game to break - I don't think this scenario was thought of, but I 
would think we can work around it. I haven't really thought about it yet 
myself though.



--
- Mark

http://www.lucidimagination.com





[jira] Updated: (SOLR-1299) In distributed search cannot search on any schema field in ascending order (descending is OK)

2009-08-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-1299:
---

Attachment: SOLR-1299.patch

Patch attached.
This uses the new Lucene FieldComparator classes, and thus we can delete all of 
the previously copy-n-pasted code from Lucene.  It also uncovered (and fixed) a 
bug in our SortMissingLast comparator:

{code}
 public Comparable value(int slot) {
   Comparable v = values[slot];
-  return v==null ? nullVal : null;
+  return v==null ? nullVal : v;
 }
{code}



> In distributed search cannot search on any schema field in ascending order 
> (descending is OK)
> -
>
> Key: SOLR-1299
> URL: https://issues.apache.org/jira/browse/SOLR-1299
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: David Bowen
>Assignee: Yonik Seeley
> Fix For: 1.4
>
> Attachments: SOLR-1299.patch, SOLR-1299.patch
>
>
> Using the example with some of the exampledocs posted, the url: 
> http://localhost:8983/solr/select/?q=*:*&sort=timestamp+desc&fsv=true 
> works correctly, but change "desc" to "asc" and you get:
> HTTP ERROR: 500
> null
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.getComparator(QueryComponent.java:284)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:229)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
>   at 
> com.proquest.magnolia.solr.plugins.SummonSearchHandler.handleRequestBody(SummonSearchHandler.java:19)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1299)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
>   at org.mortbay.jetty.Server.handle(Server.java:285)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
>   at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
>   at 
> org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1299) In distributed search cannot search on any schema field in ascending order (descending is OK)

2009-08-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-1299.


Resolution: Fixed

committed.

> In distributed search cannot search on any schema field in ascending order 
> (descending is OK)
> -
>
> Key: SOLR-1299
> URL: https://issues.apache.org/jira/browse/SOLR-1299
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: David Bowen
>Assignee: Yonik Seeley
> Fix For: 1.4
>
> Attachments: SOLR-1299.patch, SOLR-1299.patch
>
>
> Using the example with some of the exampledocs posted, the url: 
> http://localhost:8983/solr/select/?q=*:*&sort=timestamp+desc&fsv=true 
> works correctly, but change "desc" to "asc" and you get:
> HTTP ERROR: 500
> null
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.getComparator(QueryComponent.java:284)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:229)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
>   at 
> com.proquest.magnolia.solr.plugins.SummonSearchHandler.handleRequestBody(SummonSearchHandler.java:19)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1299)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
>   at org.mortbay.jetty.Server.handle(Server.java:285)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
>   at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
>   at 
> org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: solr 1.4 schedule

2009-08-04 Thread Grant Ingersoll
Yep, I've been working through the open issues and nudging people who  
are assigned them to provide updates or move them to 1.5 and in some  
cases close them out.  It seems like a few of them were open but had  
already been committed.


It would be good if people could revisit their 1.4 issues and update them.

On Aug 3, 2009, at 5:46 PM, Yonik Seeley wrote:


Yikes - just saw Mark's tweet about Lucene getting close to release...
we have some catching up to do :-)

-Yonik
http://www.lucidimagination.com



On Fri, Jul 24, 2009 at 5:14 PM, Yonik Seeley wrote:

So it seems like Solr 1.4 should be targeted for release as soon as
possible after the Lucene 2.9 release - perhaps one week after.
Lucene looks like it may release in late August, so I think there is
enough time to get all of the issues out of the way (of course new
ones having to do with all of the changes in Lucene keep coming  
up...)


side note: I'll be out of touch (in VT) this Sat-Tues.

-Yonik
http://www.lucidimagination.com





[jira] Created: (SOLR-1332) Escape spaces in URLs in URLDataSource

2009-08-04 Thread Chris Eldredge (JIRA)
Escape spaces in URLs in URLDataSource
--

 Key: SOLR-1332
 URL: https://issues.apache.org/jira/browse/SOLR-1332
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4
 Environment: Any
Reporter: Chris Eldredge


At present the URLDataSource does not correctly escape spaces in URLs.  This 
particularly affects usages where ${dataimporter.last_index_time} is present in 
the URL to support delta imports.  That variable contains a space which should 
be escaped.
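
A minimal sketch of the kind of escaping involved (not necessarily what the eventual 
patch does):

{code}
// replace literal spaces before the URL is opened, so
// "...&since=2009-08-04 12:30:00" becomes "...&since=2009-08-04%2012:30:00"
static String escapeSpaces(String url) {
  return url.replace(" ", "%20");
}
{code}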


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1332) Escape spaces in URLs in URLDataSource

2009-08-04 Thread Chris Eldredge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Eldredge updated SOLR-1332:
-

Attachment: SOLR-1332.patch

Patch that escapes spaces.

> Escape spaces in URLs in URLDataSource
> --
>
> Key: SOLR-1332
> URL: https://issues.apache.org/jira/browse/SOLR-1332
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
> Environment: Any
>Reporter: Chris Eldredge
> Attachments: SOLR-1332.patch
>
>
> At present the URLDataSource does not correctly escape spaces in URLs.  This 
> particularly affects usages where ${dataimporter.last_index_time} is present 
> in the URL to support delta imports.  That variable contains a space which 
> should be escaped.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1322) range queries won't work for trie fields with precisionStep=0

2009-08-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739069#action_12739069
 ] 

Uwe Schindler commented on SOLR-1322:
-

I added a test to Lucene Core that verifies that multi-valued terms work 
correctly: revision #800896


> range queries won't work for trie fields with precisionStep=0
> -
>
> Key: SOLR-1322
> URL: https://issues.apache.org/jira/browse/SOLR-1322
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 1.4
>
>
> range queries won't work for trie fields with precisionStep=0... a normal 
> range query should be used in this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-868) Prepare solrjs trunk to be integrated into contrib

2009-08-04 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-868.


Resolution: Fixed

My apologies, this was actually resolved months ago!  (and solrjs is now in 
http://svn.apache.org/repos/asf/lucene/solr/trunk/client/javascript/ )

> Prepare solrjs trunk to be integrated into contrib
> --
>
> Key: SOLR-868
> URL: https://issues.apache.org/jira/browse/SOLR-868
> Project: Solr
>  Issue Type: Task
>Affects Versions: 1.4
>Reporter: Matthias Epheser
>Assignee: Ryan McKinley
> Fix For: 1.4
>
> Attachments: javascript_contrib.zip, reutersimporter.jar, 
> SOLR-868-testdata.patch, solrjs.zip
>
>
> This patch includes a zipfile snapshot of current solrjs trunk. The folder 
> structure is applied to standard solr layout.  It can be extracted to 
> "contrib/javascript".
> It includes a build.xml:
> * ant dist -> creates a single js file and a jar that holds velocity 
> templates.
> * ant docs -> creates js docs; test in browser: doc/index.html
> * ant example-init -> (depends on ant dist in the solr root) copies the 
> current build of solr.war and solr-velocity.jar to example/testsolr/..
> * ant example-start -> starts the testsolr server on port 8983
> * ant example-import -> imports 3000 test data rows (requires a started 
> test server)
> Point your browser to example/testClientside.html, 
> example/testServerSide.html or test/reuters/index.html to see it working.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-705) Distributed search should optionally return docID->shard map

2009-08-04 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley reassigned SOLR-705:
--

Assignee: Ryan McKinley

> Distributed search should optionally return docID->shard map
> 
>
> Key: SOLR-705
> URL: https://issues.apache.org/jira/browse/SOLR-705
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
> Environment: all
>Reporter: Brian Whitman
>Assignee: Ryan McKinley
> Fix For: 1.4
>
> Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
> SOLR-705.patch, SOLR-705.patch, SOLR-705.patch
>
>
> SOLR-303 queries with &shards parameters set need to return the docID->shard 
> mapping in the response. Without it, updating/deleting documents when the # 
> of shards is variable is hard. We currently set this with a special 
> requestHandler that filters /update and inserts the shard as a field in the 
> index but it would be better if the shard location came back in the query 
> response outside of the index.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-705) Distributed search should optionally return docID->shard map

2009-08-04 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739076#action_12739076
 ] 

Ryan McKinley commented on SOLR-705:


I'll take this one on for 1.4...

I'll incorporate hoss' suggestion and then we can see how we like it.

> Distributed search should optionally return docID->shard map
> 
>
> Key: SOLR-705
> URL: https://issues.apache.org/jira/browse/SOLR-705
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
> Environment: all
>Reporter: Brian Whitman
>Assignee: Ryan McKinley
> Fix For: 1.4
>
> Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
> SOLR-705.patch, SOLR-705.patch, SOLR-705.patch
>
>
> SOLR-303 queries with &shards parameters set need to return the docID->shard 
> mapping in the response. Without it, updating/deleting documents when the # 
> of shards is variable is hard. We currently set this with a special 
> requestHandler that filters /update and inserts the shard as a field in the 
> index but it would be better if the shard location came back in the query 
> response outside of the index.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1322) range queries won't work for trie fields with precisionStep=0

2009-08-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-1322.


Resolution: Fixed

committed a patch that removes the precisionStep checks for multiValued fields 
and updates comments in the example schema.

> range queries won't work for trie fields with precisionStep=0
> -
>
> Key: SOLR-1322
> URL: https://issues.apache.org/jira/browse/SOLR-1322
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 1.4
>
>
> range queries won't work for trie fields with precisionStep=0... a normal 
> range query should be used in this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1333) Local Solr bug > 90 degrees latitude near the poles

2009-08-04 Thread Bill Bell (JIRA)
Local Solr bug > 90 degrees latitude near the poles


 Key: SOLR-1333
 URL: https://issues.apache.org/jira/browse/SOLR-1333
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.4
 Environment: All
Reporter: Bill Bell


When entering large distances near 90 degrees latitude, you get an error on search.

Aug 4, 2009 1:54:00 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.IllegalArgumentException: Illegal lattitude value 
93.1558669413734
at 
org.apache.lucene.spatial.geometry.FloatLatLng.<init>(FloatLatLng.java:26)
at 
org.apache.lucene.spatial.geometry.shape.LLRect.createBox(LLRect.java:93)
at 
org.apache.lucene.spatial.tier.DistanceUtils.getBoundary(DistanceUtils.java:50)
at 
org.apache.lucene.spatial.tier.CartesianPolyFilterBuilder.getBoxShape(CartesianPolyFilterBuilder.java:47)
at 
org.apache.lucene.spatial.tier.CartesianPolyFilterBuilder.getBoundingArea(CartesianPolyFilterBuilder.java:109)
at 
org.apache.lucene.spatial.tier.DistanceQueryBuilder.<init>(DistanceQueryBuilder.java:61)
at 
com.pjaol.search.solr.component.LocalSolrQueryComponent.prepare(LocalSolrQueryComponent.java:151)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:174)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1328)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:341)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:244)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
at 
org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:857)
at 
org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:565)
at 
org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1509)
at java.lang.Thread.run(Thread.java:619)


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Param Naming and Abbreviations

2009-08-04 Thread Chris Hostetter

: OK, sounds reasonable, although I suspect your frequency based convention is
: going to be in the eye of the beholder.

absolutely ... but even if we all drank the same koolaid and agreed 
perfectly about how short something should be based on its frequency of 
use, frequency of use changes.  "qt" is a nice short param that is 
bordering on deprecation, yet at the time it was introduced it was 
explicitly used in almost every request.

hindsight is 20/20.

another problem to keep in mind is local params: even if a param will be 
used extremely infrequently, it might be worthwhile to keep it really 
short if it's *always* going to be used as a local param inside of the 
QParser syntax -- because the qparser will help document the purpose/scope 
of the param.  people convinced me that the facet.date.* params were too 
short when i first wrote them because date faceting was kind of a niche 
thing, and the params needed to be better self documenting, but now that 
we have local params it seems silly to specify them as top level params, 
but in local params they seem obscenely verbose.
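
(for concreteness: spelled out as top-level params a date facet request carries
  facet.date=pubdate&facet.date.start=NOW/DAY-30DAYS&facet.date.end=NOW/DAY&facet.date.gap=%2B1DAY
while a local-params style query such as
  q={!dismax qf="title body"}solr
keeps the short keys scoped and documented by the qparser name -- the param names 
in these examples are only illustrative)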

: Has anyone actually documented/tested the "cost" of a URL of 100 chars versus
: one of 200 chars?  I don't know much how NIC's work, but I have a hard time
: believing that makes much difference when it comes to packets, buffers, etc.
: especially in comparison to optimizing the response side of things.

i don't have any hard numbers, but i've known people who looked into it and 
concluded that having shorter URLs does help, but that it wasn't a big 
enough deal to go overboard at the expense of readability.  common 
sense says that all other things being equal you might as well pick a 
short one.

As i recall the biggest advantages were:
 - smaller keys in HTTP caches improved cache lookup speed.
 - reduction in size of request logs
 - easier to read various logs when troubleshooting (less linewrap to 
confuse people)

: time), I find it curious that so much is put into shortening params in a 200
: char URL (if that) to the point of near unreadability, in some cases, and yet

po-TAY-toe po-TAH-toe ... one url with short param names that you aren't 
familiar with may seem unreadable, but when you're skimming thousands of 
URLs where the param keys are all the same and what you really care about 
are the different *values*, it's easier to find the anomalies when the key 
names are short.

: the responses (up until the binary response format was added) are so verbose,
: especially for XML (but even JSON isn't all that succinct) and especially when

hey man, why do you think the xml format uses <str> instead of <string> ?

(At one point i remember an argument for only single letter xml node 
names, and replacing 'name="..."' with 'n="..."' -- this was back in 
the really early days when i wasn't happy that Solar was going to be a 
webserver at all and just wanted an in memory service; but both that and 
a pure binary response format had already been vetoed by Yonik's boss)


-Hoss



[jira] Commented: (SOLR-868) Prepare solrjs trunk to be integrated into contrib

2009-08-04 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739114#action_12739114
 ] 

Grant Ingersoll commented on SOLR-868:
--

bq. My apologies, this was actually resolved months ago! (and solrjs is now in 
http://svn.apache.org/repos/asf/lucene/solr/trunk/client/javascript/ ) 

It was, but it seems like we committed the Reuters data, which I don't think we 
should have.

> Prepare solrjs trunk to be integrated into contrib
> --
>
> Key: SOLR-868
> URL: https://issues.apache.org/jira/browse/SOLR-868
> Project: Solr
>  Issue Type: Task
>Affects Versions: 1.4
>Reporter: Matthias Epheser
>Assignee: Ryan McKinley
> Fix For: 1.4
>
> Attachments: javascript_contrib.zip, reutersimporter.jar, 
> SOLR-868-testdata.patch, solrjs.zip
>
>
> This patch includes a zipfile snapshot of current solrjs trunk. The folder 
> structure is applied to standard solr layout.  It can be extracted to 
> "contrib/javascript".
> It includes a build.xml:
> * ant dist -> creates a single js file and a jar that holds velocity 
> templates.
> * ant docs -> creates js docs; test in browser: doc/index.html
> * ant example-init -> (depends on ant dist in the solr root) copies the 
> current build of solr.war and solr-velocity.jar to example/testsolr/..
> * ant example-start -> starts the testsolr server on port 8983
> * ant example-import -> imports 3000 test data rows (requires a started 
> test server)
> Point your browser to example/testClientside.html, 
> example/testServerSide.html or test/reuters/index.html to see it working.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1270) Legacy Numeric Field types need to be more consistent in their input/output error checking & documentation

2009-08-04 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-1270:
---

Description: 

FloatField, IntField, ByteField, LongField, and DoubleField have inconsistent 
behavior at response writing time when dealing with "garbage" data in the 
index.  the behavior should be standardized, and better documented.

--

This issue originally came from my php client issue tracker: 
http://code.google.com/p/solr-php-client/issues/detail?id=13

  was:
The FloatField field type takes any string value at index time. These values 
aren't necessarily valid JSON numerics, but the JSON writer does not check 
their validity before writing them out as a JSON numeric.

I'm aware of the SortableFloatField, which does do index-time verification and 
conversion of the value, but the way the JSON writer works seemed like either a 
bug that needed to be addressed or perhaps a gotcha that needs to be better 
documented?

This issue originally came from my php client issue tracker: 
http://code.google.com/p/solr-php-client/issues/detail?id=13

   Assignee: Yonik Seeley
Summary: Legacy Numeric Field types need to be more consistent in their 
input/output error checking & documentation  (was: The FloatField (and probably 
others) field type takes any string value at index, but JSON writer outputs as 
numeric without checking)

revised the issue summary/description to clarify the full scope of the issue; 
assigning to yonik

> Legacy Numeric Field types need to be more consistent in their input/output 
> error checking & documentation
> --
>
> Key: SOLR-1270
> URL: https://issues.apache.org/jira/browse/SOLR-1270
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.2, 1.3, 1.4
> Environment: ubuntu 8.04, sun java 6, tomcat 5.5
>Reporter: Donovan Jimenez
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1270.patch
>
>
> FloatField, IntField, ByteField, LongField, and DoubleField have inconsistent 
> behavior at response writing time when dealing with "garbage" data in the 
> index.  the behavior should be standardized, and better documented.
> --
> This issue originally came from my php client issue tracker: 
> http://code.google.com/p/solr-php-client/issues/detail?id=13

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1270) Legacy Numeric Field types need to be more consistent in their input/output error checking & documentation

2009-08-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739118#action_12739118
 ] 

Hoss Man commented on SOLR-1270:


FYI: since 1.3 was released, the usages of FloatField (and its ilk) in the 
example schema have already been largely removed.  So i think the big issue 
here is just making the behavior of the legacy classes consistent, and 
improving their javadocs.

Donovan/Matt: FloatField (and its ilk) is still listed in the example 
schema.xml on the trunk, it's just no longer used in any of the example fields 
-- so if you think there are improvements to be made to the comments associated 
with those field types, please give a specific example of how you think it 
should be documented.

> Legacy Numeric Field types need to be more consistent in their input/output 
> error checking & documentation
> --
>
> Key: SOLR-1270
> URL: https://issues.apache.org/jira/browse/SOLR-1270
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.2, 1.3, 1.4
> Environment: ubuntu 8.04, sun java 6, tomcat 5.5
>Reporter: Donovan Jimenez
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1270.patch
>
>
> FloatField, IntField, ByteField, LongField, and DoubleField have inconsistent 
> behavior at response writing time when dealing with "garbage" data in the 
> index.  the behavior should be standardized, and better documented.
> --
> This issue originally came from my php client issue tracker: 
> http://code.google.com/p/solr-php-client/issues/detail?id=13

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1270) Legacy Numeric Field types need to be more consistent in their input/output error checking & documentation

2009-08-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739123#action_12739123
 ] 

Hoss Man commented on SOLR-1270:


sorry, one last comment i forgot to mention before...

Yonik: i assigned this to you to make the final call about how to make all the 
behavior consistent.  (i'm guessing based on the inadvertent commit you made to 
FloatField that you prefer they all sanity check on output, but i didn't want 
to make any assumptions).  I'm -0 on any sanity checking, but I do agree they 
should be consistent.

> Legacy Numeric Field types need to be more consistent in their input/output 
> error checking & documentation
> --
>
> Key: SOLR-1270
> URL: https://issues.apache.org/jira/browse/SOLR-1270
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.2, 1.3, 1.4
> Environment: ubuntu 8.04, sun java 6, tomcat 5.5
>Reporter: Donovan Jimenez
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1270.patch
>
>
> FloatField, IntField, ByteField, LongField, and DoubleField have inconsistent 
> behavior at response writing time when dealing with "garbage" data in the 
> index.  the behavior should be standardized, and better documented.
> --
> This issue originally came from my php client issue tracker: 
> http://code.google.com/p/solr-php-client/issues/detail?id=13

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1270) Legacy Numeric Field types need to be more consistent in their input/output error checking & documentation

2009-08-04 Thread Donovan Jimenez (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739152#action_12739152
 ] 

Donovan Jimenez commented on SOLR-1270:
---

Yonik has already started standardizing output-time checking. So keep going 
there and we're good.

The trunk example schema now uses Trie fields for the default numeric types 
with a precisionStep of 0. This should be fine, since these seem to be the 
preferred / suggested field type implementations. The main idea of our argument 
was that the defaults should make sense for a new user - I believe that change 
accomplishes this.  So again, we're good.

I also see new comments about which example field types are considered legacy - 
good. The Trie field comments also mention a little about how it generates 
tokens, which can help when examining the index with tools like luke - a point 
of confusion that matt mentioned - so again, good.  The sortable numeric 
definitions might benefit from similar info in the comments. That's probably 
the only thing I could offer as a suggestion.

Personally, I think you could remove more of the unused configuration from the 
example, but that's a subjective opinion - some people like adding to a minimum, 
others like to prune from a maximum. I think that solr as a community is of the 
"let them prune" persuasion.


> Legacy Numeric Field types need to be more consistent in their input/output 
> error checking & documentation
> --
>
> Key: SOLR-1270
> URL: https://issues.apache.org/jira/browse/SOLR-1270
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.2, 1.3, 1.4
> Environment: ubuntu 8.04, sun java 6, tomcat 5.5
>Reporter: Donovan Jimenez
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1270.patch
>
>
> FloatField, IntField, ByteField, LongField, and DoubleField have inconsistent 
> behavior at response writing time when dealing with "garbage" data in the 
> index.  the behavior should be standardized, and better documented.
> --
> This issue originally came from my php client issue tracker: 
> http://code.google.com/p/solr-php-client/issues/detail?id=13

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1270) Legacy Numeric Field types need to be more consistent in their input/output error checking & documentation

2009-08-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-1270.


Resolution: Fixed

I've committed the same changes for all plain numeric types.  Since these are 
special cases, the small performance impact is outweighed by the need to 
produce valid transfer syntax.
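
As an illustration only (a sketch, not the committed change), the check amounts to 
parsing the stored string before emitting it as a numeric token:

{code}
// index garbage now fails loudly instead of producing invalid JSON/XML output
static String legacyFloatForOutput(String storedValue) {
  return Float.toString(Float.parseFloat(storedValue)); // NumberFormatException on garbage
}
{code}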

> Legacy Numeric Field types need to be more consistent in their input/output 
> error checking & documentation
> --
>
> Key: SOLR-1270
> URL: https://issues.apache.org/jira/browse/SOLR-1270
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.2, 1.3, 1.4
> Environment: ubuntu 8.04, sun java 6, tomcat 5.5
>Reporter: Donovan Jimenez
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1270.patch
>
>
> FloatField, IntField, ByteField, LongField, and DoubleField have inconsistent 
> behavior at response writing time when dealing with "garbage" data in the 
> index.  the behavior should be standardized, and better documented.
> --
> This issue originally came from my php client issue tracker: 
> http://code.google.com/p/solr-php-client/issues/detail?id=13

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1334) SortableXXXField could use native FieldCache for sorting

2009-08-04 Thread Uwe Schindler (JIRA)
SortableXXXField could use native FieldCache for sorting


 Key: SOLR-1334
 URL: https://issues.apache.org/jira/browse/SOLR-1334
 Project: Solr
  Issue Type: Improvement
Reporter: Uwe Schindler


When looking through the FieldTypes (esp. new Trie code), I found out that 
field types using org.apache.solr.util.NumberUtils use String sorting. As 
SortField can get a FieldCache Parser since LUCENE-1478, NumberUtils could 
supply FieldCache.Parser singletons (serializable singletons!) for the 
SortableXXXField types, and the SortField instances could use these parsers 
instead of STRING only SortFields.

The same parsers could be used to create ValueSources for these types.
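
A rough sketch of the idea (the NumberUtils decode call and its exact signature are 
assumed here, not verified):

{code}
// a FieldCache parser that decodes the sortable-string encoding, so SortField can
// sort on native ints (LUCENE-1478 lets a SortField carry a parser)
FieldCache.IntParser sortableIntParser = new FieldCache.IntParser() {
  public int parseInt(String val) {
    // assumed decode helper; the real NumberUtils method may differ
    return NumberUtils.SortableStr2int(val, 0, val.length());
  }
};
SortField sf = new SortField("price_sint", sortableIntParser, false);
{code}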

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1334) SortableXXXField could use native FieldCache for sorting

2009-08-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739205#action_12739205
 ] 

Yonik Seeley commented on SOLR-1334:


Yeah - I decided against upgrading the Sortable* field types because it 
wouldn't be 100% back compatible, and we have the Trie* fields now anyway.  The 
issue has to do with missing values - if you use native FieldCache entries, you 
can't tell whether a document had a value or not, and that breaks some stuff 
like StatisticsComponent and SortMissingLastComparator.

> SortableXXXField could use native FieldCache for sorting
> 
>
> Key: SOLR-1334
> URL: https://issues.apache.org/jira/browse/SOLR-1334
> Project: Solr
>  Issue Type: Improvement
>Reporter: Uwe Schindler
>
> When looking through the FieldTypes (esp. new Trie code), I found out that 
> field types using org.apache.solr.util.NumberUtils use String sorting. As 
> SortField can get a FieldCache Parser since LUCENE-1478, NumberUtils could 
> supply FieldCache.Parser singletons (serializable singletons!) for the 
> SortableXXXField types, and the SortField instances could use these parsers 
> instead of STRING only SortFields.
> The same parsers could be used to create ValueSources for these types.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-705) Distributed search should optionally return docID->shard map

2009-08-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739246#action_12739246
 ] 

Hoss Man commented on SOLR-705:
---

Ryan: it might be worthwhile to split the jira issue ... create a new issue for 
an internal API to add per-doc metadata and the use of this metadata in at 
least 2 response writers; then make SOLR-705 (shard mapping) and SOLR-1298 
(function query results) dependent on the new issue and a sanity test of the 
new internal APIs.

> Distributed search should optionally return docID->shard map
> 
>
> Key: SOLR-705
> URL: https://issues.apache.org/jira/browse/SOLR-705
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.3
> Environment: all
>Reporter: Brian Whitman
>Assignee: Ryan McKinley
> Fix For: 1.4
>
> Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
> SOLR-705.patch, SOLR-705.patch, SOLR-705.patch
>
>
> SOLR-303 queries with &shards parameters set need to return the docID->shard 
> mapping in the response. Without it, updating/deleting documents when the # 
> of shards is variable is hard. We currently set this with a special 
> requestHandler that filters /update and inserts the shard as a field in the 
> index but it would be better if the shard location came back in the query 
> response outside of the index.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: trie fields default in example schema

2009-08-04 Thread Chris Hostetter

: Anyway, I think support will be good enough for 1.4 that we should
: make types like "integer" in the example schema be based on the trie
: fields.  The current "integer" fields should be renamed to "pinteger"
: (for plain integer), and have a recommended use only for compatibility
: with other/older indexes.  People have mistakenly used the plain
: integer in the past based on the name, so I think we should fix the
: naming.
: 
: The trie based fields should have lower memory footprint in the
: fieldcache and are faster for a lookup (the only reason to use plain
: ints in the past)... sint uses StringIndex for historical reasons - we
: had no other option... we could upgrade the existing sint fields, but
: it wouldn't be quite 100% compatible and there's little reason since
: we have the trie fields now.

+1

my only question when skimming the recent schema.xml changes is whether the 
new TrieFields support sortMissingLast?
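
(for reference, the kind of declarations being discussed would look something like 
this in schema.xml -- the names and attributes here are only illustrative:

  <fieldType name="int"  class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
  <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
  <fieldType name="pint" class="solr.IntField" omitNorms="true"/>
)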




-Hoss



[jira] Commented: (SOLR-1327) Allow special Filters to access, modify, and/or add Fields to/on a Solr Document

2009-08-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739280#action_12739280
 ] 

Hoss Man commented on SOLR-1327:


i'm a little unclear as to what this issue is about. are we talking about a 
TokenFilter?

can you give an example use case?

> Allow special Filters to access, modify, and/or add Fields to/on a Solr 
> Document
> 
>
> Key: SOLR-1327
> URL: https://issues.apache.org/jira/browse/SOLR-1327
> Project: Solr
>  Issue Type: New Feature
>Reporter: Mark Miller
>Priority: Minor
> Fix For: 1.5
>
>
> Add a special Filter type that causes the field it's in to be pre-analyzed - 
> when this happens, the Filter can work with the Solr Document and modify it 
> based on the tokens it sees.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (SOLR-1203) We should add an example of setting the update.processor for a given RequestHandler

2009-08-04 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-1203:
---


I don't think this has been addressed?

bq. update.processor is not for RequestHandler it is common across all 
requesthandlers

No - I'm talking about the processor chain that you configure on the handler 
with the update.processor param.

> We should add an example of setting the update.processor for a given 
> RequestHandler
> ---
>
> Key: SOLR-1203
> URL: https://issues.apache.org/jira/browse/SOLR-1203
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 1.4
>
>
> a commented out example that points to the commented out example update chain

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1203) We should add an example of setting the update.processor for a given RequestHandler

2009-08-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739288#action_12739288
 ] 

Mark Miller commented on SOLR-1203:
---

Hmm - I swear I had responded to you when you originally asked that, Grant - but 
I'm not seeing my reply here or on the dev list ...

That's not the example I'm talking about - I'm talking about linking an update 
request handler to the dedupe chain. If you just uncomment that example, you 
won't get far because it won't be used by anything - update requests will just 
use the default chain assigned to null.
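
For illustration, the kind of commented-out example I have in mind would look 
roughly like this (the handler name and class are the stock ones from the 
example solrconfig.xml, and "dedupe" is the commented-out example chain):

{code:xml}
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler">
  <lst name="defaults">
    <!-- route updates sent to this handler through the dedupe chain -->
    <str name="update.processor">dedupe</str>
  </lst>
</requestHandler>
{code}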

> We should add an example of setting the update.processor for a given 
> RequestHandler
> ---
>
> Key: SOLR-1203
> URL: https://issues.apache.org/jira/browse/SOLR-1203
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 1.4
>
>
> a commented out example that points to the commented out example update chain

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1300) need to exclude downloaded clustering libs from release packages

2009-08-04 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-1300:
---

Description: 
As noted by Grant...

http://www.nabble.com/Re%3A-cleaning-up-example-p24469638.html

the build file for the clustering contrib downloads some optional jars that 
can't be included in the release.  Yonik/Hoss simplified the build files, but 
as a side effect *all* libs are included in the dist (even the ones that 
shouldn't be)

we need to exclude them (one way or another) before we release 1.4.

  was:
As noted by grant...

http://www.nabble.com/Re%3A-cleaning-up-example-p24469638.html

the build file for the extraction contrib downloads some optional jars that 
can't be included in the release.  Yonik/Hoss simplified the build files, but 
as a side effect *all* libs are included in the dist (even the ones that 
shouldn't be)

we need to exlucde them (one way or another) before we release 1.4.

Summary: need to exclude downloaded clustering libs from release 
packages  (was: need to exclude downloaded extraction libs from release 
packages)

Doh! ... sorry, we have too many contribs; I keep getting them confused.

Yes, I was talking about clustering, not extraction (issue summary/description 
updated), and we do still have a problem...

{code}
hoss...@coaster:~/lucene/solr$ tar -ztf dist/apache-solr-1.4-dev.tgz | egrep 
pcj\|colt\|nni\|simple-xml
apache-solr-1.4-dev/contrib/clustering/lib/colt-1.2.0.jar
apache-solr-1.4-dev/contrib/clustering/lib/nni-1.0.0.jar
apache-solr-1.4-dev/contrib/clustering/lib/pcj-1.2.jar
apache-solr-1.4-dev/contrib/clustering/lib/simple-xml-1.7.3.jar
{code}
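
"One way or another" probably means adding excludes to whatever fileset the dist 
target copies the contrib libs from; something like the following sketch (the 
fileset dir and target layout are assumed, the jar names are from the listing 
above):

{code:xml}
<fileset dir="contrib/clustering/lib">
  <exclude name="colt-*.jar"/>
  <exclude name="nni-*.jar"/>
  <exclude name="pcj-*.jar"/>
  <exclude name="simple-xml-*.jar"/>
</fileset>
{code}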

> need to exclude downloaded clustering libs from release packages
> 
>
> Key: SOLR-1300
> URL: https://issues.apache.org/jira/browse/SOLR-1300
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
> Fix For: 1.4
>
>
> As noted by Grant...
> http://www.nabble.com/Re%3A-cleaning-up-example-p24469638.html
> the build file for the clustering contrib downloads some optional jars that 
> can't be included in the release.  Yonik/Hoss simplified the build files, but 
> as a side effect *all* libs are included in the dist (even the ones that 
> shouldn't be)
> we need to exclude them (one way or another) before we release 1.4.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1332) Escape spaces in URLs in URLDataSource

2009-08-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739311#action_12739311
 ] 

Erik Hatcher commented on SOLR-1332:


Just escaping spaces isn't good enough, though certainly a workable fix in 
simple cases.   Each substitution into a URL would need to be escaped in some 
fashion to do this properly, right?   Sounds tricky!

> Escape spaces in URLs in URLDataSource
> --
>
> Key: SOLR-1332
> URL: https://issues.apache.org/jira/browse/SOLR-1332
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
> Environment: Any
>Reporter: Chris Eldredge
> Attachments: SOLR-1332.patch
>
>
> At present the URLDataSource does not correctly escape spaces in URLs.  This 
> particularly affects usages where ${dataimporter.last_index_time} is present 
> in the URL to support delta imports.  That variable contains a space which 
> should be escaped.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1332) Escape spaces in URLs in URLDataSource

2009-08-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739346#action_12739346
 ] 

Noble Paul commented on SOLR-1332:
--

it is possible to escape strings using the in-built functions
{code}
${dataimporter.functions.encodeUrl(dataimporter_last_index_time)}
{code}
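
In a URLDataSource entity that would look roughly like the sketch below (the 
URL, entity attributes, and field are made up for illustration; each value 
substituted into the URL gets wrapped in encodeUrl, and the exact spelling of 
the variable name may differ from the shorthand above):

{code:xml}
<entity name="items"
        processor="XPathEntityProcessor"
        forEach="/items/item"
        url="http://example.com/export?since=${dataimporter.functions.encodeUrl(dataimporter.last_index_time)}">
  <field column="id" xpath="/items/item/id"/>
</entity>
{code}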

> Escape spaces in URLs in URLDataSource
> --
>
> Key: SOLR-1332
> URL: https://issues.apache.org/jira/browse/SOLR-1332
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
> Environment: Any
>Reporter: Chris Eldredge
> Attachments: SOLR-1332.patch
>
>
> At present the URLDataSource does not correctly escape spaces in URLs.  This 
> particularly affects usages where ${dataimporter.last_index_time} is present 
> in the URL to support delta imports.  That variable contains a space which 
> should be escaped.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (SOLR-1332) Escape spaces in URLs in URLDataSource

2009-08-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12739346#action_12739346
 ] 

Noble Paul edited comment on SOLR-1332 at 8/4/09 10:42 PM:
---

it is possible to escape strings using the in-built functions 
http://wiki.apache.org/solr/DataImportHandler#head-5675e913396a42eb7c6c5d3c894ada5dadbb62d7
{code}
${dataimporter.functions.encodeUrl(dataimporter_last_index_time)}
{code}

  was (Author: noble.paul):
it is possible to escape strings using the in-built functions
{code}
${dataimporter.functions.encodeUrl(dataimporter_last_index_time)}
{code}
  
> Escape spaces in URLs in URLDataSource
> --
>
> Key: SOLR-1332
> URL: https://issues.apache.org/jira/browse/SOLR-1332
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
> Environment: Any
>Reporter: Chris Eldredge
> Attachments: SOLR-1332.patch
>
>
> At present the URLDataSource does not correctly escape spaces in URLs.  This 
> particularly affects usages where ${dataimporter.last_index_time} is present 
> in the URL to support delta imports.  That variable contains a space which 
> should be escaped.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1335) load core properties from a properties file

2009-08-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-1335:
-

Description: 
There are a few ways of loading properties at runtime:

# using a system property on the command line
# if you use multicore, dropping it into solr.xml

Otherwise, the only option is to keep a separate solrconfig.xml for each 
instance. #1 is error prone if the user fails to start with the correct system 
property. In our case we have four different configurations for the same 
deployment, and we have to disable replication of solrconfig.xml:

# main master
# slaves of the main master
# repeater
# slaves of the repeater

It would be nice if I could distribute four properties files so that our ops 
can drop in the right one and start Solr. 

I propose a properties file in the instancedir named solrcore.properties. If 
present, it would be loaded and its entries added as core-specific properties.




  was:
There are  few ways of loading properties in runtime,

# using env property using in the command line
# if you use a multicore drop it in the solr.xml

if not the only way is to or keep separate solrconfig.xml for each instance.  
#1 is error prone if the user fails to start with the correct system property. 
In our case we have four different configurations for the same deployment  of 
replication

# main master
# slaves of main master
# repeater
#slaves of repeater

It would be nice if I can distribute four properties file so that our ops can 
drop  the right one and start Solr

I propose a properties file in the instancedir as solrcore.properties . If 
present would be loaded and added as core specific properties.





> load core properties from a properties file
> ---
>
> Key: SOLR-1335
> URL: https://issues.apache.org/jira/browse/SOLR-1335
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>
> There are a few ways of loading properties at runtime:
> # using a system property on the command line
> # if you use multicore, dropping it into solr.xml
> Otherwise, the only option is to keep a separate solrconfig.xml for each 
> instance. #1 is error prone if the user fails to start with the correct 
> system property. 
> In our case we have four different configurations for the same deployment, 
> and we have to disable replication of solrconfig.xml:
> # main master
> # slaves of the main master
> # repeater
> # slaves of the repeater
> It would be nice if I could distribute four properties files so that our ops 
> can drop in the right one and start Solr. 
> I propose a properties file in the instancedir named solrcore.properties. If 
> present, it would be loaded and its entries added as core-specific properties.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1335) load core properties from a properties file

2009-08-04 Thread Noble Paul (JIRA)
load core properties from a properties file
---

 Key: SOLR-1335
 URL: https://issues.apache.org/jira/browse/SOLR-1335
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul


There are a few ways of loading properties at runtime:

# using a system property on the command line
# if you use multicore, dropping it into solr.xml

Otherwise, the only option is to keep a separate solrconfig.xml for each 
instance. #1 is error prone if the user fails to start with the correct system 
property. In our case we have four different configurations for the same 
replication deployment:

# main master
# slaves of the main master
# repeater
# slaves of the repeater

It would be nice if I could distribute four properties files so that our ops 
can drop in the right one and start Solr.

I propose a properties file in the instancedir named solrcore.properties. If 
present, it would be loaded and its entries added as core-specific properties.
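
For example (the property names, values, and the replication snippet are only 
an illustration of the idea, not part of the proposal itself), a per-node 
solrcore.properties in the instancedir might contain:

{code}
replication.master.url=http://master.example.com:8983/solr/core0/replication
replication.poll.interval=00:00:60
{code}

and a single shared solrconfig.xml could then pick the values up through the 
usual property substitution:

{code:xml}
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">${replication.master.url}</str>
    <str name="pollInterval">${replication.poll.interval}</str>
  </lst>
</requestHandler>
{code}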




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1335) load core properties from a properties file

2009-08-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-1335:
-

Description: 
There are a few ways of loading properties at runtime:

# using a system property on the command line
# if you use multicore, dropping it into solr.xml

Otherwise, the only option is to keep a separate solrconfig.xml for each 
instance. #1 is error prone if the user fails to start with the correct system 
property. In our case we have four different configurations for the same 
deployment, and we have to disable replication of solrconfig.xml. The 
configurations are:

# main master
# slaves of the main master
# repeater
# slaves of the repeater

It would be nice if I could distribute four properties files so that our ops 
can drop in the right one and start Solr. 

I propose a properties file in the instancedir named solrcore.properties. If 
present, it would be loaded and its entries added as core-specific properties.




  was:
There are  few ways of loading properties in runtime,

# using env property using in the command line
# if you use a multicore drop it in the solr.xml

if not the only way is to or keep separate solrconfig.xml for each instance.  
#1 is error prone if the user fails to start with the correct system property. 
In our case we have four different configurations for the same deployment  . 
And we have to disable replication of solrconfig.xml

# main master
# slaves of main master
# repeater
# slaves of repeater

It would be nice if I can distribute four properties file so that our ops can 
drop  the right one and start Solr. 

I propose a properties file in the instancedir as solrcore.properties . If 
present would be loaded and added as core specific properties.





> load core properties from a properties file
> ---
>
> Key: SOLR-1335
> URL: https://issues.apache.org/jira/browse/SOLR-1335
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>
> There are a few ways of loading properties at runtime:
> # using a system property on the command line
> # if you use multicore, dropping it into solr.xml
> Otherwise, the only option is to keep a separate solrconfig.xml for each 
> instance. #1 is error prone if the user fails to start with the correct 
> system property. 
> In our case we have four different configurations for the same deployment, 
> and we have to disable replication of solrconfig.xml. The configurations 
> are:
> # main master
> # slaves of the main master
> # repeater
> # slaves of the repeater
> It would be nice if I could distribute four properties files so that our ops 
> can drop in the right one and start Solr. 
> I propose a properties file in the instancedir named solrcore.properties. If 
> present, it would be loaded and its entries added as core-specific properties.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.