[jira] [Commented] (LUCENE-5096) WhitespaceTokenizer supports Java whitespace, should also support Unicode whitespace

2013-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701843#comment-13701843
 ] 

Uwe Schindler commented on LUCENE-5096:
---

Hi,
Lucene is flexible enough to make this configurable. Just subclass 
CharTokenizer and provide your own list of "whitespace" characters.

I have had several people who wanted a WhitespaceTokenizer where things like 
"-" are also treated as whitespace, so this is the way to go. A fast approach 
when many token characters are involved is to use a java.util.BitSet: mark 
all characters that are "whitespace" and then query the BitSet in 
isTokenChar(int). Alternatively, use a chain of ifs.

An alternative (if you are on Solr) is to inject a CharFilter before the 
tokenizer that maps any "special" whitespace to one of the standard 
whitespace characters WhitespaceTokenizer detects.

> WhitespaceTokenizer supports Java whitespace, should also support Unicode 
> whitespace
> 
>
> Key: LUCENE-5096
> URL: https://issues.apache.org/jira/browse/LUCENE-5096
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.3.1
> Environment: all
>Reporter: Jörg Prante
>Priority: Minor
>
> The whitespace tokenizer supports only Java whitespace as defined in 
> http://docs.oracle.com/javase/6/docs/api/java/lang/Character.html#isWhitespace(char)
> A useful improvement would be to also support Unicode whitespace as defined 
> in the Unicode property list 
> http://www.unicode.org/Public/UCD/latest/ucd/PropList.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r868760 - in /websites: production/lucene/content/ production/lucene/content/core/ production/lucene/content/solr/ staging/lucene/trunk/content/ staging/lucene/trunk/content/core/ stag

2013-07-08 Thread Uwe Schindler
Hi,

This is on my IMAP server's filter list :-)
The reason is a cronjob that commits whenever it updates the Twitter feed and 
JIRA feeds on the web page.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Jack Krupansky [mailto:j...@basetechnology.com]
> Sent: Monday, July 08, 2013 3:29 AM
> To: dev@lucene.apache.org
> Subject: Re: svn commit: r868760 - in /websites: production/lucene/content/
> production/lucene/content/core/ production/lucene/content/solr/
> staging/lucene/trunk/content/ staging/lucene/trunk/content/core/
> staging/lucene/trunk/content/solr/
> 
> I keep forgetting to ask... what exactly does ANYBODY get from this particular
> notification that comes out a number of times every day and always appears
> to be absolutely identical other than the revision number?
> 
> -- Jack Krupansky
> 
> -Original Message-
> From: build...@apache.org
> Sent: Sunday, July 07, 2013 9:21 PM
> To: comm...@lucene.apache.org
> Subject: svn commit: r868760 - in /websites: production/lucene/content/
> production/lucene/content/core/ production/lucene/content/solr/
> staging/lucene/trunk/content/ staging/lucene/trunk/content/core/
> staging/lucene/trunk/content/solr/
> 
> Author: buildbot
> Date: Mon Jul  8 01:21:02 2013
> New Revision: 868760
> 
> Log:
> Dynamic update by buildbot for lucene
> 
> Modified:
> websites/production/lucene/content/core/index.html
> websites/production/lucene/content/index.html
> websites/production/lucene/content/solr/index.html
> websites/staging/lucene/trunk/content/core/index.html
> websites/staging/lucene/trunk/content/index.html
> websites/staging/lucene/trunk/content/solr/index.html
> 
> Modified: websites/production/lucene/content/core/index.html
> ==
> 
> (empty)
> 
> Modified: websites/production/lucene/content/index.html
> ==
> 
> (empty)
> 
> Modified: websites/production/lucene/content/solr/index.html
> ==
> 
> (empty)
> 
> Modified: websites/staging/lucene/trunk/content/core/index.html
> ==
> 
> (empty)
> 
> Modified: websites/staging/lucene/trunk/content/index.html
> ==
> 
> (empty)
> 
> Modified: websites/staging/lucene/trunk/content/solr/index.html
> ==
> 
> (empty)
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> commands, e-mail: dev-h...@lucene.apache.org





Re: svn commit: r868760 - in /websites: production/lucene/content/ production/lucene/content/core/ production/lucene/content/solr/ staging/lucene/trunk/content/ staging/lucene/trunk/content/core/ stag

2013-07-08 Thread Shalin Shekhar Mangar
I had disabled the Twitter update task during the 4.3.1 release. The
Twitter API had changed (the error message tells us to use API v2) and
would not let me publish the 4.3.1 release announcement. After a quick
chat with Steve on IRC, we decided to disable the script for the
release. Do we know who maintains that script?

On Mon, Jul 8, 2013 at 1:16 PM, Uwe Schindler  wrote:
> Hi,
>
> This is on my IMAP server's filter list :-)
> The reason is a cronjob that commits whenever it updates the Twitter feed and 
> JIRA feeds on the web page.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de



-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Commented] (SOLR-4738) Update to latest Jetty bug fix release, 8.1.10

2013-07-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701888#comment-13701888
 ] 

Ryan McKinley commented on SOLR-4738:
-

Jetty 9 requires Java 1.7 -- this is OK for trunk, but 4.x needs to work with 
Java 1.6.

> Update to latest Jetty bug fix release, 8.1.10
> --
>
> Key: SOLR-4738
> URL: https://issues.apache.org/jira/browse/SOLR-4738
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4738.patch
>
>





[jira] [Assigned] (SOLR-5005) ScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-5005:


Assignee: Noble Paul

> ScriptRequestHandler
> 
>
> Key: SOLR-5005
> URL: https://issues.apache.org/jira/browse/SOLR-5005
> Project: Solr
>  Issue Type: New Feature
>Reporter: David Smiley
>Assignee: Noble Paul
> Attachments: patch
>
>
> A user-customizable, script-based request handler would be very useful.  It's 
> inspired by the ScriptUpdateRequestProcessor, but on the search end. A user 
> could write a script that submits searches to Solr (in-VM) and can react to 
> the results of one search before making another that is formulated 
> dynamically.  It can then assemble the response data, potentially reducing 
> both the latency and the data that would move over the wire if this feature 
> didn't exist.  It could also be used to easily add a user-specifiable search 
> API at the Solr server, with request parameters governed by what the user 
> wants to advertise -- especially useful within enterprises.  And it could be 
> used to enforce security requirements on allowable parameter values, so a 
> JavaScript-based Solr client could be allowed to talk only to a 
> script-based request handler which enforces the rules.




[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4125 - Failure

2013-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4125/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.AliasIntegrationTest: 
1) Thread[id=5072, name=recoveryCmdExecutor-2515-thread-1, state=RUNNABLE, 
group=TGRP-AliasIntegrationTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.AliasIntegrationTest: 
   1) Thread[id=5072, name=recoveryCmdExecutor-2515-thread-1, state=RUNNABLE, 
group=TGRP-AliasIntegrationTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([A2CD6BB03AA9A150]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=5072, name=recoveryCmdExecutor-2515-thread-1, state=RUNNABLE, 
group=TGRP-AliasIntegrationTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 

[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701929#comment-13701929
 ] 

Shalin Shekhar Mangar commented on SOLR-4788:
-

Thanks, James, for the patch.

I'm going to commit this to make sure that it gets into 4.4.

> Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time 
> is empty
> --
>
> Key: SOLR-4788
> URL: https://issues.apache.org/jira/browse/SOLR-4788
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2, 4.3
> Environment: solr-spec
> 4.2.1.2013.03.26.08.26.55
> solr-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:26:55
> lucene-spec
> 4.2.1
> lucene-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:23:34
> OR
> solr-spec
> 4.3.0
> solr-impl
> 4.3.0 1477023 - simonw - 2013-04-29 15:10:12
> lucene-spec
> 4.3.0
> lucene-impl
> 4.3.0 1477023 - simonw - 2013-04-29 14:55:14
>Reporter: chakming wong
>Assignee: Shalin Shekhar Mangar
> Attachments: entitytest.patch, entitytest.patch, entitytest.patch, 
> entitytest.patch, entitytest.patch, SOLR-4788.patch
>
>
> {code:title=conf/dataimport.properties|borderStyle=solid}
> entity1.last_index_time=2013-05-06 03\:02\:06
> last_index_time=2013-05-06 03\:05\:22
> entity2.last_index_time=2013-05-06 03\:03\:14
> entity3.last_index_time=2013-05-06 03\:05\:22
> {code}
> {code:title=conf/solrconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> ...
> <requestHandler ... class="org.apache.solr.handler.dataimport.DataImportHandler">
>     <lst name="defaults">
>         <str name="config">dihconfig.xml</str>
>     </lst>
> </requestHandler>
> ...
> {code}
> {code:title=conf/dihconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> <dataConfig>
>     <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
>         url="jdbc:mysql://*:*/*"
>         user="*" password="*"/>
>     <document>
>         <entity name="entity1" query="SELECT * FROM table_a"
>             deltaQuery="SELECT table_a_id FROM table_b WHERE 
>                 last_modified > '${dataimporter.entity1.last_index_time}'"
>             deltaImportQuery="SELECT * FROM table_a WHERE id = 
>                 '${dataimporter.entity1.id}'"
>             transformer="TemplateTransformer">
>             ...
>         </entity>
>         <entity name="entity2" ...>
>             ...
>         </entity>
>         <entity name="entity3" ...>
>             ...
>         </entity>
>     </document>
> </dataConfig>
> {code}
> In the above setup, *dataimporter.entity1.last_index_time* is an *empty 
> string*, which causes an error in the SQL query.




[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701938#comment-13701938
 ] 

ASF subversion and git services commented on SOLR-4788:
---

Commit 1500652 from sha...@apache.org
[ https://svn.apache.org/r1500652 ]

SOLR-4788: Multiple Entities DIH delta import: 
dataimporter.[entityName].last_index_time is empty




[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701942#comment-13701942
 ] 

Shalin Shekhar Mangar commented on SOLR-5002:
-

[~thetaphi] - The wrong issue number was added to the change log in 
branch_4x for this SolrResourceLoader commit. Also, trunk doesn't seem to have 
this entry.

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with a TotalHitCountCollector and not create 
> bitsets at all...
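For illustration, a hedged sketch of the idea described above (assuming Lucene 4.x APIs; this is not the attached patch, and the class and method names are made up): count the intersection of a query and a filter with a TotalHitCountCollector instead of materializing DocSet bitsets.

```java
import java.io.IOException;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TotalHitCountCollector;

class NumDocsSketch {
    // Equivalent in spirit to numDocs(Query, DocSet) when the filterCache
    // is null: run the query through the filter and only count hits.
    static int numDocs(IndexSearcher searcher, Query query, Filter filter)
            throws IOException {
        TotalHitCountCollector collector = new TotalHitCountCollector();
        searcher.search(query, filter, collector); // counts hits, builds no bitsets
        return collector.getTotalHits();
    }
}
```

Since the collector only increments a counter per hit, no per-segment bitsets are allocated for the intersection.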




[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701943#comment-13701943
 ] 

ASF subversion and git services commented on SOLR-4788:
---

Commit 1500662 from sha...@apache.org
[ https://svn.apache.org/r1500662 ]

SOLR-4788: Multiple Entities DIH delta import: 
dataimporter.[entityName].last_index_time is empty




[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701944#comment-13701944
 ] 

Shalin Shekhar Mangar commented on SOLR-4788:
-

[~elyograg] - Can you please open a separate issue for the logging 
configuration?




[jira] [Resolved] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-4788.
-

   Resolution: Fixed
Fix Version/s: 4.4
   5.0




[jira] [Updated] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4788:


Component/s: contrib - DataImportHandler

> Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time 
> is empty
> --
>
> Key: SOLR-4788
> URL: https://issues.apache.org/jira/browse/SOLR-4788
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.2, 4.3
> Environment: solr-spec
> 4.2.1.2013.03.26.08.26.55
> solr-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:26:55
> lucene-spec
> 4.2.1
> lucene-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:23:34
> OR
> solr-spec
> 4.3.0
> solr-impl
> 4.3.0 1477023 - simonw - 2013-04-29 15:10:12
> lucene-spec
> 4.3.0
> lucene-impl
> 4.3.0 1477023 - simonw - 2013-04-29 14:55:14
>Reporter: chakming wong
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.4
>
> Attachments: entitytest.patch, entitytest.patch, entitytest.patch, 
> entitytest.patch, entitytest.patch, SOLR-4788.patch
>
>
> {code:title=conf/dataimport.properties|borderStyle=solid}entity1.last_index_time=2013-05-06
>  03\:02\:06
> last_index_time=2013-05-06 03\:05\:22
> entity2.last_index_time=2013-05-06 03\:03\:14
> entity3.last_index_time=2013-05-06 03\:05\:22
> {code}
> {code:title=conf/solrconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> ...
> <requestHandler name="/dataimport"
>     class="org.apache.solr.handler.dataimport.DataImportHandler">
>   <lst name="defaults">
>     <str name="config">dihconfig.xml</str>
>   </lst>
> </requestHandler>
> ...
> {code}
> {code:title=conf/dihconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> <dataConfig>
>   <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
>       url="jdbc:mysql://*:*/*"
>       user="*" password="*"/>
>   <document>
>     <entity name="entity1"
>         query="SELECT * FROM table_a"
>         deltaQuery="SELECT table_a_id FROM table_b WHERE
>             last_modified > '${dataimporter.entity1.last_index_time}'"
>         deltaImportQuery="SELECT * FROM table_a WHERE id =
>             '${dataimporter.entity1.id}'"
>         transformer="TemplateTransformer">
>       ...
>     </entity>
>     <entity name="entity2" ...>
>       ...
>     </entity>
>     <entity name="entity3" ...>
>       ...
>     </entity>
>   </document>
> </dataConfig>
> {code}
> In the above setup, *dataimporter.entity1.last_index_time* is an *empty 
> string*, which causes the SQL query to fail




[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701948#comment-13701948
 ] 

Uwe Schindler commented on SOLR-5002:
-

[~shalinmangar], that was already fixed by editing the commit message!

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  
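[Editor's note] The trade-off described above can be sketched in plain Java, with java.util.BitSet standing in for Lucene's doc-ID sets; the class and method names below are illustrative only, not Solr or Lucene API:

```java
import java.util.BitSet;

public class IntersectionCountDemo {

    // Materializing approach: allocate a BitSet for the intersection,
    // then count its bits -- analogous to building a DocSet first.
    static int countViaBitSet(BitSet query, BitSet filter) {
        BitSet tmp = (BitSet) query.clone(); // extra allocation
        tmp.and(filter);
        return tmp.cardinality();
    }

    // Streaming approach: bump a counter per matching id, with no
    // intermediate set -- conceptually what TotalHitCountCollector does.
    static int countStreaming(BitSet query, BitSet filter) {
        int count = 0;
        for (int i = query.nextSetBit(0); i >= 0; i = query.nextSetBit(i + 1)) {
            if (filter.get(i)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        BitSet q = new BitSet();
        q.set(1); q.set(3); q.set(5);
        BitSet f = new BitSet();
        f.set(3); f.set(4); f.set(5);
        System.out.println(countViaBitSet(q, f));   // 2
        System.out.println(countStreaming(q, f));   // 2
    }
}
```

In Solr terms, the streaming variant corresponds to running the filtered query with a TotalHitCountCollector, which increments a counter per hit instead of materializing a bitset.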




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-5002:


Comment: was deleted

(was: Commit 1500156 from [~thetaphi]
[ https://svn.apache.org/r1500156 ]

SOLR-5002: Don't create multiple SolrResourceLoaders for same Solr home, 
wasting resources and slowing down startup. This fixes the problem where the 
loader was not correctly closed, making tests fail on Windows.)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Comment Edited] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701948#comment-13701948
 ] 

Uwe Schindler edited comment on SOLR-5002 at 7/8/13 11:30 AM:
--

[~shalinmangar], that was already fixed by editing the commit message! I removed 
the obsolete comment here!

  was (Author: thetaphi):
[~shalinmangar], that was already fixed by edition commit message!
  
> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Commented] (SOLR-4978) Time is stripped from datetime column when imported into Solr date field

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701949#comment-13701949
 ] 

ASF subversion and git services commented on SOLR-4978:
---

Commit 1500666 from sha...@apache.org
[ https://svn.apache.org/r1500666 ]

SOLR-4978: Time is stripped from datetime column when imported into Solr date 
field if convertType=true

> Time is stripped from datetime column when imported into Solr date field
> 
>
> Key: SOLR-4978
> URL: https://issues.apache.org/jira/browse/SOLR-4978
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Reporter: Bill Au
>
> I discovered that all dates I imported into a Solr date field from a MySQL 
> datetime column have the time stripped (ie time portion is always 00:00:00).
> After double checking my DIH config and trying different things, I decided to 
> take a look at the DIH code.
> When I looked at the source code of DIH JdbcDataSource class, I discovered 
> that it is using java.sql.ResultSet and its getDate() method to handle date 
> field. The getDate() method returns java.sql.Date. The java api doc for 
> java.sql.Date
> http://docs.oracle.com/javase/6/docs/api/java/sql/Date.html
> states that:
> "To conform with the definition of SQL DATE, the millisecond values wrapped 
> by a java.sql.Date instance must be 'normalized' by setting the hours, 
> minutes, seconds, and milliseconds to zero in the particular time zone with 
> which the instance is associated."
> I am so surprised by my finding that I think I may not be right.  What am I 
> doing wrong here?  This is such a big hole in DIH, how could it be possible 
> that no one has noticed this until now?
> Has anyone successfully imported a datetime column into a Solr date field 
> using DIH?
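[Editor's note] The normalization the quoted javadoc describes is easy to reproduce without a database; the class below is a made-up demo, not DIH code:

```java
import java.sql.Date;
import java.sql.Timestamp;

public class SqlDateDemo {
    public static void main(String[] args) {
        // Any instant with a non-zero time-of-day component.
        long millis = System.currentTimeMillis();

        // java.sql.Timestamp carries the full date and time...
        Timestamp ts = new Timestamp(millis);

        // ...whereas java.sql.Date is date-only: its string form is
        // yyyy-mm-dd and, per the JDBC spec, the time-of-day is treated
        // as zero -- hence the 00:00:00 values observed in the issue.
        Date d = new Date(millis);

        System.out.println("as Timestamp: " + ts);
        System.out.println("as Date:      " + d);
    }
}
```

The natural remedy is to read such columns via ResultSet.getTimestamp() rather than getDate(), which preserves the time portion.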




[jira] [Commented] (SOLR-4978) Time is stripped from datetime column when imported into Solr date field

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701950#comment-13701950
 ] 

ASF subversion and git services commented on SOLR-4978:
---

Commit 1500668 from sha...@apache.org
[ https://svn.apache.org/r1500668 ]

SOLR-4978: Time is stripped from datetime column when imported into Solr date 
field if convertType=true

> Time is stripped from datetime column when imported into Solr date field
> 
>
> Key: SOLR-4978
> URL: https://issues.apache.org/jira/browse/SOLR-4978
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Reporter: Bill Au
>
> I discovered that all dates I imported into a Solr date field from a MySQL 
> datetime column have the time stripped (ie time portion is always 00:00:00).
> After double checking my DIH config and trying different things, I decided to 
> take a look at the DIH code.
> When I looked at the source code of DIH JdbcDataSource class, I discovered 
> that it is using java.sql.ResultSet and its getDate() method to handle date 
> field. The getDate() method returns java.sql.Date. The java api doc for 
> java.sql.Date
> http://docs.oracle.com/javase/6/docs/api/java/sql/Date.html
> states that:
> "To conform with the definition of SQL DATE, the millisecond values wrapped 
> by a java.sql.Date instance must be 'normalized' by setting the hours, 
> minutes, seconds, and milliseconds to zero in the particular time zone with 
> which the instance is associated."
> I am so surprised by my finding that I think I may not be right.  What am I 
> doing wrong here?  This is such a big hole in DIH, how could it be possible 
> that no one has noticed this until now?
> Has anyone successfully imported a datetime column into a Solr date field 
> using DIH?




[jira] [Resolved] (SOLR-4978) Time is stripped from datetime column when imported into Solr date field

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-4978.
-

   Resolution: Fixed
Fix Version/s: 4.4
   5.0
 Assignee: Shalin Shekhar Mangar

> Time is stripped from datetime column when imported into Solr date field
> 
>
> Key: SOLR-4978
> URL: https://issues.apache.org/jira/browse/SOLR-4978
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Reporter: Bill Au
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.4
>
>
> I discovered that all dates I imported into a Solr date field from a MySQL 
> datetime column have the time stripped (ie time portion is always 00:00:00).
> After double checking my DIH config and trying different things, I decided to 
> take a look at the DIH code.
> When I looked at the source code of DIH JdbcDataSource class, I discovered 
> that it is using java.sql.ResultSet and its getDate() method to handle date 
> field. The getDate() method returns java.sql.Date. The java api doc for 
> java.sql.Date
> http://docs.oracle.com/javase/6/docs/api/java/sql/Date.html
> states that:
> "To conform with the definition of SQL DATE, the millisecond values wrapped 
> by a java.sql.Date instance must be 'normalized' by setting the hours, 
> minutes, seconds, and milliseconds to zero in the particular time zone with 
> which the instance is associated."
> I am so surprised by my finding that I think I may not be right.  What am I 
> doing wrong here?  This is such a big hole in DIH, how could it be possible 
> that no one has noticed this until now?
> Has anyone successfully imported a datetime column into a Solr date field 
> using DIH?




RE: Several builds hanging because of permgen

2013-07-08 Thread Uwe Schindler
Another one, this time on OSX:

http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/617/

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Robert Muir [mailto:rcm...@gmail.com] 
Sent: Sunday, July 07, 2013 11:15 PM
To: dev@lucene.apache.org
Subject: Re: Several builds hanging because of permgen

 

When there were leaks from static classes, we added a checker to LuceneTestCase 
that looks for RAM > N and fails with debugging information.

I wonder if some similar check is possible for this case (to make it easier 
than going thru heapdumps, and to find issues before crash-time)...

On Sun, Jul 7, 2013 at 4:10 PM, Uwe Schindler  wrote:

Another one: 
http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6385/testReport/junit/junit.framework/TestSuite/org_apache_solr_request_SimpleFacetsTest/

Had to be killed with kill -9


-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Uwe Schindler [mailto:u...@thetaphi.de]

> Sent: Saturday, July 06, 2013 10:16 PM
> To: dev@lucene.apache.org
> Subject: RE: Several builds hanging because of permgen
>
> Another one:
> http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6375/console
>
> I was only able to kill the JVM with kill -9. I am sure it's a horrible 
> slowdown!
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -Original Message-
> > From: Uwe Schindler [mailto:u...@thetaphi.de]
> > Sent: Friday, July 05, 2013 3:59 PM
> > To: dev@lucene.apache.org
> > Subject: Several builds hanging because of permgen
> >
> > Several Jenkins builds now hang because of permgen. The runner JVM is
> > dead (can only be killed by -9), last example:
> >
> > http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6360/console
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
> > additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> commands, e-mail: dev-h...@lucene.apache.org



 



[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701953#comment-13701953
 ] 

Shalin Shekhar Mangar commented on SOLR-5002:
-

[~thetaphi], the change log is not correct. On branch_4x, under 4.4 bug fixes I 
can see SOLR-5002 but there is no such entry in the trunk change log. I noticed 
this because of a merge conflict.

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Assigned] (SOLR-4386) Variable expansion doesn't work in DIH SimplePropertiesWriter's filename

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-4386:
---

Assignee: Shalin Shekhar Mangar

> Variable expansion doesn't work in DIH SimplePropertiesWriter's filename
> 
>
> Key: SOLR-4386
> URL: https://issues.apache.org/jira/browse/SOLR-4386
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.1
>Reporter: Jonas Birgander
>Assignee: Shalin Shekhar Mangar
>  Labels: dataimport
> Attachments: SOLR-4386.patch
>
>
> I'm testing Solr 4.1, but I've run into some problems with 
> DataImportHandler's new propertyWriter tag.
> I'm trying to use variable expansion in the `filename` field when using 
> SimplePropertiesWriter.
> Here are the relevant parts of my configuration:
> conf/solrconfig.xml
> -
> <requestHandler name="/dataimport"
>     class="org.apache.solr.handler.dataimport.DataImportHandler">
>   <lst name="defaults">
>     <str name="config">db-data-config.xml</str>
>   </lst>
>   <lst name="...">
>     <str name="country_code">${country_code}</str>
>   </lst>
> </requestHandler>
> conf/db-data-config.xml
> -
> <dataConfig>
>   <propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss"
>       type="SimplePropertiesWriter"
>       directory="conf"
>       filename="${dataimporter.request.country_code}.dataimport.properties"
>       />
>   <dataSource driver="${dataimporter.request.db_driver}"
>       url="${dataimporter.request.db_url}"
>       user="${dataimporter.request.db_user}"
>       password="${dataimporter.request.db_password}"
>       batchSize="${dataimporter.request.db_batch_size}" />
>   <document>
>     <entity name="..."
>         query="my normal SQL, not really relevant
>                -- country=${dataimporter.request.country_code}">
>       ...
>     </entity>
>   </document>
> </dataConfig>
> 
> If country_code is set to "gb", I want the last_index_time to be read and 
> written in the file conf/gb.dataimport.properties, instead of the default 
> conf/dataimport.properties
> The variable expansion works perfectly in the SQL and setup of the data 
> source, but not in the property writer's filename field.
> When initiating an import, the log file shows:
> Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.DataImporter 
> maybeReloadConfiguration
> INFO: Loading DIH Configuration: db-data-config.xml
> Jan 30, 2013 11:25:42 AM 
> org.apache.solr.handler.dataimport.config.ConfigParseUtil verifyWithSchema
> INFO: The field :$skipDoc present in DataConfig does not have a counterpart 
> in Solr Schema
> Jan 30, 2013 11:25:42 AM 
> org.apache.solr.handler.dataimport.config.ConfigParseUtil verifyWithSchema
> INFO: The field :$deleteDocById present in DataConfig does not have a 
> counterpart in Solr Schema
> Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.DataImporter 
> loadDataConfig
> INFO: Data Configuration loaded successfully
> Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.DataImporter 
> doFullImport
> INFO: Starting Full Import
> Jan 30, 2013 11:25:42 AM 
> org.apache.solr.handler.dataimport.SimplePropertiesWriter 
> readIndexerProperties
> WARNING: Unable to read: 
> ${dataimporter.request.country_code}.dataimport.properties
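[Editor's note] "Variable expansion" here just means substituting ${...} placeholders before the value is used; the symptom above is the filename being consumed before substitution happens. A toy resolver shows the intended behavior (illustrative only, not Solr's implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExpandDemo {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace every ${key} with its value from props. Unknown keys are
    // left as-is, which is exactly the literal-filename symptom above.
    static String expand(String template, Map<String, String> props) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = props.get(m.group(1));
            m.appendReplacement(sb,
                Matcher.quoteReplacement(value != null ? value : m.group(0)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String t = "${dataimporter.request.country_code}.dataimport.properties";
        System.out.println(
            expand(t, Map.of("dataimporter.request.country_code", "gb")));
        // With the variable resolved: gb.dataimport.properties
    }
}
```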




[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701957#comment-13701957
 ] 

Robert Muir commented on SOLR-5002:
---

{quote}
Uwe Schindler, the change log is not correct. On branch_4x, under 4.4 bug fixes 
I can see SOLR-5002 but there is no such entry in the trunk change log. I 
noticed this because of a merge conflict.
{quote}

The change log was always correct.

Either your svn working copy or svn mirror is out of date...

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701958#comment-13701958
 ] 

Uwe Schindler commented on SOLR-5002:
-

It is identical on both branches, at least in my checkout and after "svn up". 
Maybe you used an older checkout?

trunk:
{noformat}
* SOLR-5002: optimize numDocs(Query,DocSet) when filterCache is null (Robert 
Muir)

Other Changes
--

...

* SOLR-4948, SOLR-5009: Tidied up CoreContainer construction logic.
  (Alan Woodward, Uwe Schindler, Steve Rowe)

==  4.3.1 ==
{noformat}

4.x:
{noformat}
* SOLR-5002: optimize numDocs(Query,DocSet) when filterCache is null (Robert 
Muir)

Other Changes
--

...

* SOLR-4948, SOLR-5009: Tidied up CoreContainer construction logic.
  (Alan Woodward, Uwe Schindler, Steve Rowe)

==  4.3.1 ==
{noformat}

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_25) - Build # 6470 - Failure!

2013-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6470/
Java: 64bit/jdk1.7.0_25 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Mon Jul 08 18:51:39 
ICT 2013

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Mon Jul 08 18:51:39 ICT 2013
at 
__randomizedtesting.SeedInfo.seed([417E0BE5EDA376CB:9AD50B23E88B1F78]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1507)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:811)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakCo

Re: Several builds hanging because of permgen

2013-07-08 Thread Dawid Weiss
Not much I can do from my side about permgen errors. There is really no way
to deal with these from within Java (the same process) -- you cannot
effectively handle anything because your own classes may not load at all.

Dawid

On Mon, Jul 8, 2013 at 1:35 PM, Uwe Schindler  wrote:

> Another one, this time on OSX:
>
> http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/617/
>
>
>
> -
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Robert Muir [mailto:rcm...@gmail.com]
> *Sent:* Sunday, July 07, 2013 11:15 PM
> *To:* dev@lucene.apache.org
> *Subject:* Re: Several builds hanging because of permgen
>
>
>
> When there were leaks from static classes, we added a checker to
> LuceneTestCase that looks for RAM > N and fails with debugging information.
>
> I wonder if some similar check is possible for this case (to make it
> easier than going thru heapdumps, and to find issues before crash-time)...
> 
>
> On Sun, Jul 7, 2013 at 4:10 PM, Uwe Schindler  wrote:
>
> Another one:
> http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6385/testReport/junit/junit.framework/TestSuite/org_apache_solr_request_SimpleFacetsTest/
>
> Had to be killed with kill -9
>
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -Original Message-
> > From: Uwe Schindler [mailto:u...@thetaphi.de]
>
> > Sent: Saturday, July 06, 2013 10:16 PM
> > To: dev@lucene.apache.org
> > Subject: RE: Several builds hanging because of permgen
> >
> > Another one:
> > http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6375/console
> >
> > I was only able to kill the JVM with kill -9. I am sure it's a horrible
> > slowdown!
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> > > -Original Message-
> > > From: Uwe Schindler [mailto:u...@thetaphi.de]
> > > Sent: Friday, July 05, 2013 3:59 PM
> > > To: dev@lucene.apache.org
> > > Subject: Several builds hanging because of permgen
> > >
> > > Several Jenkins builds now hang because of permgen. The runner JVM is
> > > dead (can only be killed by -9), last example:
> > >
> > > http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6360/console
> > >
> > > -
> > > Uwe Schindler
> > > H.-H.-Meier-Allee 63, D-28213 Bremen
> > > http://www.thetaphi.de
> > > eMail: u...@thetaphi.de
> > >
> > >
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
> > > additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> > commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
>


[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701976#comment-13701976
 ] 

Shalin Shekhar Mangar commented on SOLR-5002:
-

branch_4x solr/CHANGES.txt:

{code}
* SOLR-5002: Don't create multiple SolrResourceLoaders for same Solr home, 
wasting 
  resources and slowing down startup. This fixes the problem where the loader 
was
  not correctly closed, making tests fail on Windows.  (Steve Rowe, Uwe 
Schindler)
{code}

I see it in 
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x/solr/CHANGES.txt


> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this opto, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's 
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701984#comment-13701984
 ] 

Robert Muir commented on SOLR-5002:
---

guys please get your own issue! i will remove all these comments, they are 
inappropriate here!

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: [~thetaphi] - The wrong issue number has been added to the change log in 
branch_4x for this SolrResourceLoader commit. Also trunk doesn't seem to have 
this entry.)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701983#comment-13701983
 ] 

Robert Muir commented on SOLR-5002:
---

none of that has anything to do with this issue!!!

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: [~shalinmangar], that was already fixed by editing the commit message! I
removed the obsolete comment here!)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: [~thetaphi], the change log is not correct. On branch_4x, under 4.4 bug 
fixes I can see SOLR-5002 but there is no such entry in the trunk change log. I 
noticed this because of a merge conflict.)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: {quote}
Uwe Schindler, the change log is not correct. On branch_4x, under 4.4 bug fixes 
I can see SOLR-5002 but there is no such entry in the trunk change log. I 
noticed this because of a merge conflict.
{quote}

The change log was always correct.

Either your svn working copy or svn mirror is out of date...)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: It is identical on both branches, at least in my checkout and after
"svn up". Maybe you used an older checkout?

trunk:
{noformat}
* SOLR-5002: optimize numDocs(Query,DocSet) when filterCache is null (Robert Muir)

Other Changes
--

...

* SOLR-4948, SOLR-5009: Tidied up CoreContainer construction logic.
  (Alan Woodward, Uwe Schindler, Steve Rowe)

==  4.3.1 ==
{noformat}

4.x:
{noformat}
* SOLR-5002: optimize numDocs(Query,DocSet) when filterCache is null (Robert Muir)

Other Changes
--

...

* SOLR-4948, SOLR-5009: Tidied up CoreContainer construction logic.
  (Alan Woodward, Uwe Schindler, Steve Rowe)

==  4.3.1 ==
{noformat})

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: none of that has anything to do with this issue!!!)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: branch_4x solr/CHANGES.txt:

{code}
* SOLR-5002: Don't create multiple SolrResourceLoaders for same Solr home, wasting
  resources and slowing down startup. This fixes the problem where the loader was
  not correctly closed, making tests fail on Windows.  (Steve Rowe, Uwe Schindler)
{code}

I see it in 
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x/solr/CHANGES.txt
)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Issue Comment Deleted] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-5002:
--

Comment: was deleted

(was: guys please get your own issue! i will remove all these comments, they 
are inappropriate here!)

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




[jira] [Closed] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null

2013-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir closed SOLR-5002.
-

Assignee: Robert Muir

No more unrelated discussion here.

> optimize numDocs(Query,DocSet) when filterCache is null
> ---
>
> Key: SOLR-5002
> URL: https://issues.apache.org/jira/browse/SOLR-5002
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5002.patch
>
>
> getDocSet(Query, DocSet) has this optimization, but numDocs does not.
> Especially in this case, where we just want the intersection count, it's
> faster to do a filtered query with TotalHitCountCollector and not create 
> bitsets at all...
>  




Re: Looking for community guidance on SOLR-4872

2013-07-08 Thread Benson Margulies
Dear Lucene Community,

I note that this email has received no response, and the JIRA no further
discussion, since June 19th. As an occasional contributor to this
community, I think that this is unreasonable. My personal belief is that
the Apache Way calls for you to decide something here: use a vote if
needed. You might decide to do _nothing_ at all, but you won't just leave
me waving in the breeze. I suppose that _I_ could call a vote here, but as
a non-committer it would seem presumptuous of me. Also, I might not find a
committer willing to act on it.

Respectfully,

Benson



On Wed, Jun 19, 2013 at 8:49 AM, Benson Margulies wrote:

> I write to seek guidance from the dev community on SOLR-4872.
>
> This JIRA concerns lifecycle management for Solr schema components:
> tokenizers, token filters, and char filters.
>
> If you read the comments, you'll find three opinions from committers. What
> follows are précis: read the JIRA to get the details.
>
> Hoss is in favor of having close methods on these components and arranging
> to have them called when a schema is torn down. Hoss is opposed to allowing
> these objects to be SolrCoreAware.
>
> Yonik is opposed to having such close methods and prefers SolrCoreAware,
> or something like it, or letting component implementors use finalizers.
>
> Rob Muir thinks that there should be a fix to the related LUCENE-2145,
> which I see as complementary to this.
>
> So, here I am. I'm not a committer. I'm a builder of Solr plugins, and,
> from that standpoint, I think that there should be a lifecycle somehow,
> because I try to apply a general principle of avoiding finalizers, and
> because in some cases their unpredictable schedule can be a practical
> problem.
>
> Is there a committer in this community who is willing to work with me on
> this? As things are, I can't see how to proceed, since I'm suspended
> between two committers with apparently opposed views.
>
> I have already implemented what I think of as the hard part, and, indeed,
> the foundation of either approach. I have a close lifecycle that extends
> down to the IndexSchema object and the TokenizerChain. So it remains to
> decide whether that should in turn call ordinary close methods on the
> tokenizers, token filters, and char filters, or rather look for some
> optional lifecycle interface.
>
>
>


[jira] [Updated] (SOLR-5005) ScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5005:
-

Attachment: SOLR-5005.patch



* Only JS needs to be supported. We need only one language now; more should be
added only if there is a real pressing need.
* Scripts can be stored in files under "conf/script", or passed as a request
parameter.
* This is only intended for querying, and there are very simple helpers added
to make those things easy.

> ScriptRequestHandler
> 
>
> Key: SOLR-5005
> URL: https://issues.apache.org/jira/browse/SOLR-5005
> Project: Solr
>  Issue Type: New Feature
>Reporter: David Smiley
>Assignee: Noble Paul
> Attachments: patch, SOLR-5005.patch
>
>
> A user customizable script based request handler would be very useful.  It's 
> inspired from the ScriptUpdateRequestProcessor, but on the search end. A user 
> could write a script that submits searches to Solr (in-VM) and can react to 
> the results of one search before making another that is formulated 
> dynamically.  And it can assemble the response data, potentially reducing 
> both the latency and data that would move over the wire if this feature 
> didn't exist.  It could also be used to easily add a user-specifiable search 
> API at the Solr server with request parameters governed by what the user 
> wants to advertise -- especially useful within enterprises.  And, it could be 
> used to enforce security requirements on allowable parameter values to
> Solr, so a javascript based Solr client could be allowed to talk to only a 
> script based request handler which enforces the rules.




[jira] [Commented] (SOLR-5005) ScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701996#comment-13701996
 ] 

Noble Paul commented on SOLR-5005:
--

sample script

{code}
var requestParameterQ = param('q'); //or p('q') as a short form
var results = q({'qt': '/select','q':requestParameterQ});
r.add('myfirstscriptresults', results.get('results'));
{code}

> ScriptRequestHandler
> 
>
> Key: SOLR-5005
> URL: https://issues.apache.org/jira/browse/SOLR-5005
> Project: Solr
>  Issue Type: New Feature
>Reporter: David Smiley
>Assignee: Noble Paul
> Attachments: patch, SOLR-5005.patch
>
>
> A user customizable script based request handler would be very useful.  It's 
> inspired from the ScriptUpdateRequestProcessor, but on the search end. A user 
> could write a script that submits searches to Solr (in-VM) and can react to 
> the results of one search before making another that is formulated 
> dynamically.  And it can assemble the response data, potentially reducing 
> both the latency and data that would move over the wire if this feature 
> didn't exist.  It could also be used to easily add a user-specifiable search 
> API at the Solr server with request parameters governed by what the user 
> wants to advertise -- especially useful within enterprises.  And, it could be 
> used to enforce security requirements on allowable parameter values to
> Solr, so a javascript based Solr client could be allowed to talk to only a 
> script based request handler which enforces the rules.




[jira] [Created] (SOLR-5016) Spatial clustering/grouping

2013-07-08 Thread Jeroen Steggink (JIRA)
Jeroen Steggink created SOLR-5016:
-

 Summary: Spatial clustering/grouping
 Key: SOLR-5016
 URL: https://issues.apache.org/jira/browse/SOLR-5016
 Project: Solr
  Issue Type: Wish
  Components: spatial
Reporter: Jeroen Steggink
Priority: Minor


Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

The retrieved result would give the center of the cluster or the average location.
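To make the fixed-grid flavor of this wish concrete, here is a purely illustrative sketch (not an existing Solr API; the function name and cell size are invented): bucket (lat, lon) points into square grid cells and report each cell's centroid and point count.

```python
from collections import defaultdict

def grid_clusters(points, cell_deg=1.0):
    """Group (lat, lon) points into square grid cells of cell_deg degrees
    and return, per occupied cell, the centroid (average location) and the
    number of points that fell into it."""
    buckets = defaultdict(list)
    for lat, lon in points:
        # Floor-divide to find the cell this point belongs to.
        key = (int(lat // cell_deg), int(lon // cell_deg))
        buckets[key].append((lat, lon))
    clusters = []
    for pts in buckets.values():
        lat_c = sum(p[0] for p in pts) / len(pts)
        lon_c = sum(p[1] for p in pts) / len(pts)
        clusters.append({"center": (lat_c, lon_c), "count": len(pts)})
    return clusters

# Two nearby points share a cell; the third is on its own:
print(grid_clusters([(52.1, 4.3), (52.4, 4.8), (40.7, -74.0)]))
```

Region-based or fixed-cluster-count variants (e.g. k-means) would follow the same shape: a grouping key plus a per-group centroid.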

Jeroen





[jira] [Created] (SOLR-5017) Allow sharding based on the value of a field

2013-07-08 Thread Noble Paul (JIRA)
Noble Paul created SOLR-5017:


 Summary: Allow sharding based on the value of a field
 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul


We should be able to create a collection where sharding is done based on the 
value of a given field

collections will be created with numShards=n&shardField=fieldName

A new DocRouter should be added for this purpose.
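A field-based router of the sort envisioned here could look roughly like the following sketch (illustrative only; `shard_for` and the md5-based hash are assumptions, not the actual DocRouter API):

```python
import hashlib

def shard_for(field_value, num_shards):
    """Map a document's routing-field value to a shard index.

    Uses a stable hash (md5) so every node computes the same shard for the
    same field value, independent of per-process hash seeds.
    """
    digest = hashlib.md5(str(field_value).encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

# All documents sharing a field value land on the same shard:
assert shard_for("electronics", 4) == shard_for("electronics", 4)
print(shard_for("electronics", 4), shard_for("books", 4))
```

The key property is determinism: routing by field value co-locates all documents with that value, which is what distinguishes this from routing by unique ID.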




[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702011#comment-13702011
 ] 

ASF subversion and git services commented on SOLR-5010:
---

Commit 1500737 from [~gsingers]
[ https://svn.apache.org/r1500737 ]

SOLR-5010: add copy field support to the REST API

> Add REST support for Copy Fields
> 
>
> Key: SOLR-5010
> URL: https://issues.apache.org/jira/browse/SOLR-5010
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5010.patch, SOLR-5010.patch, SOLR-5010.patch
>
>
> Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
> to the PUT/POST with the name of the target to copy to.




[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702014#comment-13702014
 ] 

ASF subversion and git services commented on SOLR-5010:
---

Commit 1500744 from [~gsingers]
[ https://svn.apache.org/r1500744 ]

SOLR-5010: merge from trunk

> Add REST support for Copy Fields
> 
>
> Key: SOLR-5010
> URL: https://issues.apache.org/jira/browse/SOLR-5010
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5010.patch, SOLR-5010.patch, SOLR-5010.patch
>
>
> Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
> to the PUT/POST with the name of the target to copy to.




[jira] [Created] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5097:
-

 Summary: Add utility method to Analyzer: public final TokenStream 
tokenStream(String fieldName,String text)
 Key: LUCENE-5097
 URL: https://issues.apache.org/jira/browse/LUCENE-5097
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.3.1
Reporter: Uwe Schindler


It might be a good idea to remove tons of useless code from tests:
Most people use TokenStreams and Analyzers by only passing a String, wrapped by
a StringReader. It would make life easier if Analyzer had an additional
public (and final!!!) method that simply does the wrapping with StringReader by
itself. It might not even need to throw IOException (not sure).
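The convenience being proposed is just a final overload that hides the Reader wrapping. A rough Python analogue of the pattern (the class and method names here are stand-ins, not Lucene's API; `io.StringIO` plays the role of StringReader):

```python
import io

class Analyzer:
    """Minimal stand-in to illustrate the proposed convenience overload."""

    def token_stream(self, field_name, reader):
        # Existing entry point: callers must hand in a Reader-like object.
        # (A real analyzer builds a TokenStream; whitespace split suffices here.)
        return reader.read().split()

    def token_stream_from_text(self, field_name, text):
        # Proposed convenience: wrap the String in a Reader internally, so
        # callers never write the StringReader boilerplate themselves.
        return self.token_stream(field_name, io.StringIO(text))

print(Analyzer().token_stream_from_text("body", "hello token stream"))
```

In Java the overload would share a name with the Reader variant; it is split here only because Python lacks overloading.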




[jira] [Commented] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702024#comment-13702024
 ] 

Robert Muir commented on LUCENE-5097:
-

+1

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped
> by a StringReader. It would make life easier if Analyzer had an
> additional public (and final!!!) method that simply does the wrapping with
> StringReader by itself. It might not even need to throw IOException
> (not sure).




[jira] [Commented] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702026#comment-13702026
 ] 

Uwe Schindler commented on LUCENE-5097:
---

Another suggestion here:
Currently we have a crazy reusable reader in Field.java. This one could go
away; instead, the Analyzer would store a reusable reader in
TokenStreamComponents/the TS cache. Field.java would be simpler, as it would
just call this method to get the TS from a String field.

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped
> by a StringReader. It would make life easier if Analyzer had an
> additional public (and final!!!) method that simply does the wrapping with
> StringReader by itself. It might not even need to throw IOException
> (not sure).




[jira] [Commented] (SOLR-4465) Configurable Collectors

2013-07-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702039#comment-13702039
 ] 

Joel Bernstein commented on SOLR-4465:
--

Otis,

The implementation in this ticket is a POC to explore how pluggable collectors
could be used. I think the best mechanism for expanding collector functionality,
though, is through expanded uses of PostFilters.

In order to make this approach viable, two things need to be done. First,
grouping needs to be revamped so that it plays nicely with the PostFilter
framework. Second, in a distributed environment we need a way to merge the
output from PostFilters.

Here are three tickets that are likely to come out of these requirements:   

1) Create a field collapsing PostFilter. This will involve a small change to 
the PostFilter api so it might best be done in Solr 5. This PostFilter will 
handle only the collapsing part of the grouping functionality.

2) Add a Grouping search component to handle the rest of the grouping 
functionality. This component will work with the collapsed docList generated by 
the field collapsing PostFilter. Breaking up the grouping functionality like 
this should make it more flexible and easier to maintain.

3) Add a Search component that allows for pluggable merging of output from 
shards. This would allow aggregating PostFilters to be developed and used with 
distributed search. It would also likely allow custom ranking collectors to be 
inserted through the PostFilter mechanism.

> Configurable Collectors
> ---
>
> Key: SOLR-4465
> URL: https://issues.apache.org/jira/browse/SOLR-4465
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.1
>Reporter: Joel Bernstein
> Fix For: 4.4
>
> Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch
>
>
> This ticket provides a patch to add pluggable collectors to Solr. This patch 
> was generated and tested with Solr 4.1.
> This is how the patch functions:
> Collectors are plugged into Solr in the solrconfig.xml using the new 
> collectorFactory element. For example:
> 
> 
> The elements above define two collector factories. The first one is the 
> "default" collectorFactory. The class attribute points to 
> org.apache.solr.handler.component.CollectorFactory, which implements logic 
> that returns the default TopScoreDocCollector and TopFieldCollector. 
> To create your own collectorFactory you must subclass the default 
> CollectorFactory and at a minimum override the getCollector method to return 
> your new collector. 
> The parameter "cl" turns on pluggable collectors:
> cl=true
> If cl is not in the parameters, Solr will automatically use the default 
> collectorFactory.
> *Pluggable Doclist Sorting With the Docs Collector*
> You can specify two types of pluggable collectors. The first type is the docs 
> collector. For example:
> cl.docs=
> The above param points to a named collectorFactory in the solrconfig.xml to 
> construct the collector. The docs collectorFactories must return a collector 
> that extends the TopDocsCollector base class. Docs collectors are responsible 
> for collecting the doclist.
> You can specify only one docs collector per query.
> You can pass parameters to the docs collector using local params syntax. For 
> example:
> cl.docs=\{! sort=mycustomsort\}mycollector
> If cl=true and a docs collector is not specified, Solr will use the default 
> collectorFactory to create the docs collector.
> *Pluggable Custom Analytics With Delegating Collectors*
> You can also specify any number of custom analytic collectors with the 
> "cl.analytic" parameter. Analytic collectors are designed to collect 
> something else besides the doclist. Typically this would be some type of 
> custom analytic. For example:
> cl.analytic=sum
> The parameter above specifies an analytic collector named sum. Like the docs 
> collectors, "sum" points to a named collectorFactory in the solrconfig.xml. 
> You can specify any number of analytic collectors by adding additional 
> cl.analytic parameters.
> Analytic collector factories must return Collector instances that extend 
> DelegatingCollector. 
> A sample analytic collector is provided in the patch through the 
> org.apache.solr.handler.component.SumCollectorFactory.
> This collectorFactory provides a very simple DelegatingCollector that groups 
> by a field and sums a column of floats. The sum collector is not designed to 
> be a fully functional sum function but to be a proof of concept for pluggable 
> an
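The factory pattern the ticket describes (subclass the default CollectorFactory, override getCollector) can be sketched with stub types. These are illustrative stand-ins, not the actual Solr classes, which live in org.apache.solr.handler.component and take query/sort context:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for Solr's pluggable-collector types.
interface Collector {
    void collect(int doc);
}

// Default factory: in real Solr this would return a TopScoreDocCollector
// or TopFieldCollector depending on the request.
class CollectorFactory {
    public Collector getCollector() {
        return new DocListCollector();
    }
}

// Toy "docs" collector that just records doc ids in order.
class DocListCollector implements Collector {
    final List<Integer> docs = new ArrayList<>();
    public void collect(int doc) { docs.add(doc); }
}

// Custom factory: subclass the default and override getCollector(),
// mirroring the "at a minimum override the getCollector method" step.
class SumCollectorFactory extends CollectorFactory {
    @Override
    public Collector getCollector() {
        return new SumCollector();
    }
}

// Toy "analytic" collector that sums a value per collected doc,
// loosely in the spirit of the SumCollectorFactory proof of concept.
class SumCollector implements Collector {
    double sum = 0;
    public void collect(int doc) { sum += doc; }
}
```

A named factory in solrconfig.xml would then be resolved to one of these factories at query time via the cl.docs/cl.analytic parameters.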

[jira] [Comment Edited] (SOLR-4465) Configurable Collectors

2013-07-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702039#comment-13702039
 ] 

Joel Bernstein edited comment on SOLR-4465 at 7/8/13 2:52 PM:
--

Otis,

The implementation in this ticket is a POC to explore how pluggable collectors 
could be used. I think the best mechanism for expanding collector functionality 
though is through expanded use of PostFilters.

In order to make this approach viable two things need to be done. First, 
grouping needs to be revamped so that it plays nicely with the PostFilter 
framework. Second, in a distributed environment we need a way to merge the 
output from PostFilters. 

Here are three tickets that are likely to come out of these requirements:   

1) Create a field collapsing PostFilter. This will involve a small change to 
the PostFilter api so it might best be done in Solr 5. This PostFilter will 
handle only the collapsing part of the grouping functionality.

2) Add a Grouping search component to handle the rest of the grouping 
functionality. This component will work with the collapsed docList generated by 
the field collapsing PostFilter. Breaking up the grouping functionality like 
this should make it more flexible and easier to maintain.

3) Add a Search component that allows for pluggable merging of output from 
shards. This would allow aggregating PostFilters to be developed and used with 
distributed search. It would also likely allow custom ranking collectors to be 
inserted through the PostFilter mechanism.







  was (Author: joel.bernstein):
Otis,

The implementation in this ticket is a POC to explore how pluggable collectors 
could be used. I think the best mechanism for expanding collector functionality 
though is through expanded uses of PostFilters.

In order to make this approach viable two things need to be done. First, 
grouping needs to be revamped so that it plays nicely with the PostFilter 
framework. Second, in a distributed environment we need a way to merge the 
output from PostFilters. 

Here are three tickets that are likely to come out of these requirements:   

1) Create a field collapsing PostFilter. This will involve a small change to 
the PostFilter api so it might best be done in Solr 5. This PostFilter will 
handle only the collapsing part of the grouping functionality.

2) Add a Grouping search component to handle the rest of the grouping 
functionality. This component will work with the collapsed docList generated by 
the field collapsing PostFilter. Breaking up the grouping functionality like 
this should make it more flexible and easier to maintain.

3) Add a Search component that allows for pluggable merging of output from 
shards. This would allow aggregating PostFilters to be developed and used with 
distributed search. It would also likely allow custom ranking collectors to be 
inserted through the PostFilter mechanism.






  
> Configurable Collectors
> ---
>
> Key: SOLR-4465
> URL: https://issues.apache.org/jira/browse/SOLR-4465
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.1
>Reporter: Joel Bernstein
> Fix For: 4.4
>
> Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch
>
>
> This ticket provides a patch to add pluggable collectors to Solr. This patch 
> was generated and tested with Solr 4.1.
> This is how the patch functions:
> Collectors are plugged into Solr in the solrconfig.xml using the new 
> collectorFactory element. For example:
> 
> 
> The elements above define two collector factories. The first one is the 
> "default" collectorFactory. The class attribute points to 
> org.apache.solr.handler.component.CollectorFactory, which implements logic 
> that returns the default TopScoreDocCollector and TopFieldCollector. 
> To create your own collectorFactory you must subclass the default 
> CollectorFactory and at a minimum override the getCollector method to return 
> your new collector. 
> The parameter "cl" turns on pluggable collectors:
> cl=true
> If cl is not in the parameters, Solr will automatically use the default 
> collectorFactory.
> *Pluggable Doclist Sorting With the Docs Collector*
> You can specify two types of pluggable collectors. The first type is the docs 
> collector. For example:
> cl.docs=
> The above param points to a named collectorFactory in the solrconfig.xml to 
> construct the collector. The docs collectorFactories must return a collector 
> that extends the TopDocsCollector base class. Docs

[jira] [Comment Edited] (SOLR-4465) Configurable Collectors

2013-07-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702039#comment-13702039
 ] 

Joel Bernstein edited comment on SOLR-4465 at 7/8/13 2:52 PM:
--

Otis,

The implementation in this ticket is a POC to explore how pluggable collectors 
could be used. I think the best mechanism for expanding collector functionality 
though is through expanded uses of PostFilters.

In order to make this approach viable two things need to be done. First, 
grouping needs to be revamped so that it plays nicely with the PostFilter 
framework. Second, in a distributed environment we need a way to merge the 
output from PostFilters. 

Here are three tickets that are likely to come out of these requirements:   

1) Create a field collapsing PostFilter. This will involve a small change to 
the PostFilter api so it might best be done in Solr 5. This PostFilter will 
handle only the collapsing part of the grouping functionality.

2) Add a Grouping search component to handle the rest of the grouping 
functionality. This component will work with the collapsed docList generated by 
the field collapsing PostFilter. Breaking up the grouping functionality like 
this should make it more flexible and easier to maintain.

3) Add a Search component that allows for pluggable merging of output from 
shards. This would allow aggregating PostFilters to be developed and used with 
distributed search. It would also likely allow custom ranking collectors to be 
inserted through the PostFilter mechanism.







  was (Author: joel.bernstein):
Otis,

The implementation in this ticket is a POC to explore how pluggable collectors 
plugged could be used. I think the best mechanism for expanding collector 
functionality though is through expanded uses of PostFilters.

In order to make this approach viable two things need to be done. First, 
grouping needs to be revamped so that it plays nicely with the PostFilter 
framework. Second, in a distributed environment we need a way to merge the 
output from PostFilters. 

Here are three tickets that are likely to come out of these requirements:   

1) Create a field collapsing PostFilter. This will involve a small change to 
the PostFilter api so it might best be done in Solr 5. This PostFilter will 
handle only the collapsing part of the grouping functionality.

2) Add a Grouping search component to handle the rest of the grouping 
functionality. This component will work with the collapsed docList generated by 
the field collapsing PostFilter. Breaking up the grouping functionality like 
this should make it more flexible and easier to maintain.

3) Add a Search component that allows for pluggable merging of output from 
shards. This would allow aggregating PostFilters to be developed and used with 
distributed search. It would also likely allow custom ranking collectors to be 
inserted through the PostFilter mechanism.






  
> Configurable Collectors
> ---
>
> Key: SOLR-4465
> URL: https://issues.apache.org/jira/browse/SOLR-4465
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.1
>Reporter: Joel Bernstein
> Fix For: 4.4
>
> Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
> SOLR-4465.patch, SOLR-4465.patch
>
>
> This ticket provides a patch to add pluggable collectors to Solr. This patch 
> was generated and tested with Solr 4.1.
> This is how the patch functions:
> Collectors are plugged into Solr in the solrconfig.xml using the new 
> collectorFactory element. For example:
> 
> 
> The elements above define two collector factories. The first one is the 
> "default" collectorFactory. The class attribute points to 
> org.apache.solr.handler.component.CollectorFactory, which implements logic 
> that returns the default TopScoreDocCollector and TopFieldCollector. 
> To create your own collectorFactory you must subclass the default 
> CollectorFactory and at a minimum override the getCollector method to return 
> your new collector. 
> The parameter "cl" turns on pluggable collectors:
> cl=true
> If cl is not in the parameters, Solr will automatically use the default 
> collectorFactory.
> *Pluggable Doclist Sorting With the Docs Collector*
> You can specify two types of pluggable collectors. The first type is the docs 
> collector. For example:
> cl.docs=
> The above param points to a named collectorFactory in the solrconfig.xml to 
> construct the collector. The docs collectorFactories must return a collector 
> that extends the TopDocsCollector base cl

[jira] [Created] (SOLR-5018) The Overseer should avoid publishing the state for collections that do not exist under the /collections zk node.

2013-07-08 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5018:
-

 Summary: The Overseer should avoid publishing the state for 
collections that do not exist under the /collections zk node.
 Key: SOLR-5018
 URL: https://issues.apache.org/jira/browse/SOLR-5018
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4


In a 'stormy' env, a state might get published after a collection delete - 
bringing back a zombie collection into clusterstate.json. The Overseer should 
defend against this by refusing to publish a state if it cannot find the 
collection in zk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4898) Flesh out the Schema REST API

2013-07-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702051#comment-13702051
 ] 

Steve Rowe commented on SOLR-4898:
--

bq. Steve Rowe What's involved w/ adding copy fields? Do you have anything 
under way there?

I see SOLR-5010 is already committed, so you figured some stuff out by yourself 
:).  I had not done any work on this yet.

I'll comment on SOLR-5010.

> Flesh out the Schema REST API
> -
>
> Key: SOLR-4898
> URL: https://issues.apache.org/jira/browse/SOLR-4898
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Affects Versions: 4.4
>Reporter: Steve Rowe
>
> As of Solr 4.4, the Solr Schema Rest API provides read access to all schema 
> elements (SOLR-4503, SOLR-4658, SOLR-4537, SOLR-4623), and the ability to 
> dynamically add new fields (SOLR-3251).  See the wiki for documentation: 
> [http://wiki.apache.org/solr/SchemaRESTAPI].
> This is an umbrella issue to capture all future additions to the schema REST 
> API, including:
> # adding dynamic fields
> # adding field types
> # adding copy field directives
> # enabling wholesale replacement by PUTing a new schema.
> # modifying fields, dynamic fields, field types, and copy field directives
> # removing fields, dynamic fields, field types, and copy field directives
> # modifying all remaining aspects of the schema: Name, Version, Unique Key, 
> Global Similarity, and Default Query Operator
> I think the first three will be the easiest.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702055#comment-13702055
 ] 

Steve Rowe commented on SOLR-5010:
--

[~gsingers], I think your copyFields implementation is missing an important 
piece of functionality: copying one existing field to another, without adding 
any new fields, i.e. PUT to /schema/copyFields

In fact, I'm not convinced that tacking copyFields directives onto 
/schema/fields is the right way to go, since we have to have the ability to PUT 
to /schema/copyFields for the not-adding-fields case anyway: it needlessly 
complicates the API and the code.

I'll work on a patch.

> Add REST support for Copy Fields
> 
>
> Key: SOLR-5010
> URL: https://issues.apache.org/jira/browse/SOLR-5010
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5010.patch, SOLR-5010.patch, SOLR-5010.patch
>
>
> Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
> to the PUT/POST with the name of the target to copy to.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5097:
--

Attachment: LUCENE-5097.patch

Quick patch for demonstration purposes:

- Moved the ReusableStringReader out of Field.java to the analysis package 
(pkg-private - could also be an inner class in Analyzer; I did this because I 
wanted a separate test)
- added a second tokenStream method that lazily inits the reusable reader and 
stores it in a hidden transient field of TokenStreamComponents

This is all still a little bit hackish, but shows my idea. By this you can 
reuse the StringReader (without synchronization cost) and we don't need extra 
code in Field.java handling the field reuse.
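The reuse idea can be sketched with plain java.io types. This is a simplified stand-in for the pkg-private ReusableStringReader in the patch, not the actual Lucene implementation: setValue() resets the reader for a new string, so one instance can be recycled instead of allocating a fresh StringReader per call.

```java
import java.io.Reader;

// Simplified sketch of a reusable String-backed Reader. Call setValue()
// before each use; close() drops the reference so the String can be GC'd.
final class ReusableStringReader extends Reader {
    private String s;
    private int pos;

    void setValue(String s) {
        this.s = s;
        this.pos = 0;
    }

    @Override
    public int read() {
        return pos < s.length() ? s.charAt(pos++) : -1;
    }

    @Override
    public int read(char[] cbuf, int off, int len) {
        if (pos >= s.length()) return -1;           // exhausted
        int n = Math.min(len, s.length() - pos);    // chars remaining
        s.getChars(pos, pos + n, cbuf, off);
        pos += n;
        return n;
    }

    @Override
    public void close() {
        s = null; // release the string; setValue() must be called before reuse
    }
}
```

A tokenStream(String fieldName, String text) utility would then set the new value on the cached reader and hand it to the existing Reader-based code path, avoiding both the per-call allocation and StringReader's lock overhead.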

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
> Attachments: LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier, if Analyzer would have an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might maybe not even need to throw IOException 
> (not sure)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-08 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702064#comment-13702064
 ] 

Grant Ingersoll commented on SOLR-5010:
---

First part of this is in: you can set up copy fields when creating new fields.  
Next would be to add new copy fields for existing fields.

> Add REST support for Copy Fields
> 
>
> Key: SOLR-5010
> URL: https://issues.apache.org/jira/browse/SOLR-5010
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5010.patch, SOLR-5010.patch, SOLR-5010.patch
>
>
> Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
> to the PUT/POST with the name of the target to copy to.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702066#comment-13702066
 ] 

Noble Paul commented on SOLR-4221:
--

The working code is posted here https://github.com/shalinmangar/lucene-solr . 
We will start moving it to SVN after 4.4 branching

> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Attachments: SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-08 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702067#comment-13702067
 ] 

Grant Ingersoll commented on SOLR-5010:
---

bq. In fact, I'm not convinced that tacking copyFields directives onto 
/schema/fields is the right way to go, since we have to have the ability to PUT 
to /schema/copyFields for the not-adding-fields case anyway: it needlessly 
complicates the API and the code.

We can refactor it to be shared (although it is a pretty minimal amount of 
code) across fields and /copyFields.  I do think being able to specify it when 
adding is useful, as it can all be done as part of that single call and is very 
straightforward to do.

> Add REST support for Copy Fields
> 
>
> Key: SOLR-5010
> URL: https://issues.apache.org/jira/browse/SOLR-5010
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5010.patch, SOLR-5010.patch, SOLR-5010.patch
>
>
> Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
> to the PUT/POST with the name of the target to copy to.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5019) spurious ConcurrentModificationException with spell check component

2013-07-08 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-5019:
--

 Summary: spurious ConcurrentModificationException with spell check 
component
 Key: SOLR-5019
 URL: https://issues.apache.org/jira/browse/SOLR-5019
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley
Priority: Minor
 Fix For: 5.0, 4.4


ConcurrentModificationException with spell check component
http://markmail.org/message/bynajxhgzi2wyhx5

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5097:
--

Attachment: LUCENE-5097.patch

New patch making BaseTokenStreamTestCase use this method

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
> Attachments: LUCENE-5097.patch, LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier, if Analyzer would have an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might maybe not even need to throw IOException 
> (not sure)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702081#comment-13702081
 ] 

Robert Muir commented on LUCENE-5097:
-

I just took a quick glance and this looks fantastic

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
> Attachments: LUCENE-5097.patch, LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier, if Analyzer would have an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might maybe not even need to throw IOException 
> (not sure)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5018) The Overseer should avoid publishing the state for collections that do not exist under the /collections zk node.

2013-07-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5018:
--

Attachment: SOLR-5018.patch

Simple patch attached.

> The Overseer should avoid publishing the state for collections that do not 
> exist under the /collections zk node.
> 
>
> Key: SOLR-5018
> URL: https://issues.apache.org/jira/browse/SOLR-5018
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5018.patch
>
>
> In a 'stormy' env, a state might get published after a collection delete - 
> bringing back a zombie collection into clusterstate.json. The Overseer should 
> defend against this by refusing to publish a state if it cannot find the 
> collection in zk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #381: POMs out of sync

2013-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/381/

2 tests failed.
FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=6527, name=recoveryCmdExecutor-3707-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=6527, name=recoveryCmdExecutor-3707-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([EFBA99A9DA9407BE]:0)


FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=6527, name=recoveryCmdExecutor-3707-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.

[jira] [Commented] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702086#comment-13702086
 ] 

ASF subversion and git services commented on LUCENE-3069:
-

Commit 1500814 from [~billy]
[ https://svn.apache.org/r1500814 ]

LUCENE-3069: reader part, update logic in outputs

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 4.4
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.
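To make the idea concrete, here is a toy sketch of a fully memory-resident term dictionary, with a sorted map standing in for the FST (the real work uses a custom fst.Outputs implementation; every name below is illustrative, not Lucene's actual API):

```java
import java.util.TreeMap;

// Toy stand-in for a memory-resident term dictionary: every term maps
// directly to all the metadata a reader needs, so a term lookup never
// scans a delta-coded file on disk. A real implementation would encode
// TermMeta into custom FST outputs instead of using a TreeMap.
class MemoryTermDict {
    // Per-term postings metadata bundled into the "output".
    record TermMeta(int docFreq, long totalTermFreq, long postingsFilePointer) {}

    private final TreeMap<String, TermMeta> dict = new TreeMap<>();

    void add(String term, TermMeta meta) {
        dict.put(term, meta);
    }

    // Exact lookup: all information is resident in memory.
    TermMeta lookup(String term) {
        return dict.get(term);
    }
}
```

The trade-off is the one the issue states: this only pays off in environments with enough memory to keep the whole dictionary resident.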

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5017) Allow sharding based on the value of a field

2013-07-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5017:
-

Description: 
We should be able to create a collection where sharding is done based on the 
value of a given field

collections can be created with shardField=fieldName, which will be persisted 
in DocCollection in ZK

implicit DocRouter would look at this field instead of _shard_ field

CompositeIdDocRouter can also use this field instead of looking at the id 
field. 


  was:
We should be able to create a collection where sharding is done based on the 
value of a given field

collections will be created with numShards=n&shardField=fieldName

A new DocRouter should be added for the same


> Allow sharding based on the value of a field
> 
>
> Key: SOLR-5017
> URL: https://issues.apache.org/jira/browse/SOLR-5017
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> We should be able to create a collection where sharding is done based on the 
> value of a given field
> collections can be created with shardField=fieldName, which will be persisted 
> in DocCollection in ZK
> implicit DocRouter would look at this field instead of _shard_ field
> CompositeIdDocRouter can also use this field instead of looking at the id 
> field. 




[jira] [Commented] (SOLR-5005) ScriptRequestHandler

2013-07-08 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702101#comment-13702101
 ] 

Jack Krupansky commented on SOLR-5005:
--

There are several distinct use cases I am interested in:

1. A simple pre/post-query script to wrap a normal query. I think that's what 
the initial patch focused on.

2. A replacement for full query processing that has hooks to get at all of the 
pieces of query processing. This could include multi-query processing.

3. An arbitrary "script request processor" that is not tied to either query or 
update handling. This could be a simple hello world, or could be a combination 
of query and update. For example, emulate an atomic update with intelligent 
logic.

4. A long-running, asynchronous version of #3. For example, add a field value 
to every existing document. One request to start it, a request to check its 
status, a request to pause/resume/abort it, and some way to send a message to 
indicate when it completes.

The script handler configuration should have "defaults" to configure the script 
parameters but also allow overrides on the request.
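The defaults-plus-override lookup described above could be sketched like this (hypothetical names, not Solr's actual NamedList/SolrParams API):

```java
import java.util.HashMap;
import java.util.Map;

class ScriptHandlerParams {
    // Handler-level "defaults" come from configuration; request-time
    // parameters win on conflict, as the comment above suggests.
    static Map<String, String> resolve(Map<String, String> defaults,
                                       Map<String, String> request) {
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(request); // request values override configured defaults
        return merged;
    }
}
```

For example, resolving defaults `{script=search.js, rows=10}` against a request carrying `rows=25` yields `{script=search.js, rows=25}`.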


> ScriptRequestHandler
> 
>
> Key: SOLR-5005
> URL: https://issues.apache.org/jira/browse/SOLR-5005
> Project: Solr
>  Issue Type: New Feature
>Reporter: David Smiley
>Assignee: Noble Paul
> Attachments: patch, SOLR-5005.patch
>
>
> A user customizable script based request handler would be very useful.  It's 
> inspired from the ScriptUpdateRequestProcessor, but on the search end. A user 
> could write a script that submits searches to Solr (in-VM) and can react to 
> the results of one search before making another that is formulated 
> dynamically.  And it can assemble the response data, potentially reducing 
> both the latency and data that would move over the wire if this feature 
> didn't exist.  It could also be used to easily add a user-specifiable search 
> API at the Solr server with request parameters governed by what the user 
> wants to advertise -- especially useful within enterprises.  And, it could be 
> used to enforce security requirements on allowable parameter valuables to 
> Solr, so a javascript based Solr client could be allowed to talk to only a 
> script based request handler which enforces the rules.




4.4 release planning

2013-07-08 Thread Steve Rowe
As I mentioned a week ago, I plan on branching for 4.4 today, likely late in 
the day (UTC+4).

If all goes well, I'll cut an RC in one week, on July 15th.



[jira] [Assigned] (SOLR-5019) spurious ConcurrentModificationException with spell check component

2013-07-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-5019:
--

Assignee: Yonik Seeley

> spurious ConcurrentModificationException with spell check component
> ---
>
> Key: SOLR-5019
> URL: https://issues.apache.org/jira/browse/SOLR-5019
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 5.0, 4.4
>
>
> ConcurrentModificationException with spell check component
> http://markmail.org/message/bynajxhgzi2wyhx5




[jira] [Commented] (SOLR-5005) ScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702110#comment-13702110
 ] 

Noble Paul commented on SOLR-5005:
--

bq. A simple pre/post-query script to wrap a normal query
It is possible to do pre-, post-, and in-between operations with this patch; in 
fact it does not differentiate between these ops. The query also has to be fired 
by the script; the handler just provides helper methods to make that easy.

bq. An arbitrary "script request processor" that is not tied to either query or 
update handling

I wanted the handler to have more fine-grained control, so that operations 
personnel can determine which operations are allowed: a poor man's ACL.

The first and most basic level is query-only. The next level would be 
query-and-update, and probably a higher level which gives all the access a Java 
RequestHandler gets.

bq. A long-running, asynchronous version of #3

I can see that. It should be a simple request param to make it async. The only 
extra thing you would need is a status/kill command.


All said and done, I wish to see some real use cases and support them in a very 
simple manner. It should look like a simple recipe for the most common 
use cases.






[jira] [Comment Edited] (SOLR-5005) ScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701996#comment-13701996
 ] 

Noble Paul edited comment on SOLR-5005 at 7/8/13 4:49 PM:
--

sample script

{code}
var requestParameterQuery = param('query'); //or p('query') as a short form
var results = q({'qt': '/select','q':requestParameterQuery}); // or inline this 
as q({'qt': '/select','q':p('query')})
r.add('myfirstscriptresults', results.get('results'));
{code}

  was (Author: noble.paul):
sample script

{code}
var requestParameterQ = param('q'); //or p('q') as a short form
var results = q({'qt': '/select','q':requestParameterQ});
r.add('myfirstscriptresults', results.get('results'));
{code}
  




[jira] [Updated] (SOLR-5005) JavaScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5005:
-

Summary: JavaScriptRequestHandler  (was: ScriptRequestHandler)





[jira] [Comment Edited] (SOLR-5005) JavaScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701996#comment-13701996
 ] 

Noble Paul edited comment on SOLR-5005 at 7/8/13 4:55 PM:
--

sample script

{code}
var requestParameterQuery = param('query'); //or p('query') as a short form
var results = q({'qt': '/select','q':requestParameterQuery}); // or inline this 
as q({'qt': '/select','q':p('query')})
r.add('myfirstscriptresults', results.get('results')); // r is the SolrQueryResponse object
// you may run more queries
{code}

  was (Author: noble.paul):
sample script

{code}
var requestParameterQuery = param('query'); //or p('q') as a short form
var results = q({'qt': '/select','q':requestParameterQuery}); // or inline this 
as q({'qt': '/select','q':p('query')})
r.add('myfirstscriptresults', results.get('results'));
{code}
  




[jira] [Created] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-08 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-5020:


 Summary: Add final() method to DelegatingCollector
 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0


This issue adds a final() method to the DelegatingCollector class so that it 
can be notified when collection is complete. 

The current collect() method assumes that the delegating collector will either 
forward on the document or not with each call. The final() method will allow 
DelegatingCollectors to have more sophisticated behavior.

For example a Field Collapsing delegating collector could collapse the 
documents as the collect() method is being called. Then when the final() method 
is called it could pass the collapsed documents to the delegate collectors.

This would allow grouping to be implemented within the PostFilter framework.
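A toy model of the proposed hook (names are illustrative, not Solr's actual Collector API; the method is called finish() here since `final` is a reserved word in Java):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A delegating collector that buffers documents during collect() and only
// forwards survivors once finish() signals that collection is complete.
// Here the "delegate" is just a list of doc ids; in Solr it would be the
// downstream collector chain.
class CollapsingCollector {
    private final List<Integer> delegate;                  // downstream sink
    private final Map<String, Integer> bestPerKey = new HashMap<>();

    CollapsingCollector(List<Integer> delegate) {
        this.delegate = delegate;
    }

    // Collapse as documents arrive: keep only the highest doc id per key,
    // instead of eagerly forwarding every document.
    void collect(int docId, String collapseKey) {
        bestPerKey.merge(collapseKey, docId, Math::max);
    }

    // The proposed final()-style hook: flush the collapsed docs downstream.
    void finish() {
        delegate.addAll(bestPerKey.values());
    }
}
```

For example, collecting docs 1 and 2 with key "a" and doc 3 with key "b", then calling finish(), forwards only docs 2 and 3 to the delegate.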




[jira] [Comment Edited] (SOLR-5005) JavaScriptRequestHandler

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701995#comment-13701995
 ] 

Noble Paul edited comment on SOLR-5005 at 7/8/13 4:57 PM:
--


* Only JS is supported. We need only one language for now, and should add more 
only if there is a real pressing need
* Scripts can be stored in files under "conf/script", or the script can be 
passed as a request parameter
* This is intended only for querying, and there are very simple helpers added 
to make those things easy

  was (Author: noble.paul):


* Only JS needs supported. We need only one language now. Need to add more only 
if there is a real pressing need
* Can store the scripts in files "conf/script" or it can be passed as a request 
parameter
* This is only intended to query stuff. And there are very simple helpers added 
to make those things easy
  




[jira] [Updated] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5020:
-

Attachment: SOLR-5020.patch

Patch for review.





[jira] [Updated] (SOLR-5017) Allow sharding based on the value of a field

2013-07-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5017:
-

Description: 
We should be able to create a collection where sharding is done based on the 
value of a given field

collections can be created with shardField=fieldName, which will be persisted 
in DocCollection in ZK

implicit DocRouter would look at this field instead of _shard_ field

CompositeIdDocRouter can also use this field instead of looking at the id 
field. 


  was:
We should be able to create a collection where sharding is done based on the 
value of a given field

collections can be created with shardField=fieldName, which will be persisted 
in DocCOllection in ZK

implicit DocRouter would look at this field instead of _shard_ field

CompositeIdDocRouter can also use this field instead of looking at the id 
field. 







[jira] [Commented] (SOLR-4685) JSON response write modification to support RAW JSON

2013-07-08 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702133#comment-13702133
 ] 

Jack Krupansky commented on SOLR-4685:
--

To me, this feels more like a hack than a carefully designed feature. Or 
maybe it's just the hard-wired field naming convention, or that this is 
JSON-specific: don't XML, PHP, Ruby, etc., have a similar issue?

Maybe it would be better if you simply had a custom, application-specific 
response writer.

Or, why not just have the client parse JSON string values? I mean, that is easy 
to do in both Java and JavaScript, right?

So, I'm missing out on why this should be a feature of Solr. I mean, Solr's 
responsibility is to return the values of the fields, not format them in an 
application-specific manner.

> JSON response write modification to support RAW JSON
> 
>
> Key: SOLR-4685
> URL: https://issues.apache.org/jira/browse/SOLR-4685
> Project: Solr
>  Issue Type: Improvement
>Reporter: Bill Bell
>Assignee: Erik Hatcher
> Attachments: SOLR-4685.1.patch
>
>
> If the field ends with "_json" allow the field to return raw JSON.
> For example the field,
> office_json -- string
> I already put raw JSON, already escaped, into the field. I want it to come 
> back with no double quotes and not escaped.




[jira] [Updated] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5097:
--

Attachment: LUCENE-5097.patch

Patch removing all new StringReader(...) where not needed. It got huge, so I 
want to commit this ASAP, once tests are done.

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
> Attachments: LUCENE-5097.patch, LUCENE-5097.patch, LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier, if Analyzer would have an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might not even need to throw IOException
> (not sure)




[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-08 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702141#comment-13702141
 ] 

Jack Krupansky commented on SOLR-5017:
--

Some clarification is needed:

1. Is this simply telling SolrCloud to use a different field for the key to be 
sharded? With no additional semantics?

2. Or, is this saying that all documents with a particular value in that field 
will be guaranteed to be in the same shard (e.g., so that grouping works 
properly)?

I'm hoping it is the latter.

Thanks.






Re: 4.4 release planning

2013-07-08 Thread Jack Krupansky

+1

-- Jack Krupansky

-Original Message- 
From: Steve Rowe

Sent: Monday, July 08, 2013 12:37 PM
To: dev@lucene.apache.org
Subject: 4.4 release planning

As I mentioned a week ago, I plan on branching for 4.4 today, likely late in 
the day (UTC+4).


If all goes well, I'll cut an RC in one week, on July 15th.



[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702152#comment-13702152
 ] 

Noble Paul commented on SOLR-5017:
--

Jack, I think I got you partially.

Yes, docs with the same value in a field WILL go to the same shard.

In the case of the 'implicit' router there is a 1:1 mapping between the field 
value and the shard.

In the case of the compositeId router there will be an n:1 mapping between the 
field value and the shard.
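A rough sketch of the two mappings (a hypothetical helper, not Solr's actual DocRouter code):

```java
// 'implicit' routing is 1:1 (the field value names the shard), while
// compositeId-style routing is n:1 (many values hash into a fixed number
// of shards). Either way, docs sharing a field value route identically.
class FieldValueRouter {
    // implicit: the field value itself selects the shard
    static String implicitShard(String fieldValue) {
        return fieldValue;
    }

    // compositeId-style: hash the field value into numShards buckets
    static int hashShard(String fieldValue, int numShards) {
        return Math.floorMod(fieldValue.hashCode(), numShards);
    }
}
```

Because hashShard is deterministic, every document carrying the same field value lands on the same shard, which is what makes grouping on that field work per shard.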


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4754) solrcloud does not detect an implicit "host" and does not provide clear error using 4x example

2013-07-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702162#comment-13702162
 ] 

Steve Rowe commented on SOLR-4754:
--

[~markrmil...@gmail.com], do you plan on doing anything for this issue for 4.4? 
 If you don't (and nobody else does), I'll change priority from Blocker to 
Major and move it to 4.5.

> solrcloud does not detect an implicit "host" and does not provide clear error 
> using 4x example
> --
>
> Key: SOLR-4754
> URL: https://issues.apache.org/jira/browse/SOLR-4754
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 4.4
>
>
> Testing out the 4.3.0 RC3, I tried to run through the SolrCloud examples.
> Following the steps for "Example A: Simple two shard cluster" my two nodes 
> started up w/o any obvious problem, however I noticed the cluster graph 
> was empty, and attempts to index documents failed with invalid url errors 
> when trying to forward the distributed updates.  Closer inspection of the 
> cluster state led me to discover that the URLs for the nodes as registered 
> with ZK did not include any host information at all.  (details to follow in 
> comment)
> Apparently, the logic for implicitly detecting a hostname to use with 
> SolrCloud failed to work, and did not cause any sort of startup error.
> Important things to note:
> # java clearly _did_ know what the current configured hostname was for my 
> machine, because it appeared correctly in the {{}} tag of the admin 
> UI (pulled from "/admin/system"), so I don't think this problem is specific 
> to any sort of glitch in my hostname configuration.
> # explicitly setting the "host" sys prop (as used in the example solr.xml) 
> worked around the problem
> # I could _not_ reproduce this problem with Solr 4.2.1 (using the 4.2.1 
> example configs)
> We should try to make the hostname/url detection logic smarter (I'm not sure 
> why it isn't working as well as the SystemInfoHandler), and it should fail 
> loudly on startup as a last resort rather than registering the node with ZK 
> using an invalid URL.
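A rough sketch of the "fail loudly" behavior the report asks for, using only the JDK. `HostDetect` and `detectHost` are hypothetical names for illustration, not Solr code:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostDetect {
    // Detect a host address for node registration; throw loudly instead of
    // silently registering an empty host (the symptom described above).
    static String detectHost() {
        try {
            String addr = InetAddress.getLocalHost().getHostAddress();
            if (addr == null || addr.isEmpty()) {
                throw new IllegalStateException("empty host detected");
            }
            return addr;
        } catch (UnknownHostException e) {
            throw new IllegalStateException(
                "could not detect host; set the 'host' sys prop explicitly", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(detectHost());
    }
}
```

The point is the exception path: startup aborts with a clear message rather than writing an invalid URL into ZK.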

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4754) solrcloud does not detect an implicit "host" and does not provide clear error using 4x example

2013-07-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702165#comment-13702165
 ] 

Mark Miller commented on SOLR-4754:
---

Doing something minimal was a 4.3 blocker - it's not a blocker anymore.

> solrcloud does not detect an implicit "host" and does not provide clear error 
> using 4x example
> --
>
> Key: SOLR-4754
> URL: https://issues.apache.org/jira/browse/SOLR-4754
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 4.4
>
>
> Testing out the 4.3.0 RC3, I tried to run through the SolrCloud examples.
> Following the steps for "Example A: Simple two shard cluster" my two nodes 
> started up w/o any obvious problem, however I noticed the cluster graph 
> was empty, and attempts to index documents failed with invalid url errors 
> when trying to forward the distributed updates.  Closer inspection of the 
> cluster state led me to discover that the URLs for the nodes as registered 
> with ZK did not include any host information at all.  (details to follow in 
> comment)
> Apparently, the logic for implicitly detecting a hostname to use with 
> SolrCloud failed to work, and did not cause any sort of startup error.
> Important things to note:
> # java clearly _did_ know what the current configured hostname was for my 
> machine, because it appeared correctly in the {{}} tag of the admin 
> UI (pulled from "/admin/system"), so I don't think this problem is specific 
> to any sort of glitch in my hostname configuration.
> # explicitly setting the "host" sys prop (as used in the example solr.xml) 
> worked around the problem
> # I could _not_ reproduce this problem with Solr 4.2.1 (using the 4.2.1 
> example configs)
> We should try to make the hostname/url detection logic smarter (I'm not sure 
> why it isn't working as well as the SystemInfoHandler), and it should fail 
> loudly on startup as a last resort rather than registering the node with ZK 
> using an invalid URL.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702178#comment-13702178
 ] 

ASF subversion and git services commented on LUCENE-5097:
-

Commit 1500862 from [~thetaphi]
[ https://svn.apache.org/r1500862 ]

LUCENE-5097: Analyzer now has an additional tokenStream(String fieldName, 
String text) method, so wrapping by StringReader for common use is no longer 
needed. This method uses an internal reusable reader, which was previously 
only used by the Field class.
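The pattern behind the new method can be illustrated without Lucene itself: a final String overload that wraps the text in a StringReader and delegates to the Reader-based method. `MiniAnalyzer` is a hypothetical stand-in for illustration, not the actual Analyzer class:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in showing the LUCENE-5097 pattern: callers pass a
// String directly instead of wrapping it in a StringReader themselves.
abstract class MiniAnalyzer {
    public abstract List<String> tokenize(String field, Reader reader) throws IOException;

    // final convenience overload, analogous to Analyzer.tokenStream(String, String)
    public final List<String> tokenize(String field, String text) throws IOException {
        return tokenize(field, new StringReader(text));
    }
}

public class MiniAnalyzerDemo {
    // toy whitespace tokenizer implementing only the Reader-based method
    static final MiniAnalyzer WHITESPACE = new MiniAnalyzer() {
        @Override
        public List<String> tokenize(String field, Reader reader) throws IOException {
            StringBuilder sb = new StringBuilder();
            for (int c; (c = reader.read()) != -1; ) sb.append((char) c);
            return Arrays.asList(sb.toString().trim().split("\\s+"));
        }
    };

    public static void main(String[] args) throws IOException {
        // no StringReader wrapping needed at the call site any more
        System.out.println(WHITESPACE.tokenize("body", "hello permgen world"));
    }
}
```

With the real API this becomes `analyzer.tokenStream("body", "some text")` instead of `analyzer.tokenStream("body", new StringReader("some text"))`.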

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
> Attachments: LUCENE-5097.patch, LUCENE-5097.patch, LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier if Analyzer had an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might not even need to throw IOException 
> (not sure).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702195#comment-13702195
 ] 

ASF subversion and git services commented on LUCENE-5097:
-

Commit 1500864 from [~thetaphi]
[ https://svn.apache.org/r1500864 ]

Merged revision(s) 1500862 from lucene/dev/trunk:
LUCENE-5097: Analyzer now has an additional tokenStream(String fieldName, 
String text) method, so wrapping by StringReader for common use is no longer 
needed. This method uses an internal reusable reader, which was previously 
only used by the Field class.

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
> Attachments: LUCENE-5097.patch, LUCENE-5097.patch, LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier if Analyzer had an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might not even need to throw IOException 
> (not sure).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Looking for community guidance on SOLR-4872

2013-07-08 Thread Shawn Heisey

On 7/8/2013 7:14 AM, Benson Margulies wrote:

Dear Lucene Community,

I note that this email has received no response, and the JIRA no further
discussion, since June 19th. As an occasional contributor to this
community, I think that this is unreasonable. My personal belief is that
the Apache Way calls for you to decide something here: use a vote if
needed. You might decide to do _nothing_ at all, but you won't just
leave me waving in the breeze. I suppose that _I_ could call a vote
here, but as a non-committer it would seem presumptuous of me. Also, I
might not find a committer willing to act on it.


For me, this issue is beyond my skill set.  I don't understand the 
solutions, because I have never delved into it.  I would love to help, 
but because of my lack of experience, I really can't.  I do have a 
family and a real job, too.


For the rest of the devs, I offer this as a possible explanation, not as 
an excuse:  It is high summer in the northern hemisphere.  For the US, 
this is prime vacation time.  I'm clueless about Europe, but it's 
probably the same there.  If someone is in vacation mode, they may chime 
in from time to time, but they won't be putting heavy work or lots of 
thought in.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5097) Add utility method to Analyzer: public final TokenStream tokenStream(String fieldName,String text)

2013-07-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5097.
---

   Resolution: Fixed
Fix Version/s: 4.4
   5.0
 Assignee: Uwe Schindler

Thanks Robert for help & discussion!

> Add utility method to Analyzer: public final TokenStream tokenStream(String 
> fieldName,String text)
> --
>
> Key: LUCENE-5097
> URL: https://issues.apache.org/jira/browse/LUCENE-5097
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3.1
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5097.patch, LUCENE-5097.patch, LUCENE-5097.patch
>
>
> It might be a good idea to remove tons of useless code from tests:
> Most people use TokenStreams and Analyzers by only passing a String, wrapped 
> by a StringReader. It would make life easier if Analyzer had an 
> additional public (and final!!!) method that simply does the wrapping with 
> StringReader by itself. It might not even need to throw IOException 
> (not sure).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-08 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702197#comment-13702197
 ] 

Jack Krupansky commented on SOLR-5017:
--

Does this proposal eliminate the need to do explicit routing in the key values?

So, instead of having to say "my-value!key-value" for the key value when some 
other field already has "my-value" in it, I can just leave my key as 
"key-value" and with this proposal Solr would read that other field to get 
"my-value" and use it for sharding?


> Allow sharding based on the value of a field
> 
>
> Key: SOLR-5017
> URL: https://issues.apache.org/jira/browse/SOLR-5017
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> We should be able to create a collection where sharding is done based on the 
> value of a given field
> collections can be created with shardField=fieldName, which will be persisted 
> in DocCollection in ZK
> implicit DocRouter would look at this field instead of _shard_ field
> CompositeIdDocRouter can also use this field instead of looking at the id 
> field. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 4.4 release planning

2013-07-08 Thread Shawn Heisey

On 7/8/2013 10:37 AM, Steve Rowe wrote:

As I mentioned a week ago, I plan on branching for 4.4 today, likely late in 
the day (UTC+4).

If all goes well, I'll cut an RC in one week, on July 15th.


+1

Let's make sure Solr isn't in a vulnerable position like it was when 
4.3.0 was released.  That will naturally be more on the heads of us Solr 
devs than the release manager.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: 4.4 release planning

2013-07-08 Thread Uwe Schindler
> On 7/8/2013 10:37 AM, Steve Rowe wrote:
> > As I mentioned a week ago, I plan on branching for 4.4 today, likely late in
> the day (UTC+4).
> >
> > If all goes well, I'll cut an RC in one week, on July 15th.
> 
> +1
> 
> Let's make sure Solr isn't in a vulnerable position like it was when
> 4.3.0 was released.  That will naturally be more on the heads of us Solr devs
> than the release manager.

Can we fix the permgen problems? To me, this is un-releasable at the current 
stage; the tests hang not only on Jenkins, but locally, too. To me it all 
appears vulnerable :(

Robert and I were contacted by Oracle today, asking if we had found new bugs 
with Java 8, because it is now feature-complete and ready to go to the next 
stages - and I would really like to send them a link to the Jenkins server, 
but it's all red - I would be ashamed to show that to them! This also does not 
look good for users.

Uwe


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5005) JavaScriptRequestHandler

2013-07-08 Thread Karthick Duraisamy Soundararaj (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702244#comment-13702244
 ] 

Karthick Duraisamy Soundararaj commented on SOLR-5005:
--

[~dsmiley] Yes, I like your idea better. I could modify my code to achieve what 
you want. But at this point, I think [~noble.paul]'s patch matches your 
expectation.



> JavaScriptRequestHandler
> 
>
> Key: SOLR-5005
> URL: https://issues.apache.org/jira/browse/SOLR-5005
> Project: Solr
>  Issue Type: New Feature
>Reporter: David Smiley
>Assignee: Noble Paul
> Attachments: patch, SOLR-5005.patch
>
>
> A user customizable script based request handler would be very useful.  It's 
> inspired from the ScriptUpdateRequestProcessor, but on the search end. A user 
> could write a script that submits searches to Solr (in-VM) and can react to 
> the results of one search before making another that is formulated 
> dynamically.  And it can assemble the response data, potentially reducing 
> both the latency and data that would move over the wire if this feature 
> didn't exist.  It could also be used to easily add a user-specifiable search 
> API at the Solr server with request parameters governed by what the user 
> wants to advertise -- especially useful within enterprises.  And, it could be 
> used to enforce security requirements on allowable parameter valuables to 
> Solr, so a javascript based Solr client could be allowed to talk to only a 
> script based request handler which enforces the rules.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Several builds hanging because of permgen

2013-07-08 Thread Uwe Schindler
Next one: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6395/console

 

I will vote all releases of 4.4 with -1 until this is fixed! It hangs on my 
local computer, too! Tests pass only ½ of the time; the rest of the time they 
hang with permgen errors.
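Since PermGen exhaustion cannot be recovered from inside the running JVM, the usual mitigation is at launch time. A sketch for a Java 6/7 HotSpot JVM; the size and path values, and the `start.jar` target, are illustrative placeholders:

```shell
# Raise the class-metadata limit and dump the heap on exhaustion for later
# analysis. -XX:MaxPermSize applies to Java 6/7 HotSpot; Java 8+ removed
# PermGen and uses -XX:MaxMetaspaceSize instead.
java -XX:MaxPermSize=512m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/tmp/heapdumps \
     -jar start.jar
```

This only buys headroom and diagnostics; a genuine class-loader leak will still exhaust any limit eventually.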

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf Of Dawid 
Weiss
Sent: Monday, July 08, 2013 2:16 PM
To: dev@lucene.apache.org
Subject: Re: Several builds hanging because of permgen

 

 

Not much I can do from my side about permgen errors. There is really no way to 
deal with these from within Java (the same process) -- you cannot effectively 
handle anything because your own classes may not load at all. 

 

Dawid

On Mon, Jul 8, 2013 at 1:35 PM, Uwe Schindler  wrote:

Another one, this time on OSX:

http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/617/

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Robert Muir [mailto:rcm...@gmail.com] 
Sent: Sunday, July 07, 2013 11:15 PM
To: dev@lucene.apache.org
Subject: Re: Several builds hanging because of permgen

 

When there were leaks from static classes, we added a checker to LuceneTestCase 
that looks for RAM > N and fails with debugging information.

I wonder if some similar check is possible for this case (to make it easier 
than going thru heapdumps, and to find issues before crash-time)...
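A rough sketch of such a check using only the JDK's management beans; the pool-name matching is an assumption for illustration, not the actual LuceneTestCase checker:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class ClassSpaceCheck {
    // Print usage of the class-metadata pools (named "Perm Gen" on Java 6/7
    // HotSpot, "Metaspace" on Java 8+), so a test harness could warn before
    // exhaustion instead of leaving only a post-mortem heap dump.
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Perm") || name.contains("Metaspace")) {
                MemoryUsage u = pool.getUsage();
                System.out.println(name + ": used=" + u.getUsed()
                        + " max=" + u.getMax());
            }
        }
    }
}
```

A harness could compare `getUsed()` against a fraction of `getMax()` at suite boundaries and fail with diagnostics before the JVM itself dies.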

On Sun, Jul 7, 2013 at 4:10 PM, Uwe Schindler  wrote:

Another one: 
http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6385/testReport/junit/junit.framework/TestSuite/org_apache_solr_request_SimpleFacetsTest/

Had to be killed with kill -9


-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Uwe Schindler [mailto:u...@thetaphi.de]

> Sent: Saturday, July 06, 2013 10:16 PM
> To: dev@lucene.apache.org
> Subject: RE: Several builds hanging because of permgen
>
> Another one:
> http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6375/console
>
> I was only able to kill the JVM with kill -9. I am sure, it's horrible 
> slowdoop!
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -Original Message-
> > From: Uwe Schindler [mailto:u...@thetaphi.de]
> > Sent: Friday, July 05, 2013 3:59 PM
> > To: dev@lucene.apache.org
> > Subject: Several builds hanging because of permgen
> >
> > Several Jenkins builds now hang because of permgen. The runner JVM is
> > dead (can only be killed by -9), last example:
> >
> > http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6360/console
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
> > additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

 

 



[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 618 - Failure!

2013-07-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/618/
Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 9079 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java 
-XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/heapdumps
 -Dtests.prefix=tests -Dtests.seed=143E6CCF7E42064B -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.4 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.4-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dfile.encoding=UTF-8 -classpath 
[...classpath truncated...]

[jira] [Commented] (SOLR-5019) spurious ConcurrentModificationException with spell check component

2013-07-08 Thread Aditya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702259#comment-13702259
 ] 

Aditya commented on SOLR-5019:
--

Some Additional Stack from logs. This exception is also observed around the 
error.

2013-06-25 10:52:52,471 WARNING [org.apache.solr.spelling.SpellCheckCollator] 
(ajp-0.0.0.0-8009-50) Exception trying to re-query to check if a spell check 
possibility would return any hits.
java.util.ConcurrentModificationException
	at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
	at java.util.AbstractList$Itr.next(AbstractList.java:343)
	at java.util.AbstractList.equals(AbstractList.java:506)
	at org.apache.solr.search.QueryResultKey.isEqual(QueryResultKey.java:96)
	at org.apache.solr.search.QueryResultKey.equals(QueryResultKey.java:81)
	at java.util.HashMap.put(HashMap.java:376)
	at org.apache.solr.search.LRUCache.put(LRUCache.java:123)
	at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1377)
	at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:457)
	at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:410)
	at org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:112)
	at org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:203)
	at org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:180)
	at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
	at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
	at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
	at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
	at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
	at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
	at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
	at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:436)
	at org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:384)
	at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
	at java.lang.Thread.run(Thread.java:662)
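The trace shows `AbstractList.equals` iterating a list held by `QueryResultKey` while the list is structurally modified elsewhere. The underlying fail-fast iterator behavior can be reproduced with the JDK alone:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class CmeDemo {
    public static void main(String[] args) {
        List<Integer> filters = new ArrayList<>(Arrays.asList(1, 2, 3));
        // iteration begins (as equals() does internally)...
        Iterator<Integer> it = filters.iterator();
        it.next();
        // ...then the list is structurally modified (in SOLR-5019, by a
        // concurrent thread; here, by the same thread for determinism)
        filters.add(4);
        try {
            it.next();  // fail-fast iterator detects the modification
        } catch (ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException, as in the trace");
        }
    }
}
```

This is why the exception is "spurious": it depends on a modification racing the cache-key comparison, not on any corruption in the list itself.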


> spurious ConcurrentModificationException with spell check component
> ---
>
> Key: SOLR-5019
> URL: https://issues.apache.org/jira/browse/SOLR-5019
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: 5.0, 4.4
>
>
> ConcurrentModificationException with spell check component
> http://markmail.org/message/bynajxhgzi2wyhx5

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 618 - Failure!

2013-07-08 Thread Uwe Schindler
Again permgen. We need to take action. Next step would be to disable all hadoop 
tests. 

Uwe



Policeman Jenkins Server  schrieb:
>Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/618/
>Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseParallelGC
>
>All tests passed
>
>Build Log:
>[...truncated 9079 lines...]
>[junit4] ERROR: JVM J0 ended with an exception (command line and classpath
>identical to the build #618 failure message above)
jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/highlighter/lucene-highlighter-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/memory/lucene-memory-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/misc/lucene-misc-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/spatial/lucene-spatial-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/suggest/lucene-suggest-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/grouping/lucene-grouping-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/queries/lucene-queries-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/queryparser/lucene-queryparser-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib
/cglib-nodep-2.2.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commo

[jira] [Created] (SOLR-5021) LoggingInfoStream should log TP messages using TRACE level

2013-07-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5021:
--

 Summary: LoggingInfoStream should log TP messages using TRACE level
 Key: SOLR-5021
 URL: https://issues.apache.org/jira/browse/SOLR-5021
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man


SOLR-4977 added an awesome new LoggingInfoStream class, and wisely it does not 
output "TP" category messages because it logs everything at the INFO level -- 
but we could improve this to log TP messages at the TRACE level (and check if 
TRACE is enabled when asked if isEnabled("TP")).

That way people who do/don't want to see all the TP messages could control that 
based on whether the LoggingInfoStream logger level was set to allow/filter 
TRACE level log messages.
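A minimal, self-contained sketch of the proposed routing logic (class and method names are illustrative only, not Solr's actual LoggingInfoStream; the real class would extend Lucene's InfoStream and delegate to its logger):

```java
// Illustrative sketch: decide the log level per InfoStream component,
// sending the chatty "TP" (TermsProcessing) category to TRACE and
// everything else to INFO, as suggested in the issue.
public class InfoStreamLevels {
    enum Level { TRACE, INFO }

    /** Level a message from the given InfoStream component should be logged at. */
    static Level levelFor(String component) {
        return "TP".equals(component) ? Level.TRACE : Level.INFO;
    }

    /** Mirrors isEnabled(String): TP is enabled only when TRACE logging is on,
     *  so IndexWriter can skip building TP messages entirely under INFO. */
    static boolean isEnabled(String component, boolean traceOn, boolean infoOn) {
        return "TP".equals(component) ? traceOn : infoOn;
    }

    public static void main(String[] args) {
        // With a typical INFO-level logger config, TP is filtered out...
        System.out.println(isEnabled("TP", false, true));   // false
        System.out.println(isEnabled("IW", false, true));   // true
        // ...but raising the logger to TRACE lets TP messages through.
        System.out.println(isEnabled("TP", true, true));    // true
    }
}
```

This is exactly the control Hoss describes: whether TP shows up is decided by the logger's level, not hardcoded in the stream.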





[jira] [Updated] (SOLR-5016) Spatial clustering/grouping

2013-07-08 Thread Jeroen Steggink (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeroen Steggink updated SOLR-5016:
--

Description: 
Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

Retrieved result would give the center of the cluster, average location or a 
polygon of the cluster.

An example of a use would be something like this:
https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png

Jeroen


  was:
Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

Retrieved result would give the center of the cluster or average location.

Jeroen



> Spatial clustering/grouping
> ---
>
> Key: SOLR-5016
> URL: https://issues.apache.org/jira/browse/SOLR-5016
> Project: Solr
>  Issue Type: Wish
>  Components: spatial
>Reporter: Jeroen Steggink
>Priority: Minor
>  Labels: clustering, grouping, spatial
>
> Hi,
> It would be great if we could have some sort of spatial clustering/grouping 
> of points for efficiently plotting them on a map.
> I could think of clustering based on the following parameters:
> - Based on regions: continents, countries, states, cities, etc;
> - A fixed number of clusters;
> - Radius, bbox, polygon
> Retrieved result would give the center of the cluster, average location or a 
> polygon of the cluster.
> An example of a use would be something like this:
> https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png
> Jeroen
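One very simple form of what this wish describes can be sketched as fixed-grid bucketing (an illustration only, not a Solr API or the design being proposed): points are grouped into grid cells of a chosen size, and each cluster is reported as its cell's centroid, roughly what client-side marker clusterers like the linked MarkerClusterer do.

```java
// Hypothetical sketch of grid-based spatial clustering for map plotting.
// All names here are illustrative; nothing below exists in Solr.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GridCluster {
    record Point(double lat, double lon) {}
    record Cluster(double lat, double lon, int size) {}

    /** Bucket points into cellDeg-by-cellDeg grid cells; return one
     *  cluster per non-empty cell, positioned at the cell's centroid. */
    static List<Cluster> cluster(List<Point> pts, double cellDeg) {
        Map<String, double[]> cells = new LinkedHashMap<>(); // key -> [latSum, lonSum, count]
        for (Point p : pts) {
            String key = Math.floor(p.lat() / cellDeg) + ":" + Math.floor(p.lon() / cellDeg);
            double[] acc = cells.computeIfAbsent(key, k -> new double[3]);
            acc[0] += p.lat(); acc[1] += p.lon(); acc[2]++;
        }
        List<Cluster> out = new ArrayList<>();
        for (double[] acc : cells.values())
            out.add(new Cluster(acc[0] / acc[2], acc[1] / acc[2], (int) acc[2]));
        return out;
    }
}
```

The other variants in the wish (a fixed number of clusters, region boundaries, radius/bbox/polygon filters) would need different algorithms, but the output shape -- centroid plus member count -- is the same idea.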




[jira] [Updated] (SOLR-5016) Spatial clustering/grouping

2013-07-08 Thread Jeroen Steggink (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeroen Steggink updated SOLR-5016:
--

Description: 
Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

Retrieved result would give the center of the cluster, average location or a 
polygon of the cluster.

An example of a use case would be something like this:
https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png

Jeroen


  was:
Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

Retrieved result would give the center of the cluster, average location or a 
polygon of the cluster.

An example of a use would be something like this:
https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png

Jeroen



> Spatial clustering/grouping
> ---
>
> Key: SOLR-5016
> URL: https://issues.apache.org/jira/browse/SOLR-5016
> Project: Solr
>  Issue Type: Wish
>  Components: spatial
>Reporter: Jeroen Steggink
>Priority: Minor
>  Labels: clustering, grouping, spatial
>
> Hi,
> It would be great if we could have some sort of spatial clustering/grouping 
> of points for efficiently plotting them on a map.
> I could think of clustering based on the following parameters:
> - Based on regions: continents, countries, states, cities, etc;
> - A fixed number of clusters;
> - Radius, bbox, polygon
> Retrieved result would give the center of the cluster, average location or a 
> polygon of the cluster.
> An example of a use case would be something like this:
> https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png
> Jeroen




[jira] [Updated] (SOLR-5016) Spatial clustering/grouping

2013-07-08 Thread Jeroen Steggink (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeroen Steggink updated SOLR-5016:
--

Description: 
Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

Retrieved result would give the center of the cluster, average location or a 
polygon of the cluster.

An example of a use case would be something like this:
https://developers.google.com/maps/articles/toomanymarkers#markerclusterer

Jeroen


  was:
Hi,

It would be great if we could have some sort of spatial clustering/grouping of 
points for efficiently plotting them on a map.

I could think of clustering based on the following parameters:
- Based on regions: continents, countries, states, cities, etc;
- A fixed number of clusters;
- Radius, bbox, polygon

Retrieved result would give the center of the cluster, average location or a 
polygon of the cluster.

An example of a use case would be something like this:
https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png

Jeroen



> Spatial clustering/grouping
> ---
>
> Key: SOLR-5016
> URL: https://issues.apache.org/jira/browse/SOLR-5016
> Project: Solr
>  Issue Type: Wish
>  Components: spatial
>Reporter: Jeroen Steggink
>Priority: Minor
>  Labels: clustering, grouping, spatial
>
> Hi,
> It would be great if we could have some sort of spatial clustering/grouping 
> of points for efficiently plotting them on a map.
> I could think of clustering based on the following parameters:
> - Based on regions: continents, countries, states, cities, etc;
> - A fixed number of clusters;
> - Radius, bbox, polygon
> Retrieved result would give the center of the cluster, average location or a 
> polygon of the cluster.
> An example of a use case would be something like this:
> https://developers.google.com/maps/articles/toomanymarkers#markerclusterer
> Jeroen




[jira] [Comment Edited] (SOLR-5010) Add REST support for Copy Fields

2013-07-08 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702276#comment-13702276
 ] 

Grant Ingersoll edited comment on SOLR-5010 at 7/8/13 6:51 PM:
---

Adds a POST to /schema/copyfields to support adding copy fields directly.

Didn't do any refactoring of shared code yet, as it feels like we should do 
this after 4.4, since there are multiple places to do this that go beyond this patch

  was (Author: gsingers):
Adds a post to /schema/copyfields to support adding fields directly.

Didn't due any refactoring of shared code yet, as it feels like we should do 
this after 4.4 as there are multiple places to do this that go beyond this patch
  
> Add REST support for Copy Fields
> 
>
> Key: SOLR-5010
> URL: https://issues.apache.org/jira/browse/SOLR-5010
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-5010-copyFields.patch, SOLR-5010.patch, 
> SOLR-5010.patch, SOLR-5010.patch
>
>
> Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
> to the PUT/POST with the name of the target to copy to.
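Based on the endpoint named in the comment above, a request might look something like this (the JSON field names "source" and "dest" are illustrative guesses, not the final API from the patch):

```shell
# Hypothetical request shape for adding a copy-field rule via REST.
# Endpoint path is from the issue comment; the payload layout is assumed.
curl -X POST -H 'Content-type: application/json' \
     --data-binary '[{"source":"title","dest":"text"}]' \
     'http://localhost:8983/solr/collection1/schema/copyfields'
```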



