[jira] [Updated] (SOLR-4221) Custom sharding

2013-07-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-4221:
-

Attachment: SOLR-4221.patch

I plan to commit this soon.

> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4808) Persist and use router,replicationFactor and maxShardsPerNode at Collection and Shard level

2013-07-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-4808:
-

Summary: Persist and use router,replicationFactor and maxShardsPerNode at 
Collection and Shard level  (was: Persist and use replication factor and 
maxShardsPerNode at Collection and Shard level)

> Persist and use router,replicationFactor and maxShardsPerNode at Collection 
> and Shard level
> ---
>
> Key: SOLR-4808
> URL: https://issues.apache.org/jira/browse/SOLR-4808
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>  Labels: solrcloud
> Attachments: SOLR-4808.patch, SOLR-4808.patch
>
>
> The replication factor for a collection is, as of now, not persisted and used 
> while adding replicas.
> We should save the replication factor at the collection level as well as the 
> shard level.
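For context, SolrCloud keeps per-collection state in clusterstate.json in ZooKeeper. A rough sketch of what persisting these properties at the collection level could look like; the field names and nesting here are illustrative assumptions based on the issue title, not taken from the attached patch:

```json
{
  "collection1": {
    "router": "compositeId",
    "replicationFactor": "2",
    "maxShardsPerNode": "1",
    "shards": {
      "shard1": { "range": "80000000-ffffffff", "replicas": {} },
      "shard2": { "range": "0-7fffffff", "replicas": {} }
    }
  }
}
```

With the properties persisted, the overseer could then consult replicationFactor when adding replicas instead of requiring it on every request.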




[jira] [Updated] (SOLR-5006) CREATESHARD command for 'implicit' shards

2013-07-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5006:
-

Summary: CREATESHARD command for 'implicit' shards  (was: CREATESHARD , 
DELETESHARD commands for 'implicit' shards)

> CREATESHARD command for 'implicit' shards
> -
>
> Key: SOLR-5006
> URL: https://issues.apache.org/jira/browse/SOLR-5006
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> Custom sharding requires CREATESHARD/DELETESHARD commands.
> They may not be applicable to hash-based sharding.




Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk6) - Build # 6065 - Failure!

2013-07-30 Thread Feihong Huang
Hi, the log keeps showing "Waiting for client to connect to ZooKeeper" and
"Client is connected to ZooKeeper".

But looking at the code, that should only happen when state ==
KeeperState.Expired, whereas in this case the state is SyncConnected. How can
that code be executed?

Thanks.
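To make the question concrete, here is a minimal, self-contained sketch of the state-handling pattern being asked about. The KeeperState enum is a stand-in for ZooKeeper's real Watcher.Event.KeeperState, and the branch structure is an assumption about how Solr's connection manager is organized, not a copy of its code:

```java
public class WatcherSketch {
    // Stand-in for org.apache.zookeeper.Watcher.Event.KeeperState;
    // the real type lives in the ZooKeeper client library.
    enum KeeperState { SyncConnected, Disconnected, Expired }

    // The pattern under discussion: the reconnect branch is supposed to
    // run only on session expiry, not on SyncConnected events.
    static String handle(KeeperState state) {
        if (state == KeeperState.SyncConnected) {
            return "connected";   // would log "Client is connected to ZooKeeper"
        } else if (state == KeeperState.Expired) {
            return "reconnect";   // session lost: start a new session
        }
        return "wait";            // e.g. Disconnected: wait for auto-reconnect
    }

    public static void main(String[] args) {
        System.out.println(handle(KeeperState.SyncConnected)); // connected
        System.out.println(handle(KeeperState.Expired));       // reconnect
    }
}
```

If the "Waiting for client to connect" message appears while the state is SyncConnected, something other than the Expired branch must be logging it, which is exactly the puzzle raised above.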



--
View this message in context: 
http://lucene.472066.n3.nabble.com/JENKINS-Lucene-Solr-4-x-Linux-32bit-ibm-j9-jdk6-Build-6065-Failure-tp4070574p4081536.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.




RE: Anyone interested about using GPU to improve the performance of Lucene?

2013-07-30 Thread Yanning Li
Hi Noble,

Thanks for replying. Indeed this is a very interesting field. So we are happy 
to provide some GPUs to folks who want to try to make Solr/Lucene work on GPUs.

Best

Yanning

From: Noble Paul നോബിള്‍ नोब्ळ् [mailto:noble.p...@gmail.com]
Sent: Tuesday, July 30, 2013 10:10 PM
To: dev@lucene.apache.org
Cc: Yanning Li
Subject: Re: Anyone interested about using GPU to improve the performance of 
Lucene?

It does not really have to be a platform-independent thing. It can be a 
configurable switch where the user who has a particular h/w should be able to 
use that switch and take advantage of the perf boost.

But we should be able to demonstrate some significant improvement using NVIDIA 
GPUs.

On Wed, Jul 10, 2013 at 2:52 AM, Uwe Schindler <u...@thetaphi.de> wrote:
Thanks for the information about the CUDA project!

I think the main reason you have not heard anything about 
Lucene/Solr/ElasticSearch working together with GPUs is that Apache Lucene and 
all search servers on top of Lucene (Apache Solr, ElasticSearch) are pure Java 
applications, highly optimized to run on the Oracle virtual machine. Currently 
there is no official support for GPUs in the Java APIs; you can only use 
proprietary wrapper libraries to make use of CUDA (e.g. http://www.jcuda.org/).

It would be great if there were a platform-independent way (directly in the 
official Java API) to execute jobs on GPUs.

It might be worth trying to implement the Lucene block codecs (the abstraction 
over the underlying posting-list formats) on a GPU. Because this is 
encapsulated behind a public API, it could be a separate project, using the 
JNI-based CUDA wrappers to encode/decode PFOR postings lists. The query 
execution logic is harder to port, because there is a lot of abstraction 
involved (posting lists are doc-id iterators), which would need to be 
short-circuited.
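The codec work mentioned above is attractive for GPUs because decoding delta-compressed postings is essentially a prefix sum, a classic data-parallel primitive. This stdlib-only sketch (not Lucene's actual codec code) shows the transform a GPU port would target; the bit-packing step of PFOR is omitted:

```java
import java.util.Arrays;

public class DeltaCodec {
    // Delta-encode a sorted postings list (doc IDs). The gaps are small,
    // so they pack into few bits; this per-element transform is the kind
    // of data-parallel work a CUDA kernel could take over from the CPU.
    static int[] encode(int[] docIds) {
        int[] deltas = new int[docIds.length];
        int prev = 0;
        for (int i = 0; i < docIds.length; i++) {
            deltas[i] = docIds[i] - prev;
            prev = docIds[i];
        }
        return deltas;
    }

    static int[] decode(int[] deltas) {
        int[] docIds = new int[deltas.length];
        int sum = 0;
        for (int i = 0; i < deltas.length; i++) {
            sum += deltas[i];      // prefix sum restores absolute doc IDs
            docIds[i] = sum;
        }
        return docIds;
    }

    public static void main(String[] args) {
        int[] postings = {3, 7, 8, 20, 21};
        int[] deltas = encode(postings);
        System.out.println(Arrays.toString(deltas));                  // [3, 4, 1, 12, 1]
        System.out.println(Arrays.equals(postings, decode(deltas)));  // true
    }
}
```

On a GPU, decode would become a parallel scan over a block of deltas rather than this sequential loop; the API boundary Uwe describes is what would let such a kernel be swapped in.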

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

From: Yanning Li [mailto:yanni...@nvidia.com]
Sent: Tuesday, July 09, 2013 11:02 PM
To: dev@lucene.apache.org
Subject: Anyone interested about using GPU to improve the performance of Lucene?

Hi all,

I work for the NVIDIA Tesla Accelerating Computing Group. Recently we have 
noticed that GPUs can really accelerate the performance of search engines. 
There are proof points not only from Google but also from others, such as 
Yandex, Baidu, Bing, etc.  But not much around Solr/Lucene.

So we are trying to engage with Lucene developers more actively.

1)  If possible, we would like to hear your perspective: are there some 
opportunities for GPUs in Lucene/Solr?

2)  Is anyone interested in using GPUs to accelerate the performance of 
Lucene/Solr? If so, please feel free to let me know; we can send out free GPUs 
to get a project started.

Attached is a paper about using GPUs to accelerate index compression, in case 
you are interested.

Looking forward to hearing from some of you,

Best

Yanning

This email message is for the sole use of the intended recipient(s) and may 
contain confidential information.  Any unauthorized review, use, disclosure or 
distribution is prohibited.  If you are not the intended recipient, please 
contact the sender by reply email and destroy all copies of the original 
message.






Re: Anyone interested about using GPU to improve the performance of Lucene?

2013-07-30 Thread Noble Paul നോബിള്‍ नोब्ळ्
It does not really have to be a platform-independent thing. It can be a
configurable switch where the user who has a particular h/w should be able
to use that switch and take advantage of the perf boost.

But we should be able to demonstrate some significant improvement using
NVIDIA GPUs.




-- 
-
Noble Paul


[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724851#comment-13724851
 ] 

Robert Muir commented on LUCENE-5154:
-

The issue with DIH is that one test writes a dataimport.properties (I haven't 
looked further).
The issue with velocity is that its logging uses log4j (not slf4j).

Does this module really work if you don't have log4j set up, or do the tests 
only pass because of the test environment...?

Cleanest would be if we could use 
http://velocity.apache.org/engine/devel/apidocs/org/apache/velocity/slf4j/Slf4jLogChute.html
 but I don't see that in a release. I'm sure there are other solutions (not a 
fan of logging, sorry)

> ban tests from writing to CWD
> -
>
> Key: LUCENE-5154
> URL: https://issues.apache.org/jira/browse/LUCENE-5154
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
> Attachments: LUCENE-5154.patch
>
>
> Currently each forked jvm has cwd = tempDir = .
> This provides some minimal protection against tests in different jvms 
> interfering with each other, but we can do much better by splitting these 
> concerns and setting cwd = . and tempDir = ./temp
> Tests that write files to CWD can confuse IDE users because they can create 
> dirty checkouts or other issues between different runs, and of course can 
> interfere with other tests in the *same* jvm (there are other possible ways 
> to do this too).
> So a test like this should fail with SecurityException, but currently does 
> not.
> {code}
> public void testBogus() throws Exception {
>   File file = new File("foo.txt");
>   FileOutputStream os = new FileOutputStream(file);
>   os.write(1);
>   os.close();
> }
> {code}
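The fix the issue implies is to resolve test files against a dedicated temp dir instead of the working directory. A stdlib-only sketch of the two patterns; the getTempDir helper here is a stand-in for whatever the Lucene test framework actually provides, not its real signature:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempDirDemo {
    // Stand-in for a test-framework temp-dir helper: every test file goes
    // under a per-test directory, never under the working directory.
    static Path getTempDir(String testName) throws IOException {
        return Files.createTempDirectory(testName);
    }

    public static void main(String[] args) throws IOException {
        // Problematic pattern from the issue: a relative File resolves
        // against the CWD and would dirty the checkout between runs.
        File dirty = new File("foo.txt");
        System.out.println("would land in CWD: " + dirty.getAbsolutePath());

        // Fixed pattern: resolve test files against a temp dir instead.
        Path dir = getTempDir("testBogus");
        Path clean = dir.resolve("foo.txt");
        Files.write(clean, new byte[] {1});
        System.out.println("lands outside CWD: " + clean);
        Files.delete(clean);   // clean up after ourselves
        Files.delete(dir);
    }
}
```

With cwd and tempDir split as the issue proposes, the first pattern can be rejected by a SecurityManager policy while the second keeps working.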




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724838#comment-13724838
 ] 

ASF subversion and git services commented on LUCENE-5154:
-

Commit 1508721 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1508721 ]

LUCENE-5154: move test logging config to where it will actually work in solrj 
tests




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724841#comment-13724841
 ] 

Robert Muir commented on LUCENE-5154:
-

I committed fixes for the easy and obvious stuff.

DIH and velocity remain... I'll dig into these.




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724839#comment-13724839
 ] 

ASF subversion and git services commented on LUCENE-5154:
-

Commit 1508722 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508722 ]

LUCENE-5154: move test logging config to where it will actually work in solrj 
tests




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724837#comment-13724837
 ] 

ASF subversion and git services commented on LUCENE-5154:
-

Commit 1508720 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508720 ]

LUCENE-5154: fix test to use getTempDir instead of CWD




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724834#comment-13724834
 ] 

ASF subversion and git services commented on LUCENE-5154:
-

Commit 1508718 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508718 ]

LUCENE-5154: fix test to use getTempDir instead of CWD




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724836#comment-13724836
 ] 

ASF subversion and git services commented on LUCENE-5154:
-

Commit 1508719 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1508719 ]

LUCENE-5154: fix test to use getTempDir instead of CWD




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724833#comment-13724833
 ] 

ASF subversion and git services commented on LUCENE-5154:
-

Commit 1508717 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1508717 ]

LUCENE-5154: fix test to use getTempDir instead of CWD




[jira] [Commented] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724831#comment-13724831
 ] 

Robert Muir commented on LUCENE-5154:
-

   [junit4] Tests with failures:
   solr/solrj: many
   solr/contrib/dataimporthandler:
   [junit4]   - 
org.apache.solr.handler.dataimport.TestDocBuilder.testDeltaImportNoRows_MustNotCommit
   solr/contrib/velocity:
   [junit4]   - 
org.apache.solr.velocity.VelocityResponseWriterTest.testCustomParamTemplate
   [junit4]   - 
org.apache.solr.velocity.VelocityResponseWriterTest.testSolrResourceLoaderTemplate




[jira] [Updated] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5154:


Attachment: LUCENE-5154.patch

Attached patch. I fixed 2 tests. lucene/ tests are passing. solr/ still needs 
work.




[jira] [Created] (LUCENE-5154) ban tests from writing to CWD

2013-07-30 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5154:
---

 Summary: ban tests from writing to CWD
 Key: LUCENE-5154
 URL: https://issues.apache.org/jira/browse/LUCENE-5154
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir
 Attachments: LUCENE-5154.patch

Currently each forked jvm has cwd = tempDir = .

This provides some minimal protection against tests in different jvms 
interfering with each other, but we can do much better by splitting these 
concerns and setting cwd = . and tempDir = ./temp

Tests that write files to CWD can confuse IDE users because they can create 
dirty checkouts or other issues between different runs, and of course can 
interfere with other tests in the *same* jvm (there are other possible ways to 
do this too).

So a test like this should fail with SecurityException, but currently does not.

{code}
public void testBogus() throws Exception {
  File file = new File("foo.txt");
  FileOutputStream os = new FileOutputStream(file);
  os.write(1);
  os.close();
}
{code}





[jira] [Updated] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread Han Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Jiang updated LUCENE-3069:
--

Attachment: LUCENE-3069.patch

Patch: revive IntersectTermsEnum in TempFSTOrd.

Mike, since we already have an intersect() impl, maybe we can still keep this? 
By the way, it is easy to migrate from TempFST to TempFSTOrd.

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.




[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2013-07-30 Thread Andrew Janowczyk (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724748#comment-13724748
 ] 

Andrew Janowczyk commented on LUCENE-2899:
--

A little bit of a shameless plug, but we just wrote a blog post 
[here|http://www.searchbox.com/named-entity-recognition-ner-in-solr/] about 
using the Stanford library for NER as a processor factory / request handler for 
Solr. It seems applicable to the audience on this ticket; is it worth 
contributing it to the community via a patch of some sort?

> Add OpenNLP Analysis capabilities as a module
> -
>
> Key: LUCENE-2899
> URL: https://issues.apache.org/jira/browse/LUCENE-2899
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
> Fix For: 5.0, 4.5
>
> Attachments: LUCENE-2899-current.patch, LUCENE-2899.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
> LUCENE-2899.patch, LUCENE-2899-RJN.patch, LUCENE-2899-x.patch, 
> LUCENE-2899-x.patch, LUCENE-2899-x.patch, OpenNLPFilter.java, 
> OpenNLPFilter.java, OpenNLPTokenizer.java, opennlp_trunk.patch
>
>
> Now that OpenNLP is an ASF project and has a nice license, it would be nice 
> to have a submodule (under analysis) that exposed capabilities for it. Drew 
> Farris, Tom Morton and I have code that does:
> * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
> would have to change slightly to buffer tokens)
> * NamedEntity recognition as a TokenFilter
> We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
> either payloads (PartOfSpeechAttribute?) on a token or at the same position.
> I'd propose it go under:
> modules/analysis/opennlp




[jira] [Commented] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724736#comment-13724736
 ] 

ASF subversion and git services commented on LUCENE-3069:
-

Commit 1508705 from [~billy] in branch 'dev/branches/lucene3069'
[ https://svn.apache.org/r1508705 ]

LUCENE-3069: add TempFSTOrd, with FST index + specialized block

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-3069.patch
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.




Measuring SOLR performance

2013-07-30 Thread Roman Chyla
Hello,

I have been wanting some tools for measuring the performance of SOLR, similar
to Mike McCandless' Lucene benchmarks.

So yet another monitor was born; it is described here:
http://29min.wordpress.com/2013/07/31/measuring-solr-query-performance/

I tested it on the problem of garbage collectors (see the blogs for
details), and so far I can't conclude whether a highly customized G1 is better
than a highly customized CMS, but I think interesting details can be seen
there.

Hope this helps someone, and of course, feel free to improve the tool and
share!

roman


[jira] [Updated] (SOLR-5095) SolrCore.infoRegistry needs overhauled with some form of "namespacing"

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5095:
---

Description: 
While investigating SOLR-3616 / SOLR-2715, I realized the failure I was seeing 
didn't seem to be related to the initial report of that bug, and instead seemed 
to be due to an obvious and fundamental limitation in the way SolrCore keeps 
track of "plugins" using the infoRegistry: it's just a 
{{Map<String, SolrInfoMBean>}} keyed off of the name of the plugin, but there 
is no "namespacing" used in the infoRegistry, so two completely different types 
of plugins with the same name will overwrite each other.

When looking at data using something like /admin/mbeans, this manifests itself 
solely as missing objects: the last one .put() into the infoRegistry "wins" -- 
using JMX, both objects are actually visible because of how JMX ObjectNames are 
built around a set of key=val pairs, and a bug in how JmxMonitoredMap 
unregisters existing MBeans when .put() is called on a key it already knows 
about (the unregister call is made using an ObjectName built from the infoBean 
passed to the put() call -- if infoBean.getName() is not exactly the same as 
the previous infoBean put() with the same key, then the MBeanServer will 
continue to know about both of them)
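The flat-registry half of this is easy to show in isolation. Below is a minimal, self-contained sketch (the "solr:" domain and plugin names are made up for illustration, this is not the real JmxMonitoredMap code) of how an MBeanServer happily holds two ObjectNames that differ only in a key=val property, while a plain name-keyed Map keeps only the last .put():

```java
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ObjectNameDemo {
    // Trivial standard MBean so registration succeeds.
    public interface DummyMBean { String getName(); }
    public static class Dummy implements DummyMBean {
        private final String name;
        public Dummy(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Two plugins that share the name "query" but differ in category.
        ObjectName parser = new ObjectName("solr:category=QPARSER,name=query");
        ObjectName component = new ObjectName("solr:category=OTHER,name=query");
        server.registerMBean(new Dummy("parser"), parser);
        server.registerMBean(new Dummy("component"), component);

        // JMX keeps both, because the ObjectName key=val pairs differ...
        System.out.println(server.isRegistered(parser)
                && server.isRegistered(component)); // true

        // ...while a flat registry keyed only by plugin name keeps just one.
        Map<String, Object> infoRegistry = new HashMap<>();
        infoRegistry.put("query", new Dummy("parser"));
        infoRegistry.put("query", new Dummy("component")); // overwrites the first
        System.out.println(infoRegistry.size()); // 1

        server.unregisterMBean(parser);
        server.unregisterMBean(component);
    }
}
```

This only demonstrates why JMX can show more entries than the infoRegistry; the unregister-with-a-stale-ObjectName path described above is the separate bug.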


  was:
While investigating SOLR-3616, I realized the failure I was seeing didn't seem 
to be related to the initial report of that bug, and instead seemed to be due 
to an obvious and fundamental limitation in the way SolrCore keeps track of 
"plugins" using the infoRegistry: it's just a {{Map<String, SolrInfoMBean>}} 
keyed off of the name of the plugin, but there is no "namespacing" used in the 
infoRegistry, so two completely different types of plugins with the same name 
will overwrite each other.

When looking at data using something like /admin/mbeans, this manifests itself 
solely as missing objects: the last one .put() into the infoRegistry "wins" -- 
using JMX, both objects are actually visible because of how JMX ObjectNames are 
built around a set of key=val pairs, and a bug in how JmxMonitoredMap 
unregisters existing MBeans when .put() is called on a key it already knows 
about (the unregister call is made using an ObjectName built from the infoBean 
passed to the put() call -- if infoBean.getName() is not exactly the same as 
the previous infoBean put() with the same key, then the MBeanServer will 
continue to know about both of them)



> SolrCore.infoRegistry needs overhauled with some form of "namespacing"
> --
>
> Key: SOLR-5095
> URL: https://issues.apache.org/jira/browse/SOLR-5095
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-5095_bug_demo.patch
>
>
> While investigating SOLR-3616 / SOLR-2715, I realized the failure I was 
> seeing didn't seem to be related to the initial report of that bug, and 
> instead seemed to be due to an obvious and fundamental limitation in the way 
> SolrCore keeps track of "plugins" using the infoRegistry: it's just a 
> {{Map<String, SolrInfoMBean>}} keyed off of the name of the plugin, but there 
> is no "namespacing" used in the infoRegistry, so two completely different 
> types of plugins with the same name will overwrite each other.
> When looking at data using something like /admin/mbeans, this manifests 
> itself solely as missing objects: the last one .put() into the infoRegistry 
> "wins" -- using JMX, both objects are actually visible because of how JMX 
> ObjectNames are built around a set of key=val pairs, and a bug in how 
> JmxMonitoredMap unregisters existing MBeans when .put() is called on a key it 
> already knows about (the unregister call is made using an ObjectName built 
> from the infoBean passed to the put() call -- if infoBean.getName() is not 
> exactly the same as the previous infoBean put() with the same key, then the 
> MBeanServer will continue to know about both of them)




[jira] [Updated] (SOLR-5095) SolrCore.infoRegistry needs overhauled with some form of "namespacing"

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5095:
---

Attachment: SOLR-5095_bug_demo.patch

SOLR-5095_bug_demo.patch is a trivial test patch demonstrating the discrepancy 
between the infoRegistry contents and the JMX contents ... the off-by-2 error 
is because...

 * out of the box defaults define a "query" QParser (NestedQParserPlugin) and a 
"query" SearchComponent (QueryComponent).
 * in the configs used by the test, there is a "dismax" SearchHandler instance 
declared, in addition to the out of the box default "dismax" QParser

{noformat}
  [junit4] FAILURE 0.04s | TestJmxIntegration.testInfoRegistryVsMbeanServer <<<
   [junit4]> Throwable #1: java.lang.AssertionError: regSize != numMbeans 
expected:<79> but was:<81>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([CF457CF6E471C68:3997F649BAD78365]:0)
   [junit4]>at 
org.apache.solr.core.TestJmxIntegration.testInfoRegistryVsMbeanServer(TestJmxIntegration.java:86)
   [junit4]>at java.lang.Thread.run(Thread.java:724)
{noformat}


> SolrCore.infoRegistry needs overhauled with some form of "namespacing"
> --
>
> Key: SOLR-5095
> URL: https://issues.apache.org/jira/browse/SOLR-5095
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-5095_bug_demo.patch
>
>
> While investigating SOLR-3616, I realized the failure I was seeing didn't 
> seem to be related to the initial report of that bug, and instead seemed to 
> be due to an obvious and fundamental limitation in the way SolrCore keeps 
> track of "plugins" using the infoRegistry: it's just a 
> {{Map<String, SolrInfoMBean>}} keyed off of the name of the plugin, but there 
> is no "namespacing" used in the infoRegistry, so two completely different 
> types of plugins with the same name will overwrite each other.
> When looking at data using something like /admin/mbeans, this manifests 
> itself solely as missing objects: the last one .put() into the infoRegistry 
> "wins" -- using JMX, both objects are actually visible because of how JMX 
> ObjectNames are built around a set of key=val pairs, and a bug in how 
> JmxMonitoredMap unregisters existing MBeans when .put() is called on a key it 
> already knows about (the unregister call is made using an ObjectName built 
> from the infoBean passed to the put() call -- if infoBean.getName() is not 
> exactly the same as the previous infoBean put() with the same key, then the 
> MBeanServer will continue to know about both of them)




[jira] [Commented] (SOLR-5095) SolrCore.infoRegistry needs overhauled with some form of "namespacing"

2013-07-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724661#comment-13724661
 ] 

Hoss Man commented on SOLR-5095:



For 5.0, I think we should completely overhaul the way the infoRegistry 
works...

 * Replace the simple {{Map<String, SolrInfoMBean>}} with something utilizing 
the "type" of plugin as a namespace or parent hierarchy (ie: RequestHandler, 
SearchComponent, QParserPlugin, etc...)
 * replace all of the existing JMX ObjectNames used to register MBeans so the 
JMX hierarchy matches the new hierarchy organized by plugin "type".
 * update /admin/mbeans so that in addition to the current "cat" and "key" 
lookups you can also browse MBeans by their plugin "type"
 * update the UI to show the list of MBeans organized by "type" instead of by 
"cat"
 * along the way, make sure we clean up things like SOLR-3774


For 4.x, I think we should leave this alone -- any kind of meaningful fix I can 
imagine would require changing the names used to register things in JMX, and 
that seems like something too significant to change in a minor release given how 
it will affect existing users who monitor with JMX.  For consistency, we could 
conceivably just fix the bug in JmxMonitoredMap.put() so that JMX accurately 
reflects only the items in the infoRegistry -- but that wouldn't really help 
anyone, it would just take potentially useful data away from JMX users.
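A two-level registry along the lines of the 5.0 proposal above could look like this. The API shape is hypothetical, not code from any posted patch: keying by plugin "type" first, then name, so a QParserPlugin named "query" no longer clobbers a SearchComponent named "query":

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NamespacedRegistry {
    // type -> (name -> plugin); Object stands in for SolrInfoMBean here.
    private final Map<String, Map<String, Object>> registry = new ConcurrentHashMap<>();

    public void put(String type, String name, Object plugin) {
        registry.computeIfAbsent(type, t -> new ConcurrentHashMap<>()).put(name, plugin);
    }

    public Object get(String type, String name) {
        Map<String, Object> byName = registry.get(type);
        return byName == null ? null : byName.get(name);
    }

    public static void main(String[] args) {
        NamespacedRegistry r = new NamespacedRegistry();
        r.put("QParserPlugin", "query", "nestedQParser");
        r.put("SearchComponent", "query", "queryComponent");
        // Both survive; a flat Map keyed only by "query" would keep one.
        System.out.println(r.get("QParserPlugin", "query"));   // nestedQParser
        System.out.println(r.get("SearchComponent", "query")); // queryComponent
    }
}
```

Mapping this onto JMX would mean encoding the type as an ObjectName property, which is exactly the kind of rename that makes a 4.x backport disruptive for existing JMX monitoring.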


> SolrCore.infoRegistry needs overhauled with some form of "namespacing"
> --
>
> Key: SOLR-5095
> URL: https://issues.apache.org/jira/browse/SOLR-5095
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-5095_bug_demo.patch
>
>
> While investigating SOLR-3616, I realized the failure I was seeing didn't 
> seem to be related to the initial report of that bug, and instead seemed to 
> be due to an obvious and fundamental limitation in the way SolrCore keeps 
> track of "plugins" using the infoRegistry: it's just a 
> {{Map<String, SolrInfoMBean>}} keyed off of the name of the plugin, but there 
> is no "namespacing" used in the infoRegistry, so two completely different 
> types of plugins with the same name will overwrite each other.
> When looking at data using something like /admin/mbeans, this manifests 
> itself solely as missing objects: the last one .put() into the infoRegistry 
> "wins" -- using JMX, both objects are actually visible because of how JMX 
> ObjectNames are built around a set of key=val pairs, and a bug in how 
> JmxMonitoredMap unregisters existing MBeans when .put() is called on a key it 
> already knows about (the unregister call is made using an ObjectName built 
> from the infoBean passed to the put() call -- if infoBean.getName() is not 
> exactly the same as the previous infoBean put() with the same key, then the 
> MBeanServer will continue to know about both of them)




[jira] [Created] (SOLR-5095) SolrCore.infoRegistry needs overhauled with some form of "namespacing"

2013-07-30 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5095:
--

 Summary: SolrCore.infoRegistry needs overhauled with some form of 
"namespacing"
 Key: SOLR-5095
 URL: https://issues.apache.org/jira/browse/SOLR-5095
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


While investigating SOLR-3616, I realized the failure I was seeing didn't seem 
to be related to the initial report of that bug, and instead seemed to be due 
to an obvious and fundamental limitation in the way SolrCore keeps track of 
"plugins" using the infoRegistry: it's just a {{Map<String, SolrInfoMBean>}} 
keyed off of the name of the plugin, but there is no "namespacing" used in the 
infoRegistry, so two completely different types of plugins with the same name 
will overwrite each other.

When looking at data using something like /admin/mbeans, this manifests itself 
solely as missing objects: the last one .put() into the infoRegistry "wins" -- 
using JMX, both objects are actually visible because of how JMX ObjectNames are 
built around a set of key=val pairs, and a bug in how JmxMonitoredMap 
unregisters existing MBeans when .put() is called on a key it already knows 
about (the unregister call is made using an ObjectName built from the infoBean 
passed to the put() call -- if infoBean.getName() is not exactly the same as 
the previous infoBean put() with the same key, then the MBeanServer will 
continue to know about both of them)





[jira] [Commented] (SOLR-5092) Send shard request to multiple replicas

2013-07-30 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724581#comment-13724581
 ] 

Ryan Ernst commented on SOLR-5092:
--

I think this is a dup of SOLR-4449?

{quote}
why is the replica-consistency is so important? what would happen if one phase 
of a distributed request will get a response from replica1 and another phase 
will get a response from replica2?
{quote}

Replica consistency is important because multi phase search in solr depends on 
things like the filter cache being filled and not rerunning the entire search 
on subsequent requests.

{quote}
If shard request will be sent to all of the replicas of each shard
{quote}
That is kind of scary (loading the entire fleet with the same query kind of 
negates some of the benefit of replicas). I would rather do it like SOLR-4449 
and have it configurable.

> Send shard request to multiple replicas
> ---
>
> Key: SOLR-5092
> URL: https://issues.apache.org/jira/browse/SOLR-5092
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java, SolrCloud
>Affects Versions: 4.4
>Reporter: Isaac Hebsh
>Priority: Minor
>  Labels: distributed, performance, shard, solrcloud
> Attachments: SOLR-5092.patch
>
>
> We have a case on a SolrCloud cluster: queries take too much QTime, due to a 
> randomly slow shard request. In a noticeable part of queries, the slowest 
> shard consumes more than 4 times the qtime of the average.
> Of course, deep inspection of the performance factor should be made on the 
> specific environment.
> But, there is one more idea:
> If shard request will be sent to all of the replicas of each shard, the 
> probability of all the replicas of the same shard to be the slowest is very 
> small. Obviously cluster works harder, but on a (very) low qps, it might be 
> OK.




[jira] [Resolved] (SOLR-3615) JMX Mbeans disappear on core reload

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-3615.


Resolution: Fixed

Based on Emanuele's comments about his last patch, it looks like this issue is 
a dup of SOLR-2623?

(that seems to be the origin of the code incorporated into the final "new patch 
inspired at 4.x fix" posted here)

> JMX Mbeans disappear on core reload
> ---
>
> Key: SOLR-3615
> URL: https://issues.apache.org/jira/browse/SOLR-3615
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 3.4, 3.5, 3.6
>Reporter: Emanuele Lombardi
>  Labels: CoreContainer, CoreReload, JMX
> Attachments: jmxReloadPatch.txt, patch.txt
>
>
> https://issues.apache.org/jira/browse/SOLR-3616
> This fix solves the issue of MBeans disappearing after core reload




[jira] [Resolved] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5094.


   Resolution: Fixed
Fix Version/s: 5.0
   4.5

> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5094.patch
>
>
> For reasons I can't explain, TestJmxIntegration stopped working on my machine 
> today, and when I started looking into it, I realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was set up to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertently?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the MBean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if the problem is that a JMX server isn't found on core 
> init




[jira] [Commented] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724494#comment-13724494
 ] 

ASF subversion and git services commented on SOLR-5094:
---

Commit 1508662 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508662 ]

SOLR-5094: Fix order of setup in TestJmxIntegration so MBean server is running 
before Solr init.  Also fixes SOLR-4418 (merge r1508661)

> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5094.patch
>
>
> For reasons I can't explain, TestJmxIntegration stopped working on my machine 
> today, and when I started looking into it, I realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was set up to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertently?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the MBean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if the problem is that a JMX server isn't found on core 
> init




[jira] [Commented] (SOLR-4418) TestJmxIntegration fails with IBM J9 6.0 and 7.0

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724495#comment-13724495
 ] 

ASF subversion and git services commented on SOLR-4418:
---

Commit 1508662 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508662 ]

SOLR-5094: Fix order of setup in TestJmxIntegration so MBean server is running 
before Solr init.  Also fixes SOLR-4418 (merge r1508661)

> TestJmxIntegration fails with IBM J9 6.0 and 7.0
> 
>
> Key: SOLR-4418
> URL: https://issues.apache.org/jira/browse/SOLR-4418
> Project: Solr
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> I'm not sure if this is a JVM bug or an Oracle-specific assumption somewhere 
> or something simple, but if I run:
> {noformat}
> ant test  -Dtestcase=TestJmxIntegration -Dtests.method=testJmxUpdate 
> -Dtests.seed=DC0CB18E606BDE6D -Dtests.slow=true -Dtests.locale=ja_JP 
> -Dtests.timezone=Australia/Darwin -Dtests.file.encoding=UTF-8
> {noformat}
> With J9 6.0 SR12 or J9 7.0 SR3 it fails with this exception:
> {noformat}
> [junit4:junit4]> Throwable #1: java.lang.AssertionError: No mbean found 
> for SolrIndexSearcher
> [junit4:junit4]>  at 
> __randomizedtesting.SeedInfo.seed([DC0CB18E606BDE6D:CA6B83E4F0BD75C6]:0)
> [junit4:junit4]>  at org.junit.Assert.fail(Assert.java:93)
> [junit4:junit4]>  at org.junit.Assert.assertTrue(Assert.java:43)
> [junit4:junit4]>  at org.junit.Assert.assertFalse(Assert.java:68)
> [junit4:junit4]>  at 
> org.apache.solr.core.TestJmxIntegration.testJmxUpdate(TestJmxIntegration.java:99)
> [junit4:junit4]>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [junit4:junit4]>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> [junit4:junit4]>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> [junit4:junit4]>  at java.lang.reflect.Method.invoke(Method.java:611)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> [junit4:junit4]>  at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> [junit4:junit4]>  at 
> com.carrotsea

[jira] [Resolved] (SOLR-4418) TestJmxIntegration fails with IBM J9 6.0 and 7.0

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-4418.


   Resolution: Fixed
Fix Version/s: 5.0
   4.5
 Assignee: Hoss Man

fixed by SOLR-5094

> TestJmxIntegration fails with IBM J9 6.0 and 7.0
> 
>
> Key: SOLR-4418
> URL: https://issues.apache.org/jira/browse/SOLR-4418
> Project: Solr
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Hoss Man
> Fix For: 4.5, 5.0
>
>
> I'm not sure if this is a JVM bug or an Oracle-specific assumption somewhere 
> or something simple, but if I run:
> {noformat}
> ant test  -Dtestcase=TestJmxIntegration -Dtests.method=testJmxUpdate 
> -Dtests.seed=DC0CB18E606BDE6D -Dtests.slow=true -Dtests.locale=ja_JP 
> -Dtests.timezone=Australia/Darwin -Dtests.file.encoding=UTF-8
> {noformat}
> With J9 6.0 SR12 or J9 7.0 SR3 it fails with this exception:
> {noformat}
> [junit4:junit4]> Throwable #1: java.lang.AssertionError: No mbean found 
> for SolrIndexSearcher
> [junit4:junit4]>  at 
> __randomizedtesting.SeedInfo.seed([DC0CB18E606BDE6D:CA6B83E4F0BD75C6]:0)
> [junit4:junit4]>  at org.junit.Assert.fail(Assert.java:93)
> [junit4:junit4]>  at org.junit.Assert.assertTrue(Assert.java:43)
> [junit4:junit4]>  at org.junit.Assert.assertFalse(Assert.java:68)
> [junit4:junit4]>  at 
> org.apache.solr.core.TestJmxIntegration.testJmxUpdate(TestJmxIntegration.java:99)
> [junit4:junit4]>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [junit4:junit4]>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> [junit4:junit4]>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> [junit4:junit4]>  at java.lang.reflect.Method.invoke(Method.java:611)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> [junit4:junit4]>  at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
> [junit4:junit4]>  at 
> org.apache.lucene.util.Abstrac

[jira] [Commented] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724487#comment-13724487
 ] 

ASF subversion and git services commented on SOLR-5094:
---

Commit 1508661 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1508661 ]

SOLR-5094: Fix order of setup in TestJmxIntegration so MBean server is running 
before Solr init.  Also fixes SOLR-4418

> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5094.patch
>
>
> For reasons I can't explain, TestJmxIntegration stopped working on my machine 
> today, and when I started looking into it, I realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was set up to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertently?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the MBean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if the problem is that a JMX server isn't found on core 
> init.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724485#comment-13724485
 ] 

Robert Muir commented on SOLR-5094:
---

+1 to commit.

Today:
this test fails on a clean checkout (with any JVM) if you run ant test 
-Dtestcase=TestJmxIntegration
the "J9 assume/bug" is not related to J9 at all; it's the same bug here.

with the patch it all works (tested with both Oracle and J9, with the assume 
removed)


> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5094.patch
>
>
> For reasons I can't explain, TestJmxIntegration stopped working on my machine 
> today, and when I started looking into it, I realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was set up to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertently?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the MBean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if the problem is that a JMX server isn't found on core 
> init.




[jira] [Commented] (SOLR-4418) TestJmxIntegration fails with IBM J9 6.0 and 7.0

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724488#comment-13724488
 ] 

ASF subversion and git services commented on SOLR-4418:
---

Commit 1508661 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1508661 ]

SOLR-5094: Fix order of setup in TestJmxIntegration so MBean server is running 
before Solr init.  Also fixes SOLR-4418

> TestJmxIntegration fails with IBM J9 6.0 and 7.0
> 
>
> Key: SOLR-4418
> URL: https://issues.apache.org/jira/browse/SOLR-4418
> Project: Solr
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> I'm not sure if this is a JVM bug or an Oracle-specific assumption somewhere 
> or something simple, but if I run:
> {noformat}
> ant test  -Dtestcase=TestJmxIntegration -Dtests.method=testJmxUpdate 
> -Dtests.seed=DC0CB18E606BDE6D -Dtests.slow=true -Dtests.locale=ja_JP 
> -Dtests.timezone=Australia/Darwin -Dtests.file.encoding=UTF-8
> {noformat}
> With J9 6.0 SR12 or J9 7.0 SR3 it fails with this exception:
> {noformat}
> [junit4:junit4]> Throwable #1: java.lang.AssertionError: No mbean found 
> for SolrIndexSearcher
> [junit4:junit4]>  at 
> __randomizedtesting.SeedInfo.seed([DC0CB18E606BDE6D:CA6B83E4F0BD75C6]:0)
> [junit4:junit4]>  at org.junit.Assert.fail(Assert.java:93)
> [junit4:junit4]>  at org.junit.Assert.assertTrue(Assert.java:43)
> [junit4:junit4]>  at org.junit.Assert.assertFalse(Assert.java:68)
> [junit4:junit4]>  at 
> org.apache.solr.core.TestJmxIntegration.testJmxUpdate(TestJmxIntegration.java:99)
> [junit4:junit4]>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [junit4:junit4]>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> [junit4:junit4]>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> [junit4:junit4]>  at java.lang.reflect.Method.invoke(Method.java:611)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> [junit4:junit4]>  at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.Sy

Re: svn commit: r1508604 - in /lucene/dev/trunk: ./ lucene/ lucene/core/ lucene/core/src/test/org/apache/lucene/util/packed/TestPackedInts.java

2013-07-30 Thread Adrien Grand
On Tue, Jul 30, 2013 at 10:19 PM,   wrote:
> fix test not to generate nullreaders > 10, since it always asserts bulk reads 
> from that position

Thanks Robert!

-- 
Adrien




[jira] [Commented] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724472#comment-13724472
 ] 

Hoss Man commented on SOLR-5094:


bq. somehow this test must rely upon the test order of previous tests?

Probably, yeah ... if this is the first test running in the VM, it will likely 
fail, but if another test runs first and causes an MBean server to be created, 
then the test works fine.

And recently committed changes seem to have shifted the test ordering in the VM 
slightly?

Either way: this just started failing in Jenkins as well, suggesting that it's 
not my imagination or something that changed on my machine -- something 
fundamental changed today on both trunk and 4x, causing this to fail regardless 
of seed.


> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5094.patch
>
>
> For reasons I can't explain, TestJmxIntegration stopped working on my machine 
> today, and when I started looking into it, I realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was set up to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertently?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the MBean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if the problem is that a JMX server isn't found on core 
> init.




Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b99) - Build # 6723 - Failure!

2013-07-30 Thread Chris Hostetter


I'm actually looking into these TestJmxIntegration failures now in SOLR-5094




: Date: Tue, 30 Jul 2013 19:20:23 + (UTC)
: From: Policeman Jenkins Server 
: To: dev@lucene.apache.org, hoss...@apache.org
: Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b99) - Build #
: 6723 - Failure!
: 
: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6723/
: Java: 32bit/jdk1.8.0-ea-b99 -client -XX:+UseConcMarkSweepGC
: 
: 2 tests failed.
: REGRESSION:  org.apache.solr.core.TestJmxIntegration.testJmxUpdate
: 
: Error Message:
: No mbean found for SolrIndexSearcher
: 
: Stack Trace:
: java.lang.AssertionError: No mbean found for SolrIndexSearcher
:   at 
__randomizedtesting.SeedInfo.seed([F18FE6A79C36BC90:E7E8D4CD0CE0173B]:0)
:   at org.junit.Assert.fail(Assert.java:93)
:   at org.junit.Assert.assertTrue(Assert.java:43)
:   at org.junit.Assert.assertFalse(Assert.java:68)
:   at 
org.apache.solr.core.TestJmxIntegration.testJmxUpdate(TestJmxIntegration.java:120)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:491)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
:   at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
:   at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
:   at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
:   at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
:   at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
:   at 
com.carr

[jira] [Commented] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724467#comment-13724467
 ] 

Robert Muir commented on SOLR-5094:
---

I have experienced the same issue. 

I think your explanation might make some sense: somehow this test must rely 
upon the test order of previous tests? So someone makes an unrelated commit 
(e.g. one that adds a new test) and, based on the number of processors I 
have/the phase of the moon/whatever, it causes it to start failing on my 
machine that day.



> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5094.patch
>
>
> For reasons I can't explain, TestJmxIntegration stopped working on my machine 
> today, and when I started looking into it, I realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was set up to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertently?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the MBean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if the problem is that a JMX server isn't found on core 
> init.




[jira] [Updated] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5094:
---

Attachment: SOLR-5094.patch

patch that should fix the problem with the test

> TestJmxIntegration makes no sense
> -
>
> Key: SOLR-5094
> URL: https://issues.apache.org/jira/browse/SOLR-5094
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-5094.patch
>
>
> for reasons i can't explain, TestJmxIntegration stoped working on my machine 
> today, and when i started looking into it, i realized that there's no logical 
> reason why it should be working on any machine at all.
> Back when this test was first written, it was setup to ensure that an MBean 
> server was up and running prior to initializing the SolrCore, and then the 
> configuration used by the test was designed to use JMX if and only if an 
> existing MBean server was running.
> in [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] 
> this was (inadvertantly?) changed so that the SolrCore was initialized 
> @BeforeClass, but there was no certainty that the Mbean server was started 
> until later -- so the test fails in a variety of confusing ways because there 
> is no JMX running at all.  The only reason it can succeed is if an MBean 
> server already happens to be running.
> I've got a patch that should fix this in general, or at least make it fail 
> with a clear error if hte problem is that a JMX server isn't found on core 
> init




[jira] [Created] (SOLR-5094) TestJmxIntegration makes no sense

2013-07-30 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5094:
--

 Summary: TestJmxIntegration makes no sense
 Key: SOLR-5094
 URL: https://issues.apache.org/jira/browse/SOLR-5094
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man


For reasons I can't explain, TestJmxIntegration stopped working on my machine 
today, and when I started looking into it, I realized that there's no logical 
reason why it should be working on any machine at all.

Back when this test was first written, it was set up to ensure that an MBean 
server was up and running prior to initializing the SolrCore, and then the 
configuration used by the test was designed to use JMX if and only if an 
existing MBean server was running.

In [r1402613|https://svn.apache.org/viewvc?view=revision&revision=1402613] this 
was (inadvertently?) changed so that the SolrCore was initialized @BeforeClass, 
but there was no certainty that the MBean server was started until later -- so 
the test fails in a variety of confusing ways because there is no JMX running 
at all.  The only reason it can succeed is if an MBean server already happens 
to be running.

I've got a patch that should fix this in general, or at least make it fail with 
a clear error if the problem is that a JMX server isn't found on core init.
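The ordering bug described above comes down to plain JMX semantics: a lookup only finds MBeans that were registered before the query ran. A minimal, self-contained sketch using only the JDK (the DemoMBean/Demo classes below are hypothetical illustrations, not Solr's actual MBeans):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical demo bean; the standard-MBean convention requires the
// management interface to be named <ClassName>MBean.
interface DemoMBean {
    int getValue();
}

class Demo implements DemoMBean {
    public int getValue() { return 42; }
}

public class MBeanOrderDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Demo");

        // Querying before registration finds nothing -- the same shape as the
        // test asserting on SolrIndexSearcher MBeans before any MBean server
        // was guaranteed to be running during core init.
        Set<ObjectName> before = server.queryNames(name, null);
        System.out.println("before registration: " + before.size()); // prints 0

        // Register first, then query: the lookup succeeds.
        server.registerMBean(new Demo(), name);
        Set<ObjectName> after = server.queryNames(name, null);
        System.out.println("after registration: " + after.size()); // prints 1
    }
}
```

This is why the patch reorders setup so the MBean server exists before Solr initializes: reversing the two steps makes the lookup deterministic instead of dependent on whichever earlier test happened to create a server.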




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724461#comment-13724461
 ] 

Uwe Schindler commented on LUCENE-5153:
---

Thanks!

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Fix For: 5.0, 4.5
>
> Attachments: LUCENE-5153.patch, LUCENE-5153.patch, LUCENE-5153.patch, 
> LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724450#comment-13724450
 ] 

David Smiley commented on LUCENE-3069:
--

Nice work!  The spatial prefix trees will have even more awesome performance 
with all terms in RAM.  It'd be nice if I could configure the docFreq to be 
memory resident but, as Mike said, adding options like that can be explored 
later.

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-3069.patch
>
>
> The FST-based TermDictionary has been a great improvement, yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds an FST from the entire term, 
> not just the delta.




[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724446#comment-13724446
 ] 

Hoss Man commented on SOLR-5093:


I can see the argument for making "field:*" parse as equivalent to "field:[* TO 
*]" if the latter is in fact more efficient, but I agree with Rob that we 
shouldn't try to make the parser pull out individual clauses and construct 
special query objects that are backed by the filterCache.  If I have an fq in 
my solrconfig that looks like this...

{noformat}
X AND Y AND Z
{noformat}

...that entire BooleanQuery should be cached as a single entity in the 
filterCache regardless of what X, Y, and Z really are -- because that's what I 
asked for: a single filter query.

It would suck if the query parser looked at the specifics of what each of those 
clauses is and said "I'm going to try to be smart and make each of these 
clauses a special query backed by the filterCache", because now I have 4 
queries in my filterCache instead of just 1, and 3 of them will never be used.
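The caching-granularity point Hoss makes can be sketched with a toy cache keyed by the query string (an illustrative stand-in, not Solr's actual filterCache API): caching the whole fq produces one entry, while auto-caching each clause would leave several entries the user never asked to have cached.

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy filter-cache sketch: each query string maps to the bitset of matching
// docs. All names here are hypothetical illustrations.
public class FilterCacheSketch {
    static final int NUM_DOCS = 8;
    static final Map<String, BitSet> cache = new HashMap<>();

    // Stand-in for actually executing the query against the index.
    static BitSet execute(String query) {
        BitSet b = new BitSet(NUM_DOCS);
        for (int d = query.hashCode() & 3; d < NUM_DOCS; d += 2) b.set(d);
        return b;
    }

    // Execute on a miss, reuse the cached bitset on a hit.
    static BitSet cached(String q, Function<String, BitSet> exec) {
        return cache.computeIfAbsent(q, exec);
    }

    public static void main(String[] args) {
        // Caching the whole fq as one entity: exactly one entry, the one asked for.
        cached("X AND Y AND Z", FilterCacheSketch::execute);
        System.out.println(cache.size()); // prints 1

        // Auto-caching each clause separately instead fills the cache with
        // entries that may never be reused individually.
        cache.clear();
        for (String clause : new String[] {"X", "Y", "Z"}) {
            cached(clause, FilterCacheSketch::execute);
        }
        System.out.println(cache.size()); // prints 3
    }
}
```

Since the cache has bounded capacity, the extra per-clause entries evict filters the user deliberately cached, which is the pollution the comments below warn about.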



> Rewrite field:* to use the filter cache
> ---
>
> Key: SOLR-5093
> URL: https://issues.apache.org/jira/browse/SOLR-5093
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Reporter: David Smiley
>
> Sometimes people write a query including something like {{field:*}}, which 
> matches all documents that have an indexed value in that field.  That can be 
> particularly expensive for tokenized text, numeric, and spatial fields.  The 
> expert advice is to index a separate boolean field that is used in place of 
> these query clauses, but that's annoying to do and it can take users a while 
> to realize that's what they need to do.
> I propose that Solr's query parser rewrite such queries to return a query 
> backed by Solr's filter cache.  The underlying query happens once (and it's 
> slow this time) and then it's cached after which it's super-fast to reuse.  
> Unfortunately Solr's filter cache is currently index global, not per-segment; 
> that's being handled in a separate issue.  
> Related to this, it may be worth considering whether Solr should, behind the 
> scenes, 
> index a field that records which fields have indexed values, and then it 
> could use this indexed data to power these queries so they are always fast to 
> execute.  Likewise, {{\[\* TO \*\]}} open-ended range queries could similarly 
> use this.
> For an example of how a user bumped into this, see:
> http://lucene.472066.n3.nabble.com/Performance-question-on-Spatial-Search-tt4081150.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724443#comment-13724443
 ] 

Robert Muir commented on SOLR-5093:
---

Solr today doesn't auto-cache. You can specify that you intend for a query to 
act only as a filter with fqs, control the caching behavior of these fqs, and 
so on.

So there is no need to add any additional auto-caching in the query parser. 
Things like LUCENE-4386 would just cause "filter cache insanity" where it's 
cached in duplicate places (in FieldCache.docsWithField as well as in fq 
bitsets).

Auto-caching things in the query can easily pollute the cache with stuff that's 
not actually intended to be reused: then it doesn't really work at all.




[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724430#comment-13724430
 ] 

Jack Krupansky commented on SOLR-5093:
--

bq. This user just has to pull out AND pp:* into another fq of pp:*

Exactly! That's what we (non-Lucene guys) are trying to do: eliminate the need 
for users to do that kind of manual optimization.

We want Solr to behave as optimally as possible OOTB.





[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724428#comment-13724428
 ] 

David Smiley commented on SOLR-5093:


Rob,
You're right for this particular user's use-case that I mentioned.  I 
overlooked that aspect of his query.  Nonetheless, I don't think that negates 
the usefulness of what I propose in this issue.

If you consider auto-caching "trappy", then you probably don't like Solr very 
much at all.




[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724422#comment-13724422
 ] 

Robert Muir commented on SOLR-5093:
---

Those same Lucene guys are not afraid to object here either.

This user just has to pull the {{AND pp:*}} clause out into a separate fq of {{pp:*}}.

{quote}
(Each filter is executed and cached separately. When it's time to use them to 
limit the number of results returned by a query, this is done using set 
intersections.) 
{quote}
http://wiki.apache.org/solr/SolrCaching#filterCache
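Concretely (hypothetical request parameters, following the wiki's advice), the split moves the slow clause out of the combined filter so it gets its own filterCache entry and is applied by set intersection:

```
# one fq: the whole BooleanQuery is cached as a single entry
fq=text:ipod AND pp:*

# split: pp:* is cached separately and reused across queries
fq=text:ipod
fq=pp:*
```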




[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724405#comment-13724405
 ] 

Jack Krupansky commented on SOLR-5093:
--

Some time ago I had suggested a related approach: LUCENE-4386 - "Query parser 
should generate FieldValueFilter for pure wildcard terms to boost query 
performance".

There were objections from the Lucene guys, but now that the Solr query parser 
is "divorced" from Lucene, maybe it could be reconsidered.

I couldn't testify as to the relative merits of using the filter cache vs. the 
FieldValueFilter.





[jira] [Comment Edited] (SOLR-5092) Send shard request to multiple replicas

2013-07-30 Thread Isaac Hebsh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724376#comment-13724376
 ] 

Isaac Hebsh edited comment on SOLR-5092 at 7/30/13 8:51 PM:


Submitting initial patch.
As [~erickoerickson] suggested on the mailing list, the changes are only in 
solrj, so nothing should change in core.

This change should be very easy. Just move the single HTTP request into 
CompletionService :)

The most complicated part of this patch is preserving the original exception 
handling. Some exceptions are considered temporary, while others are fatal. 
Moreover, we want to preserve the zombie-list maintenance as is.

  was (Author: isaachebsh):
Submitting initial patch.
As Erick suggested on mailing list, changes are only in solrj, so nothing 
should be changed in core.

This change should be very easy. Just move the single HTTP request into 
CompletionService :)

But, the most complicated thing in this patch, is to preserve the original 
exception handling. There are some exceptions which are considered as 
temporary, while other exceptions are fatal. Moreover, we want to preserve the 
zombie list maintenance as is.
  
> Send shard request to multiple replicas
> ---
>
> Key: SOLR-5092
> URL: https://issues.apache.org/jira/browse/SOLR-5092
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java, SolrCloud
>Affects Versions: 4.4
>Reporter: Isaac Hebsh
>Priority: Minor
>  Labels: distributed, performance, shard, solrcloud
> Attachments: SOLR-5092.patch
>
>
> We have a case on a SolrCloud cluster. Queries take too much QTime due to a 
> randomly slow shard request. In a noticeable fraction of queries, the slowest 
> shard consumes more than 4 times the average qtime.
> Of course, deep inspection of the performance factors should be done on the 
> specific environment.
> But there is one more idea:
> If the shard request is sent to all replicas of each shard, the probability 
> that all replicas of the same shard are the slowest is very small. Obviously 
> the cluster works harder, but at a (very) low qps, it might be 
> OK.




[jira] [Resolved] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-5153.


   Resolution: Fixed
Fix Version/s: 4.5
   5.0

Thanks Rob. I applied your improvement and committed.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Fix For: 5.0, 4.5
>
> Attachments: LUCENE-5153.patch, LUCENE-5153.patch, LUCENE-5153.patch, 
> LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (SOLR-5092) Send shard request to multiple replicas

2013-07-30 Thread Isaac Hebsh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724396#comment-13724396
 ] 

Isaac Hebsh commented on SOLR-5092:
---

Question:
In HttpShardHandler, I found the following comment:
{code}
// maps "localhost:8983|localhost:7574" to a shuffled
// List("http://localhost:8983", "http://localhost:7574")
// This is primarily to keep track of what order we should use to query the
// replicas of a shard, so that we use the same replica for all phases of a
// distributed request.
shardToURLs = new HashMap<String, List<String>>();
{code}

Why is replica consistency so important? What would happen if one phase of a 
distributed request got a response from replica1 and another phase got a 
response from replica2?
I think this situation can already happen today, if one replica stops 
responding in the middle of the distributed request.
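For reference, the mapping that comment describes can be sketched like this (a minimal stand-in, not Solr's actual HttpShardHandler code; the method name `toUrls` is made up for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch: "localhost:8983|localhost:7574" -> shuffled list of URLs, so one
// randomly preferred replica is used consistently across request phases.
public class ShardUrlsSketch {
    static List<String> toUrls(String shard, Random rnd) {
        List<String> urls = new ArrayList<>();
        for (String host : shard.split("\\|")) {
            urls.add("http://" + host);
        }
        Collections.shuffle(urls, rnd);  // pick a random replica order per request
        return urls;
    }

    public static void main(String[] args) {
        System.out.println(toUrls("localhost:8983|localhost:7574", new Random()));
    }
}
```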




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724397#comment-13724397
 ] 

ASF subversion and git services commented on LUCENE-5153:
-

Commit 1508623 from [~shaie] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508623 ]

LUCENE-5153:  Allow wrapping Reader from AnalyzerWrapper




[jira] [Commented] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724393#comment-13724393
 ] 

Robert Muir commented on SOLR-5093:
---

Err, this user already had this in their fq. So if they had a filterCache, 
they'd be using it.

They should pull that slow piece out into a separate fq so it's cached by 
itself. I don't understand why the query parser needs to do anything else here 
(especially any trappy auto-caching).




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724391#comment-13724391
 ] 

ASF subversion and git services commented on LUCENE-5153:
-

Commit 1508622 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1508622 ]

LUCENE-5153:  Allow wrapping Reader from AnalyzerWrapper




[jira] [Created] (SOLR-5093) Rewrite field:* to use the filter cache

2013-07-30 Thread David Smiley (JIRA)
David Smiley created SOLR-5093:
--

 Summary: Rewrite field:* to use the filter cache
 Key: SOLR-5093
 URL: https://issues.apache.org/jira/browse/SOLR-5093
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Reporter: David Smiley


Sometimes people write a query including something like {{field:*}} which 
matches all documents that have an indexed value in that field.  That can be 
particularly expensive for tokenized text, numeric, and spatial fields.  The 
expert advice is to index a separate boolean field that is used in place of 
these query clauses, but that's annoying to do and it can take users a while to 
realize that's what they need to do.

I propose that Solr's query parser rewrite such queries to return a query 
backed by Solr's filter cache.  The underlying query happens once (and it's 
slow this time) and then it's cached after which it's super-fast to reuse.  
Unfortunately Solr's filter cache is currently index global, not per-segment; 
that's being handled in a separate issue.  

Related to this, it may be worth considering if Solr should behind the scenes 
index a field that records which fields have indexed values, and then it could 
use this indexed data to power these queries so they are always fast to 
execute.  Likewise, {{\[\* TO \*\]}} open-ended range queries could similarly 
use this.

For an example of how a user bumped into this, see:
http://lucene.472066.n3.nabble.com/Performance-question-on-Spatial-Search-tt4081150.html




[jira] [Updated] (SOLR-5092) Send shard request to multiple replicas

2013-07-30 Thread Isaac Hebsh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isaac Hebsh updated SOLR-5092:
--

Attachment: SOLR-5092.patch

Submitting initial patch.
As Erick suggested on the mailing list, the changes are only in solrj, so 
nothing should change in core.

This change should be very easy. Just move the single HTTP request into 
CompletionService :)

The most complicated part of this patch is preserving the original exception 
handling. Some exceptions are considered temporary, while others are fatal. 
Moreover, we want to preserve the zombie-list maintenance as is.
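The core of the change can be sketched with the JDK's CompletionService alone (a minimal stand-in, not the actual SolrJ patch; the class and method names here are made up, and the real patch must additionally preserve the temporary-vs-fatal exception handling and zombie-list logic):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: send the same shard request to every replica and keep whichever
// response arrives first, so one slow replica no longer dominates QTime.
public class FirstReplicaWins {
    public static String queryFirst(List<Callable<String>> replicas) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(replicas.size());
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        replicas.forEach(cs::submit);          // fire all replica requests
        try {
            return cs.take().get();            // first completed response wins
        } finally {
            pool.shutdownNow();                // cancel the slower requests
        }
    }

    public static void main(String[] args) throws Exception {
        Callable<String> slow = () -> { Thread.sleep(500); return "slow"; };
        Callable<String> fast = () -> "fast";
        System.out.println(queryFirst(List.of(slow, fast)));
    }
}
```

As the issue notes, this trades extra cluster load for latency, so it only makes sense at (very) low qps.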




[jira] [Created] (SOLR-5092) Send shard request to multiple replicas

2013-07-30 Thread Isaac Hebsh (JIRA)
Isaac Hebsh created SOLR-5092:
-

 Summary: Send shard request to multiple replicas
 Key: SOLR-5092
 URL: https://issues.apache.org/jira/browse/SOLR-5092
 Project: Solr
  Issue Type: Improvement
  Components: clients - java, SolrCloud
Affects Versions: 4.4
Reporter: Isaac Hebsh
Priority: Minor


We have a case on a SolrCloud cluster. Queries take too much QTime due to a 
randomly slow shard request. In a noticeable fraction of queries, the slowest 
shard consumes more than 4 times the average qtime.

Of course, deep inspection of the performance factors should be done on the 
specific environment.

But there is one more idea:

If the shard request is sent to all replicas of each shard, the probability 
that all replicas of the same shard are the slowest is very small. Obviously 
the cluster works harder, but at a (very) low qps, it might be OK.




[jira] [Updated] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5153:


Attachment: LUCENE-5153.patch

Just a tiny improvement to the test (uses the BaseTokenStreamTestCase assert).




Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 330 - Still Failing

2013-07-30 Thread Robert Muir
I committed a fix.

On Tue, Jul 30, 2013 at 3:01 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/330/
>
> 1 tests failed.
> FAILED:  org.apache.lucene.util.packed.TestPackedInts.testPackedIntsNull
>
> Error Message:
>
>
> Stack Trace:
> java.lang.AssertionError
> at
> __randomizedtesting.SeedInfo.seed([1F00AC77CDB3B5F8:721ABE202E8339ED]:0)
> at
> org.apache.lucene.util.packed.PackedInts$NullReader.get(PackedInts.java:709)
> at
> org.apache.lucene.util.packed.TestPackedInts.testPackedIntsNull(TestPackedInts.java:556)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at java.lang.Thread.run(Thread.java:679)
>
>
>
>
> Build Log:
> [...truncated 794 lines...]
>[junit4] Suite: org.apache.lucene.util.packed.TestPackedInts
>[junit4]   2> NOTE: download the large Jenkins line-docs file by
> running 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPackedInts
> -Dtests.method=testPackedIntsNull -Dtests.seed=1F00AC77CDB3B5F8
>

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 55775 - Failure!

2013-07-30 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/55775/

1 tests failed.
REGRESSION:  org.apache.lucene.util.packed.TestPackedInts.testPackedIntsNull

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E9BA044CFBEFB54B:84A0161B18DF395E]:0)
at 
org.apache.lucene.util.packed.PackedInts$NullReader.get(PackedInts.java:709)
at 
org.apache.lucene.util.packed.TestPackedInts.testPackedIntsNull(TestPackedInts.java:556)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 1325 lines...]
BUILD FAILED
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build.xml:49:
 The following error occurred while executing this line:
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/common-build.xml:1230:
 The following error occurred while executing this line:
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/common-build.xml:873:
 There were test failures: 363 suites, 2313 tests, 1 failure, 59 ignored (46 
assumptions)

Total time: 4 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b99) - Build # 6723 - Failure!

2013-07-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6723/
Java: 32bit/jdk1.8.0-ea-b99 -client -XX:+UseConcMarkSweepGC

2 tests failed.
REGRESSION:  org.apache.solr.core.TestJmxIntegration.testJmxUpdate

Error Message:
No mbean found for SolrIndexSearcher

Stack Trace:
java.lang.AssertionError: No mbean found for SolrIndexSearcher
at 
__randomizedtesting.SeedInfo.seed([F18FE6A79C36BC90:E7E8D4CD0CE0173B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.core.TestJmxIntegration.testJmxUpdate(TestJmxIntegration.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)


REGRESSION:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:

[jira] [Updated] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5153:
---

Attachment: LUCENE-5153.patch

Patch with discussed fixes and test.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch, LUCENE-5153.patch, LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (LUCENE-5152) Lucene FST is not immutable

2013-07-30 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724280#comment-13724280
 ] 

Simon Willnauer commented on LUCENE-5152:
-

bq. So its really just a BytesRef bug right? 
Well, in theory that is true. Yet, if you have an arc in your hand, you can 
basically change it by passing it to a subsequent call to readNextTargetArc or 
whatever, which would override its values completely. BytesRef is tricky, but 
it is not the root cause of this issue. I do think that if you call:

{noformat} 
public Arc findTargetArc(int labelToMatch, Arc follow, Arc arc, 
BytesReader in) throws IOException
{noformat}

it should always fill the arc that is provided, so everything you do with it is 
up to you. Aside from this, I agree BytesRef is tricky and we should fix it if 
possible.
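To make the aliasing hazard concrete, here is a minimal, hypothetical sketch (this is NOT Lucene's actual FST API; the class and method names are made up for illustration) of the scratch-object pattern described above: a lookup method fills the caller-supplied Arc and returns that same instance, so an Arc held across a later call silently sees overwritten values.

```java
// Hypothetical sketch of the scratch-object reuse pattern (not Lucene's
// real FST API): findTargetArc fills the caller-supplied Arc and returns
// the same instance, so a "returned" Arc is mutated by later calls.
public class ArcReuseDemo {
    static class Arc {
        int label;
        long target;
    }

    // Fills 'scratch' with data for 'label' and returns the same instance.
    static Arc findTargetArc(int label, Arc scratch) {
        scratch.label = label;
        scratch.target = label * 10L; // fake payload for the sketch
        return scratch;
    }

    public static void main(String[] args) {
        Arc scratch = new Arc();
        Arc first = findTargetArc(1, scratch);
        int firstLabel = first.label;  // 1, captured before reuse
        findTargetArc(2, scratch);     // silently mutates 'first' as well
        // 'first' no longer holds the values it was returned with:
        System.out.println(firstLabel + " vs " + first.label); // 1 vs 2
    }
}
```

The design question in the comment is exactly this: if the method's contract is "the provided arc is always filled", the caller owns the object and any later mutation is the caller's doing, not a hidden side effect of the library.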

> Lucene FST is not immutable
> --
>
> Key: LUCENE-5152
> URL: https://issues.apache.org/jira/browse/LUCENE-5152
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/FSTs
>Affects Versions: 4.4
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, 4.5
>
> Attachments: LUCENE-5152.patch
>
>
> a spinoff from LUCENE-5120 where the analyzing suggester modified a returned 
> output from an FST (BytesRef), which caused side effects in later execution. 
> I added an assertion into the FST that checks if a cached root arc is 
> modified, and in fact this happens, for instance, in our MemoryPostingsFormat, 
> and I bet we find more places. We need to think about how to make this less 
> trappy since it can cause bugs that are super hard to find.




[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 330 - Still Failing

2013-07-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/330/

1 tests failed.
FAILED:  org.apache.lucene.util.packed.TestPackedInts.testPackedIntsNull

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1F00AC77CDB3B5F8:721ABE202E8339ED]:0)
at 
org.apache.lucene.util.packed.PackedInts$NullReader.get(PackedInts.java:709)
at 
org.apache.lucene.util.packed.TestPackedInts.testPackedIntsNull(TestPackedInts.java:556)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)




Build Log:
[...truncated 794 lines...]
   [junit4] Suite: org.apache.lucene.util.packed.TestPackedInts
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPackedInts 
-Dtests.method=testPackedIntsNull -Dtests.seed=1F00AC77CDB3B5F8 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=en_SG -Dtests.timezone=SystemV/EST5 -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.10s J1 | TestPackedInts.test

[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724266#comment-13724266
 ] 

Shai Erera commented on LUCENE-5153:


bq. I dont see the test in the patch

Hmm, I was sure I created a new patch. Will upload one soon, after I move the 
test under lucene/core.

bq. I think its good to make wrapComponents just return the components as a 
default.

Ok, will do.

bq. the getWrappedAnalyzer is explained by its javadocs

Duh, I should have read them before. :)

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch, LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724253#comment-13724253
 ] 

Michael McCandless commented on LUCENE-3069:


Wow, those are nice perf results, without implementing intersect!

Intersect really is an optional operation, so we could stop here/now and button 
everything up :)

I like this approach: you moved all the metadata (docFreq, totalTermFreq, 
long[] and byte[] from the PostingsFormatBase) into blocks, and then when we 
really need a term's metadata we go to its block and scan for it (like block 
tree).

I wonder if we could use MonotonicAppendingLongBuffer instead of long[] for the 
in-memory skip data? Right now it's, I think, 48 bytes per block (block = 128 
terms), so I guess that's fairly small (0.375 bytes per term).

{quote}
It is a little similar to BTTR now, and we can someday control how much
data to keep memory resident (e.g. keep stats in memory but metadata on 
disk, however this should be another issue).
{quote}
That's a nice (future) plus; this way the app can keep "only" the terms+ords in 
RAM, and leave all term metadata on disk.  But this is definitely optional for 
the project and we should separately explore it ...

{quote}
Another good part is, it naturally supports seek by ord. (Ah, 
actually I don't understand where it is used.)
{quote}

This is also a nice side-effect!
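As a quick sanity check on the overhead figure quoted above, the arithmetic is simply bytes-per-block divided by terms-per-block; the 48-byte and 128-term numbers are the ones from the comment, not measured here.

```java
// Back-of-the-envelope check of the in-memory skip-data overhead discussed
// above: ~48 bytes of long[] data per block, 128 terms per block.
public class SkipOverhead {
    static double bytesPerTerm(int bytesPerBlock, int termsPerBlock) {
        return (double) bytesPerBlock / termsPerBlock;
    }

    public static void main(String[] args) {
        System.out.println(bytesPerTerm(48, 128)); // 0.375
    }
}
```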

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-3069.patch
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.




[jira] [Commented] (SOLR-5088) ClassCastException is thrown when trying to use custom SearchHandler.

2013-07-30 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724241#comment-13724241
 ] 

Pavel Yaskevich commented on SOLR-5088:
---

Thanks for the tip [~mkhludnev], putting the handler into the war file did 
help; I'm resolving the ticket. Confirmed that I was doing it wrong :)

> ClassCastException is thrown when trying to use custom SearchHandler.
> -
>
> Key: SOLR-5088
> URL: https://issues.apache.org/jira/browse/SOLR-5088
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Pavel Yaskevich
>
> Hi guys,
>   I'm trying to replace solr.SearchHandler with a custom one in solrconfig.xml 
> for one of the stores, and it's throwing the following exception: 
> {noformat}
> Caused by: org.apache.solr.common.SolrException: RequestHandler init failure
>   at 
> org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:167)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:772)
>   ... 13 more
> Caused by: org.apache.solr.common.SolrException: Error Instantiating Request 
> Handler, org.my.solr.index.CustomSearchHandler failed to instantiate 
> org.apache.solr.request.SolrRequestHandler
>   at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:551)
>   at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:603)
>   at 
> org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:153)
>   ... 14 more
> Caused by: java.lang.ClassCastException: class 
> org.my.solr.index.CustomSearchHandler
>   at java.lang.Class.asSubclass(Class.java:3116)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:433)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:381)
>   at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:530)
>   ... 16 more
> {noformat}
> I actually tried extending SearchHandler, and implementing SolrRequestHandler 
> as well as extending RequestHandlerBase, and it's all the same 
> ClassCastException result...
> org.my.solr.index.CustomSearchHandler is definitely on the classpath and 
> recompiled on every retry. 
> Maybe I'm doing something terribly wrong?
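The eventual resolution (packaging the handler inside the war) suggests a classloader mismatch: SolrResourceLoader validates the loaded class with Class.asSubclass, and that check fails when the custom class does not appear to implement SolrRequestHandler from the loader's point of view — for example, because the interface was loaded twice by different classloaders. A minimal, Solr-free sketch of that asSubclass failure mode follows; the Handler/NotAHandler names are stand-ins invented for illustration.

```java
// Minimal sketch of the Class.asSubclass check that plugin loaders like
// SolrResourceLoader perform. A class that is not a subtype of the expected
// interface (here simulated directly; in Solr's case it can happen because
// the interface was loaded by a different classloader) fails with
// ClassCastException, matching the stack trace in the report.
public class CastDemo {
    interface Handler {}          // stands in for SolrRequestHandler
    static class NotAHandler {}   // stands in for the mismatched custom class

    public static void main(String[] args) {
        try {
            NotAHandler.class.asSubclass(Handler.class);
            System.out.println("cast ok");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the report");
        }
    }
}
```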




[jira] [Resolved] (SOLR-5088) ClassCastException is thrown when trying to use custom SearchHandler.

2013-07-30 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich resolved SOLR-5088.
---

Resolution: Not A Problem

> ClassCastException is thrown when trying to use custom SearchHandler.
> -
>
> Key: SOLR-5088
> URL: https://issues.apache.org/jira/browse/SOLR-5088
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Pavel Yaskevich
>
> Hi guys,
>   I'm trying to replace solr.SearchHandler with a custom one in solrconfig.xml 
> for one of the stores, and it's throwing the following exception: 
> {noformat}
> Caused by: org.apache.solr.common.SolrException: RequestHandler init failure
>   at 
> org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:167)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:772)
>   ... 13 more
> Caused by: org.apache.solr.common.SolrException: Error Instantiating Request 
> Handler, org.my.solr.index.CustomSearchHandler failed to instantiate 
> org.apache.solr.request.SolrRequestHandler
>   at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:551)
>   at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:603)
>   at 
> org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:153)
>   ... 14 more
> Caused by: java.lang.ClassCastException: class 
> org.my.solr.index.CustomSearchHandler
>   at java.lang.Class.asSubclass(Class.java:3116)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:433)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:381)
>   at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:530)
>   ... 16 more
> {noformat}
> I actually tried extending SearchHandler, and implementing SolrRequestHandler 
> as well as extending RequestHandlerBase, and it's all the same 
> ClassCastException result...
> org.my.solr.index.CustomSearchHandler is definitely on the classpath and 
> recompiled on every retry. 
> Maybe I'm doing something terribly wrong?




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724230#comment-13724230
 ] 

Robert Muir commented on LUCENE-5153:
-

I don't see the test in the patch... but I think it should be under lucene/core 
(and just wrap MockAnalyzer with MockCharFilter or something).

I think it's good to make wrapComponents just return the components as a 
default. This will make PerFieldAnalyzerWrapper look less stupid :)

getWrappedAnalyzer is explained by its javadocs. You might want a different 
analyzer for different fields.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch, LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Updated] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5153:
---

Attachment: LUCENE-5153.patch

Added a test to TestingAnalyzers, which is under lucene/analysis/common. Is 
there a suitable test under lucene/core?

Also, now that someone can override either the components or the reader, maybe 
wrapComponents should also not be abstract and just return the passed components? 
Or should we make both of them abstract?

Another question (unrelated to this issue) -- why do we need getWrappedAnalyzer 
instead of taking the wrapped analyzer in the ctor, like all of our Filter classes do?

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch, LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




Re: [jira] [Commented] (SOLR-5080) Ability to Configure Expirable Caches (use Google Collections - MapMaker/CacheBuilder for SolrCache)

2013-07-30 Thread Kranti Parisa
Agreed -- we do have unique identifiers for the 5-minute windows in
the form of the window start time. I just wanted to let GC clean up unused
cache entries instead of keeping them on the JVM heap, so that the freed
memory can go toward serving more queries.

Do you have any suggestions for common JVM settings when running Solr
(the values of course depend on the actual use case), something similar to
http://jprante.github.io/2012/11/28/Elasticsearch-Java-Virtual-Machine-settings-explained.html?

Thanks & Regards,
Kranti K Parisa
http://www.linkedin.com/in/krantiparisa



On Sun, Jul 28, 2013 at 6:42 PM, Erick Erickson wrote:

> I'd certainly do that before trying to have a custom cache policy. Measure,
> _then_ fix. If you have your autowarm parameters set up, when your
> searchers come up you'll get good responses on your queries.
>
> Of course that will put some load on the machine, but find out whether
> the load is noticeable before you make the switch.
>
> Or be really cheap: for the 5-minute interval, tack some kind of
> meaningless value onto the fq that doesn't change its effect. Then change
> that value every 5 minutes, and your old fq cache entries won't be re-used
> and will age out as time passes.
>
> FWIW,
> Erick
>
> On Sun, Jul 28, 2013 at 12:37 PM, Kranti Parisa (JIRA) 
> wrote:
> >
> > [
> https://issues.apache.org/jira/browse/SOLR-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721993#comment-13721993]
> >
> > Kranti Parisa commented on SOLR-5080:
> > -
> >
> > Sure, a new searcher will invalidate the caches. But our use case is that we
> don't want to expire any caches other than the filterCache, and for us the
> filters are time-bounded: the availability changes every 5 minutes. I
> am trying to set up a multi-core environment and use joins (with fq).
> Replication happens every 30 minutes. If we open a new searcher every 5
> minutes, then all the other caches are also invalidated, and at runtime it
> may cost us to rebuild them. Instead, the idea is to have a facility to
> configure the filterCache with a 5-minute expiration policy on one of the
> cores (the one where availability changes every 5 minutes), so that we can
> keep the JVM heap size in check, which is also an important factor under
> high load.
> >
> > So, you suggest opening a new searcher, which will invalidate all the caches
> on the specific core?
> >
> >> Ability to Configure Expirable Caches (use Google Collections -
> MapMaker/CacheBuilder for SolrCache)
> >>
> 
> >>
> >> Key: SOLR-5080
> >> URL: https://issues.apache.org/jira/browse/SOLR-5080
> >> Project: Solr
> >>  Issue Type: New Feature
> >>Reporter: Kranti Parisa
> >>
> >> We should be able to configure the expirable caches, especially for
> filterCaches. In some cases, the filterCaches are not valid beyond certain
> time (example 5 minutes).
> >> Google collections has MapMaker/CacheBuilder which does allow expiration
> >>
> http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/MapMaker.html
> >>
> http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/cache/CacheBuilder.html
> >> SolrCache, LRUCache etc can be implemented with MapMaker or CacheBuilder
> >
>
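The expire-after-write behavior requested in SOLR-5080 (what Guava's CacheBuilder.expireAfterWrite provides) can be illustrated with a minimal standalone sketch. This is neither Guava nor Solr's SolrCache API -- just the eviction policy in isolation, with an injectable clock so the 5-minute window is testable:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

// Minimal expire-after-write cache sketch. It illustrates the policy that
// Guava's CacheBuilder.expireAfterWrite supplies; it is not Guava and not
// Solr's SolrCache interface.
class ExpireAfterWriteCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long writtenAt;
        Entry(V value, long writtenAt) { this.value = value; this.writtenAt = writtenAt; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock; // injectable for deterministic tests

    ExpireAfterWriteCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    void put(K key, V value) {
        map.put(key, new Entry<>(value, clock.getAsLong()));
    }

    // Lazily evict on read: a stale entry behaves exactly like a miss.
    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (clock.getAsLong() - e.writtenAt >= ttlMillis) {
            map.remove(key);
            return null;
        }
        return e.value;
    }
}
```

With a 5-minute TTL on that one core's filterCache, entries from the previous availability window read as misses and get rebuilt, without opening a new searcher and dropping all the other caches.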
>
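Erick's cache-busting trick above can be made concrete: derive a token from the 5-minute window start and append it to the fq in some form that does not change the result set. The `_window_` marker below is hypothetical, not real Solr syntax -- pick whatever no-op clause your schema allows. All queries in the same window then share one filterCache entry, and older windows' entries simply age out:

```java
// Sketch of the fq "window token" trick. The _window_ marker is a
// hypothetical no-op clause standing in for whatever a real schema allows.
class FqWindowToken {
    static final long WINDOW_MILLIS = 5 * 60 * 1000L; // 5-minute buckets

    // Truncate the current time down to the start of its 5-minute window.
    static long windowStart(long nowMillis) {
        return (nowMillis / WINDOW_MILLIS) * WINDOW_MILLIS;
    }

    // Same fq text within a window => same cache key; a new window => a new
    // key, so the previous window's cached filter is never re-used.
    static String tag(String fq, long nowMillis) {
        return fq + " AND _window_:" + windowStart(nowMillis);
    }
}
```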


[jira] [Updated] (SOLR-4221) Custom sharding

2013-07-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-4221:
-

Attachment: SOLR-4221.patch

OverseerCollectionProcessor test errors fixed

> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.




[jira] [Resolved] (SOLR-4951) randomize merge policy testing in solr

2013-07-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-4951.


   Resolution: Fixed
Fix Version/s: 4.5
   5.0
 Assignee: Hoss Man

r1508521 & r1508552

> randomize merge policy testing in solr
> --
>
> Key: SOLR-4951
> URL: https://issues.apache.org/jira/browse/SOLR-4951
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.0, 4.5
>
> Attachments: SOLR-4951.patch
>
>
> split off from SOLR-4942...
> * add a new RandomMergePolicy that implements MergePolicy by proxying to 
> another instance selected at creation using one of the 
> LuceneTestCase.new...MergePolicy methods
> * updated test configs to refer to this new MergePolicy
> * borrow the "tests.shardhandler.randomSeed" logic in SolrTestCaseJ4 to give 
> our RandomMergePolicy a consistent seed at runtime.
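The consistent-seed idea in the last bullet can be sketched independently of Lucene's MergePolicy types (the names below are illustrative): every construction site reads one shared seed and derives the same random choice from it.

```java
import java.util.Random;

// Sketch of the "consistent random choice" trick from the description above:
// every caller that passes the same seed gets the same randomly selected
// candidate, the way RandomMergePolicy derives one seed per test run.
class SeededChoice {
    static String choose(long seed, String... candidates) {
        return candidates[new Random(seed).nextInt(candidates.length)];
    }
}
```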




[jira] [Commented] (SOLR-4951) randomize merge policy testing in solr

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724154#comment-13724154
 ] 

ASF subversion and git services commented on SOLR-4951:
---

Commit 1508552 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508552 ]

SOLR-4951: Better randomization of MergePolicy in Solr tests (merge r1508521)

> randomize merge policy testing in solr
> --
>
> Key: SOLR-4951
> URL: https://issues.apache.org/jira/browse/SOLR-4951
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-4951.patch
>
>
> split off from SOLR-4942...
> * add a new RandomMergePolicy that implements MergePolicy by proxying to 
> another instance selected at creation using one of the 
> LuceneTestCase.new...MergePolicy methods
> * updated test configs to refer to this new MergePolicy
> * borrow the "tests.shardhandler.randomSeed" logic in SolrTestCaseJ4 to give 
> our RandomMergePolicy a consistent seed at runtime.




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724131#comment-13724131
 ] 

Hoss Man commented on LUCENE-5153:
--

FWIW: a recent thread on this very point...

http://mail-archives.apache.org/mod_mbox/lucene-java-user/201306.mbox/%3cad079bd2-e01e-4e00-b8f6-17594b6c4...@likeness.com%3E

+1 to the wrapReader semantics in the patch.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724124#comment-13724124
 ] 

Robert Muir commented on LUCENE-5153:
-

Sounds good to me! +1 to the patch, though we might want to add a test.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724119#comment-13724119
 ] 

Shai Erera commented on LUCENE-5153:


bq. I think this is the right thing?

I tend to agree. If by wrapping we treat the wrapped object as a black box, 
then we should only allow intervention at its boundaries -- before its char filters 
and after its token stream.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (SOLR-4951) randomize merge policy testing in solr

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724052#comment-13724052
 ] 

ASF subversion and git services commented on SOLR-4951:
---

Commit 1508521 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1508521 ]

SOLR-4951: Better randomization of MergePolicy in Solr tests

> randomize merge policy testing in solr
> --
>
> Key: SOLR-4951
> URL: https://issues.apache.org/jira/browse/SOLR-4951
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-4951.patch
>
>
> split off from SOLR-4942...
> * add a new RandomMergePolicy that implements MergePolicy by proxying to 
> another instance selected at creation using one of the 
> LuceneTestCase.new...MergePolicy methods
> * updated test configs to refer to this new MergePolicy
> * borrow the "tests.shardhandler.randomSeed" logic in SolrTestCaseJ4 to give 
> our RandomMergePolicy a consistent seed at runtime.




[jira] [Updated] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread Han Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Jiang updated LUCENE-3069:
--

Attachment: LUCENE-3069.patch

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-3069.patch
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.




[jira] [Updated] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread Han Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Jiang updated LUCENE-3069:
--

Attachment: (was: LUCENE-5152.patch)

> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-3069.patch
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.




[jira] [Resolved] (SOLR-4024) DebugComponent enhancement to report on what documents are potentially missing fields

2013-07-30 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved SOLR-4024.
---

Resolution: Won't Fix

StatsComponent can do this

> DebugComponent enhancement to report on what documents are potentially 
> missing fields
> -
>
> Key: SOLR-4024
> URL: https://issues.apache.org/jira/browse/SOLR-4024
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
> Fix For: 5.0, 4.5
>
>
> It's often handy when debugging to know when a document is missing a field 
> that is either searched against or in the schema




[jira] [Resolved] (SOLR-2965) Support Landing Pages/Redirects

2013-07-30 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved SOLR-2965.
---

Resolution: Won't Fix

> Support Landing Pages/Redirects
> ---
>
> Key: SOLR-2965
> URL: https://issues.apache.org/jira/browse/SOLR-2965
> Project: Solr
>  Issue Type: New Feature
>  Components: Rules
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
>
> In some cases, it is useful for the search engine to bypass doing any search 
> at all and simply return a result indicating the user should be redirected to 
> a landing page.  Initial thinking on implementation is to add a key/value to 
> the header and return no results.  This could be implemented in the 
> QueryElevationComponent (or it's extension, see SOLR-2580).




[jira] [Updated] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-07-30 Thread Han Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Jiang updated LUCENE-3069:
--

Attachment: LUCENE-5152.patch

The previous design put too much stress on decoding of Outputs. 
This becomes a disaster for wildcard queries: for f*nd, for example, 
we usually have to walk to the last character in the FST, only to 
find that it isn't 'd' and the automaton doesn't accept the term. 
In this case, TempFST actually iterates over all the results 
of f*, decoding all the metadata for them...

So I'm trying another approach; the main idea is to load 
metadata & stats as lazily as possible. 
Here I use an FST as the term index, and leave all the other stuff 
in a single term block. The term index FST holds the relationship 
between term and ord, and in the term block we can maintain a skip list 
to find the related metadata & stats.

It is a little similar to BTTR now, and we can someday control how much 
data to keep memory resident (e.g. keep stats in memory but metadata on 
disk; however, that should be another issue). 
Another nice property is that it naturally supports seek by ord (though 
I actually don't understand where that is used).

Tests pass, but intersect is not implemented yet. 
Perf based on 1M wiki data, comparing non-intersect TempFST and TempFSTOrd:

{noformat}
Task        QPS base  StdDev   QPS comp  StdDev      Pct diff
PKLookup      373.80  (0.0%)    320.30   (0.0%)    -14.3% ( -14% -  -14%)
Fuzzy1         43.82  (0.0%)     47.10   (0.0%)      7.5% (   7% -    7%)
Prefix3       399.62  (0.0%)    433.95   (0.0%)      8.6% (   8% -    8%)
Fuzzy2         14.26  (0.0%)     15.95   (0.0%)     11.9% (  11% -   11%)
Respell        40.69  (0.0%)     46.29   (0.0%)     13.8% (  13% -   13%)
Wildcard       83.44  (0.0%)     96.54   (0.0%)     15.7% (  15% -   15%)
{noformat}

The perf hit on PKLookup should be sane, since I haven't optimized the skip list yet.

I'll update intersect() later, and after that we'll cut over to 
PagedBytes & PackedLongBuffer.


> Lucene should have an entirely memory resident term dictionary
> --
>
> Key: LUCENE-3069
> URL: https://issues.apache.org/jira/browse/LUCENE-3069
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Simon Willnauer
>Assignee: Han Jiang
>  Labels: gsoc2013
> Fix For: 5.0, 4.5
>
> Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
> LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
> LUCENE-5152.patch
>
>
> FST based TermDictionary has been a great improvement yet it still uses a 
> delta codec file for scanning to terms. Some environments have enough memory 
> available to keep the entire FST based term dict in memory. We should add a 
> TermDictionary implementation that encodes all needed information for each 
> term into the FST (custom fst.Output) and builds a FST from the entire term 
> not just the delta.




[jira] [Resolved] (SOLR-2951) Augment QueryElevationComponent results

2013-07-30 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved SOLR-2951.
---

Resolution: Won't Fix

DocTransformers

> Augment QueryElevationComponent results
> ---
>
> Key: SOLR-2951
> URL: https://issues.apache.org/jira/browse/SOLR-2951
> Project: Solr
>  Issue Type: Improvement
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
>
> It would be nice if, in the elevate.xml, you could add fields for the docs 
> that get added to, or modify, the document being returned.




[jira] [Resolved] (LUCENE-1004) Create Lucene-Patch Build capability in Hudson

2013-07-30 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved LUCENE-1004.
-

Resolution: Won't Fix

> Create Lucene-Patch Build capability in Hudson
> --
>
> Key: LUCENE-1004
> URL: https://issues.apache.org/jira/browse/LUCENE-1004
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
>
> This issue will be used to test the creation of a "Lucene-Patch" capability 
> in Hudson that automatically applies submitted Patches (when the Patch 
> Available) flag is checked and then marks the issue with a +/- 1 to the bug 
> so that committers know whether it works or not.




[jira] [Assigned] (SOLR-5091) Clean up Servlets APIs, Kill SolrDispatchFilter, simplify API creation

2013-07-30 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll reassigned SOLR-5091:
-

Assignee: Grant Ingersoll

> Clean up Servlets APIs, Kill SolrDispatchFilter, simplify API creation
> --
>
> Key: SOLR-5091
> URL: https://issues.apache.org/jira/browse/SOLR-5091
> Project: Solr
>  Issue Type: Improvement
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Fix For: 5.0
>
>
> This is an issue to track a series of sub issues related to deprecated and 
> crufty Servlet/REST API code.  I'll create sub-tasks to manage them.
> # Clean up all the old UI stuff (old redirects)
> # Kill/Simplify SolrDispatchFilter -- for instance, why not make the user 
> always have a core name in 5.0?  i.e. /collection1 is the default core
> ## I'd like to move to just using Guice's servlet extension to do this, 
> which, I think will also make it easier to run Solr in other containers (i.e. 
> non-servlet environments) due to the fact that you don't have to tie the 
> request handling logic specifically to a Servlet.
> # Simplify the creation and testing of REST and other APIs via Guice + 
> Restlet, which I've done on a number of occasions.
> ## It might also be possible to move all of the APIs onto Restlet and 
> maintain back compat through a simple Restlet proxy (still exploring this).  
> This would also have the benefit of abstracting the core request processing 
> out of the Servlet context and making that an implementation detail.
> ## Moving to Guice, IMO, will make it easier to isolate and test individual 
> components, since mocks can be injected more easily.
> I am close to a working patch for some of this.  I will post incremental 
> updates/issues as I move forward on this, but I think we should take 5.x as 
> an opportunity to be more agnostic of container and I believe the approach I 
> have in mind will do so.




[jira] [Resolved] (SOLR-2580) Create Components to Support Using Business Rules in Solr

2013-07-30 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved SOLR-2580.
---

Resolution: Won't Fix

> Create Components to Support Using Business Rules in Solr
> -
>
> Key: SOLR-2580
> URL: https://issues.apache.org/jira/browse/SOLR-2580
> Project: Solr
>  Issue Type: New Feature
>  Components: Rules
>Reporter: Tomás Fernández Löbbe
>Assignee: Grant Ingersoll
> Fix For: 5.0, 4.5
>
>
> The goal is to be able to adjust the relevance of documents based on user 
> defined business rules.
> For example, in a e-commerce site, when the user chooses the "shoes" 
> category, we may be interested in boosting products from a certain brand. 
> This can be expressed as a rule in the following way:
> rule "Boost Adidas products when searching shoes"
> when
> $qt : QueryTool()
> TermQuery(term.field=="category", term.text=="shoes")
> then
> $qt.boost("{!lucene}brand:adidas");
> end
> The QueryTool object should be used to alter the main query in a easy way. 
> Even more human-like rules can be written:
> rule "Boost Adidas products when searching shoes"
>  when
> Query has term "shoes" in field "product"
>  then
> Add boost query "{!lucene}brand:adidas"
> end
> These rules are written in a text file in the config directory and can be 
> modified at runtime. Rules will be managed using JBoss Drools: 
> http://www.jboss.org/drools/drools-expert.html
> On a first stage, it will allow to add boost queries or change sorting fields 
> based on the user query, but it could be extended to allow more options.




[jira] [Commented] (LUCENE-5140) Slowdown of the span queries caused by LUCENE-4946

2013-07-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724003#comment-13724003
 ] 

Adrien Grand commented on LUCENE-5140:
--

I will commit the patch as-is soon and have a look at the lucenebench reports 
in the next days if there is no objection.

> Slowdown of the span queries caused by LUCENE-4946
> --
>
> Key: LUCENE-5140
> URL: https://issues.apache.org/jira/browse/LUCENE-5140
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-5140.patch
>
>
> [~romseygeek] noticed that span queries have been slower since LUCENE-4946 
> got committed.
> http://people.apache.org/~mikemccand/lucenebench/SpanNear.html




[jira] [Created] (SOLR-5091) Clean up Servlets APIs, Kill SolrDispatchFilter, simplify API creation

2013-07-30 Thread Grant Ingersoll (JIRA)
Grant Ingersoll created SOLR-5091:
-

 Summary: Clean up Servlets APIs, Kill SolrDispatchFilter, simplify 
API creation
 Key: SOLR-5091
 URL: https://issues.apache.org/jira/browse/SOLR-5091
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
 Fix For: 5.0


This is an issue to track a series of sub issues related to deprecated and 
crufty Servlet/REST API code.  I'll create sub-tasks to manage them.

# Clean up all the old UI stuff (old redirects)
# Kill/Simplify SolrDispatchFilter -- for instance, why not make the user 
always have a core name in 5.0?  i.e. /collection1 is the default core
## I'd like to move to just using Guice's servlet extension to do this, which, 
I think, will also make it easier to run Solr in other containers (i.e. 
non-servlet environments), since you don't have to tie the request-handling 
logic specifically to a Servlet.
# Simplify the creation and testing of REST and other APIs via Guice + Restlet, 
which I've done on a number of occasions.
## It might also be possible to move all of the APIs onto Restlet and maintain 
back compat through a simple restlet proxy (still exploring this).  This would 
also have the benefit of abstracting the core request processing out of the 
Servlet context and making that an implementation detail.
## Moving to Guice, IMO, will make it easier to isolate and test individual 
components by making it easier to inject mocks.

I am close to a working patch for some of this.  I will post incremental 
updates/issues as I move forward, but I think we should take 5.x as an 
opportunity to be more container-agnostic, and I believe the approach I have 
in mind will do so.




[jira] [Commented] (SOLR-5081) Highly parallel document insertion hangs SolrCloud

2013-07-30 Thread Mike Schrag (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723996#comment-13723996
 ] 

Mike Schrag commented on SOLR-5081:
---

I'll kill it again today and grab traces from a few of the nodes.

> Highly parallel document insertion hangs SolrCloud
> --
>
> Key: SOLR-5081
> URL: https://issues.apache.org/jira/browse/SOLR-5081
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3.1
>Reporter: Mike Schrag
> Attachments: threads.txt
>
>
> If I do a highly parallel document load using a Hadoop cluster into an 
> 18-node SolrCloud cluster, I can deadlock Solr every time.
> The ulimits on the nodes are:
> core file size  (blocks, -c) 0
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 1031181
> max locked memory   (kbytes, -l) unlimited
> max memory size (kbytes, -m) unlimited
> open files  (-n) 32768
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 10240
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 515590
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
> The open file count is only around 4000 when this happens.
> If I bounce all the servers, things start working again, which makes me think 
> this is Solr and not ZK.
> I'll attach the stack trace from one of the servers.




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723992#comment-13723992
 ] 

Adrien Grand commented on LUCENE-5153:
--

I think this is the right thing. On the contrary, if wrapReader inserted char 
filters at the end of the charfilter chain, the behavior of the wrapped 
analyzer would be altered (it would allow inserting something between the 
wrapped analyzer's CharFilter chain and its Tokenizer).
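
The ordering question can be sketched without any Lucene classes, modeling 
each CharFilter as a plain string transform (hypothetical names; composition 
order mirrors how initReader nests Readers around the raw input):

```java
import java.util.function.UnaryOperator;

// Model each CharFilter as a string transform; composition order mirrors
// how initReader() nests Readers around the raw input.
class WrapOrderDemo {
    // The wrapped analyzer's own char filter: removes every 'x'.
    static final UnaryOperator<String> wrappedFilter = s -> s.replace("x", "");
    // The wrapper's wrapReader contribution: maps 'a' to 'x'.
    static final UnaryOperator<String> wrapperFilter = s -> s.replace('a', 'x');

    // Patch behavior: wrapReader runs first, on the raw input, and the
    // wrapped analyzer's chain then sees the result -- its own filtering
    // still applies to everything it receives.
    static String insertAtBeginning(String raw) {
        return wrappedFilter.apply(wrapperFilter.apply(raw));
    }

    // The alternative: appending after the wrapped chain, i.e. between
    // its last CharFilter and its Tokenizer.
    static String insertAtEnd(String raw) {
        return wrapperFilter.apply(wrappedFilter.apply(raw));
    }
}
```

With raw input "ax", insertAtBeginning yields "" (the wrapper produces "xx" 
and the wrapped filter strips both), while insertAtEnd yields "x": the 
tokenizer would now see an 'x' the wrapped analyzer's own filter was supposed 
to remove, i.e. its behavior is altered.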

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (SOLR-5087) CoreAdminHandler.handleMergeAction generating NullPointerException

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723961#comment-13723961
 ] 

ASF subversion and git services commented on SOLR-5087:
---

Commit 1508494 from [~erickoerickson] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508494 ]

Added entry for SOLR-5087

> CoreAdminHandler.handleMergeAction generating NullPointerException
> --
>
> Key: SOLR-5087
> URL: https://issues.apache.org/jira/browse/SOLR-5087
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Patrick Hunt
>Assignee: Erick Erickson
> Fix For: 5.0, 4.5
>
> Attachments: SOLR-5087.patch
>
>
> CoreAdminHandler.handleMergeAction is generating NullPointerException
> If directoryFactory.get(...) in handleMergeAction throws an exception, the 
> original error is lost, as the finally clause will attempt to clean up and 
> generate an NPE. (Notice that "dirsToBeReleased" is pre-allocated with nulls 
> that are not filled in.)
> {noformat}
> ERROR org.apache.solr.core.SolrCore: java.lang.NullPointerException
> at 
> org.apache.solr.core.CachingDirectoryFactory.release(CachingDirectoryFactory.java:430)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:380)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
> {noformat}
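
The failure mode can be sketched in plain Java (illustrative names, not 
Solr's actual API): the finally clause walks a pre-allocated array whose tail 
slots are still null when acquisition failed partway through, and the NPE 
thrown from finally replaces the original exception; a null check in the 
release loop preserves it.

```java
// Sketch of the reported failure mode: releasing a pre-allocated array
// whose tail is still null when acquisition failed partway through.
class MergeReleaseDemo {
    static void release(Object dir) {
        dir.hashCode(); // dereferences dir: throws NPE if dir is null
    }

    static String merge(boolean failOnSecond, boolean guardNulls) {
        Object[] dirsToBeReleased = new Object[3]; // pre-filled with nulls
        try {
            for (int i = 0; i < dirsToBeReleased.length; i++) {
                if (failOnSecond && i == 1) {
                    throw new IllegalStateException("original error");
                }
                dirsToBeReleased[i] = new Object();
            }
            return "merged";
        } finally {
            for (Object dir : dirsToBeReleased) {
                if (!guardNulls || dir != null) {
                    // unguarded: an NPE thrown here supersedes the
                    // IllegalStateException raised in the try block
                    release(dir);
                }
            }
        }
    }
}
```

Because an exception thrown from a finally block discards the one in flight, 
the caller only ever sees the NPE unless the null slots are skipped.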




[jira] [Resolved] (SOLR-5087) CoreAdminHandler.handleMergeAction generating NullPointerException

2013-07-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-5087.
--

Resolution: Fixed

Thanks Patrick! I forgot CHANGES.txt; I'll add it shortly.

> CoreAdminHandler.handleMergeAction generating NullPointerException
> --
>
> Key: SOLR-5087
> URL: https://issues.apache.org/jira/browse/SOLR-5087
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Patrick Hunt
>Assignee: Erick Erickson
> Fix For: 5.0, 4.5
>
> Attachments: SOLR-5087.patch
>
>
> CoreAdminHandler.handleMergeAction is generating NullPointerException
> If directoryFactory.get(...) in handleMergeAction throws an exception, the 
> original error is lost, as the finally clause will attempt to clean up and 
> generate an NPE. (Notice that "dirsToBeReleased" is pre-allocated with nulls 
> that are not filled in.)
> {noformat}
> ERROR org.apache.solr.core.SolrCore: java.lang.NullPointerException
> at 
> org.apache.solr.core.CachingDirectoryFactory.release(CachingDirectoryFactory.java:430)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:380)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
> {noformat}




[jira] [Commented] (SOLR-5087) CoreAdminHandler.handleMergeAction generating NullPointerException

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723958#comment-13723958
 ] 

ASF subversion and git services commented on SOLR-5087:
---

Commit 1508491 from [~erickoerickson] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1508491 ]

SOLR-5087, CoreAdminHandler.handleMergeAction generating NullPointerException. 
Thanks Patrick

> CoreAdminHandler.handleMergeAction generating NullPointerException
> --
>
> Key: SOLR-5087
> URL: https://issues.apache.org/jira/browse/SOLR-5087
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Patrick Hunt
>Assignee: Erick Erickson
> Fix For: 5.0, 4.5
>
> Attachments: SOLR-5087.patch
>
>
> CoreAdminHandler.handleMergeAction is generating NullPointerException
> If directoryFactory.get(...) in handleMergeAction throws an exception, the 
> original error is lost, as the finally clause will attempt to clean up and 
> generate an NPE. (Notice that "dirsToBeReleased" is pre-allocated with nulls 
> that are not filled in.)
> {noformat}
> ERROR org.apache.solr.core.SolrCore: java.lang.NullPointerException
> at 
> org.apache.solr.core.CachingDirectoryFactory.release(CachingDirectoryFactory.java:430)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:380)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
> {noformat}




[jira] [Closed] (SOLR-4696) All threads become blocked resulting in hang when bulk adding

2013-07-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson closed SOLR-4696.


Resolution: Duplicate

I'm pretty sure this is a duplicate of SOLR-5081; we can re-open if not.

> All threads become blocked resulting in hang when bulk adding
> -
>
> Key: SOLR-4696
> URL: https://issues.apache.org/jira/browse/SOLR-4696
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.1, 4.2, 4.2.1
> Environment: Ubuntu 12.04.2 LTS 3.5.0-27-generic
> Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
> KVM, 4xCPU, 5GB RAM, 4GB heap.
> 4 cores, 2 shards, 2 nodes, tomcat7
>Reporter: matt knecht
>  Labels: hang
> Attachments: screenshot-1.jpg, solrconfig.xml, solr.jstack.1, 
> solr.jstack.2
>
>
> During a bulk load, after about 150,000 documents have been added, thread 
> usage spikes and Solr no longer processes any documents.  Each additional 
> document added results in a new thread until the pool is exhausted.




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723941#comment-13723941
 ] 

Robert Muir commented on LUCENE-5153:
-

One odd thing is that wrapComponents adds to the end of the TokenStream chain, 
but with this patch wrapReader inserts at the beginning of the charfilter 
chain.

Not saying it's wrong, but is it the right thing?

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723936#comment-13723936
 ] 

Shai Erera commented on LUCENE-5153:


If there are no objections, I'll commit it.

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.




[jira] [Commented] (SOLR-5087) CoreAdminHandler.handleMergeAction generating NullPointerException

2013-07-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723926#comment-13723926
 ] 

ASF subversion and git services commented on SOLR-5087:
---

Commit 1508476 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1508476 ]

SOLR-5087, CoreAdminHandler.handleMergeAction generating NullPointerException. 
Thanks Patrick

> CoreAdminHandler.handleMergeAction generating NullPointerException
> --
>
> Key: SOLR-5087
> URL: https://issues.apache.org/jira/browse/SOLR-5087
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Patrick Hunt
>Assignee: Erick Erickson
> Fix For: 5.0, 4.5
>
> Attachments: SOLR-5087.patch
>
>
> CoreAdminHandler.handleMergeAction is generating NullPointerException
> If directoryFactory.get(...) in handleMergeAction throws an exception, the 
> original error is lost, as the finally clause will attempt to clean up and 
> generate an NPE. (Notice that "dirsToBeReleased" is pre-allocated with nulls 
> that are not filled in.)
> {noformat}
> ERROR org.apache.solr.core.SolrCore: java.lang.NullPointerException
> at 
> org.apache.solr.core.CachingDirectoryFactory.release(CachingDirectoryFactory.java:430)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:380)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
> {noformat}




[jira] [Commented] (LUCENE-5152) Lucene FST is not immutable

2013-07-30 Thread Han Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723922#comment-13723922
 ] 

Han Jiang commented on LUCENE-5152:
---

bq. So its really just a BytesRef bug right? 
+1, so tricky

> Lucene FST is not immutable
> --
>
> Key: LUCENE-5152
> URL: https://issues.apache.org/jira/browse/LUCENE-5152
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/FSTs
>Affects Versions: 4.4
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, 4.5
>
> Attachments: LUCENE-5152.patch
>
>
> A spin-off from LUCENE-5120, where the analyzing suggester modified an 
> output (BytesRef) returned from an FST, which caused side effects in later 
> execution. 
> I added an assertion into the FST that checks if a cached root arc is 
> modified, and in fact this happens, for instance, in our MemoryPostingsFormat, 
> and I bet we'll find more places. We need to think about how to make this 
> less trappy since it can cause bugs that are super hard to find.
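
The trap can be sketched without any Lucene classes (hypothetical names): a 
lookup that returns its internal cached buffer lets one caller's mutation 
silently corrupt what every later caller sees, much like mutating the 
BytesRef output of a cached root arc; a defensive copy (which is what 
BytesRef.deepCopyOf provides) avoids it.

```java
import java.util.Arrays;

// Sketch of the aliasing trap: a lookup that hands out its internal
// cached byte[] lets one caller's mutation corrupt what every later
// caller sees -- analogous to a cached FST root arc's output.
class CachedOutputDemo {
    private final byte[] cachedOutput = {1, 2, 3};

    byte[] lookupUnsafe() {
        return cachedOutput; // aliases internal state
    }

    byte[] lookupSafe() {
        // analogous to BytesRef.deepCopyOf: callers get their own buffer
        return Arrays.copyOf(cachedOutput, cachedOutput.length);
    }
}
```

With lookupUnsafe, a caller writing into the returned array changes the 
result of every subsequent lookup; with lookupSafe, the cache stays intact.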




[jira] [Updated] (LUCENE-5153) Allow wrapping Reader from AnalyzerWrapper

2013-07-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5153:
---

Attachment: LUCENE-5153.patch

simple patch

> Allow wrapping Reader from AnalyzerWrapper
> --
>
> Key: LUCENE-5153
> URL: https://issues.apache.org/jira/browse/LUCENE-5153
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5153.patch
>
>
> It can be useful to allow AnalyzerWrapper extensions to wrap the Reader given 
> to initReader, e.g. with a CharFilter.



