Re: Next release...

2015-11-13 Thread Adrien Grand
On Thu, Nov 12, 2015 at 18:21, Erick Erickson wrote:

> Are there any tentative (not firm commitments) time frames for 6.0?
>

I don't think we talked about it before. Maybe we could aim for something
like February 2016. This would be one year after 5.0, which I think is a
nice trade-off between us being able to drop support for old index formats
and users not feeling too much pressure to upgrade all the time. It also
means that features that are almost ready like the new dimensional format
and cdcr wouldn't have to wait for too long before going into the hands of
our users.


[jira] [Commented] (SOLR-8287) TrieLongField and TrieDoubleField should override toNativeType

2015-11-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003911#comment-15003911
 ] 

Ishan Chattopadhyaya commented on SOLR-8287:


Thanks for looking at the patch, [~cpoerschke]. I think that is a good point; 
it would defend against any potential precision loss in the future.

> TrieLongField and TrieDoubleField should override toNativeType
> --
>
> Key: SOLR-8287
> URL: https://issues.apache.org/jira/browse/SOLR-8287
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8287.patch
>
>
> Although the TrieIntField and TrieFloatField override the toNativeType() 
> method, the TrieLongField and TrieDoubleField do not do so. 
> This method is called during atomic updates by the AtomicUpdateDocumentMerger 
> for the "set" operation.






[jira] [Commented] (SOLR-8287) TrieLongField and TrieDoubleField should override toNativeType

2015-11-13 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003887#comment-15003887
 ] 

Christine Poerschke commented on SOLR-8287:
---

The {{TrieDoubleField}} change is similar to what {{TrieFloatField}} currently 
has; that looks good to me.

The {{TrieLongField}} change is similar to what {{TrieIntField}} currently has, 
but I'm wondering:
* {{TrieIntField.toNativeType}} attempts {{Float.parseFloat}} if the 
{{Integer.parseInt}} attempt throws a {{NumberFormatException}}
* in the current patch, {{TrieLongField.toNativeType}} attempts 
{{Float.parseFloat}} if the {{Long.parseLong}} attempt throws a 
{{NumberFormatException}}; might {{Double.parseDouble}} be attempted instead?
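
For context, a minimal sketch of what such an override might look like 
(illustrative only, not the attached patch):

{code}
// Illustrative sketch only; not the attached patch.
@Override
public Object toNativeType(Object val) {
  if (val == null) return null;
  if (val instanceof Number) return ((Number) val).longValue();
  if (val instanceof String) {
    try {
      return Long.parseLong((String) val);
    } catch (NumberFormatException e) {
      // The open question above: falling back to Double.parseDouble here
      // (rather than Float.parseFloat) loses less precision for long values.
      return (long) Double.parseDouble((String) val);
    }
  }
  return super.toNativeType(val);
}
{code}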

> TrieLongField and TrieDoubleField should override toNativeType
> --
>
> Key: SOLR-8287
> URL: https://issues.apache.org/jira/browse/SOLR-8287
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8287.patch
>
>
> Although the TrieIntField and TrieFloatField override the toNativeType() 
> method, the TrieLongField and TrieDoubleField do not do so. 
> This method is called during atomic updates by the AtomicUpdateDocumentMerger 
> for the "set" operation.






[jira] [Updated] (SOLR-8276) Atomic updates & RTG don't work with non-stored docvalues

2015-11-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8276:
---
Description: Currently, for atomic updates, the non-stored docvalues fields 
are neither (a) carried forward to updated document, nor (b) do operations like 
"inc" work on them. Also, RTG of documents containing such fields doesn't work 
if the document is fetched from the index.  (was: Currently, for atomic 
updates, the non-stored docvalues fields are neither (a) carried forward to 
updated document, nor (b) do operations like "inc" work on them.)
Summary: Atomic updates & RTG don't work with non-stored docvalues  
(was: Atomic updates with non-stored docvalues don't work)

> Atomic updates & RTG don't work with non-stored docvalues
> -
>
> Key: SOLR-8276
> URL: https://issues.apache.org/jira/browse/SOLR-8276
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8276.patch, SOLR-8276.patch
>
>
> Currently, for atomic updates, the non-stored docvalues fields are neither 
> (a) carried forward to updated document, nor (b) do operations like "inc" 
> work on them. Also, RTG of documents containing such fields doesn't work if 
> the document is fetched from the index.






[jira] [Updated] (SOLR-8276) Atomic updates with non-stored docvalues don't work

2015-11-13 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8276:
---
Attachment: SOLR-8276.patch

Better patch updated.

> Atomic updates with non-stored docvalues don't work
> ---
>
> Key: SOLR-8276
> URL: https://issues.apache.org/jira/browse/SOLR-8276
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8276.patch, SOLR-8276.patch
>
>
> Currently, for atomic updates, the non-stored docvalues fields are neither 
> (a) carried forward to updated document, nor (b) do operations like "inc" 
> work on them.






[jira] [Commented] (SOLR-8275) Unclear error message during recovery

2015-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004088#comment-15004088
 ] 

Mark Miller commented on SOLR-8275:
---

If you don't look at the code and understand how all that works and what the 
params mean, it's really useless info anyway. The best you can do is copy and 
paste it to someone who does understand.

> Unclear error message during recovery
> -
>
> Key: SOLR-8275
> URL: https://issues.apache.org/jira/browse/SOLR-8275
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Attachments: SOLR-8275.patch, SOLR-8275.patch
>
>
> A SolrCloud install got into a bad state (mostly around LeaderElection, I 
> think) and during recovery one of the nodes was giving me this message:
> {noformat}
> 2015-11-09 13:00:56,158 ERROR org.apache.solr.cloud.RecoveryStrategy: Error 
> while trying to recover. 
> core=c1_shard1_replica4:java.util.concurrent.ExecutionException: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: I was 
> asked to wait on state recovering for shard1 in c1 on node2:8983_solr but I 
> still do not see the requested state. I see state: recovering live:true 
> leader from ZK: http://node1:8983/solr/c1_shard1_replica2/
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:599)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:370)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:236)
> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: I was 
> asked to wait on state recovering for shard1 in c1 on node2:8983_solr but I 
> still do not see the requested state. I see state: recovering live:true 
> leader from ZK: http://node1:8983/solr/c1_shard1_replica2/
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:621)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer$1.call(HttpSolrServer.java:292)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer$1.call(HttpSolrServer.java:288)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The crux of this message: "I was asked to wait on state recovering for shard1 
> in c1 on node2:8983_solr but I still do not see the requested state. I see 
> state: recovering" seems contradictory. At a minimum, we should improve this 
> error, but there might also be some erroneous logic going on.






[jira] [Commented] (SOLR-6168) enhance collapse QParser so that "group head" documents can be selected by more complex sort options

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004187#comment-15004187
 ] 

ASF subversion and git services commented on SOLR-6168:
---

Commit 1714234 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714234 ]

SOLR-6168: Add a 'sort' local param to the collapse QParser to support using 
complex sort options to select the representative doc for each collapsed group 
(merge 1714133)
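
As a usage illustration (not part of this commit), the new local param can be 
exercised from SolrJ roughly as follows; the collection and field names here 
are made up for the example:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CollapseSortExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrQuery q = new SolrQuery("*:*");
      // Select the group head by a multi-clause sort instead of the min/max local params.
      q.addFilterQuery("{!collapse field=group_s sort='score desc, price_f asc'}");
      QueryResponse rsp = client.query(q);
      System.out.println("hits: " + rsp.getResults().getNumFound());
    }
  }
}
{code}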

> enhance collapse QParser so that "group head" documents can be selected by 
> more complex sort options
> 
>
> Key: SOLR-6168
> URL: https://issues.apache.org/jira/browse/SOLR-6168
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.7.1, 4.8.1
>Reporter: Umesh Prasad
>Assignee: Joel Bernstein
> Attachments: CollapsingQParserPlugin-6168.patch.1-1stcut, 
> SOLR-6168-group-head-inconsistent-with-sort.patch, SOLR-6168.patch, 
> SOLR-6168.patch, SOLR-6168.patch, SOLR-6168.patch, SOLR-6168.patch
>
>
> The fundamental goal of this issue is to add additional support to the 
> CollapseQParser so that as an alternative to the existing min/max localparam 
> options, more robust sort syntax can be used to sort on multiple criteria 
> when selecting the "group head" documents used to represent each collapsed 
> group.
> Since support for arbitrary, multi-clause sorting is almost certainly going 
> to require more RAM than the existing min/max functionality, this new 
> functionality should be in addition to the existing min/max localparam 
> implementation, not a replacement of it.
> (NOTE: early comments made in this jira may be confusing in historical 
> context due to the way this issue was originally filed as a bug report)






[jira] [Commented] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004069#comment-15004069
 ] 

Mark Miller commented on SOLR-8279:
---

I'm going to do some more work on this, but committing what I have for now.

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch
>
>







[jira] [Commented] (SOLR-8287) TrieLongField and TrieDoubleField should override toNativeType

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004158#comment-15004158
 ] 

ASF subversion and git services commented on SOLR-8287:
---

Commit 1714226 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1714226 ]

SOLR-8287: TrieDoubleField and TrieLongField now override toNativeType

> TrieLongField and TrieDoubleField should override toNativeType
> --
>
> Key: SOLR-8287
> URL: https://issues.apache.org/jira/browse/SOLR-8287
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Christine Poerschke
> Attachments: SOLR-8287.patch, SOLR-8287.patch
>
>
> Although the TrieIntField and TrieFloatField override the toNativeType() 
> method, the TrieLongField and TrieDoubleField do not do so. 
> This method is called during atomic updates by the AtomicUpdateDocumentMerger 
> for the "set" operation.






[jira] [Commented] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004246#comment-15004246
 ] 

Mark Miller commented on SOLR-8288:
---

bq. This null check is not necessary.

I was originally trying to use leaderCoreNodeName for the identity check, in 
which case you do need the null check. I switched to using the core url for the 
check, but I have left this null check in - I find it makes it much more 
explicit that that variable can be null here, rather than just counting on the 
fact that the equals method will handle the null how we want.

bq. Is it worth adding a test where a node tries to put itself into recovery?

If you can add a test for this, it would be nice to have, but I don't see a 
good way to do it without some invasive, ugly code. It should probably spin out 
into its own JIRA unless something can be done quickly.
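
To make the intent concrete, the guard being discussed amounts to something 
like the following sketch (not the actual patch; the real code pulls both core 
URLs from the cluster state):

{code}
// Sketch only: decide whether it is safe to ask ZK to put a replica into
// leader-initiated recovery after a failed forwarded request.
static boolean mayRequestLIR(String failedReplicaCoreUrl, String myCoreUrl) {
  // The explicit null check is deliberate: it documents that the URL can be
  // null at this point, instead of relying on equals() tolerating null.
  if (failedReplicaCoreUrl == null || myCoreUrl == null) {
    return false;
  }
  // Never try to put ourselves (the leader) into LIR.
  return !failedReplicaCoreUrl.equals(myCoreUrl);
}
{code}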

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.






[jira] [Commented] (SOLR-8275) Unclear error message during recovery

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004074#comment-15004074
 ] 

Mike Drob commented on SOLR-8275:
-

bq. Yeah, but that is easy enough to deduce.
It's easy enough when you are looking at the code and are familiar with the 
logic. Telling people to look at the code to figure out why something failed is 
pretty awful from a usability perspective, though.

> Unclear error message during recovery
> -
>
> Key: SOLR-8275
> URL: https://issues.apache.org/jira/browse/SOLR-8275
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Attachments: SOLR-8275.patch, SOLR-8275.patch
>
>
> A SolrCloud install got into a bad state (mostly around LeaderElection, I 
> think) and during recovery one of the nodes was giving me this message:
> {noformat}
> 2015-11-09 13:00:56,158 ERROR org.apache.solr.cloud.RecoveryStrategy: Error 
> while trying to recover. 
> core=c1_shard1_replica4:java.util.concurrent.ExecutionException: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: I was 
> asked to wait on state recovering for shard1 in c1 on node2:8983_solr but I 
> still do not see the requested state. I see state: recovering live:true 
> leader from ZK: http://node1:8983/solr/c1_shard1_replica2/
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:599)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:370)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:236)
> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: I was 
> asked to wait on state recovering for shard1 in c1 on node2:8983_solr but I 
> still do not see the requested state. I see state: recovering live:true 
> leader from ZK: http://node1:8983/solr/c1_shard1_replica2/
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:621)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer$1.call(HttpSolrServer.java:292)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer$1.call(HttpSolrServer.java:288)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The crux of this message: "I was asked to wait on state recovering for shard1 
> in c1 on node2:8983_solr but I still do not see the requested state. I see 
> state: recovering" seems contradictory. At a minimum, we should improve this 
> error, but there might also be some erroneous logic going on.






[jira] [Commented] (SOLR-8276) Atomic updates & RTG don't work with non-stored docvalues

2015-11-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004292#comment-15004292
 ] 

Yonik Seeley commented on SOLR-8276:


bq. I couldn't test for multivalued non-stored docvalues fields, since I got 
the following exception during the field() function query on a multivalued 
field: can not use FieldCache on multivalued field: intdvMulti. Am I missing 
something obvious?

Since function queries don't support multi-valued fields, perhaps we should go 
through the docValues API instead?
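
For the multi-valued case, a minimal sketch of going through the docValues API 
directly (Lucene 5.x API; the field name and doc id are placeholders):

{code}
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.util.BytesRef;

class MultiValuedDocValuesSketch {
  // Print all values of a multi-valued docValues field for one document.
  static void printValues(LeafReader reader, String field, int docId) throws Exception {
    SortedSetDocValues dv = reader.getSortedSetDocValues(field);
    if (dv == null) return;         // field has no docValues in this segment
    dv.setDocument(docId);          // position the iterator on the document
    long ord;
    while ((ord = dv.nextOrd()) != SortedSetDocValues.NO_MORE_ORDS) {
      BytesRef value = dv.lookupOrd(ord);
      System.out.println(field + " -> " + value.utf8ToString());
    }
  }
}
{code}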

> Atomic updates & RTG don't work with non-stored docvalues
> -
>
> Key: SOLR-8276
> URL: https://issues.apache.org/jira/browse/SOLR-8276
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8276.patch, SOLR-8276.patch
>
>
> Currently, for atomic updates, the non-stored docvalues fields are neither 
> (a) carried forward to updated document, nor (b) do operations like "inc" 
> work on them. Also, RTG of documents containing such fields doesn't return 
> those fields if the document is fetched from the index.






Re: Next release...

2015-11-13 Thread david.w.smi...@gmail.com
+1 to not sooner than February 2016 (one year later)

On Fri, Nov 13, 2015 at 6:25 AM Adrien Grand  wrote:

> On Thu, Nov 12, 2015 at 18:21, Erick Erickson wrote:
>
>> Are there any tentative (not firm commitments) time frames for 6.0?
>>
>
> I don't think we talked about it before. Maybe we could aim for something
> like February 2016. This would be one year after 5.0, which I think is a
> nice trade-off between us being able to drop support for old index formats
> and users not feeling too much pressure to upgrade all the time. It also
> means that features that are almost ready like the new dimensional format
> and cdcr wouldn't have to wait for too long before going into the hands of
> our users.
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-4905) Allow fromIndex parameter to JoinQParserPlugin to refer to a single-sharded collection that has a replica on all nodes

2015-11-13 Thread Paul Blanchaert (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004226#comment-15004226
 ] 

Paul Blanchaert commented on SOLR-4905:
---

Hi Mikhail,
I wanted to raise the issue today, but didn't find the time before the weekend.
Also found an issue while debugging; will report next Monday...
Thanks

> Allow fromIndex parameter to JoinQParserPlugin to refer to a single-sharded 
> collection that has a replica on all nodes
> --
>
> Key: SOLR-4905
> URL: https://issues.apache.org/jira/browse/SOLR-4905
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Philip K. Warren
>Assignee: Timothy Potter
> Fix For: 5.1, Trunk
>
> Attachments: SOLR-4905.patch, SOLR-4905.patch, patch.txt
>
>
> Using a non-SolrCloud setup, it is possible to perform cross core joins 
> (http://wiki.apache.org/solr/Join). When testing with SolrCloud, however, 
> neither the collection name, alias name (we have created aliases to SolrCloud 
> collections), or the automatically generated core name (i.e. 
> _shard1_replica1) work as the fromIndex parameter for a 
> cross-core join.






EOF contract in TransactionLog

2015-11-13 Thread Renaud Delbru

Dear all,

In one of the CDCR unit tests, we stumbled upon the following issue:

 [junit4]   2> java.io.EOFException
 [junit4]   2>at 
org.apache.solr.common.util.FastInputStream.readByte(FastInputStream.java:208)
 [junit4]   2>at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:198)
 [junit4]   2>at 
org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:641)
 [junit4]   2>at 
org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:154)


According to the comment on the LogReader#next() method, the contract is to 
return null when EOF is reached. However, this does not seem to be respected, 
as the stack trace above shows. Is this a bug for which I should open an issue? 
Or is it just the method comment that is out of date (and should probably be 
fixed as well)?
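
For reference, the CDCR reader relies on the documented contract, i.e. given a 
LogReader it loops roughly like this (simplified; process() is a placeholder):

    Object entry;
    // The javadoc says next() returns null at EOF, so this loop is expected to
    // terminate cleanly; instead the EOFException above escapes from
    // JavaBinCodec.readVal() before next() can return null.
    while ((entry = reader.next()) != null) {
        process(entry);
    }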


Thanks
--
Renaud Delbru


[jira] [Commented] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004209#comment-15004209
 ] 

Mike Drob commented on SOLR-8288:
-

bq. +if (leaderCoreNodeName != null && 
cloudDesc.getCoreNodeName().equals(leaderCoreNodeName) // we are still same 
leader
This null check is not necessary.


Is it worth adding a test where a node tries to put itself into recovery? Not 
sure if that's something that is actually possible to stub out.

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.






[jira] [Commented] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004065#comment-15004065
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1714218 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714218 ]

SOLR-8279: Add a new SolrCloud test that stops and starts the cluster while 
indexing data.

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch
>
>







[jira] [Commented] (SOLR-3191) field exclusion from fl

2015-11-13 Thread Scott Stults (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004086#comment-15004086
 ] 

Scott Stults commented on SOLR-3191:


Right. The reason I ask is that aliases would be a decent alternative for folks 
who need field names like "1_product", should we start enforcing the recommended 
field name restrictions in the schema. But there are other places in the response 
where we need to render a field name, like in facets and highlighting (I don't 
know the details of how that's done), so allowing funky aliases could have side 
effects.

There are other tickets specifically about leading digits in field names 
(SOLR-7070, SOLR-3407), so I like [~ehatcher]'s suggestion of keeping this 
focused on the exclusion list aspect and addressing field name 
enforcement/warning elsewhere.

> field exclusion from fl
> ---
>
> Key: SOLR-3191
> URL: https://issues.apache.org/jira/browse/SOLR-3191
> Project: Solr
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
> Attachments: SOLR-3191.patch, SOLR-3191.patch, SOLR-3191.patch, 
> SOLR-3191.patch
>
>
> I think it would be useful to add a way to exclude field from the Solr 
> response. If I have for example 100 stored fields and I want to return all of 
> them but one, it would be handy to list just the field I want to exclude 
> instead of the 99 fields for inclusion through fl.






[jira] [Updated] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-11-13 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8230:
--
Attachment: SOLR-8230.patch

> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.






[jira] [Commented] (SOLR-8287) TrieLongField and TrieDoubleField should override toNativeType

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004342#comment-15004342
 ] 

ASF subversion and git services commented on SOLR-8287:
---

Commit 1714243 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714243 ]

SOLR-8287: TrieDoubleField and TrieLongField now override toNativeType (merge 
in revision 1714226 from trunk)

> TrieLongField and TrieDoubleField should override toNativeType
> --
>
> Key: SOLR-8287
> URL: https://issues.apache.org/jira/browse/SOLR-8287
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Christine Poerschke
> Attachments: SOLR-8287.patch, SOLR-8287.patch
>
>
> Although the TrieIntField and TrieFloatField override the toNativeType() 
> method, the TrieLongField and TrieDoubleField do not do so. 
> This method is called during atomic updates by the AtomicUpdateDocumentMerger 
> for the "set" operation.






[jira] [Updated] (SOLR-4021) JavaBinCodec has poor default behavior for unrecognized classes of objects

2015-11-13 Thread Gregg Donovan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregg Donovan updated SOLR-4021:

Attachment: SOLR-4021.patch

> JavaBinCodec has poor default behavior for unrecognized classes of objects
> --
>
> Key: SOLR-4021
> URL: https://issues.apache.org/jira/browse/SOLR-4021
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.0
>Reporter: Hoss Man
> Attachments: SOLR-4021.patch
>
>
> It seems that JavaBinCodec has inconsistent serialize/deserialize behavior 
> when dealing with objects of classes that it doesn't recognize. In 
> particular, unrecognized objects seem to be serialized with the full 
> classname prepended to the "toString()" value, and then that resulting 
> concatenated string is left as is during deserialization.
> as a concrete example: serializing & deserializing a BigDecimal value results 
> in a final value like "java.math.BigDecimal:1848.66" even though for most 
> users the simple toString() value would have worked as intended.






[jira] [Resolved] (SOLR-8287) TrieLongField and TrieDoubleField should override toNativeType

2015-11-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8287.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.4

Thanks Ishan!

> TrieLongField and TrieDoubleField should override toNativeType
> --
>
> Key: SOLR-8287
> URL: https://issues.apache.org/jira/browse/SOLR-8287
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Christine Poerschke
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8287.patch, SOLR-8287.patch
>
>
> Although the TrieIntField and TrieFloatField override the toNativeType() 
> method, the TrieLongField and TrieDoubleField do not do so. 
> This method is called during atomic updates by the AtomicUpdateDocumentMerger 
> for the "set" operation.






[jira] [Assigned] (SOLR-8176) Model distributed graph traversals with Streaming Expressions

2015-11-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8176:


Assignee: Joel Bernstein

> Model distributed graph traversals with Streaming Expressions
> -
>
> Key: SOLR-8176
> URL: https://issues.apache.org/jira/browse/SOLR-8176
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrCloud, SolrJ
>Affects Versions: Trunk
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>  Labels: Graph
> Fix For: Trunk
>
>
> I think it would be useful to model a few *distributed graph traversal* use 
> cases with Solr's *Streaming Expression* language. This ticket will explore 
> different approaches with a goal of implementing two or three common graph 
> traversal use cases.






[jira] [Updated] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-13 Thread Renaud Delbru (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renaud Delbru updated SOLR-8263:

Attachment: SOLR-8263-trunk-2.patch

A new version of the patch (this replaces the previous one), which includes a 
fix related to the write lock.
In the previous patch, the write lock was accidentally dropped while 
re-initialising the update log with the new set of tlog files (the init method 
was creating a new instance of the VersionInfo). As a consequence, there was a 
small window during which updates were lost (a batch of documents was missed in 
1 out of 10 runs). The fix introduces a new init method that preserves the 
original VersionInfo instance and therefore preserves the write lock.
I have run the test 50 times without seeing the issue anymore.
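
To illustrate the race (schematic only, simplified names; this is not the Solr 
code), recreating the lock-holding object during re-initialisation silently 
replaces the write lock that in-flight updates synchronise on:

{code}
// Schematic illustration, not the Solr code.
class UpdateLogSketch {
  private VersionInfoSketch versionInfo = new VersionInfoSketch();

  // Previous patch (broken): a fresh VersionInfoSketch means a fresh write lock,
  // so updates blocked on the old lock can interleave with the re-initialisation
  // and a batch can be silently dropped.
  void reinitBroken(java.util.List<String> newTlogs) {
    versionInfo = new VersionInfoSketch();
  }

  // New init method (fixed): keep the original instance, and with it the same
  // write lock, while only the tlog file set is swapped.
  void reinitFixed(java.util.List<String> newTlogs) {
    versionInfo.reset(newTlogs);
  }
}

class VersionInfoSketch {
  final java.util.concurrent.locks.ReadWriteLock lock =
      new java.util.concurrent.locks.ReentrantReadWriteLock();
  void reset(java.util.List<String> tlogs) {
    // re-read version state while callers keep using the same lock instance
  }
}
{code}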

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follows:
> 1) Fetch the tlog files from the master
> 2) reset the update log before switching the tlog directory
> 3) switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.






[jira] [Created] (SOLR-8289) Add a test that verifies a core cannot put itself into LIR

2015-11-13 Thread Mike Drob (JIRA)
Mike Drob created SOLR-8289:
---

 Summary: Add a test that verifies a core cannot put itself into LIR
 Key: SOLR-8289
 URL: https://issues.apache.org/jira/browse/SOLR-8289
 Project: Solr
  Issue Type: Test
  Components: SolrCloud
Reporter: Mike Drob


A core should not be able to put itself into LIR - we already have some 
defensive checks around this, but it would be good to verify that our checks 
are sufficient with a test case.






[jira] [Updated] (SOLR-4021) JavaBinCodec has poor default behavior for unrecognized classes of objects

2015-11-13 Thread Gregg Donovan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregg Donovan updated SOLR-4021:

Attachment: SOLR-4021.diff

This adds BigDecimal support to javabin by serializing/deserializing as a 
string. The 
[JavaDocs|https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html#toString--]
 make toString sound like a reasonable way to serialize BigDecimal.
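
A quick JDK-only check that the String round-trip preserves a BigDecimal 
exactly, which is what the patch relies on:

{code}
import java.math.BigDecimal;

public class BigDecimalRoundTrip {
  public static void main(String[] args) {
    BigDecimal original = new BigDecimal("1848.66");
    String wire = original.toString();              // what JavaBin would write
    BigDecimal restored = new BigDecimal(wire);     // what JavaBin would read back
    System.out.println(original.equals(restored));  // true: value and scale preserved
  }
}
{code}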

> JavaBinCodec has poor default behavior for unrecognized classes of objects
> --
>
> Key: SOLR-4021
> URL: https://issues.apache.org/jira/browse/SOLR-4021
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.0
>Reporter: Hoss Man
> Attachments: SOLR-4021.diff
>
>
> It seems that JavaBinCodec has inconsistent serialize/deserialize behavior 
> when dealing with objects of classes that it doesn't recognize. In 
> particular, unrecognized objects seem to be serialized with the full 
> classname prepended to the "toString()" value, and then that resulting 
> concatenated string is left as is during deserialization.
> as a concrete example: serializing & deserializing a BigDecimal value results 
> in a final value like "java.math.BigDecimal:1848.66" even though for most 
> users the simple toString() value would have worked as intended.






[jira] [Updated] (SOLR-4021) JavaBinCodec has poor default behavior for unrecognized classes of objects

2015-11-13 Thread Gregg Donovan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregg Donovan updated SOLR-4021:

Attachment: (was: SOLR-4021.diff)

> JavaBinCodec has poor default behavior for unrecognized classes of objects
> --
>
> Key: SOLR-4021
> URL: https://issues.apache.org/jira/browse/SOLR-4021
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.0
>Reporter: Hoss Man
>
> It seems that JavaBinCodec has inconsistent serialize/deserialize behavior 
> when dealing with objects of classes that it doesn't recognize. In 
> particular, unrecognized objects seem to be serialized with the full 
> classname prepended to the "toString()" value, and then that resulting 
> concatenated string is left as is during deserialization.
> as a concrete example: serializing & deserializing a BigDecimal value results 
> in a final value like "java.math.BigDecimal:1848.66" even though for most 
> users the simple toString() value would have worked as intended.






[jira] [Comment Edited] (SOLR-4021) JavaBinCodec has poor default behavior for unrecognized classes of objects

2015-11-13 Thread Gregg Donovan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004590#comment-15004590
 ] 

Gregg Donovan edited comment on SOLR-4021 at 11/13/15 7:46 PM:
---

This patch adds BigDecimal support to javabin by serializing/deserializing as a 
string. The 
[JavaDocs|https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html#toString--]
 make toString sound like a reasonable way to serialize BigDecimal.


was (Author: greggny3):
This adds BigDecimal support to javabin by serializing/deserializing as a 
string. The 
[JavaDocs|https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html#toString--]
 make toString sound like a reasonable way to serialize BigDecimal.

> JavaBinCodec has poor default behavior for unrecognized classes of objects
> --
>
> Key: SOLR-4021
> URL: https://issues.apache.org/jira/browse/SOLR-4021
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.0
>Reporter: Hoss Man
>
> It seems that JavaBinCodec has inconsistent serialize/deserialize behavior 
> when dealing with objects of classes that it doesn't recognize. In 
> particular, unrecognized objects seem to be serialized with the full 
> classname prepended to the "toString()" value, and then that resulting 
> concatenated string is left as is during deserialization.
> as a concrete example: serializing & deserializing a BigDecimal value results 
> in a final value like "java.math.BigDecimal:1848.66" even though for most 
> users the simple toString() value would have worked as intended.






[jira] [Comment Edited] (SOLR-8176) Model distributed graph traversals with Streaming Expressions

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004412#comment-15004412
 ] 

Joel Bernstein edited comment on SOLR-8176 at 11/13/15 6:09 PM:


I need to dig into the TinkerPop API. I think implementing Gremlin would be the 
desired end game. 

I see a distributed Gremlin implementation as another Parallel Computing 
problem, like the Parallel SQL interface. This is where the Streaming API comes 
in. If we can model graph traversals with the Streaming API then we can have a 
Gremlin parser that compiles to Streaming API objects. This was the approach 
taken with the SQL interface.

So this ticket is really about laying the Parallel Computing framework for 
supporting graph traversals. 

I do agree that looking at TinkerPop will be very useful in understanding what 
to model.


was (Author: joel.bernstein):
I need to dig into the TinkerPop API. I think implementing Gremlin would be the 
desired end game. 

I see distributed Gremlin implementation as another Parallel Computing problem, 
like the Parallel SQL interface. This is where the Streaming API comes in. If 
we model graph traversals with the Streaming API then we can have a Gremlin 
parser that compiles to Streaming API objects. This was the approach taken with 
the SQL interface.

So this ticket is really about laying the Parallel Computing framework for 
supporting graph traversals. 

Although I do agree that looking at TinkerPop will be very useful in 
understanding what to model.

> Model distributed graph traversals with Streaming Expressions
> -
>
> Key: SOLR-8176
> URL: https://issues.apache.org/jira/browse/SOLR-8176
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrCloud, SolrJ
>Affects Versions: Trunk
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>  Labels: Graph
> Fix For: Trunk
>
>
> I think it would be useful to model a few *distributed graph traversal* use 
> cases with Solr's *Streaming Expression* language. This ticket will explore 
> different approaches with a goal of implementing two or three common graph 
> traversal use cases.






[jira] [Commented] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004456#comment-15004456
 ] 

Mike Drob commented on SOLR-8288:
-

bq. If you can add a test for this, it would be nice to have, but I don't see a 
good way to do it without some invasive, ugly code. It should probably spin out 
into its own JIRA unless something can be done quickly.

Yea, I don't see an immediately apparent way to do this. Was hoping you knew 
something I didn't. Filed SOLR-8289.

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.






[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Varun Rajput (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Rajput updated SOLR-6736:
---
Attachment: SOLR-6736-newapi.patch

Uploading an updated patch with the tests using the new API [~anshumg]

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> # Use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 851 - Still Failing

2015-11-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/851/

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
No registered leader was found after waiting for 3ms , collection: 
halfdeletedcollection2 slice: shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found after 
waiting for 3ms , collection: halfdeletedcollection2 slice: shard1
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:637)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deleteCollectionWithDownNodes(CollectionsAPIDistributedZkTest.java:259)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1660)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:866)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:902)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:916)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Commented] (SOLR-8176) Model distributed graph traversals with Streaming Expressions

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004412#comment-15004412
 ] 

Joel Bernstein commented on SOLR-8176:
--

I need to dig into the TinkerPop API. I think implementing Gremlin would be the 
desired end game. 

I see distributed Gremlin implementation as another Parallel Computing problem, 
like the Parallel SQL interface. This is where the Streaming API comes in. If 
we model graph traversals with the Streaming API then we can have a Gremlin 
parser that compiles to Streaming API objects. This was the approach taken with 
the SQL interface.

So this ticket is really about laying the Parallel Computing framework for 
supporting graph traversals. 

Although I do agree that looking at TinkerPop will be very useful in 
understanding what to model.

> Model distributed graph traversals with Streaming Expressions
> -
>
> Key: SOLR-8176
> URL: https://issues.apache.org/jira/browse/SOLR-8176
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrCloud, SolrJ
>Affects Versions: Trunk
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>  Labels: Graph
> Fix For: Trunk
>
>
> I think it would be useful to model a few *distributed graph traversal* use 
> cases with Solr's *Streaming Expression* language. This ticket will explore 
> different approaches with a goal of implementing two or three common graph 
> traversal use cases.






[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-11-13 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004521#comment-15004521
 ] 

Michael Sun commented on SOLR-8230:
---

Uploaded the first patch for review. It adds a facet trace field to the response 
that reveals facet debugging information when debugQuery=true is set on the 
request. Here are some thoughts on the patch.

1. The patch shows the facet processor used, the elapsed time, and the facet 
description for each step of facet execution. If there is a sub-facet, it shows 
the facet hierarchy as well. This helps in understanding how the facet is 
executed and the cost of each step. 
2. The total count and unique count are not included; they are planned for the 
next sub-task.
3. The debug information is stored along with FacetContext, since FacetContext 
maintains the steps and hierarchy of facet execution. FacetContext is organized 
in a tree structure; the root of the tree is stored in the ResponseBuilder as 
"FacetContext".
4. FacetDebugInfo.getFacetDebugInfoInJSON() is designed to be static. The 
reason is that FacetContext is package-private, so from DebugComponent.java 
there is no good way to access the facet debug information otherwise.
5. The string escaping in the JSON-formatted facet trace in the response is 
not completely correct yet; I am trying to figure it out.


> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.






[jira] [Commented] (SOLR-8275) Unclear error message during recovery

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004577#comment-15004577
 ] 

Mike Drob commented on SOLR-8275:
-

The wait loop logging is at INFO level, while this is at ERROR level. Some 
places decide to turn off INFO logging for a variety of reasons (ill advised as 
it may be). I'm hoping that a clearer error message will save the next 
developer who is stuck debugging this particular piece some time. Tracing 
through the code probably took me 20 minutes on the first pass, and I'm not 
going to claim that I am smart enough to remember what it means the next time I 
have to go look. I feel like letting the logs tell me what the problem is 
explicitly would be a really useful improvement.
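
For illustration only (not the attached patch), one possible shape for a 
clearer message, with the expected and observed states spelled out so that no 
source-diving is needed:

{code}
// Sketch of a more explicit message; all parameters are placeholders.
static String prepRecoveryTimeoutMessage(String core, String shard, String collection,
    String node, String expectedState, String seenState, boolean live, String leaderUrl) {
  return String.format(
      "Timed out waiting for core %s (shard %s of collection %s on node %s) to appear "
      + "in state '%s' in the cluster state. Last state seen: state=%s, live=%s, "
      + "leader=%s. The replica may have set its state locally, but the change has "
      + "not yet been observed via ZooKeeper.",
      core, shard, collection, node, expectedState, seenState, live, leaderUrl);
}
{code}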

> Unclear error message during recovery
> -
>
> Key: SOLR-8275
> URL: https://issues.apache.org/jira/browse/SOLR-8275
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Attachments: SOLR-8275.patch, SOLR-8275.patch
>
>
> A SolrCloud install got into a bad state (mostly around LeaderElection, I 
> think) and during recovery one of the nodes was giving me this message:
> {noformat}
> 2015-11-09 13:00:56,158 ERROR org.apache.solr.cloud.RecoveryStrategy: Error 
> while trying to recover. 
> core=c1_shard1_replica4:java.util.concurrent.ExecutionException: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: I was 
> asked to wait on state recovering for shard1 in c1 on node2:8983_solr but I 
> still do not see the requested state. I see state: recovering live:true 
> leader from ZK: http://node1:8983/solr/c1_shard1_replica2/
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:599)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:370)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:236)
> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: I was 
> asked to wait on state recovering for shard1 in c1 on node2:8983_solr but I 
> still do not see the requested state. I see state: recovering live:true 
> leader from ZK: http://node1:8983/solr/c1_shard1_replica2/
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:621)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer$1.call(HttpSolrServer.java:292)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer$1.call(HttpSolrServer.java:288)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The crux of this message: "I was asked to wait on state recovering for shard1 
> in c1 on node2:8983_solr but I still do not see the requested state. I see 
> state: recovering" seems contradictory. At a minimum, we should improve this 
> error, but there might also be some erroneous logic going on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004615#comment-15004615
 ] 

Mike Drob edited comment on SOLR-8279 at 11/13/15 8:01 PM:
---

bq. +threads = new ArrayList<>(2);
Should be ArrayList<>(numThreads);

{quote}
+thread.safeStop();
+thread.safeStop();
{quote}
Typo, or some nuance here?

{quote}
+  public void stopAndStartAllReplicas() throws Exception, InterruptedException 
\{
+chaosMonkey.stopAll(random().nextInt(2000));
+
+Thread.sleep(1000);
+
+chaosMonkey.startAll();
+  }
{quote}
Is sleeping for one second sufficient here? Do we want to instead sleep until 
some condition is met (like all the servers are fully down, in case there is a 
straggler)?


was (Author: mdrob):
bq. +threads = new ArrayList<>(2);
Should be ArrayList<>(numThreads);

{quote}
+thread.safeStop();
+thread.safeStop();
{quote}
Typo, or some nuance here?

{quote}
+  public void stopAndStartAllReplicas() throws Exception, InterruptedException 
{
+chaosMonkey.stopAll(random().nextInt(2000));
+
+Thread.sleep(1000);
+
+chaosMonkey.startAll();
+  }
{quote}
Is sleeping for one second sufficient here? Do we want to instead sleep until 
some condition is met (like all the servers are fully down, in case there is a 
straggler)?

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-13 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004348#comment-15004348
 ] 

Renaud Delbru edited comment on SOLR-8263 at 11/13/15 5:38 PM:
---

A new version of the patch (this replaces the previous one) which includes a fix 
related to the write lock.
In the previous patch, the write lock was accidentally removed while re-initialising 
the update log with the new set of tlog files, because the init method was creating 
a new instance of VersionInfo. As a consequence there was a small window in which 
updates could be lost (a batch of documents was missed in roughly 1 out of 10 runs). 
The fix introduces a new init method that preserves the original VersionInfo 
instance and therefore preserves the write lock.
I have run the test 50 times without seeing the issue anymore.
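
Roughly, the idea is this (a sketch only; names and signatures below are 
illustrative, not the actual patch):

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Re-initialise the update log with a new set of tlog files while keeping the
// existing VersionInfo instance (and therefore the write lock it owns) intact.
class UpdateLogReinitSketch {
  private final Object versionInfo = new Object(); // stand-in for the real VersionInfo
  private List<File> tlogFiles = new ArrayList<>();

  // The old init path also did the equivalent of "versionInfo = new VersionInfo(...)",
  // which silently dropped the write lock held across the tlog directory switch.
  synchronized void reinit(List<File> newTlogFiles) {
    this.tlogFiles = new ArrayList<>(newTlogFiles);
    // versionInfo is intentionally NOT re-created here
  }
}
{code}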


was (Author: rendel):
A new version of the patch (this replaces the previous one) which includes a 
fix related to the write lock.
In the previous patch, the write lock was removed accidentally while 
re-initialising the update log with the new set of tlog files (the init method 
was creating a new instance of the VersionInfo). As a consequence there was a 
small time frame where updates were lost (a batch of documents were missed in 1 
over 10 runs). The fix introduces a new init method that preserves the original 
VersionInfo instance and therefore preserves the write lock.
I have run the test 50 times without seeing anymore the issue.

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follow:
> 1) Fetch the tlog files from the master
> 2) reset the update log before switching the tlog directory
> 3) switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004615#comment-15004615
 ] 

Mike Drob commented on SOLR-8279:
-

bq. +threads = new ArrayList<>(2);
Should be ArrayList<>(numThreads);

{quote}
+thread.safeStop();
+thread.safeStop();
{quote}
Typo, or some nuance here?

{quote}
+  public void stopAndStartAllReplicas() throws Exception, InterruptedException 
{
+chaosMonkey.stopAll(random().nextInt(2000));
+
+Thread.sleep(1000);
+
+chaosMonkey.startAll();
+  }
{quote}
Is sleeping for one second sufficient here? Do we want to instead sleep until 
some condition is met (like all the servers are fully down, in case there is a 
straggler)?
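
For the sleep, something along these lines might be more robust (just a sketch; 
allReplicasStopped() is a hypothetical helper, not existing test API):

{code}
// Sketch of a possible stopAndStartAllReplicas() variant: instead of a fixed
// Thread.sleep(1000), poll until everything is down (allReplicasStopped() is a
// hypothetical helper) or a timeout expires, so a straggler can't race the restart.
public void stopAndStartAllReplicas() throws Exception {
  chaosMonkey.stopAll(random().nextInt(2000));

  long deadline = System.nanoTime() + java.util.concurrent.TimeUnit.SECONDS.toNanos(30);
  while (!allReplicasStopped() && System.nanoTime() < deadline) {
    Thread.sleep(250);
  }

  chaosMonkey.startAll();
}
{code}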

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8290) remove SchemaField.checkFieldCacheSource's unused QParser argument

2015-11-13 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8290:
-

 Summary: remove SchemaField.checkFieldCacheSource's unused QParser 
argument
 Key: SOLR-8290
 URL: https://issues.apache.org/jira/browse/SOLR-8290
 Project: Solr
  Issue Type: Wish
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


From what I could see with a little looking around, the argument was added in 2011 
but not used then or since.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: EOF contract in TransactionLog

2015-11-13 Thread Erick Erickson
Hmm, I took a quick look at this and the only non-CDCR places that
call this method test for null on return. They also drop into
exceptions, which looks like a different code path than the one
expected for EOF; see UpdateLog#LogReplayer#doReplay (lines 1,337 and
1,347 in trunk).

I've put in a _very_ simple patch in TransactionLog.next() to wrap the
Object o = codec.readVal() call and catch EOFException, returning null
in that case, and I'm running tests now. Looks like a JIRA to me.

I wonder how this is affecting tlog replays?
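
In other words, something like this (a sketch of the idea, not the committed
patch; the parameter types follow the stack trace below):

{code}
import java.io.EOFException;
import java.io.IOException;

import org.apache.solr.common.util.FastInputStream;
import org.apache.solr.common.util.JavaBinCodec;

// Inside LogReader.next(): map EOF back to "return null" so the documented
// end-of-log contract holds, instead of letting the EOFException escape to
// callers such as the CDCR reader.
class EofSafeReadSketch {
  static Object readValOrNull(JavaBinCodec codec, FastInputStream fis) throws IOException {
    try {
      return codec.readVal(fis);
    } catch (EOFException e) {
      return null; // end of the transaction log reached
    }
  }
}
{code}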

On Fri, Nov 13, 2015 at 9:03 AM, Renaud Delbru  wrote:
> Dear all,
>
> in one of the unit tests of CDCR, we stumble upon the following issue:
>
>[junit4]   2> java.io.EOFException
>[junit4]   2>  at
> org.apache.solr.common.util.FastInputStream.readByte(FastInputStream.java:208)
>[junit4]   2>  at
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:198)
>[junit4]   2>  at
> org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:641)
>[junit4]   2>  at
> org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:154)
>
> From the comment of the LogReader#next() method, the contract should have
> been to return null if EOF is reached. However, this does not seem to be
> respected as per stack trace. Is it a bug and should I open an issue to fix
> it ? Or is it just the method comment that is not up to date (and should be
> probably fixed as well) ?
>
> Thanks
> --
> Renaud Delbru

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8075) Leader Initiated Recovery should not stop a leader that participated in an election with all of it's replicas from becoming a valid leader.

2015-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8075.
---
Resolution: Fixed

> Leader Initiated Recovery should not stop a leader that participated in an 
> election with all of it's replicas from becoming a valid leader.
> ---
>
> Key: SOLR-8075
> URL: https://issues.apache.org/jira/browse/SOLR-8075
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, 
> SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch
>
>
> Currently, because of SOLR-8069, all the replicas in a shard can be put into 
> LIR.
> If you restart such a shard, the valid leader will win the election and 
> sync with the shard and then be blocked from registering as ACTIVE because it 
> is in LIR.
> I think that is a little wonky because I don't think it even tries another 
> candidate, because the leader that cannot publish ACTIVE does not have its 
> election canceled.
> While SOLR-8069 should prevent this situation, we should add logic to allow a 
> leader that can sync with its full shard to become leader and publish ACTIVE 
> regardless of LIR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Ray (JIRA)
Ray created SOLR-8291:
-

 Summary: Get NPE during calling export handler
 Key: SOLR-8291
 URL: https://issues.apache.org/jira/browse/SOLR-8291
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Ray


Got an NPE during a call to the export handler; here is the stack trace:
at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
at 
org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at 
org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
at 
org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
at 
org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
at 
org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
at 
org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
at 
org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
at 
org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
at 
org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
at java.lang.Thread.run(Thread.java:745)

It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004792#comment-15004792
 ] 

ASF subversion and git services commented on SOLR-8288:
---

Commit 1714271 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1714271 ]

SOLR-8288: DistributedUpdateProcessor#doFinish should explicitly check and 
ensure it does not try to put itself into LIR.

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004799#comment-15004799
 ] 

Mike Drob commented on SOLR-8288:
-

bq. +&& 
!stdNode.getNodeProps().getCoreUrl().equals(leaderProps.getCoreUrl())) { // we 
do not want to put ourself into LIR

If we are comparing URLs, then would it make sense to check against the 
[replicaURL|https://github.com/apache/lucene-solr/blob/eab11f7fe242710216786db70814a0f492342b38/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java#L824]
 that we saw the error on? 

{{replicaUrl.equals(leaderProps.getCoreUrl())}} instead?

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004853#comment-15004853
 ] 

Ray commented on SOLR-8291:
---

Yes, some docs didn't have a value in the field; it is the same as SOLR-8285.

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004873#comment-15004873
 ] 

Ray commented on SOLR-8291:
---

In my case, the sort field is not null, but some fields in fl were null.

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004885#comment-15004885
 ] 

Varun Rajput commented on SOLR-6736:


Thanks [~gchanan] for the detailed review. Please also refer to my previous patch 
(https://issues.apache.org/jira/secure/attachment/12768656/SOLR-6736-newapi.patch) 
in which I did go the route of using the Overseer, but passing the content stream 
in the zookeeper message didn't work out as it got converted into a string. Maybe 
there is a better way of doing that which didn't occur to me; I could use your help 
in going that route or an alternate one.

As for security, this is implemented using a system property which needs to be set 
before starting up Solr, as suggested by a few members in this ticket. The flag 
"isUploadEnabled" checks whether uploading configsets is enabled.
{code}
   public ConfigSetsHandler(final CoreContainer coreContainer) {
 this.coreContainer = coreContainer;
+isUploadEnabled = Boolean.parseBoolean(System.getProperty(
+ConfigSetParams.ENABLE_CONFIGSET_UPLOAD, 
ConfigSetParams.ENABLE_CONFIGSET_UPLOAD_DEFAULT));
   }
{code}

I will take care of the minor corrections, thanks for pointing them out!

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator

2015-11-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6276:
-
Fix Version/s: 6.0

> Add matchCost() api to TwoPhaseDocIdSetIterator
> ---
>
> Key: LUCENE-6276
> URL: https://issues.apache.org/jira/browse/LUCENE-6276
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.0, 5.4
>
> Attachments: LUCENE-6276-ExactPhraseOnly.patch, 
> LUCENE-6276-NoSpans.patch, LUCENE-6276-NoSpans2.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch
>
>
> We could add a method like TwoPhaseDISI.matchCost() defined as something like 
> estimate of nanoseconds or similar. 
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array 
> so that cheaper ones are called first. Today it has no idea if one scorer is 
> a simple phrase scorer on a short field vs another that might do some geo 
> calculation or more expensive stuff.
> PhraseScorers could implement this based on index statistics (e.g. 
> totalTermFreq/maxDoc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004847#comment-15004847
 ] 

Erick Erickson commented on SOLR-8291:
--

Do all of your docs have values in the field? If not maybe a dupe of SOLR-8285?

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004632#comment-15004632
 ] 

ASF subversion and git services commented on LUCENE-6276:
-

Commit 1714261 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1714261 ]

LUCENE-6276: Added TwoPhaseIterator.matchCost().

> Add matchCost() api to TwoPhaseDocIdSetIterator
> ---
>
> Key: LUCENE-6276
> URL: https://issues.apache.org/jira/browse/LUCENE-6276
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6276-ExactPhraseOnly.patch, 
> LUCENE-6276-NoSpans.patch, LUCENE-6276-NoSpans2.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch
>
>
> We could add a method like TwoPhaseDISI.matchCost() defined as something like 
> estimate of nanoseconds or similar. 
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array 
> so that cheaper ones are called first. Today it has no idea if one scorer is 
> a simple phrase scorer on a short field vs another that might do some geo 
> calculation or more expensive stuff.
> PhraseScorers could implement this based on index statistics (e.g. 
> totalTermFreq/maxDoc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 617 - Failure

2015-11-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/617/

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2C3F5BD46BC7DE97]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2C3F5BD46BC7DE97]:0)




Build Log:
[...truncated 11285 lines...]
   [junit4] Suite: org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J1/temp/solr.cloud.LeaderInitiatedRecoveryOnCommitTest_2C3F5BD46BC7DE97-001/init-core-data-001
   [junit4]   2> 1050206 INFO  
(SUITE-LeaderInitiatedRecoveryOnCommitTest-seed#[2C3F5BD46BC7DE97]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1050208 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1050212 INFO  (Thread-4673) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1050212 INFO  (Thread-4673) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1050311 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.ZkTestServer start zk server on port:46583
   [junit4]   2> 1050312 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1050333 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1050345 INFO  (zkCallback-810-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@1556c7f1 
name:ZooKeeperConnection Watcher:127.0.0.1:46583 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1050346 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1050346 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1050346 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 1050349 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1050381 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1050397 INFO  (zkCallback-811-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@99e97f2 name:ZooKeeperConnection 
Watcher:127.0.0.1:46583/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2> 1050398 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1050398 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1050398 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1
   [junit4]   2> 1050399 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1/shards
   [junit4]   2> 1050401 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection
   [junit4]   2> 1050402 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection/shards
   [junit4]   2> 1050403 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 1050403 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[2C3F5BD46BC7DE97]) [] 
o.a.s.c.c.SolrZkClient 

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004844#comment-15004844
 ] 

Gregory Chanan commented on SOLR-6736:
--

The approach of doing the upload directly in the handler seems problematic 
because it doesn't involve the exclusivity enforcement in the Overseer.  Are you 
sure that if you do things like concurrently UPLOAD and DELETE, things will be 
left in a sensible state?  I guess the alternative is writing the zip into the 
zookeeper message (could that overrun jute.maxbuffer?) or uploading it somewhere 
else in ZK and having the Overseer copy it to the correct location.  
The latter means you have to upload twice, but I'm guessing this operation is 
rare anyway.

I also don't see any signature checking for security.  I don't see how we can 
have this feature without security.

More minor comments:
{code}
+  if (action == ConfigSetAction.UPLOAD) {
+if (isUploadEnabled) {
+  hanldeConfigUploadRequest(req, rsp);
+  return;
+} else {
+  throw new SolrException(SolrException.ErrorCode.UNAUTHORIZED, 
+  "Uploads are not enabled. Please set the system property \"" 
+  + ConfigSetParams.ENABLE_CONFIGSET_UPLOAD + "\" to true");
+}
+  }
{code}
I don't see why this can't follow the usual operation.call path.  Also handle 
is not spelled correctly.

{code}
+String httpMethod = (String) req.getContext().get("httpMethod");
+if (!"POST".equals(httpMethod)) {
+  throw new SolrException(ErrorCode.BAD_REQUEST,
+  "The upload action supports POST requests only");
+}
{code}
If we are going to check this stuff, I'd rather enforce it for all ConfigSet 
requests.  This can be done by storing the allowed verbs in the ConfigSetAction 
enum itself or somewhere in the handler.
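
For instance, a minimal sketch of that idea (the allowedMethods member and the 
action/verb mapping below are illustrative assumptions, not existing API):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Each action declares the HTTP verbs it accepts; the handler then performs one
// generic check per request instead of special-casing UPLOAD.
public enum ConfigSetActionSketch {
  CREATE("POST"),
  DELETE("POST"),
  UPLOAD("POST");

  private final Set<String> allowedMethods;

  ConfigSetActionSketch(String... methods) {
    this.allowedMethods = new HashSet<>(Arrays.asList(methods));
  }

  public boolean allows(String httpMethod) {
    return allowedMethods.contains(httpMethod);
  }
}
{code}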

{code}
+if (zkClient.exists(configPathInZk, true)) {
+  throw new SolrException(ErrorCode.SERVER_ERROR,
+  "The configuration " + configSetName + " already exists in 
zookeeper");
+}
+
{code}
This looks like it should be a BAD_REQUEST?

{code}
+for (ContentStream contentStream : req.getContentStreams()) {
  ..
+  break;
+}
{code}
This just reads the first content stream?  Why have a loop then?

How about solrj classes for the requests/responses that you can use in the 
test, instead of hand parsing everything?

{code}
+//Checking error when mo configuration name is specified in request
{code}
mo -> no

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004701#comment-15004701
 ] 

ASF subversion and git services commented on LUCENE-6276:
-

Commit 1714266 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714266 ]

LUCENE-6276: Added TwoPhaseIterator.matchCost().

> Add matchCost() api to TwoPhaseDocIdSetIterator
> ---
>
> Key: LUCENE-6276
> URL: https://issues.apache.org/jira/browse/LUCENE-6276
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.4
>
> Attachments: LUCENE-6276-ExactPhraseOnly.patch, 
> LUCENE-6276-NoSpans.patch, LUCENE-6276-NoSpans2.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch
>
>
> We could add a method like TwoPhaseDISI.matchCost() defined as something like 
> estimate of nanoseconds or similar. 
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array 
> so that cheaper ones are called first. Today it has no idea if one scorer is 
> a simple phrase scorer on a short field vs another that might do some geo 
> calculation or more expensive stuff.
> PhraseScorers could implement this based on index statistics (e.g. 
> totalTermFreq/maxDoc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator

2015-11-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6276.
--
   Resolution: Fixed
Fix Version/s: 5.4

I just committed the changes. Thanks Paul!

> Add matchCost() api to TwoPhaseDocIdSetIterator
> ---
>
> Key: LUCENE-6276
> URL: https://issues.apache.org/jira/browse/LUCENE-6276
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.4
>
> Attachments: LUCENE-6276-ExactPhraseOnly.patch, 
> LUCENE-6276-NoSpans.patch, LUCENE-6276-NoSpans2.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch
>
>
> We could add a method like TwoPhaseDISI.matchCost() defined as something like 
> estimate of nanoseconds or similar. 
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array 
> so that cheaper ones are called first. Today it has no idea if one scorer is 
> a simple phrase scorer on a short field vs another that might do some geo 
> calculation or more expensive stuff.
> PhraseScorers could implement this based on index statistics (e.g. 
> totalTermFreq/maxDoc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004872#comment-15004872
 ] 

Joel Bernstein commented on SOLR-8291:
--

It does appear to be the null in the sort field causing this error. 

I should have some time next week to work on SOLR-8285.

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8290) remove SchemaField.checkFieldCacheSource's unused QParser argument

2015-11-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8290:
--
Attachment: SOLR-8290.patch

patch against trunk

> remove SchemaField.checkFieldCacheSource's unused QParser argument
> --
>
> Key: SOLR-8290
> URL: https://issues.apache.org/jira/browse/SOLR-8290
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8290.patch
>
>
> From what I could see with a little looking around the argument was added in 
> 2011 but not used then or since.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7989) After a new leader is elected it, it should ensure it's state is ACTIVE if it has already registered with ZK.

2015-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-7989.
---
Resolution: Fixed

> After a new leader is elected it, it should ensure it's state is ACTIVE if it 
> has already registered with ZK.
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and a (flawed?) leader election happens.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) becomes the leader, but stays down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8288.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.4

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6874) WhitespaceTokenizer should tokenize on NBSP

2015-11-13 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned LUCENE-6874:
-

Assignee: Uwe Schindler

> WhitespaceTokenizer should tokenize on NBSP
> ---
>
> Key: LUCENE-6874
> URL: https://issues.apache.org/jira/browse/LUCENE-6874
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: David Smiley
>Assignee: Uwe Schindler
>Priority: Minor
> Attachments: LUCENE-6874-chartokenizer.patch, 
> LUCENE-6874-chartokenizer.patch, LUCENE-6874-chartokenizer.patch, 
> LUCENE-6874-jflex.patch, LUCENE-6874.patch, LUCENE_6874_jflex.patch, 
> icu-datasucker.patch, unicode-ws-tokenizer.patch, unicode-ws-tokenizer.patch, 
> unicode-ws-tokenizer.patch
>
>
> WhitespaceTokenizer uses [Character.isWhitespace 
> |http://docs.oracle.com/javase/8/docs/api/java/lang/Character.html#isWhitespace-int-]
>  to decide what is whitespace.  Here's a pertinent excerpt:
> bq. It is a Unicode space character (SPACE_SEPARATOR, LINE_SEPARATOR, or 
> PARAGRAPH_SEPARATOR) but is not also a non-breaking space ('\u00A0', 
> '\u2007', '\u202F')
> Perhaps Character.isWhitespace should have been called 
> isLineBreakableWhitespace?
> I think WhitespaceTokenizer should tokenize on this.  I am aware it's easy to 
> work around but why leave this trap in by default?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6874) WhitespaceTokenizer should tokenize on NBSP

2015-11-13 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004794#comment-15004794
 ] 

Uwe Schindler commented on LUCENE-6874:
---

If nobody objects, I will commit this tomorrow.

> WhitespaceTokenizer should tokenize on NBSP
> ---
>
> Key: LUCENE-6874
> URL: https://issues.apache.org/jira/browse/LUCENE-6874
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: David Smiley
>Priority: Minor
> Attachments: LUCENE-6874-chartokenizer.patch, 
> LUCENE-6874-chartokenizer.patch, LUCENE-6874-chartokenizer.patch, 
> LUCENE-6874-jflex.patch, LUCENE-6874.patch, LUCENE_6874_jflex.patch, 
> icu-datasucker.patch, unicode-ws-tokenizer.patch, unicode-ws-tokenizer.patch, 
> unicode-ws-tokenizer.patch
>
>
> WhitespaceTokenizer uses [Character.isWhitespace 
> |http://docs.oracle.com/javase/8/docs/api/java/lang/Character.html#isWhitespace-int-]
>  to decide what is whitespace.  Here's a pertinent excerpt:
> bq. It is a Unicode space character (SPACE_SEPARATOR, LINE_SEPARATOR, or 
> PARAGRAPH_SEPARATOR) but is not also a non-breaking space ('\u00A0', 
> '\u2007', '\u202F')
> Perhaps Character.isWhitespace should have been called 
> isLineBreakableWhitespace?
> I think WhitespaceTokenizer should tokenize on this.  I am aware it's easy to 
> work around but why leave this trap in by default?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004795#comment-15004795
 ] 

ASF subversion and git services commented on SOLR-8288:
---

Commit 1714272 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714272 ]

SOLR-8288: DistributedUpdateProcessor#doFinish should explicitly check and 
ensure it does not try to put itself into LIR.

> DistributedUpdateProcessor#doFinish should explicitly check and ensure it 
> does not try to put itself into LIR.
> --
>
> Key: SOLR-8288
> URL: https://issues.apache.org/jira/browse/SOLR-8288
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8288.patch
>
>
> We have to be careful about this because currently, something like a commit 
> is sent over http even to the local node and if that fails for some reason, 
> the leader might try and LIR itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005002#comment-15005002
 ] 

Ray commented on SOLR-8291:
---

1) The issue does not happen all the time, but when it does, all the queries 
fail, because we only use this kind of query to fetch data.
2) We have 3 shards for the collection; each shard has about 15 segments.

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8286) Remove instances of solr.hdfs.blockcache.write.enabled from tests and docs

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004927#comment-15004927
 ] 

ASF subversion and git services commented on SOLR-8286:
---

Commit 1714279 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714279 ]

SOLR-8286: Remove instances of solr.hdfs.blockcache.write.enabled from tests 
and docs

> Remove instances of solr.hdfs.blockcache.write.enabled from tests and docs
> --
>
> Key: SOLR-8286
> URL: https://issues.apache.org/jira/browse/SOLR-8286
> Project: Solr
>  Issue Type: Bug
>  Components: documentation, Tests
>Affects Versions: 5.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8286.patch
>
>
> solr.hdfs.blockcache.write.enabled is currently disabled whether or not you 
> set it in the solr configs.  It makes sense to just avoid mentioning it in 
> the docs (it's still there on 
> https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS) and 
> the morphlines examples.  Best case, it's unnecessary information.  Worse 
> case, it causes people trouble on versions where it's not disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8286) Remove instances of solr.hdfs.blockcache.write.enabled from tests and docs

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004926#comment-15004926
 ] 

ASF subversion and git services commented on SOLR-8286:
---

Commit 1714278 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1714278 ]

SOLR-8286: Remove instances of solr.hdfs.blockcache.write.enabled from tests 
and docs

> Remove instances of solr.hdfs.blockcache.write.enabled from tests and docs
> --
>
> Key: SOLR-8286
> URL: https://issues.apache.org/jira/browse/SOLR-8286
> Project: Solr
>  Issue Type: Bug
>  Components: documentation, Tests
>Affects Versions: 5.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8286.patch
>
>
> solr.hdfs.blockcache.write.enabled is currently disabled whether or not you 
> set it in the solr configs.  It makes sense to just avoid mentioning it in 
> the docs (it's still there on 
> https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS) and 
> the morphlines examples.  Best case, it's unnecessary information.  Worst 
> case, it causes people trouble on versions where it's not disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 187 - Failure!

2015-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/187/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:60018 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:60018 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([ECD66C304D901EF0:648253EAE36C7308]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:181)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:115)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:110)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:97)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:278)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1474)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:60018 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:208)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:173)
... 37 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestCloudSchemaless


[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2015-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005142#comment-15005142
 ] 

Mark Miller commented on SOLR-6305:
---

I only looked at the lock file above. Perhaps something is preventing the same 
thing from working on index files - odd, but certainly possible.

> Ability to set the replication factor for index files created by 
> HDFSDirectoryFactory
> -
>
> Key: SOLR-6305
> URL: https://issues.apache.org/jira/browse/SOLR-6305
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
> Environment: hadoop-2.2.0
>Reporter: Timothy Potter
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different 
> replication factor than the configured DFS default because it uses: 
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using 
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
> factor for the Solr directories to a lower value than the default. I realize 
> this might reduce the chance of data locality but since Solr cores each have 
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the 
> replication factor, but that's a one-time shot and doesn't affect new files 
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
> example
> Then added some more docs to the coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the repfact from their parent 
> directory.
> Not sure if we need to go as far as allowing different replication factor per 
> collection but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
> this using the Configuration object but nothing jumped out at me ... and the 
> implementation for getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
> return getServerDefaults();
>   }
> Path is ignored ;-)
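
For illustration, a hedged sketch of creating a file with an explicit per-file 
replication factor through the stock Hadoop FileSystem API (the create overload 
that takes a short replication), instead of relying on getServerDefaults(); the 
class name and the idea of wiring it into HdfsFileWriter are assumptions, not part 
of any patch:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ReplicationAwareCreate {

  // Creates the file with the caller-supplied replication factor; buffer size and
  // block size still come from the configuration/defaults for the path.
  public static FSDataOutputStream create(FileSystem fs, Path path, short replication)
      throws IOException {
    int bufferSize = fs.getConf().getInt("io.file.buffer.size", 4096);
    long blockSize = fs.getDefaultBlockSize(path);
    return fs.create(path, FsPermission.getFileDefault(), true /* overwrite */,
        bufferSize, replication, blockSize, null /* progressable */);
  }
}
{code}

Whether Solr should expose the replication value per collection or per directory 
factory is the open question above; the sketch only shows that the HDFS API itself 
allows it per file.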



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004906#comment-15004906
 ] 

Joel Bernstein edited comment on SOLR-8291 at 11/13/15 11:30 PM:
-

It looks like the ExportQParserPlugin is not populating the Bits for each 
segment. Not sure why that would be. 

[~runiu], Could you post your entire query?

Are you using this in conjunction with the CollapsingQParserPlugin?


was (Author: joel.bernstein):
It looks like the ExportQParserPlugin is not populating the Bits for each 
segment. Not sure why that would be. 

[~runiu], Could post you're entire query?

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004906#comment-15004906
 ] 

Joel Bernstein commented on SOLR-8291:
--

It looks like the ExportQParserPlugin is not populating the Bits for each 
segment. Not sure why that would be. 

[~runiu], Could you post your entire query?
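
For reference, a minimal sketch (not Solr's actual SortingResponseWriter code) of 
the per-segment iteration the stack trace points at; a segment whose FixedBitSet 
was never populated would be null here, and wrapping it in a BitSetIterator is 
exactly the kind of spot that would throw this NPE:

{code}
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitSetIterator;
import org.apache.lucene.util.FixedBitSet;

public class ExportIterationSketch {

  // One FixedBitSet per index segment, as collected by the {!xport} query.
  static void writeDocs(FixedBitSet[] setsPerSegment) throws IOException {
    for (FixedBitSet set : setsPerSegment) {
      if (set == null) {
        // The suspected condition: a segment whose bits were never populated.
        continue;
      }
      BitSetIterator it = new BitSetIterator(set, set.cardinality());
      for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
        System.out.println("exporting segment-local doc " + doc);
      }
    }
  }
}
{code}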

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8284) JSON Facet API sort:index causes NPE

2015-11-13 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8284:
--
Attachment: SOLR-8284.patch

> JSON Facet API sort:index causes NPE
> 
>
> Key: SOLR-8284
> URL: https://issues.apache.org/jira/browse/SOLR-8284
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Minor
> Attachments: SOLR-8284.patch
>
>
> sort:index was meant to be a shortcut for sort:"index asc", but this 
> currently causes a NPE. 
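
For context, a SolrJ request of roughly this shape (collection and field names are 
made up for illustration) exercises that shorthand:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class IndexSortFacetExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/techproducts")) {
      SolrQuery q = new SolrQuery("*:*");
      // sort:index is the shorthand for "index asc"; sort:"index asc" is the long form.
      q.add("json.facet", "{cats:{type:terms, field:cat, sort:index}}");
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}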



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 187 - Failure!

2015-11-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/187/
Java: multiarch/jdk1.7.0 -d64 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([6398EFBD9A322ED4:F96C925F04A8B2E8]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:766)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:241)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1660)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:866)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:902)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=0]
xml response was: 

02530530530530what's 
inside?info151778147208003584042muLti-Default2015-11-14T02:46:47.84Z


request was:version=2.2=standard=id:530=20=0
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:759)
... 40 more





[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004891#comment-15004891
 ] 

Gregory Chanan commented on SOLR-6736:
--

bq. Please also refer to my previous patch 
(https://issues.apache.org/jira/secure/attachment/12768656/SOLR-6736-newapi.patch)
 in which I did go the route of using Overseer but passing the content stream 
in the zookeeper message didn't work out as it got converted into a string

Is that the correct link?  I don't see any Overseer changes there.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004916#comment-15004916
 ] 

Gregory Chanan commented on SOLR-6736:
--

Ok, I see it now.  I'll take a look when I get the chance.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8286) Remove instances of solr.hdfs.blockcache.write.enabled from tests and docs

2015-11-13 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-8286.
--
Resolution: Fixed

Committed to 5.4, Trunk and removed from the cwiki page.

> Remove instances of solr.hdfs.blockcache.write.enabled from tests and docs
> --
>
> Key: SOLR-8286
> URL: https://issues.apache.org/jira/browse/SOLR-8286
> Project: Solr
>  Issue Type: Bug
>  Components: documentation, Tests
>Affects Versions: 5.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8286.patch
>
>
> solr.hdfs.blockcache.write.enabled is currently disabled whether or not you 
> set it in the solr configs.  It makes sense to just avoid mentioning it in 
> the docs (it's still there on 
> https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS) and 
> the morphlines examples.  Best case, it's unnecessary information.  Worst 
> case, it causes people trouble on versions where it's not disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004979#comment-15004979
 ] 

Anshum Gupta commented on SOLR-6736:


Thanks for the patch Varun.

It would also be good to leave room for allowing/disallowing/checking certain 
properties in the config file itself, even if that requires some sort of 
parsing e.g. remoteStreaming etc. to make sure that the files don't contain 
undesirable settings. Just to be clear, I'm not saying we hard-code this in 
there, but leave room for either configuration or extension.

Also, in addition to what Greg mentioned, here are a few more minor suggestions:
* the use of final keyword isn't required here:
{code}
final String ENABLE_CONFIGSET_UPLOAD = "configs.upload";
final String ENABLE_CONFIGSET_UPLOAD_DEFAULT = "false";
{code}
* In ConfigSetsHandler, the error code shouldn't be UNAUTHORIZED, as no other 
user/credentials would be allowed to do this either. Perhaps a FORBIDDEN or 
BAD_REQUEST would serve this better (see the sketch after this list)?
{code}
throw new SolrException(SolrException.ErrorCode.UNAUTHORIZED, 
  "Uploads are not enabled. Please set the system property \"" 
  + ConfigSetParams.ENABLE_CONFIGSET_UPLOAD + "\" to true");
{code}
* With 'Exception' there, you should remove everything else from 
ConfigSetsHandler here:
{code}
  private void hanldeConfigUploadRequest(SolrQueryRequest req, 
SolrQueryResponse rsp) throws IOException,
  KeeperException, InterruptedException, Exception {
{code}
* Very small and trivial but there are a few unwanted imports. 
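
A minimal sketch of the error-code suggestion from the second bullet above (the 
surrounding handler code is assumed; ENABLE_CONFIGSET_UPLOAD is taken from the 
patch under discussion and shown here as a plain constant):

{code}
import org.apache.solr.common.SolrException;

public class UploadGuardSketch {

  // "configs.upload" mirrors the property name from the patch; illustrative only.
  static final String ENABLE_CONFIGSET_UPLOAD = "configs.upload";

  static void ensureUploadsEnabled() {
    if (!Boolean.getBoolean(ENABLE_CONFIGSET_UPLOAD)) {
      // FORBIDDEN (HTTP 403) rather than UNAUTHORIZED (401), as suggested above.
      throw new SolrException(SolrException.ErrorCode.FORBIDDEN,
          "Uploads are not enabled. Please set the system property \""
          + ENABLE_CONFIGSET_UPLOAD + "\" to true");
    }
  }
}
{code}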


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004892#comment-15004892
 ] 

Joel Bernstein commented on SOLR-8291:
--

Actually, on further review it's not completely clear why this is occurring. I'll 
keep reviewing the code.

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004895#comment-15004895
 ] 

Varun Rajput edited comment on SOLR-6736 at 11/13/15 11:23 PM:
---

Yes, the changes I made are in OverseerConfigSetMessageHandler. If the link 
doesn't get you to the right patch, you may open the patch named 
"SOLR-6736-newapi.patch" that I uploaded on October 25, 2015 from the 
attachments.


was (Author: varunrajput):
Yes, the changes I made are in OverseerConfigSetMessageHandler. If the link 
doesn't get you to the right patch, you may open the patch named 
"SOLR-6736-newapi.patch" that I uploaded on October 25, 2015 from the 
attachements.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004895#comment-15004895
 ] 

Varun Rajput commented on SOLR-6736:


Yes, the changes I made are in OverseerConfigSetMessageHandler. If the link 
doesn't get you to the right patch, you may open the patch named 
"SOLR-6736-newapi.patch" that I uploaded on October 25, 2015 from the 
attachments.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004919#comment-15004919
 ] 

Varun Rajput commented on SOLR-6736:


Great, thanks in advance.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004925#comment-15004925
 ] 

Joel Bernstein commented on SOLR-8291:
--

Could you also post how you have configured the /export handler in the 
solrconfig.xml?

Also, do you have a custom PostFilter that might not be processing each segment?


> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004938#comment-15004938
 ] 

Ray commented on SOLR-8291:
---

Hi Joel:
   1. I am not using the CollapsingQParserPlugin. Here is my query (I found id1 
and id2 sometimes had null values):
   
http://host/solr/collection/export?q=date:[NOW+TO+*]=false=id1,id2,doubleValue1=doubleValue1+desc
   2. We don't have a custom PostFilter. Here is the configuration for the export 
handler:

<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems some FixedBitSet instances were set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8291) Get NPE during calling export handler

2015-11-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004951#comment-15004951
 ] 

Joel Bernstein commented on SOLR-8291:
--

We're going to have to try and reproduce this to fix it.

1) Is every query failing for you, or just this one?
2) How many segments are in the index?
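
For context while reproducing: the NPE comes from BitSetIterator's constructor being handed a null FixedBitSet for one of the segments. A minimal sketch of the kind of per-segment null check that would avoid it; the loop and names are illustrative assumptions, not the actual SortingResponseWriter code:

{code}
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitSetIterator;
import org.apache.lucene.util.FixedBitSet;

public final class ExportNullGuardSketch {
  // Count exportable docs across per-segment bit sets, skipping segments whose
  // set was never populated instead of handing null to BitSetIterator.
  public static long countExportableDocs(FixedBitSet[] perSegmentSets) {
    long total = 0;
    for (FixedBitSet set : perSegmentSets) {
      if (set == null) {
        continue; // this is the case that currently ends in the NPE
      }
      BitSetIterator it = new BitSetIterator(set, set.cardinality());
      while (it.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
        total++;
      }
    }
    return total;
  }
}
{code}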

> Get NPE during calling export handler
> -
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems some FixedBitSet instances were set to null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8284) JSON Facet API sort:index causes NPE

2015-11-13 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004984#comment-15004984
 ] 

Michael Sun commented on SOLR-8284:
---

It's probably because facet.sortVariable is not set when no direction is given. 
The patch is uploaded; it includes a test to catch this failure.
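
For reference, a small SolrJ sketch of the request shape that hits this; the host, collection, and field names are assumptions for illustration only:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JsonFacetSortIndexExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/techproducts")) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(0);
      // sort:"index" is the shorthand that currently triggers the NPE;
      // the long form sort:"index asc" works.
      q.set("json.facet", "{cats:{type:terms, field:cat, sort:\"index\"}}");
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}

Swapping the json.facet value for {{sort:"index asc"}} in the same program is a quick way to confirm that only the shorthand form fails.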


> JSON Facet API sort:index causes NPE
> 
>
> Key: SOLR-8284
> URL: https://issues.apache.org/jira/browse/SOLR-8284
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Minor
>
> sort:index was meant to be a shortcut for sort:"index asc", but this 
> currently causes a NPE. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1016 - Still Failing

2015-11-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1016/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:44745/bnc/delete_data_dir

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44745/bnc/delete_data_dir
at 
__randomizedtesting.SeedInfo.seed([F14E7BEDE03608BB:791A44374ECA6543]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:585)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.createAndDeleteCollection(StressHdfsTest.java:197)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1660)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:866)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:902)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:916)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-11-13 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005137#comment-15005137
 ] 

Varun Rajput commented on SOLR-6736:


Hi Anshum, thanks for the suggestions. I will put all of these into the next 
patch. In terms of checking certain properties, we can do that, but this upload 
is for the complete config set and not a single file. Do you have any 
suggestions on how to do that for a complete config set upload?
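
For discussion, one possible shape of such a check is to vet every entry while streaming the uploaded archive, before anything is written to ZooKeeper. A rough sketch with placeholder limits and checks; none of this is taken from the patch:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public final class ConfigSetUploadCheck {
  private static final long MAX_FILE_BYTES = 1024 * 1024; // placeholder per-file limit

  public static void validate(InputStream uploadedZip) throws IOException {
    try (ZipInputStream zip = new ZipInputStream(uploadedZip)) {
      byte[] buf = new byte[8192];
      ZipEntry entry;
      while ((entry = zip.getNextEntry()) != null) {
        // reject path traversal in entry names
        if (entry.getName().contains("..")) {
          throw new IOException("Illegal path in config set: " + entry.getName());
        }
        // stream the entry and enforce a size cap per file
        long size = 0;
        for (int n; (n = zip.read(buf)) > 0; ) {
          size += n;
          if (size > MAX_FILE_BYTES) {
            throw new IOException("Entry too large in config set: " + entry.getName());
          }
        }
      }
    }
  }
}
{code}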

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> newzkconf.zip, test_private.pem, test_pub.der, zkconfighandler.zip, 
> zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is alredy a conf called 'mynewconf'. The file could be a 
> jar , zip or a tar file which contains all the files for the this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6893) replace CorePlusExtensionsParser with CorePlusQueries[PlusSandbox]Parser

2015-11-13 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003864#comment-15003864
 ] 

Christine Poerschke commented on LUCENE-6893:
-

Oops, I missed some {{CorePlusExtensionsParser}} references; please ignore the 
current patch, I will extend/replace it later.

> replace CorePlusExtensionsParser with CorePlusQueries[PlusSandbox]Parser
> 
>
> Key: LUCENE-6893
> URL: https://issues.apache.org/jira/browse/LUCENE-6893
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6893.patch
>
>
> proposed change (patch against trunk to follow):
>  * replace {{CorePlusExtensionsParser}} with {{CorePlusQueriesParser}} and 
> {{CorePlusQueriesPlusSandboxParser}} (the latter extending the former).
> motivation:
>  * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which uses 
> {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
>  * we wish to use or inherit from {{CorePlusExtensionsParser}} but not pull 
> in any sandbox code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6883) Getting exception _t.si (No such file or directory)

2015-11-13 Thread Tejas Jethva (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003841#comment-15003841
 ] 

Tejas Jethva commented on LUCENE-6883:
--

Thanks Michael,

We have been using 4.2 for a long time and have never used 3.x.

If it is fixed, in which version can we check?

> Getting exception _t.si (No such file or directory)
> ---
>
> Key: LUCENE-6883
> URL: https://issues.apache.org/jira/browse/LUCENE-6883
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
>Reporter: Tejas Jethva
>
> We are getting the following exception when we are trying to update the cache. 
> Following are two scenarios in which we get this error:
> scenario 1:
> 2015-11-03 06:45:18,213 [main] ERROR java.io.FileNotFoundException: 
> /app/cache/index-persecurity/PERSECURITY_INDEX-QCH/_mb.si (No such file or 
> directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
>   at 
> org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65)
>   .
>   
> scenario 2:
> java.io.FileNotFoundException: 
> /app/1.0.5_loadtest/index-persecurity/PERSECURITY_INDEX-ITQ/_t.si (No such 
> file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:347)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:284)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:247)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:169)
>   ..
>   
>   
> What might be the possible reasons for this?  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8285) Ensure that /export handles documents that have no value for the field gracefully.

2015-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8285:
--
Summary: Ensure that /export handles documents that have no value for the 
field gracefully.  (was: Insure that /export handles documents that have no 
value for the field gracefully.)

> Ensure that /export handles documents that have no value for the field 
> gracefully.
> --
>
> Key: SOLR-8285
> URL: https://issues.apache.org/jira/browse/SOLR-8285
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004035#comment-15004035
 ] 

Mark Miller commented on SOLR-7989:
---

Also, something I was not really aware of initially: what you are doing does 
seem to be a valid test for the force leader API that you added in the other 
issue.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) becomes leader, but stays down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7989) After a new leader is elected it, it should ensure it's state is ACTIVE if it has already registered with ZK.

2015-11-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7989:
--
Summary: After a new leader is elected it, it should ensure it's state is 
ACTIVE if it has already registered with ZK.  (was: Down replica elected 
leader, stays down after successful election)

> After a new leader is elected it, it should ensure it's state is ACTIVE if it 
> has already registered with ZK.
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) becomes leader, but stays down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004038#comment-15004038
 ] 

ASF subversion and git services commented on SOLR-7989:
---

Commit 1714211 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1714211 ]

SOLR-7989: After a new leader is elected it, it should ensure it's state is 
ACTIVE if it has already registered with ZK.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) becomes leader, but stays down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7989) After a new leader is elected it, it should ensure it's state is ACTIVE if it has already registered with ZK.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004040#comment-15004040
 ] 

ASF subversion and git services commented on SOLR-7989:
---

Commit 1714212 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714212 ]

SOLR-7989: After a new leader is elected it, it should ensure it's state is 
ACTIVE if it has already registered with ZK.

> After a new leader is elected it, it should ensure it's state is ACTIVE if it 
> has already registered with ZK.
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) becomes leader, but stays down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004053#comment-15004053
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1714216 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1714216 ]

SOLR-8279: Add a new SolrCloud test that stops and starts the cluster while 
indexing data.

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8075) Leader Initiated Recovery should not stop a leader that participated in an election with all of it's replicas from becoming a valid leader.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003992#comment-15003992
 ] 

ASF subversion and git services commented on SOLR-8075:
---

Commit 1714204 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1714204 ]

SOLR-8075: Fix faulty implementation.

> Leader Initiated Recovery should not stop a leader that participated in an 
> election with all of it's replicas from becoming a valid leader.
> ---
>
> Key: SOLR-8075
> URL: https://issues.apache.org/jira/browse/SOLR-8075
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, 
> SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch
>
>
> Currently, because of SOLR-8069, all the replicas in a shard can be put into 
> LIR.
> If you restart such a shard, the valid leader will win the election and 
> sync with the shard and then be blocked from registering as ACTIVE because it 
> is in LIR.
> I think that is a little wonky because I don't think it even tries another 
> candidate because the leader that cannot publish ACTIVE does not have its 
> election canceled.
> While SOLR-8069 should prevent this situation, we should add logic to allow a 
> leader that can sync with its full shard to become leader and publish ACTIVE 
> regardless of LIR.
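
The added logic boils down to a simple predicate; an illustrative sketch with hypothetical names, not the actual election code:

{code}
public final class LirLeaderOverride {
  // A candidate stuck in leader-initiated recovery may still become leader and
  // publish ACTIVE, provided it managed to sync with every replica in the shard.
  public static boolean mayPublishActive(boolean inLeaderInitiatedRecovery,
                                         boolean syncedWithAllReplicas) {
    return !inLeaderInitiatedRecovery || syncedWithAllReplicas;
  }
}
{code}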



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8075) Leader Initiated Recovery should not stop a leader that participated in an election with all of it's replicas from becoming a valid leader.

2015-11-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003997#comment-15003997
 ] 

ASF subversion and git services commented on SOLR-8075:
---

Commit 1714205 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1714205 ]

SOLR-8075: Fix faulty implementation.

> Leader Initiated Recovery should not stop a leader that participated in an 
> election with all of it's replicas from becoming a valid leader.
> ---
>
> Key: SOLR-8075
> URL: https://issues.apache.org/jira/browse/SOLR-8075
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, 
> SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch
>
>
> Currently, because of SOLR-8069, all the replicas in a shard can be put into 
> LIR.
> If you restart such a shard, the valid leader will win the election and 
> sync with the shard and then be blocked from registering as ACTIVE because it 
> is in LIR.
> I think that is a little wonky because I don't think it even tries another 
> candidate because the leader that cannot publish ACTIVE does not have its 
> election canceled.
> While SOLR-8069 should prevent this situation, we should add logic to allow a 
> leader that can sync with its full shard to become leader and publish ACTIVE 
> regardless of LIR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6893) factor out CorePlusQueriesParser from CorePlusExtensionsParser

2015-11-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6893:

Summary: factor out CorePlusQueriesParser from CorePlusExtensionsParser  
(was: replace CorePlusExtensionsParser with CorePlusQueries[PlusSandbox]Parser)

> factor out CorePlusQueriesParser from CorePlusExtensionsParser
> --
>
> Key: LUCENE-6893
> URL: https://issues.apache.org/jira/browse/LUCENE-6893
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6893.patch
>
>
> proposed change (patch against trunk to follow):
>  * replace {{CorePlusExtensionsParser}} with {{CorePlusQueriesParser}} and 
> {{CorePlusQueriesPlusSandboxParser}} (the latter extending the former).
> motivation:
>  * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which uses 
> {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
>  * we wish to use or inherit from {{CorePlusExtensionsParser}} but not pull 
> in any sandbox code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6893) factor out CorePlusQueriesParser from CorePlusExtensionsParser

2015-11-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6893:

Description: 
proposed change (patch against trunk to follow):

before:
 * {{CorePlusExtensionsParser}} extends {{CoreParser}}
 * {{CorePlusExtensionsParser}} uses {{(LikeThis|Boosting)QueryBuilder}} 
which uses {{org.apache.lucene.queries.(BoostingQuery|mlt.MoreLikeThisQuery)}}
 * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which 
uses {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}

after:
 * {{CorePlusQueriesParser}} extends {{CoreParser}}
 * {{CorePlusQueriesParser}} uses {{(LikeThis|Boosting)QueryBuilder}} which 
uses {{org.apache.lucene.queries.(BoostingQuery|mlt.MoreLikeThisQuery)}}
 * {{CorePlusExtensionsParser}} extends {{CorePlusQueriesParser}}
 * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which 
uses {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}

motivation:
 * we wish to use or inherit from a {{CorePlus...Parser}} and use 
{{org.apache.lucene.queries.\*}} but not pull in any 
{{org.apache.lucene.sandbox.\*}} code


  was:
proposed change (patch against trunk to follow):
 * replace {{CorePlusExtensionsParser}} with {{CorePlusQueriesParser}} and 
{{CorePlusQueriesPlusSandboxParser}} (the latter extending the former).

motivation:
 * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which uses 
{{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
 * we wish to use or inherit from {{CorePlusExtensionsParser}} but not pull in 
any sandbox code



> factor out CorePlusQueriesParser from CorePlusExtensionsParser
> --
>
> Key: LUCENE-6893
> URL: https://issues.apache.org/jira/browse/LUCENE-6893
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6893.patch
>
>
> proposed change (patch against trunk to follow):
> before:
>  * {{CorePlusExtensionsParser}} extends {{CoreParser}}
>  * {{CorePlusExtensionsParser}} uses {{(LikeThis|Boosting)QueryBuilder}} 
> which uses {{org.apache.lucene.queries.(BoostingQuery|mlt.MoreLikeThisQuery)}}
>  * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which 
> uses {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
> 
> after:
>  * {{CorePlusQueriesParser}} extends {{CoreParser}}
>  * {{CorePlusQueriesParser}} uses {{(LikeThis|Boosting)QueryBuilder}} 
> which uses {{org.apache.lucene.queries.(BoostingQuery|mlt.MoreLikeThisQuery)}}
>  * {{CorePlusExtensionsParser}} extends {{CorePlusQueriesParser}}
>  * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which 
> uses {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
> 
> motivation:
>  * we wish to use or inherit from a {{CorePlus...Parser}} and use 
> {{org.apache.lucene.queries.\*}} but not pull in any 
> {{org.apache.lucene.sandbox.\*}} code
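
A skeleton sketch of the proposed hierarchy; only the public CoreParser constructor signature is relied on, and builder registration is left as comments since those details belong to the patch:

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.queryparser.xml.CoreParser;

class CorePlusQueriesParser extends CoreParser {
  CorePlusQueriesParser(Analyzer analyzer, QueryParser parser) {
    super(analyzer, parser);
    // registers LikeThisQueryBuilder and BoostingQueryBuilder (org.apache.lucene.queries.*)
  }
}

class CorePlusExtensionsParser extends CorePlusQueriesParser {
  CorePlusExtensionsParser(Analyzer analyzer, QueryParser parser) {
    super(analyzer, parser);
    // additionally registers FuzzyLikeThisQueryBuilder (org.apache.lucene.sandbox.queries.*)
  }
}
{code}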



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6893) factor out CorePlusQueriesParser from CorePlusExtensionsParser

2015-11-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6893:

Attachment: LUCENE-6893.patch

attaching revised/simplified proposed patch against trunk

> factor out CorePlusQueriesParser from CorePlusExtensionsParser
> --
>
> Key: LUCENE-6893
> URL: https://issues.apache.org/jira/browse/LUCENE-6893
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6893.patch, LUCENE-6893.patch
>
>
> proposed change (patch against trunk to follow):
> before:
>  * {{CorePlusExtensionsParser}} extends {{CoreParser}}
>  * {{CorePlusExtensionsParser}} uses {{(LikeThis|Boosting)QueryBuilder}} 
> which uses {{org.apache.lucene.queries.(BoostingQuery|mlt.MoreLikeThisQuery)}}
>  * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which 
> uses {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
> 
> after:
>  * {{CorePlusQueriesParser}} extends {{CoreParser}}
>  * {{CorePlusQueriesParser}} uses {{(LikeThis|Boosting)QueryBuilder}} 
> which uses {{org.apache.lucene.queries.(BoostingQuery|mlt.MoreLikeThisQuery)}}
>  * {{CorePlusExtensionsParser}} extends {{CorePlusQueriesParser}}
>  * {{CorePlusExtensionsParser}} uses {{FuzzyLikeThisQueryBuilder}} which 
> uses {{org.apache.lucene.sandbox.queries.(FuzzyLikeThisQuery|SlowFuzzyQuery)}}
> 
> motivation:
>  * we wish to use or inherit from a {{CorePlus...Parser}} and use 
> {{org.apache.lucene.queries.\*}} but not pull in any 
> {{org.apache.lucene.sandbox.\*}} code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8287) TrieLongField and TrieDoubleField should override toNativeType

2015-11-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-8287:
-

Assignee: Christine Poerschke

> TrieLongField and TrieDoubleField should override toNativeType
> --
>
> Key: SOLR-8287
> URL: https://issues.apache.org/jira/browse/SOLR-8287
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Christine Poerschke
> Attachments: SOLR-8287.patch, SOLR-8287.patch
>
>
> Although the TrieIntField and TrieFloatField override the toNativeType() 
> method, the TrieLongField and TrieDoubleField do not do so. 
> This method is called during atomic updates by the AtomicUpdateDocumentMerger 
> for the "set" operation.
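
A minimal sketch of the kind of coercion being requested for the long case, written as a standalone helper rather than the actual field type override; it is not the committed patch:

{code}
public final class ToNativeLongSketch {
  // Coerce an incoming atomic-update value to a native Long, analogous to the
  // toNativeType overrides that TrieIntField and TrieFloatField already provide.
  // Decimal strings such as "42.0" are not handled in this sketch.
  public static Object toNativeLong(Object val) {
    if (val instanceof Number) {
      return ((Number) val).longValue();
    }
    if (val instanceof CharSequence) {
      return Long.parseLong(val.toString());
    }
    return val; // leave anything else untouched
  }
}
{code}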



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8288) DistributedUpdateProcessor#doFinish should explicitly check and ensure it does not try to put itself into LIR.

2015-11-13 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8288:
-

 Summary: DistributedUpdateProcessor#doFinish should explicitly 
check and ensure it does not try to put itself into LIR.
 Key: SOLR-8288
 URL: https://issues.apache.org/jira/browse/SOLR-8288
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller


We have to be careful about this because currently, something like a commit is 
sent over http even to the local node and if that fails for some reason, the 
leader might try and LIR itself.
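
An illustrative sketch of the guard being described; the method and argument names are assumptions, not the actual DistributedUpdateProcessor code:

{code}
public final class LirSelfCheck {
  // When a forwarded request (e.g. a commit sent over HTTP to the local core) fails,
  // only ask for leader-initiated recovery if the failed replica is not this leader itself.
  public static boolean shouldRequestRecovery(String leaderCoreNodeName,
                                              String failedReplicaCoreNodeName) {
    return !leaderCoreNodeName.equals(failedReplicaCoreNodeName);
  }
}
{code}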



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6883) Getting exception _t.si (No such file or directory)

2015-11-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004018#comment-15004018
 ] 

Michael McCandless commented on LUCENE-6883:


OK, the only issue I could think of was when reading 3.x segments, but that's 
not happening here.

bq. If it is fixed, on which version we can check?

Not sure if it's fixed since we don't quite know what you are hitting.

But it's entirely possible whatever you're hitting is fixed: there have been 
many fixes since 4.2.  Why not upgrade to the latest (5.3.1 as of now)?

> Getting exception _t.si (No such file or directory)
> ---
>
> Key: LUCENE-6883
> URL: https://issues.apache.org/jira/browse/LUCENE-6883
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
>Reporter: Tejas Jethva
>
> We are getting the following exception when we are trying to update the cache. 
> Following are two scenarios in which we get this error:
> scenario 1:
> 2015-11-03 06:45:18,213 [main] ERROR java.io.FileNotFoundException: 
> /app/cache/index-persecurity/PERSECURITY_INDEX-QCH/_mb.si (No such file or 
> directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
>   at 
> org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65)
>   .
>   
> scenario 2:
> java.io.FileNotFoundException: 
> /app/1.0.5_loadtest/index-persecurity/PERSECURITY_INDEX-ITQ/_t.si (No such 
> file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:347)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:284)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:247)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:169)
>   ..
>   
>   
> What might be the possible reasons for this?  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


