[jira] [Commented] (SOLR-10780) A new collection property autoRebalanceLeaders

2017-05-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030666#comment-16030666
 ] 

Erick Erickson commented on SOLR-10780:
---

As the original author of all that REBALANCELEADERS stuff, I'll be happy to see 
it go away; it's always been arcane ;)

The intent of the original was to prevent 100s of leaders being on the same 
Solr instance in cases where there were many, many shards spread across many 
machines and each machine would host a replica of each shard. In that case 
measurable performance degradation happened because, even though the extra work 
for the leader wasn't onerous, the cumulative extra work was.

And since there is no use for BALANCESHARDUNIQUE other than preferredLeader 
(that I know of), this and the REBALANCELEADERS API commands are overkill.

I think the intent of this functionality can be implemented much more simply. 
When a replica comes up, and after it becomes active, it could examine the 
state of the collection, and if it notes "too many" leaders on a particular 
node, simply request that it become the leader of its shard.

By waiting until it's active, we should avoid conditions where a replica wants 
to become the leader but hasn't synced.
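A rough sketch of that decision, with every name here invented for illustration 
(none of this is existing Solr code; the real policy would need ZooKeeper state 
and the throttling discussed below):

```java
import java.util.Map;

public class LeaderCheckSketch {
    // Hypothetical policy: a replica that has just gone ACTIVE volunteers to
    // lead its shard if the current leader's node already hosts more leaders
    // than its fair share.
    static boolean shouldRequestLeadership(Map<String, Integer> leadersPerNode,
                                           String currentLeaderNode,
                                           int numNodes, int numShards) {
        int fairShare = Math.max(1, numShards / numNodes);
        return leadersPerNode.getOrDefault(currentLeaderNode, 0) > fairShare;
    }

    public static void main(String[] args) {
        // After a full-cluster restart, node1 came up first and owns every leader.
        Map<String, Integer> leadersPerNode = Map.of("node1", 3, "node2", 0, "node3", 0);
        System.out.println(shouldRequestLeadership(leadersPerNode, "node1", 3, 3)); // true
        System.out.println(shouldRequestLeadership(leadersPerNode, "node2", 3, 3)); // false
    }
}
```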

I think this is quite legitimate as part of the general autoscaling effort; the 
time is now.

Let's say I have 100 nodes, 100 shards, and 100 replicas per shard; that is, 
each node hosts one replica of each shard. Now I go around and start up all the 
nodes. How do we avoid unnecessary leadership changes? Maybe throttle this 
somehow?

Or what if two replicas of the same shard request leadership at the same time?

Or is this the Overseer's job? Something like a "balancing thread" that notices 
this condition and sends "you should be leader" messages to particular 
replicas. Or something that has a global view of what's happening cluster wide 
(as yet undefined)...


> A new collection property autoRebalanceLeaders 
> ---
>
> Key: SOLR-10780
> URL: https://issues.apache.org/jira/browse/SOLR-10780
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Noble Paul
>
> In SolrCloud, the first replica to start in a given shard becomes the 
> leader of that shard. This is a problem during cluster restarts: the first 
> node to start holds all the leaders, and that node ends up being very heavily 
> loaded. The solution we have today is to invoke a REBALANCELEADERS command 
> explicitly so that the system ends up with a uniform distribution of leaders 
> across nodes. This is a manual operation, and we can make the system do it 
> automatically. 
> So each collection can have an {{autoRebalanceLeaders}} flag. If it is set 
> to true, whenever a replica becomes {{ACTIVE}} in a shard, a 
> {{REBALANCELEADERS}} is invoked for that shard.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030660#comment-16030660
 ] 

Hrishikesh Gadre commented on SOLR-10777:
-

Actually, SOLR-9536 is fixed in the 6.3 release, so it is quite possible that 
you are hitting it in 6.2.1. But I am a bit surprised to see it happening in 
6.5.1 as well.

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png, SOLR-10777.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> During a Solr backup, SnapShooter tries to automatically delete the old 
> backups saved on disk, but deletion fails because the directory names may 
> not match the pattern expected by OldBackupDirectory: 
> ```
>   private static final Pattern dirNamePattern = 
> Pattern.compile("^snapshot[.](.*)$");
> ```
> In that case the following code throws an NPE:
> ```
> if (obd.getTimestamp().isPresent()) {
>   dirs.add(obd);
> }
> ```
> Also, OldBackupDirectory's pattern match should use matcher.matches() to 
> fetch the group(1) value.






[jira] [Updated] (SOLR-10780) A new collection property autoRebalanceLeaders

2017-05-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-10780:
--
Description: 
In SolrCloud, the first replica to start in a given shard becomes the 
leader of that shard. This is a problem during cluster restarts: the first node 
to start holds all the leaders, and that node ends up being very heavily loaded. 
The solution we have today is to invoke a REBALANCELEADERS command explicitly 
so that the system ends up with a uniform distribution of leaders across 
nodes. This is a manual operation, and we can make the system do it 
automatically. 
So each collection can have an {{autoRebalanceLeaders}} flag. If it is set to 
true, whenever a replica becomes {{ACTIVE}} in a shard, a {{REBALANCELEADERS}} 
is invoked for that shard.


  was:
In SolrCloud, the first replica to start in a given shard becomes the 
leader of that shard. This is a problem during cluster restarts: the first node 
to start holds all the leaders, and that node ends up being very heavily loaded. 
The solution we have today is to invoke a REBALANCELEADERS command explicitly 
so that the system ends up with a uniform distribution of leaders across 
nodes. This is a manual operation, and we can make the system do it 
automatically. 
So each collection can have an {{autoRebalanceLeaders}} flag. If it is set to 
true, whenever a replica becomes {{ACTIVE}} in a shard, a {{REBALANCELEADERS}} 
is invoked for that collection.



> A new collection property autoRebalanceLeaders 
> ---
>
> Key: SOLR-10780
> URL: https://issues.apache.org/jira/browse/SOLR-10780
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Noble Paul
>
> In SolrCloud, the first replica to start in a given shard becomes the 
> leader of that shard. This is a problem during cluster restarts: the first 
> node to start holds all the leaders, and that node ends up being very heavily 
> loaded. The solution we have today is to invoke a REBALANCELEADERS command 
> explicitly so that the system ends up with a uniform distribution of leaders 
> across nodes. This is a manual operation, and we can make the system do it 
> automatically. 
> So each collection can have an {{autoRebalanceLeaders}} flag. If it is set 
> to true, whenever a replica becomes {{ACTIVE}} in a shard, a 
> {{REBALANCELEADERS}} is invoked for that shard.






[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030652#comment-16030652
 ] 

Hrishikesh Gadre commented on SOLR-10777:
-

bq. Since the regular expression has explicitly defined the start (^) and end 
($), I think find() and matches() should return identical results 

Note - this is applicable only in this case since we are invoking find() only 
once on a given Matcher instance. 
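The caveat matters because find() is stateful: a second call resumes scanning 
after the previous match, while matches() always tests the whole input. A small 
standalone illustration (a hypothetical demo class, using the same anchored 
pattern):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RepeatedFind {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("^snapshot[.](.*)$");
        Matcher m = p.matcher("snapshot.20170530");

        System.out.println(m.find());    // true: scans from the start of the input
        System.out.println(m.find());    // false: the next scan starts after the previous match
        m.reset();
        System.out.println(m.matches()); // true: matches() always tests the entire input
        System.out.println(m.matches()); // true: repeated calls keep returning true
    }
}
```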

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png, SOLR-10777.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> During a Solr backup, SnapShooter tries to automatically delete the old 
> backups saved on disk, but deletion fails because the directory names may 
> not match the pattern expected by OldBackupDirectory: 
> ```
>   private static final Pattern dirNamePattern = 
> Pattern.compile("^snapshot[.](.*)$");
> ```
> In that case the following code throws an NPE:
> ```
> if (obd.getTimestamp().isPresent()) {
>   dirs.add(obd);
> }
> ```
> Also, OldBackupDirectory's pattern match should use matcher.matches() to 
> fetch the group(1) value.






[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030648#comment-16030648
 ] 

Hrishikesh Gadre commented on SOLR-10777:
-

[~singh.nilesh]

bq. also, OldBackupDirectory's pattern match should use matcher.matches() to 
fetch the group(1) value.

Since the regular expression has explicitly defined the start (^) and end ($), 
I think find() and matches() should return identical results (which I verified 
by writing a small program). Also refer to 
https://stackoverflow.com/questions/4450045/difference-between-matches-and-find-in-java-regex
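A small program along those lines (a sketch, not necessarily the exact program 
mentioned) shows the two calls agreeing for this anchored pattern:

```java
import java.util.regex.Pattern;

public class FindVsMatches {
    public static void main(String[] args) {
        // The anchored pattern from OldBackupDirectory
        Pattern dirNamePattern = Pattern.compile("^snapshot[.](.*)$");

        // Because of ^ and $, a single find() can only succeed by matching
        // the entire input, exactly like matches().
        String[] inputs = {"snapshot.20170530", "index.20170530", "snapshot."};
        for (String s : inputs) {
            boolean f = dirNamePattern.matcher(s).find();
            boolean m = dirNamePattern.matcher(s).matches();
            System.out.println(s + ": find=" + f + " matches=" + m);
        }
    }
}
```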

BTW can you post the actual stack trace of the error you found? I am a bit 
surprised, since we fixed an NPE in the same codebase a few months ago. Refer 
to SOLR-9536.

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png, SOLR-10777.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> During a Solr backup, SnapShooter tries to automatically delete the old 
> backups saved on disk, but deletion fails because the directory names may 
> not match the pattern expected by OldBackupDirectory: 
> ```
>   private static final Pattern dirNamePattern = 
> Pattern.compile("^snapshot[.](.*)$");
> ```
> In that case the following code throws an NPE:
> ```
> if (obd.getTimestamp().isPresent()) {
>   dirs.add(obd);
> }
> ```
> Also, OldBackupDirectory's pattern match should use matcher.matches() to 
> fetch the group(1) value.






[jira] [Created] (SOLR-10780) A new collection property autoRebalanceLeaders

2017-05-30 Thread Noble Paul (JIRA)
Noble Paul created SOLR-10780:
-

 Summary: A new collection property autoRebalanceLeaders 
 Key: SOLR-10780
 URL: https://issues.apache.org/jira/browse/SOLR-10780
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Noble Paul


In SolrCloud, the first replica to start in a given shard becomes the 
leader of that shard. This is a problem during cluster restarts: the first node 
to start holds all the leaders, and that node ends up being very heavily loaded. 
The solution we have today is to invoke a REBALANCELEADERS command explicitly 
so that the system ends up with a uniform distribution of leaders across 
nodes. This is a manual operation, and we can make the system do it 
automatically. 
So each collection can have an {{autoRebalanceLeaders}} flag. If it is set to 
true, whenever a replica becomes {{ACTIVE}} in a shard, a {{REBALANCELEADERS}} 
is invoked for that collection.







[jira] [Commented] (LUCENE-6917) Deprecate and rename NumericField/RangeQuery to LegacyNumeric

2017-05-30 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030636#comment-16030636
 ] 

Trejkaz commented on LUCENE-6917:
-

Did this ever hit the dev list? Google can't seem to find it. I'm wondering if 
a fix was ever found. Not because I'm seeing it when building Lucene, but 
because I'm seeing the same error when trying to build our own stuff. A clean 
does seem to stop it sometimes, but for some people it doesn't, so I'm trying 
to figure out whether anyone knows the actual cause. It's a bit of a vague 
error message.


> Deprecate and rename NumericField/RangeQuery to LegacyNumeric
> -
>
> Key: LUCENE-6917
> URL: https://issues.apache.org/jira/browse/LUCENE-6917
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.0
>
> Attachments: LUCENE-6917-broken-javadocs.patch, LUCENE-6917.patch, 
> LUCENE-6917.patch, LUCENE-6917.patch
>
>
> DimensionalValues seems to be better across the board (indexing time, 
> indexing size, search-speed, search-time heap required) than NumericField, at 
> least in my testing so far.
> I think for 6.0 we should move {{IntField}}, {{LongField}}, {{FloatField}}, 
> {{DoubleField}} and {{NumericRangeQuery}} to {{backward-codecs}}, and rename 
> with {{Legacy}} prefix?






[jira] [Updated] (SOLR-10778) Ant precommit task WARNINGS about unclosed resources

2017-05-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10778:
--
Attachment: notclosed.txt

File with warnings about unclosed objects (and probably other WARNINGS too) 
from precommit.

> Ant precommit task WARNINGS about unclosed resources
> 
>
> Key: SOLR-10778
> URL: https://issues.apache.org/jira/browse/SOLR-10778
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.6
>Reporter: Andrew Musselman
>Priority: Minor
> Attachments: notclosed.txt
>
>
> During precommit we are seeing lots of warnings about resources that aren't 
> being closed, which, based on team discussion, could pose real problems. Log 
> snippet for example:
> [mkdir] Created dir: 
> /var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
>  [ecj-lint] Compiling 419 source files to 
> /var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
>  [ecj-lint] --
>  [ecj-lint] 1. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
>  (at line 920)
>  [ecj-lint] new LBHttpSolrClient(httpSolrClientBuilder, httpClient, 
> solrServerUrls) :
>  [ecj-lint] 
> ^^^
>  [ecj-lint] Resource leak: '' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 2. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/StreamingBinaryResponseParser.java
>  (at line 49)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 3. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
>  (at line 90)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec();
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
>  [ecj-lint] --
>  [ecj-lint] 4. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
>  (at line 113)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed






[jira] [Created] (SOLR-10779) JavaBinCodec's close/finish pattern is trappy

2017-05-30 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-10779:
-

 Summary: JavaBinCodec's close/finish pattern is trappy
 Key: SOLR-10779
 URL: https://issues.apache.org/jira/browse/SOLR-10779
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


Having the marshal() code call finish(), which in turn calls close(), is 
trappy. The marshal code is not robust anyway: if there's an exception before 
the try block, it will not close the resource.

Sub-task of SOLR-10778






[jira] [Commented] (SOLR-10778) Ant precommit task WARNINGS about unclosed resources

2017-05-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030619#comment-16030619
 ] 

Erick Erickson commented on SOLR-10778:
---

I was looking at this a little today too, and it's tricky. Some things should 
be changed, I think, but it'll be a case-by-case sort of thing. By my count 
there are 386 warnings in the code base about failing to close something; see 
the file I'll attach in a minute.

For instance, there's this pattern:

  JavaBinCodec codec = new JavaBinCodec();
  codec.marshal(nl, os);

which generates one of these warnings, since JavaBinCodec implements Closeable.

But "codec.marshal()" calls "codec.finish()", which in turn calls 
"codec.close()", which tests a flag and conditionally calls "codec.finish()" 
again; finish() sets the flag that's checked in "codec.close()" so it doesn't 
get into an infinite loop. No, I don't want to clarify that
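Reduced to a minimal standalone sketch (this is NOT the actual JavaBinCodec 
source; the class and field names are invented, and only the call/flag 
structure is kept):

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;

public class CodecSketch implements Closeable {
    private boolean alreadyFinished = false;

    public void marshal(Object nl, OutputStream os) throws IOException {
        try {
            // writeVal(nl) would happen here
        } finally {
            finish(); // side effect: marshal() also closes the codec
        }
    }

    public void finish() throws IOException {
        alreadyFinished = true; // the flag close() checks to break the cycle
        close();
    }

    @Override
    public void close() throws IOException {
        if (!alreadyFinished) {
            finish(); // without the flag this would recurse forever
        }
    }
}
```

Calling close() on an unused codec goes close() -> finish() -> close(), and 
only the flag stops the second close() from looping.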

I think having marshal() have this side effect is trappy. I'd much rather see a 
try-with-resources:

  try (JavaBinCodec codec = new JavaBinCodec()) {
    codec.marshal(nl, os);
  }

and have the marshal() code just do its thing, with the code in finish() moved 
to close(). The marshal() code is not robust anyway:

  public void marshal(Object nl, OutputStream os) throws IOException {
    initWrite(os);
    try {
      writeVal(nl);
    } finally {
      finish();
    }
  }

If an error is thrown from initWrite(), finish() (and thus close()) won't be 
called, and this _would_ be a resource leak.
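A self-contained illustration of that leak, with the class, field, and flag 
names invented for the demo (only the marshal() shape mirrors the code quoted 
above):

```java
import java.io.Closeable;
import java.io.IOException;

public class MarshalLeakSketch implements Closeable {
    boolean closed = false;
    private final boolean failInit;

    MarshalLeakSketch(boolean failInit) { this.failInit = failInit; }

    private void initWrite() throws IOException {
        if (failInit) throw new IOException("init failed");
    }

    // Mirrors the marshal() shape quoted above: initWrite() sits outside
    // the try, so a failure there skips the finally block entirely.
    public void marshal() throws IOException {
        initWrite();
        try {
            // writeVal(...) would go here
        } finally {
            close();
        }
    }

    @Override public void close() { closed = true; }

    public static void main(String[] args) {
        MarshalLeakSketch bad = new MarshalLeakSketch(true);
        try { bad.marshal(); } catch (IOException expected) { }
        System.out.println("closed after failing marshal: " + bad.closed); // false: leaked

        // try-with-resources closes it no matter where the exception occurs
        try (MarshalLeakSketch good = new MarshalLeakSketch(true)) {
            good.marshal();
        } catch (IOException expected) { }
    }
}
```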

I was about to write that one of the classes that has this a lot is 
IndexWriter, and constructs like "RefCounted iw = 
solrCoreState.getIndexWriter(core);" scare me. But looking more closely, almost 
all of those warnings are in test code that constructs an IndexWriter but 
doesn't close it, so it'd probably be safe to use try-with-resources (or 
similar) there. This wouldn't affect running Solr since it's test code, so 
it's largely cosmetic.

***

Since there are so many warnings, and since (I'd think) some classes will lend 
themselves to cleanup and some won't, maybe the best thing to do would be to 
create sub-JIRAs for bite-sized chunks and link them here. That way we can 
have a sanity check for classes that people _know_ are tricky.

I'll kick one off for JavaBinCodec.





> Ant precommit task WARNINGS about unclosed resources
> 
>
> Key: SOLR-10778
> URL: https://issues.apache.org/jira/browse/SOLR-10778
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.6
>Reporter: Andrew Musselman
>Priority: Minor
>
> During precommit we are seeing lots of warnings about resources that aren't 
> being closed, which, based on team discussion, could pose real problems. Log 
> snippet for example:
> [mkdir] Created dir: 
> /var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
>  [ecj-lint] Compiling 419 source files to 
> /var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
>  [ecj-lint] --
>  [ecj-lint] 1. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
>  (at line 920)
>  [ecj-lint] new LBHttpSolrClient(httpSolrClientBuilder, httpClient, 
> solrServerUrls) :
>  [ecj-lint] 
> ^^^
>  [ecj-lint] Resource leak: '' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 2. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/StreamingBinaryResponseParser.java
>  (at line 49)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 3. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
>  (at line 90)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec();
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
>  [ecj-lint] --
>  [ecj-lint] 4. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
>  (at line 113)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed




[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030614#comment-16030614
 ] 

Nilesh Singh commented on SOLR-10777:
-

https://cwiki.apache.org/confluence/display/solr/Making+and+Restoring+Backups

When executing the command 
"http://localhost:8983/solr/gettingstarted/replication?command=backup;" there 
won't be any name, so the snapshotName will be null.

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png, SOLR-10777.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> During a Solr backup, SnapShooter tries to automatically delete the old 
> backups saved on disk, but deletion fails because the directory names may 
> not match the pattern expected by OldBackupDirectory: 
> ```
>   private static final Pattern dirNamePattern = 
> Pattern.compile("^snapshot[.](.*)$");
> ```
> In that case the following code throws an NPE:
> ```
> if (obd.getTimestamp().isPresent()) {
>   dirs.add(obd);
> }
> ```
> Also, OldBackupDirectory's pattern match should use matcher.matches() to 
> fetch the group(1) value.






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+171) - Build # 3627 - Still Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3627/
Java: 64bit/jdk-9-ea+171 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([599FDF61E293A50F:334DE00EBA7075C0]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=letter0:lett=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 39 more




Build Log:
[...truncated 1699 lines...]
   [junit4] JVM J1: stderr 

[jira] [Resolved] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-05-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10698.
---
   Resolution: Fixed
Fix Version/s: 6.7
   master (7.0)

> StreamHandler should allow connections to be closed early 
> --
>
> Key: SOLR-10698
> URL: https://issues.apache.org/jira/browse/SOLR-10698
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Fix For: master (7.0), 6.7
>
> Attachments: SOLR-10698.patch
>
>
> Before a stream is drained out, if we call close() we get an exception like 
> this:
> {code}
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:215)
> at
> org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:316)
> at
> org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:128)
> at
> org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at
> org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
> at sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:378)
> at sun.nio.cs.StreamDecoder.close(StreamDecoder.java:193)
> at java.io.InputStreamReader.close(InputStreamReader.java:199)
> at
> org.apache.solr.client.solrj.io.stream.JSONTupleStream.close(JSONTupleStream.java:91)
> at
> org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:186)
> {code}
> As quoted from 
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg130676.html the 
> problem seems to be that when we hit an exception, the /stream handler does 
> not close the stream.






[jira] [Commented] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-05-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030580#comment-16030580
 ] 

ASF subversion and git services commented on SOLR-10698:


Commit f9f64a9574d6da18a7395a22ce73c40438899c11 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9f64a9 ]

SOLR-10698: Fix precommit

(cherry picked from commit c71ce16)


> StreamHandler should allow connections to be closed early 
> --






[jira] [Commented] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-05-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030579#comment-16030579
 ] 

ASF subversion and git services commented on SOLR-10698:


Commit a8db3701e93bca385bf15058c1abdbfd786a88ae in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a8db370 ]

SOLR-10698: StreamHandler should allow connections to be closed early

(cherry picked from commit 02b1c8a)


> StreamHandler should allow connections to be closed early 
> --






Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Christian Moen
Hello,

Here's my +1
SUCCESS! [0:43:54.749827] (Linux server)

Best,
Christian

On Wed, May 31, 2017 at 3:07 AM Ishan Chattopadhyaya 
wrote:

> Please vote for release candidate 5 for Lucene/Solr 6.6.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>   https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
>
> Here's my +1
> SUCCESS! [1:23:31.105482]
>


[jira] [Updated] (SOLR-10757) cleanup/refactor/fix deprecated methods/constructors in CollectionAdminRequest

2017-05-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10757:

Attachment: SOLR-10757.patch

this took longer than i thought (because of how many tests were still using the 
deprecated methods, and how many places had "deferred" argument checks in 
getParams that i moved to the constructor because there are no longer "setter" 
methods) ... but i think it's good to go.

We should now be able to change a lot of state variables (that are only set in 
constructors) to be "final" -- but i'm going to leave that for another issue; 
this patch is huge enough.

> cleanup/refactor/fix deprecated methods/constructors in CollectionAdminRequest
> --
>
> Key: SOLR-10757
> URL: https://issues.apache.org/jira/browse/SOLR-10757
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10757.patch
>
>
> spinning off of SOLR-10755...
> * CollectionAdminRequest
> ** has roughly double the number of deprecations of all other solrj classes 
> combined
> ** many of the deprecated methods/constructors are still used in a lot of 
> places in tests
> ** in many cases the non-deprecated "constructor" versions aren't validating 
> the same way the deprecated setters do
> ** in at least one case i see obvious bugs in the non-deprecated methods (see 
> ForceLeader constructors)
> ** once many of these deprecated setters are removed, a lot of member 
> variables should become final
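The pattern described above, moving deferred getParams checks into the constructor so the fields can become final, looks roughly like this (a hypothetical request class for illustration, not the actual SolrJ code):

```java
import java.util.Objects;

// Hypothetical sketch: validate required arguments at construction time
// instead of deferring the check to getParams(). With no setters left,
// the state variables can be declared final.
class CreateCollectionSketch {
    private final String collectionName; // final: set once, validated up front
    private final int numShards;

    CreateCollectionSketch(String collectionName, int numShards) {
        this.collectionName = Objects.requireNonNull(collectionName,
                "collectionName must not be null");
        if (numShards <= 0) {
            throw new IllegalArgumentException("numShards must be > 0");
        }
        this.numShards = numShards;
    }

    String getCollectionName() { return collectionName; }
    int getNumShards() { return numShards; }
}
```

Failing in the constructor means a bad request can never exist in a half-built state, which is what the deprecated setter-then-validate style allowed.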






Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Alexandre Rafalovitch
+1 SUCCESS! [3:10:19.763661] (Mac book pro)
My two DIH example rewrites also work, tested by hand.

Regards,
   Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 30 May 2017 at 20:50, Steve Rowe  wrote:
> Yeah, I can’t see any “Too many open files” messages in your log.
>
> From your log:
>
> -
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ReplaceNodeTest 
> -Dtests.method=test -Dtests.seed=545A8F7F914CAA60 -Dtests.slow=true 
> -Dtests.locale=zh-HK -Dtests.timezone=Indian/Cocos -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 65.7s | ReplaceNodeTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([545A8F7F914CAA60:DC0EB0A53FB0C798]:0)
>[junit4]>at 
> org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:79)
> -
>
> I tried again, and ^^ doesn't reproduce on my macbook pro.
>
> Looks like this is a (roughly) 10-second timeout (200 x 50ms) - maybe the 
> operation is just taking longer than that? - could you try increasing the 200 
> below to a larger number?  maybe also check for other statuses than just 
> COMPLETED and FAILED?  (there is also RUNNING, SUBMITTED, and NOT_FOUND):
>
> -
> new CollectionAdminRequest.ReplaceNode(node2bdecommissioned, 
>     emptyNode).processAsync("000", cloudClient);
> CollectionAdminRequest.RequestStatus requestStatus = 
>     CollectionAdminRequest.requestStatus("000");
> boolean success = false;
> for (int i = 0; i < 200; i++) {
>   CollectionAdminRequest.RequestStatusResponse rsp = 
>       requestStatus.process(cloudClient);
>   if (rsp.getRequestStatus() == RequestStatusState.COMPLETED) {
>     success = true;
>     break;
>   }
>   assertFalse(rsp.getRequestStatus() == RequestStatusState.FAILED);
>   Thread.sleep(50);
> }
> assertTrue(success);
> -
>
> --
> Steve
> www.lucidworks.com
>
>> On May 30, 2017, at 5:58 PM, Mike Drob  wrote:
>>
>> Thanks, Steve.
>>
>>
>> I've uploaded a failure log to 
>> http://home.apache.org/~mdrob/lucene-solr_6_6/failure
>>
>> My ulimit settings are:
>> core file size  (blocks, -c) 0
>> data seg size   (kbytes, -d) unlimited
>>
>> file size   (blocks, -f) unlimited
>>
>> max locked memory   (kbytes, -l) unlimited
>>
>> max memory size (kbytes, -m) unlimited
>>
>> open files  (-n) 4096
>>
>> pipe size(512 bytes, -p) 1
>>
>> stack size  (kbytes, -s) 8192
>>
>> cpu time   (seconds, -t) unlimited
>>
>> max user processes  (-u) 709
>>
>> virtual memory  (kbytes, -v) unlimited
>>
>>
>>
>> Do you think that open files limit is too low? I didn't see any evidence in 
>> the log of that (could easily have missed it though).
>>
>>
>> On Tue, May 30, 2017 at 4:32 PM, Steve Rowe  wrote:
>> Hi Mike,
>>
>> > On May 30, 2017, at 5:07 PM, Mike Drob  wrote:
>> >
>> > Was able to reproduce on both the unpacked RC and on branch_6_6 in the 
>> > repo with
>> >
>> > ant test -Dtestcase=ReplaceNodeTest -Dtests.seed=545A8F7F914CAA60 
>> > -Dtests.asserts=true
>> >
>> > My environment:
>> >
>> > Apache Ant(TM) version 1.10.1 compiled on February 2 2017
>> >
>> > java version "1.8.0_131"
>> >
>> > Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
>> >
>> > Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>> >
>> > Mac OS X 10.12.4
>>
>> The repro line above does not reproduce for me:
>> * on Linux on branch_6_6 (Debian 8.8, Oracle JDK 1.8.0_77, Ant 1.9.4);
>> * on MacOS 10.12.5, Oracle JDK 1.8.0_112, Ant 1.9.6.
>>
>> Mike, can you provide a failure log?
>>
>> I went looking for Jenkins failures of this test, and the only public ones I 
>> see are from Policeman Jenkins on OSX, all of them caused by "Too many open 
>> files".
>>
>> On my local Jenkins, I see ObjectTracker failures for this test (an 
>> unreleased object) on branch_6x, but the most recent was from mid-February.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>>
>
>
>

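The hardening Steve suggests above (a longer deadline, and treating RUNNING, SUBMITTED, and NOT_FOUND as "keep waiting" rather than checking only COMPLETED and FAILED) could be sketched as follows; the enum and helper are illustrative, not the actual test code:

```java
import java.util.function.Supplier;

// Illustrative polling helper: succeed on COMPLETED, fail fast on FAILED,
// and keep waiting through the transient states until a deadline passes.
class StatusPollSketch {
    enum State { COMPLETED, FAILED, RUNNING, SUBMITTED, NOT_FOUND }

    static boolean waitForCompletion(Supplier<State> status, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            State s = status.get();
            if (s == State.COMPLETED) return true;
            if (s == State.FAILED) return false; // fail fast
            // RUNNING, SUBMITTED, NOT_FOUND: transient, keep polling
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // deadline exceeded
    }
}
```

A deadline expressed in milliseconds also makes the timeout explicit, instead of being implied by an iteration count times a sleep interval.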



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_131) - Build # 19745 - Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19745/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testIntersects
 { seed=[2F1D36888A3DFF33:6399A63BE17573A5]}

Error Message:
Should have matched I#0:Pt(x=121.0,y=-42.0) Q:Pt(x=121.0,y=-42.0)

Stack Trace:
java.lang.AssertionError: Should have matched I#0:Pt(x=121.0,y=-42.0) 
Q:Pt(x=121.0,y=-42.0)
at 
__randomizedtesting.SeedInfo.seed([2F1D36888A3DFF33:6399A63BE17573A5]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.fail(RandomSpatialOpFuzzyPrefixTreeTest.java:399)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.doTest(RandomSpatialOpFuzzyPrefixTreeTest.java:386)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testIntersects(RandomSpatialOpFuzzyPrefixTreeTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 8983 lines...]
   [junit4] Suite: 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest
   [junit4]   2> maijs 31, 2017 7:50:32 AM 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest setupGrid
   [junit4]   2> INFO: 

[jira] [Created] (SOLR-10778) Ant precommit task WARNINGS about unclosed resources

2017-05-30 Thread Andrew Musselman (JIRA)
Andrew Musselman created SOLR-10778:
---

 Summary: Ant precommit task WARNINGS about unclosed resources
 Key: SOLR-10778
 URL: https://issues.apache.org/jira/browse/SOLR-10778
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Affects Versions: 4.6
Reporter: Andrew Musselman
Priority: Minor


During precommit we are seeing lots of warnings about resources that aren't 
being closed, which, based on chat amongst the team, could pose real problems. 
Example log snippet:

[mkdir] Created dir: 
/var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
 [ecj-lint] Compiling 419 source files to 
/var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
 (at line 920)
 [ecj-lint] new LBHttpSolrClient(httpSolrClientBuilder, httpClient, 
solrServerUrls) :
 [ecj-lint] 
^^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/StreamingBinaryResponseParser.java
 (at line 49)
 [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
 [ecj-lint]  ^
 [ecj-lint] Resource leak: 'codec' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
 (at line 90)
 [ecj-lint] JavaBinCodec codec = new JavaBinCodec();
 [ecj-lint]  ^
 [ecj-lint] Resource leak: 'codec' is never closed
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
 (at line 113)
 [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
 [ecj-lint]  ^
 [ecj-lint] Resource leak: 'codec' is never closed






[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030484#comment-16030484
 ] 

Varun Thacker commented on SOLR-10777:
--

Hi Nilesh,

So how were you taking the backups?

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png, SOLR-10777.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> During a Solr backup, SnapShooter automatically tries to delete the old 
> backups saved on disk, but deletion fails when the file names do not match 
> the pattern expected by OldBackUpDirectory:
> ```
>   private static final Pattern dirNamePattern = 
> Pattern.compile("^snapshot[.](.*)$");
> ```
> In that case the following code throws an NPE:
> ```
> if (obd.getTimestamp().isPresent()) {
>   dirs.add(obd);
> }
> ```
> Also, OldBackUpDirectory's pattern match should call matcher.matches() before 
> fetching the group(1) value.
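A minimal sketch of the matcher point above (a hypothetical helper, not the SnapShooter code itself): group(1) is only safe to read after matches() has succeeded, so names that don't fit the snapshot pattern must be skipped rather than dereferenced.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: extract the timestamp suffix from a backup directory
// name, returning Optional.empty() for names that don't match the pattern.
class SnapshotNameSketch {
    private static final Pattern DIR_NAME_PATTERN = Pattern.compile("^snapshot[.](.*)$");

    static Optional<String> timestampOf(String dirName) {
        Matcher m = DIR_NAME_PATTERN.matcher(dirName);
        // matches() must succeed before group(1) may be read; calling
        // group() on an unmatched Matcher throws IllegalStateException.
        if (m.matches()) {
            return Optional.of(m.group(1));
        }
        return Optional.empty();
    }
}
```

Returning an empty Optional for non-matching names lets the caller simply skip unexpected files instead of hitting an exception during cleanup.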






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_131) - Build # 3626 - Still Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3626/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([A32CF0EF417AE312:C9FECF80199933DD]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=letter0:lett=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 11596 lines...]
   [junit4] Suite: 

[jira] [Updated] (SOLR-10773) Add support for replica types in V2 API

2017-05-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10773:
-
Attachment: SOLR-10773.patch

> Add support for replica types in V2 API
> ---
>
> Key: SOLR-10773
> URL: https://issues.apache.org/jira/browse/SOLR-10773
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-10773.patch
>
>







[jira] [Commented] (LUCENE-7857) CharTokenizer-derived tokenizers and KeywordTokenizer emit multiple tokens when the max length is exceeded

2017-05-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030459#comment-16030459
 ] 

Steve Rowe commented on LUCENE-7857:


I agree with Robert.

See my answer to a question about why StandardTokenizer effectively splits 
tokens that are longer than maxTokenLength in this recent java-user mailing 
list thread: 
[https://lists.apache.org/thread.html/42af955be9522cff0d28b47d7fa723d90846ad011157503fcf687f99@%3Cjava-user.lucene.apache.org%3E].

The workaround I outlined on that thread would work here too: set 
maxTokenLength super-high, then use LengthFilter to remove tokens longer than 
what you want to keep.

> CharTokenizer-derived tokenizers and KeywordTokenizer emit multiple tokens 
> when the max length is exceeded
> --
>
> Key: LUCENE-7857
> URL: https://issues.apache.org/jira/browse/LUCENE-7857
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Assigning to myself to not lose track of it.
> LUCENE-7705 introduced the ability to define the allowable token length for 
> these tokenizers other than hard-code it to 255. It's always been the case 
> that when the hard-coded limit was exceeded, multiple tokens would be 
> emitted. However, the tests for LUCENE-7705 exposed a problem.
> Suppose the max length is 3 and the doc contains "letter". Two tokens are 
> emitted and indexed: "let" and "ter".
> Now suppose the search is for "lett". If the default operator is AND or 
> phrase queries are constructed the query fails since the tokens emitted are 
> "let" and "t". Only if the operator is OR is the document found, and even 
> then it won't be correct since searching for "lett" would match a document 
> indexed with "bett" because it would match on the bare "t".
> Proposal: 
> The remainder of the token should be ignored when maxTokenLen is exceeded.
> [~rcmuir][~steve_rowe][~tomasflobbe] comments? Again, this behavior was not 
> introduced by LUCENE-7705, it's just that it would be very hard to notice 
> with the default 255 char limit.
> I'm not quite sure why master generates a parsed query of:
> field:let field:t
> and 6x generates
> field:"let t"
> so the tests succeeded on master but not on 6x






[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030458#comment-16030458
 ] 

Nilesh Singh commented on SOLR-10777:
-

Thank You, I have added the patch.

> Replication Backup creation fails with NPE, while deleting the old backups
> --






[jira] [Updated] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilesh Singh updated SOLR-10777:

Attachment: SOLR-10777.patch

> Replication Backup creation fails with NPE, while deleting the old backups
> --






[jira] [Resolved] (SOLR-10752) replicationFactor default should be 0 if tlogReplicas is specified when creating a collection

2017-05-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-10752.
--
   Resolution: Fixed
 Assignee: Tomás Fernández Löbbe
Fix Version/s: master (7.0)

> replicationFactor default should be 0 if tlogReplicas is specified when 
> creating a collection
> -
>
> Key: SOLR-10752
> URL: https://issues.apache.org/jira/browse/SOLR-10752
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: SOLR-10752.patch, SOLR-10752.patch
>
>







[jira] [Commented] (SOLR-10752) replicationFactor default should be 0 if tlogReplicas is specified when creating a collection

2017-05-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030449#comment-16030449
 ] 

ASF subversion and git services commented on SOLR-10752:


Commit c824b097b4f2d2f8d7b9560ee258c3a2515fbcf0 in lucene-solr's branch 
refs/heads/master from [~tomasflobbe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c824b09 ]

SOLR-10752: replicationFactor default is 0 if tlogReplicas > 0 is specified


> replicationFactor default should be 0 if tlogReplicas is specified when 
> creating a collection
> -
>
> Key: SOLR-10752
> URL: https://issues.apache.org/jira/browse/SOLR-10752
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10752.patch, SOLR-10752.patch
>
>







[jira] [Updated] (SOLR-10752) replicationFactor default should be 0 if tlogReplicas is specified when creating a collection

2017-05-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10752:
-
Attachment: SOLR-10752.patch

> replicationFactor default should be 0 if tlogReplicas is specified when 
> creating a collection
> -
>
> Key: SOLR-10752
> URL: https://issues.apache.org/jira/browse/SOLR-10752
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10752.patch, SOLR-10752.patch
>
>







Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Steve Rowe
Yeah, I can’t see any “Too many open files” messages in your log.

From your log:

-
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ReplaceNodeTest 
-Dtests.method=test -Dtests.seed=545A8F7F914CAA60 -Dtests.slow=true 
-Dtests.locale=zh-HK -Dtests.timezone=Indian/Cocos -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 65.7s | ReplaceNodeTest.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([545A8F7F914CAA60:DC0EB0A53FB0C798]:0)
   [junit4]>at 
org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:79)
-

I tried again, and ^^ doesn't reproduce on my macbook pro.

Looks like this is (roughly) a 10-second timeout (200 x 50ms) - maybe the 
operation is just taking longer than that? Could you try increasing the 200 
below to a larger number? Maybe also check for statuses other than just 
COMPLETED and FAILED? (There are also RUNNING, SUBMITTED, and NOT_FOUND):

-
67: new CollectionAdminRequest.ReplaceNode(node2bdecommissioned, 
emptyNode).processAsync("000", cloudClient);
68: CollectionAdminRequest.RequestStatus requestStatus = 
CollectionAdminRequest.requestStatus("000");
69: boolean success = false;
70: for (int i = 0; i < 200; i++) {
71:   CollectionAdminRequest.RequestStatusResponse rsp = 
requestStatus.process(cloudClient);
72:   if (rsp.getRequestStatus() == RequestStatusState.COMPLETED) {
73: success = true;
74: break;
75:   }
76:   assertFalse(rsp.getRequestStatus() == RequestStatusState.FAILED);
77:  Thread.sleep(50);
78: }
79: assertTrue(success);
-
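A sketch of what the loosened loop might look like (the enum here is a local stand-in mirroring SolrJ's RequestStatusState, and the iteration and sleep bounds are arbitrary choices, not the real test's values):

```java
import java.util.function.Supplier;

public class StatusPoller {
    // Local stand-in for SolrJ's RequestStatusState values.
    enum RequestStatusState { COMPLETED, FAILED, RUNNING, SUBMITTED, NOT_FOUND }

    // Polls until COMPLETED; fails fast on FAILED or NOT_FOUND; keeps
    // waiting on RUNNING/SUBMITTED. maxIters * sleepMs bounds the total wait.
    static boolean waitForCompletion(Supplier<RequestStatusState> statusCheck,
                                     int maxIters, long sleepMs) {
        for (int i = 0; i < maxIters; i++) {
            RequestStatusState state = statusCheck.get();
            if (state == RequestStatusState.COMPLETED) {
                return true;
            }
            if (state == RequestStatusState.FAILED
                    || state == RequestStatusState.NOT_FOUND) {
                return false;
            }
            try {
                Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // timed out while still RUNNING/SUBMITTED
    }
}
```

In the test itself the supplier would wrap requestStatus.process(cloudClient).getRequestStatus(), with maxIters raised well above 200.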

--
Steve
www.lucidworks.com

> On May 30, 2017, at 5:58 PM, Mike Drob  wrote:
> 
> Thanks, Steve.
> 
> 
> I've uploaded a failure log to 
> http://home.apache.org/~mdrob/lucene-solr_6_6/failure
> 
> My ulimit settings are:
> core file size  (blocks, -c) 0
> data seg size   (kbytes, -d) unlimited
> file size   (blocks, -f) unlimited
> max locked memory   (kbytes, -l) unlimited
> max memory size (kbytes, -m) unlimited
> open files  (-n) 4096
> pipe size   (512 bytes, -p) 1
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 709
> virtual memory  (kbytes, -v) unlimited
> 
> 
> 
> Do you think that open files limit is too low? I didn't see any evidence in 
> the log of that (could easily have missed it though).
> 
> 
> On Tue, May 30, 2017 at 4:32 PM, Steve Rowe  wrote:
> Hi Mike,
> 
> > On May 30, 2017, at 5:07 PM, Mike Drob  wrote:
> >
> > Was able to reproduce on both the unpacked RC and on branch_6_6 in the repo 
> > with
> >
> > ant test -Dtestcase=ReplaceNodeTest -Dtests.seed=545A8F7F914CAA60 
> > -Dtests.asserts=true
> >
> > My environment:
> >
> > Apache Ant(TM) version 1.10.1 compiled on February 2 2017
> >
> > java version "1.8.0_131"
> >
> > Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> >
> > Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> >
> > Mac OS X 10.12.4
> 
> The repro line above does not reproduce for me:
> * on Linux on branch_6_6 (Debian 8.8, Oracle JDK 1.8.0_77, Ant 1.9.4);
> * on MacOS 10.12.5, Oracle JDK 1.8.0_112, Ant 1.9.6.
> 
> Mike, can you provide a failure log?
> 
> I went looking for Jenkins failures of this test, and the only public ones I 
> see are from Policeman Jenkins on OSX, all of them caused by "Too many open 
> files".
> 
> On my local Jenkins, I see ObjectTracker failures for this test (an 
> unreleased object) on branch_6x, but the most recent was from mid-February.
> 
> --
> Steve
> www.lucidworks.com
> 
> 





Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Tomas Fernandez Lobbe
+1

SUCCESS! [0:41:27.779418]

> On May 30, 2017, at 11:07 AM, Ishan Chattopadhyaya  wrote:
> 
> Please vote for release candidate 5 for Lucene/Solr 6.6.0
> 
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
>  
> 
> 
> You can run the smoke tester directly with this command:
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
>  
> 
> 
> Here's my +1
> SUCCESS! [1:23:31.105482]



[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030414#comment-16030414
 ] 

Tomás Fernández Löbbe commented on SOLR-10777:
--

Hi [~singh.nil...@hotmail.com], this wiki page explains how you can contribute 
your code: 
https://wiki.apache.org/solr/HowToContribute#Contributing_Code_.28Features.2C_Bug_Fixes.2C_Tests.2C_etc29

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Updated] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilesh Singh updated SOLR-10777:

Attachment: Screen Shot 2017-05-30 at 7.54.18 PM.png
Screen Shot 2017-05-30 at 7.53.50 PM.png

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
> Attachments: Screen Shot 2017-05-30 at 7.53.50 PM.png, Screen Shot 
> 2017-05-30 at 7.54.18 PM.png
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Issue Comment Deleted] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilesh Singh updated SOLR-10777:

Comment: was deleted

(was: 20m)

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030383#comment-16030383
 ] 

Nilesh Singh commented on SOLR-10777:
-

20m

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Updated] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilesh Singh updated SOLR-10777:

Labels:   (was: beginner)

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Commented] (LUCENE-7857) CharTokenizer-derived tokenizers and KeywordTokenizer emit multiple tokens when the max length is exceeded

2017-05-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030373#comment-16030373
 ] 

Robert Muir commented on LUCENE-7857:
-

my opinion: behavior should be consistent with StandardTokenizer & co.

I don't think we should make heroic efforts to do great things with too-long 
tokens. If someone wants a maxTokenLen of 3 or something, then I think it's better 
to look at n-grams for that case.
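For reference, the n-gram alternative indexes every overlapping window of the requested length instead of chopping the token once; a plain-Java sketch of the idea (not Lucene's NGramTokenizer):

```java
import java.util.ArrayList;
import java.util.List;

public class NGramSketch {
    // All overlapping n-grams of a token; "letter" with n=3 gives
    // let, ett, tte, ter - so a query term like "lett" can still match
    // through its own grams, unlike the split-at-maxLen behavior.
    static List<String> ngrams(String token, int n) {
        List<String> out = new ArrayList<>();
        if (token.length() < n) {
            out.add(token);
            return out;
        }
        for (int i = 0; i + n <= token.length(); i++) {
            out.add(token.substring(i, i + n));
        }
        return out;
    }
}
```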

> CharTokenizer-derived tokenizers and KeywordTokenizer emit multiple tokens 
> when the max length is exceeded
> --
>
> Key: LUCENE-7857
> URL: https://issues.apache.org/jira/browse/LUCENE-7857
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Assigning to myself to not lose track of it.
> LUCENE-7705 introduced the ability to define the allowable token length for 
> these tokenizers other than hard-code it to 255. It's always been the case 
> that when the hard-coded limit was exceeded, multiple tokens would be 
> emitted. However, the tests for LUCENE-7705 exposed a problem.
> Suppose the max length is 3 and the doc contains "letter". Two tokens are 
> emitted and indexed: "let" and "ter".
> Now suppose the search is for "lett". If the default operator is AND or 
> phrase queries are constructed the query fails since the tokens emitted are 
> "let" and "t". Only if the operator is OR is the document found, and even 
> then it won't be correct since searching for "lett" would match a document 
> indexed with "bett" because it would match on the bare "t".
> Proposal: 
> The remainder of the token should be ignored when maxTokenLen is exceeded.
> [~rcmuir][~steve_rowe][~tomasflobbe] comments? Again, this behavior was not 
> introduced by LUCENE-7705, it's just that it would be very hard to notice 
> with the default 255 char limit.
> I'm not quite sure why master generates a parsed query of:
> field:let field:t
> and 6x generates
> field:"let t"
> so the tests succeeded on master but not on 6x






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1316 - Still Unstable

2017-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1316/

3 tests failed.
FAILED:  
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testIntersects
 { seed=[415F3AEF8B5C61C2:DFD033CD426E2BC0]}

Error Message:
Should have matched I#0:Pt(x=18.0,y=53.0) Q:Pt(x=18.0,y=53.0)

Stack Trace:
java.lang.AssertionError: Should have matched I#0:Pt(x=18.0,y=53.0) 
Q:Pt(x=18.0,y=53.0)
at 
__randomizedtesting.SeedInfo.seed([415F3AEF8B5C61C2:DFD033CD426E2BC0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.fail(RandomSpatialOpFuzzyPrefixTreeTest.java:399)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.doTest(RandomSpatialOpFuzzyPrefixTreeTest.java:386)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpFuzzyPrefixTreeTest.testIntersects(RandomSpatialOpFuzzyPrefixTreeTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:33709/solr/test_col: Async exception during distributed 
update: Error from server at 
http://127.0.0.1:52623/solr/test_col_shard2_replica_n1: Server Error
request: 

[jira] [Updated] (SOLR-10752) replicationFactor default should be 0 if tlogReplicas is specified when creating a collection

2017-05-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10752:
-
Attachment: SOLR-10752.patch

> replicationFactor default should be 0 if tlogReplicas is specified when 
> creating a collection
> -
>
> Key: SOLR-10752
> URL: https://issues.apache.org/jira/browse/SOLR-10752
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10752.patch
>
>







[jira] [Commented] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030293#comment-16030293
 ] 

Nilesh Singh commented on SOLR-10777:
-

I have fixed the issue with Backup file/directory deletion and pattern 
matching. Kindly guide me to the process to submit this fix. Thank you.

> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
>  Labels: beginner
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Updated] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilesh Singh updated SOLR-10777:

Description: 
During a Solr replication backup, SnapShooter automatically tries to delete old 
backups saved on disk, but deletion fails when a file name does not match the 
pattern expected by OldBackUpDirectory:

```
private static final Pattern dirNamePattern = Pattern.compile("^snapshot[.](.*)$");
```
In that case the following code throws an NPE:

```
if (obd.getTimestamp().isPresent()) {
  dirs.add(obd);
}
```

Also, OldBackUpDirectory's pattern match should call matcher.matches() before 
fetching the group(1) value.


  was:
During a Solr replication backup, SnapShooter automatically tries to delete old 
backups saved on disk, but deletion fails when a file name does not match the 
pattern expected by OldBackUpDirectory:

```
private static final Pattern dirNamePattern = Pattern.compile("^snapshot[.](.*)$");
```
In that case the following code throws an NPE:

```
if (obd.getTimestamp().isPresent()) {
  dirs.add(obd);
}
```

Also, OldBackUpDirectory's pattern match should call matcher.matches() before 
fetching the group(1) value.


> Replication Backup creation fails with NPE, while deleting the old backups
> --
>
> Key: SOLR-10777
> URL: https://issues.apache.org/jira/browse/SOLR-10777
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, replication (java)
>Affects Versions: 6.2.1, 6.5.1
>Reporter: Nilesh Singh
>Priority: Minor
>  Labels: beginner
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>






[jira] [Created] (SOLR-10777) Replication Backup creation fails with NPE, while deleting the old backups

2017-05-30 Thread Nilesh Singh (JIRA)
Nilesh Singh created SOLR-10777:
---

 Summary: Replication Backup creation fails with NPE, while 
deleting the old backups
 Key: SOLR-10777
 URL: https://issues.apache.org/jira/browse/SOLR-10777
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Backup/Restore, replication (java)
Affects Versions: 6.5.1, 6.2.1
Reporter: Nilesh Singh
Priority: Minor


During a Solr replication backup, SnapShooter automatically tries to delete old 
backups saved on disk, but deletion fails when a file name does not match the 
pattern expected by OldBackUpDirectory:

```
private static final Pattern dirNamePattern = Pattern.compile("^snapshot[.](.*)$");
```
In that case the following code throws an NPE:

```
if (obd.getTimestamp().isPresent()) {
  dirs.add(obd);
}
```

Also, OldBackUpDirectory's pattern match should call matcher.matches() before 
fetching the group(1) value.






[jira] [Created] (LUCENE-7857) CharTokenizer-derived tokenizers and KeywordTokenizer emit multiple tokens when the max length is exceeded

2017-05-30 Thread Erick Erickson (JIRA)
Erick Erickson created LUCENE-7857:
--

 Summary: CharTokenizer-derived tokenizers and KeywordTokenizer 
emit multiple tokens when the max length is exceeded
 Key: LUCENE-7857
 URL: https://issues.apache.org/jira/browse/LUCENE-7857
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Erick Erickson
Assignee: Erick Erickson


Assigning to myself to not lose track of it.

LUCENE-7705 introduced the ability to define the allowable token length for 
these tokenizers other than hard-code it to 255. It's always been the case that 
when the hard-coded limit was exceeded, multiple tokens would be emitted. 
However, the tests for LUCENE-7705 exposed a problem.

Suppose the max length is 3 and the doc contains "letter". Two tokens are 
emitted and indexed: "let" and "ter".

Now suppose the search is for "lett". If the default operator is AND or phrase 
queries are constructed the query fails since the tokens emitted are "let" and 
"t". Only if the operator is OR is the document found, and even then it won't 
be correct since searching for "lett" would match a document indexed with 
"bett" because it would match on the bare "t".

Proposal: 

The remainder of the token should be ignored when maxTokenLen is exceeded.

[~rcmuir][~steve_rowe][~tomasflobbe] comments? Again, this behavior was not 
introduced by LUCENE-7705, it's just that it would be very hard to notice with 
the default 255 char limit.

I'm not quite sure why master generates a parsed query of:
field:let field:t
and 6x generates
field:"let t"
so the tests succeeded on master but not on 6x
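The two behaviors under discussion can be sketched in a few lines (illustrative only; the real logic lives in the tokenizers' incrementToken implementations):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MaxTokenLenDemo {
    // Current behavior, roughly: an over-long token is emitted as
    // consecutive chunks, so "letter" with maxLen=3 becomes "let", "ter".
    static List<String> splitAtMaxLen(String token, int maxLen) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < token.length(); i += maxLen) {
            out.add(token.substring(i, Math.min(i + maxLen, token.length())));
        }
        return out;
    }

    // Proposed behavior: keep the first maxLen chars and ignore the rest.
    static List<String> truncateAtMaxLen(String token, int maxLen) {
        return Arrays.asList(token.length() <= maxLen
                ? token
                : token.substring(0, maxLen));
    }
}
```

With the proposal, searching for "lett" with maxLen=3 would reduce to "let" and no longer match on a stray bare "t".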






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_131) - Build # 3625 - Still Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3625/
Java: 64bit/jdk1.8.0_131 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([8134D2D4AC565D62:B6AF26CA949A80C6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.renewDelegationToken(TestSolrCloudWithDelegationTokens.java:131)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.verifyDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:317)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (PYLUCENE-37) Extended interfaces beyond first are ignored

2017-05-30 Thread Andi Vajda (JIRA)

[ 
https://issues.apache.org/jira/browse/PYLUCENE-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030244#comment-16030244
 ] 

Andi Vajda commented on PYLUCENE-37:


There does indeed seem to be something off with that code.
However, it would be helpful if you could include a small, trivial Java example
that triggers the bug you found, along with an explanation of what you'd expect
it to do instead. That helps me make sure there is no misunderstanding and also
helps with reproducing the bug.
Thanks!

> Extended interfaces beyond first are ignored
> 
>
> Key: PYLUCENE-37
> URL: https://issues.apache.org/jira/browse/PYLUCENE-37
> Project: PyLucene
>  Issue Type: Bug
>Reporter: Jesper Mattsson
>
> When generating the wrapper for a Java interface that extends more than one 
> other interface, only the first extended interface is used when generating the 
> C++ class.
> In cpp.header(), the code snippets:
> {code}
> if cls.isInterface():
> if interfaces:
> superCls = interfaces.pop(0)
> {code}
> and:
> {code}
> line(out, indent, 'class %s%s : public %s {',
>  _dll_export, cppname(names[-1]), absname(cppnames(superNames)))
> {code}
> are likely responsible.
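For reference, a minimal, self-contained Java example of the pattern the report describes -- an interface extending two others (all names here are hypothetical, not from the reporter's code). Per the report, JCC's generated C++ wrapper would only reflect the first extended interface (Readable below), even though in Java the combined interface is usable through both:

```java
// Hypothetical pattern that should trigger the reported wrapper bug:
// ReadWritable extends two interfaces, but only the first one would
// appear in the generated C++ class hierarchy.
interface Readable {
    String read();
}

interface Writable {
    void write(String s);
}

// Both superinterfaces should appear in the generated C++ wrapper.
interface ReadWritable extends Readable, Writable {
}

public class MultiExtendDemo implements ReadWritable {
    private String value = "";

    @Override
    public String read() { return value; }

    @Override
    public void write(String s) { value = s; }

    public static void main(String[] args) {
        ReadWritable rw = new MultiExtendDemo();
        rw.write("hello");
        // In Java, rw is usable through both superinterfaces.
        Readable r = rw;
        Writable w = rw;
        System.out.println(r.read()); // prints "hello"
    }
}
```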





[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030223#comment-16030223
 ] 

Erick Erickson commented on LUCENE-7705:


OK, I see what's happening. I noted earlier that the way this has always been 
implemented, multiple tokens are emitted when the max token length is exceeded. In 
this case, the token sent in the doc is "letter", so two tokens are emitted:
"let" and "ter", with positions incremented between them, I think.

The search is against "lett". For some reason, the parsed query in 6x is:
PhraseQuery(letter0:let t)

while in master it's:
letter0:let letter0:t

Even this is wrong; it just happens to succeed because the default operator is 
OR. So even though the tokens in the index do not include a bare "t", the doc is 
found by chance, not by design.

I think the right solution is to stop emitting tokens for a particular value 
once maxTokenLen is exceeded. I'll raise a new JIRA and we can debate it there.

This is _not_ a change in behavior resulting from the changes in this JIRA; the 
tests just expose something that's always been the case but nobody's noticed.
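The chunking behavior described above can be sketched in plain Java. This is an illustrative stand-in, not Lucene's actual CharTokenizer code; the method name and setup are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxTokenLenDemo {
    // Hypothetical sketch of the historical behavior: a token longer than
    // maxTokenLen is emitted as consecutive chunks rather than truncated.
    static List<String> chunk(String token, int maxTokenLen) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < token.length(); i += maxTokenLen) {
            out.add(token.substring(i, Math.min(i + maxTokenLen, token.length())));
        }
        return out;
    }

    public static void main(String[] args) {
        // "letter" with maxTokenLen=3 yields "let" and "ter", matching the
        // two-token behavior discussed above.
        System.out.println(chunk("letter", 3)); // [let, ter]
    }
}
```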

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Commented] (SOLR-10531) JMX cache beans names / properties changed in 6.4

2017-05-30 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030221#comment-16030221
 ] 

Walter Underwood commented on SOLR-10531:
-

I'm getting our people to open a case with New Relic about this problem. It 
might be in their code, but they are the ones who can figure all that out.

When that is done, I'll link this back to that case and reopen it.

> JMX cache beans names / properties changed in 6.4
> -
>
> Key: SOLR-10531
> URL: https://issues.apache.org/jira/browse/SOLR-10531
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4, 6.5
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Attachments: branch_6_3.png, branch_6x.png
>
>
> As reported by [~wunder]:
> {quote}
> New Relic displays the cache hit rate for each collection, showing the query 
> result cache, filter cache, and document cache.
> With 6.5.0, that page shows this message:
> New Relic recorded no Solr caches data for this application in the last 
> 24 hours
> If you think there should be Solr data here, first check to see that JMX 
> is enabled for your application server. If enabled, then please contact 
> support.
> {quote}






Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Mike Drob
Thanks, Steve.


I've uploaded a failure log to
http://home.apache.org/~mdrob/lucene-solr_6_6/failure

My ulimit settings are:
core file size  (blocks, -c) 0

data seg size   (kbytes, -d) unlimited

file size   (blocks, -f) unlimited

max locked memory   (kbytes, -l) unlimited

max memory size (kbytes, -m) unlimited

open files  (-n) 4096

pipe size(512 bytes, -p) 1

stack size  (kbytes, -s) 8192

cpu time   (seconds, -t) unlimited

max user processes  (-u) 709

virtual memory  (kbytes, -v) unlimited


Do you think that open files limit is too low? I didn't see any evidence in
the log of that (could easily have missed it though).

On Tue, May 30, 2017 at 4:32 PM, Steve Rowe  wrote:

> Hi Mike,
>
> > On May 30, 2017, at 5:07 PM, Mike Drob  wrote:
> >
> > Was able to reproduce on both the unpacked RC and on branch_6_6 in the
> repo with
> >
> > ant test -Dtestcase=ReplaceNodeTest -Dtests.seed=545A8F7F914CAA60
> -Dtests.asserts=true
> >
> > My environment:
> >
> > Apache Ant(TM) version 1.10.1 compiled on February 2 2017
> >
> > java version "1.8.0_131"
> >
> > Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> >
> > Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> >
> > Mac OS X 10.12.4
>
> The repro line above does not reproduce for me:
> * on Linux on branch_6_6 (Debian 8.8, Oracle JDK 1.8.0_77, Ant 1.9.4);
> * on MacOS 10.12.5, Oracle JDK 1.8.0_112, Ant 1.9.6.
>
> Mike, can you provide a failure log?
>
> I went looking for Jenkins failures of this test, and the only public ones
> I see are from Policeman Jenkins on OSX, all of them caused by "Too many
> open files".
>
> On my local Jenkins, I see ObjectTracker failures for this test (an
> unreleased object) on branch_6x, but the most recent was from mid-February.
>
> --
> Steve
> www.lucidworks.com
>
>


Re: [DISCUSS] Sandbox module dependencies

2017-05-30 Thread Robert Muir
On Tue, May 30, 2017 at 4:44 PM, Adrien Grand  wrote:
> The dependency convention, as I understand it, is that core may not depend
> on anything (either libraries or other modules) and modules may not depend
> on other modules. Then obviously we have exceptions for practical reasons,
> such as highlighting depending on the queries module, otherwise we could
> only highlight core queries. But we should keep treating them as exceptions
> and discuss introducing new dependencies on a case-by-case basis?
>

Going with your example, I think the exception is really an undesired
"practical reason" though. It's obviously an abstraction violation,
even if it's hard to fix. Ideally the right abstractions (such as
abstract class Query) would have the correct stuff (such as
extractTerms) so that this dependency wasn't necessary... this
separation would ensure that custom queries can be highlighted too. I
realize it's more complex than this, but I think it's a good example.

I think the analyzers module is another good one to think about; it's in a
better state than highlighting (for the most part -- I am sure there are
bad exceptions here too!!!): the abstractions
(Tokenizer/TokenFilter/Analyzer) are in core. test-framework has stuff
like MockAnalyzer that tests should really be using, since it's geared
at preventing bugs (versus concrete analyzers). So other modules don't
need to depend on lucene-analyzers-XYZ to work with the analysis
chain.




[jira] [Commented] (LUCENE-7845) spatial RPT optimization when query by point or common date range

2017-05-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030182#comment-16030182
 ] 

David Smiley commented on LUCENE-7845:
--

I've reproduced this and boiled it down to a trivial test that fails -- index a 
point, search by the same point.  There appears to be a bug in 
*PackedQuadPrefixTree*... I haven't pinpointed the exact bug yet, but I suspect 
it's a bad assumption about the mutability of the passed BytesRef, as I find 
some of its methods around it highly suspect.  I'm tempted to redo most of 
PackedQuadPrefixTree.PackedQuadCell so that it doesn't even retain/cache the 
byte[]; instead it can compute it only when needed -- in some cases it won't 
ever be needed (thus a performance win).
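The kind of mutability hazard suspected here can be shown with a self-contained sketch (hypothetical names; this is not the actual PackedQuadPrefixTree code): caching a reference to a caller-owned, reusable buffer goes stale when the caller reuses it, whereas copying (or computing on demand) is safe:

```java
import java.util.Arrays;

public class MutableRefDemo {
    // Stand-in for a BytesRef-like reusable buffer.
    static final class ByteBuf {
        byte[] bytes;
        ByteBuf(byte[] bytes) { this.bytes = bytes; }
    }

    static final class BadCell {
        private final ByteBuf cached; // BUG: retains the caller's mutable buffer
        BadCell(ByteBuf b) { this.cached = b; }
        byte[] term() { return cached.bytes; }
    }

    static final class GoodCell {
        private final byte[] copy; // FIX: copy (or compute lazily) instead
        GoodCell(ByteBuf b) { this.copy = Arrays.copyOf(b.bytes, b.bytes.length); }
        byte[] term() { return copy; }
    }

    public static void main(String[] args) {
        ByteBuf reused = new ByteBuf(new byte[]{1, 2});
        BadCell bad = new BadCell(reused);
        GoodCell good = new GoodCell(reused);
        reused.bytes = new byte[]{9, 9}; // caller reuses the buffer
        System.out.println(Arrays.toString(bad.term()));  // [9, 9] -- stale alias
        System.out.println(Arrays.toString(good.term())); // [1, 2] -- safe copy
    }
}
```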

> spatial RPT optimization when query by point or common date range
> -
>
> Key: LUCENE-7845
> URL: https://issues.apache.org/jira/browse/LUCENE-7845
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
> Attachments: LUCENE_7845_query_by_point_optimization.patch
>
>
> If the query to an RPT index is a 2D point, or, when using 
> NumberRangePrefixTreeStrategy / DateRangePrefixTree (Solr DateRangeField), if 
> the query is a grid cell (a common date range unit like some particular day), 
> then we can do some optimizations, especially if the data is pointsOnly.  If 
> the data is pointsOnly the strategy can return a TermQuery, if the data isn't 
> then we can at least tweak the prefixGridScanLevel.  This is motivated by two 
> scenarios:
> * indexing polygons and doing lookups by a point (AKA reverse geocoding)
> * indexing date instances and doing date range faceting. Solr's code for this 
> has a fast path for a TermQuery, although more is needed beyond this issue to 
> get there.
> _This development was funded by the Harvard Center for Geographic Analysis as 
> part of the HHypermap project_






Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Steve Rowe
+1

Lucene docs, changes and javadocs look good, and the smoke tester was happy: 
SUCCESS! [0:45:38.111474]

--
Steve
www.lucidworks.com

> On May 30, 2017, at 2:07 PM, Ishan Chattopadhyaya  wrote:
> 
> Please vote for release candidate 5 for Lucene/Solr 6.6.0
> 
> 
> The artifacts can be downloaded from:
> 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
> 
> You can run the smoke tester directly with this command:
> 
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
> 
> Here's my +1
> 
> SUCCESS! [1:23:31.105482]





Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Steve Rowe
Hi Mike,

> On May 30, 2017, at 5:07 PM, Mike Drob  wrote:
> 
> Was able to reproduce on both the unpacked RC and on branch_6_6 in the repo 
> with
> 
> ant test -Dtestcase=ReplaceNodeTest -Dtests.seed=545A8F7F914CAA60 
> -Dtests.asserts=true
> 
> My environment:
> 
> Apache Ant(TM) version 1.10.1 compiled on February 2 2017
> 
> java version "1.8.0_131"
> 
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> 
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> 
> Mac OS X 10.12.4

The repro line above does not reproduce for me:
* on Linux on branch_6_6 (Debian 8.8, Oracle JDK 1.8.0_77, Ant 1.9.4); 
* on MacOS 10.12.5, Oracle JDK 1.8.0_112, Ant 1.9.6.

Mike, can you provide a failure log?

I went looking for Jenkins failures of this test, and the only public ones I 
see are from Policeman Jenkins on OSX, all of them caused by "Too many open 
files".

On my local Jenkins, I see ObjectTracker failures for this test (an unreleased 
object) on branch_6x, but the most recent was from mid-February.

--
Steve
www.lucidworks.com



[JENKINS] Lucene-Solr-6.6-Linux (32bit/jdk1.8.0_131) - Build # 30 - Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/30/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([12F3BA3DD2546F12:9AA785E77CA802EA]:0)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12904 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerConcurrent
   [junit4]   2> Creating dataDir: 

Re: [jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread Erick Erickson
Ah, OK, that one fails for me too on 6x; I was trying on trunk. Thanks... Digging.

On Tue, May 30, 2017 at 12:44 PM, Steve Rowe (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030016#comment-16030016
>  ]
>
> Steve Rowe commented on LUCENE-7705:
> 
>
> My seed reproduces for me both on Linux and on my Macbook pro (Sierra 
> 10.12.5, Oracle JDK 1.8.0_112).  Note that the original failure was on 
> branch_6x (and that's where I repro'd).
>
>> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
>> max token length
>> -
>>
>> Key: LUCENE-7705
>> URL: https://issues.apache.org/jira/browse/LUCENE-7705
>> Project: Lucene - Core
>>  Issue Type: Improvement
>>Reporter: Amrit Sarkar
>>Assignee: Erick Erickson
>>Priority: Minor
>> Fix For: master (7.0), 6.7
>>
>> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
>> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
>> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>>
>>
>> SOLR-10186
>> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
>> limit for the CharTokenizer? In order to change this limit it requires that 
>> people copy/paste the incrementToken into some new class since 
>> incrementToken is final.
>> KeywordTokenizer can easily change the default (which is also 256 bytes), 
>> but to do so requires code rather than being able to configure it in the 
>> schema.
>> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
>> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
>> (Factories) it would take adding a c'tor to the base class in Lucene and 
>> using it in the factory.
>> Any objections?
>
>
>
>




Re: [VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Mike Drob
I tried running the smoke test command as specified but ran into test
failures:

   [junit4] Tests with failures [seed: 545A8F7F914CAA60]:

   [junit4]   - org.apache.solr.cloud.CdcrBootstrapTest.
testBootstrapWithSourceCluster

   [junit4]   - org.apache.solr.cloud.CdcrBootstrapTest.
testConvertClusterToCdcrAndBootstrap

   [junit4]   - org.apache.solr.cloud.CdcrBootstrapTest.
testBootstrapWithContinousIndexingOnSourceCluster

   [junit4]   - org.apache.solr.cloud.CdcrBootstrapTest (suite)

   [junit4]   - org.apache.solr.cloud.ReplaceNodeTest.test


Was able to reproduce on both the unpacked RC and on branch_6_6 in the repo
with

ant test -Dtestcase=ReplaceNodeTest -Dtests.seed=545A8F7F914CAA60
-Dtests.asserts=true


My environment:

Apache Ant(TM) version 1.10.1 compiled on February 2 2017


java version "1.8.0_131"

Java(TM) SE Runtime Environment (build 1.8.0_131-b11)

Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)


Mac OS X 10.12.4

On Tue, May 30, 2017 at 1:07 PM, Ishan Chattopadhyaya 
wrote:

> Please vote for release candidate 5 for Lucene/Solr 6.6.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241
>
>


[jira] [Commented] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-05-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030090#comment-16030090
 ] 

Erick Erickson commented on SOLR-10698:
---

Grabbed to back-port to 6.7

> StreamHandler should allow connections to be closed early 
> --
>
> Key: SOLR-10698
> URL: https://issues.apache.org/jira/browse/SOLR-10698
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Attachments: SOLR-10698.patch
>
>
> Before a stream is drained out, if we call close() we get an exception like 
> this:
> {code}
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:215)
> at
> org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:316)
> at
> org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:128)
> at
> org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at
> org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
> at sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:378)
> at sun.nio.cs.StreamDecoder.close(StreamDecoder.java:193)
> at java.io.InputStreamReader.close(InputStreamReader.java:199)
> at
> org.apache.solr.client.solrj.io.stream.JSONTupleStream.close(JSONTupleStream.java:91)
> at
> org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:186)
> {code}
> As quoted from 
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg130676.html, the 
> problem seems to be that when we hit an exception, the /stream handler does not 
> close the stream.
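The failure mode above -- ChunkedInputStream.close() reading the rest of an un-consumed body -- is why HTTP client code often drains a response stream before closing it. A self-contained sketch of that drain-before-close pattern, using an in-memory stream and hypothetical helper names (not the actual StreamHandler fix):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class DrainBeforeCloseDemo {
    // Read and discard any remaining bytes so a subsequent close() doesn't
    // have to consume an un-drained (e.g. chunked) body itself.
    static long drain(InputStream in) {
        byte[] buf = new byte[8192];
        long n = 0;
        int r;
        try {
            while ((r = in.read(buf)) != -1) {
                n += r;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3})) {
            int first = in.read();    // consume part of the stream...
            long skipped = drain(in); // ...then drain the rest before close
            System.out.println(first + " " + skipped); // 1 2
        }
    }
}
```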






[jira] [Assigned] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-05-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-10698:
-

Assignee: Erick Erickson

> StreamHandler should allow connections to be closed early 
> --
>
> Key: SOLR-10698
> URL: https://issues.apache.org/jira/browse/SOLR-10698
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Attachments: SOLR-10698.patch
>
>
> Before a stream is drained out, if we call close() we get an exception like 
> this:
> {code}
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:215)
> at
> org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:316)
> at
> org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:128)
> at
> org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at
> org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
> at sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:378)
> at sun.nio.cs.StreamDecoder.close(StreamDecoder.java:193)
> at java.io.InputStreamReader.close(InputStreamReader.java:199)
> at
> org.apache.solr.client.solrj.io.stream.JSONTupleStream.close(JSONTupleStream.java:91)
> at
> org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:186)
> {code}
> As quoted from 
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg130676.html, the 
> problem seems to be that when we hit an exception, the /stream handler does not 
> close the stream.






Re: [DISCUSS] Sandbox module dependencies

2017-05-30 Thread Adrien Grand
The dependency convention, as I understand it, is that core may not depend
on anything (either libraries or other modules) and modules may not depend
on other modules. Then obviously we have exceptions for practical reasons,
such as highlighting depending on the queries module, otherwise we could
only highlight core queries. But we should keep treating them as exceptions
and discuss introducing new dependencies on a case-by-case basis?

Le mar. 30 mai 2017 à 19:55, Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> a écrit :

> lucene-queryparser has an existing dependency on lucene-sandbox and it
> puzzled/confused me when I first came across it. Nothing depending on
> sandbox would seem clearer, i.e. it's sandboxed, isolated, contained, etc.
>
> Instead of comments/statements (in caps or otherwise) in the ivy.xml
> files, could "dependency conventions" be checked via tools (precommit)
> somehow?
>
> Perhaps a separate thread should be started for the question of what other
> "dependency conventions" we have and wish to keep going forward?
>
> Christine
>
> From: dev@lucene.apache.org At: 05/30/17 18:34:20
> To: dev@lucene.apache.org
> Subject: Re: [DISCUSS] Sandbox module dependencies
>
>
>
> On Tue, May 30, 2017 at 1:18 PM Andrzej Białecki <
> andrzej.biale...@lucidworks.com> wrote:
>
>> Hi,
>>
>> I’m inclined to say “both core and non-sandbox modules MUST NOT depend on
>> sandbox”. Otherwise it would mean that regular modules, of which users’
>> expectations are that they are mostly stable and usable, depend on
>> half-baked experimental stuff with no guarantees whatsoever concerning
>> stability and usability, which doesn’t make sense.
>>
>
> Definitely.  In retrospect my wording was a bit confusing.  I meant to say
> _of course_ lucene-core should not depend on lucene-sandbox.  There isn't
> clarity on other modules, though; hence my discussion proposal.
>
> ~ David
>
>
>> +1 to explicit mentioning changed dependencies.
>>
>> On 30 May 2017, at 15:14, David Smiley  wrote:
>>
>> Within the Lucene project (not talking about Solr), can Lucene modules
>> (other than Core) depend on our lucene-sandbox module?  I intuitively
>> assumed "no" but it's purely based on some notion in my head about the role
>> of sandbox and not because of any prior decision.  I figure that
>> functionality in sandbox is either half-baked and thus nothing (in Lucene)
>> should depend on that stuff, or the fully baked stuff would graduate to
>> some module that is appropriate.
>>
>> Conversely, I figure the sandbox module can depend on whatever is
>> convenient; it's half-baked code after all.
>>
>> Also, this should be an obvious statement but apparently it needs to be
>> said: if you are introducing or removing a dependency, then say so in your
>> JIRA issue!  One shouldn't need to read through a patch (or read a commit
>> diff) to be aware of changes in dependencies!  It's important enough to be
>> stated in the JIRA, even if it's a test dependency.  And it's not much to
>> ask of us.  Perhaps I should commit a statement in caps to this effect in
>> our ivy.xml files to help us not forget?
>>
>> What's prompting this is LUCENE-7838 in which the lucene-classification
>> module in 7.0 now depends on the sandbox module.  We *could* get into the
>> particulars of that here but I'd rather this thread just express some
>> general philosophy of approach to the dependencies of this module (and
>> perhaps in general), and then leave specific circumstances to JIRA issues.
>>
>> ~ David
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>>
>> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>


[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_131) - Build # 19743 - Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19743/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
org.apache.lucene.store.AlreadyClosedException: Already closed

Stack Trace:
javax.management.RuntimeMBeanException: 
org.apache.lucene.store.AlreadyClosedException: Already closed
at 
__randomizedtesting.SeedInfo.seed([D957F1C2B43A7261:578695F8D97B2A04]:0)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
org.apache.solr.core.TestJmxIntegration.testJmxRegistration(TestJmxIntegration.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-10745) Reliably create nodeAdded / nodeLost events

2017-05-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030023#comment-16030023
 ] 

Shalin Shekhar Mangar commented on SOLR-10745:
--

Thanks Andrzej. I made a pass through the code on the jira/SOLR-10745 branch.

A few comments:
# Should we write events to nodeLost, nodeAdded even when there are no 
corresponding (active) triggers? -- it seems wasteful, and worse, the data will 
keep growing with nothing to delete it.
# I agree with your choice of using persistent znodes for nodeLost events. Same 
for using ephemeral for nodeAdded because if the node goes away, the znode does 
too and we obviously never want to fire a nodeAdded trigger if the node itself 
is no more. I can't think of any cons to using ephemeral here except it is 
inconsistent with how we handle nodeLost events.
# While processing these events, i.e. before adding them to the tracking map, 
we must check actual state of the node at the time e.g. if a node came back, we 
don't want to add it to the NodeLostTrigger's tracking map
# Perhaps add some error handling code which ensures that we mark the node as 
live even if the multi op fails? I don't think it can fail, but I want to 
ensure that we fail to start Solr if we cannot create the live node.
# TriggerIntegrationTest can use SolrZkClient.clean() which does the same thing 
as deleteChildrenRecursively
# nodeNameVsTimeAdded is now a ConcurrentHashMap, but it is never accessed 
concurrently?
# I'd prefer that retrieving marker paths be done once during startup in 
ScheduledTrigger.run(). Doing that each time the trigger is run is redundant.
# minor nit - in testNodesEventRegistration, the code comment says "we want 
both triggers to fire" but the latch is initialized with 3.

> Reliably create nodeAdded / nodeLost events
> ---
>
> Key: SOLR-10745
> URL: https://issues.apache.org/jira/browse/SOLR-10745
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>  Labels: autoscaling
> Fix For: master (7.0)
>
>
> When the Overseer node goes down, then depending on the current phase of trigger 
> execution a {{nodeLost}} event may not have been generated. Similarly, when a 
> new node is added and the Overseer goes down before the trigger saves a 
> checkpoint (and before it produces a {{nodeAdded}} event), this event may never 
> be generated.
> The proposed solution would be to modify how nodeLost / nodeAdded information 
> is recorded in the cluster:
> * new nodes should do a ZK multi-write to both {{/live_nodes}} and 
> additionally to a predefined location, e.g. 
> {{/autoscaling/nodeAdded/}}. On the first execution of Trigger.run 
> in the new Overseer leader, it would check this location for new znodes, which 
> would indicate that a node has been added, and then generate a new event and 
> remove the znode that corresponds to the event.
> * node lost events should also be recorded to a predefined location, e.g. 
> {{/autoscaling/nodeLost/}}. Writing to this znode would be 
> attempted simultaneously by a few randomly selected nodes to make sure at 
> least one of them succeeds. On the first run of the new trigger instance (in 
> the new Overseer leader) event generation would follow the sequence described 
> above.
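The marker-znode flow described above can be sketched with an in-memory map standing in for ZooKeeper. This is an illustration only, under the assumption that markers are written once per node and consumed exactly once on the new Overseer leader's first trigger run; the class and method names are made up, not actual Solr APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the map stands in for the proposed
// /autoscaling/nodeAdded/ marker znodes; a real implementation would use
// ZooKeeper multi ops so the live node and the marker are written atomically.
public class MarkerSketch {
    final Map<String, Long> nodeAddedMarkers = new ConcurrentHashMap<>();
    private final List<String> firedEvents = new ArrayList<>();

    // A joining node would write /live_nodes and this marker in one multi op.
    // putIfAbsent makes a retried/duplicate write a no-op.
    void recordNodeAdded(String nodeName, long timeNs) {
        nodeAddedMarkers.putIfAbsent(nodeName, timeNs);
    }

    // On the first run in a new Overseer leader: scan the markers, generate
    // one event per marker, then remove the marker so it fires exactly once.
    List<String> firstRun() {
        for (Map.Entry<String, Long> e : nodeAddedMarkers.entrySet()) {
            firedEvents.add("nodeAdded:" + e.getKey());
            nodeAddedMarkers.remove(e.getKey());
        }
        return firedEvents;
    }

    public static void main(String[] args) {
        MarkerSketch s = new MarkerSketch();
        s.recordNodeAdded("node1", 1L);
        s.recordNodeAdded("node1", 2L); // duplicate write is a no-op
        s.recordNodeAdded("node2", 3L);
        List<String> events = s.firstRun();
        if (events.size() != 2) throw new AssertionError(events);
        if (!s.nodeAddedMarkers.isEmpty()) throw new AssertionError("markers not cleaned up");
        System.out.println(events);
    }
}
```

The cleanup-on-consume step is what addresses review comment #1 above: without a consumer, marker data would accumulate indefinitely.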



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030016#comment-16030016
 ] 

Steve Rowe commented on LUCENE-7705:


My seed reproduces for me both on Linux and on my MacBook Pro (Sierra 10.12.5, 
Oracle JDK 1.8.0_112).  Note that the original failure was on branch_6x (and 
that's where I repro'd).

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256-character 
> limit for the CharTokenizer? Changing this limit requires people to copy/paste 
> incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
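For illustration, a rough sketch of what a configurable max token length means for whitespace tokenization, using plain strings rather than Lucene's TokenStream API. All names here are hypothetical, and the chunking behavior is only an approximation of how CharTokenizer emits a token when its internal buffer fills.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: maxTokenLen is passed in (the proposed ctor/factory
// parameter) instead of being a hard-coded constant like 256.
public class MaxTokenLenSketch {
    static List<String> whitespaceTokenize(String text, int maxTokenLen) {
        List<String> tokens = new ArrayList<>();
        for (String raw : text.split("\\s+")) {
            if (raw.isEmpty()) continue;
            // A token longer than maxTokenLen is emitted in chunks, roughly
            // mimicking a tokenizer whose buffer holds maxTokenLen chars.
            for (int i = 0; i < raw.length(); i += maxTokenLen) {
                tokens.add(raw.substring(i, Math.min(raw.length(), i + maxTokenLen)));
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<String> t = whitespaceTokenize("solr abcdefgh", 3);
        if (!t.equals(List.of("sol", "r", "abc", "def", "gh"))) throw new AssertionError(t);
        System.out.println(t);
    }
}
```

With a schema-configurable parameter, the same split point would simply move from 256 to whatever the factory is given.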






[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029981#comment-16029981
 ] 

Erick Erickson commented on LUCENE-7705:


Hmmm, Steve's seed succeeds on my MacBook Pro, and I beasted this test 100 
times on my Mac Pro without failures. Not quite sure what to do next...

I wonder if SOLR-10562 is biting us again? Although that doesn't really make 
sense as the line before this failure finds the same document.

I'll beast this on a different system I have access to when I can (it's shared).

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256-character 
> limit for the CharTokenizer? Changing this limit requires people to copy/paste 
> incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks

2017-05-30 Thread Vivek Narang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029990#comment-16029990
 ] 

Vivek Narang commented on SOLR-10317:
-

Hello,
I have been down with illness for the last three days; I will resume my 
activity shortly. Regards

> Solr Nightly Benchmarks
> ---
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
>  Issue Type: Task
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
> Attachments: changes-lucene-20160907.json, 
> changes-solr-20160907.json, managed-schema, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, 
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf, 
> solrconfig.xml
>
>
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be 
> found here, https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr 
> nodes, both in SolrCloud and standalone mode, and record timing information 
> of various operations like indexing, querying, faceting, grouping, 
> replication etc.
> # It should be possible to run them either as an independent suite or as a 
> Jenkins job, and we should be able to report timings as graphs (Jenkins has 
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it 
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md 
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr 
> in some of these frameworks above. However, the benchmarks run are very 
> limited. Any of these can be a starting point, or a new framework can as well 
> be used. The motivation is to be able to cover every functionality of Solr 
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure 
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+171) - Build # 3624 - Still Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3624/
Java: 64bit/jdk-9-ea+171 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([6907001B426A0026:3D53F741A89D0E9]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=letter0:lett=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 39 more




Build Log:
[...truncated 13282 lines...]
   [junit4] Suite: 

[jira] [Commented] (LUCENE-7844) UnifiedHighlighter: simplify "maxPassages" input API

2017-05-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029975#comment-16029975
 ] 

David Smiley commented on LUCENE-7844:
--

bq. I'd be in favor of leaving the current parallel array approach and working 
towards a fieldOption approach. I can offer to help on that end!

+1  (separate issue)

> UnifiedHighlighter: simplify "maxPassages" input API
> 
>
> Key: LUCENE-7844
> URL: https://issues.apache.org/jira/browse/LUCENE-7844
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE_7844__UH_maxPassages_simplification.patch
>
>
> The "maxPassages" input to the UnifiedHighlighter can be provided as an array 
> to some of the public methods on UnifiedHighlighter.  When it's provided as 
> an array, each index corresponds to a field in a parallel array. I 
> think this is awkward, and furthermore it's inconsistent with the way this 
> highlighter customizes things on a per-field basis.  Instead, the parameter 
> can be a simple int default (not an array), and then there can be a protected 
> method like {{getMaxPassageCount(String field)}} that returns an Integer 
> which, when non-null, replaces the default value for that field.
> Aside from API simplicity and consistency, this will also remove some 
> annoying parallel array sorting going on.
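A minimal sketch of the proposed shape: one int default plus a protected per-field hook returning a nullable Integer. Only the {{getMaxPassageCount}} idea comes from the issue description; the class name, constructor, and the map standing in for a subclass override are made up for illustration and are not the actual UnifiedHighlighter API.

```java
import java.util.Map;

// Hypothetical sketch of the "default int + per-field override" pattern.
public class MaxPassagesSketch {
    private final int defaultMaxPassages;
    private final Map<String, Integer> perField; // stands in for subclass logic

    MaxPassagesSketch(int defaultMaxPassages, Map<String, Integer> perField) {
        this.defaultMaxPassages = defaultMaxPassages;
        this.perField = perField;
    }

    // In the proposal, subclasses override this; null means "use the default".
    protected Integer getMaxPassageCount(String field) {
        return perField.get(field);
    }

    // Resolution replaces the old parallel-array lookup (and the sorting
    // needed to keep the arrays aligned).
    int resolve(String field) {
        Integer override = getMaxPassageCount(field);
        return override != null ? override : defaultMaxPassages;
    }

    public static void main(String[] args) {
        MaxPassagesSketch h = new MaxPassagesSketch(1, Map.of("body", 3));
        if (h.resolve("body") != 3) throw new AssertionError();
        if (h.resolve("title") != 1) throw new AssertionError();
        System.out.println("body=" + h.resolve("body") + " title=" + h.resolve("title"));
    }
}
```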






[jira] [Commented] (LUCENE-7838) Add a knn classifier based on fuzzy like this

2017-05-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029939#comment-16029939
 ] 

David Smiley commented on LUCENE-7838:
--

bq. CHANGES.txt:
I guess I need to be clearer.  Why _isn't_ there a CHANGES.txt entry?  Beyond 
mentioning what the title says, mentioning the new dependency would be 
appropriate (required IMO).

bq. patch

Never mind; you were going the CTR path (which I welcome) instead of RTC.  CTR 
is outside our de facto norms of behavior here.  Maybe I should follow suit and 
we will try to change that :-)

> Add a knn classifier based on fuzzy like this
> -
>
> Key: LUCENE-7838
> URL: https://issues.apache.org/jira/browse/LUCENE-7838
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
>
> FLT mixes fuzzy matching and MLT; in the context of Lucene-based classification 
> it might be useful to add such fuzziness to a dedicated KNN classifier (based 
> on FLT queries).






[jira] [Commented] (SOLR-10770) Add date formatting to timeseries Streaming Expression

2017-05-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029930#comment-16029930
 ] 

ASF subversion and git services commented on SOLR-10770:


Commit 4608e7d03690755a05f53693519db6b06c6e9d7c in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4608e7d ]

SOLR-10770: Fix precommit


> Add date formatting to timeseries Streaming Expression
> --
>
> Key: SOLR-10770
> URL: https://issues.apache.org/jira/browse/SOLR-10770
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10770.patch
>
>
> Currently the timeseries Streaming Expression returns an entire date-time 
> string for each bucket, even for coarse-grained time gaps. It would be better 
> if we could specify an output template format for the date-time buckets.
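A sketch of the kind of template-driven bucket formatting proposed here, using java.time. The pattern string, method names, and the idea of taking an ISO date-time per bucket are examples and assumptions, not the actual Streaming Expression parameters.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch: reduce a full date-time bucket label to a coarse
// label via a user-supplied template, e.g. "yyyy-MM" for monthly buckets.
public class BucketFormatSketch {
    static String formatBucket(String isoDateTime, String template) {
        LocalDateTime dt = LocalDateTime.parse(isoDateTime);
        return dt.format(DateTimeFormatter.ofPattern(template));
    }

    public static void main(String[] args) {
        // A monthly time series needs only year and month per bucket.
        String label = formatBucket("2017-05-30T00:00:00", "yyyy-MM");
        if (!label.equals("2017-05")) throw new AssertionError(label);
        System.out.println(label);
    }
}
```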






[jira] [Commented] (SOLR-10770) Add date formatting to timeseries Streaming Expression

2017-05-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029929#comment-16029929
 ] 

ASF subversion and git services commented on SOLR-10770:


Commit 520762913af97f761377e03139499aeee31d2a9f in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5207629 ]

SOLR-10770: Add date formatting to timeseries Streaming Expression


> Add date formatting to timeseries Streaming Expression
> --
>
> Key: SOLR-10770
> URL: https://issues.apache.org/jira/browse/SOLR-10770
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10770.patch
>
>
> Currently the timeseries Streaming Expression returns an entire date-time 
> string for each bucket even for coarse grain time gaps. It would be better if 
> we could specify an output template format for the date-time buckets. 






[jira] [Commented] (LUCENE-7852) out-of-date Copyright year(s) on NOTICE.txt files?

2017-05-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029919#comment-16029919
 ] 

Steve Rowe commented on LUCENE-7852:


bq. And another question: if/how to mention this change in CHANGES.txt

I think if you do decide to include it in CHANGES.txt (which I don't think is 
necessary, but I've seen arguments that every change should get a CHANGES 
entry), it should go under "Other changes".  Either way should be fine, I think.

> out-of-date Copyright year(s) on NOTICE.txt files?
> --
>
> Key: LUCENE-7852
> URL: https://issues.apache.org/jira/browse/LUCENE-7852
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: LUCENE-7852.patch, LUCENE-7852.patch
>
>







[jira] [Updated] (SOLR-10770) Add date formatting to timeseries Streaming Expression

2017-05-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10770:
--
Attachment: SOLR-10770.patch

> Add date formatting to timeseries Streaming Expression
> --
>
> Key: SOLR-10770
> URL: https://issues.apache.org/jira/browse/SOLR-10770
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10770.patch
>
>
> Currently the timeseries Streaming Expression returns an entire date-time 
> string for each bucket even for coarse grain time gaps. It would be better if 
> we could specify an output template format for the date-time buckets. 






[jira] [Updated] (LUCENE-7852) out-of-date Copyright year(s) on NOTICE.txt files?

2017-05-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7852:

Attachment: LUCENE-7852.patch

Attaching revised patch with 2001 as the start year for Lucene.

And another question: if/how to mention this change in CHANGES.txt
* mention in the 'Upgrading from ...' notes?
* mention in 'Other changes'?
* mention in some other way?
* no need to mention?

> out-of-date Copyright year(s) on NOTICE.txt files?
> --
>
> Key: LUCENE-7852
> URL: https://issues.apache.org/jira/browse/LUCENE-7852
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: LUCENE-7852.patch, LUCENE-7852.patch
>
>







Re: Reference guide editing: newbie notes and kudos to Cassandra and Hoss:

2017-05-30 Thread Steve Rowe

> On May 30, 2017, at 2:04 PM, Erick Erickson  wrote:
> 
> 4> I don't think minor edits require a JIRA, larger ones maybe. Pretty
> much just like the CWiki I suppose.

One more thing I ran into while making the changes on SOLR-10758:

5> I included a new "Ref Guide" section under the 6.6 release in 
solr/CHANGES.txt, but this was premature, since: a) the ref guide release is 
still separate from the code release, so solr/CHANGES.txt isn’t the right place 
(yet); and b) even after we make the ref guide release part of the code 
release, it’s not clear that ref guide change notes belong in solr/CHANGES.txt, 
since e.g. javadocs-only changes never get mentioned there.  (Personally I 
think there should eventually be some form of CHANGES-like release notes for 
the ref guide.)

(I haven’t reverted my “Ref Guide” section addition to solr/CHANGES.txt because 
there is a 6.6 RC vote underway, and if it succeeds reversion will be 
pointless.)

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029872#comment-16029872
 ] 

Steve Rowe commented on LUCENE-7705:


Here's another reproducing failure 
[https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3612]: 

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestMaxTokenLenTokenizer -Dtests.method=testSingleFieldSameAnalyzers 
-Dtests.seed=FE4BE1CA39C9E0DA -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=vi-VN -Dtests.timezone=Australia/NSW -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.07s J1 | 
TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
query
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([FE4BE1CA39C9E0DA:9499DEA5612A3015]:0)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
   [junit4]>at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]> Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
   [junit4]>xml response was: 
   [junit4]> 
   [junit4]> 00
   [junit4]> 
   [junit4]>request was:q=letter0:lett=xml
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
{lowerCase0=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
 whiteSpace=Lucene50(blocksize=128), letter=BlockTreeOrds(blocksize=128), 
lowerCase=Lucene50(blocksize=128), 
unicodeWhiteSpace=BlockTreeOrds(blocksize=128), letter0=FST50, 
unicodeWhiteSpace0=FST50, 
keyword0=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
 
id=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
 keyword=Lucene50(blocksize=128), 
whiteSpace0=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
 docValues:{}, maxPointsInLeafNode=579, maxMBSortInHeap=5.430197160407458, 
sim=RandomSimilarity(queryNorm=false,coord=crazy): {}, locale=vi-VN, 
timezone=Australia/NSW
   [junit4]   2> NOTE: Linux 4.10.0-21-generic amd64/Oracle Corporation 
1.8.0_131 (64-bit)/cpus=8,threads=1,free=234332328,total=536870912
{noformat}

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
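The hard-coded limit the issue describes can be illustrated with a minimal, self-contained sketch. This is plain Java, not the actual Lucene CharTokenizer API (which is built on TokenStream/incrementToken); it only shows the core loop with the max token length as a constructor parameter instead of a 256-character constant:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a CharTokenizer-style whitespace tokenizer whose max token
// length is configurable rather than hard-coded. Illustration only; the
// real Lucene classes expose this through the TokenStream API.
public class MaxLenWhitespaceTokenizer {
    private final int maxTokenLen;

    public MaxLenWhitespaceTokenizer(int maxTokenLen) {
        this.maxTokenLen = maxTokenLen;
    }

    public List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (Character.isWhitespace(c)) {
                // Whitespace ends the current token, if any.
                if (current.length() > 0) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else if (current.length() < maxTokenLen) {
                current.append(c);
            } else {
                // Token reached the limit: emit it and start a new one,
                // mirroring how over-long tokens get split.
                tokens.add(current.toString());
                current.setLength(0);
                current.append(c);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        MaxLenWhitespaceTokenizer tok = new MaxLenWhitespaceTokenizer(4);
        System.out.println(tok.tokenize("abcdefgh ij")); // [abcd, efgh, ij]
    }
}
```

The point of the issue is exactly that `maxTokenLen` above should be settable from the schema (a factory attribute) rather than requiring a copy/paste of a final method.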






[VOTE] Release Lucene/Solr 6.6.0 RC5

2017-05-30 Thread Ishan Chattopadhyaya
Please vote for release candidate 5 for Lucene/Solr 6.6.0

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241

You can run the smoke tester directly with this command:

python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC5-rev5c7a7b65d2aa7ce5ec96458315c661a18b320241

Here's my +1

SUCCESS! [1:23:31.105482]


Reference guide editing: newbie notes and kudos to Cassandra and Hoss:

2017-05-30 Thread Erick Erickson
I've just made my first edit of the new AsciiDoc reference guide. It's
about 10,000 times easier than editing the CWiki. Every time I had to
edit the CWiki I put it off because I found it...painful. I don't have
that excuse any more.

So listen up folks ;). There's one less excuse for not editing the ref
guide as part of the normal process of submitting JIRAs.

Cassandra and Hoss have done yeoman's duty converting the old ref
guide to the new format; they deserve a _lot_ of applause.

Newbie notes:

0> The raw files live in solr>>solr-ref-guide>>src

1> These are now just files that are part of the project. Can be
edited/checked in with the patch. Hint. Hint. Hint.

1.1> Since they only live locally before being pushed, you can work on
them incrementally.

1.2> Checking them in follows the same rules as normal JIRAs, you need
to push the change in all relevant branches.

2> At least intelliJ has a nifty plugin that lets you see the effects
as you type. Highly recommended. I'd guess other IDEs have one too.

Minor nits about the plugin: it takes a bit to find the "soft wrap":
`View>>active editor>>use soft wraps`. The raw file must have the
focus to find this option. And I haven't yet found out how to split
the screen horizontally rather than vertically between the raw and
formatted view.

3> Solr>>solr-ref-guide>>meta-docs has instructions. See particularly
asciidoc-syntax.adoc and editing-tools.adoc your first time in.

4> I don't think minor edits require a JIRA, larger ones maybe. Pretty
much just like the CWiki I suppose.

Erick




[jira] [Created] (SOLR-10775) When using PULL replicas, it would be nice for the client to be able to know how old the current index is

2017-05-30 Thread JIRA
Tomás Fernández Löbbe created SOLR-10775:


 Summary: When using PULL replicas, it would be nice for the client 
to be able to know how old the current index is
 Key: SOLR-10775
 URL: https://issues.apache.org/jira/browse/SOLR-10775
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe


The goal is that, if the replica couldn't replicate from the leader within some 
time, the client should be able to accept/reject the response
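One shape the client-side decision could take, sketched in plain Java. This is purely hypothetical: no field carrying the index age exists in SolrJ responses today, so the `Duration` input stands in for whatever the replica would eventually report. Only the accept/reject decision the issue describes is shown:

```java
import java.time.Duration;

// Hypothetical client-side staleness check for what SOLR-10775 asks for.
// Assumes the response would somehow carry the age of the replica's index
// (modeled here as a plain Duration); no such field exists today.
public class StalenessCheck {
    private final Duration maxAcceptableAge;

    public StalenessCheck(Duration maxAcceptableAge) {
        this.maxAcceptableAge = maxAcceptableAge;
    }

    /** Returns true if a response whose index is {@code indexAge} old is acceptable. */
    public boolean accept(Duration indexAge) {
        return indexAge.compareTo(maxAcceptableAge) <= 0;
    }

    public static void main(String[] args) {
        StalenessCheck check = new StalenessCheck(Duration.ofSeconds(30));
        System.out.println(check.accept(Duration.ofSeconds(10))); // true
        System.out.println(check.accept(Duration.ofMinutes(5)));  // false
    }
}
```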






[jira] [Created] (SOLR-10774) ReplicationHandler should fail when using "ReplicateFromLeader" (TLOG/PULL replica types) if the node is no longer the leader

2017-05-30 Thread JIRA
Tomás Fernández Löbbe created SOLR-10774:


 Summary: ReplicationHandler should fail when using 
"ReplicateFromLeader" (TLOG/PULL replica types) if the node is no longer the 
leader
 Key: SOLR-10774
 URL: https://issues.apache.org/jira/browse/SOLR-10774
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe









[jira] [Created] (SOLR-10773) Add support for replica types in V2 API

2017-05-30 Thread JIRA
Tomás Fernández Löbbe created SOLR-10773:


 Summary: Add support for replica types in V2 API
 Key: SOLR-10773
 URL: https://issues.apache.org/jira/browse/SOLR-10773
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe









[jira] [Created] (SOLR-10772) Add support for replica types in CLI

2017-05-30 Thread JIRA
Tomás Fernández Löbbe created SOLR-10772:


 Summary: Add support for replica types in CLI
 Key: SOLR-10772
 URL: https://issues.apache.org/jira/browse/SOLR-10772
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe


Create collection should support different types of replicas






Re: [DISCUSS] Sandbox module dependencies

2017-05-30 Thread Christine Poerschke (BLOOMBERG/ LONDON)
lucene-queryparser has an existing dependency on lucene-sandbox, and it 
puzzled/confused me when I first came across it. Nothing depending on sandbox 
would seem clearer, i.e. it's sandboxed, isolated, contained, etc.

Instead of comments/statements (in caps or otherwise) in the ivy.xml files, 
could "dependency conventions" be checked via tools (precommit) somehow?

Perhaps a separate thread should be started for the question of what other 
"dependency conventions" we have and wish to keep going forward?

Christine

From: dev@lucene.apache.org At: 05/30/17 18:34:20
To: dev@lucene.apache.org
Subject: Re: [DISCUSS] Sandbox module dependencies


On Tue, May 30, 2017 at 1:18 PM Andrzej Białecki 
 wrote:

Hi,

I’m inclined to say “both core and non-sandbox modules MUST NOT depend on 
sandbox”. Otherwise it would mean that regular modules, of which users’ 
expectations are that they are mostly stable and usable, depend on half-baked 
experimental stuff with no guarantees whatsoever concerning stability and 
usability, which doesn’t make sense.

Definitely.  In retrospect my wording was a bit confusing.  I meant to say _of 
course_ lucene-core should not depend on lucene-sandbox.  There isn't clarity 
on other modules, though; hence my discussion proposal.
 
~ David


+1 to explicit mentioning changed dependencies.


On 30 May 2017, at 15:14, David Smiley  wrote:
Within the Lucene project (not talking about Solr), can Lucene modules (other 
than Core) depend on our lucene-sandbox module?  I intuitively assumed "no" but 
it's purely based on some notion in my head about the role of sandbox and not 
because of any prior decision.  I figure that functionality in sandbox is 
either half-baked and thus nothing (in Lucene) should depend on that stuff, or 
the fully baked stuff would graduate to some module that is appropriate.  

Conversely, I figure the sandbox module can depend on whatever is convenient; 
it's half-baked code after all.

Also, this should be an obvious statement but apparently it needs to be said: 
if you are introducing or removing a dependency, then say so in your JIRA 
issue!  One shouldn't need to read through a patch (or read a commit diff) to 
be aware of changes in dependencies!  It's important enough to be stated in the 
JIRA, even if it's a test dependency.  And it's not much to ask of us.  Perhaps 
I should commit a statement in caps to this effect in our ivy.xml files to help 
us not forget?

What's prompting this is LUCENE-7838 in which the lucene-classification module 
in 7.0 now depends on the sandbox module.  We *could* get into the particulars 
of that here but I'd rather this thread just express some general philosophy of 
approach to the dependencies of this module (and perhaps in general), and then 
leave specific circumstances to JIRA issues.

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com


-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com


[jira] [Commented] (LUCENE-7852) out-of-date Copyright year(s) on NOTICE.txt files?

2017-05-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029803#comment-16029803
 ] 

Steve Rowe commented on LUCENE-7852:


More NOTICE-related stuff here: [http://www.apache.org/legal/src-headers.html], 
but no discussion about how to decide about years.

bq. How to determine the start year for lucene/NOTICE.txt and/or is just an end 
year sufficient?

According to [https://en.wikipedia.org/wiki/Copyright_notice], "The copyright 
notice must also contain the year in which the work was first published (or 
created)".  Lucene's CHANGES.txt says the 1.2 RC1 was the first Apache release 
(previous releases were hosted at Sourceforge), and that happened on 
2001-10-02.  I'd argue that this is the first year of publication (as Apache 
Lucene).

Since each release is a separate publication/creation, I think we should be 
including both the original date of publication and the current year of the 
latest release.

bq. If X.0 is released in (say) 2017 and X.Y is released in (say) 2018, 
presumably the end year gets bumped up to 2018. What about X.Y.1 in (say) 2019, 
is the end year bumped up again to 2019 or does it stay at 2018 since it is 
only a bugfix release?

I think the fact of a release, regardless of major/minor/point type, is 
sufficient to consider it a distinct publishing/creation act, and so should get 
its release year included in the copyright notice; in your example, I think 
X.Y.1's 2019 release should bump the copyright year to 2019.

> out-of-date Copyright year(s) on NOTICE.txt files?
> --
>
> Key: LUCENE-7852
> URL: https://issues.apache.org/jira/browse/LUCENE-7852
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: LUCENE-7852.patch
>
>







Re: [jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread Erick Erickson
Let me take a look. Tests passed for me, probably some of the random stuff?


On Tue, May 30, 2017 at 9:39 AM, Tomás Fernández Löbbe (JIRA)
 wrote:
>
> [ 
> https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029694#comment-16029694
>  ]
>
> Tomás Fernández Löbbe commented on LUCENE-7705:
> ---
>
> TestMaxTokenLenTokenizer seems to be failing in Jenkins
> {noformat}
> 1 tests failed.
> FAILED:  
> org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers
>
> Error Message:
> Exception during query
>
> Stack Trace:
> java.lang.RuntimeException: Exception during query
> at 
> __randomizedtesting.SeedInfo.seed([FE4BE1CA39C9E0DA:9499DEA5612A3015]:0)
> at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
> at 
> org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at 
> 

Re: [DISCUSS] Sandbox module dependencies

2017-05-30 Thread David Smiley
On Tue, May 30, 2017 at 1:18 PM Andrzej Białecki <
andrzej.biale...@lucidworks.com> wrote:

> Hi,
>
> I’m inclined to say “both core and non-sandbox modules MUST NOT depend on
> sandbox”. Otherwise it would mean that regular modules, of which users’
> expectations are that they are mostly stable and usable, depend on
> half-baked experimental stuff with no guarantees whatsoever concerning
> stability and usability, which doesn’t make sense.
>

Definitely.  In retrospect my wording was a bit confusing.  I meant to say
_of course_ lucene-core should not depend on lucene-sandbox.  There isn't
clarity on other modules, though; hence my discussion proposal.

~ David


> +1 to explicit mentioning changed dependencies.
>
> On 30 May 2017, at 15:14, David Smiley  wrote:
>
> Within the Lucene project (not talking about Solr), can Lucene modules
> (other than Core) depend on our lucene-sandbox module?  I intuitively
> assumed "no" but it's purely based on some notion in my head about the role
> of sandbox and not because of any prior decision.  I figure that
> functionality in sandbox is either half-baked and thus nothing (in Lucene)
> should depend on that stuff, or the fully baked stuff would graduate to
> some module that is appropriate.
>
> Conversely, I figure the sandbox module can depend on whatever is
> convenient; it's half-baked code after all.
>
> Also, this should be an obvious statement but apparently it needs to be
> said: if you are introducing or removing a dependency, then say so in your
> JIRA issue!  One shouldn't need to read through a patch (or read a commit
> diff) to be aware of changes in dependencies!  It's important enough to be
> stated in the JIRA, even if it's a test dependency.  And it's not much to
> ask of us.  Perhaps I should commit a statement in caps to this effect in
> our ivy.xml files to help us not forget?
>
> What's prompting this is LUCENE-7838 in which the lucene-classification
> module in 7.0 now depends on the sandbox module.  We *could* get into the
> particulars of that here but I'd rather this thread just express some
> general philosophy of approach to the dependencies of this module (and
> perhaps in general), and then leave specific circumstances to JIRA issues.
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: Replica naming changed.

2017-05-30 Thread Erick Erickson
Tomás:

Thought that might be the case. I'm not opposed to the naming change
as long as it serves a purpose, as this one does. People shouldn't
write scripts that assume hard-coded names anyway ;).

Ishan:

I give long odds that we _will_ need the capability to reassign
replica roles. One of the points of this work is that people need to
fine-tune how nodes on the cluster are utilized, which I'd imagine
means reassigning replica functions.

All:

I rather like that the node name gives us information about the role.
Having to go to the state.json file to find some other property is
onerous, especially in very large installations.

Proposal:

Let's keep the naming convention with the n, t, and p notation. If in
the future we want to reassign a replica's role we can rename the node
when its role changes.
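The convention described in this thread can be sketched as a small parser. This is an illustration only: the exact core-name layout is assumed from the one example given (collection_shard1_replica_n1, with the letter before the trailing number being n/t/p), not from any spec, and nobody should hard-code it in scripts for the reason above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Reads the replica type back out of a core name that follows the
// convention from this thread: ..._replica_<letter><number>, where the
// letter is n=NRT, t=TLOG, p=PULL. The name layout is an assumption
// based on the example in the thread.
public class ReplicaTypeFromName {
    private static final Pattern SUFFIX = Pattern.compile("_replica_([ntp])(\\d+)$");

    public static String typeOf(String coreName) {
        Matcher m = SUFFIX.matcher(coreName);
        if (!m.find()) {
            return "UNKNOWN";
        }
        switch (m.group(1).charAt(0)) {
            case 'n': return "NRT";
            case 't': return "TLOG";
            case 'p': return "PULL";
            default:  return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        System.out.println(typeOf("collection_shard1_replica_n1")); // NRT
        System.out.println(typeOf("logs_shard3_replica_p12"));      // PULL
    }
}
```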

On Tue, May 30, 2017 at 10:12 AM, Ishan Chattopadhyaya
 wrote:
> Do we shut ourselves out of the possibility to ever re-assigning the replica
> types, by using this naming convention?
> For example, is there any conceivable scenario in future whereby an NRT
> replica can become a TLOG replica?
> Never mind my asking if this flexibility is something we're sure we'll never
> need.
>
> On Tue, May 30, 2017 at 9:49 PM, Tomas Fernandez Lobbe 
> wrote:
>>
>> Hi Erick,
>> This change is part of replica types. I mentioned this in SOLR-10233, but
>> you are right, I should have mentioned probably in the dev list to get to
>> more people. The last character represents the type of replica (n->NRT,
>> t->TLOG, p->PULL). This is certainly not required and can be reverted back
>> if people have concerns. I found it very useful when developing and I think
>> it will also be helpful in prod, since the replica name is present in most
>> log entries (since the MDC logging changes).
>>
>> Tomás
>>
>> > On May 30, 2017, at 8:54 AM, Erick Erickson 
>> > wrote:
>> >
>> > I noticed recently that our replica names are changing (master only?)
>> > to collection_shard1_replica_n1. Why?
>> >
>> > Mostly I want to be sure we consider whether this change is worth the
>> > confusion before it gets out into the wild. If it's just an aesthetic
>> > change I question whether it's worth the confusion it'll generate. If
>> > it serves a real purpose, that's another story..
>> >
>> > Erick
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>




[JENKINS] Lucene-Solr-NightlyTests-6.6 - Build # 16 - Still unstable

2017-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.6/16/

1 tests failed.
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
expected:<413> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<413> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([56241B0EB8C24A8B:DE7024D4163E2773]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:333)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

Re: [DISCUSS] Sandbox module dependencies

2017-05-30 Thread Andrzej Białecki
Hi,

I’m inclined to say “both core and non-sandbox modules MUST NOT depend on 
sandbox”. Otherwise it would mean that regular modules, of which users’ 
expectations are that they are mostly stable and usable, depend on half-baked 
experimental stuff with no guarantees whatsoever concerning stability and 
usability, which doesn’t make sense.

+1 to explicit mentioning changed dependencies.

> On 30 May 2017, at 15:14, David Smiley  wrote:
> 
> Within the Lucene project (not talking about Solr), can Lucene modules (other 
> than Core) depend on our lucene-sandbox module?  I intuitively assumed "no" 
> but it's purely based on some notion in my head about the role of sandbox and 
> not because of any prior decision.  I figure that functionality in sandbox is 
> either half-baked and thus nothing (in Lucene) should depend on that stuff, 
> or the fully baked stuff would graduate to some module that is appropriate.  
> 
> Conversely, I figure the sandbox module can depend on whatever is convenient; 
> it's half-baked code after all.
> 
> Also, this should be an obvious statement but apparently it needs to be said: 
> if you are introducing or removing a dependency, then say so in your JIRA 
> issue!  One shouldn't need to read through a patch (or read a commit diff) to 
> be aware of changes in dependencies!  It's important enough to be stated in 
> the JIRA, even if it's a test dependency.  And it's not much to ask of us.  
> Perhaps I should commit a statement in caps to this effect in our ivy.xml 
> files to help us not forget?
> 
> What's prompting this is LUCENE-7838 in which the lucene-classification 
> module in 7.0 now depends on the sandbox module.  We *could* get into the 
> particulars of that here but I'd rather this thread just express some general 
> philosophy of approach to the dependencies of this module (and perhaps in 
> general), and then leave specific circumstances to JIRA issues.
> 
> ~ David
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley 
>  | Book: 
> http://www.solrenterprisesearchserver.com 
> 


[jira] [Assigned] (SOLR-9910) Allow setting of additional jetty options in bin/solr and bin/solr.cmd

2017-05-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-9910:
-

Assignee: Mark Miller

> Allow setting of additional jetty options in bin/solr and bin/solr.cmd
> --
>
> Key: SOLR-9910
> URL: https://issues.apache.org/jira/browse/SOLR-9910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-9910.patch, SOLR-9910.patch
>
>
> Command line tools allow the option {{-a}} to add JVM options to the start 
> command. Proposing to add a {{-j}} option to pass additional configuration to 
> jetty (the part after {{start.jar}}).
> Motivation: jetty can be configured with start.ini in the server directory. 
> Running multiple Solr instances, however, requires per-instance configuration 
> that cannot be shared through a single start.ini.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Replica naming changed.

2017-05-30 Thread Ishan Chattopadhyaya
Do we shut ourselves out of the possibility of ever re-assigning the
replica types by using this naming convention?
For example, is there any conceivable scenario in the future where an NRT
replica could become a TLOG replica?
Never mind my asking if this flexibility is something we're sure we'll
never need.

On Tue, May 30, 2017 at 9:49 PM, Tomas Fernandez Lobbe 
wrote:

> Hi Erick,
> This change is part of replica types. I mentioned this in SOLR-10233, but
> you are right, I should probably have mentioned it on the dev list to reach
> more people. The last character represents the type of replica (n->NRT,
> t->TLOG, p->PULL). This is certainly not required and can be reverted
> if people have concerns. I found it very useful when developing and I think
> it will also be helpful in prod, since the replica name is present in most
> log entries (since the MDC logging changes).
>
> Tomás
>
> > On May 30, 2017, at 8:54 AM, Erick Erickson 
> wrote:
> >
> > I noticed recently that our replica names are changing (master only?)
> > to collection_shard1_replica_n1. Why?
> >
> > Mostly I want to be sure we consider whether this change is worth the
> > confusion before it gets out into the wild. If it's just an aesthetic
> > change, I question whether it's worth the confusion it'll generate. If
> > it serves a real purpose, that's another story.
> >
> > Erick
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029694#comment-16029694
 ] 

Tomás Fernández Löbbe commented on LUCENE-7705:
---

TestMaxTokenLenTokenizer seems to be failing in Jenkins
{noformat}
1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([FE4BE1CA39C9E0DA:9499DEA5612A3015]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 


[jira] [Resolved] (SOLR-10771) Add date formatting to timeseries Streaming Expression

2017-05-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-10771.
---
Resolution: Duplicate

> Add date formatting to timeseries Streaming Expression
> --
>
> Key: SOLR-10771
> URL: https://issues.apache.org/jira/browse/SOLR-10771
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> Currently the timeseries Streaming Expression returns a full date-time 
> string for each bucket, even for coarse-grained time gaps. It would be better 
> if we could specify an output template format for the date-time buckets.






[jira] [Created] (SOLR-10770) Add date formatting to timeseries Streaming Expression

2017-05-30 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10770:
-

 Summary: Add date formatting to timeseries Streaming Expression
 Key: SOLR-10770
 URL: https://issues.apache.org/jira/browse/SOLR-10770
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


Currently the timeseries Streaming Expression returns a full date-time 
string for each bucket, even for coarse-grained time gaps. It would be better 
if we could specify an output template format for the date-time buckets.
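To illustrate the idea, here is a minimal sketch of what such an output template could do, using only {{java.time}}; the class and method names are my own invention, not code from any patch. A monthly-gap series only needs year and month in its bucket labels:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class BucketFormat {
    // Hypothetical helper: render a full ISO date-time bucket label
    // through a caller-supplied output template (DateTimeFormatter pattern).
    public static String format(String isoBucket, String pattern) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern(pattern)
                .withZone(ZoneOffset.UTC);
        return fmt.format(Instant.parse(isoBucket));
    }

    public static void main(String[] args) {
        // A monthly bucket label collapses to just "yyyy-MM".
        System.out.println(format("2017-05-01T00:00:00Z", "yyyy-MM")); // prints 2017-05
    }
}
```

So instead of emitting "2017-05-01T00:00:00Z" for every monthly bucket, the expression could accept a pattern parameter and emit "2017-05".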






[jira] [Created] (SOLR-10771) Add date formatting to timeseries Streaming Expression

2017-05-30 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10771:
-

 Summary: Add date formatting to timeseries Streaming Expression
 Key: SOLR-10771
 URL: https://issues.apache.org/jira/browse/SOLR-10771
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


Currently the timeseries Streaming Expression returns a full date-time 
string for each bucket, even for coarse-grained time gaps. It would be better 
if we could specify an output template format for the date-time buckets.






[jira] [Commented] (SOLR-8668) Remove support for (in favour of )

2017-05-30 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029686#comment-16029686
 ] 

Christine Poerschke commented on SOLR-8668:
---

bq. ... this comment still exists ...

Oops, fixed now. I had seen that comment and thought about removing it, but then 
got distracted by the actual change ...

> Remove support for  (in favour of )
> 
>
> Key: SOLR-8668
> URL: https://issues.apache.org/jira/browse/SOLR-8668
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-8668-part1.patch, SOLR-8668-part1.patch
>
>
> Following SOLR-8621, we should remove support for {{}} (and 
> related {{}} and {{}}) in trunk/6x.






[jira] [Commented] (SOLR-8668) Remove support for (in favour of )

2017-05-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029672#comment-16029672
 ] 

Hoss Man commented on SOLR-8668:


bq. Good catch that! I've gone and renamed from effectiveUseCompoundFileSetting 
to useCompoundFile ...

this comment still exists when the variable is used to build the IWC...

{code}
 // do this after buildMergePolicy since the backcompat logic 
 // there may modify the effective useCompoundFile
-iwc.setUseCompoundFile(getUseCompoundFile());
+iwc.setUseCompoundFile(useCompoundFile);
{code}

...and that's now misleading/confusing.


> Remove support for  (in favour of )
> 
>
> Key: SOLR-8668
> URL: https://issues.apache.org/jira/browse/SOLR-8668
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-8668-part1.patch, SOLR-8668-part1.patch
>
>
> Following SOLR-8621, we should remove support for {{}} (and 
> related {{}} and {{}}) in trunk/6x.






[jira] [Resolved] (SOLR-10531) JMX cache beans names / properties changed in 6.4

2017-05-30 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-10531.
--
Resolution: Cannot Reproduce

I wasn't able to reproduce this issue, and there was no additional data to 
illustrate the problem. Closing as Cannot Reproduce - please reopen if you have 
relevant data.

> JMX cache beans names / properties changed in 6.4
> -
>
> Key: SOLR-10531
> URL: https://issues.apache.org/jira/browse/SOLR-10531
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4, 6.5
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Attachments: branch_6_3.png, branch_6x.png
>
>
> As reported by [~wunder]:
> {quote}
> New Relic displays the cache hit rate for each collection, showing the query 
> result cache, filter cache, and document cache.
> With 6.5.0, that page shows this message:
> New Relic recorded no Solr caches data for this application in the last 
> 24 hours
> If you think there should be Solr data here, first check to see that JMX 
> is enabled for your application server. If enabled, then please contact 
> support.
> {quote}






Re:Release planning for 7.0

2017-05-30 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Hi Everyone,

Just to say that https://issues.apache.org/jira/browse/SOLR-8668 for 
 removal should complete later this week, hopefully.

And on an unrelated note, does anyone have any history or experience with the 
NOTICE.txt files? Including https://issues.apache.org/jira/browse/LUCENE-7852 
in 7.0 would be good, I think (though, being a small change, the issue would 
not need to block branch_7x branch cutting).

Thanks,
Christine

From: dev@lucene.apache.org At: 05/03/17 16:56:09
To: dev@lucene.apache.org
Subject: Re:Release planning for 7.0

Hi,

It's May already, and with 6.6 lined up, I think we should start planning how 
we want to proceed with 7.0, in terms of both the timeline and what it would 
likely contain.

I am not suggesting we start the release process right away, but just wanted to 
start a discussion around the above mentioned lines.

With 6.6 in the pipeline, I think sometime in June would be a good time to cut 
a release branch. What do all of you think?

P.S.: This email is about 'discussion' and 'planning', so if someone wants to 
volunteer to be the release manager, please go ahead. I can't remember if 
someone explicitly volunteered to wear this hat for 7.0. If no one volunteers, 
I will take it up.

-Anshum


[jira] [Commented] (SOLR-8668) Remove support for (in favour of )

2017-05-30 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029637#comment-16029637
 ] 

Christine Poerschke commented on SOLR-8668:
---

bq. The one thing that jumped out at me is 
SolrIndexConfig.effectiveUseCompoundFileSetting. ...

Good catch, that! I've gone and renamed _effectiveUseCompoundFileSetting_ 
to _useCompoundFile_ (with no _get...()_ accessor) to match the style of the 
other settings.



Changes from master merged into the working branch and tests pass for me 
locally. 
https://github.com/apache/lucene-solr/compare/jira/solr-8668#files_bucket shows 
all the changes. Further comments and reviews welcome as usual.

If there are no further comments then I'll aim to commit this to master branch 
Thursday or Friday this week (June 1st or June 2nd).

> Remove support for  (in favour of )
> 
>
> Key: SOLR-8668
> URL: https://issues.apache.org/jira/browse/SOLR-8668
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-8668-part1.patch, SOLR-8668-part1.patch
>
>
> Following SOLR-8621, we should remove support for {{}} (and 
> related {{}} and {{}}) in trunk/6x.






Re: Replica naming changed.

2017-05-30 Thread Tomas Fernandez Lobbe
Hi Erick, 
This change is part of replica types. I mentioned this in SOLR-10233, but you 
are right, I should probably have mentioned it on the dev list to reach more 
people. The last character represents the type of replica (n->NRT, t->TLOG, 
p->PULL). This is certainly not required and can be reverted if people have 
concerns. I found it very useful when developing and I think it will also be 
helpful in prod, since the replica name is present in most log entries (since 
the MDC logging changes).
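As an aside, the type character can be read straight back out of a core name. This little parsing helper is a sketch of my own (not code from SOLR-10233), assuming names always follow the "..._replica_<type><number>" shape described above:

```java
public class ReplicaNames {
    private static final String MARKER = "_replica_";

    // Sketch: extract the replica-type character ('n', 't' or 'p')
    // from a core name like "collection1_shard1_replica_n1".
    public static char typeChar(String coreName) {
        int idx = coreName.lastIndexOf(MARKER);
        if (idx < 0 || idx + MARKER.length() >= coreName.length()) {
            throw new IllegalArgumentException("Unexpected core name: " + coreName);
        }
        return coreName.charAt(idx + MARKER.length());
    }

    public static void main(String[] args) {
        System.out.println(typeChar("collection1_shard1_replica_n1")); // prints n
        System.out.println(typeChar("collection1_shard1_replica_t3")); // prints t
    }
}
```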

Tomás

> On May 30, 2017, at 8:54 AM, Erick Erickson  wrote:
> 
> I noticed recently that our replica names are changing (master only?)
> to collection_shard1_replica_n1. Why?
> 
> Mostly I want to be sure we consider whether this change is worth the
> confusion before it gets out into the wild. If it's just an aesthetic
> change, I question whether it's worth the confusion it'll generate. If
> it serves a real purpose, that's another story.
> 
> Erick
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 





[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_131) - Build # 3623 - Still Unstable!

2017-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3623/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([6ECC6A4CC6D671C7:41E55239E35A108]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=letter0:lett=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 13306 lines...]
   [junit4] Suite: 

Replica naming changed.

2017-05-30 Thread Erick Erickson
I noticed recently that our replica names are changing (master only?)
to collection_shard1_replica_n1. Why?

Mostly I want to be sure we consider whether this change is worth the
confusion before it gets out into the wild. If it's just an aesthetic
change, I question whether it's worth the confusion it'll generate. If
it serves a real purpose, that's another story.

Erick



