[jira] [Commented] (SOLR-4531) corrupted tlog causes recovery failed

2016-10-24 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604296#comment-15604296
 ] 

Shalin Shekhar Mangar commented on SOLR-4531:
-

I looked at the SOLR-4359 commit but I don't see where an 
IndexOutOfBoundsException is caught. The only error handling that SOLR-4359 
added was to skip the tlog if the next record is null.
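
For context, here is a minimal sketch of the kind of defensive handling under 
discussion -- widening the per-record catch so a corrupt entry aborts the scan 
of that tlog instead of propagating out of recovery. This is an assumption 
about the shape of a fix, not the actual SOLR-4359 or SOLR-4531 change, and the 
reader type is a stand-in for TransactionLog.ReverseReader:
{code}
import java.util.Iterator;
import java.util.List;

class TlogScanSketch {
  // "reader" stands in for TransactionLog.ReverseReader; each entry is the
  // List decoded by JavaBinCodec for one tlog record.
  static void scan(Iterator<List<Object>> reader) {
    while (reader.hasNext()) {
      try {
        List<Object> entry = reader.next(); // may throw on a corrupt record
        if (entry == null) break;           // SOLR-4359: stop on a null record
        process(entry);
      } catch (RuntimeException e) {        // e.g. IndexOutOfBoundsException
        break; // treat the rest of this tlog as unreadable; don't fail recovery
      }
    }
  }

  static void process(List<Object> entry) { /* decode opAndFlags, version, ... */ }
}
{code}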

> corrupted tlog causes recovery failed
> -
>
> Key: SOLR-4531
> URL: https://issues.apache.org/jira/browse/SOLR-4531
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0
>Reporter: Simon Scofield
> Attachments: SOLR-4531.patch
>
>
> One of the Solr nodes in our SolrCloud was killed, which corrupted the tlog. 
> Now the node can't finish recovering. There is an exception:
> Caused by: java.lang.IndexOutOfBoundsException: Index: 14, Size: 13
>   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
>   at java.util.ArrayList.get(ArrayList.java:322)
>   at 
> org.apache.solr.update.TransactionLog$LogCodec.readExternString(TransactionLog.java:128)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:188)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readOrderedMap(JavaBinCodec.java:120)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:184)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readOrderedMap(JavaBinCodec.java:121)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:184)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.update.TransactionLog$ReverseReader.next(TransactionLog.java:708)
>   at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:906)
>   at 
> org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:846)
>   at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:996)
>   at org.apache.solr.update.UpdateLog.init(UpdateLog.java:241)
>   at org.apache.solr.update.UpdateHandler.initLog(UpdateHandler.java:94)
>   at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:123)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:97)
>   ... 31 more
> I checked the code in UpdateLog.java and found that only IOException is caught 
> when the above exception happens.
> {code:title=solr\\core\\src\\java\\org\\apache\\solr\\update\\UpdateLog.java|borderStyle=solid}
> private void update() {
>   int numUpdates = 0;
>   updateList = new ArrayList(logList.size());
>   deleteByQueryList = new ArrayList();
>   deleteList = new ArrayList();
>   updates = new HashMap(numRecordsToKeep);
>   for (TransactionLog oldLog : logList) {
> List updatesForLog = new ArrayList();
> TransactionLog.ReverseReader reader = null;
> try {
>   reader = oldLog.getReverseReader();
>   while (numUpdates < numRecordsToKeep) {
> Object o = reader.next();
> if (o==null) break;
> try {
>   // should currently be a List
>   List entry = (List)o;
>   // TODO: refactor this out so we get common error handling
>   int opAndFlags = (Integer)entry.get(0);
>   if (latestOperation == 0) {
> latestOperation = opAndFlags;
>   }
>   int oper = opAndFlags & UpdateLog.OPERATION_MASK;
>   long version = (Long) entry.get(1);
>   switch (oper) {
> case UpdateLog.ADD:
> case UpdateLog.DELETE:
> case UpdateLog.DELETE_BY_QUERY:
>   Update update = new Update();
>   update.log = oldLog;
>   update.pointer = reader.position();
>   update.version = version;
>   updatesForLog.add(update);
>   updates.put(version, update);
>   
>   if (oper == UpdateLog.DELETE_BY_QUERY) {
> deleteByQueryList.add(update);
>   } else if (oper == UpdateLog.DELETE) {
> deleteList.add(new DeleteUpdate(version, 
> (byte[])entry.get(2)));
>   }
>   
>   

[jira] [Commented] (SOLR-4531) corrupted tlog causes recovery failed

2016-10-24 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604254#comment-15604254
 ] 

Cao Manh Dat commented on SOLR-4531:


The patch just contains a test, because this issue was already fixed by SOLR-4359.




> corrupted tlog causes recovery failed
> -
>
> Key: SOLR-4531
> URL: https://issues.apache.org/jira/browse/SOLR-4531
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0
>Reporter: Simon Scofield
> Attachments: SOLR-4531.patch
>
>
> One of the Solr nodes in our SolrCloud was killed, which corrupted the tlog. 
> Now the node can't finish recovering. There is an exception:
> Caused by: java.lang.IndexOutOfBoundsException: Index: 14, Size: 13
>   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
>   at java.util.ArrayList.get(ArrayList.java:322)
>   at 
> org.apache.solr.update.TransactionLog$LogCodec.readExternString(TransactionLog.java:128)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:188)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readOrderedMap(JavaBinCodec.java:120)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:184)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readOrderedMap(JavaBinCodec.java:121)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:184)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.update.TransactionLog$ReverseReader.next(TransactionLog.java:708)
>   at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:906)
>   at 
> org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:846)
>   at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:996)
>   at org.apache.solr.update.UpdateLog.init(UpdateLog.java:241)
>   at org.apache.solr.update.UpdateHandler.initLog(UpdateHandler.java:94)
>   at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:123)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:97)
>   ... 31 more
> I checked the code in UpdateLog.java and found that only IOException is caught 
> when the above exception happens.
> {code:title=solr\\core\\src\\java\\org\\apache\\solr\\update\\UpdateLog.java|borderStyle=solid}
> private void update() {
>   int numUpdates = 0;
>   updateList = new ArrayList(logList.size());
>   deleteByQueryList = new ArrayList();
>   deleteList = new ArrayList();
>   updates = new HashMap(numRecordsToKeep);
>   for (TransactionLog oldLog : logList) {
> List updatesForLog = new ArrayList();
> TransactionLog.ReverseReader reader = null;
> try {
>   reader = oldLog.getReverseReader();
>   while (numUpdates < numRecordsToKeep) {
> Object o = reader.next();
> if (o==null) break;
> try {
>   // should currently be a List
>   List entry = (List)o;
>   // TODO: refactor this out so we get common error handling
>   int opAndFlags = (Integer)entry.get(0);
>   if (latestOperation == 0) {
> latestOperation = opAndFlags;
>   }
>   int oper = opAndFlags & UpdateLog.OPERATION_MASK;
>   long version = (Long) entry.get(1);
>   switch (oper) {
> case UpdateLog.ADD:
> case UpdateLog.DELETE:
> case UpdateLog.DELETE_BY_QUERY:
>   Update update = new Update();
>   update.log = oldLog;
>   update.pointer = reader.position();
>   update.version = version;
>   updatesForLog.add(update);
>   updates.put(version, update);
>   
>   if (oper == UpdateLog.DELETE_BY_QUERY) {
> deleteByQueryList.add(update);
>   } else if (oper == UpdateLog.DELETE) {
> deleteList.add(new DeleteUpdate(version, 
> (byte[])entry.get(2)));
>   }
>   
>   break;
> case UpdateLog.COMMIT:
>   break;
> default:
>   

[jira] [Commented] (SOLR-4531) corrupted tlog causes recovery failed

2016-10-24 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604170#comment-15604170
 ] 

Shalin Shekhar Mangar commented on SOLR-4531:
-

I don't see how this issue was/is fixed. Seems like it can still happen? It's 
good that you randomize the number of bytes to truncate. Can you beast this 
test to see if we can get it to fail?

Looking at the patch -- the cluster.startJettySolrRunner() calls are redundant?
{code}
+
+ChaosMonkey.start(cluster.getJettySolrRunners());
+cluster.startJettySolrRunner();
+cluster.startJettySolrRunner();
+cluster.startJettySolrRunner();
+cluster.startJettySolrRunner();
{code}

> corrupted tlog causes recovery failed
> -
>
> Key: SOLR-4531
> URL: https://issues.apache.org/jira/browse/SOLR-4531
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0
>Reporter: Simon Scofield
> Attachments: SOLR-4531.patch
>
>
> One of the Solr nodes in our SolrCloud was killed, which corrupted the tlog. 
> Now the node can't finish recovering. There is an exception:
> Caused by: java.lang.IndexOutOfBoundsException: Index: 14, Size: 13
>   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
>   at java.util.ArrayList.get(ArrayList.java:322)
>   at 
> org.apache.solr.update.TransactionLog$LogCodec.readExternString(TransactionLog.java:128)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:188)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readOrderedMap(JavaBinCodec.java:120)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:184)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readOrderedMap(JavaBinCodec.java:121)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:184)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readArray(JavaBinCodec.java:451)
>   at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:182)
>   at 
> org.apache.solr.update.TransactionLog$ReverseReader.next(TransactionLog.java:708)
>   at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:906)
>   at 
> org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:846)
>   at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:996)
>   at org.apache.solr.update.UpdateLog.init(UpdateLog.java:241)
>   at org.apache.solr.update.UpdateHandler.initLog(UpdateHandler.java:94)
>   at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:123)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:97)
>   ... 31 more
> I checked the code in UpdateLog.java and found that only IOException is caught 
> when the above exception happens.
> {code:title=solr\\core\\src\\java\\org\\apache\\solr\\update\\UpdateLog.java|borderStyle=solid}
> private void update() {
>   int numUpdates = 0;
>   updateList = new ArrayList(logList.size());
>   deleteByQueryList = new ArrayList();
>   deleteList = new ArrayList();
>   updates = new HashMap(numRecordsToKeep);
>   for (TransactionLog oldLog : logList) {
> List updatesForLog = new ArrayList();
> TransactionLog.ReverseReader reader = null;
> try {
>   reader = oldLog.getReverseReader();
>   while (numUpdates < numRecordsToKeep) {
> Object o = reader.next();
> if (o==null) break;
> try {
>   // should currently be a List
>   List entry = (List)o;
>   // TODO: refactor this out so we get common error handling
>   int opAndFlags = (Integer)entry.get(0);
>   if (latestOperation == 0) {
> latestOperation = opAndFlags;
>   }
>   int oper = opAndFlags & UpdateLog.OPERATION_MASK;
>   long version = (Long) entry.get(1);
>   switch (oper) {
> case UpdateLog.ADD:
> case UpdateLog.DELETE:
> case UpdateLog.DELETE_BY_QUERY:
>   Update update = new Update();
>   update.log = oldLog;
>   update.pointer = reader.position();
>   update.version = version;
>   updatesForLog.add(update);
>   updates.put(version, update);
>   
>  

[jira] [Commented] (SOLR-3318) LBHttpSolrServer should allow to specify a preferred server for a query

2016-10-24 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604149#comment-15604149
 ] 

Shalin Shekhar Mangar commented on SOLR-3318:
-

The idea of fixing servers for a query is valid but I'd do it differently in 
SolrCloud -- see SOLR-8146 for a better idea of using snitches for routing.

> LBHttpSolrServer should allow to specify a preferred server for a query
> ---
>
> Key: SOLR-3318
> URL: https://issues.apache.org/jira/browse/SOLR-3318
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.0-ALPHA
>Reporter: Martin Grotzke
>Priority: Minor
> Attachments: SOLR-3318.git.patch
>
>
> For a user query we make several Solr queries that differ only slightly and 
> therefore should use/reuse objects cached from the first query (we're using a 
> custom request handler and custom caches).
> Thus such subsequent queries should hit the same Solr server.
> The implemented solution looks like this:
> * The client obtains a live SolrServer from LBHttpSolrServer
> * The client provides this SolrServer as preferred server for a query
> * If the preferred server is no longer alive the request is retried on 
> another live server
> * Everything else follows the existing logic:
> ** After live servers are exhausted, any servers previously marked as dead 
> will be tried before failing the request
> ** If no live servers are found a SolrServerException is thrown
> The implementation is also [on 
> github|https://github.com/magro/lucene-solr/commit/a75aef3d].
> Mailing list thread: 
> http://lucene.472066.n3.nabble.com/LBHttpSolrServer-to-query-a-preferred-server-tt3884140.html
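
A hedged sketch of the selection logic described above; ServerEntry and 
PreferredServerPicker are illustrative stand-ins, not the actual 
LBHttpSolrServer API:
{code}
import java.util.List;

// Hypothetical stand-in for a load-balanced client's server entry.
class ServerEntry {
  final String url;
  volatile boolean alive;
  ServerEntry(String url, boolean alive) { this.url = url; this.alive = alive; }
}

class PreferredServerPicker {
  // Try the caller's preferred server first; fall back to the normal order
  // (live servers, then previously-dead ones) if it is no longer alive.
  static ServerEntry pick(ServerEntry preferred, List<ServerEntry> inOrder) {
    if (preferred != null && preferred.alive) return preferred;
    for (ServerEntry s : inOrder) {
      if (s.alive) return s;
    }
    return null; // caller throws SolrServerException when nothing is alive
  }
}
{code}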



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604023#comment-15604023
 ] 

Joel Bernstein edited comment on SOLR-9559 at 10/25/16 3:49 AM:


All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 
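
A hedged Java sketch of that iteration pattern; TupleStreamLike is a made-up 
stand-in, not the real TupleStream API:
{code}
// Hypothetical stand-in: read() returns null at end-of-stream for brevity.
interface TupleStreamLike extends AutoCloseable {
  void open() throws Exception;
  Object read() throws Exception;
}

class StreamTaskSketch implements Runnable {
  private final TupleStreamLike stream;

  StreamTaskSketch(TupleStreamLike stream) { this.stream = stream; }

  @Override
  public void run() {
    // Draining the stream is what makes the wrapped expression "execute";
    // side effects (e.g. an update() decorator pushing tuples to another
    // collection) happen inside read().
    try (TupleStreamLike s = stream) {
      s.open();
      while (s.read() != null) { /* drain */ }
    } catch (Exception e) {
      // per the comment above, errors are currently just logged
    }
  }
}
{code}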

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark. The *daemon* function can be used to construct Actors that interact with 
each other through work queues and mailboxes.

2) Massively scalable stored queries and alerts. See the *topic* function for 
more details on subscribing to a query.

3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.






was (Author: joel.bernstein):
All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark. The daemon function can be used to build Actors that interact with each 
other through work queues and mailboxes.
2) Massively scalable stored queries and alerts. See the topic function for 
more details on subscribing to a query.
3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.





> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at 

[jira] [Comment Edited] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604023#comment-15604023
 ] 

Joel Bernstein edited comment on SOLR-9559 at 10/25/16 3:48 AM:


All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark. The daemon function can be used to build Actors that interact with each 
other through work queues and mailboxes.
2) Massively scalable stored queries and alerts. See the topic function for 
more details on subscribing to a query.
3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.






was (Author: joel.bernstein):
All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark.
2) Massively scalable stored queries and alerts. See the topic function for 
more details on subscribing to a query.
3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.





> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.
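
A hedged Java sketch of the internal thread-pool pattern the description 
implies; the class and method names are illustrative, not the real 
ExecutorStream internals:
{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ExecutorStreamSketch {
  // Submit each stored expression (read from the wrapped stream's expr_s
  // field) to a fixed pool, mirroring the threads=10 parameter above.
  static void executeAll(List<String> expressions) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(10);
    for (String expr : expressions) {
      pool.submit(() -> runExpression(expr)); // compile and drain the stream
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }

  static void runExpression(String expr) { /* parse and iterate to completion */ }
}
{code}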



--
This message was sent by Atlassian JIRA

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18134 - Unstable!

2016-10-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18134/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib {p0=DV}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([E1BA2FA123B6E51E:3D7A86D74E3741D4]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Comment Edited] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604023#comment-15604023
 ] 

Joel Bernstein edited comment on SOLR-9559 at 10/25/16 3:25 AM:


All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark.
2) Massively scalable stored queries and alerts. See the topic function for 
more details on subscribing to a query.
3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.






was (Author: joel.bernstein):
All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark.
2) Massively scalable stored queries and alerts. See the topic function for 
more details on subscribing to a query.
3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.





> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604023#comment-15604023
 ] 

Joel Bernstein commented on SOLR-9559:
--

All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark.
2) Massively scalable stored queries and alerts. See the topic function for 
more details on subscribing to a query.
3) A general purpose parallel executor / work queue. 

Error handling currently is just logging errors. But there is a lot we can do 
with error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.





> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-9687:

Fix Version/s: master (7.0)
   6.3

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.
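
A hedged sketch of the tie-break the reporter describes; FacetIntervalLike is a 
stand-in with only the two fields relevant to the sort, not the real 
IntervalFacets interval class:
{code}
import java.util.Comparator;

class FacetIntervalLike {
  final long start;          // lower bound of the interval
  final boolean startOpen;   // true for '(', false for '['
  FacetIntervalLike(long start, boolean startOpen) {
    this.start = start;
    this.startOpen = startOpen;
  }
}

class IntervalSort {
  // Sort by start value; on ties, closed starts '[' must come before open
  // starts '(' so that a value equal to the boundary is checked against the
  // interval that can actually contain it (e.g. [0,0] before (0,*]).
  static final Comparator<FacetIntervalLike> BY_START =
      Comparator.comparingLong((FacetIntervalLike i) -> i.start)
                .thenComparing(i -> i.startOpen); // false (closed) sorts first
}
{code}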



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604007#comment-15604007
 ] 

Tomás Fernández Löbbe edited comment on SOLR-9687 at 10/25/16 3:08 AM:
---

This is the commit to master: 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ce57e8a8f4274db9ad1a78f06d37a7c9e02b3fb8
I forgot to include the Jira code in the commit comment.


was (Author: tomasflobbe):
This is the commit to master: ce57e8a8f4274db9ad1a78f06d37a7c9e02b3fb8
I forgot to include the Jira code in the commit comment.

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604007#comment-15604007
 ] 

Tomás Fernández Löbbe commented on SOLR-9687:
-

This is the commit to master: ce57e8a8f4274db9ad1a78f06d37a7c9e02b3fb8
I forgot to include the Jira code in the commit comment.

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604000#comment-15604000
 ] 

ASF subversion and git services commented on SOLR-9687:
---

Commit 96e847a10c532663e39ad2de184ed8582e5eb0e2 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=96e847a ]

SOLR-9687: Fixed Interval Facet count issue in cases of open/close intervals on 
the same values


> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9120) Luke NoSuchFileException

2016-10-24 Thread Gopalakrishnan B (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603980#comment-15603980
 ] 

Gopalakrishnan B commented on SOLR-9120:


Team - do we have an update on when this will be fixed, and in what version?

Currently, as a workaround, we are restarting our entire Solr cluster to resolve 
the Luke call failure.

> Luke NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, 

[jira] [Commented] (SOLR-6566) Document query timeAllowed during term iterations

2016-10-24 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603953#comment-15603953
 ] 

Anshum Gupta commented on SOLR-6566:


[~ctargett] We need to add more details about what to expect from the parameter, 
i.e. it's not a hard stop at the exact time. Also, this isn't the time within 
which to expect a response; it spans only term iteration and doc collection.
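
For the documentation, a hedged SolrJ example of setting the parameter; the 
query and value here are arbitrary:
{code}
import org.apache.solr.client.solrj.SolrQuery;

class TimeAllowedExample {
  static SolrQuery query() {
    SolrQuery q = new SolrQuery("*:*");
    // Milliseconds. Not a hard stop on the response: it bounds term
    // iteration and doc collection only, and may yield partial results.
    q.setTimeAllowed(1000);
    return q;
  }
}
{code}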

> Document query timeAllowed during term iterations
> -
>
> Key: SOLR-6566
> URL: https://issues.apache.org/jira/browse/SOLR-6566
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>
> Need to document Query timeout during TermsEnumeration (SOLR-5986).
> Queries can now be made to time out during requests that involve 
> TermsEnumeration, as opposed to only doc collection, i.e. during search as well 
> as MLT handler usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603936#comment-15603936
 ] 

David Smiley commented on SOLR-9559:


Hi Joel. What do you think about naming this {{eval}} instead?  That name seems 
more congruent with the name & purpose of the eval() method in various 
programming environments.  You are very close to the code so I can see how 
ExecutorStream came to your mind in light of it using an ExecutorService 
underneath.

I noticed that {{StreamTask}} loops over the tuples and does nothing with the 
result.  Why is that?  And might you use Java 7 try-with-resources over there?

I admit I'm a little confused as to the use-case -- why would someone embed a 
streaming expression in a tuple?  Perhaps some sort of persistent 
distributed work queue?  But then how are error conditions handled... do we 
concern ourselves with not running the same expression multiple times?

> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+140) - Build # 2031 - Unstable!

2016-10-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2031/
Java: 64bit/jdk-9-ea+140 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib 
{p0=STREAM}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([93316320775D7906:4FF1CA561ADCDDCC]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Resolved] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-9654.

Resolution: Fixed

Hopefully this is fixed now.

> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.
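
(For context, a hedged sketch of where such a parameter would sit in a JSON 
terms facet request -- the facet and field names here are made up:)

{code}
json.facet={
  cat0 : { type:terms, field:cat_s, limit:5, overrequest:0 }
}
{code}

Presumably setting {{overrequest:0}} makes each shard return exactly {{limit}} 
buckets, which is the kind of minimal-overrequest setup that makes simple 
refinement cases easy to exercise in tests.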



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603818#comment-15603818
 ] 

ASF subversion and git services commented on SOLR-9654:
---

Commit e06c60dd9dfa23c1eac2a95cabbeb2269c23f1cf in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e06c60d ]

SOLR-9654: tests: specify descending count sort for streaming


> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603813#comment-15603813
 ] 

ASF subversion and git services commented on SOLR-9654:
---

Commit c9132ac66100ab46bea480397396105f8489b239 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c9132ac ]

SOLR-9654: tests: specify descending count sort for streaming


> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reopened SOLR-9654:

  Assignee: Yonik Seeley

investigating failing test

> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread Tomás Fernández Löbbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-9687:

Attachment: SOLR-9687.patch

Here is a patch with the proposed fix and tests

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code, the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.
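
(A sketch of the comparator behavior being described, for illustration only -- 
the types and names are simplified stand-ins, not the actual IntervalFacets 
patch:)

{code}
import java.util.Comparator;

class IntervalSortSketch {
  // When two intervals share a start value, the closed start "["
  // (startOpen == false) must sort before the open start "(" so a
  // boundary value such as 0 is still checked against the closed interval.
  static <T extends Comparable<T>> Comparator<Interval<T>> byStart() {
    return Comparator
        .comparing((Interval<T> i) -> i.start)
        .thenComparing(i -> i.startOpen); // Boolean: false ("[") sorts first
  }

  static class Interval<T extends Comparable<T>> {
    final T start;
    final boolean startOpen;
    Interval(T start, boolean startOpen) {
      this.start = start;
      this.startOpen = startOpen;
    }
  }
}
{code}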



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1449 - Still Unstable

2016-10-24 Thread Yonik Seeley
Hmmm, this should be 100% predictable...
But IIRC, David may have added some randomness to the test.
I'll look into it.

-Yonik


On Mon, Oct 24, 2016 at 7:11 PM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1449/
>
> 6 tests failed.
> FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib 
> {p0=DVHASH}
>
> Error Message:
> mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
>
> Stack Trace:
> java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
> at 
> __randomizedtesting.SeedInfo.seed([2F7D52519DC157A4:F3BDFB27F040F36E]:0)
> at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
> at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
> at 
> org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
> at 
> org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
> at 
> org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
> at 
> org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
> at 
> org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at 
> 

[jira] [Assigned] (SOLR-9684) Add scheduler Streaming Expression

2016-10-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-9684:


Assignee: Joel Bernstein

> Add scheduler Streaming Expression
> --
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps a list of streams and *prioritizes* the 
> iteration of the streams. This allows there to be different task queues with 
> different priorities.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(scheduler(topic(), topic(), topic())))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1449 - Still Unstable

2016-10-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1449/

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib 
{p0=DVHASH}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([2F7D52519DC157A4:F3BDFB27F040F36E]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr_s* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

*Sample syntax*:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.





  was:
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

*Sample syntax*:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.






> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr_s* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

*Sample syntax*:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.





  was:
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr_s* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

*Sample syntax*:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.






> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Attachment: SOLR-9559.patch

> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9689) Process updates concurrently during PeerSync

2016-10-24 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603424#comment-15603424
 ] 

Pushkar Raste edited comment on SOLR-9689 at 10/24/16 10:28 PM:


POC for applying updates concurrently. 
Please review it and let me know if there are glaring issues. 

I would also appreciate any suggestions for handling out-of-order {{DBQ}} (I think 
by default we keep a few {{DBQs}} around to account for out-of-order updates); 
maybe we can increase the number of {{DBQs}} we keep around if the {{DBQs}} have 
the {{PEER_SYNC}} flag set on them.


was (Author: praste):
POC for applying updates concurrently. 
Please review it and let me know if there are gaping issues. 

I would also appreciate any suggestions to handle out of order {{DBQ} (I think 
by default we keep a few {{DBQs}} around to account for out of order upates), 
may be we can increase the number of {{DBQs}} we keep around if {{DBQs}} have 
{{PEER_SYNC}} flag set on it.

> Process updates concurrently during PeerSync
> 
>
> Key: SOLR-9689
> URL: https://issues.apache.org/jira/browse/SOLR-9689
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
> Attachments: SOLR-9689.patch
>
>
> This came up during discussion with [~shalinmangar]
> During {{PeerSync}}, updates are applied one at a time by looping through the 
> updates received from the leader. This is slow and could keep a node in 
> recovery for a long time if the number of updates to apply is large. 
> We can apply updates concurrently; this should be no different from what 
> could happen during normal indexing (we can't really ensure that a replica 
> will process updates in the same order as the leader or other replicas).
> There are a few corner cases around dbq we should be careful about. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9689) Process updates concurrently during PeerSync

2016-10-24 Thread Pushkar Raste (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pushkar Raste updated SOLR-9689:

Attachment: SOLR-9689.patch

POC for applying updates concurrently. 
Please review it and let me know if there are glaring issues. 

I would also appreciate any suggestions for handling out-of-order {{DBQ}} (I think 
by default we keep a few {{DBQs}} around to account for out-of-order updates); 
maybe we can increase the number of {{DBQs}} we keep around if the {{DBQs}} have 
the {{PEER_SYNC}} flag set on them.

> Process updates concurrently during PeerSync
> 
>
> Key: SOLR-9689
> URL: https://issues.apache.org/jira/browse/SOLR-9689
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
> Attachments: SOLR-9689.patch
>
>
> This came up during discussion with [~shalinmangar]
> During {{PeerSync}}, updates are applied one at a time by looping through the 
> updates received from the leader. This is slow and could keep a node in 
> recovery for a long time if the number of updates to apply is large. 
> We can apply updates concurrently; this should be no different from what 
> could happen during normal indexing (we can't really ensure that a replica 
> will process updates in the same order as the leader or other replicas).
> There are a few corner cases around dbq we should be careful about. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9689) Process updates concurrently during PeerSync

2016-10-24 Thread Pushkar Raste (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pushkar Raste updated SOLR-9689:

Summary: Process updates concurrently during PeerSync  (was: Process 
updates concurrently during {{PeerSync}})

> Process updates concurrently during PeerSync
> 
>
> Key: SOLR-9689
> URL: https://issues.apache.org/jira/browse/SOLR-9689
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>
> This came up during discussion with [~shalinmangar]
> During {{PeerSync}}, updates are applied one at a time by looping through the 
> updates received from the leader. This is slow and could keep a node in 
> recovery for a long time if the number of updates to apply is large. 
> We can apply updates concurrently; this should be no different from what 
> could happen during normal indexing (we can't really ensure that a replica 
> will process updates in the same order as the leader or other replicas).
> There are a few corner cases around dbq we should be careful about. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9689) Process updates concurrently during {{PeerSync}}

2016-10-24 Thread Pushkar Raste (JIRA)
Pushkar Raste created SOLR-9689:
---

 Summary: Process updates concurrently during {{PeerSync}}
 Key: SOLR-9689
 URL: https://issues.apache.org/jira/browse/SOLR-9689
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Pushkar Raste


This came up during discussion with [~shalinmangar]

During {{PeerSync}}, updates are applied one at a time by looping through the 
updates received from the leader. This is slow and could keep a node in recovery 
for a long time if the number of updates to apply is large. 

We can apply updates concurrently; this should be no different from what could 
happen during normal indexing (we can't really ensure that a replica will 
process updates in the same order as the leader or other replicas).

There are a few corner cases around dbq we should be careful about. 
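
(A rough sketch of the concurrency shape being proposed, with hypothetical 
names -- not the attached patch; applyUpdate stands in for whatever applies a 
single update command:)

{code}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

class ConcurrentApplySketch {
  // Fan the buffered updates out over a pool instead of applying them
  // one at a time, then wait for all of them to finish.
  static void applyAll(List<Object> updates, Consumer<Object> applyUpdate) {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    try {
      CompletableFuture
          .allOf(updates.stream()
              .map(u -> CompletableFuture.runAsync(() -> applyUpdate.accept(u), pool))
              .toArray(CompletableFuture[]::new))
          .join(); // join() rethrows the first failure as a CompletionException
    } finally {
      pool.shutdown();
    }
  }
}
{code}

(Out-of-order DBQs would need the extra care discussed above; this sketch 
deliberately ignores ordering.)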



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Attachment: SOLR-9559.patch

Added parallel test case

> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9641) Emit distributed tracing information from Solr

2016-10-24 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9641:

Attachment: SOLR-9641.patch

Patch v2:
* Incorporated feedback from David and Christine as discussed in earlier 
comments.
* Moved core tracing logic out of SolrCore and into HttpSolrCall
* Added sample trace configuration section to solr.xml
* Added tracing to write response, and more granular tracing in general
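
(For readers following along, a minimal sketch of the HTrace scope pattern this 
kind of instrumentation typically uses -- it assumes the htrace-core4 
Tracer/TraceScope API, and the scope name is made up:)

{code}
import org.apache.htrace.core.TraceScope;
import org.apache.htrace.core.Tracer;

class TraceSketch {
  // Wrap a unit of work in a TraceScope so the resulting span is
  // delivered to whatever SpanReceiver the Tracer was configured with.
  void handle(Tracer tracer) {
    try (TraceScope scope = tracer.newScope("handleRequest")) {
      // ... traced work here ...
    } // closing the scope ends the span
  }
}
{code}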

> Emit distributed tracing information from Solr
> --
>
> Key: SOLR-9641
> URL: https://issues.apache.org/jira/browse/SOLR-9641
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
> Fix For: master (7.0)
>
> Attachments: SOLR-9641.patch, SOLR-9641.patch
>
>
> While Solr already offers a few tools for exposing timing, this information 
> can be difficult to aggregate and analyze. By integrating distributed tracing 
> into Solr operations, we can gain new performance and behaviour insights.
> One such solution can be accomplished via Apache HTrace (incubating).
> (More rationale to follow.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-10-24 Thread Michael Nilsson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603212#comment-15603212
 ] 

Michael Nilsson commented on SOLR-8542:
---

Hey [~adeppa], 

So our plan is to get this merged into master, roughly Solr 7x, very soon.  We 
will then be working on backporting the commit/patch to 6x so it can be rolled 
out in a Solr release.  We would strongly recommend you upgrade to 6x to get 
access to a sturdier and more performant Solr version with access to new 
features like this plugin.

If upgrading to 6x is not possible, you could cherry-pick the commit into your 
own branch_5x Solr repo and resolve any conflicts.  However, there have been 
many changes on master that affect the code the plugin was built on, so the 
backporting would take some effort.  

-Mike

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9641) Emit distributed tracing information from Solr

2016-10-24 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603146#comment-15603146
 ] 

Mike Drob commented on SOLR-9641:
-

Which one is the "default"? I see {{./solr/example/exampledocs/solr.xml}} and 
{{./solr/server/solr/solr.xml}}


> Emit distributed tracing information from Solr
> --
>
> Key: SOLR-9641
> URL: https://issues.apache.org/jira/browse/SOLR-9641
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
> Fix For: master (7.0)
>
> Attachments: SOLR-9641.patch
>
>
> While Solr already offers a few tools for exposing timing, this information 
> can be difficult to aggregate and analyze. By integrating distributed tracing 
> into Solr operations, we can gain new performance and behaviour insights.
> One such solution can be accomplished via Apache HTrace (incubating).
> (More rationale to follow.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9641) Emit distributed tracing information from Solr

2016-10-24 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603141#comment-15603141
 ] 

Mike Drob commented on SOLR-9641:
-

bq. in CoreContainer there is one zkSys.getZkController().getNodeName() and one 
getZkController().getNodeName() call, they could be combined into one call with 
result kept in local variable or both could use or not use zkSys for clarity.
Done.
bq. In SearchHandler, how about also having trace scopes for the 
handleResponses and finishStage steps? Or if the intention is to only trace 
component methods which typically make requests to other shards maybe not trace 
the prepare step?
Hmm... yes, this could make sense. I didn't want to put too much in for the 
distributed request portion because that also gets traced on the remote peers. 
But you're right that something should be looked at here. Adding it around only 
handleResponse and finishStage seems insufficient? There are a lot of other 
things going on in the distributed branch there. Will come back to this later...
bq. In CoreAdminHandler for the callInfo.call(); there is the traceDescription 
+ " async" scope i.e. differentiation between sync and async. Just wondering if 
something similar might be useful for SearchHandler's without-debug and 
with-debug prepare and process scopes?
You mean labelling the debug scope with a debug description? Yea, that's 
doable. My async description was largely a hack, I think, and will probably go 
away in favor of something more generic.
bq. In the tests, curious why only [0] is being added in the getReceivers 
methods?
Because there was only one receiver configured per jetty. I'll change this to 
grab them all.
bq. In the tests, might the Random random() method be passed down to SpanId
Good idea. I'll make a utility method in Solr for now, but also filed HTRACE-391

> Emit distributed tracing information from Solr
> --
>
> Key: SOLR-9641
> URL: https://issues.apache.org/jira/browse/SOLR-9641
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
> Fix For: master (7.0)
>
> Attachments: SOLR-9641.patch
>
>
> While Solr already offers a few tools for exposing timing, this information 
> can be difficult to aggregate and analyze. By integrating distributed tracing 
> into Solr operations, we can gain new performance and behaviour insights.
> One such solution can be accomplished via Apache HTrace (incubating).
> (More rationale to follow.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 603 - Failure

2016-10-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/603/

No tests ran.

Build Log:
[...truncated 40573 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (36.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.0 MB in 0.03 sec (1043.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 64.6 MB in 0.06 sec (1155.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.3 MB in 0.06 sec (1179.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6088 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6088 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (257.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 39.4 MB in 0.04 sec (1029.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 139.2 MB in 0.13 sec (1073.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 148.3 MB in 0.14 sec (1090.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 ...

[jira] [Assigned] (SOLR-9441) Solr collection backup on HDFS can only be manipulated by the Solr process owner

2016-10-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-9441:
-

Assignee: Mark Miller

> Solr collection backup on HDFS can only be manipulated by the Solr process 
> owner
> 
>
> Key: SOLR-9441
> URL: https://issues.apache.org/jira/browse/SOLR-9441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: trunk
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>
> When we back up a Solr collection using the HDFS backup repository, the backup 
> folder (and the files) are created with permissions 755 (i.e. only the solr 
> process owner can delete/move the backup folder). This is inconvenient from the 
> user's perspective since the backup is essentially a full copy of the Solr 
> collection and hence manipulating it doesn't affect the Solr collection state 
> in any way.
> We should provide an option by which we can enable other users to manipulate 
> the backup folders. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr:master: SOLR-9634: correct name of deprecated/removed method in solr/CHANGES.txt

2016-10-24 Thread Alan Woodward
Oops, thanks Christine!

Alan Woodward
www.flax.co.uk


> On 24 Oct 2016, at 18:59, cpoersc...@apache.org wrote:
> 
> Repository: lucene-solr
> Updated Branches:
>  refs/heads/master 97339e2ca -> 37871de29
> 
> 
> SOLR-9634: correct name of deprecated/removed method in solr/CHANGES.txt
> 
> 
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/37871de2
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/37871de2
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/37871de2
> 
> Branch: refs/heads/master
> Commit: 37871de29bc5bd329eeb2f6867f3f8ca3b96e84f
> Parents: 97339e2
> Author: Christine Poerschke 
> Authored: Mon Oct 24 18:58:26 2016 +0100
> Committer: Christine Poerschke 
> Committed: Mon Oct 24 18:58:26 2016 +0100
> 
> --
> solr/CHANGES.txt | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/37871de2/solr/CHANGES.txt
> --
> diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
> index e223b4d..3bb28c4 100644
> --- a/solr/CHANGES.txt
> +++ b/solr/CHANGES.txt
> @@ -98,7 +98,7 @@ Upgrade Notes
> 
> * The create/deleteCollection methods on MiniSolrCloudCluster have been
>   deprecated.  Clients should instead use the CollectionAdminRequest API.  In
> -  addition, MiniSolrCloudCluster#uploadConfigSet(File, String) has been
> +  addition, MiniSolrCloudCluster#uploadConfigDir(File, String) has been
>   deprecated in favour of #uploadConfigSet(Path, String)
> 
> * The bin/solr.in.sh (bin/solr.in.cmd on Windows) is now completely commented 
> by default. Previously, this wasn't so,
> 



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3624 - Unstable!

2016-10-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3624/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExample

Error Message:
Expected 10 to be found in the testCloudExamplePrompt collection but only found 
6

Stack Trace:
java.lang.AssertionError: Expected 10 to be found in the testCloudExamplePrompt 
collection but only found 6
at 
__randomizedtesting.SeedInfo.seed([1ECA1D58E7792911:C5BBFD92D00CEC77]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExample(TestSolrCLIRunExample.java:457)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request

[JENKINS] Lucene-Solr-Tests-master - Build # 1448 - Unstable

2016-10-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1448/

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=7811, 
name=updateExecutor-1220-thread-4, state=RUNNABLE, group=TGRP-RecoveryZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7811, name=updateExecutor-1220-thread-4, 
state=RUNNABLE, group=TGRP-RecoveryZkTest]
at 
__randomizedtesting.SeedInfo.seed([94BC77D554E0DAEA:1CE8480FFA1CB712]:0)
Caused by: org.apache.solr.common.SolrException: Replica: 
https://127.0.0.1:38697/_pep/cj/collection1/ should have been marked under 
leader initiated recovery in ZkController but wasn't.
at __randomizedtesting.SeedInfo.seed([94BC77D554E0DAEA]:0)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryThread.run(LeaderInitiatedRecoveryThread.java:88)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11321 lines...]
   [junit4] Suite: org.apache.solr.cloud.RecoveryZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2/temp/solr.cloud.RecoveryZkTest_94BC77D554E0DAEA-001/init-core-data-001
   [junit4]   2> 840136 INFO  
(SUITE-RecoveryZkTest-seed#[94BC77D554E0DAEA]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 840137 INFO  
(SUITE-RecoveryZkTest-seed#[94BC77D554E0DAEA]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/_pep/cj
   [junit4]   2> 840138 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] o.a.s.c.ZkTestServer 
STARTING ZK TEST SERVER
   [junit4]   2> 840145 INFO  (Thread-2173) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 840145 INFO  (Thread-2173) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 840239 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] o.a.s.c.ZkTestServer 
start zk server on port:38997
   [junit4]   2> 840249 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 840251 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 840253 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 840254 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 840255 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 840256 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 840258 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 840259 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 840260 INFO  
(TEST-RecoveryZkTest.test-seed#[94BC77D554E0DAEA]) [] 
o.a.s.c.AbstractZkTestCase put 

[jira] [Resolved] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-9654.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.3

> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.
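As a quick sketch of how the new parameter is supplied (collection and field 
names here are illustrative, not from the patch), a JSON Facet request might 
look like:

{code}
curl http://localhost:8983/solr/techproducts/query -d 'q=*:*&json.facet=
{
  categories : {
    type : terms,
    field : cat,
    limit : 10,
    overrequest : 20
  }
}'
{code}

Per the description above, each shard would then be asked for extra buckets 
beyond the limit before merging; presumably setting overrequest:0 is what makes 
simple refinement cases reproducible in tests.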



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9688) Add a command-line tool to manage the snapshots functionality

2016-10-24 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9688:
--

 Summary: Add a command-line tool to manage the snapshots 
functionality
 Key: SOLR-9688
 URL: https://issues.apache.org/jira/browse/SOLR-9688
 Project: Solr
  Issue Type: Sub-task
Reporter: Hrishikesh Gadre
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602853#comment-15602853
 ] 

ASF subversion and git services commented on SOLR-9654:
---

Commit a27897d81bd2e2cfad1dad0a6cb5b94d638a6851 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a27897d ]

SOLR-9654: add overrequest param to JSON Facet API


> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9038) Support snapshot management functionality for a solr collection

2016-10-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602837#comment-15602837
 ] 

Varun Thacker commented on SOLR-9038:
-

Hi Hrishikesh,

Here is fine. I'll move it over to the ref guide once you have posted it here. 
Thanks!

> Support snapshot management functionality for a solr collection
> ---
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: David Smiley
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to another. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he can copy the 
> files associated with the snapshot and restore.
> Note that the Apache Blur project also provides a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]
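For readers unfamiliar with the Lucene mechanism being referenced, here is a 
minimal in-memory sketch of pinning a commit point with a snapshot deletion 
policy (SnapshotDeletionPolicy is shown for brevity; 
PersistentSnapshotIndexDeletionPolicy adds persistence across restarts):

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.SnapshotDeletionPolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class SnapshotSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory();
    SnapshotDeletionPolicy snapshots =
        new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer())
        .setIndexDeletionPolicy(snapshots);
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      writer.commit();
      // Pin the current commit point; its files survive later commits/merges.
      IndexCommit pinned = snapshots.snapshot();
      // ... an external tool (e.g. distcp) can now copy pinned.getFileNames() ...
      snapshots.release(pinned);    // un-pin once the copy is done
      writer.deleteUnusedFiles();   // let the old commit be cleaned up
    }
  }
}
{code}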



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 536 - Unstable!

2016-10-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/536/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestMaxPositionInOldIndex

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestMaxPositionInOldIndex_E08C73A66FC4722C-001\maxposindex-001:
 java.nio.file.NoSuchFileException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestMaxPositionInOldIndex_E08C73A66FC4722C-001\maxposindex-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestMaxPositionInOldIndex_E08C73A66FC4722C-001\maxposindex-001:
 java.nio.file.NoSuchFileException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestMaxPositionInOldIndex_E08C73A66FC4722C-001\maxposindex-001

at __randomizedtesting.SeedInfo.seed([E08C73A66FC4722C]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([9DAC53D20E69A725:6ADFBD8AC88108C3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1331)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 

[jira] [Commented] (SOLR-9654) add overrequest parameter to field faceting

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602804#comment-15602804
 ] 

ASF subversion and git services commented on SOLR-9654:
---

Commit 4a85163754e16b466cb4ef3dd0de92fe7d5b87d1 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a85163 ]

SOLR-9654: add overrequest param to JSON Facet API


> add overrequest parameter to field faceting
> ---
>
> Key: SOLR-9654
> URL: https://issues.apache.org/jira/browse/SOLR-9654
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
> Attachments: SOLR-9654.patch, SOLR-9654.patch
>
>
> Add an "overrequest" parameter that can control the amount of overrequest 
> done for distributed search.  Among other things, this parameter will aid in 
> testing simple refinement cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9634) Deprecate collection methods on MiniSolrCloudCluster

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602731#comment-15602731
 ] 

ASF subversion and git services commented on SOLR-9634:
---

Commit 16b4e220973763cf5bcfd0018555c32b6067ccff in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16b4e22 ]

SOLR-9634: correct name of deprecated/removed method in solr/CHANGES.txt


> Deprecate collection methods on MiniSolrCloudCluster
> 
>
> Key: SOLR-9634
> URL: https://issues.apache.org/jira/browse/SOLR-9634
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9634.patch
>
>
> MiniSolrCloudCluster has a bunch of createCollection() and deleteCollection() 
> special methods, which aren't really necessary given that we expose a 
> solrClient.  We should deprecate these, and point users to the 
> CollectionAdminRequest API instead.
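As a hedged sketch of the suggested replacement (collection and config names 
illustrative; assumes a running MiniSolrCloudCluster called cluster):

{code}
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

// was: a createCollection(...) convenience method on MiniSolrCloudCluster
CollectionAdminRequest
    .createCollection("test", "conf", 2, 1)   // name, configset, shards, replicas
    .process(cluster.getSolrClient());
{code}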



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9634) Deprecate collection methods on MiniSolrCloudCluster

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602728#comment-15602728
 ] 

ASF subversion and git services commented on SOLR-9634:
---

Commit 37871de29bc5bd329eeb2f6867f3f8ca3b96e84f in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=37871de ]

SOLR-9634: correct name of deprecated/removed method in solr/CHANGES.txt


> Deprecate collection methods on MiniSolrCloudCluster
> 
>
> Key: SOLR-9634
> URL: https://issues.apache.org/jira/browse/SOLR-9634
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9634.patch
>
>
> MiniSolrCloudCluster has a bunch of createCollection() and deleteCollection() 
> special methods, which aren't really necessary given that we expose a 
> solrClient.  We should deprecate these, and point users to the 
> CollectionAdminRequest API instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9038) Support snapshot management functionality for a solr collection

2016-10-24 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602681#comment-15602681
 ] 

Hrishikesh Gadre commented on SOLR-9038:


[~varunthacker] I think we need more details around snapshot management 
functionality. I have a write-up on that. Should I submit the text to this JIRA?


> Support snapshot management functionality for a solr collection
> ---
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: David Smiley
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to another. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he can copy the 
> files associated with the snapshot and restore.
> Note that the Apache Blur project also provides a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602641#comment-15602641
 ] 

Tomás Fernández Löbbe commented on SOLR-9687:
-

Thanks [~andycsolr], I'll take a look later today. If you can, this is how you 
can generate a patch: 
https://wiki.apache.org/solr/HowToContribute#Contributing_Code_.28Features.2C_Bug_Fixes.2C_Tests.2C_etc29
 You can also do a pull request if you are familiar with github, 
https://wiki.apache.org/solr/HowToContribute#Working_with_GitHub

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.
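For anyone reproducing this, a request along the following lines (field name 
illustrative) should show a document with value 0 missing from the Zero bucket 
when the intervals are declared in the order above:

{code}
q=*:*&rows=0&facet=true&facet.interval=value_i
&f.value_i.facet.interval.set={!key=Positive}(0,*]
&f.value_i.facet.interval.set={!key=Zero}[0,0]
&f.value_i.facet.interval.set={!key=Negative}(*,0)
{code}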



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-9687:
---

Assignee: Tomás Fernández Löbbe

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2216) Highlighter query exceeds maxBooleanClause limit due to range query

2016-10-24 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-2216.
---
Resolution: Implemented

The fix is in LUCENE-7520.

> Highlighter query exceeds maxBooleanClause limit due to range query
> ---
>
> Key: SOLR-2216
> URL: https://issues.apache.org/jira/browse/SOLR-2216
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4.1
> Environment: Linux solr-2.bizjournals.int 2.6.18-194.3.1.el5 #1 SMP 
> Thu May 13 13:08:30 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_21"
> Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
> JAVA_OPTS="-client -Dcom.sun.management.jmxremote=true 
> -Dcom.sun.management.jmxremote.port= 
> -Dcom.sun.management.jmxremote.authenticate=true 
> -Dcom.sun.management.jmxremote.access.file=/root/.jmxaccess 
> -Dcom.sun.management.jmxremote.password.file=/root/.jmxpasswd 
> -Dcom.sun.management.jmxremote.ssl=false -XX:+UseCompressedOops 
> -XX:MaxPermSize=512M -Xms10240M -Xmx15360M -XX:+UseParallelGC 
> -XX:+AggressiveOpts -XX:NewRatio=5"
> top - 11:38:49 up 124 days, 22:37,  1 user,  load average: 5.20, 4.35, 3.90
> Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
> Cpu(s): 47.5%us,  2.9%sy,  0.0%ni, 49.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  24679008k total, 18179980k used,  6499028k free,   125424k buffers
> Swap: 26738680k total,29276k used, 26709404k free,  8187444k cached
>Reporter: Ken Stanley
>
> For a full detail of the issue, please see the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201011.mbox/%3CAANLkTimE8z8yOni+u0Nsbgct1=ef7e+su0_waku2c...@mail.gmail.com%3E
> The nutshell version of the issue is that when I have a query that contains 
> ranges on a specific (non-highlighted) field, the highlighter component is 
> attempting to create a query that exceeds the value of maxBooleanClauses set 
> from solrconfig.xml. This is despite my explicit setting of hl.field, 
> hl.requireFieldMatch, and various other highlight options in the query. 
> As suggested by Koji in the follow-up response, I removed the range queries 
> from my main query, and SOLR and highlighting were happy to fulfill my 
> request. It was suggested that if removing the range queries worked that this 
> might potentially be a bug, hence my filing this JIRA ticket. For what it is 
> worth, if I move my range queries into an fq, I do not get the exception 
> about exceeding maxBooleanClauses, and I get the effect that I was looking 
> for. 
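A sketch of the workaround described above, with illustrative field names:

{code}
# Range in the main query: the highlighter rewrites it, which can trip maxBooleanClauses
q=title:solr AND price:[10 TO 500]&hl=true&hl.fl=title

# Range moved to a filter query: the highlighter only sees title:solr
q=title:solr&fq=price:[10 TO 500]&hl=true&hl.fl=title
{code}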



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Attachment: SOLR-9559.patch

Added initial test case. A parallel test case is still needed.

> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.
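A hedged sketch of the parallel wrapping mentioned above (the worker collection 
name and sort field are illustrative; the elided topic arguments are left as in 
the sample):

{code}
parallel(workerCollection,
         executor(threads=10, topic(storedExpressions, fl="expr", ...)),
         workers="4",
         sort="id asc")
{code}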



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9621) Remove several guava, apache commons calls in favor of java 8 alternatives

2016-10-24 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602551#comment-15602551
 ] 

David Smiley commented on SOLR-9621:


A single all-inclusive patch is easier; thanks.
Can you run "ant precommit" too please?

> Remove several guava, apache commons calls in favor of java 8 alternatives
> --
>
> Key: SOLR-9621
> URL: https://issues.apache.org/jira/browse/SOLR-9621
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
> Attachments: SOLR-9621.patch
>
>
> Now that Solr is against Java 8, we can take advantage of replacing some 
> guava and apache commons calls with JDK standards. I'd like to start by 
> replacing the following:
> com.google.common.base.Supplier  -> java.util.function.Supplier
> com.google.common.base.Predicate -> java.util.function.Predicate
> com.google.common.base.Charsets -> java.nio.charset.StandardCharsets
> org.apache.commons.codec.Charsets -> java.nio.charset.StandardCharsets
> com.google.common.collect.Ordering -> java.util.Comparator
> com.google.common.base.Joiner -> java.util.stream.Collectors::joining
> com.google.common.base.Function -> java.util.function.Function
> com.google.common.base.Preconditions::checkNotNull -> 
> java.util.Objects::requireNonNull
> com.google.common.base.Objects::equals -> java.util.Objects::equals
> com.google.common.base.Objects::hashCode -> java.util.Objects::hashCode
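A sketch of what a couple of these swaps look like in practice (class, method, 
and variable names are illustrative, not from the patch):

{code}
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

class Jdk8Swaps {
  static String joinParts(Object owner, List<String> parts) {
    // was: com.google.common.base.Preconditions.checkNotNull(owner, "owner");
    Objects.requireNonNull(owner, "owner");
    // was: com.google.common.base.Joiner.on(",").join(parts);
    return parts.stream().collect(Collectors.joining(","));
  }
}
{code}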



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7520) WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time

2016-10-24 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602482#comment-15602482
 ] 

Cao Manh Dat commented on LUCENE-7520:
--

Thanks, that will be nice.

> WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time
> 
>
> Key: LUCENE-7520
> URL: https://issues.apache.org/jira/browse/LUCENE-7520
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Cao Manh Dat
>Assignee: David Smiley
> Fix For: 6.3
>
> Attachments: LUCENE-7520.patch
>
>
> Currently WeightedSpanTermExtractor will rewrite MultiTermQuery regardless of 
> the field being requested for highlighting. In some cases, like SOLR-2216, it 
> can be costly and cause a TooManyClauses exception for no reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2216) Highlighter query exceeds maxBooleanClause limit due to range query

2016-10-24 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602475#comment-15602475
 ] 

Cao Manh Dat commented on SOLR-2216:


I think we can close this issue now; LUCENE-7520 was the root of the problem.

> Highlighter query exceeds maxBooleanClause limit due to range query
> ---
>
> Key: SOLR-2216
> URL: https://issues.apache.org/jira/browse/SOLR-2216
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4.1
> Environment: Linux solr-2.bizjournals.int 2.6.18-194.3.1.el5 #1 SMP 
> Thu May 13 13:08:30 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_21"
> Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
> JAVA_OPTS="-client -Dcom.sun.management.jmxremote=true 
> -Dcom.sun.management.jmxremote.port= 
> -Dcom.sun.management.jmxremote.authenticate=true 
> -Dcom.sun.management.jmxremote.access.file=/root/.jmxaccess 
> -Dcom.sun.management.jmxremote.password.file=/root/.jmxpasswd 
> -Dcom.sun.management.jmxremote.ssl=false -XX:+UseCompressedOops 
> -XX:MaxPermSize=512M -Xms10240M -Xmx15360M -XX:+UseParallelGC 
> -XX:+AggressiveOpts -XX:NewRatio=5"
> top - 11:38:49 up 124 days, 22:37,  1 user,  load average: 5.20, 4.35, 3.90
> Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
> Cpu(s): 47.5%us,  2.9%sy,  0.0%ni, 49.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  24679008k total, 18179980k used,  6499028k free,   125424k buffers
> Swap: 26738680k total,29276k used, 26709404k free,  8187444k cached
>Reporter: Ken Stanley
>
> For a full detail of the issue, please see the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201011.mbox/%3CAANLkTimE8z8yOni+u0Nsbgct1=ef7e+su0_waku2c...@mail.gmail.com%3E
> The nutshell version of the issue is that when I have a query that contains 
> ranges on a specific (non-highlighted) field, the highlighter component is 
> attempting to create a query that exceeds the value of maxBooleanClauses set 
> from solrconfig.xml. This is despite my explicit setting of hl.field, 
> hl.requireFieldMatch, and various other highlight options in the query. 
> As suggested by Koji in the follow-up response, I removed the range queries 
> from my main query, and SOLR and highlighting were happy to fulfill my 
> request. It was suggested that if removing the range queries worked that this 
> might potentially be a bug, hence my filing this JIRA ticket. For what it is 
> worth, if I move my range queries into an fq, I do not get the exception 
> about exceeding maxBooleanClauses, and I get the effect that I was looking 
> for. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9621) Remove several guava, apache commons calls in favor of java 8 alternatives

2016-10-24 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602465#comment-15602465
 ] 

Michael Braun commented on SOLR-9621:
-

Found the forbidden files. Should I commit just a patch on top of the previous 
one adding those lines, or include a new patch with the affected forbidden 
signature files as well?

> Remove several guava, apache commons calls in favor of java 8 alternatives
> --
>
> Key: SOLR-9621
> URL: https://issues.apache.org/jira/browse/SOLR-9621
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
> Attachments: SOLR-9621.patch
>
>
> Now that Solr is against Java 8, we can take advantage of replacing some 
> guava and apache commons calls with JDK standards. I'd like to start by 
> replacing the following:
> com.google.common.base.Supplier  -> java.util.function.Supplier
> com.google.common.base.Predicate -> java.util.function.Predicate
> com.google.common.base.Charsets -> java.nio.charset.StandardCharsets
> org.apache.commons.codec.Charsets -> java.nio.charset.StandardCharsets
> com.google.common.collect.Ordering -> java.util.Comparator
> com.google.common.base.Joiner -> java.util.stream.Collectors::joining
> com.google.common.base.Function -> java.util.function.Function
> com.google.common.base.Preconditions::checkNotNull -> 
> java.util.Objects::requireNonNull
> com.google.common.base.Objects::equals -> java.util.Objects::equals
> com.google.common.base.Objects::hashCode -> java.util.Objects::hashCode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9621) Remove several guava, apache commons calls in favor of java 8 alternatives

2016-10-24 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602465#comment-15602465
 ] 

Michael Braun edited comment on SOLR-9621 at 10/24/16 4:19 PM:
---

Found the forbidden files. Should I attach just a patch on top of the previous 
one adding those lines, or include a new patch with the affected forbidden 
signature files as well?


was (Author: mbraun688):
Found the forbidden files. Should I commit just a patch on top of the previous 
one adding those lines, or include a new patch with the affected forbidden 
signature files as well?

> Remove several guava, apache commons calls in favor of java 8 alternatives
> --
>
> Key: SOLR-9621
> URL: https://issues.apache.org/jira/browse/SOLR-9621
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
> Attachments: SOLR-9621.patch
>
>
> Now that Solr is against Java 8, we can take advantage of replacing some 
> guava and apache commons calls with JDK standards. I'd like to start by 
> replacing the following:
> com.google.common.base.Supplier  -> java.util.function.Supplier
> com.google.common.base.Predicate -> java.util.function.Predicate
> com.google.common.base.Charsets -> java.nio.charset.StandardCharsets
> org.apache.commons.codec.Charsets -> java.nio.charset.StandardCharsets
> com.google.common.collect.Ordering -> java.util.Comparator
> com.google.common.base.Joiner -> java.util.stream.Collectors::joining
> com.google.common.base.Function -> java.util.function.Function
> com.google.common.base.Preconditions::checkNotNull -> 
> java.util.Objects::requireNonNull
> com.google.common.base.Objects::equals -> java.util.Objects::equals
> com.google.common.base.Objects::hashCode -> java.util.Objects::hashCode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9629) Fix SolrJ warnings and use of deprecated methods in org.apache.solr.client.solrj.impl package

2016-10-24 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602457#comment-15602457
 ] 

Michael Braun commented on SOLR-9629:
-

Thanks [~elyograg], sorry about the wildcard there.

> Fix SolrJ warnings and use of deprecated methods in 
> org.apache.solr.client.solrj.impl package
> -
>
> Key: SOLR-9629
> URL: https://issues.apache.org/jira/browse/SOLR-9629
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (7.0)
>Reporter: Michael Braun
>Priority: Trivial
> Attachments: SOLR-9629.patch, SOLR-9629.patch
>
>
> There are some warnings (generic types and deprecation) that appear in the 
> org.apache.solr.client.solrj.impl package which can be easily fixed. Other 
> than some simple fixes, this includes a change to use MultipartEntityBuilder to 
> create the entity rather than using a deprecated constructor on MultipartEntity.
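A minimal sketch of the builder style that replaces the deprecated constructor 
(the part name and value are illustrative):

{code}
import org.apache.http.HttpEntity;
import org.apache.http.entity.mime.MultipartEntityBuilder;

class MultipartSketch {
  static HttpEntity buildEntity() {
    return MultipartEntityBuilder.create()
        .addTextBody("commit", "true")
        .build();
  }
}
{code}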



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9588) Review and remove Guava dependency from SolrJ

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602447#comment-15602447
 ] 

ASF subversion and git services commented on SOLR-9588:
---

Commit 6ce4bbd2c25310a10b1e46adcbc1c99da1e7878a in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ce4bbd ]

 SOLR-9588: removed guava dependency


> Review and remove Guava dependency from SolrJ
> -
>
> Key: SOLR-9588
> URL: https://issues.apache.org/jira/browse/SOLR-9588
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9588.patch, SOLR-9588.patch, SOLR-9588.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2216) Highlighter query exceeds maxBooleanClause limit due to range query

2016-10-24 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602400#comment-15602400
 ] 

Cassandra Targett commented on SOLR-2216:
-

[~caomanhdat], I added a link from this issue to LUCENE-7520 since I noticed 
this issue mentioned in the description there. Now that that issue is 
committed, do you believe we can close this? Or is there another test you'd 
like to run first?

> Highlighter query exceeds maxBooleanClause limit due to range query
> ---
>
> Key: SOLR-2216
> URL: https://issues.apache.org/jira/browse/SOLR-2216
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4.1
> Environment: Linux solr-2.bizjournals.int 2.6.18-194.3.1.el5 #1 SMP 
> Thu May 13 13:08:30 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_21"
> Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
> JAVA_OPTS="-client -Dcom.sun.management.jmxremote=true 
> -Dcom.sun.management.jmxremote.port= 
> -Dcom.sun.management.jmxremote.authenticate=true 
> -Dcom.sun.management.jmxremote.access.file=/root/.jmxaccess 
> -Dcom.sun.management.jmxremote.password.file=/root/.jmxpasswd 
> -Dcom.sun.management.jmxremote.ssl=false -XX:+UseCompressedOops 
> -XX:MaxPermSize=512M -Xms10240M -Xmx15360M -XX:+UseParallelGC 
> -XX:+AggressiveOpts -XX:NewRatio=5"
> top - 11:38:49 up 124 days, 22:37,  1 user,  load average: 5.20, 4.35, 3.90
> Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
> Cpu(s): 47.5%us,  2.9%sy,  0.0%ni, 49.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  24679008k total, 18179980k used,  6499028k free,   125424k buffers
> Swap: 26738680k total,29276k used, 26709404k free,  8187444k cached
>Reporter: Ken Stanley
>
> For a full detail of the issue, please see the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201011.mbox/%3CAANLkTimE8z8yOni+u0Nsbgct1=ef7e+su0_waku2c...@mail.gmail.com%3E
> The nutshell version of the issue is that when I have a query that contains 
> ranges on a specific (non-highlighted) field, the highlighter component is 
> attempting to create a query that exceeds the value of maxBooleanClauses set 
> from solrconfig.xml. This is despite my explicit setting of hl.field, 
> hl.requireFieldMatch, and various other highlight options in the query. 
> As suggested by Koji in the follow-up response, I removed the range queries 
> from my main query, and SOLR and highlighting were happy to fulfill my 
> request. It was suggested that if removing the range queries worked that this 
> might potentially be a bug, hence my filing this JIRA ticket. For what it is 
> worth, if I move my range queries into an fq, I do not get the exception 
> about exceeding maxBooleanClauses, and I get the effect that I was looking 
> for. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread Andy Chillrud (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602338#comment-15602338
 ] 

Andy Chillrud commented on SOLR-9687:
-

Couldn't figure out how to create a patch file, but I was able to resolve the 
issue in Solr 5.3.1 by modifying the getSortedIntervals() method. Replaced the 
last line of the method
{code}
 return o1.start.compareTo(o2.start);
{code}
with
{code}
int startComparison = o1.start.compareTo(o2.start);
if (startComparison == 0 && o1.startOpen != o2.startOpen) {
  // On equal start values, a closed start ("[") must sort before an
  // open start ("(") so that e.g. [0,0] is checked before (0,*].
  return o1.startOpen ? 1 : -1;
}
return startComparison;
{code}

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9648) Wrap all solr merge policies with SolrMergePolicy

2016-10-24 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602256#comment-15602256
 ] 

Keith Laban commented on SOLR-9648:
---

Hi Christine, let me try to address each of these:

bq. currently force-merge happens only when externally triggered
true

bq. the force-merge behaviour added by the wrap is (proposed to be) executed 
only on startup
this is just where force merge is explicitly called in an effort to upgrade 
segments

bq. the configured merge policy could (at least theoretically) disallow force 
merges
not true, this implementation will fall through to the delegate if there are no 
segments to upgrade


bq. The {{MAX_UPGRADES_AT_A_TIME = 5;}} sounds similar to what the 
MergeScheduler does (unless merge-on-startup bypasses the merge scheduler 
somehow?)
not sure if force merge abides by the MergeScheduler

bq. IndexWriter has a UNBOUNDED_MAX_MERGE_SEGMENTS==-1 which if made 
non-private could perhaps be used in the cmd.maxOptimizeSegments = 
Integer.MAX_VALUE;
could be an interesting approach

bq. UpgradeIndexMergePolicy also sounds very similar actually.
I saw this but chose not to use it because the implementation doesn't fall back 
to the delegating merge policy.

bq. The SolrMergePolicy has no solr dependencies, might it be renamed to 
something else and be part of the lucene code base?
That is true right now, but I hope we can use the same approach to add in hooks 
for other Solr-specific things if we need them later, and hopefully also use 
this for things like adding/removing docValues when the schema changes.




> Wrap all solr merge policies with SolrMergePolicy
> -
>
> Key: SOLR-9648
> URL: https://issues.apache.org/jira/browse/SOLR-9648
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
> Attachments: SOLR-9648-WIP.patch
>
>
> Wrap the entry point for all merge policies with a single entry point merge 
> policy for more fine grained control over merging with minimal configuration. 
> The main benefit will be to allow upgrading of segments on startup when 
> lucene version changes. Ideally we can use the same approach for adding and 
> removing of doc values when the schema changes and hopefully other index type 
> changes such as Trie -> Point types, or even analyzer changes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread Andy Chillrud (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Chillrud updated SOLR-9687:

Description: 
Using the interval facet definitions:
* \{!key=Positive}(0,*]
* \{!key=Zero}\[0,0]
* \{!key=Negative}(*,0)

A document with the value "0" in the numeric field the intervals are being 
applied to is not counted in the Zero interval. If I change the order of the 
definitions to Negative, Zero, Positive, the "0" value is correctly counted in 
the Zero interval.

Tracing into the 5.3.1 code the problem is in the 
org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
method sorts the interval definitions for a field by their starting value, it 
doesn't take into account the startOpen property. When two intervals have equal 
start values it needs to sort intervals where startOpen == false before 
intervals where startOpen == true.

In the accumIntervalWithValue() method it checks which intervals each document 
value should be considered a match for. It iterates through the sorted 
intervals and stops checking subsequent intervals when LOWER_THAN_START result 
is returned. If the Positive interval is sorted before the Zero interval it 
never checks a zero value against the Zero interval.

I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
code, and it looks like the same issue will occur in 6.2.1.






  was:
Using the interval facet definitions:
* \{!key=Negative}(*,0)
* \{!key=Zero}\[0,0]
* \{!key=Positive}(0,*]

A document with the value "0" in the numeric field the intervals are being 
applied to is not counted in the Zero interval. If I change the order of the 
definitions to Positive, Zero, Negative, the "0" value is correctly counted in 
the Zero interval.

Tracing into the 5.3.1 code, the problem is in the 
org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
method sorts the interval definitions for a field by their starting value, it 
doesn't take into account the startOpen property. When two intervals have equal 
start values it needs to sort intervals where startOpen == false before 
intervals where startOpen == true.

In the accumIntervalWithValue() method it checks which intervals each document 
value should be considered a match for. It iterates through the sorted 
intervals and stops checking subsequent intervals when a LOWER_THAN_START 
result is returned. If the Positive interval is sorted before the Zero interval 
it never checks a zero value against the Zero interval.

I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
code, and it looks like the same issue will occur in 6.2.1.







> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code, the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when a 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 184 - Still Unstable

2016-10-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/184/

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (61 > 60) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (61 > 60) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([9880255D9B5F0EC5:10D41A8735A3633D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2016-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602182#comment-15602182
 ] 

Barta Tamás commented on SOLR-7964:
---

I have just tested on 6.2.1 and it works well.

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barta Tamás updated SOLR-7964:
--
Comment: was deleted

(was: The patch worked on version 5.4.1 but doesn't work on 6.2.1.)

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5958) Document (and fix) numShards and router selection parameter in SolrCloud

2016-10-24 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-5958:

Component/s: documentation

> Document (and fix) numShards and router selection parameter in SolrCloud
> 
>
> Key: SOLR-5958
> URL: https://issues.apache.org/jira/browse/SOLR-5958
> Project: Solr
>  Issue Type: Task
>  Components: documentation, SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
>
> Right now numShards works in rather mysterious ways (unless you know how it 
> works). We should clearly document the following:
> * If we start SolrCloud with bootstrapping, without mentioning the numShards 
> parameter, it defaults to 1 and also defaults the router to 'implicit'.
> * Mentioning the numShards param defaults the router to compositeId.
> * Though the bootstrap does not treat numShards as a required param, the 
> Collection API does, and throws an error if we don't specify numShards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6566) Document query timeAllowed during term iterations

2016-10-24 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602132#comment-15602132
 ] 

Cassandra Targett commented on SOLR-6566:
-

[~anshumg] - what needs to be documented here? I see timeAllowed is included in 
the page 
https://cwiki.apache.org/confluence/display/solr/Common+Query+Parameters, but 
it's not clear from the reference here and the linked issue whether that needs 
to be changed, updated, or just added somewhere else.

> Document query timeAllowed during term iterations
> -
>
> Key: SOLR-6566
> URL: https://issues.apache.org/jira/browse/SOLR-6566
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>
> Need to document query timeouts during TermsEnumeration (SOLR-5986).
> Queries can now be made to time out during requests that involve 
> TermsEnumeration, as opposed to only doc collection, i.e. during search as 
> well as MLT handler usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2016-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602122#comment-15602122
 ] 

Barta Tamás commented on SOLR-7964:
---

The patch worked on version 5.4.1 but doesn't work on 6.2.1.

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6566) Document query timeAllowed during term iterations

2016-10-24 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6566:

Component/s: documentation

> Document query timeAllowed during term iterations
> -
>
> Key: SOLR-6566
> URL: https://issues.apache.org/jira/browse/SOLR-6566
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>
> Need to document query timeouts during TermsEnumeration (SOLR-5986).
> Queries can now be made to time out during requests that involve 
> TermsEnumeration, as opposed to only doc collection, i.e. during search as 
> well as MLT handler usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-24 Thread Andy Chillrud (JIRA)
Andy Chillrud created SOLR-9687:
---

 Summary: Values not assigned to all valid Interval Facet intervals 
in some cases
 Key: SOLR-9687
 URL: https://issues.apache.org/jira/browse/SOLR-9687
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: faceting
Affects Versions: 5.3.1
Reporter: Andy Chillrud


Using the interval facet definitions:
* \{!key=Negative}(*,0)
* \{!key=Zero}\[0,0]
* \{!key=Positive}(0,*]

A document with the value "0" in the numeric field the intervals are being 
applied to is not counted in the Zero interval. If I change the order of the 
definitions to Positive, Zero, Negative, the "0" value is correctly counted in 
the Zero interval.

Tracing into the 5.3.1 code, the problem is in the 
org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
method sorts the interval definitions for a field by their starting value, it 
doesn't take into account the startOpen property. When two intervals have equal 
start values it needs to sort intervals where startOpen == false before 
intervals where startOpen == true.

In the accumIntervalWithValue() method it checks which intervals each document 
value should be considered a match for. It iterates through the sorted 
intervals and stops checking subsequent intervals when a LOWER_THAN_START 
result is returned. If the Positive interval is sorted before the Zero interval 
it never checks a zero value against the Zero interval.

I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
code, and it looks like the same issue will occur in 6.2.1.








--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9371) Fix bin/solr script calculations - start/stop wait time and RMI_PORT

2016-10-24 Thread Ere Maijala (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602032#comment-15602032
 ] 

Ere Maijala commented on SOLR-9371:
---

Oops, sorry for being so vague. With "both" I meant "committing this" and 
"putting it on 6_2 as well". I can't help with Windows, and I think that's why 
SOLR-8065 got stalled, but if you agree on doing the Windows part separately, 
that would be great. It's not easy to maintain your own version of the solr 
script when it's being enhanced all the time, and this issue has existed for 
way too long already without anyone stepping up to do something about the 
Windows version.

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT
> 
>
> Key: SOLR-9371
> URL: https://issues.apache.org/jira/browse/SOLR-9371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9371.patch, SOLR-9371.patch
>
>
> The bin/solr script doesn't wait long enough for Solr to stop before it sends 
> the KILL signal to the process.  The start could use a longer wait too.
> Also, the RMI_PORT is calculated by simply prefixing the port number with a 
> "1" instead of adding 1.  If the solr port has five digits, then the rmi 
> port will be invalid, because it will be greater than 65535.
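>
> A quick illustration of why the prefixing breaks for five-digit ports 
> (hypothetical numbers; the offset shown is one possible fix, not necessarily 
> the patch):
> {code}
> int solrPort = 18983;                             // a five-digit Solr port
> int prefixed = Integer.parseInt("1" + solrPort);  // 118983 -- far above the TCP max of 65535
> int shifted = solrPort + 10000;                   // 28983  -- stays a valid port here, but any
> // computed RMI port still needs a <= 65535 sanity check
> {code}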



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7462) Faster search APIs for doc values

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602025#comment-15602025
 ] 

ASF subversion and git services commented on LUCENE-7462:
-

Commit 97339e2cacc308c3689d1cd16dfbc44ebea60788 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=97339e2 ]

LUCENE-7462: Fix LegacySortedSetDocValuesWrapper to reset `upTo` when calling 
`advanceExact`.


> Faster search APIs for doc values
> -
>
> Key: LUCENE-7462
> URL: https://issues.apache.org/jira/browse/LUCENE-7462
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0)
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7462-advanceExact.patch, LUCENE-7462.patch
>
>
> While the iterator API helps deal with sparse doc values more efficiently, it 
> also makes search-time operations more costly. For instance, the old 
> random-access API allowed computing facets on a given segment without any 
> conditionals, by just incrementing the counter at index {{ordinal+1}} while 
> the new API requires to advance the iterator if necessary and then check 
> whether it is exactly on the right document or not.
> Since it is very common for fields to exist across most documents, I suspect 
> codecs will keep an internal structure that is similar to the current codec 
> in the dense case, by having a dense representation of the data and just 
> making the iterator skip over the minority of documents that do not have a 
> value.
> I suggest that we add APIs that make things cheaper at search time. For 
> instance in the case of SORTED doc values, it could look like 
> {{LegacySortedDocValues}} with the additional restriction that documents can 
> only be consumed in order. Codecs that can implement this API efficiently 
> would hide it behind a {{SortedDocValues}} adapter, and then at search time 
> facets and comparators (which liked the {{LegacySortedDocValues}} API better) 
> would either unwrap or hide the SortedDocValues they got behind a more 
> random-access API (which would only happen in the truly sparse case if the 
> codec optimizes the dense case).
> One challenge is that we already use the same idea for hiding single-valued 
> impls behind multi-valued impls, so we would need to enforce the order in 
> which the wrapping needs to happen. At first sight, it seems that it would be 
> best to do the single-value-behind-multi-value-API wrapping above the 
> random-access-behind-iterator-API wrapping. The complexity of 
> wrapping/unwrapping in the right order could be contained in the 
> {{DocValues}} helper class.
> I think this change would also simplify search-time consumption of doc 
> values, which currently needs to spend several lines of code positioning the 
> iterator every time it needs to do something interesting with doc values.
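>
> A sketch of the two consumption styles being contrasted, counting ordinals 
> into a facet array (illustrative variables: legacy is a LegacySortedDocValues, 
> iter a master-branch SortedDocValues, counts an int[1 + valueCount]):
> {code}
> // Old random-access style: no conditional, missing docs return ord -1,
> // so index 0 of counts accumulates the "missing" bucket.
> for (int doc = 0; doc < maxDoc; doc++) {
>   counts[1 + legacy.getOrd(doc)]++;
> }
>
> // Iterator style: advance first, then only count if the iterator
> // actually landed on this document.
> for (int doc = 0; doc < maxDoc; doc++) {
>   if (iter.advanceExact(doc)) {
>     counts[1 + iter.ordValue()]++;
>   }
> }
> {code}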



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9371) Fix bin/solr script calculations - start/stop wait time and RMI_PORT

2016-10-24 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602006#comment-15602006
 ] 

Erick Erickson commented on SOLR-9371:
--

Can you help with the Windows scripting? If so, please attach a patch.

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT
> 
>
> Key: SOLR-9371
> URL: https://issues.apache.org/jira/browse/SOLR-9371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9371.patch, SOLR-9371.patch
>
>
> The bin/solr script doesn't wait long enough for Solr to stop before it sends 
> the KILL signal to the process.  The start could use a longer wait too.
> Also, the RMI_PORT is calculated by simply prefixing the port number with a 
> "1" instead of adding 1.  If the solr port has five digits, then the rmi 
> port will be invalid, because it will be greater than 65535.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7520) WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time

2016-10-24 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-7520.
--
   Resolution: Fixed
Fix Version/s: 6.3

Thanks Dat!

I tweaked the early-return logic to happen a bit earlier to avoid the needless 
invocation of getLeafReader() which can be rather expensive (sometimes needing 
to index the content).

I put the CHANGES.txt entry into "Improvements" because it's debatable if this 
is a bug fix or an optimization. 

> WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time
> 
>
> Key: LUCENE-7520
> URL: https://issues.apache.org/jira/browse/LUCENE-7520
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Cao Manh Dat
>Assignee: David Smiley
> Fix For: 6.3
>
> Attachments: LUCENE-7520.patch
>
>
> Currently WeightedSpanTermExtractor will rewrite a MultiTermQuery regardless 
> of the field being requested for highlighting. In some cases, like SOLR-2216, 
> it can be costly and cause a TooManyClauses exception for no reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7520) WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601981#comment-15601981
 ] 

ASF subversion and git services commented on LUCENE-7520:
-

Commit 167013c44102e1d5679235b94370f59dcbc92726 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=167013c ]

LUCENE-7520: WSTE shouldn't expand MTQ if its field doesn't match filter

(cherry picked from commit e1b0693)


> WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time
> 
>
> Key: LUCENE-7520
> URL: https://issues.apache.org/jira/browse/LUCENE-7520
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Cao Manh Dat
>Assignee: David Smiley
> Attachments: LUCENE-7520.patch
>
>
> Currently WeightedSpanTermExtractor will rewrite a MultiTermQuery regardless 
> of the field being requested for highlighting. In some cases, like SOLR-2216, 
> it can be costly and cause a TooManyClauses exception for no reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7520) WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601978#comment-15601978
 ] 

ASF subversion and git services commented on LUCENE-7520:
-

Commit e1b06938b4b0442b18878e59fde57e29ca641499 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1b0693 ]

LUCENE-7520: WSTE shouldn't expand MTQ if its field doesn't match filter


> WeightedSpanTermExtractor should not rewrite MultiTermQuery all the time
> 
>
> Key: LUCENE-7520
> URL: https://issues.apache.org/jira/browse/LUCENE-7520
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Cao Manh Dat
>Assignee: David Smiley
> Attachments: LUCENE-7520.patch
>
>
> Currently WeightedSpanTermExtractor will rewrite a MultiTermQuery regardless 
> of the field being requested for highlighting. In some cases, like SOLR-2216, 
> it can be costly and cause a TooManyClauses exception for no reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9533) Reload core config when a core is reloaded

2016-10-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596545#comment-15596545
 ] 

Joel Bernstein edited comment on SOLR-9533 at 10/24/16 1:09 PM:


I've been looking for a SolrCloud hook for the solrcore.properties but there 
does not appear to be one. I suspect this is by design, as it's called an 
*external* properties file in the documentation.



was (Author: joel.bernstein):
I've been looking for a SolrCloud hook for the solrcore.properties but there 
does not appear to one. I suspect this is by design as it's called an 
*extermal* properties file in the documentation.


> Reload core config when a core is reloaded
> --
>
> Key: SOLR-9533
> URL: https://issues.apache.org/jira/browse/SOLR-9533
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Gethin James
>Assignee: Joel Bernstein
> Attachments: SOLR-9533.patch, SOLR-9533.patch
>
>
> I am reloading a core using {{coreContainer.reload(coreName)}}.  However it 
> doesn't seem to reload the configuration.  I have changed solrcore.properties 
> on the file system but the change doesn't get picked up.
> The coreContainer.reload method seems to call:
> {code}
> CoreDescriptor cd = core.getCoreDescriptor();
> {code}
> I can't see a way to reload CoreDescriptor, so it isn't picking up my 
> changes.  It simply reuses the existing CoreDescriptor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9686) Adding the book "Relevant Search" to the book resource list. https://www.manning.com/books/relevant-search

2016-10-24 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601888#comment-15601888
 ] 

Alexandre Rafalovitch commented on SOLR-9686:
-

We already have SOLR-7565 for this. I also asked on that issue whether Manning 
can create a dedicated discount for people getting to the book from the Solr 
resource page. Either way, it does not affect whether the book is listed; I 
just know that publishers sometimes make this kind of offer.

> Adding the book "Relevant Search" to the book resource list. 
> https://www.manning.com/books/relevant-search
> --
>
> Key: SOLR-9686
> URL: https://issues.apache.org/jira/browse/SOLR-9686
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
> Environment: Book resource list
>Reporter: Christopher Kaufmann
>Priority: Minor
>  Labels: documentation
>
> Relevant Search demystifies relevance work and shows you that a search engine 
> is a programmable relevance framework. You'll learn how to apply 
> Elasticsearch or Solr to your business's unique ranking problems. The book 
> demonstrates how to program relevance and how to incorporate secondary data 
> sources, taxonomies, text analytics, and personalization. By the end, you’ll 
> be able to achieve a virtuous cycle of provable, measurable relevance 
> improvements over a search product’s lifetime.
> *Here is the link to the book on our website: 
> https://www.manning.com/books/relevant-search



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2087) Dismax handler not handling +/- correctly

2016-10-24 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2087.
---
Resolution: Fixed

I certainly don't see any viable next actions on this issue. 

> Dismax handler not handling +/- correctly
> -
>
> Key: SOLR-2087
> URL: https://issues.apache.org/jira/browse/SOLR-2087
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 1.4
>Reporter: Gabriel Weinberg
>
> If I do a query like: i'm a walking contradiction it matches pf as 
> text:"i'm_a a_walking walking contradiction"^2.0, and it matches fine.
> If I do a query like: i'm a +walking contradiction it matches pf as 
> text:"i'm_a a_+walking +walking contradiction"^2.0 and doesn't match at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7506) Roll over GC logs by default via bin/solr scripts

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-7506.
---
Resolution: Fixed

This is now in. An improvement could be to keep the logs in {{archive/}} for a 
few days instead of deleting them on every startup...

> Roll over GC logs by default via bin/solr scripts
> -
>
> Key: SOLR-7506
> URL: https://issues.apache.org/jira/browse/SOLR-7506
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Shalin Shekhar Mangar
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7506.patch, SOLR-7506.patch
>
>
> The Oracle JDK supports rolling over GC logs. I propose to add the following 
> to the solr.in.{sh,cmd} scripts to enable it by default:
> {code}
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M
> {code}
> Unfortunately, the JDK doesn't have any option to append to the existing log 
> instead of overwriting it, so the previous log is lost on restart. Maybe we 
> can have the bin/solr script roll that after the process is killed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7506) Roll over GC logs by default via bin/solr scripts

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601860#comment-15601860
 ] 

ASF subversion and git services commented on SOLR-7506:
---

Commit ed203978fcb953f8196317c68eae18342f95cc44 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed20397 ]

SOLR-7506: Roll over GC logs by default via bin/solr scripts

(cherry picked from commit ef57374)


> Roll over GC logs by default via bin/solr scripts
> -
>
> Key: SOLR-7506
> URL: https://issues.apache.org/jira/browse/SOLR-7506
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Shalin Shekhar Mangar
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7506.patch, SOLR-7506.patch
>
>
> The Oracle JDK supports rolling over GC logs. I propose to add the following 
> to the solr.in.{sh,cmd} scripts to enable it by default:
> {code}
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M
> {code}
> Unfortunately, the JDK doesn't have any option to append to the existing log 
> instead of overwriting it, so the previous log is lost on restart. Maybe we 
> can have the bin/solr script roll that after the process is killed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3318) LBHttpSolrServer should allow to specify a preferred server for a query

2016-10-24 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601852#comment-15601852
 ] 

Alexandre Rafalovitch commented on SOLR-3318:
-

[~shalinmangar] Thanks for the catch. I reviewed the LBHttpSolrClient and I 
see no indication of server stickiness being implemented.

Do you think it is a viable feature still? One that could be marked newdev, as 
it has a patch that needs to be adapted to master?

> LBHttpSolrServer should allow to specify a preferred server for a query
> ---
>
> Key: SOLR-3318
> URL: https://issues.apache.org/jira/browse/SOLR-3318
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.0-ALPHA
>Reporter: Martin Grotzke
>Priority: Minor
> Attachments: SOLR-3318.git.patch
>
>
> For a user query we make several solr queries that differ only slightly and 
> therefore should use/reuse objects cached from the first query (we're using a 
> custom request handler and custom caches).
> Thus such subsequent queries should hit the same solr server.
> The implemented solution looks like this:
> * The client obtains a live SolrServer from LBHttpSolrServer
> * The client provides this SolrServer as preferred server for a query
> * If the preferred server is no longer alive the request is retried on 
> another live server
> * Everything else follows the existing logic:
> ** After live servers are exhausted, any servers previously marked as dead 
> will be tried before failing the request
> ** If no live servers are found a SolrServerException is thrown
> The implementation is also [on 
> github|https://github.com/magro/lucene-solr/commit/a75aef3d].
> Mailing list thread: 
> http://lucene.472066.n3.nabble.com/LBHttpSolrServer-to-query-a-preferred-server-tt3884140.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9686) Adding the book "Relevant Search" to the book resource list. https://www.manning.com/books/relevant-search

2016-10-24 Thread Christopher Kaufmann (JIRA)
Christopher Kaufmann created SOLR-9686:
--

 Summary: Adding the book "Relevant Search" to the book resource 
list. https://www.manning.com/books/relevant-search
 Key: SOLR-9686
 URL: https://issues.apache.org/jira/browse/SOLR-9686
 Project: Solr
  Issue Type: Wish
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
 Environment: Book resource list
Reporter: Christopher Kaufmann
Priority: Minor


Relevant Search demystifies relevance work and shows you that a search engine 
is a programmable relevance framework. You'll learn how to apply Elasticsearch 
or Solr to your business's unique ranking problems. The book demonstrates how 
to program relevance and how to incorporate secondary data sources, taxonomies, 
text analytics, and personalization. By the end, you’ll be able to achieve a 
virtuous cycle of provable, measurable relevance improvements over a search 
product’s lifetime.

*Here is the link to the book on our website: 
https://www.manning.com/books/relevant-search



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9371) Fix bin/solr script calculations - start/stop wait time and RMI_PORT

2016-10-24 Thread Ere Maijala (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601833#comment-15601833
 ] 

Ere Maijala commented on SOLR-9371:
---

Please do both. I've been hoping to get this fixed for over a year (see the 
linked issue SOLR-8065).

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT
> 
>
> Key: SOLR-9371
> URL: https://issues.apache.org/jira/browse/SOLR-9371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9371.patch, SOLR-9371.patch
>
>
> The bin/solr script doesn't wait long enough for Solr to stop before it sends 
> the KILL signal to the process.  The start could use a longer wait too.
> Also, the RMI_PORT is calculated by simply prefixing the port number with a 
> "1" instead of adding 1.  If the solr port has five digits, then the rmi 
> port will be invalid, because it will be greater than 65535.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7506) Roll over GC logs by default via bin/solr scripts

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601817#comment-15601817
 ] 

ASF subversion and git services commented on SOLR-7506:
---

Commit ef5737466e4597c21c80b167f1db295c081578d4 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ef57374 ]

SOLR-7506: Roll over GC logs by default via bin/solr scripts


> Roll over GC logs by default via bin/solr scripts
> -
>
> Key: SOLR-7506
> URL: https://issues.apache.org/jira/browse/SOLR-7506
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Shalin Shekhar Mangar
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7506.patch, SOLR-7506.patch
>
>
> The Oracle JDK supports rolling over GC logs. I propose to add the following 
> to the solr.in.{sh,cmd} scripts to enable it by default:
> {code}
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M
> {code}
> Unfortunately, the JDK doesn't have any option to append to the existing log 
> instead of overwriting it, so the previous log is lost on restart. Maybe we 
> can have the bin/solr script roll that after the process is killed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9518) Kerberos Delegation Tokens doesn't work without a chrooted ZK

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601821#comment-15601821
 ] 

ASF subversion and git services commented on SOLR-9518:
---

Commit a973ca1752fccecee8db7d2a7a09ded7159e4c58 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a973ca1 ]

SOLR-9518: Kerberos Delegation Tokens don't work without a chrooted ZK


> Kerberos Delegation Tokens doesn't work without a chrooted ZK
> -
>
> Key: SOLR-9518
> URL: https://issues.apache.org/jira/browse/SOLR-9518
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: SOLR-9518-6x.patch, SOLR-9518.patch, SOLR-9518.patch, 
> SOLR-9518.patch
>
>
> Starting up Solr 6.2.0 (with delegation tokens enabled) without a chrooted 
> ZK, I see the following in the startup logs:
> {code}
> 2016-09-15 07:08:22.453 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
> 2016-09-15 07:08:22.477 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1927)
> at 
> org.apache.solr.security.KerberosPlugin.init(KerberosPlugin.java:138)
> at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:316)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:442)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:158)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:134)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:856)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1379)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1341)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:772)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:517)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
> at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:499)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:147)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:180)
> at 
> org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:458)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:64)
> at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:610)
> at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:529)
> {code}
> To me, it seems that adding a check for the presence of a chrooted ZK, and 
> calculating the relative ZK path only if it exists, should suffice. I'll add 
> a patch for this shortly.
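>
> A minimal sketch of that check (hypothetical code, not the attached patch):
> {code}
> String zkHost = System.getProperty("zkHost");  // e.g. "zk1:2181,zk2:2181/solr"
> int slash = zkHost.indexOf('/');               // the chroot starts at the first '/'
> if (slash >= 0) {
>   String chroot = zkHost.substring(slash);     // only substring when a chroot exists,
>   // ... compute the relative ZK path ...      // avoiding the substring(-1) call that
> }                                              // throws StringIndexOutOfBoundsException
> {code}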



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601820#comment-15601820
 ] 

ASF subversion and git services commented on SOLR-9506:
---

Commit 265d425b00181dd384fa963e46dc35b92b7e02c0 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=265d425 ]

SOLR-9506: cache IndexFingerprint for each segment


> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506.patch, 
> SOLR-9506.patch, SOLR-9506_POC.patch, SOLR-9506_final.patch
>
>
> The IndexFingerprint is cached per index searcher, which is quite useless 
> during high-throughput indexing. If the fingerprint is cached per segment it 
> will make computing the fingerprint vastly more efficient.
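>
> A minimal sketch of the per-segment idea (hypothetical, not the committed 
> patch; computeFingerprint stands in for the actual computation):
> {code}
> private final Map<Object, IndexFingerprint> perSegment = new ConcurrentHashMap<>();
>
> IndexFingerprint getFingerprint(LeafReader leaf, long maxVersion) throws IOException {
>   // Key on the segment core so entries survive searcher reopens over
>   // unchanged segments, unlike a per-searcher cache.
>   Object key = leaf.getCoreCacheKey();
>   IndexFingerprint f = perSegment.get(key);
>   if (f == null) {
>     f = computeFingerprint(leaf, maxVersion);  // assumed helper doing the real work
>     perSegment.put(key, f);
>   }
>   return f;
> }
> {code}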



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9263) New Admin gui fails to parse local params in the "Raw Query Parameters" query field

2016-10-24 Thread Brian Sawyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601818#comment-15601818
 ] 

Brian Sawyer commented on SOLR-9263:


I can confirm that the issue seems to be fixed in 6.2.

> New Admin gui fails to parse local params in the "Raw Query Parameters" query 
> field
> ---
>
> Key: SOLR-9263
> URL: https://issues.apache.org/jira/browse/SOLR-9263
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.0.1
>Reporter: Brian Sawyer
>Assignee: Alexandre Rafalovitch
> Attachments: SOLR-9263.patch
>
>
> Including any local params in the "Raw Query Parameters" query field, such as 
> for a rerank query 
> {noformat}rq={!rerank reRankQuery=$rqq reRankDocs=1000 
> reRankWeight=3}&rqq=(hi+hello+hey+hiya){noformat} results in an error:
> {noformat}
> org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
> Expected identifier at pos 20 str='{!rerank reRankQuery'
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:219)
> {noformat}
> It's clear that the resulting URL is malformed:
> {noformat}
> http://localhost:8983/solr/collection1/select?fl=name,%20score=on=greetings={!rerank%20reRankQuery=(hi+hello+hey+hiya)=json
> {noformat}
> This appears to be due to javascript code naively splitting on '='.
> /solr/webapp/web/js/angular/controllers/query.js
> {code}
> if ($scope.rawParams) {
>   var rawParams = $scope.rawParams.split(/[&\n]/);
>   for (var i in rawParams) {
>     var param = rawParams[i];
>     // naive split: a value that itself contains '=' (e.g. local params) is cut apart
>     var parts = param.split("=");
>   }
> }
> {code}
> I've attached a possible patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9255) Rename SOLR_AUTHENTICATION_CLIENT_CONFIGURER -> SOLR_AUTHENTICATION_CLIENT_BUILDER

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9255.
---
Resolution: Fixed

> Rename SOLR_AUTHENTICATION_CLIENT_CONFIGURER -> 
> SOLR_AUTHENTICATION_CLIENT_BUILDER
> --
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: master (7.0)
>Reporter: Martin Löper
>Assignee: Jan Høydahl
> Fix For: master (7.0)
>
> Attachments: SOLR-9255.patch, SOLR-9255.patch, SOLR-9255.patch
>
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9255) Rename SOLR_AUTHENTICATION_CLIENT_CONFIGURER -> SOLR_AUTHENTICATION_CLIENT_BUILDER

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601804#comment-15601804
 ] 

ASF subversion and git services commented on SOLR-9255:
---

Commit 61e180b7efa965edd4979b15ee56d946d50f8221 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=61e180b ]

SOLR-9255: Rename SOLR_AUTHENTICATION_CLIENT_CONFIGURER -> 
SOLR_AUTHENTICATION_CLIENT_BUILDER


> Rename SOLR_AUTHENTICATION_CLIENT_CONFIGURER -> 
> SOLR_AUTHENTICATION_CLIENT_BUILDER
> --
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: master (7.0)
>Reporter: Martin Löper
>Assignee: Jan Høydahl
> Fix For: master (7.0)
>
> Attachments: SOLR-9255.patch, SOLR-9255.patch, SOLR-9255.patch
>
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7036) Faster method for group.facet

2016-10-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601789#comment-15601789
 ] 

Yonik Seeley commented on SOLR-7036:


Looking at the code added to UnInvertedField for this, I realized that it's 
just faceting by field X and then finding the number of unique values in field 
Y for each bucket.

In JSON syntax:
{code}
json.facet={
  myfacet : {
type: terms,
field : X,
facet : {
  ycount : "unique(Y)"  // this will be the grouped count
}
  }
}
{code}

Which makes me wonder if we can utilize the facet module more (a minimal SolrJ 
sketch of issuing the request above follows below). Some advantages of doing so:
- support for other field types w/o insanity (numerics, multiValued, etc)
- distributed support
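
A minimal SolrJ sketch of issuing the request above ("X", "Y" and the 
collection URL are placeholders, not real fields):
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

SolrClient client =
    new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("q", "*:*");
params.set("json.facet",
    "{myfacet:{type:terms,field:X,facet:{ycount:\"unique(Y)\"}}}");
QueryResponse rsp = client.query(params);  // ycount in each bucket is the grouped count
{code}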


> Faster method for group.facet
> -
>
> Key: SOLR-7036
> URL: https://issues.apache.org/jira/browse/SOLR-7036
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 4.10.3
>Reporter: Jim Musil
>Assignee: Erick Erickson
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, 
> SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, SOLR-7036_zipped.zip, 
> jstack-output.txt, performance.txt, source_for_patch.zip
>
>
> This is a patch that speeds up the performance of requests made with 
> group.facet=true. The original code that collects and counts unique facet 
> values for each group does not use the same improved field cache methods that 
> have been added for normal faceting in recent versions.
> Specifically, this approach leverages the UninvertedField class which 
> provides a much faster way to look up docs that contain a term. I've also 
> added a simple grouping map so that when a term is found for a doc, it can 
> quickly look up the group to which it belongs.
> Group faceting was very slow for our data set and when the number of docs or 
> terms was high, the latency spiked to multiple second requests. This solution 
> provides better overall performance -- from an average of 54ms to 32ms. It 
> also dropped our slowest performing queries way down -- from 6012ms to 991ms.
> I also added a few tests.
> I added an additional parameter so that you can choose to use this method or 
> the original. Add group.facet.method=fc to use the improved method or 
> group.facet.method=original which is the default if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9680) Better error messages in SolrCLI when authentication required

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-9680:
-

Assignee: Jan Høydahl

> Better error messages in SolrCLI when authentication required
> -
>
> Key: SOLR-9680
> URL: https://issues.apache.org/jira/browse/SOLR-9680
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>
> Currently the status tool does not distinguish between 
> Authentication/Authorization errors and other IO errors and just throws a 
> generic Exception with the 401 HTML output from Jetty:
> {noformat}
> $ bin/solr status
> Found 1 Solr nodes: 
> Solr process 4332 running on port 8983
> ERROR: Failed to get system information from http://localhost:8983/solr due 
> to: org.apache.http.client.ClientProtocolException: Expected JSON response 
> from server but received: 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/admin/info/system. Reason:
> require authentication
> 
> 
> Typically, this indicates a problem with the Solr server; check the Solr 
> server logs for more information.
> {noformat}
> Instead, the tool should exit with a clear message that authentication is 
> required, and the status tool should throw a security related exception that 
> tool consumers (such as assertTool) can detect. Due to this {{assert -u}} 
> also fails when Solr is password protected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9610) New AssertTool in SolrCLI

2016-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9610:
--
Attachment: SOLR-9610-timeout.patch

New patch:
* Uses an HttpHead request instead of URLConnection in assertSolrNotRunning, via 
a new utility method {{attemptHttpHead}} (a rough sketch follows below)
* Hardened the way auth problems are detected in getJson(). Sending a HEAD 
request was a bad idea here, since even a HEAD request creates a collection when 
aimed at a Collections API URL; now catching {{ClientProtocolException}} instead.
* Tested with and without authentication for the commands create, healthcheck, 
status, assert & delete
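For readers following along, here is a rough sketch of what an 
{{attemptHttpHead}}-style helper could look like with Apache HttpClient. The 
method name comes from the notes above; the body is illustrative, not the 
actual patch:

{code:java}
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HeadCheck {

  /**
   * Issues a HEAD request and returns the HTTP status code,
   * or -1 if the server could not be reached at all.
   */
  public static int attemptHttpHead(String url) {
    try (CloseableHttpClient client = HttpClients.createDefault();
         CloseableHttpResponse rsp = client.execute(new HttpHead(url))) {
      return rsp.getStatusLine().getStatusCode();
    } catch (Exception e) {
      // Connection refused, timeout, etc.: treat as "Solr not running".
      return -1;
    }
  }
}
{code}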

> New AssertTool in SolrCLI
> -
>
> Key: SOLR-9610
> URL: https://issues.apache.org/jira/browse/SOLR-9610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9610-timeout.patch, SOLR-9610-timeout.patch, 
> SOLR-9610.patch, SOLR-9610.patch
>
>
> Moving some code from SOLR-7826 over here. This is a new AssertTool which can 
> be used to assert that we are (not) root user and more. Usage:
> {noformat}
> usage: bin/solr assert [-m <message>] [-e] [-rR] [-s <url>] [-S <url>] [-u
> <directory>] [-x <directory>] [-X <directory>]
>  -e,--exitcode                Return an exit code instead of printing
>                               error message on assert fail.
>  -help                        Print this message
>  -m,--message <message>       Exception message to be used in place of
>                               the default error message
>  -R,--not-root                Asserts that we are NOT the root user
>  -r,--root                    Asserts that we are the root user
>  -S,--not-started <url>       Asserts that Solr is NOT started on a
>                               certain URL
>  -s,--started <url>           Asserts that Solr is started on a certain
>                               URL
>  -u,--same-user <directory>   Asserts that we run as same user that owns
>                               <directory>
>  -x,--exists <directory>      Asserts that directory <directory> exists
>  -X,--not-exists <directory>  Asserts that directory <directory> does NOT
>                               exist
> {noformat}
> This can then also be used from bin/solr through e.g. {{run_tool assert -r}}, 
> or from Java code via static methods such as 
> {{AssertTool.assertSolrRunning(String url)}}.
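For example, a hypothetical invocation combining several of the assertions 
above (directory and URL invented; whether assertions may be combined in one 
call depends on the final implementation):

{noformat}
$ bin/solr assert --not-root --exists /var/solr/data --started http://localhost:8983/solr
{noformat}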






[jira] [Closed] (SOLR-9647) CollectionsAPIDistributedZkTest got stuck, reproduces failure

2016-10-24 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev closed SOLR-9647.
--
Resolution: Cannot Reproduce

> CollectionsAPIDistributedZkTest got stuck, reproduces failure
> -
>
> Key: SOLR-9647
> URL: https://issues.apache.org/jira/browse/SOLR-9647
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-9647.patch
>
>
>  I had to kill 
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1129/ just 
> because it "Took 1 day 12 hr on lucene".
>[junit4] HEARTBEAT J0 PID(30506@lucene1-us-west): 2016-10-15T00:08:30, 
> stalled for 48990s at: CollectionsAPIDistributedZkTest.test
>[junit4] HEARTBEAT J0 PID(30506@lucene1-us-west): 2016-10-15T00:09:30, 
> stalled for 49050s at: CollectionsAPIDistributedZkTest.test
>  The test just got stuck. When I ran it locally, it passed from Eclipse but 
> failed when run from the command line via ant. 






[jira] [Updated] (SOLR-9518) Kerberos Delegation Tokens doesn't work without a chrooted ZK

2016-10-24 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9518:
---
Attachment: SOLR-9518-6x.patch

Thanks Noble. Here's the patch for branch_6x.

> Kerberos Delegation Tokens doesn't work without a chrooted ZK
> -
>
> Key: SOLR-9518
> URL: https://issues.apache.org/jira/browse/SOLR-9518
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: SOLR-9518-6x.patch, SOLR-9518.patch, SOLR-9518.patch, 
> SOLR-9518.patch
>
>
> When starting up Solr 6.2.0 (with delegation tokens enabled) against a ZK 
> connect string without a chroot, I see the following in the startup logs:
> {code}
> 2016-09-15 07:08:22.453 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
> 2016-09-15 07:08:22.477 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1927)
> at 
> org.apache.solr.security.KerberosPlugin.init(KerberosPlugin.java:138)
> at 
> org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:316)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:442)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:158)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:134)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:137)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:856)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1379)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1341)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:772)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:517)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
> at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:499)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:147)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:180)
> at 
> org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:458)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:64)
> at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:610)
> at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:529)
> {code}
> To me, it seems that adding a check for the presence of a ZK chroot, and 
> calculating the relative ZK path only when one exists, should suffice. I'll 
> add a patch for this shortly.
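A minimal sketch of such a guard, with invented names (the real change would 
live in KerberosPlugin.init): calling substring(indexOf('/')) unconditionally 
throws StringIndexOutOfBoundsException: -1 for unchrooted connect strings, 
which matches the trace above.

{code:java}
public class ChrootCheck {

  /**
   * Returns the chroot portion of a ZK connect string, or "/" when none
   * is present, instead of blindly calling substring(indexOf('/')).
   */
  static String chrootOf(String zkHost) {
    int idx = zkHost.indexOf('/');
    return idx == -1 ? "/" : zkHost.substring(idx);
  }

  public static void main(String[] args) {
    System.out.println(chrootOf("zk1:2181,zk2:2181/solr")); // prints /solr
    System.out.println(chrootOf("zk1:2181,zk2:2181"));      // prints /
  }
}
{code}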






[jira] [Commented] (SOLR-9506) cache IndexFingerprint for each segment

2016-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15601675#comment-15601675
 ] 

ASF subversion and git services commented on SOLR-9506:
---

Commit 184b0f221559eaed5f273b1907e8af07bc95fec9 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=184b0f2 ]

SOLR-9506: cache IndexFingerprint for each segment


> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506.patch, SOLR-9506.patch, SOLR-9506.patch, 
> SOLR-9506.patch, SOLR-9506_POC.patch, SOLR-9506_final.patch
>
>
> The IndexFingerprint is cached per index searcher, which is of little use 
> during high-throughput indexing because every newly opened searcher discards 
> the cached value. Caching the fingerprint per segment instead makes computing 
> it vastly more efficient, since unchanged segments can reuse their values.
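A minimal sketch of the idea, with invented names and a Long standing in for 
org.apache.solr.update.IndexFingerprint (see the actual commit for the real 
implementation):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Illustrative only: one fingerprint per segment, keyed by the segment's
 * core cache key, which stays stable across reopened searchers for
 * unchanged segments. Only newly flushed or merged segments pay the
 * computation cost.
 */
public class PerSegmentFingerprintCache {

  private final Map<Object, Long> perSegment = new ConcurrentHashMap<>();

  /** Computes the fingerprint at most once per live segment. */
  public long fingerprintFor(Object segmentCoreKey, Function<Object, Long> compute) {
    return perSegment.computeIfAbsent(segmentCoreKey, compute);
  }
}
{code}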


