[jira] [Resolved] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-11724.
--
Resolution: Fixed

Thanks Amrit

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch
>
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into the source, stop indexing, and then 
> start CDCR, bootstrapping only copies the index to the leader node of each shard of the 
> collection; the followers never receive the documents/index until at least one more 
> document is inserted on the source, which propagates to the target and triggers 
> index replication to the followers on the target collection.
> This behavior needs to be addressed properly, either at the target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: BugFix release 7.3.1

2018-04-24 Thread Varun Thacker
Thanks Dat! It's committed

However, I was not able to run the full tests successfully. I ran into
thousands of these messages, which seem to be fixed by
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commitdiff;h=a4789db,
although https://issues.apache.org/jira/browse/SOLR-12200 is still open.


   [junit4]   2> 1947362 ERROR (OverseerAutoScalingTriggerThread-72082625281785866-dummy.host.com:8984_solr-n_06) [n:dummy.host.com:8984_solr] o.a.s.c.a.OverseerTriggerThread A ZK error has occurred

   [junit4]   2> java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json

   [junit4]   2> at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:183)

   [junit4]   2> at org.apache.solr.client.solrj.cloud.autoscaling.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:78)

   [junit4]   2> at org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:126)

   [junit4]   2> at java.lang.Thread.run(Thread.java:745)

   [junit4]   2> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json

On Tue, Apr 24, 2018 at 7:35 PM, Đạt Cao Mạnh 
wrote:

> Hi Varun,
>
> Go ahead, I will start the build tomorrow.
>
> On Wed, Apr 25, 2018 at 1:21 AM Varun Thacker  wrote:
>
>> Hi Dat,
>>
>> What timeline do you have in mind for creating a Solr 7.3.1 RC?
>>
>> I want to backport SOLR-12065 / SOLR-11724 and I can wrap it up today
>>
>> On Mon, Apr 23, 2018 at 1:01 AM, Alan Woodward 
>> wrote:
>>
>>> Done
>>>
>>> > On 23 Apr 2018, at 04:12, Đạt Cao Mạnh 
>>> wrote:
>>> >
>>> > Hi Alan,
>>> >
>>> > Can you backport LUCENE-8254 to branch_7_3?
>>>
>>>
>>>
>>>
>>


[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451682#comment-16451682
 ] 

ASF subversion and git services commented on SOLR-11724:


Commit 8fa7687413558b3bc65cbbbeb722a21314187e6a in lucene-solr's branch 
refs/heads/branch_7_3 from [~varun_saxena]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8fa7687 ]

SOLR-11724: Cdcr bootstrapping should ensure that non-leader replicas 
sync with the leader





[jira] [Commented] (SOLR-12065) Restore replica always in buffering state

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451681#comment-16451681
 ] 

ASF subversion and git services commented on SOLR-12065:


Commit 8894db1a727dff5a52444f9b4a5838995a8f7513 in lucene-solr's branch 
refs/heads/branch_7_3 from [~varun_saxena]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8894db1 ]

SOLR-12065: A successful restore collection should mark the shard state as 
active and not buffering


> Restore replica always in buffering state
> -
>
> Key: SOLR-12065
> URL: https://issues.apache.org/jira/browse/SOLR-12065
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.4, master (8.0), 7.3.1
>
> Attachments: 12065.patch, 12605UTLogs.txt.zip, SOLR-12065.patch, 
> SOLR-12065.patch, SOLR-12065.patch, SOLR-12065.patch, logs_and_metrics.zip, 
> restore_snippet.log
>
>
> Steps to reproduce:
>  
>  - 
> [http://localhost:8983/solr/admin/collections?action=CREATE=test_backup=1=1]
>  - curl [http://127.0.0.1:8983/solr/test_backup/update?commit=true] -H 
> 'Content-type:application/json' -d '
>  [ \{"id" : "1"}
> ]' 
>  - 
> [http://localhost:8983/solr/admin/collections?action=BACKUP=test_backup=test_backup=/Users/varunthacker/backups]
>  - 
> [http://localhost:8983/solr/admin/collections?action=RESTORE=test_backup=/Users/varunthacker/backups=test_restore]
>  * curl [http://127.0.0.1:8983/solr/test_restore/update?commit=true] -H 
> 'Content-type:application/json' -d '
>  [
> {"id" : "2"}
> ]'
>  * Snippet when you try adding a document
> {code:java}
> INFO - 2018-03-07 22:48:11.555; [c:test_restore s:shard1 r:core_node22 
> x:test_restore_shard1_replica_n21] 
> org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit 
> while not ACTIVE - state: BUFFERING replay: false
> INFO - 2018-03-07 22:48:11.556; [c:test_restore s:shard1 r:core_node22 
> x:test_restore_shard1_replica_n21] 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor;
>  [test_restore_shard1_replica_n21] webapp=/solr path=/update 
> params={commit=true}{add=[2 (1594320896973078528)],commit=} 0 4{code}
>  * If you check "TLOG.state" via [http://localhost:8983/solr/admin/metrics], 
> it is always 1 (BUFFERING)
>  
>  
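The archived repro links above lost their query-string separators, so here is a sketch of how those Collections API calls might be reconstructed. The parameter names (`name`, `numShards`, `replicationFactor`, `collection`, `location`) are assumptions based on the standard Solr Collections API, not recovered from the archive:

```python
from urllib.parse import urlencode

BASE = "http://localhost:8983/solr/admin/collections"

def collections_api(action, **params):
    # Build a Collections API URL. Parameter names are assumptions
    # (standard Collections API), not recovered from the archived links.
    return BASE + "?" + urlencode({"action": action, **params})

# 1. Create the source collection (shard/replica counts per the repro)
create = collections_api("CREATE", name="test_backup",
                         numShards=1, replicationFactor=1)

# 2. Index one document and commit -- the curl step in the repro:
#    curl http://127.0.0.1:8983/solr/test_backup/update?commit=true \
#         -H 'Content-type:application/json' -d '[{"id" : "1"}]'

# 3. Back up, then restore into a new collection
backup = collections_api("BACKUP", name="test_backup", collection="test_backup",
                         location="/Users/varunthacker/backups")
restore = collections_api("RESTORE", name="test_backup",
                          location="/Users/varunthacker/backups",
                          collection="test_restore")

print(create)
# → http://localhost:8983/solr/admin/collections?action=CREATE&name=test_backup&numShards=1&replicationFactor=1
```

After the restore, the bug shows up on step 2 repeated against `test_restore`: the commit is ignored because the shard is stuck in the BUFFERING state.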






[jira] [Updated] (SOLR-12065) Restore replica always in buffering state

2018-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12065:
-
Fix Version/s: master (8.0)




[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11724:
-
Fix Version/s: 7.3.1
   master (8.0)
   7.4




[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11724:
-
Affects Version/s: (was: 7.1)




[jira] [Updated] (SOLR-12065) Restore replica always in buffering state

2018-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12065:
-
Fix Version/s: 7.3.1




[jira] [Resolved] (SOLR-12261) Deleting collections should sync aliases before prematurely failing when alias is deleted

2018-04-24 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-12261.
-
   Resolution: Fixed
Fix Version/s: 7.4

> Deleting collections should sync aliases before prematurely failing when 
> alias is deleted
> -
>
> Key: SOLR-12261
> URL: https://issues.apache.org/jira/browse/SOLR-12261
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12261.patch
>
>
> In SOLR-11218 [~erickerickson] ensured that we can't delete a collection that 
> is referenced by an alias. However, it may be that the alias has been deleted but 
> the node servicing the request doesn't know about this yet. It should call 
> AliasesManager.update() first (which now sync()'s with ZK).
> I believe this is the cause of some sporadic failures in 
> org.apache.solr.cloud.AliasIntegrationTest#tearDown, which deletes the alias 
> and then all collections.
> It's debatable whether this is an improvement or a bug fix. Sadly, most of SolrCloud 
> simply seems to operate this way despite being eventually consistent, so 
> users of SolrCloud may have to add sleep()s after Solr admin 
> calls :-/
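The sleep() workaround mentioned in the last paragraph is usually better expressed as a bounded poll. A hypothetical sketch, where the `check` callable stands in for whatever condition needs to propagate (for example, that an alias no longer appears in LISTALIASES output):

```python
import time

def wait_until(check, timeout=30.0, interval=0.5):
    """Poll `check` until it returns True; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

For example, `wait_until(lambda: "a1" not in fetch_aliases())` after a DELETEALIAS call, where `fetch_aliases` is a hypothetical helper that reads `/admin/collections?action=LISTALIASES`. This bounds the wait instead of guessing a fixed sleep duration.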






[jira] [Commented] (SOLR-12261) Deleting collections should sync aliases before prematurely failing when alias is deleted

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451575#comment-16451575
 ] 

ASF subversion and git services commented on SOLR-12261:


Commit 5a89f604cdfc6fde68c8e6a5fdfb01f5ac3f732d in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a89f60 ]

SOLR-12261: Collection deletion's check for alias membership should
 sync() aliases with ZK before throwing an error.

(cherry picked from commit 1370f6b)





[jira] [Commented] (SOLR-12261) Deleting collections should sync aliases before prematurely failing when alias is deleted

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451571#comment-16451571
 ] 

ASF subversion and git services commented on SOLR-12261:


Commit 1370f6b520787efdef982620708d0fc070268b6b in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1370f6b ]

SOLR-12261: Collection deletion's check for alias membership should
 sync() aliases with ZK before throwing an error.





[jira] [Updated] (LUCENE-8274) Using Lucene 7.2.1 on Android: Didn't find class "java.lang.ClassValue" issue;

2018-04-24 Thread zhangzhenan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhenan updated LUCENE-8274:

Description: 
Since android-26 began supporting Java 8, I set out to replace Lucene 4.7.2 with 7.2.1 in my Android project, specifying Java 8 for compilation:

compileOptions {
 sourceCompatibility JavaVersion.VERSION_1_8
 targetCompatibility JavaVersion.VERSION_1_8
}

Referencing 7.2.1:

dependencies {
 compile 'com.android.support:multidex:1.0.1'
 compile 'org.apache.lucene:lucene-core:7.2.1'
 compile 'org.apache.lucene:lucene-analyzers-common:7.2.1'
 compile 'org.apache.lucene:lucene-analyzers-smartcn:7.2.1'
 compile 'org.apache.lucene:lucene-queries:7.2.1'
 compile 'org.apache.lucene:lucene-queryparser:7.2.1'
}

Building the APK works fine, but at runtime I get Didn't find class "java.lang.ClassValue"

04-25 10:20:15.129 13251 13273 E AndroidRuntime: 
java.lang.NoClassDefFoundError: Failed resolution of: Ljava/lang/ClassValue;
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.analysis.standard.StandardAnalyzer.createComponents(StandardAnalyzer.java:103)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.analysis.AnalyzerWrapper.createComponents(AnalyzerWrapper.java:134)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:198)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:240)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.newFieldQuery(QueryParserBase.java:475)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.getFieldQuery(QueryParserBase.java:467)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.MultiFieldQueryParser.getFieldQuery(MultiFieldQueryParser.java:154)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:830)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:469)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:355)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:244)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:215)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:109)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
com.android.globalsearch.model.index.ContactsIndexHelper.getQuery(ContactsIndexHelper.java:713)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
com.android.globalsearch.model.index.IndexHelper.initQuery(IndexHelper.java:496)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
com.android.globalsearch.model.task.search.SearchLocalTask$4.run(SearchLocalTask.java:342)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.lang.Thread.run(Thread.java:764)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: Caused by: 
java.lang.ClassNotFoundException: Didn't find class "java.lang.ClassValue" on 
path: DexPathList[[zip file 
"/data/app/com.android.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/base.apk"],nativeLibraryDirectories=[/data/app/com.android.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/lib/arm,
 
/data/app/com.andriod.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/base.apk!/lib/armeabi,
 /system/lib, /vendor/lib]]
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.lang.ClassLoader.loadClass(ClassLoader.java:379)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.lang.ClassLoader.loadClass(ClassLoader.java:312)
 04-25 10:20:15.129 13251 13273 E AndroidRuntime: ... 19 more

 

This is the same problem as this Stack Overflow question:

https://stackoverflow.com/questions/47657615/lucene-android-noclassdeffounderror


[jira] [Updated] (LUCENE-8274) Using Lucene 7.2.1 on Android: Didn't find class "java.lang.ClassValue" issue;

2018-04-24 Thread zhangzhenan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhenan updated LUCENE-8274:

Summary: Using Lucene 7.2.1 on Android: Didn't find class "java.lang.ClassValue" issue; 
 (was: Using Lucene 7.2.1 on Android: Didn't find class "java.lang.ClassValue" issue)

> Using Lucene 7.2.1 on Android: Didn't find class "java.lang.ClassValue" issue;
> --
>
> Key: LUCENE-8274
> URL: https://issues.apache.org/jira/browse/LUCENE-8274
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 7.2.1
> Environment: android-26
> android studio
>  
> buildToolsVersion "28.0.0-rc1"
>  
> minSdkVersion 26
> targetSdkVersion 26
>  
> compileOptions {
>  sourceCompatibility JavaVersion.VERSION_1_8
>  targetCompatibility JavaVersion.VERSION_1_8
> }
>  
>Reporter: zhangzhenan
>Priority: Major
>  Labels: android8.0
>

[jira] [Created] (LUCENE-8274) Using Lucene 7.2.1 on Android: Didn't find class "java.lang.ClassValue" issue

2018-04-24 Thread zhangzhenan (JIRA)
zhangzhenan created LUCENE-8274:
---

 Summary: Lucene 7.2.1 on Android fails with Didn't find class 
"java.lang.ClassValue"
 Key: LUCENE-8274
 URL: https://issues.apache.org/jira/browse/LUCENE-8274
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 7.2.1
 Environment: android-26

Android Studio

buildToolsVersion "28.0.0-rc1"
minSdkVersion 26
targetSdkVersion 26

compileOptions {
 sourceCompatibility JavaVersion.VERSION_1_8
 targetCompatibility JavaVersion.VERSION_1_8
}

 
Reporter: zhangzhenan


Since android-26 added Java 8 support, I set about replacing Lucene 4.7.2 with 7.2.1 in my Android project, and specified Java 8 for compilation:

compileOptions {
 sourceCompatibility JavaVersion.VERSION_1_8
 targetCompatibility JavaVersion.VERSION_1_8
}

Depending on 7.2.1:

dependencies {
 compile 'com.android.support:multidex:1.0.1'
 compile 'org.apache.lucene:lucene-core:7.2.1'
 compile 'org.apache.lucene:lucene-analyzers-common:7.2.1'
 compile 'org.apache.lucene:lucene-analyzers-smartcn:7.2.1'
 compile 'org.apache.lucene:lucene-queries:7.2.1'
 compile 'org.apache.lucene:lucene-queryparser:7.2.1'

}

The APK builds without any problems, but at runtime I get Didn't find class "java.lang.ClassValue":

04-25 10:20:15.129 13251 13273 E AndroidRuntime: 
java.lang.NoClassDefFoundError: Failed resolution of: Ljava/lang/ClassValue;
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.analysis.standard.StandardAnalyzer.createComponents(StandardAnalyzer.java:103)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.analysis.AnalyzerWrapper.createComponents(AnalyzerWrapper.java:134)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:198)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:240)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.newFieldQuery(QueryParserBase.java:475)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.getFieldQuery(QueryParserBase.java:467)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.MultiFieldQueryParser.getFieldQuery(MultiFieldQueryParser.java:154)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:830)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:469)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:355)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:244)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:215)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:109)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
com.android.globalsearch.model.index.ContactsIndexHelper.getQuery(ContactsIndexHelper.java:713)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
com.android.globalsearch.model.index.IndexHelper.initQuery(IndexHelper.java:496)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
com.android.globalsearch.model.task.search.SearchLocalTask$4.run(SearchLocalTask.java:342)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.lang.Thread.run(Thread.java:764)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: Caused by: 
java.lang.ClassNotFoundException: Didn't find class "java.lang.ClassValue" on 
path: DexPathList[[zip file 
"/data/app/com.android.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/base.apk"],nativeLibraryDirectories=[/data/app/com.android.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/lib/arm,
 
/data/app/com.andriod.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/base.apk!/lib/armeabi,
 /system/lib, /vendor/lib]]
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.lang.ClassLoader.loadClass(ClassLoader.java:379)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at 
java.lang.ClassLoader.loadClass(ClassLoader.java:312)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: ... 19 more
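The trace suggests Lucene's analysis classes touch java.lang.ClassValue, which this Android runtime does not provide. As a hedged sketch (a hypothetical helper, not part of Lucene or Android), an app can probe for the class up front and fail fast with a clear message instead of a NoClassDefFoundError deep inside the query parser:

```java
// Hypothetical guard: detect whether java.lang.ClassValue exists on the
// current runtime before calling into Lucene's analysis/query-parsing code.
public class ClassValueProbe {
    static boolean hasClassValue() {
        try {
            // Reflection avoids a hard link against the missing class.
            Class.forName("java.lang.ClassValue");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (!hasClassValue()) {
            // On runtimes that lack ClassValue, surface the problem eagerly
            // with an actionable message instead of a runtime link error.
            throw new IllegalStateException(
                "This runtime does not provide java.lang.ClassValue; "
                + "Lucene 7.x analysis will fail here.");
        }
        System.out.println("ClassValue present");
    }
}
```

On a desktop JVM (Java 7+) the probe succeeds; on the Android API level from this report it would throw the IllegalStateException instead.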



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

Re: BugFix release 7.3.1

2018-04-24 Thread Đạt Cao Mạnh
Hi Varun,

Go ahead, I will start the build tomorrow.
On Wed, Apr 25, 2018 at 1:21 AM Varun Thacker  wrote:

> Hi Dat,
>
> What's the timeline in mind that you have for creating a Solr 7.3.1 RC?
>
> I want to backport SOLR-12065 / SOLR-11724 and I can wrap it up today
>
> On Mon, Apr 23, 2018 at 1:01 AM, Alan Woodward 
> wrote:
>
>> Done
>>
>> > On 23 Apr 2018, at 04:12, Đạt Cao Mạnh  wrote:
>> >
>> > Hi Alan,
>> >
>> > Can you backport LUCENE-8254 to branch_7_3?
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1792 - Unstable!

2018-04-24 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space


[jira] [Commented] (SOLR-12266) Add discrete Fourier transform Stream Evaluators

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451506#comment-16451506
 ] 

ASF subversion and git services commented on SOLR-12266:


Commit 9201de7621fc289baf397e92c97c40080d45a1dd in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9201de7 ]

SOLR-12266: Add discrete Fourier transform Stream Evaluators


> Add discrete Fourier transform Stream Evaluators
> 
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch, SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.
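The forward/inverse round trip the ticket describes can be illustrated with a naive O(n^2) transform. This is a self-contained sketch of the underlying math only, not Solr's implementation of the fft/ifft evaluators:

```java
import java.util.Arrays;

// Naive discrete Fourier transform over real input. fft returns the
// spectrum interleaved as [re0, im0, re1, im1, ...]; ifft recovers the
// original real samples (real part of the inverse transform).
public class DftDemo {
    static double[] fft(double[] x) {
        int n = x.length;
        double[] out = new double[2 * n];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double ang = -2 * Math.PI * k * t / n;  // forward kernel e^{-2*pi*i*k*t/n}
                re += x[t] * Math.cos(ang);
                im += x[t] * Math.sin(ang);
            }
            out[2 * k] = re;
            out[2 * k + 1] = im;
        }
        return out;
    }

    static double[] ifft(double[] f) {
        int n = f.length / 2;
        double[] out = new double[n];
        for (int t = 0; t < n; t++) {
            double re = 0;
            for (int k = 0; k < n; k++) {
                double ang = 2 * Math.PI * k * t / n;  // inverse kernel e^{+2*pi*i*k*t/n}
                re += f[2 * k] * Math.cos(ang) - f[2 * k + 1] * Math.sin(ang);
            }
            out[t] = re / n;  // 1/n normalization; input was real
        }
        return out;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] back = ifft(fft(x));
        for (int i = 0; i < x.length; i++) {
            if (Math.abs(x[i] - back[i]) > 1e-9) throw new AssertionError();
        }
        System.out.println(Arrays.toString(back));
    }
}
```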






[jira] [Commented] (SOLR-12266) Add discrete Fourier transform Stream Evaluators

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451497#comment-16451497
 ] 

ASF subversion and git services commented on SOLR-12266:


Commit c5a1738151ef183be3ea20d10d0c897252d9e6ff in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5a1738 ]

SOLR-12266: Add discrete Fourier transform Stream Evaluators


> Add discrete Fourier transform Stream Evaluators
> 
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch, SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.






[jira] [Updated] (SOLR-12266) Add discrete Fourier transform Stream Evaluators

2018-04-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12266:
--
Attachment: SOLR-12266.patch

> Add discrete Fourier transform Stream Evaluators
> 
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch, SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 563 - Still Unstable!

2018-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/563/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180425125411184, index.20180425125435076, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180425125411184, 
index.20180425125435076, index.properties, replication.properties, 
snapshot_metadata]
at 
__randomizedtesting.SeedInfo.seed([F53250C681B0F12:D4F825CA6D3366A1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:968)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:939)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:915)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21900 - Unstable!

2018-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21900/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([1981903BF1DA6793:4A38D28B13CBF269]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 14498 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-Tests-master - Build # 2505 - Unstable

2018-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2505/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([938B9F4863607436]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.ZkControllerTest: 
1) 

[jira] [Commented] (SOLR-12270) Improve "Your Max Processes.." WARN messages while starting Solr's examples

2018-04-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451389#comment-16451389
 ] 

Varun Thacker commented on SOLR-12270:
--

We should print these warnings only once when running the Solr examples as well.

We could also probably condense them a little. Something like:
{code:java}
*** [WARN] *** 

Your open file limit is currently 10240 and your Max Processes Limit is 
currently 1418.

Please set both to 65000 to avoid operational disruption.

If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
your profile or solr.in.sh

*** [END WARN] ***{code}

> Improve "Your Max Processes.." WARN messages while starting Solr's examples
> ---
>
> Key: SOLR-12270
> URL: https://issues.apache.org/jira/browse/SOLR-12270
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> If I start Solr 7.3 I am greeted with this very VERBOSE message
>  
> {code:java}
> ~/solr-7.3.0$ ./bin/solr  start -e cloud -noprompt -z localhost:2181 -m 2g
> *** [WARN] *** Your open file limit is currently 256. 
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
> your profile or solr.in.sh
> *** [WARN] ***  Your Max Processes Limit is currently 1418.
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
> your profile or solr.in.sh
> Welcome to the SolrCloud example!
> Starting up 2 Solr nodes for your example SolrCloud cluster.
> Creating Solr home directory 
> /Users/varunthacker/solr-7.3.0/example/cloud/node1/solr
> Cloning /Users/varunthacker/solr-7.3.0/example/cloud/node1 into
>    /Users/varunthacker/solr-7.3.0/example/cloud/node2
> Starting up Solr on port 8983 using command:
> "bin/solr" start -cloud -p 8983 -s "example/cloud/node1/solr" -z 
> localhost:2181 -m 2g
> *** [WARN] *** Your open file limit is currently 10240. 
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
> your profile or solr.in.sh
> *** [WARN] ***  Your Max Processes Limit is currently 1418.
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
> your profile or solr.in.sh
> Waiting up to 180 seconds to see Solr running on port 8983 [-] 
> Started Solr server on port 8983 (pid=82037). Happy searching!
>       
> Starting up Solr on port 7574 using command:
> "bin/solr" start -cloud -p 7574 -s "example/cloud/node2/solr" -z 
> localhost:2181 -m 2g
> *** [WARN] *** Your open file limit is currently 10240. 
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
> your profile or solr.in.sh
> *** [WARN] ***  Your Max Processes Limit is currently 1418.
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
> your profile or solr.in.sh
> Waiting up to 180 seconds to see Solr running on port 7574 [\] 
> Started Solr server on port 7574 (pid=82143). Happy searching!
> INFO  - 2018-04-24 16:07:10.566; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:2181 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'
> Enabling auto soft-commits with maxTime 3 secs using the Config API
> POSTing request to Config API: 
> http://localhost:8983/solr/gettingstarted/config
> {"set-property":{"updateHandler.autoSoftCommit.maxTime":"3000"}}
> Successfully set-property updateHandler.autoSoftCommit.maxTime to 3000
> SolrCloud example running, please visit: http://localhost:8983/solr
> {code}
> Do we really need so many duplicate warnings for the same message? 






[jira] [Created] (SOLR-12270) Improve "Your Max Processes.." WARN messages while starting Solr's examples

2018-04-24 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12270:


 Summary: Improve "Your Max Processes.." WARN messages while 
starting Solr's examples
 Key: SOLR-12270
 URL: https://issues.apache.org/jira/browse/SOLR-12270
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


If I start Solr 7.3 I am greeted with this very VERBOSE message:

{code:java}
~/solr-7.3.0$ ./bin/solr  start -e cloud -noprompt -z localhost:2181 -m 2g

*** [WARN] *** Your open file limit is currently 256. 
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

*** [WARN] ***  Your Max Processes Limit is currently 1418.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

Welcome to the SolrCloud example!

Starting up 2 Solr nodes for your example SolrCloud cluster.

Creating Solr home directory /Users/varunthacker/solr-7.3.0/example/cloud/node1/solr
Cloning /Users/varunthacker/solr-7.3.0/example/cloud/node1 into
   /Users/varunthacker/solr-7.3.0/example/cloud/node2

Starting up Solr on port 8983 using command:
"bin/solr" start -cloud -p 8983 -s "example/cloud/node1/solr" -z localhost:2181 -m 2g

*** [WARN] *** Your open file limit is currently 10240. 
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

*** [WARN] ***  Your Max Processes Limit is currently 1418.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

Waiting up to 180 seconds to see Solr running on port 8983 [-] 
Started Solr server on port 8983 (pid=82037). Happy searching!

Starting up Solr on port 7574 using command:
"bin/solr" start -cloud -p 7574 -s "example/cloud/node2/solr" -z localhost:2181 -m 2g

*** [WARN] *** Your open file limit is currently 10240. 
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

*** [WARN] ***  Your Max Processes Limit is currently 1418.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

Waiting up to 180 seconds to see Solr running on port 7574 [\] 
Started Solr server on port 7574 (pid=82143). Happy searching!

INFO  - 2018-04-24 16:07:10.566; org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at localhost:2181 ready
Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with config-set 'gettingstarted'

Enabling auto soft-commits with maxTime 3 secs using the Config API

POSTing request to Config API: http://localhost:8983/solr/gettingstarted/config
{"set-property":{"updateHandler.autoSoftCommit.maxTime":"3000"}}
Successfully set-property updateHandler.autoSoftCommit.maxTime to 3000

SolrCloud example running, please visit: http://localhost:8983/solr
{code}
Do we really need so many duplicate warnings for the same message?






[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451382#comment-16451382
 ] 

Robert Muir commented on LUCENE-8264:
-

It's not possible to warn-only: the encoding of things changed completely. I 
think the key issue here is Lucene is an *index* not a *database*. Because it 
is a lossy *index* and does not retain all of the user's data, it's not possible 
to safely migrate some things automagically. In the norms case IndexWriter 
needs to re-analyze the text ("re-index") and compute stats to get back the 
value, so it can be re-encoded. The function is {{y = f(x)}} and if {{x}} is 
not available it's not possible, so Lucene can't do it.
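To make the {{y = f(x)}} point concrete, a toy sketch in plain Java (the 1/sqrt(length) shape is purely illustrative, not any particular Lucene similarity):

```java
// Sketch of why norms can't be migrated without re-analysis: the stored
// value y = f(x) is a lossy function of the analyzed field, so recomputing
// it for a new encoding needs the original text x. The formula here is
// illustrative only, not any specific Lucene similarity.
public class NormSketch {
    // f(x): derive a norm-like value from the analyzed field length.
    public static float norm(String fieldText) {
        int length = fieldText.split("\\s+").length; // token count after "analysis"
        return (float) (1.0 / Math.sqrt(length));
    }

    public static void main(String[] args) {
        float y = norm("the quick brown fox"); // y = f(x) = 1/sqrt(4)
        System.out.println(y); // 0.5
        // Given only y (and, worse, a lossily quantized y), there is no
        // f^-1 to recover x - so a new encoding must re-analyze x.
    }
}
```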

Also related to this change, in some cases it's necessary for the user to 
migrate away from index-time boosts. The removal of these is what opened the 
door to Adrien's more efficient encoding here. So the user has to decide to put 
such feature values into a NumericDocValuesField and use expressions/function 
queries to combine with the document's score, or via the new FeatureField (which 
can be much more efficient), or whatever. This case is interesting because it 
emphasizes there are other things besides just the original document's text 
that need to be dealt with on upgrades.

I don't agree with the idea that Lucene should be forced to drag along all 
kinds of nonsense data and slowly corrupt itself over time, or that some 
improvements aren't possible because the format can't be changed. Instead I 
think projects like Solr that advertise themselves as a *database* need to add 
the ability to regenerate a new Lucene index efficiently (e.g. minimizing 
network traffic across distributed nodes, etc.). They need to use the additional 
stuff they have (e.g. the original user's data, abstractions of some sort over 
Lucene stuff like scoring features) to make this easier. Lucene is just the 
indexing/search library.

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7286 - Still Unstable!

2018-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7286/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=700871

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=700871
at 
__randomizedtesting.SeedInfo.seed([7DDA73DF31FDFF8A:45B600FAA52D5DCC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
should have fired an event

Stack Trace:
java.lang.AssertionError: should have fired an event
at 

[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451326#comment-16451326
 ] 

Jan Høydahl commented on LUCENE-8264:
-

I'm also puzzled by this strictness introduced by LUCENE-7837 from 8.0. I'm 
fine with keeping that behaviour as default, but add a config option to 
fall back to warn-only, so that Lucene users such as offline upgrade tools can 
choose to handle created=N-2 situations in a custom way.

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12261) Deleting collections should sync aliases before prematurely failing when alias is deleted

2018-04-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451272#comment-16451272
 ] 

Mark Miller commented on SOLR-12261:


bq. It's debatable if this is an improvement or a bug.

That's been a ton of SolrCloud. Upsides and downsides of building on an existing 
system and trying to get something out fast with a few devs (so we could build 
up to many, many more devs in a reasonable timeframe).

In the end you end up with many 'unfinished' things living between bug and 
feature. Things constantly get ticked off the list though.

Good scriptability and the testing of that with the collections API is still 
something that's on my list, unless it's further than the state I'm aware of. 
Good responses have been a work in progress, and the lack of some ZK=truth 
stuff has made deleting and creating the same collection in a script a bit of a 
nightmare.

In the end, bug, improvement, take your pick. Stuff that should be fixed and 
tested.

For the Alias feature though, I really didn't spend a lot of time thinking out 
every possible case, I pumped it out during a short hackathon and didn't use it 
personally. So it's an improvement. I mean a bug. It's been getting fixed and 
tested.

> Deleting collections should sync aliases before prematurely failing when 
> alias is deleted
> -
>
> Key: SOLR-12261
> URL: https://issues.apache.org/jira/browse/SOLR-12261
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-12261.patch
>
>
> In SOLR-11218 [~erickerickson] ensured that we can't delete a collection that 
> is referenced by an alias. However It may be that the alias is deleted but 
> the node servicing the request doesn't know about this yet. It should call 
> AliasesManager.update() first (which now sync()'s with ZK).
> I believe this is the cause of some sporadic failures to 
> org.apache.solr.cloud.AliasIntegrationTest#tearDown which deletes the alias 
> then all collections.
> It's debatable if this is an improvement or a bug. Sadly most of SolrCloud 
> simply seems to operate this way despite it being eventually consistent. Thus 
> users using SolrCloud may have to add sleep()s after calls to Solr admin 
> calls :-/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451271#comment-16451271
 ] 

Robert Muir commented on LUCENE-8273:
-

Also I am not sure if the name BypassingTokenFilter is the best.

It works well for your case (but I think "bypass" may be due to some 
inertia/history and maybe not the best going forward). Maybe it should be "if" 
instead of "unless".

{code}
// don't lowercase if the term contains an "o" character
TokenStream t = new BypassingTokenFilter(cts, AssertingLowerCaseFilter::new) {
  CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  @Override
  protected boolean bypass() throws IOException {
    return termAtt.toString().contains("o");
  }
};
{code}

But it will look awkward for other cases:
{code}
// apply Greek stemmer ("don't bypass") if the token is written in the Greek script
TokenStream t = new BypassingTokenFilter(ts, GreekStemmer::new) {
  ScriptAttribute scriptAtt = addAttribute(ScriptAttribute.class);
  @Override
  protected boolean bypass() throws IOException {
    return scriptAtt.getCode() != UScript.GREEK;
  }
};
{code}
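The conditional-delegation idea itself can be sketched in plain Java, independent of the TokenStream API (all names here are illustrative, not Lucene code):

```java
// Plain-Java sketch of conditional delegation: apply a transform to a
// token only when a predicate allows it; otherwise the token is bypassed
// unchanged. Illustrative only - not the Lucene TokenFilter API.
import java.util.List;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class ConditionalFilterSketch {
    public static List<String> apply(List<String> tokens,
                                     Predicate<String> bypass,
                                     UnaryOperator<String> filter) {
        // Tokens matching the bypass predicate pass through unchanged;
        // all others go through the wrapped filter.
        return tokens.stream()
                .map(t -> bypass.test(t) ? t : filter.apply(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Mirrors the first example: don't lowercase if the term contains an "o".
        List<String> out = apply(List.of("Foo", "BAR"),
                t -> t.toLowerCase().contains("o"),
                String::toLowerCase);
        System.out.println(out); // [Foo, bar]
    }
}
```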


> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8998) JSON Facet API child roll-ups

2018-04-24 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8998:
---
Attachment: SOLR-8998.patch

> JSON Facet API child roll-ups
> -
>
> Key: SOLR-8998
> URL: https://issues.apache.org/jira/browse/SOLR-8998
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Major
> Attachments: SOLR-8998.patch, SOLR-8998.patch, SOLR_8998.patch, 
> SOLR_8998.patch, SOLR_8998.patch
>
>
> The JSON Facet API currently has the ability to map between parents and 
> children ( see http://yonik.com/solr-nested-objects/ )
> This issue is about adding a true rollup ability where parents would take on 
> derived values from their children.  The most important part (and the most 
> difficult part) will be the external API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451265#comment-16451265
 ] 

Robert Muir commented on LUCENE-8273:
-

{quote}
I added this to core rather than to the analysis module as it seems to me to be 
a utility class like FilteringTokenFilter, which is also in core. But I'm 
perfectly happy to move it to analysis-common if that makes more sense to 
others.
{quote}

The idea is cool but I would like to see it more fleshed out (e.g. marked 
experimental somewhere) before going into core/:
* improved testing: I'd like to see some edge cases tested, such as both "true" 
and "false" cases on the final token for end(), etc. What happens is a little 
sneaky; I think it should be hooked into TestRandomChains (this should probably 
be explicitly added to that test, wrapping with a check of random.nextBoolean() 
or something simple that will test all cases). This may uncover some 
integration difficulties. In particular, it is not clear to me how some stuff 
such as end() works correctly in the general case with this filter right now.
* integration with CustomAnalyzer: as this would add a generic "if" to allow 
branching in analysis chains (there is an issue somewhere for this), which 
would be very powerful, it would be good to plumb into CustomAnalyzer to make 
sure it can work well with the factory model. Seems doable with the functional 
interface but needs to be proven out.


> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 208 - Still Failing

2018-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/208/

No tests ran.

Build Log:
[...truncated 24220 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2190 links (1747 relative) to 2947 anchors in 228 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

[jira] [Created] (SOLR-12269) Investigate merging UpdateRequestHandlerApi and UpdateHandler

2018-04-24 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12269:


 Summary: Investigate merging UpdateRequestHandlerApi and 
UpdateHandler
 Key: SOLR-12269
 URL: https://issues.apache.org/jira/browse/SOLR-12269
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Looks like they serve the same functionality, but one is only for V1 APIs and 
one is only for V2 APIs.

The metrics are also different, so it would be great if we could avoid creating 
two request handlers internally for different versions of the API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12268) Opening a new NRT Searcher can take a while and block calls to /admin/cores.

2018-04-24 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12268:
--

 Summary: Opening a new NRT Searcher can take a while and block 
calls to /admin/cores.
 Key: SOLR-12268
 URL: https://issues.apache.org/jira/browse/SOLR-12268
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller


When we open a new reader from a writer, we get an IndexWriter lock and may 
call applyAllDeletesAndUpdates. That call can take a while holding the lock. 
Meanwhile, calls coming to /admin/cores get isCurrent for the reader, which 
checks if the IndexWriter is closed, which requires the IndexWriter lock. This 
leads to /admin/cores calls taking as long as applyAllDeletesAndUpdates.
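The contention pattern described here can be sketched with a plain ReentrantLock (all names are illustrative; this is not Solr code):

```java
// Sketch of the contention pattern: one thread holds a shared lock for a
// long operation (standing in for applyAllDeletesAndUpdates) while a cheap
// status check (standing in for isCurrent via /admin/cores) blocks on the
// same lock for the full duration. Illustrative only.
import java.util.concurrent.locks.ReentrantLock;

public class LockContentionSketch {
    static final ReentrantLock writerLock = new ReentrantLock();

    // Returns how long (ms) a cheap status check was blocked while
    // another thread held the lock for roughly holdMs milliseconds.
    public static long blockedMillis(long holdMs) {
        Thread longOp = new Thread(() -> {
            writerLock.lock();            // e.g. opening a new NRT searcher
            try {
                Thread.sleep(holdMs);     // stands in for applyAllDeletesAndUpdates
            } catch (InterruptedException ignored) {
            } finally {
                writerLock.unlock();
            }
        });
        longOp.start();
        try {
            Thread.sleep(50);             // let the long operation grab the lock
            long start = System.nanoTime();
            writerLock.lock();            // e.g. the isCurrent check
            writerLock.unlock();
            longOp.join();
            return (System.nanoTime() - start) / 1_000_000;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("status check blocked ~" + blockedMillis(500) + " ms");
    }
}
```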



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12267) Admin UI broken metrics

2018-04-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451218#comment-16451218
 ] 

Varun Thacker commented on SOLR-12267:
--

This API call returns the same data on both Solr 6.6.2 and Solr 7.2.0

 

http://localhost:8983/solr/admin/metrics?group=core&prefix=UPDATE./update.requestTimes&wt=json&indent=on
{code:java}
{
  "responseHeader":{
    "status":0,
    "QTime":1},
  "metrics":{
    "solr.core.techproducts":{
      "UPDATE./update.requestTimes":{
        "count":15,
        "meanRate":0.06193838474100722,
        "1minRate":0.059722026233201185,
        "5minRate":1.370641605420876,
        "15minRate":2.31058601280263,
        "min_ms":2.13397,
        "max_ms":128.889822,
        "mean_ms":19.4997552667,
        "median_ms":4.00796,
        "stddev_ms":37.98377078937656,
        "p75_ms":8.417471,
        "p95_ms":128.889822,
        "p99_ms":128.889822,
        "p999_ms":128.889822}}}}{code}
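The rate fields in this output are characteristic of a Dropwizard Metrics Timer, where 1minRate/5minRate/15minRate are exponentially weighted moving averages. A minimal sketch of that style of EWMA update, assuming the conventional fixed tick interval (illustrative only, not Solr's actual code):

```java
// Sketch of an exponentially weighted moving average rate, the mechanism
// behind 1minRate/5minRate/15minRate style metrics. Assumes a fixed tick
// interval (5 seconds is conventional); illustrative only.
public class EwmaSketch {
    private final double alpha;       // smoothing factor per tick
    private final double tickSeconds;
    private double rate;              // events per second
    private long uncounted;           // events seen since the last tick
    private boolean initialized;

    public EwmaSketch(double windowSeconds, double tickSeconds) {
        this.tickSeconds = tickSeconds;
        this.alpha = 1 - Math.exp(-tickSeconds / windowSeconds);
    }

    public void mark(long n) { uncounted += n; }

    // Called once per tick interval.
    public void tick() {
        double instantRate = uncounted / tickSeconds;
        uncounted = 0;
        if (initialized) {
            rate += alpha * (instantRate - rate); // decay toward instant rate
        } else {
            rate = instantRate;                   // first tick seeds the average
            initialized = true;
        }
    }

    public double getRate() { return rate; }

    public static void main(String[] args) {
        EwmaSketch oneMin = new EwmaSketch(60, 5);
        oneMin.mark(10);   // 10 events in the first 5-second window
        oneMin.tick();     // instant rate = 2 events/sec
        System.out.println(oneMin.getRate()); // 2.0
        oneMin.tick();     // no new events: rate decays toward 0
        System.out.println(oneMin.getRate() < 2.0); // true
    }
}
```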

> Admin UI broken metrics
> ---
>
> Key: SOLR-12267
> URL: https://issues.apache.org/jira/browse/SOLR-12267
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: Solr662.png, Solr720.png
>
>
> Attaching Screenshots of the same metric on Solr 6.6.2 VS Solr 7.2.0 
> The admin UI shows completely different metrics



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12267) Admin UI broken metrics

2018-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12267:
-
Attachment: Solr720.png
Solr662.png

> Admin UI broken metrics
> ---
>
> Key: SOLR-12267
> URL: https://issues.apache.org/jira/browse/SOLR-12267
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: Solr662.png, Solr720.png
>
>
> Attaching Screenshots of the same metric on Solr 6.6.2 VS Solr 7.2.0 
> The admin UI shows completely different metrics



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12267) Admin UI broken metrics

2018-04-24 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12267:


 Summary: Admin UI broken metrics
 Key: SOLR-12267
 URL: https://issues.apache.org/jira/browse/SOLR-12267
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Attaching Screenshots of the same metric on Solr 6.6.2 VS Solr 7.2.0 

The admin UI shows completely different metrics



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 574 - Failure!

2018-04-24 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12266) Add discrete Fourier transform Stream Evaluators

2018-04-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12266:
--
Summary: Add discrete Fourier transform Stream Evaluators  (was: Add 
discrete Fourier transforms Stream Evaluators)

> Add discrete Fourier transform Stream Evaluators
> 
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.
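For reference, the forward/inverse round-trip these evaluators expose can be sketched in plain Java. This is a naive O(n^2) DFT for clarity, illustrative only, not Solr's implementation (which presumably delegates to a library such as Commons Math):

```java
// Minimal sketch of a forward and inverse discrete Fourier transform,
// illustrating the fft/ifft round-trip. Naive O(n^2) DFT for clarity.
public class DftSketch {
    // Forward DFT of a real signal: returns {realParts, imaginaryParts}.
    public static double[][] dft(double[] x) {
        int n = x.length;
        double[] re = new double[n], im = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double angle = -2 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(angle);
                im[k] += x[t] * Math.sin(angle);
            }
        }
        return new double[][] {re, im};
    }

    // Inverse DFT: recovers the original real signal.
    public static double[] idft(double[] re, double[] im) {
        int n = re.length;
        double[] x = new double[n];
        for (int t = 0; t < n; t++) {
            for (int k = 0; k < n; k++) {
                double angle = 2 * Math.PI * k * t / n;
                x[t] += re[k] * Math.cos(angle) - im[k] * Math.sin(angle);
            }
            x[t] /= n;
        }
        return x;
    }

    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4};
        double[][] spectrum = dft(signal);
        double[] back = idft(spectrum[0], spectrum[1]);
        for (int i = 0; i < signal.length; i++) {
            if (Math.abs(signal[i] - back[i]) > 1e-9) throw new AssertionError();
        }
        System.out.println("round-trip ok");
    }
}
```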



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12266) Add discrete Fourier transforms Stream Evaluators

2018-04-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450442#comment-16450442
 ] 

Joel Bernstein commented on SOLR-12266:
---

Patch with initial implementation, not tests yet. 

> Add discrete Fourier transforms Stream Evaluators
> -
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12266) Add discrete Fourier transforms Stream Evaluators

2018-04-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450442#comment-16450442
 ] 

Joel Bernstein edited comment on SOLR-12266 at 4/24/18 7:58 PM:


Patch with initial implementation, no tests yet. 


was (Author: joel.bernstein):
Patch with initial implementation, not tests yet. 

> Add discrete Fourier transforms Stream Evaluators
> -
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12266) Add discrete Fourier transforms Stream Evaluators

2018-04-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12266:
--
Summary: Add discrete Fourier transforms Stream Evaluators  (was: Add fft 
and ifft Stream Evaluators to support discrete Fourier transforms)

> Add discrete Fourier transforms Stream Evaluators
> -
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.






[jira] [Updated] (SOLR-12266) Add fft and ifft Stream Evaluators to support discrete Fourier transforms

2018-04-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12266:
--
Attachment: SOLR-12266.patch

> Add fft and ifft Stream Evaluators to support discrete Fourier transforms
> -
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12266.patch
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.






[jira] [Updated] (SOLR-12266) Add fft and ifft Stream Evaluators to support discrete Fourier transforms

2018-04-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12266:
--
Fix Version/s: 7.4

> Add fft and ifft Stream Evaluators to support discrete Fourier transforms
> -
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.






[jira] [Created] (SOLR-12266) Add fft and ifft Stream Evaluators to support discrete Fourier transforms

2018-04-24 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-12266:
-

 Summary: Add fft and ifft Stream Evaluators to support discrete 
Fourier transforms
 Key: SOLR-12266
 URL: https://issues.apache.org/jira/browse/SOLR-12266
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds the fft and ifft Stream Evaluators to support forward and 
inverse discrete Fourier transforms.
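For context, the forward and inverse transforms the new evaluators expose can be sketched in a few lines of pure Python. This is a naive O(n^2) version straight from the DFT definition, purely for illustration; it is not the ticket's actual implementation.

```python
import cmath

def dft(xs):
    # Forward discrete Fourier transform, O(n^2) by definition:
    # X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n)
    n = len(xs)
    return [sum(xs[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def idft(Xs):
    # Inverse transform: conjugate kernel plus 1/n normalization
    n = len(Xs)
    return [sum(Xs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

signal = [1.0, 2.0, 3.0, 4.0]
spectrum = dft(signal)       # spectrum[0] is the sum of the signal
roundtrip = idft(spectrum)   # recovers the original signal
```

Applying the forward transform and then the inverse recovers the original signal, which is the invariant an fft/ifft pair is expected to satisfy.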






[jira] [Assigned] (SOLR-12266) Add fft and ifft Stream Evaluators to support discrete Fourier transforms

2018-04-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-12266:
-

Assignee: Joel Bernstein

> Add fft and ifft Stream Evaluators to support discrete Fourier transforms
> -
>
> Key: SOLR-12266
> URL: https://issues.apache.org/jira/browse/SOLR-12266
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket adds the fft and ifft Stream Evaluators to support forward and 
> inverse discrete Fourier transforms.






[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-04-24 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450408#comment-16450408
 ] 

Matt Weber commented on LUCENE-8196:


[~jim.ferenczi] [~romseygeek]

So, given a single document with the value {{a b}}, the following queries would 
both match this document:

{code:java}
Intervals.unordered(Intervals.term("b"), Intervals.term("a")) 
{code}

{code:java}
Intervals.unordered(Intervals.term("b"), Intervals.term("b")) 
{code}


The first, I think, would have an interval width of {{1}}, and the second should 
have a width of {{0}}.  So if we had a {{minwidth}} operator, we could use it to 
set the minimum width to {{1}}, preventing the second from matching?  If both of 
these queries result in an interval with the same width, that feels wrong 
to me.  
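The widths under discussion can be modeled with a small brute-force sketch (toy Python, not Lucene's actual interval algorithm; the {{minwidth}} threshold below is the hypothetical operator being proposed, not an existing API):

```python
from itertools import product

def min_unordered_width(pos_lists):
    # Brute force: pick one position per operand, window width is
    # (max - min), take the minimum over all combinations. A degenerate
    # match that reuses the same position for every operand has width 0.
    return min(max(combo) - min(combo) for combo in product(*pos_lists))

doc = "a b".split()
positions = {t: [i for i, tok in enumerate(doc) if tok == t] for t in set(doc)}

w_ba = min_unordered_width([positions["b"], positions["a"]])  # window [0,1]
w_bb = min_unordered_width([positions["b"], positions["b"]])  # window [1,1]

min_width = 1  # hypothetical "minwidth" filter threshold
matches_ba = w_ba >= min_width
matches_bb = w_bb >= min_width  # degenerate self-match filtered out
```

With a minwidth of {{1}}, the unordered(b, a) match (width 1) is kept while the degenerate unordered(b, b) self-match (width 0) is dropped, which is the behavior being asked for.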


> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8196-debug.patch, LUCENE-8196.patch, 
> LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1817 - Still Failing!

2018-04-24 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space


[jira] [Commented] (SOLR-12265) Upgrade Jetty to 9.4.9

2018-04-24 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450349#comment-16450349
 ] 

Michael Braun commented on SOLR-12265:
--

Not sure [~varunthacker] - just checked on the off chance they had published the 
RC on Maven Central. 

> Upgrade Jetty to 9.4.9
> --
>
> Key: SOLR-12265
> URL: https://issues.apache.org/jira/browse/SOLR-12265
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Solr 7.3 upgraded to Jetty 9.4.8 
> We're seeing this WARN very sporadically ( maybe one in every 100k requests ) 
> on the replica when indexing.
> {code:java}
> date time WARN [qtp768306356-580185] ? (:) - 
> java.nio.channels.ReadPendingException: null
> at org.eclipse.jetty.io.FillInterest.register(FillInterest.java:58) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractEndPoint.fillInterested(AbstractEndPoint.java:353)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection.fillInterested(AbstractConnection.java:134)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) 
> ~[jetty-server-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:289) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:149) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0-zing_17.11.0.0]
> date time WARN [qtp768306356-580185] ? (:) - Read pending for 
> org.eclipse.jetty.server.HttpConnection$BlockingReadCallback@2e98df28 
> prevented AC.ReadCB@424271f8{HttpConnection@424271f8[p=HttpParser{s=START,0 
> of 
> -1},g=HttpGenerator@424273ae{s=START}]=>HttpChannelOverHttp@4242713d{r=141,c=false,a=IDLE,uri=null}<-DecryptedEndPoint@4242708d{/host:52824<->/host:port,OPEN,fill=FI,flush=-,to=1/86400}->HttpConnection@424271f8[p=HttpParser{s=START,0
>  of -1},g=HttpGenerator@424273ae{s=START}]=>{code}
> When this happens the leader basically waits till it gets a 
> SocketTimeoutException and then puts the replica into recovery.
> My motivation for upgrading to Jetty 9.4.9 is that the EatWhatYouKill strategy 
> was introduced in Jetty 9.4.x. I don't believe we saw this error in Jetty 9.3.x 
> and then in Jetty 9.4.9 this class has undergone quite a few changes 
> [https://github.com/eclipse/jetty.project/commit/0cb4f5629dca082eec943b94ec8ef4ca0d5f1aa4#diff-ae450a12d4eca85a437bd5082f698f48]
>  . 
>  






[jira] [Commented] (SOLR-12265) Upgrade Jetty to 9.4.9

2018-04-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450348#comment-16450348
 ] 

Varun Thacker commented on SOLR-12265:
--

Sounds good to me! I'll wait and then update the Jira title appropriately 
afterwards.

Is [http://dev.eclipse.org/mhonarc/lists/jetty-dev/] not where the votes and 
discussion take place? 

> Upgrade Jetty to 9.4.9
> --
>
> Key: SOLR-12265
> URL: https://issues.apache.org/jira/browse/SOLR-12265
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Solr 7.3 upgraded to Jetty 9.4.8 
> We're seeing this WARN very sporadically ( maybe one in every 100k requests ) 
> on the replica when indexing.
> {code:java}
> date time WARN [qtp768306356-580185] ? (:) - 
> java.nio.channels.ReadPendingException: null
> at org.eclipse.jetty.io.FillInterest.register(FillInterest.java:58) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractEndPoint.fillInterested(AbstractEndPoint.java:353)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection.fillInterested(AbstractConnection.java:134)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) 
> ~[jetty-server-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:289) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:149) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0-zing_17.11.0.0]
> date time WARN [qtp768306356-580185] ? (:) - Read pending for 
> org.eclipse.jetty.server.HttpConnection$BlockingReadCallback@2e98df28 
> prevented AC.ReadCB@424271f8{HttpConnection@424271f8[p=HttpParser{s=START,0 
> of 
> -1},g=HttpGenerator@424273ae{s=START}]=>HttpChannelOverHttp@4242713d{r=141,c=false,a=IDLE,uri=null}<-DecryptedEndPoint@4242708d{/host:52824<->/host:port,OPEN,fill=FI,flush=-,to=1/86400}->HttpConnection@424271f8[p=HttpParser{s=START,0
>  of -1},g=HttpGenerator@424273ae{s=START}]=>{code}
> When this happens the leader basically waits till it gets a 
> SocketTimeoutException and then puts the replica into recovery.
> My motivation for upgrading to Jetty 9.4.9 is that the EatWhatYouKill strategy 
> was introduced in Jetty 9.4.x. I don't believe we saw this error in Jetty 9.3.x 
> and then in Jetty 9.4.9 this class has undergone quite a few changes 
> [https://github.com/eclipse/jetty.project/commit/0cb4f5629dca082eec943b94ec8ef4ca0d5f1aa4#diff-ae450a12d4eca85a437bd5082f698f48]
>  . 
>  






Re: BugFix release 7.3.1

2018-04-24 Thread Varun Thacker
Hi Dat,

What timeline do you have in mind for creating a Solr 7.3.1 RC?

I want to backport SOLR-12065 / SOLR-11724, and I can wrap that up today.

On Mon, Apr 23, 2018 at 1:01 AM, Alan Woodward  wrote:

> Done
>
> > On 23 Apr 2018, at 04:12, Đạt Cao Mạnh  wrote:
> >
> > Hi Alan,
> >
> > Can you backport LUCENE-8254 to branch_7_3?
>
>
>
>


[jira] [Commented] (SOLR-12265) Upgrade Jetty to 9.4.9

2018-04-24 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450231#comment-16450231
 ] 

Michael Braun commented on SOLR-12265:
--

It looks like a release of 9.4.10 might be imminent - 9.4.10.RC0 artifacts were 
published ten days ago. 

> Upgrade Jetty to 9.4.9
> --
>
> Key: SOLR-12265
> URL: https://issues.apache.org/jira/browse/SOLR-12265
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Solr 7.3 upgraded to Jetty 9.4.8 
> We're seeing this WARN very sporadically ( maybe one in every 100k requests ) 
> on the replica when indexing.
> {code:java}
> date time WARN [qtp768306356-580185] ? (:) - 
> java.nio.channels.ReadPendingException: null
> at org.eclipse.jetty.io.FillInterest.register(FillInterest.java:58) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractEndPoint.fillInterested(AbstractEndPoint.java:353)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection.fillInterested(AbstractConnection.java:134)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) 
> ~[jetty-server-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
>  ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:289) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:149) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124) 
> ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
>  ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0-zing_17.11.0.0]
> date time WARN [qtp768306356-580185] ? (:) - Read pending for 
> org.eclipse.jetty.server.HttpConnection$BlockingReadCallback@2e98df28 
> prevented AC.ReadCB@424271f8{HttpConnection@424271f8[p=HttpParser{s=START,0 
> of 
> -1},g=HttpGenerator@424273ae{s=START}]=>HttpChannelOverHttp@4242713d{r=141,c=false,a=IDLE,uri=null}<-DecryptedEndPoint@4242708d{/host:52824<->/host:port,OPEN,fill=FI,flush=-,to=1/86400}->HttpConnection@424271f8[p=HttpParser{s=START,0
>  of -1},g=HttpGenerator@424273ae{s=START}]=>{code}
> When this happens the leader basically waits till it gets a 
> SocketTimeoutException and then puts the replica into recovery.
> My motivation for upgrading to Jetty 9.4.9 is that the EatWhatYouKill strategy 
> was introduced in Jetty 9.4.x. I don't believe we saw this error in Jetty 9.3.x 
> and then in Jetty 9.4.9 this class has undergone quite a few changes 
> [https://github.com/eclipse/jetty.project/commit/0cb4f5629dca082eec943b94ec8ef4ca0d5f1aa4#diff-ae450a12d4eca85a437bd5082f698f48]
>  . 
>  






[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 209 - Still Unstable

2018-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/209/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([2F1AF5B4BCD51CF0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.ZkControllerTest: 
1) 

[jira] [Created] (SOLR-12265) Upgrade Jetty to 9.4.9

2018-04-24 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12265:


 Summary: Upgrade Jetty to 9.4.9
 Key: SOLR-12265
 URL: https://issues.apache.org/jira/browse/SOLR-12265
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.3
Reporter: Varun Thacker
Assignee: Varun Thacker


Solr 7.3 upgraded to Jetty 9.4.8 

We're seeing this WARN very sporadically ( maybe one in every 100k requests ) 
on the replica when indexing.
{code:java}
date time WARN [qtp768306356-580185] ? (:) - 
java.nio.channels.ReadPendingException: null
at org.eclipse.jetty.io.FillInterest.register(FillInterest.java:58) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.io.AbstractEndPoint.fillInterested(AbstractEndPoint.java:353) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.io.AbstractConnection.fillInterested(AbstractConnection.java:134)
 ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) 
~[jetty-server-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
 ~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:289) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:149) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124) 
~[jetty-io-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
 ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
 ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
 ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
 ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 ~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
~[jetty-util-9.4.8.v20171121.jar:9.4.8.v20171121]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0-zing_17.11.0.0]

date time WARN [qtp768306356-580185] ? (:) - Read pending for 
org.eclipse.jetty.server.HttpConnection$BlockingReadCallback@2e98df28 prevented 
AC.ReadCB@424271f8{HttpConnection@424271f8[p=HttpParser{s=START,0 of 
-1},g=HttpGenerator@424273ae{s=START}]=>HttpChannelOverHttp@4242713d{r=141,c=false,a=IDLE,uri=null}<-DecryptedEndPoint@4242708d{/host:52824<->/host:port,OPEN,fill=FI,flush=-,to=1/86400}->HttpConnection@424271f8[p=HttpParser{s=START,0
 of -1},g=HttpGenerator@424273ae{s=START}]=>{code}
When this happens the leader basically waits till it gets a 
SocketTimeoutException and then puts the replica into recovery.

My motivation for upgrading to Jetty 9.4.9 is that the EatWhatYouKill strategy 
was introduced in Jetty 9.4.x. I don't believe we saw this error in Jetty 9.3.x 
and then in Jetty 9.4.9 this class has undergone quite a few changes 
[https://github.com/eclipse/jetty.project/commit/0cb4f5629dca082eec943b94ec8ef4ca0d5f1aa4#diff-ae450a12d4eca85a437bd5082f698f48]
 . 

 






[jira] [Created] (SOLR-12264) Expose Metrics from Plugin/Stats Replication Handler as JMX MBeans

2018-04-24 Thread JIRA
Sven Büsing created SOLR-12264:
--

 Summary: Expose Metrics from Plugin/Stats Replication Handler as 
JMX MBeans
 Key: SOLR-12264
 URL: https://issues.apache.org/jira/browse/SOLR-12264
 Project: Solr
  Issue Type: Wish
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Sven Büsing


Would it be possible to expose the metrics below 
[http://localhost:8983/solr/#/core/plugins?type=replication=%2Freplication]
 as MBeans via JMX?

The following items would help us better monitor system health status:

* replicationFailedAt -> as a numeric Unix timestamp (not as java.lang.String)
* timesFailed -> as a numeric type, e.g. java.lang.Long






[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-04-24 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450158#comment-16450158
 ] 

Matt Weber commented on LUCENE-8196:


I use these queries to build query parsers, and I am specifically thinking of an 
unordered near and how I can prevent it from matching the same term occurrence.  
I can't think of any situation where a user would expect {{NEAR(a, a)}} to match 
documents with a single {{a}}, and if we can't get that behavior by default, I 
would like a way to prevent it explicitly myself.  Spans have the same issue as 
well; see LUCENE-3120.  

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8196-debug.patch, LUCENE-8196.patch, 
> LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.






[jira] [Updated] (SOLR-12235) Incomplete debugQuery info when using edismax and boost param

2018-04-24 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-12235:
---
Component/s: (was: Schema and Analysis)
 query parsers

> Incomplete debugQuery info when using edismax and boost param
> -
>
> Key: SOLR-12235
> URL: https://issues.apache.org/jira/browse/SOLR-12235
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers, search
>Affects Versions: 7.3
> Environment: Solr 7.3.0, Java 1.8.0_162
>Reporter: Bogdan Stoica
>Priority: Minor
>
> There is an issue with the way Solr 7.3 outputs explain information when 
> using edismax and the boost param.
>  
> Example query: 
> /select?boost=results=on=edismax=word=text
>  
> Solr 7.3 outputs:
>  
> {code:java}
>  
> 31349.63 = product of: 1.0 = boost 31349.63 = boost(double(results)) 
> {code}
>  
>  
> In comparison, Solr 7.2.1 returns the following:
>  
> {code:java}
>  
> 31349.63 = boost(text:word,double(results)), product of: 14.400382 = 
> weight(text:word in 18142) [SchemaSimilarity], result of: 14.400382 = 
> score(doc=18142,freq=1.0 = termFreq=1.0 ), product of: 10.677335 = idf, 
> computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 6.0 = 
> docFreq 281851.0 = docCount 1.3486869 = tfNorm, computed as (freq * (k1 + 1)) 
> / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = 
> termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 2.7172585 = avgFieldLength 
> 1.0 = fieldLength 2177.0 = double(results)=2177.0 
> {code}
>  






[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2018-04-24 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450143#comment-16450143
 ] 

Jason Gerlowski commented on SOLR-9272:
---

Just wanted to toss a reminder on here about the fledgling test suite we now 
have for the {{bin/solr}} scripts.  You can run it on Linux via {{cd solr && 
bin-test/test}}.  Test coverage is very sparse, but the suite can save you some 
manual testing burden (esp. if you add test cases for things you run into).

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already 
> running. We can optionally accept the {{-p}} parameter instead, and with that 
> use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's 
> easier to remember solr port than zk string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}






Re: [jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-24 Thread Erick Erickson
Thanks Mike! It's great to have other eyes on it (and I'm taking a bit
of a break to come back at it with fresh eyes).

It'll be a bit before I can respond in detail. So far the latest patch
has successfully run through one full test iteration, which is of course
totally inadequate before checking in. I intend to send it through a
bunch more before thinking about committing, but any failed
cases are most welcome as I can beast them.

Again, thanks for taking the time.

On Tue, Apr 24, 2018 at 8:48 AM, Michael McCandless (JIRA)
 wrote:
>
> [ 
> https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450110#comment-16450110
>  ]
>
> Michael McCandless commented on LUCENE-7976:
> 
>
> {quote}// We did our best to find the right merges, but through the vagaries of the scoring algorithm etc. we didn't
> // merge down to the required max segment count. So merge the N smallest segments to make it so.
> {quote}
> Hmm can you describe why this would happen?  Seems like if you ask the 
> scoring algorithm to find merges down to N segments, it shouldn't ever fail?
>
> We also seem to invoke {{getSegmentSizes}} more than once in 
> {{findForcedMerges}}?
>
>> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges 
>> of very large segments
>> -
>>
>> Key: LUCENE-7976
>> URL: https://issues.apache.org/jira/browse/LUCENE-7976
>> Project: Lucene - Core
>>  Issue Type: Improvement
>>Reporter: Erick Erickson
>>Assignee: Erick Erickson
>>Priority: Major
>> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, 
>> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch
>>
>>
>> We're seeing situations "in the wild" where there are very large indexes (on 
>> disk) handled quite easily in a single Lucene index. This is particularly 
>> true as features like docValues move data into MMapDirectory space. The 
>> current TMP algorithm allows on the order of 50% deleted documents as per a 
>> dev list conversation with Mike McCandless (and his blog here:  
>> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
>> Especially in the current era of very large indexes in aggregate, (think 
>> many TB) solutions like "you need to distribute your collection over more 
>> shards" become very costly. Additionally, the tempting "optimize" button 
>> exacerbates the issue since once you form, say, a 100G segment (by 
>> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
>> docs in it are deleted (current default 5G max segment size).
>> The proposal here would be to add a new parameter to TMP, something like 
>>  (no, that's not serious name, 
>> suggestions welcome) which would default to 100 (or the same behavior we 
>> have now).
>> So if I set this parameter to, say, 20%, and the max segment size stays at 
>> 5G, the following would happen when segments were selected for merging:
>> > any segment with > 20% deleted documents would be merged or rewritten NO 
>> > MATTER HOW LARGE. There are two cases,
>> >> the segment has < 5G "live" docs. In that case it would be merged with 
>> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
>> >> segments exist, it would just be rewritten
>> >> The segment has > 5G "live" docs (the result of a forceMerge or 
>> >> optimize). It would be rewritten into a single segment removing all 
>> >> deleted docs no matter how big it is to start. The 100G example above 
>> >> would be rewritten to an 80G segment for instance.
>> Of course this would lead to potentially much more I/O which is why the 
>> default would be the same behavior we see now. As it stands now, though, 
>> there's no way to recover from an optimize/forceMerge except to re-index 
>> from scratch. We routinely see 200G-300G Lucene indexes at this point "in 
>> the wild" with 10s of  shards replicated 3 or more times. And that doesn't 
>> even include having these over HDFS.
>> Alternatives welcome! Something like the above seems minimally invasive. A 
>> new merge policy is certainly an alternative.
>
>
>



Re: [JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 21897 - Unstable!

2018-04-24 Thread Simon Willnauer
I am looking into this

On Tue, Apr 24, 2018 at 5:37 PM, Policeman Jenkins Server
 wrote:
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21897/
> Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450134#comment-16450134
 ] 

Erick Erickson commented on LUCENE-8264:


bq. I am a true -1 to making a tool that will screw up scoring, sorry.

It was surprising to me how many applications I became involved in where 
scoring was irrelevant. I still have to check my assumptions at the door when 
working with a new client on that score (little pun there).

Conversely, scoring is everything to other clients I work with and screwing up 
scoring would be a major problem for them.

One-size-fits-all doesn't reflect my experience at all though.

Having something that silently "did the best it could" automagically would lead 
to its own problems, so having something like this silently kick in isn't a 
good option.

I'm not going to enjoy the conversations that start with "Well, you have to 
re-index from scratch for your app or stay on version 7x forever, there is no 
other option".

Yet explaining weird results to a customer isn't very much fun either, 
especially when it's a surprise to them. At least when they upgrade and things 
don't load at all they won't be surprised by subtle problems. Surprised by 
total inability to do anything, maybe. But that's not subtle.

I also dread taking customer X and trying to explain to them all the gotchas 
with a tool that upgrades manually. "Well, you'll be able to search but if you 
originally indexed with X, then the consequence will be Y" through about 30 
iterations.

So I'm a little lost here on what to do. _Strongly_ recommending that people 
reindex is the obvious step, but then maybe the fallback is to send Uwe a lot of 
business...

So is this going to be the official stance going forward? Lucene supports 
version N-1 if (and only if) it was originally created with N-1? Or will this 
upgrade problem go away absent the original problem and people will be able to 
go from an index produced with 8->9->10? Or is that TBD?





> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.






[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450110#comment-16450110
 ] 

Michael McCandless commented on LUCENE-7976:


{quote}// We did our best to find the right merges, but through the vagaries of the scoring algorithm etc. we didn't
// merge down to the required max segment count. So merge the N smallest segments to make it so.
{quote}
Hmm can you describe why this would happen?  Seems like if you ask the scoring 
algorithm to find merges down to N segments, it shouldn't ever fail?

We also seem to invoke {{getSegmentSizes}} more than once in 
{{findForcedMerges}}?
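For reference, the fallback step being discussed ("merge the N smallest segments to make it so") amounts to something like the illustrative sketch below. This is not the LUCENE-7976 patch's actual code; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative fallback: given segment sizes and a required maximum
// segment count, pick the smallest segments to merge together so the
// total count drops to the target. Not the actual patch code.
public class SmallestSegmentsFallback {
    /** Returns the sizes of the segments chosen for one merge, or an
     *  empty list if the count is already at or below the target. */
    public static List<Long> pickSmallest(List<Long> sizes, int targetCount) {
        if (sizes.size() <= targetCount) {
            return Collections.emptyList();
        }
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);
        // Merging k segments into one removes k-1 from the count, so to go
        // from sizes.size() down to targetCount we merge this many:
        int k = sizes.size() - targetCount + 1;
        return sorted.subList(0, k);
    }
}
```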

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
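Expressed as code, the eligibility rule proposed above (merge or rewrite any segment whose deleted-document percentage exceeds a configurable threshold, no matter how large it is) reads roughly like this hypothetical sketch; the names are illustrative, not the actual TieredMergePolicy API:

```java
// Illustrative sketch of the proposed rule: a segment becomes
// merge-eligible once its deleted-doc percentage exceeds a configurable
// threshold, regardless of its byte size. Hypothetical names, not
// Lucene's actual API.
public class DeletePctPolicy {
    private final double maxDeletedPct; // e.g. 20.0; 100.0 keeps old behavior

    public DeletePctPolicy(double maxDeletedPct) {
        this.maxDeletedPct = maxDeletedPct;
    }

    /** Percentage of documents in the segment that are deleted. */
    public static double deletedPct(int maxDoc, int numDeleted) {
        return maxDoc == 0 ? 0.0 : 100.0 * numDeleted / maxDoc;
    }

    /** Eligible for a singleton rewrite or merge once the allowed
     *  deleted percentage is exceeded, regardless of segment size. */
    public boolean eligible(int maxDoc, int numDeleted) {
        return deletedPct(maxDoc, numDeleted) > maxDeletedPct;
    }
}
```

With the default at 100, no segment ever qualifies on this criterion alone, which matches the "same behavior we have now" default described in the issue.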






[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450104#comment-16450104
 ] 

Michael McCandless commented on LUCENE-7976:


{quote}Right, but that has quite a few consequences when comparing old .vs. new 
behavior for FORCE_MERGE and FORCE_MERGE_DELETES for several reasons, mostly 
stemming from having these two operations respect maxSegmentBytes:
{quote}
OK I see ... I think it still makes sense to try to break these changes into a 
couple issues.  This one (just refactoring to share the scoring approach, with 
the corresponding change in behavior) is going to be big enough!

Hmm I see some more failing tests e.g.:
{quote}[junit4] Suite: 
org.apache.lucene.search.TestTopFieldCollectorEarlyTermination
 [junit4] 2> NOTE: reproduce with: ant test 
-Dtestcase=TestTopFieldCollectorEarlyTermination 
-Dtests.method=testEarlyTermination -Dtests.seed=355D07976851D85A 
-Dtests.badapples=true -Dtests.locale=nn-NO -Dtests.timezone=America/Cambridge_Bay -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
 [junit4] ERROR 869s J3 | 
TestTopFieldCollectorEarlyTermination.testEarlyTermination <<<
 [junit4] > Throwable #1: java.lang.OutOfMemoryError: GC overhead limit exceeded
 [junit4] > at 
__randomizedtesting.SeedInfo.seed([355D07976851D85A:FACA46C8503D4859]:0)
 [junit4] > at java.util.Arrays.copyOf(Arrays.java:3332)
 [junit4] > at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
 [junit4] > at 
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
 [junit4] > at java.lang.StringBuilder.append(StringBuilder.java:136)
 [junit4] > at 
org.apache.lucene.store.MockIndexInputWrapper.toString(MockIndexInputWrapper.java:224)
 [junit4] > at java.lang.String.valueOf(String.java:2994)
 [junit4] > at java.lang.StringBuilder.append(StringBuilder.java:131)
 [junit4] > at 
org.apache.lucene.store.BufferedChecksumIndexInput.(BufferedChecksumIndexInput.java:34)
 [junit4] > at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:119)
 [junit4] > at 
org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072)
 [junit4] > at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.readEntries(Lucene50CompoundReader.java:105)
 [junit4] > at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.(Lucene50CompoundReader.java:69)
 [junit4] > at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:70)
 [junit4] > at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:100)
 [junit4] > at 
org.apache.lucene.index.SegmentReader.(SegmentReader.java:78)
 [junit4] > at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:202)
 [junit4] > at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:782)
 [junit4] > at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4221)
 [junit4] > at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3910)
 [junit4] > at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 [junit4] > at 
org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2077)
 [junit4] > at 
org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1910)
 [junit4] > at 
org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1861)
 [junit4] > at 
org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:454)
 [junit4] > at 
org.apache.lucene.search.TestTopFieldCollectorEarlyTermination.createRandomIndex(TestTopFieldCollectorEarlyTermination.java:96)
 [junit4] > at 
org.apache.lucene.search.TestTopFieldCollectorEarlyTermination.doTestEarlyTermination(TestTopFieldCollectorEarlyTermination.java:123)
 [junit4] > at 
org.apache.lucene.search.TestTopFieldCollectorEarlyTermination.testEarlyTermination(TestTopFieldCollectorEarlyTermination.java:113)


{quote}
and
{quote}[junit4] 2> NOTE: reproduce with: ant test 
-Dtestcase=TestIndexWriterDelete 
-Dtests.method=testOnlyDeletesTriggersMergeOnClose 
 -Dtests.seed=355D07976851D85A -Dtests.badapples=true -Dtests.locale=en-IE -Dtests.timezone=Australia/Perth -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
 [junit4] ERROR 0.05s J0 | 
TestIndexWriterDelete.testOnlyDeletesTriggersMergeOnClose <<<
 [junit4] > Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=660, name=Lucene Merge Thread #6, 
state=RUNNABLE, group=TGRP-TestIndexWriterDelete]
 [junit4] > Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.RuntimeException: segments must include at least one segment
 [junit4] > at __randomizedtesting.SeedInfo.seed([355D07976851D85A]:0)
 [junit4] > at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:704)
 [junit4] > 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 21897 - Unstable!

2018-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21897/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestSearcherManager

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([BA998C838D219DA9]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestSearcherManager

Error Message:
Captured an uncaught exception in thread: Thread[id=17, name=Thread-1, 
state=RUNNABLE, group=TGRP-TestSearcherManager]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=17, name=Thread-1, state=RUNNABLE, 
group=TGRP-TestSearcherManager]
Caused by: java.lang.RuntimeException: 
java.nio.file.FileAlreadyExistsException: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/lucene.search.TestSearcherManager_BA998C838D219DA9-001/tempDir-001/_0.fdt
at __randomizedtesting.SeedInfo.seed([BA998C838D219DA9]:0)
at 
org.apache.lucene.search.TestSearcherManager$8.run(TestSearcherManager.java:590)
Caused by: java.nio.file.FileAlreadyExistsException: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/lucene.search.TestSearcherManager_BA998C838D219DA9-001/tempDir-001/_0.fdt
at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:94)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:215)
at 
java.base/java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at java.base/java.nio.file.Files.newOutputStream(Files.java:218)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:413)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:409)
at 
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
at 
org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:665)
at 
org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:44)
at 
org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.(CompressingStoredFieldsWriter.java:116)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
at 
org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
at 
org.apache.lucene.codecs.asserting.AssertingStoredFieldsFormat.fieldsWriter(AssertingStoredFieldsFormat.java:48)
at 
org.apache.lucene.index.StoredFieldsConsumer.initStoredFieldsWriter(StoredFieldsConsumer.java:39)
at 
org.apache.lucene.index.StoredFieldsConsumer.startDocument(StoredFieldsConsumer.java:46)
at 
org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:363)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:399)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:251)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:490)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1518)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1210)
at 
org.apache.lucene.search.TestSearcherManager$8.run(TestSearcherManager.java:574)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestSearcherManager

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([BA998C838D219DA9]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestSearcherManager

Error 

[jira] [Comment Edited] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching

2018-04-24 Thread Bruno Roustant (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450065#comment-16450065
 ] 

Bruno Roustant edited comment on SOLR-11865 at 4/24/18 3:37 PM:


Sorry for the delay.

Yes, if you can take it from here, that would be awesome!
 * Getters for defaults: you're right, there is no need. Please remove them.
 * keepElevationPriority as a constant in QEC: good point.
 * keepElevationPriority meaning:
 Actually the comment is not right, maybe the sorting has changed since the 
time I wrote this comment. I don't think it is linked anymore to forceElevation 
since the ElevationComparatorSource can be added as a SortField even if 
forceElevation=false when one sorts by score.
 The point is

 - with keepElevationPriority=true, the behavior is unchanged, the elevated 
documents (on top) are sorted by the order of the elevation rules and elevated 
ids in the config file.
 - with keepElevationPriority=false, the behavior changes, the elevated 
documents (still on top) are in any order (this will allow the use of the 
efficient but unsorted TrieSubsetMatcher in the other patch), and they may be 
re-ordered by other sort fields 
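
The two behaviors can be sketched in plain Java; the field and method names below are illustrative only, not the patch's code:

```java
import java.util.*;

/** Toy model of the two keepElevationPriority behaviors; plain Java with
 *  illustrative names, not the patch's code. elevationRank < 0 means the
 *  document is not elevated; otherwise it is its position in the config file. */
public class ElevationSortDemo {

  static final class Doc {
    final String id;
    final float score;
    final int elevationRank;
    Doc(String id, float score, int elevationRank) {
      this.id = id;
      this.score = score;
      this.elevationRank = elevationRank;
    }
  }

  static List<String> sortIds(List<Doc> docs, boolean keepElevationPriority) {
    List<Doc> sorted = new ArrayList<>(docs);
    sorted.sort((a, b) -> {
      boolean aElevated = a.elevationRank >= 0, bElevated = b.elevationRank >= 0;
      if (aElevated != bElevated) return aElevated ? -1 : 1;      // elevated docs stay on top
      if (aElevated && keepElevationPriority)
        return Integer.compare(a.elevationRank, b.elevationRank); // config-file order
      return Float.compare(b.score, a.score);                     // else: requested sort (score desc)
    });
    List<String> ids = new ArrayList<>();
    for (Doc d : sorted) ids.add(d.id);
    return ids;
  }

  public static void main(String[] args) {
    List<Doc> docs = List.of(
        new Doc("a", 4.0f, 1), new Doc("b", 3.0f, 0),    // elevated
        new Doc("c", 5.0f, -1), new Doc("d", 2.0f, -1)); // not elevated
    System.out.println(sortIds(docs, true));  // [b, a, c, d]: elevation-rule order kept
    System.out.println(sortIds(docs, false)); // [a, b, c, d]: elevated docs re-sorted by score
  }
}
```

In both cases the elevated documents stay on top; only their relative order differs, which is what allows an unsorted matcher when keepElevationPriority=false.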


was (Author: bruno.roustant):
Sorry for the delay.

Yes, if you can take it from here, that would be awesome!
 * Getters for defaults: you're right, there is no need. Please remove them.
 * keepElevationPriority as a constant in QEC: good point.
 * keepElevationPriority meaning:
Actually the comment is not right, maybe the sorting has changed since the time 
I wrote this comment. I don't think it is linked anymore to forceElevation 
since the ElevationComparatorSource can be added as a SortField even if 
forceElevation=false when one sort by score.
The point is
- with keepElevationPriority=true, the behavior is unchanged, the elevated 
documents (on top) are sorted by the order of the elevation rules and elevated 
ids in the config file.
- with keepElevationPriority=false, the behavior changes, the elevated 
documents (still on top) are in any order, and they may be re-ordered by other 
sort fields (this will allow the use of the efficient but unsorted 
TrieSubsetMatcher in the other patch).

> Refactor QueryElevationComponent to prepare query subset matching
> -
>
> Key: SOLR-11865
> URL: https://issues.apache.org/jira/browse/SOLR-11865
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: master (8.0)
>Reporter: Bruno Roustant
>Priority: Minor
>  Labels: QueryComponent
> Fix For: master (8.0)
>
> Attachments: 
> 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, 
> 0002-Refactor-QueryElevationComponent-after-review.patch, 
> 0003-Remove-exception-handlers-and-refactor-getBoostDocs.patch, 
> SOLR-11865.patch
>
>
> The goal is to prepare a second improvement to support query terms subset 
> matching or query elevation rules.
> Before that, we need to refactor the QueryElevationComponent. We make it 
> extendible. We introduce the ElevationProvider interface which will be 
> implemented later in a second patch to support subset matching. The current 
> full-query match policy becomes a default simple MapElevationProvider.
> - Add overridable methods to handle exceptions during the component 
> initialization.
> - Add overridable methods to provide the default values for config properties.
> - No functional change beyond refactoring.
> - Adapt unit test.
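
The described split can be sketched in plain Java. The interface and class names follow the issue text (ElevationProvider, MapElevationProvider); the method signature is an assumption for illustration, not the actual patch:

```java
import java.util.*;

// Sketch of the refactoring described above: a pluggable ElevationProvider,
// with the current full-query-match policy as a simple map lookup.
interface ElevationProvider {
  /** Elevated document ids for a query string, or an empty list. */
  List<String> getElevationForQuery(String queryString);
}

/** Default full-query-match policy: an exact lookup in a map. */
class MapElevationProvider implements ElevationProvider {
  private final Map<String, List<String>> elevationMap;

  MapElevationProvider(Map<String, List<String>> elevationMap) {
    this.elevationMap = elevationMap;
  }

  @Override
  public List<String> getElevationForQuery(String queryString) {
    return elevationMap.getOrDefault(queryString, Collections.emptyList());
  }
}

public class ElevationProviderDemo {
  public static void main(String[] args) {
    ElevationProvider provider =
        new MapElevationProvider(Map.of("ipod", List.of("MA147LL/A")));
    System.out.println(provider.getElevationForQuery("ipod"));   // [MA147LL/A]
    System.out.println(provider.getElevationForQuery("laptop")); // []
  }
}
```

A subset-matching provider would implement the same interface with a different lookup strategy, which is the point of the planned second patch.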



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching

2018-04-24 Thread Bruno Roustant (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450065#comment-16450065
 ] 

Bruno Roustant commented on SOLR-11865:
---

Sorry for the delay.

Yes, if you can take it from here, that would be awesome!
 * Getters for defaults: you're right, there is no need. Please remove them.
 * keepElevationPriority as a constant in QEC: good point.
 * keepElevationPriority meaning:
Actually the comment is not right, maybe the sorting has changed since the time 
I wrote this comment. I don't think it is linked anymore to forceElevation 
since the ElevationComparatorSource can be added as a SortField even if 
forceElevation=false when one sorts by score.
The point is
- with keepElevationPriority=true, the behavior is unchanged, the elevated 
documents (on top) are sorted by the order of the elevation rules and elevated 
ids in the config file.
- with keepElevationPriority=false, the behavior changes, the elevated 
documents (still on top) are in any order, and they may be re-ordered by other 
sort fields (this will allow the use of the efficient but unsorted 
TrieSubsetMatcher in the other patch).

> Refactor QueryElevationComponent to prepare query subset matching
> -
>
> Key: SOLR-11865
> URL: https://issues.apache.org/jira/browse/SOLR-11865
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: master (8.0)
>Reporter: Bruno Roustant
>Priority: Minor
>  Labels: QueryComponent
> Fix For: master (8.0)
>
> Attachments: 
> 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, 
> 0002-Refactor-QueryElevationComponent-after-review.patch, 
> 0003-Remove-exception-handlers-and-refactor-getBoostDocs.patch, 
> SOLR-11865.patch
>
>
> The goal is to prepare a second improvement to support query terms subset 
> matching or query elevation rules.
> Before that, we need to refactor the QueryElevationComponent. We make it 
> extendible. We introduce the ElevationProvider interface which will be 
> implemented later in a second patch to support subset matching. The current 
> full-query match policy becomes a default simple MapElevationProvider.
> - Add overridable methods to handle exceptions during the component 
> initialization.
> - Add overridable methods to provide the default values for config properties.
> - No functional change beyond refactoring.
> - Adapt unit test.






[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-04-24 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450039#comment-16450039
 ] 

Jim Ferenczi commented on LUCENE-8196:
--

I don't think an operator can prevent anything here; a query for 
*Intervals.ordered(Intervals.term("w3"), Intervals.term("w3"))* should always 
return all intervals of the term "w3" (it will not interleave successive 
intervals of "w3"). [~mattweber] why do you think that this "scenario" should 
be prevented? When I do "foo AND foo" I don't expect it to match only documents 
that have foo twice.

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8196-debug.patch, LUCENE-8196.patch, 
> LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.
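
For intuition, the minimum-interval idea for an ordered pair of terms can be sketched as a single pass over sorted position lists. This is a toy model under simplifying assumptions (two terms, positions sorted ascending), not the patch's IntervalIterator:

```java
import java.util.*;

public class OrderedIntervalsToy {
  /** Minimal ordered intervals [a, b] with a drawn from posA, b from posB,
   *  a < b, and no returned interval containing another one. posA and posB
   *  must be sorted ascending. Toy sketch of the minimum-interval idea only. */
  static List<int[]> ordered(int[] posA, int[] posB) {
    List<int[]> out = new ArrayList<>();
    int lastA = -1;
    for (int b : posB) {
      int bestA = -1;
      for (int a : posA) {
        if (a < b) bestA = a; else break;  // largest posA entry before b
      }
      if (bestA >= 0 && bestA > lastA) {   // drop intervals that nest a previous one
        out.add(new int[]{bestA, b});
        lastA = bestA;
      }
    }
    return out;
  }

  public static void main(String[] args) {
    // "quick" at positions [1, 5], "fox" at positions [3, 8]
    for (int[] iv : ordered(new int[]{1, 5}, new int[]{3, 8})) {
      System.out.println(Arrays.toString(iv)); // [1, 3] then [5, 8]
    }
  }
}
```

Taking the largest start before each end, and discarding intervals that contain an already-emitted one, is the essence of keeping only minimal intervals.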






[jira] [Commented] (LUCENE-8272) Share internal DV update code between binary and numeric

2018-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450028#comment-16450028
 ] 

Michael McCandless commented on LUCENE-8272:


+1

> Share internal DV update code between binary and numeric
> 
>
> Key: LUCENE-8272
> URL: https://issues.apache.org/jira/browse/LUCENE-8272
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8272.patch
>
>
> Today we duplicate a fair portion of the internal logic to
> apply updates of binary and numeric doc values. This change refactors
> this non-trivial code to share the same code path and only differ in
> if we provide a binary or numeric instance. This also allows us to
> iterate over the updates only once rather than twice, once for numeric
> and once for binary fields.
> 
> This change also subclasses DocValuesIterator from 
> DocValuesFieldUpdates.Iterator,
> which allows easier consumption down the road since it now shares most of 
> its
> interface with DocIdSetIterator, which is the main interface for this in 
> Lucene.






[jira] [Commented] (LUCENE-8262) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-24 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450020#comment-16450020
 ] 

David Smiley commented on LUCENE-8262:
--

Mark, could you please provide any further info on why Lucene 
IndexReader/IndexWriter is fundamentally un-interruptible?  I think that's what 
you're saying; I haven't heard that before.  The NIO aspect is already 
understood, and the user can choose to avoid NIO.

> NativeFSLockFactory loses the channel when a thread is interrupted and the 
> SolrCore becomes unusable after
> --
>
> Key: LUCENE-8262
> URL: https://issues.apache.org/jira/browse/LUCENE-8262
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: NativeFSLockFactory, locking
>   Original Estimate: 24h
>  Time Spent: 10m
>  Remaining Estimate: 23h 50m
>
> The condition is rare for us and seems basically a race.  If a thread that is 
> running just happens to have the FileChannel open for NativeFSLockFactory and 
> is interrupted, the channel is closed since it extends 
> [AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html]
> Unfortunately this means the Solr Core has to be unloaded and reopened to 
> make the core usable again as the ensureValid check forever throws an 
> exception after.
> org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an 
> external force: 
> NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
>  exclusive invalid],creationTime=2018-04-06T21:45:11Z) at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178)
>  at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:113)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
>  at 
> org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
>  
> Proposed solution is using AsynchronousFileChannel instead, since this is 
> only operating on a lock and .size method
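
The failure mode in the description can be reproduced with a stdlib-only demo, no Lucene required. Per the AbstractInterruptibleChannel contract, a thread whose interrupt flag is set and which then invokes a blocking I/O operation on a channel closes that channel and receives ClosedByInterruptException; the helper name below is hypothetical:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Demonstrates why an interrupt is fatal to a shared FileChannel: the first
 *  interruptible I/O call after the interrupt closes the channel, which is
 *  the situation the NativeFSLockFactory lock channel hits. */
public class InterruptClosesChannelDemo {

  static boolean channelOpenAfterInterruptedIO() {
    try {
      Path tmp = Files.createTempFile("interrupt-demo", ".bin");
      try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
        Thread.currentThread().interrupt();   // an interrupt arrives...
        try {
          ch.size();                          // ...and the next I/O call closes the channel
        } catch (ClosedByInterruptException expected) {
          // the lock's channel is now gone for every thread sharing it
        }
        return ch.isOpen();
      } finally {
        Thread.interrupted();                 // clear the flag before cleanup
        Files.deleteIfExists(tmp);
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println("channel open after interrupted I/O: "
        + channelOpenAfterInterruptedIO());
  }
}
```

This is why ensureValid keeps throwing afterwards: the closed channel never comes back for the still-open core.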






[jira] [Comment Edited] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-04-24 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450014#comment-16450014
 ] 

Matt Weber edited comment on LUCENE-8196 at 4/24/18 2:59 PM:
-

[~jim.ferenczi] [~romseygeek]  I think rename to {{and}} makes sense, however, 
I would still like a way to explicitly prevent the scenario I described. Maybe 
a {{minwith}} operator?  The width at the same position/interval should be 
{{0}} right? 


was (Author: mattweber):
[~jim.ferenczi] [~romseygeek]  I think rename to {{and}} makes sense, however, 
I would still live a way to explicitly prevent the scenario I described . Maybe 
a {{minwith}} operator?  The width at the same position/interval should be 
{{0}} right? 

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8196-debug.patch, LUCENE-8196.patch, 
> LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.






[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-04-24 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450014#comment-16450014
 ] 

Matt Weber commented on LUCENE-8196:


[~jim.ferenczi] [~romseygeek]  I think rename to {{and}} makes sense, however, 
I would still like a way to explicitly prevent the scenario I described. Maybe 
a {{minwith}} operator?  The width at the same position/interval should be 
{{0}} right? 

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8196-debug.patch, LUCENE-8196.patch, 
> LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.






[jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Mike Sokolov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449994#comment-16449994
 ] 

Mike Sokolov commented on LUCENE-8273:
--

The name  "resetting" is a little confusing since it controls propagation of 
calls in end() and close() as well. Maybe call it "recursing" or "once" or 
something else?

> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.
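
The bypass idea can be modeled in plain Java with a token stream reduced to per-token functions. This is a toy sketch of the concept (class and method names are invented, not the proposed Lucene API):

```java
import java.util.*;
import java.util.function.*;

/** Toy model of a "bypassing" filter: the wrapped transform runs only for
 *  tokens matching a predicate; all other tokens pass through untouched. */
class BypassingMapper implements Function<String, List<String>> {
  private final Predicate<String> shouldFilter;
  private final Function<String, List<String>> inner;

  BypassingMapper(Predicate<String> shouldFilter,
                  Function<String, List<String>> inner) {
    this.shouldFilter = shouldFilter;
    this.inner = inner;
  }

  @Override
  public List<String> apply(String token) {
    return shouldFilter.test(token) ? inner.apply(token) : List.of(token);
  }
}

public class BypassDemo {
  public static void main(String[] args) {
    // A WordDelimiterFilter-like splitter, applied only to hyphenated tokens
    Function<String, List<String>> splitOnHyphen = t -> List.of(t.split("-"));
    BypassingMapper filter =
        new BypassingMapper(t -> t.contains("-"), splitOnHyphen);

    List<String> out = new ArrayList<>();
    for (String token : List.of("wi-fi", "1/2", "router")) {
      out.addAll(filter.apply(token));
    }
    System.out.println(out); // [wi, fi, 1/2, router]
  }
}
```

The real TokenFilter version has to propagate attribute state, end(), and close() through the wrapper, which is where the design discussion below comes in.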






[jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449993#comment-16449993
 ] 

David Smiley commented on LUCENE-8273:
--

Nice!

Could you add a test with a filter that may produce multiple terms instead of 
just one-to-one?  And maybe try the scenario when the filter swallows it (e.g. 
WDF sees a token that is simply a symbol).  The documentation is ok, but I was 
confused about what usage would look like in practice until I looked at the 
test, so maybe a simple example in the class javadocs could shed light on this. 

With such a general utility, I wonder if the existing TokenFilters that have 
precondition checks (e.g. stemmers that check conditions) needn't bother doing 
this anymore since you could wrap the stemmer with the BypassingTokenFilter 
here with a check if the word is in a list?  Then we wouldn't even need 
KeywordAttribute!  I realize this is taking your simple proposal and taking it 
very far but I think it's worth discussing for 8.0.

An alternative to your BypassingTokenFilter is creating an intermediate base 
class between existing TokenFilters that bypass (e.g. stemmers + ones that 
ought to like WDF) and TokenFilter.  But thinking about this more, this seems 
like a bigger disruptive change and wouldn't cast a net as wide as 
BypassingTokenFilter which can filter anything, even filters where the author 
forgot to consider being filtered.

> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.






[jira] [Commented] (SOLR-12244) Inconsistent method names

2018-04-24 Thread KuiLIU (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449991#comment-16449991
 ] 

KuiLIU commented on SOLR-12244:
---

https://github.com/apache/lucene-solr/pull/354

> Inconsistent method names
> -
>
> Key: SOLR-12244
> URL: https://issues.apache.org/jira/browse/SOLR-12244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: KuiLIU
>Priority: Major
>
> The following method is named "getShardNames".
> The method adds "sliceName" entries to "shardNames", so the name 
> "addShardNames" would be clearer than "getShardNames", since "get" implies 
> getting something.
> {code:java}
>  public static void getShardNames(Integer numShards, List<String> shardNames) {
>    if (numShards == null)
>      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, 
>          "numShards" + " is a required param");
>    for (int i = 0; i < numShards; i++) {
>      final String sliceName = "shard" + (i + 1);
>      shardNames.add(sliceName);
>    }
>  }
> {code}






[jira] [Commented] (LUCENE-8265) WordDelimiterFilter should pass through terms marked as keywords

2018-04-24 Thread Mike Sokolov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449981#comment-16449981
 ] 

Mike Sokolov commented on LUCENE-8265:
--

[~romseygeek] yes, I could use that!  

> WordDelimiterFilter should pass through terms marked as keywords
> 
>
> Key: LUCENE-8265
> URL: https://issues.apache.org/jira/browse/LUCENE-8265
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This will help in cases where some terms containing separator characters 
> should be split, but others should not.  For example, this will enable a 
> filter that identifies things that look like fractions and identifies them as 
> keywords so that 1/2 does not become 12, while doing splitting and joining on 
> terms that look like part numbers containing slashes, eg something like 
> "sn-999123/1" might sometimes be written "sn-999123-1".






Re: [jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Michael Sokolov
+1

On Tue, Apr 24, 2018 at 9:58 AM, Alan Woodward (JIRA) 
wrote:

>
> [ https://issues.apache.org/jira/browse/LUCENE-8273?page=
> com.atlassian.jira.plugin.system.issuetabpanels:comment-
> tabpanel=16449897#comment-16449897 ]
>
> Alan Woodward commented on LUCENE-8273:
> ---
>
> I added this to core rather than to the analysis module as it seems to me
> to be a utility class like FilteringTokenFilter, which is also in core.
> But I'm perfectly happy to move it to analysis-common if that makes more
> sense to others.
>
> > Add a BypassingTokenFilter
> > --
> >
> > Key: LUCENE-8273
> > URL: https://issues.apache.org/jira/browse/LUCENE-8273
> > Project: Lucene - Core
> >  Issue Type: New Feature
> >Reporter: Alan Woodward
> >Priority: Major
> > Attachments: LUCENE-8273.patch
> >
> >
> > Spinoff of LUCENE-8265.  It would be useful to be able to wrap a
> TokenFilter in such a way that it could optionally be bypassed based on the
> current state of the TokenStream.  This could be used to, for example, only
> apply WordDelimiterFilter to terms that contain hyphens.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-8262) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449956#comment-16449956
 ] 

Mark Miller commented on LUCENE-8262:
-

I've looked into this in the past. This is not the only problem interrupting 
can cause.

The answer is don't interrupt threads running Lucene IndexReader/IndexWriter 
code. I spent a bunch of time making sure Solr no longer does. It cannot be 
properly supported.

> NativeFSLockFactory loses the channel when a thread is interrupted and the 
> SolrCore becomes unusable after
> --
>
> Key: LUCENE-8262
> URL: https://issues.apache.org/jira/browse/LUCENE-8262
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: NativeFSLockFactory, locking
>   Original Estimate: 24h
>  Time Spent: 10m
>  Remaining Estimate: 23h 50m
>
> The condition is rare for us and seems basically a race.  If a thread that is 
> running just happens to have the FileChannel open for NativeFSLockFactory and 
> is interrupted, the channel is closed since it extends 
> [AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html]
> Unfortunately this means the Solr Core has to be unloaded and reopened to 
> make the core usable again as the ensureValid check forever throws an 
> exception after.
> org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an 
> external force: 
> NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
>  exclusive invalid],creationTime=2018-04-06T21:45:11Z) at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178)
>  at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:113)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
>  at 
> org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
>  
> Proposed solution is using AsynchronousFileChannel instead, since this is 
> only operating on a lock and .size method






[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449944#comment-16449944
 ] 

Mark Miller commented on LUCENE-8264:
-

bq. Sorry, i don't the discussion makes much sense. 

The discussion makes sense; it sounds like you think making some kind of tool 
doesn't make sense.

bq. The stuff like norms changes requires reindex, like the inverted index, the 
data is stored in a lossy way. Lucene can't do anything about it: its an index.

That's been covered in the discussion - sometimes you can't do anything and 
that's why Lucene currently has this limitation.

bq. I am a true -1 to making a tool that will screw up scoring, sorry.

Same old helpful Robert. Type fast and carry a big veto. Happy belated birthday!

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.






[jira] [Updated] (SOLR-12248) Grouping in SolrCloud fails if indexed="false" docValues="true" and stored="false"

2018-04-24 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12248:

Summary: Grouping in SolrCloud fails if indexed="false" docValues="true" 
and stored="false"  (was: Grouping in SolrCloud fails if indexed="false" 
docValues="true" and sorted="false")

> Grouping in SolrCloud fails if indexed="false" docValues="true" and 
> stored="false"
> --
>
> Key: SOLR-12248
> URL: https://issues.apache.org/jira/browse/SOLR-12248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.2
>Reporter: Erick Erickson
>Priority: Minor
>
> In SolrCloud _only_ (it works in stand-alone mode), a field defined as:
>  <field indexed="false"  docValues="true"  stored="false"  />
> will fail with the following error:
> java.lang.NullPointerException
> org.apache.solr.schema.BoolField.toExternal(BoolField.java:131)
> org.apache.solr.schema.BoolField.toObject(BoolField.java:142)
> org.apache.solr.schema.BoolField.toObject(BoolField.java:51)
> org.apache.solr.search.grouping.endresulttransformer.GroupedEndResultTransformer.transform(GroupedEndResultTransformer.java:72)
> org.apache.solr.handler.component.QueryComponent.groupedFinishStage(QueryComponent.java:830)
> org.apache.solr.handler.component.QueryComponent.finishStage(QueryComponent.java:793)
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:435)
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> .
> .
> curiously enough it succeeds with a field identically defined except for 
> stored="true"






[jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449897#comment-16449897
 ] 

Alan Woodward commented on LUCENE-8273:
---

I added this to core rather than to the analysis module as it seems to me to be 
a utility class like FilteringTokenFilter, which is also in core.  But I'm 
perfectly happy to move it to analysis-common if that makes more sense to 
others.

> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.






[jira] [Commented] (LUCENE-8265) WordDelimiterFilter should pass through terms marked as keywords

2018-04-24 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449896#comment-16449896
 ] 

Alan Woodward commented on LUCENE-8265:
---

I created LUCENE-8273 for the potential spinoff - [~sokolov] would this work 
for your situation?

> WordDelimiterFilter should pass through terms marked as keywords
> 
>
> Key: LUCENE-8265
> URL: https://issues.apache.org/jira/browse/LUCENE-8265
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This will help in cases where some terms containing separator characters 
> should be split, but others should not.  For example, this will enable a 
> filter that identifies things that look like fractions and identifies them as 
> keywords so that 1/2 does not become 12, while doing splitting and joining on 
> terms that look like part numbers containing slashes, eg something like 
> "sn-999123/1" might sometimes be written "sn-999123-1".
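
The intended behavior can be sketched outside Lucene; the following is a simplified plain-Java illustration (not the actual WordDelimiterFilter/KeywordAttribute API, and the fraction pattern and splitting rule are stand-ins): tokens marked as keywords bypass the splitter, everything else is split on separators.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class KeywordPassThroughDemo {
    // Stand-in keyword detector: things that look like fractions.
    private static final Pattern FRACTION = Pattern.compile("\\d+/\\d+");

    // Simplified word-delimiter step: keyword tokens pass through
    // unchanged, other tokens are split on non-alphanumeric separators.
    static List<String> delimit(List<String> tokens) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            if (FRACTION.matcher(t).matches()) {
                out.add(t); // keyword: bypass splitting, so "1/2" does not become "12"
            } else {
                for (String part : t.split("[^A-Za-z0-9]+")) {
                    if (!part.isEmpty()) out.add(part);
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(delimit(List.of("1/2", "sn-999123/1")));
        // [1/2, sn, 999123, 1]
    }
}
```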






[jira] [Commented] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449893#comment-16449893
 ] 

Alan Woodward commented on LUCENE-8273:
---

Here's a patch.

> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.






[jira] [Updated] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8273:
--
Attachment: LUCENE-8273.patch

> Add a BypassingTokenFilter
> --
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.






[jira] [Commented] (LUCENE-8272) Share internal DV update code between binary and numeric

2018-04-24 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449891#comment-16449891
 ] 

Shai Erera commented on LUCENE-8272:


I put some comments on the PR, but I don't see them mentioned here, so FYI.

> Share internal DV update code between binary and numeric
> 
>
> Key: LUCENE-8272
> URL: https://issues.apache.org/jira/browse/LUCENE-8272
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8272.patch
>
>
> Today we duplicate a fair portion of the internal logic to
> apply updates of binary and numeric doc values. This change refactors
> this non-trivial code to share the same code path and only differ in
> whether we provide a binary or numeric instance. This also allows us to
> iterate over the updates only once rather than twice, once for numeric
> and once for binary fields.
> 
> This change also subclasses DocValuesIterator from 
> DocValuesFieldUpdates.Iterator,
> which allows easier consumption down the road since it now shares most of 
> its
> interface with DocIdSetIterator, which is the main interface for this in 
> Lucene.






[jira] [Created] (LUCENE-8273) Add a BypassingTokenFilter

2018-04-24 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8273:
-

 Summary: Add a BypassingTokenFilter
 Key: LUCENE-8273
 URL: https://issues.apache.org/jira/browse/LUCENE-8273
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Alan Woodward


Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter in 
such a way that it could optionally be bypassed based on the current state of 
the TokenStream.  This could be used to, for example, only apply 
WordDelimiterFilter to terms that contain hyphens.
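
The shape of such a bypassing wrapper can be sketched outside Lucene's TokenStream machinery; a minimal plain-Java illustration under that assumption (hypothetical helper names, not the proposed API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class BypassDemo {
    // Apply a per-token transform only when the predicate matches;
    // all other tokens bypass the filter untouched. This mirrors the
    // idea of wrapping a TokenFilter so it can be skipped based on the
    // current state of the stream.
    static List<String> filterWhen(List<String> tokens,
                                   Predicate<String> when,
                                   UnaryOperator<String> filter) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            out.add(when.test(t) ? filter.apply(t) : t);
        }
        return out;
    }

    public static void main(String[] args) {
        // Only transform hyphenated tokens, as in the
        // WordDelimiterFilter example above.
        List<String> result = filterWhen(
            List.of("wi-fi", "network", "plug-in"),
            t -> t.contains("-"),
            t -> t.replace("-", " "));
        System.out.println(result); // [wi fi, network, plug in]
    }
}
```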






[jira] [Resolved] (LUCENE-8271) Remove IndexWriter from DWFlushQueue

2018-04-24 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8271.
-
Resolution: Fixed

>  Remove IndexWriter from DWFlushQueue
> -
>
> Key: LUCENE-8271
> URL: https://issues.apache.org/jira/browse/LUCENE-8271
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8271.patch
>
>
> This simplifies DocumentsWriterFlushQueue by moving all IW related
> code out of it. The DWFQ now only contains logic for taking tickets
> off the queue and applying it to a given consumer. The logic now
> entirely resides in IW and has private visibility. Locking
> also is more contained since IW knows exactly what is called and when.






[jira] [Commented] (LUCENE-8271) Remove IndexWriter from DWFlushQueue

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449885#comment-16449885
 ] 

ASF subversion and git services commented on LUCENE-8271:
-

Commit e018ff5554229f17e49bf37b629811d183c9f856 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e018ff5 ]

LUCENE-8271: Remove IndexWriter from DWFlushQueue

This simplifies DocumentsWriterFlushQueue by moving all IW related
code out of it. The DWFQ now only contains logic for taking tickets
off the queue and applying it to a given consumer. The logic now
entirely resides in IW and has private visibility. Locking
also is more contained since IW knows exactly what is called and when.

>  Remove IndexWriter from DWFlushQueue
> -
>
> Key: LUCENE-8271
> URL: https://issues.apache.org/jira/browse/LUCENE-8271
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8271.patch
>
>
> This simplifies DocumentsWriterFlushQueue by moving all IW related
> code out of it. The DWFQ now only contains logic for taking tickets
> off the queue and applying it to a given consumer. The logic now
> entirely resides in IW and has private visibility. Locking
> also is more contained since IW knows exactly what is called and when.






[jira] [Commented] (LUCENE-8271) Remove IndexWriter from DWFlushQueue

2018-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449867#comment-16449867
 ] 

ASF subversion and git services commented on LUCENE-8271:
-

Commit d32ce90924146a047e1e6f86dc95e23f639d5ac4 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d32ce90 ]

LUCENE-8271: Remove IndexWriter from DWFlushQueue

This simplifies DocumentsWriterFlushQueue by moving all IW related
code out of it. The DWFQ now only contains logic for taking tickets
off the queue and applying it to a given consumer. The logic now
entirely resides in IW and has private visibility. Locking
also is more contained since IW knows exactly what is called and when.

>  Remove IndexWriter from DWFlushQueue
> -
>
> Key: LUCENE-8271
> URL: https://issues.apache.org/jira/browse/LUCENE-8271
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8271.patch
>
>
> This simplifies DocumentsWriterFlushQueue by moving all IW related
> code out of it. The DWFQ now only contains logic for taking tickets
> off the queue and applying it to a given consumer. The logic now
> entirely resides in IW and has private visibility. Locking
> also is more contained since IW knows exactly what is called and when.






[jira] [Commented] (LUCENE-8272) Share internal DV update code between binary and numeric

2018-04-24 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449856#comment-16449856
 ] 

Simon Willnauer commented on LUCENE-8272:
-

[https://github.com/s1monw/lucene-solr/pull/15] /cc [~mikemccand]

> Share internal DV update code between binary and numeric
> 
>
> Key: LUCENE-8272
> URL: https://issues.apache.org/jira/browse/LUCENE-8272
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8272.patch
>
>
> Today we duplicate a fair portion of the internal logic to
> apply updates of binary and numeric doc values. This change refactors
> this non-trivial code to share the same code path and only differ in
> whether we provide a binary or numeric instance. This also allows us to
> iterate over the updates only once rather than twice, once for numeric
> and once for binary fields.
> 
> This change also subclasses DocValuesIterator from 
> DocValuesFieldUpdates.Iterator,
> which allows easier consumption down the road since it now shares most of 
> its
> interface with DocIdSetIterator, which is the main interface for this in 
> Lucene.






[jira] [Updated] (LUCENE-8272) Share internal DV update code between binary and numeric

2018-04-24 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8272:

Attachment: LUCENE-8272.patch

> Share internal DV update code between binary and numeric
> 
>
> Key: LUCENE-8272
> URL: https://issues.apache.org/jira/browse/LUCENE-8272
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8272.patch
>
>
> Today we duplicate a fair portion of the internal logic to
> apply updates of binary and numeric doc values. This change refactors
> this non-trivial code to share the same code path and only differ in
> whether we provide a binary or numeric instance. This also allows us to
> iterate over the updates only once rather than twice, once for numeric
> and once for binary fields.
> 
> This change also subclasses DocValuesIterator from 
> DocValuesFieldUpdates.Iterator,
> which allows easier consumption down the road since it now shares most of 
> its
> interface with DocIdSetIterator, which is the main interface for this in 
> Lucene.






[jira] [Created] (LUCENE-8272) Share internal DV update code between binary and numeric

2018-04-24 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8272:
---

 Summary: Share internal DV update code between binary and numeric
 Key: LUCENE-8272
 URL: https://issues.apache.org/jira/browse/LUCENE-8272
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.4, master (8.0)
Reporter: Simon Willnauer
 Fix For: 7.4, master (8.0)
 Attachments: LUCENE-8272.patch

Today we duplicate a fair portion of the internal logic to
apply updates of binary and numeric doc values. This change refactors
this non-trivial code to share the same code path and only differ in
whether we provide a binary or numeric instance. This also allows us to
iterate over the updates only once rather than twice, once for numeric
and once for binary fields.

This change also subclasses DocValuesIterator from 
DocValuesFieldUpdates.Iterator,
which allows easier consumption down the road since it now shares most of 
its
interface with DocIdSetIterator, which is the main interface for this in 
Lucene.






[jira] [Commented] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement

2018-04-24 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449854#comment-16449854
 ] 

Dawid Weiss commented on LUCENE-8261:
-

I've implemented slightly stronger reference resolution, including checks for 
circular references and some sanity checking. The main method includes tests, 
since these are utilities that don't otherwise have associated tests.

> InterpolatedProperties.interpolate should quote the replacement
> ---
>
> Key: LUCENE-8261
> URL: https://issues.apache.org/jira/browse/LUCENE-8261
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-8261.patch, LUCENE-8261.patch
>
>
> InterpolatedProperties is used in lib check tasks in the build file. I 
> occasionally see this:
> {code}
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108:
>  java.lang.IllegalArgumentException: named capturing group is missing 
> trailing '}'
> at 
> java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052)
> at 
> java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908)
> at 
> org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64)
> {code}
> I don't think we ever need to use any group references in those replacements; 
> they should be fixed strings (quoted verbatim)? So 
> {{Pattern.quoteReplacement}} would be adequate here.
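
A minimal sketch of why quoting the replacement helps (class and method names here are illustrative, not the actual InterpolatedProperties code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class InterpolateDemo {
    // Substitute ${key} references in a template with a fixed value.
    static String interpolate(String template, String key, String value) {
        Matcher m = Pattern.compile("\\$\\{" + Pattern.quote(key) + "\\}")
                           .matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Without Matcher.quoteReplacement, a value containing "$" or
            // "${" is re-parsed as a group reference, which is what throws
            // "named capturing group is missing trailing '}'". Quoting
            // makes the replacement a verbatim string.
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // A value containing "${" would break a naive appendReplacement call.
        String out = interpolate("path=${dir}/lib", "dir", "C:\\build\\${weird}");
        System.out.println(out); // path=C:\build\${weird}/lib
    }
}
```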






[jira] [Updated] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement

2018-04-24 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-8261:

Attachment: LUCENE-8261.patch

> InterpolatedProperties.interpolate should quote the replacement
> ---
>
> Key: LUCENE-8261
> URL: https://issues.apache.org/jira/browse/LUCENE-8261
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-8261.patch, LUCENE-8261.patch
>
>
> InterpolatedProperties is used in lib check tasks in the build file. I 
> occasionally see this:
> {code}
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108:
>  java.lang.IllegalArgumentException: named capturing group is missing 
> trailing '}'
> at 
> java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052)
> at 
> java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908)
> at 
> org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64)
> {code}
> I don't think we ever need to use any group references in those replacements; 
> they should be fixed strings (quoted verbatim)? So 
> {{Pattern.quoteReplacement}} would be adequate here.






[jira] [Commented] (LUCENE-8271) Remove IndexWriter from DWFlushQueue

2018-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449834#comment-16449834
 ] 

Michael McCandless commented on LUCENE-8271:


+1

>  Remove IndexWriter from DWFlushQueue
> -
>
> Key: LUCENE-8271
> URL: https://issues.apache.org/jira/browse/LUCENE-8271
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8271.patch
>
>
> This simplifies DocumentsWriterFlushQueue by moving all IW related
> code out of it. The DWFQ now only contains logic for taking tickets
> off the queue and applying it to a given consumer. The logic now
> entirely resides in IW and has private visibility. Locking
> also is more contained since IW knows exactly what is called and when.






[jira] [Commented] (LUCENE-8265) WordDelimiterFilter should pass through terms marked as keywords

2018-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449819#comment-16449819
 ] 

Michael McCandless commented on LUCENE-8265:


Thanks [~sokolov]; new PR looks great.

> WordDelimiterFilter should pass through terms marked as keywords
> 
>
> Key: LUCENE-8265
> URL: https://issues.apache.org/jira/browse/LUCENE-8265
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This will help in cases where some terms containing separator characters 
> should be split, but others should not.  For example, this will enable a 
> filter that identifies things that look like fractions and identifies them as 
> keywords so that 1/2 does not become 12, while doing splitting and joining on 
> terms that look like part numbers containing slashes, eg something like 
> "sn-999123/1" might sometimes be written "sn-999123-1".






[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449787#comment-16449787
 ] 

David Smiley commented on LUCENE-8264:
--

Fascinating discussion.

Shawn said:
{quote}Now I'm hearing differently ... that any user who has successfully done 
this has just gotten lucky, and that there's no guarantee for the future.
{quote}
I don't think it's quite that bleak.  I believe each segment records the Lucene 
version it was written with, so we could explicitly know whether or not the index 
contains segments older than the current version.  One could even write a tool 
to spit out the IDs of the documents in those segments to facilitate a re-index 
of just those documents.

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 562 - Still Unstable!

2018-04-24 Thread Policeman Jenkins Server

java.lang.OutOfMemoryError: Java heap space


[jira] [Comment Edited] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449759#comment-16449759
 ] 

Uwe Schindler edited comment on LUCENE-8264 at 4/24/18 12:37 PM:
-

I have no problem with making that tool not public. This is how I have earned my 
money for a few months now: bringing stone-aged indexes up to date (adding 
docvalues). Those people know that's wrong, and scoring is not an issue for them 
in most cases. If it is, we work on reindexing, but sometimes that's really 
impossible. All those people were Lucene-only customers. It's cool, because 
people back in the 2.x/3.x days were already using Lucene as their only storage; 
unfortunately not everything was also stored, so some content is "not easy" to 
reindex quickly (like extracting all text again from PDF files).


was (Author: thetaphi):
I have no problem on making that tool not public. This is how I earn my money 
since a few months. Bringing stone-aged indexes up-to date (adding docvalues). 
Those people know that's wrong and scoring is not an issue for them in most 
cases. If it is we are working on reindexing, but sometimes that's really 
impossible. All those people were Lucene-only customers. It's cool, because 
people back in 2.x/3.x days were already using Lucene as their only storage, 
unfortunately not everything also stored, so some stuff is "not easy" to be 
reindexed in a fast way (like extracting all text again from PDF files).

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.






[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-04-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449759#comment-16449759
 ] 

Uwe Schindler commented on LUCENE-8264:
---

I have no problem with making that tool not public. This is how I have earned my 
money for a few months now: bringing stone-aged indexes up to date (adding 
docvalues). Those people know that's wrong, and scoring is not an issue for them 
in most cases. If it is, we work on reindexing, but sometimes that's really 
impossible. All those people were Lucene-only customers. It's cool, because 
people back in the 2.x/3.x days were already using Lucene as their only storage; 
unfortunately not everything was also stored, so some content is "not easy" to 
reindex quickly (like extracting all text again from PDF files).

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.






[jira] [Commented] (LUCENE-8265) WordDelimiterFilter should pass through terms marked as keywords

2018-04-24 Thread Mike Sokolov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449744#comment-16449744
 ] 

Mike Sokolov commented on LUCENE-8265:
--

I updated the pull request, adding a new flag, IGNORE_KEYWORDS, that gates
this feature.




> WordDelimiterFilter should pass through terms marked as keywords
> 
>
> Key: LUCENE-8265
> URL: https://issues.apache.org/jira/browse/LUCENE-8265
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This will help in cases where some terms containing separator characters 
> should be split, but others should not.  For example, this will enable a 
> filter that identifies things that look like fractions and identifies them as 
> keywords so that 1/2 does not become 12, while doing splitting and joining on 
> terms that look like part numbers containing slashes, eg something like 
> "sn-999123/1" might sometimes be written "sn-999123-1".





