Re: Date format issue in solr select query.

2019-05-08 Thread Erick Erickson
Something’s not quite right here. The format 2019-02-28 should fail to index 
unless you’re using a DateRangeField. So my guess is that somehow you’re really 
submitting the date with the time or some other surprise.

Or perhaps you’re using the “schemaless” mode, which can transform dates; we 
recommend against that.

Best,
Erick
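
For anyone who only needs the date portion, one option (not spelled out further in 
this thread) is to reformat the stored UTC instant on the client. A minimal sketch 
using the standard java.time API; the sample value comes from the question below, 
and the class and variable names are purely illustrative:

    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;

    public class DateOnlyFormat {
        public static void main(String[] args) {
            // Value as returned by Solr for a pdate field (always a full UTC instant).
            String stored = "2019-02-28T00:00:00Z";

            // Keep only the date part, formatted as yyyy-MM-dd.
            DateTimeFormatter dateOnly =
                    DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);
            System.out.println(dateOnly.format(Instant.parse(stored))); // 2019-02-28
        }
    }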

> On May 8, 2019, at 7:00 PM, Karthik Gunasekaran 
>  wrote:
> 
> Hi,
> I am new to Solr. I am using Solr 7.6.
>  
> The problem I am facing is formatting the date for a specific field.
>  
> Explanation of my issue: 
>  
> I have a collection named “DateFieldTest”.
> It has a few fields, of which “initial_release_date” is of type pdate.
> We are loading the data into the collection as below:
>  
>  
> [
>   {
> "id": 0,
> "Number": 0,
> "String": "This is a string 0",
> "initial_release_date": "2019-02-28"
>   },
>   {
> "ID": 1,
> "Number": 1,
> "String": "This is a string 1",
> " initial_release_date ": "2019-02-28"
>   }]
>  
> When we run a select query such as
> http://localhost:8983/solr/DateFieldTest/select?q=*:*
> we get the following output:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":0,
> "params":{
>   "q":"*:*"}},
>   "response":{"numFound":1000,"start":0,"docs":[
>   {
> "id":"0",
> "Number":[0],
> "String":["This is a Māori macron 0"],
> "initial_release_date":["2019-02-28T00:00:00Z"],
> "_version_":1633015101576445952},
>   {
> "ID":[1],
> "Number":[1],
> "String":["This is a Māori macron 1"],
> "initial_release_date":["2019-02-28T00:00:00Z"],
> "_version_":1633015101949739008},
>  
> But our use case requires the initial_release_date field in the output of the
> above query to be formatted as yyyy-MM-dd.
> The query automatically adds a time component to the date field, which we
> don’t want.
> Can someone please help me resolve this issue so that I get only the date
> value, without the time, in my select query?
>  
> Thanks,
> Karthik Gunasekaran
> Senior Applications Developer | kaiwhakawhanake Pūmanawa Tautono
> Digital Business  - Channels | Ngā Ratonga Mamati - Ngā Hongere
> Digital Business Services | Ngā Ratonga Pakihi Mamati
> Stats NZ Tatauranga Aotearoa 
> DDI +64 4 931 4347 | stats.govt.nz
> 
> 





[jira] [Commented] (SOLR-13439) Make collection properties easier and safer to use in code

2019-05-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836080#comment-16836080
 ] 

Tomás Fernández Löbbe commented on SOLR-13439:
--

None of those cases use the feature as it's supposed to be used. You either 
need frequent access to the properties (e.g. per request, on a timer, something) or 
not (e.g. maybe to validate something after a particular type of request, like 
a collection API call with a particular set of values, etc.).

Case 1 (Frequent access):
 1. T+0: SolrCore starts (i.e. A replica is added, a collection is created, you 
name it)
 2. T+0: On initialization, a component that relies on collection properties 
registers a listener. Solr reads from ZooKeeper once.
 3. T+0 to T+end-of-life-of-the-core: The watch remains.
 4. T+end-of-life-of-the-core: no more reads are expected, the watch is removed.
 Times read with current approach: 1 + Number of modifications of properties.
 Times with the cache approach: unknown, at least the same as with the 
listener, but depends on when the properties are accessed.

Case 2: Don't set a watch. (you only do this if you know you are going to 
access the properties a low number of times, so low that it doesn't make sense 
to spend resources in watching).
 T + 0: read the properties (reads from Zookeeper)
 done.
{quote}Code that calls getCollectionProperties() is either fast or slow 
depending on whether or not someone else has set a watch on that collection.
{quote}
If your component doesn't set a listener, assume it's going to ZooKeeper. You'd 
only do this if reads of a property are infrequent enough (see case 2 above).
{quote}I think the current inconsistency is worse because something that was 
fast due to the action of unrelated code (getCollectionProperties()) can become 
surprisingly slow,
{quote}
Again, assume {{getCollectionProperties()}} goes to ZooKeeper, but this is only 
used in case of infrequent access. The caller makes the informed tradeoff of 
going to ZooKeeper per call instead of spending resources (ZooKeeper’s and 
Solr’s) in watching.
{quote}whereas after my patch, setting the watch may become surprisingly fast 
due to the effect of unrelated code.
{quote}
... if it's in cache you mean?
{quote}in no case will it be more
{quote}
I can give you two cases:
 Case 1: Collection properties are accessed infrequently (like in my “case 2 
above”), but collection properties change frequently (i.e. every second)
 1. T + 0: call to getCollectionProperties(), Zk watch is set and element is on 
cache
 2. T + 1 to T + 9: Collection properties change, firing watches to Solr. Solr 
receives each watch and reads from ZooKeeper
 3. T + 10: cache expires
 With cache, we read from ZooKeeper 10 times, and ZooKeeper fires 10 watches. 
Without cache, we read once and ZooKeeper doesn't fire any watch. Keep in mind 
that some clusters may have many collections (hundreds/thousands?), so this may 
add a lot of load to ZooKeeper for things that aren't going to be needed. 
 Case 2: A component doesn’t rely on a listener, but relies on cache. 
 1. T + 0: call to getCollectionProperties(), Zk watch is set and element is on 
cache
 2. T + 10, cache expires
 3. T + 11: call to getCollectionProperties(), Zk watch is set and element is 
on cache
 4. T + 20, cache expires
 5. …
 With a listener, this is just one read. With cache, this is, again, unknown, 
but up to N, the number of calls to {{getCollectionProperties()}}
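
To make the two patterns concrete, here is a minimal sketch of "register a listener 
once" versus "read directly"; the {{ZkStateReader}} method names follow the discussion 
in this issue and should be read as an approximation, not a definitive API reference:
{code:java}
import java.util.Map;
import org.apache.solr.common.cloud.ZkStateReader;

class CollectionPropsUsage {

  /** Case 1 (frequent access): register a listener once; reads are then served from
   *  the cached copy, and ZooKeeper is only re-read when a change fires the watch. */
  static void registerOnce(ZkStateReader zkStateReader, String collection) {
    zkStateReader.registerCollectionPropsWatcher(collection, props -> {
      // react to the change; 'props' is the freshly read map of collection properties
      System.out.println("collection properties changed: " + props);
      return false; // assumed to mirror CollectionStateWatcher: returning true removes the watcher
    });
  }

  /** Case 2 (infrequent access): no listener; accept one ZooKeeper read per call. */
  static Map<String, String> readOnce(ZkStateReader zkStateReader, String collection) {
    return zkStateReader.getCollectionProperties(collection);
  }
}
{code}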

> Make collection properties easier and safer to use in code
> --
>
> Key: SOLR-13439
> URL: https://issues.apache.org/jira/browse/SOLR-13439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13439.patch, SOLR-13439.patch
>
>
> (breaking this out from SOLR-13420, please read there for further background)
> Before this patch the api is quite confusing (IMHO):
>  # any code that wanted to know what the properties for a collection are 
> could call zkStateReader.getCollectionProperties(collection) but this was a 
> dangerous and trappy API because that was a query to zookeeper every time. If 
> a naive user auto-completed that in their IDE without investigating, heavy 
> use of zookeeper would ensue.
>  # To "do it right" for any code that might get called on a per-doc or per 
> request basis one had to cause caching by registering a watcher. At which 
> point the getCollectionProperties(collection) magically becomes safe to use, 
> but the watcher pattern probably looks familiar and induces a user who hasn't 
> read the solr code closely to create their own cache and update it when their 
> watcher is 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.2) - Build # 7930 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7930/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Error from server at http://127.0.0.1:57796/solr/authCollection: Error from 
server at null: Expected mime type application/octet-stream but got text/html. 
   Error 401 require 
authentication  HTTP ERROR 401 Problem 
accessing /solr/authCollection_shard2_replica_n2/select. Reason: 
require authenticationhttp://eclipse.org/jetty;>Powered 
by Jetty:// 9.4.14.v20181114

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:57796/solr/authCollection: Error from server at 
null: Expected mime type application/octet-stream but got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/authCollection_shard2_replica_n2/select. Reason:
require authenticationhttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.14.v20181114




at 
__randomizedtesting.SeedInfo.seed([1E0750AD432CAC1A:A26926BFE77F2F60]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:290)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-13437) fork noggit code to Solr

2019-05-08 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836040#comment-16836040
 ] 

Noble Paul commented on SOLR-13437:
---

I plan to commit this in a few days

> fork noggit code to Solr
> 
>
> Key: SOLR-13437
> URL: https://issues.apache.org/jira/browse/SOLR-13437
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We rely on noggit for all our JSON encoding/decoding needs. The main project 
> is not actively maintained. We cannot easily switch to another parser 
> because it may cause backward incompatibility: we have advertised the ability 
> to use flexible JSON, and we also use noggit internally in many classes.






Date format issue in solr select query.

2019-05-08 Thread Karthik Gunasekaran
Hi,
I am new to Solr. I am using Solr 7.6.

The problem I am facing is formatting the date for a specific field.

Explanation of my issue:

I have a collection named “DateFieldTest”.
It has a few fields, of which “initial_release_date” is of type pdate.
We are loading the data into the collection as below:


[
  {
"id": 0,
"Number": 0,
"String": "This is a string 0",
"initial_release_date": "2019-02-28"
  },
  {
"ID": 1,
"Number": 1,
"String": "This is a string 1",
" initial_release_date ": "2019-02-28"
  }]

When we run a select query such as
http://localhost:8983/solr/DateFieldTest/select?q=*:*
we get the following output:
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":0,
"params":{
  "q":"*:*"}},
  "response":{"numFound":1000,"start":0,"docs":[
  {
"id":"0",
"Number":[0],
"String":["This is a Māori macron 0"],
"initial_release_date":["2019-02-28T00:00:00Z"],
"_version_":1633015101576445952},
  {
"ID":[1],
"Number":[1],
"String":["This is a Māori macron 1"],
"initial_release_date":["2019-02-28T00:00:00Z"],
"_version_":1633015101949739008},

But our use case requires the initial_release_date field in the output of the
above query to be formatted as yyyy-MM-dd.
The query automatically adds a time component to the date field, which we
don’t want.
Can someone please help me resolve this issue so that I get only the date
value, without the time, in my select query?

Thanks,
Karthik Gunasekaran
Senior Applications Developer | kaiwhakawhanake Pūmanawa Tautono
Digital Business  - Channels | Ngā Ratonga Mamati - Ngā Hongere
Digital Business Services | Ngā Ratonga Pakihi Mamati
Stats NZ Tatauranga Aotearoa
DDI +64 4 931 4347 | stats.govt.nz




[JENKINS] Lucene-Solr-8.1-Linux (32bit/jdk1.8.0_201) - Build # 302 - Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.1-Linux/302/
Java: 32bit/jdk1.8.0_201 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestCloudSearcherWarming

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestCloudSearcherWarming: 1) Thread[id=9026, 
name=ZkTestServer Run Thread, state=WAITING, 
group=TGRP-TestCloudSearcherWarming] at java.lang.Object.wait(Native 
Method) at java.lang.Thread.join(Thread.java:1252) at 
java.lang.Thread.join(Thread.java:1326) at 
org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:320)
 at 
org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:343)
 at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:566)
2) Thread[id=9029, name=SyncThread:0, state=WAITING, 
group=TGRP-TestCloudSearcherWarming] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
3) Thread[id=9028, name=SessionTracker, state=TIMED_WAITING, 
group=TGRP-TestCloudSearcherWarming] at java.lang.Object.wait(Native 
Method) at 
org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:147) 
   4) Thread[id=9027, name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, 
state=RUNNABLE, group=TGRP-TestCloudSearcherWarming] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
 at java.lang.Thread.run(Thread.java:748)5) Thread[id=9030, 
name=ProcessThread(sid:0 cport:44151):, state=WAITING, 
group=TGRP-TestCloudSearcherWarming] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:123)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestCloudSearcherWarming: 
   1) Thread[id=9026, name=ZkTestServer Run Thread, state=WAITING, 
group=TGRP-TestCloudSearcherWarming]
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:320)
at 
org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:343)
at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:566)
   2) Thread[id=9029, name=SyncThread:0, state=WAITING, 
group=TGRP-TestCloudSearcherWarming]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
   3) Thread[id=9028, name=SessionTracker, state=TIMED_WAITING, 
group=TGRP-TestCloudSearcherWarming]
at java.lang.Object.wait(Native Method)
at 
org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:147)
   4) Thread[id=9027, name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, 
state=RUNNABLE, group=TGRP-TestCloudSearcherWarming]
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)
   5) Thread[id=9030, name=ProcessThread(sid:0 cport:44151):, state=WAITING, 
group=TGRP-TestCloudSearcherWarming]

Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-08 Thread Noble Paul
It's a bug fix. So, we should include it.

On Thu, May 9, 2019 at 8:13 AM Ishan Chattopadhyaya
 wrote:
>
> Hi Dat,
>
> > Should we respin the release for SOLR-13449.
>
> I don't fully understand the implications of not having SOLR-13449. If
> you (or someone else) suggest(s) that this needs to go into 8.1, then
> I'll re-spin RC2 tomorrow.
>
> Thanks,
> Ishan
>
> On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
> >
> > SUCCESS! [1:08:48.869786]
> >
> >
> > On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  wrote:
> >>
> >> Hi Ishan,
> >>
> >> Should we respin the release for SOLR-13449.
> >>
> >> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
> >>>
> >>> +1 SUCCESS! [1:15:45.039228]
> >>>
> >>> Kevin Risden
> >>>
> >>>
> >>> On Wed, May 8, 2019 at 11:12 AM David Smiley  
> >>> wrote:
> 
>  +1
>  SUCCESS! [1:29:43.016321]
> 
>  Thanks for doing the release Ishan!
> 
>  ~ David Smiley
>  Apache Lucene/Solr Search Developer
>  http://www.linkedin.com/in/davidwsmiley
> 
> 
>  On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya 
>   wrote:
> >
> > Please vote for release candidate 1 for Lucene/Solr 8.1.0
> >
> > The artifacts can be downloaded from:
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
> >
> > You can run the smoke tester directly with this command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
> >
> > Here's my +1
> > SUCCESS! [0:46:38.948020]
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >> --
> >> Best regards,
> >> Cao Mạnh Đạt
> >> D.O.B : 31-07-1991
> >> Cell: (+84) 946.328.329
> >> E-mail: caomanhdat...@gmail.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


-- 
-
Noble Paul




[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1330 - Failure

2019-05-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1330/

No tests ran.

Build Log:
[...truncated 23471 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2531 links (2070 relative) to 3359 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 356 - Failure

2019-05-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/356/

All tests passed

Build Log:
[...truncated 62619 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1509551442
 [ecj-lint] Compiling 48 source files to /tmp/ecj1509551442
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class MockInitialContextFactory implements 
InitialContextFactory {
 [ecj-lint]  ^
 [ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
 [ecj-lint] private final javax.naming.Context context;
 [ecj-lint]   
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint] ^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
 [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
 [ecj-lint]  ^^^
 [ecj-lint] context cannot be resolved
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
 [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
 [ecj-lint] return context;
 [ecj-lint]^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:643:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:101:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build.xml:687:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/common-build.xml:479:
 The following error occurred while executing this line:

[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 115 - Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/115/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testListenerAcceptance

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([8B5594FF5A70C5C:190103A84C2E18AA]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testListenerAcceptance(NodeAddedTriggerTest.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Expected metric minimums for prefix SECURITY./authentication.: 

[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-9) - Build # 121 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/121/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.prometheus.exporter.SolrExporterIntegrationTest.jvmMetrics

Error Message:
expected:<36> but was:<48>

Stack Trace:
java.lang.AssertionError: expected:<36> but was:<48>
at 
__randomizedtesting.SeedInfo.seed([3FEAC88952914342:EC5EF566477F8FD6]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.prometheus.exporter.SolrExporterIntegrationTest.jvmMetrics(SolrExporterIntegrationTest.java:71)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 2085 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[jira] [Comment Edited] (SOLR-13439) Make collection properties easier and safer to use in code

2019-05-08 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835757#comment-16835757
 ] 

Gus Heck edited comment on SOLR-13439 at 5/8/19 10:23 PM:
--

Need was probably the wrong word... want :) would have been more correct. I 
prefer that since then it can be a single line and we don't have to do the 
cache lookup for an object that we are already handling.

As for consistency/frequency of access let's consider some cases (time in 
minutes):
 # Case 1:
 ## T+0 A watch is set, cache is populated from zk
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed exactly once. This is UNCHANGED vs current
 # Case 2:
 ## T+0 A watch is set, cache is populated from zk
 ## T+15 properties are accessed with getCollectionProperties() 
 ## T+30 the watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed exactly once. This is UNCHANGED vs current
 # Case 3:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+5 a watch is set, cache is already populated, zk is not accessed
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case the data access frequency for zk is once instead of twice. 
This is LESS than current.
 # Case 4:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+10 the cache expires
 ## T+20 a watch is set, cache is populated from zk
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed twice. This is UNCHANGED vs current. 
 # Case 5:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+1 a call to getCollectionProperties() is made, cache is already 
populated, zk is not accessed
 ## T+2 a call to getCollectionProperties() is made, cache is already 
populated, zk is not accessed
 ## T+12 the cache expires
 ## T+30 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+40 the cache expires
 ## In this case zookeeper is accessed twice instead of four times. This is LESS 
than current.

I will grant you that it's hard to predict when the load on zookeeper will 
be less, but in no case will it be more, so unless you mean to stress test 
zookeeper this is not much of a problem. If I've missed a case, let me know.

The existing code (without this patch) isn't really consistent either. Code 
that calls getCollectionProperties() is either fast or slow depending on 
whether or not someone else has set a watch on that collection. I think the 
current inconsistency is worse because something that was fast  due to the 
action of unrelated code (getCollectionProperties()) can become surprisingly 
slow, whereas after my patch, setting the watch may become surprisingly fast 
due to the effect of unrelated code.

Edit: re-reading that last sentence I realize now that one could flip either 
one around but I guess what I meant to express is that the delta is potentially 
much larger in the present code since it would be almost inconceivable to 
create a watch on a per-doc basis, but one could easily write code checking 
collection properties on a per-doc or per request basis.
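
For illustration, a minimal sketch of the expiring-cache behaviour being debated 
here; the 10-minute TTL and all names are illustrative stand-ins, not the actual patch:
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

/** Toy per-collection properties cache with a TTL, standing in for the behaviour discussed above. */
class PropsCache {
  private static final long TTL_NANOS = TimeUnit.MINUTES.toNanos(10); // illustrative TTL

  private static final class Entry {
    final Map<String, String> props;
    final long createdAt = System.nanoTime();
    Entry(Map<String, String> props) { this.props = props; }
    boolean expired() { return System.nanoTime() - createdAt > TTL_NANOS; }
  }

  private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

  /** Serves cached properties while fresh; otherwise re-reads (one backing read per expiry window). */
  Map<String, String> get(String collection) {
    Entry e = cache.get(collection);
    if (e == null || e.expired()) {
      e = new Entry(fetchFromZk(collection));
      cache.put(collection, e);
    }
    return e.props;
  }

  private Map<String, String> fetchFromZk(String collection) {
    return Collections.emptyMap(); // placeholder for the real ZooKeeper read
  }
}
{code}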


was (Author: gus_heck):
Need was probably the wrong word... want :) would have been more correct. I 
prefer that since then it can be a single line and we don't have to do the 
cache lookup for an object that we are already handling.

As for consistency/frequency of access let's consider some cases (time in 
minutes):
 # Case 1:
 ## T+0 A watch is set, cache is populated from zk
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed exactly once.This is UNCHANGED vs current
 # Case 2:
 ## T+0 A watch is set, cache is populated from zk
 ## T+15 properties are accessed with getCollectionProperties() 
 ## T+30 the watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed exactly once.This is UNCHANGED vs current
 # Case 3:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+5 a watch is set, cache is already populated, zk is not accessed
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case the data access frequency for zk is once instead of twice. 
This LESS than current.
 # Case 4:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+10 the cache expires
 ## T+20 a watch is set, cache is cache is populated from zk
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed twice. This is UNCHANGED vs current. 
 # Case 5:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+1 a call to getCollectionProperties() is made, cache is already 
populated, zk is not accessed
 ## T+2 a call to getCollectionProperties() is made, cache is already 
populated, zk is not accessed
 ## 

Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-08 Thread Ishan Chattopadhyaya
Hi Dat,

> Should we respin the release for SOLR-13449.

I don't fully understand the implications of not having SOLR-13449. If
you (or someone else) suggest(s) that this needs to go into 8.1, then
I'll re-spin RC2 tomorrow.

Thanks,
Ishan

On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
>
> SUCCESS! [1:08:48.869786]
>
>
> On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  wrote:
>>
>> Hi Ishan,
>>
>> Should we respin the release for SOLR-13449.
>>
>> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
>>>
>>> +1 SUCCESS! [1:15:45.039228]
>>>
>>> Kevin Risden
>>>
>>>
>>> On Wed, May 8, 2019 at 11:12 AM David Smiley  
>>> wrote:

 +1
 SUCCESS! [1:29:43.016321]

 Thanks for doing the release Ishan!

 ~ David Smiley
 Apache Lucene/Solr Search Developer
 http://www.linkedin.com/in/davidwsmiley


 On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya 
  wrote:
>
> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> Here's my +1
> SUCCESS! [0:46:38.948020]
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>> --
>> Best regards,
>> Cao Mạnh Đạt
>> D.O.B : 31-07-1991
>> Cell: (+84) 946.328.329
>> E-mail: caomanhdat...@gmail.com




Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-08 Thread Varun Thacker
SUCCESS! [1:08:48.869786]


On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  wrote:

> Hi Ishan,
>
> Should we respin the release for SOLR-13449.
>
> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
>
>> +1 SUCCESS! [1:15:45.039228]
>>
>> Kevin Risden
>>
>>
>> On Wed, May 8, 2019 at 11:12 AM David Smiley 
>> wrote:
>>
>>> +1
>>> SUCCESS! [1:29:43.016321]
>>>
>>> Thanks for doing the release Ishan!
>>>
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>>
>>>
>>> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya <
>>> ichattopadhy...@gmail.com> wrote:
>>>
 Please vote for release candidate 1 for Lucene/Solr 8.1.0

 The artifacts can be downloaded from:

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50

 You can run the smoke tester directly with this command:

 python3 -u dev-tools/scripts/smokeTestRelease.py \

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50

 Here's my +1
 SUCCESS! [0:46:38.948020]

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

 --
> *Best regards,*
> *Cao Mạnh Đạt*
>
>
> *D.O.B : 31-07-1991*
> *Cell: (+84) 946.328.329*
> *E-mail: caomanhdat...@gmail.com*
>


[jira] [Updated] (SOLR-13442) Safe mode with minimal functionality

2019-05-08 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-13442:

Description: 
With lots and lots of out-of-the-box features comes the possibility of security 
vulnerabilities. A managed / hosted Solr cluster should have only minimal 
functionality turned on.

Through this issue, we'd like to explore the possibility of starting up Solr 
such that just basic cloud based indexing and querying works (under basic 
auth), and fancy stuff like the following be turned off (maybe by a startup 
parameter):
# Tika
# DIH
# Funky shards parameter usage (unless specific to implicit routing)
# HDFS
# Streaming expressions
# non whitelisted function queries (with a whitelist of only few that are 
essential)
# configset upload
# blob store
# etc.

The motivation of this work is to have a public facing minimal Solr that is 
bullet proof, secure against external exposure (with the help of basic auth and 
rule based authorization).

  was:
With lots and lots of out of the box features come the possibility of security 
vulnerabilities. A managed / hosted Solr cluster should have only minimal 
functionality turned on.

Through this issue, I plan to explore the possibility of starting up Solr such 
that just basic cloud based indexing and querying works (under basic auth), and 
fancy stuff like the following be turned off (maybe by a startup parameter):
# Tika
# DIH
# Funky shards parameter usage (unless specific to implicit routing)
# HDFS
# Streaming expressions
# non whitelisted function queries (with a whitelist of only few that are 
essential)
# configset upload
# blob store
# etc.

My motivation is to have a public facing minimal Solr that is bullet proof 
secure against external exposure (with the help of basic auth and rule based 
authorization).


> Safe mode with minimal functionality
> 
>
> Key: SOLR-13442
> URL: https://issues.apache.org/jira/browse/SOLR-13442
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Major
>
> With lots and lots of out-of-the-box features comes the possibility of 
> security vulnerabilities. A managed / hosted Solr cluster should have only 
> minimal functionality turned on.
> Through this issue, we'd like to explore the possibility of starting up Solr 
> such that just basic cloud based indexing and querying works (under basic 
> auth), and fancy stuff like the following be turned off (maybe by a startup 
> parameter):
> # Tika
> # DIH
> # Funky shards parameter usage (unless specific to implicit routing)
> # HDFS
> # Streaming expressions
> # non whitelisted function queries (with a whitelist of only few that are 
> essential)
> # configset upload
> # blob store
> # etc.
> The motivation of this work is to have a public facing minimal Solr that is 
> bullet proof, secure against external exposure (with the help of basic auth 
> and rule based authorization).






[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835894#comment-16835894
 ] 

Simon Willnauer commented on LUCENE-8785:
-

{quote} Please feel free to commit this to the release branch. In case of a 
re-spin, I'll pick this change up. {quote}

[~ichattopadhyaya] done. Thanks.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)






[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835892#comment-16835892
 ] 

ASF subversion and git services commented on LUCENE-8785:
-

Commit e31be1ee532189996c8979973ff74cab3626283f in lucene-solr's branch 
refs/heads/branch_8_1 from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e31be1e ]

LUCENE-8785: Ensure threadstates are locked before iterating (#664)

Ensure new threadstates are locked before retrieving the
number of active threadstates. This causes assertion errors
and potentially broken field attributes in the IndexWriter when
IndexWriter#deleteAll is called while actively indexing.


> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-08 Thread Đạt Cao Mạnh
Hi Ishan,

Should we respin the release for SOLR-13449?

On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:

> +1 SUCCESS! [1:15:45.039228]
>
> Kevin Risden
>
>
> On Wed, May 8, 2019 at 11:12 AM David Smiley 
> wrote:
>
>> +1
>> SUCCESS! [1:29:43.016321]
>>
>> Thanks for doing the release Ishan!
>>
>> ~ David Smiley
>> Apache Lucene/Solr Search Developer
>> http://www.linkedin.com/in/davidwsmiley
>>
>>
>> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya <
>> ichattopadhy...@gmail.com> wrote:
>>
>>> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>>>
>>> The artifacts can be downloaded from:
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>>>
>>> You can run the smoke tester directly with this command:
>>>
>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>>>
>>> Here's my +1
>>> SUCCESS! [0:46:38.948020]
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>> --
*Best regards,*
*Cao Mạnh Đạt*


*D.O.B: 31-07-1991 | Cell: (+84) 946.328.329 | E-mail: caomanhdat...@gmail.com*


RE: 8.0 jobs disabled on ASF Jenkins

2019-05-08 Thread Cassandra Targett
No problem Uwe.

Thanks Steve for copying over the 8.1 job. Hopefully we can disable/delete it 
again very soon.

Cassandra
On May 8, 2019, 11:44 AM -0500, Uwe Schindler , wrote:
> Hi,
>
> That was me. I was not aware that the Ref Guide was not yet released. I 
> renamed the 8.0 jobs to 8.1 - and this included the ref guide!
>
> Uwe
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Steve Rowe 
> > Sent: Wednesday, May 8, 2019 6:41 PM
> > To: Lucene Dev 
> > Subject: Re: 8.0 jobs disabled on ASF Jenkins
> >
> > I don't recall deleting the job, but maybe I did? Anyway, I restored it by
> > cloning from the 8.1 job and switching the branch name to branch_8_0, then
> > manually kicked off the first run, which has succeeded.
> >
> > --
> > Steve
> >
> > > On May 8, 2019, at 7:03 PM, Cassandra Targett 
> > wrote:
> > >
> > > Someone appears to have deleted the 8.0 Ref Guide job. Does anyone
> > recall doing that (maybe I missed an announcement)?
> > >
> > > Since the 8.0 Ref Guide isn’t out yet, I’d like it back but have no 
> > > ability to
> > recreate it, and am not sure how it’s set up anyway.
> > >
> > > Thanks,
> > > Cassandra
> > > On Mar 19, 2019, 8:52 AM -0500, Uwe Schindler ,
> > wrote:
> > > > I did the same for the Policeman Jenkins last weekend when I updated JDK
> > versions.
> > > >
> > > > Uwe
> > > >
> > > > -
> > > > Uwe Schindler
> > > > Achterdiek 19, D-28357 Bremen
> > > > http://www.thetaphi.de
> > > > eMail: u...@thetaphi.de
> > > >
> > > > > -Original Message-
> > > > > From: Adrien Grand 
> > > > > Sent: Tuesday, March 19, 2019 1:55 PM
> > > > > To: Lucene Dev 
> > > > > Subject: 8.0 jobs disabled on ASF Jenkins
> > > > >
> > > > > FYI I disabled 8.0 jobs on ASF Jenkins except the one about the 
> > > > > reference
> > > > > guide.
> > > > >
> > > > > --
> > > > > Adrien
> > > > >
> > > > > -
> > > > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > > > > For additional commands, e-mail: dev-h...@lucene.apache.org
> > > >
> > > >
> > > > -
> > > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > > > For additional commands, e-mail: dev-h...@lucene.apache.org
> > > >
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


Re: Lucene/Solr 7.7.2

2019-05-08 Thread Jan Høydahl
Yes please do!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 8 May 2019, at 18:25, Ishan Chattopadhyaya wrote:
> 
> I would like to backport SOLR-13410, as without this the ADDROLE of
> "overseer" is effectively broken. Please let me know if that is fine.
> 
> On Sat, May 4, 2019 at 2:22 AM Jan Høydahl  wrote:
>> 
>> Sure, go ahead!
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>> On 3 May 2019, at 17:53, Andrzej Białecki wrote:
>> 
>> Hi,
>> 
>> I would like to back-port the recent changes in the re-opened SOLR-12833, 
>> since the increased memory consumption adversely affects existing 7x users.
>> 
>> On 3 May 2019, at 10:38, Jan Høydahl  wrote:
>> 
>> To not confuse two releases at the same time, I'll delay the first 7.7.2 RC 
>> until after a successful 8.1 vote.
>> Uwe, can you re-enable the Jenkins 7.7 jobs to make sure we have a healthy 
>> branch_7_7?
>> Feel free to push important bug fixes to the branch in the meantime, 
>> announcing them in this thread.
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>> On 30 April 2019, at 18:19, Ishan Chattopadhyaya wrote:
>> 
>> +1 Jan for May 7th.
>> Hopefully, 8.1 would be already out by then (or close to being there).
>> 
>> On Tue, Apr 30, 2019 at 1:33 PM Bram Van Dam  wrote:
>> 
>> 
>> On 29/04/2019 23:33, Jan Høydahl wrote:
>> 
>> I'll vounteer as RM for 7.7.2 and aim at first RC on Tuesday May 7th
>> 
>> 
>> Thank you!
>> 
>> 
>> 
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Updated] (SOLR-13439) Make collection properties easier and safer to use in code

2019-05-08 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-13439:

Attachment: SOLR-13439.patch

> Make collection properties easier and safer to use in code
> --
>
> Key: SOLR-13439
> URL: https://issues.apache.org/jira/browse/SOLR-13439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13439.patch, SOLR-13439.patch
>
>
> (breaking this out from SOLR-13420, please read there for further background)
> Before this patch the api is quite confusing (IMHO):
>  # any code that wanted to know what the properties for a collection are 
> could call zkStateReader.getCollectionProperties(collection) but this was a 
> dangerous and trappy API because that was a query to zookeeper every time. If 
> a naive user auto-completed that in their IDE without investigating, heavy 
> use of zookeeper would ensue.
>  # To "do it right" for any code that might get called on a per-doc or per 
> request basis one had to cause caching by registering a watcher. At which 
> point the getCollectionProperties(collection) magically becomes safe to use, 
> but the watcher pattern probably looks familiar and induces a user who hasn't 
> read the solr code closely to create their own cache and update it when their 
> watcher is notified. If the caching side effect of watches isn't understood 
> this will lead to many in-memory copies of collection properties maintained 
> in user code.
>  # This also creates a task to be scheduled on a thread (PropsNotification) 
> and induces an extra thread-scheduling lag before the changes can be observed 
> by user code.
>  # The code that cares about collection properties needs to have a lifecycle 
> tied to either a collection or some other object with an even more 
> ephemeral life cycle such as an URP. The user now also has to remember to 
> ensure the watch is unregistered, or there is a leak.
> After this patch
>  # Calls to getCollectionProperties(collection) are always safe to use in any 
> code anywhere. Caching and cleanup are automatic.
>  # Code that really actually wants to know if a collection property changes 
> so it can wake up and do something (autoscaling?) still has the option of 
> registering a watcher that will asynchronously send them a notification.
>  # Updates can be observed sooner via getCollectionProperties with no need to 
> wait for a thread to run. (vs a cache held in user code)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13439) Make collection properties easier and safer to use in code

2019-05-08 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835835#comment-16835835
 ] 

Gus Heck commented on SOLR-13439:
-

Patch fixing the cache predicate to use {{collectionPropsWatches}} so that no 
further reviewers will be confused by that. A unit test for the conditional 
expiring cache is still pending.

> Make collection properties easier and safer to use in code
> --
>
> Key: SOLR-13439
> URL: https://issues.apache.org/jira/browse/SOLR-13439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13439.patch, SOLR-13439.patch
>
>
> (breaking this out from SOLR-13420, please read there for further background)
> Before this patch the api is quite confusing (IMHO):
>  # any code that wanted to know what the properties for a collection are 
> could call zkStateReader.getCollectionProperties(collection) but this was a 
> dangerous and trappy API because that was a query to zookeeper every time. If 
> a naive user auto-completed that in their IDE without investigating, heavy 
> use of zookeeper would ensue.
>  # To "do it right" for any code that might get called on a per-doc or per 
> request basis one had to cause caching by registering a watcher. At which 
> point the getCollectionProperties(collection) magically becomes safe to use, 
> but the watcher pattern probably looks familiar and induces a user who hasn't 
> read the solr code closely to create their own cache and update it when their 
> watcher is notified. If the caching side effect of watches isn't understood 
> this will lead to many in-memory copies of collection properties maintained 
> in user code.
>  # This also creates a task to be scheduled on a thread (PropsNotification) 
> and induces an extra thread-scheduling lag before the changes can be observed 
> by user code.
>  # The code that cares about collection properties needs to have a lifecycle 
> tied to either a collection or some other object with an even more 
> ephemeral life cycle such as an URP. The user now also has to remember to 
> ensure the watch is unregistered, or there is a leak.
> After this patch
>  # Calls to getCollectionProperties(collection) are always safe to use in any 
> code anywhere. Caching and cleanup are automatic.
>  # Code that really actually wants to know if a collection property changes 
> so it can wake up and do something (autoscaling?) still has the option of 
> registering a watcher that will asynchronously send them a notification.
>  # Updates can be observed sooner via getCollectionProperties with no need to 
> wait for a thread to run. (vs a cache held in user code)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835804#comment-16835804
 ] 

ASF subversion and git services commented on SOLR-12833:


Commit 865dbdde1e3f8720a0cb19ad25e591a599614bcd in lucene-solr's branch 
refs/heads/branch_8_1 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=865dbdd ]

SOLR-12833: prevent NPE in DistributedUpdateProcessorTest AfterClass when 
mockito assumption fails in BeforeClass

(cherry picked from commit cde00b9a84f3d57252d34daaa77f2b56cf9802cb)


> Use timed-out lock in DistributedUpdateProcessor
> 
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, 8.0
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 7.7, 8.0, 8.1
>
> Attachments: SOLR-12833-noint.patch, SOLR-12833.patch, 
> SOLR-12833.patch, threadDump.txt
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a synchronized block that blocks other update requests whose IDs fall 
> in the same hash bucket. The update waits forever until it gets the lock at 
> the synchronized block, which can be a problem in some cases.
>  
> Some add/update requests (for example updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which would make the request 
> time out and fail.
> Clients may retry the same requests multiple times or for several minutes, 
> which would make things worse.
> The server side receives all the update requests, but all except one can do 
> nothing and have to wait. This wastes precious memory and CPU resources.
> We have seen cases where 2000+ threads are blocked at the synchronized lock 
> while only a few updates are making progress. Each thread takes 3+ MB of 
> memory, which causes OOMs.
> Also, if the update can't get the lock in the expected time range, it's 
> better to fail fast.
>  
> We can have one configuration in solrconfig.xml: 
> updateHandler/versionLock/timeInMill, so users can specify how long they want 
> to wait for the version bucket lock.
> The default value can be -1, so it behaves the same as today - waiting 
> forever until it gets the lock.
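
For illustration of the general approach described above (a sketch only, not 
the actual Solr patch): with java.util.concurrent a timed lock acquisition 
could look like the following, where a negative timeout preserves the current 
wait-forever behaviour.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

void runWithBucketLock(ReentrantLock bucketLock, long timeoutMs, Runnable update)
    throws InterruptedException {
  if (timeoutMs < 0) {
    bucketLock.lock();                 // current behaviour: wait forever
  } else if (!bucketLock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
    // fail fast instead of piling up blocked threads
    throw new IllegalStateException("version bucket lock not acquired in " + timeoutMs + " ms");
  }
  try {
    update.run();                      // apply the add/update while holding the lock
  } finally {
    bucketLock.unlock();
  }
}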



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835805#comment-16835805
 ] 

ASF subversion and git services commented on SOLR-12833:


Commit eed96570aa19c20304daafba403980b62445b964 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=eed9657 ]

SOLR-12833: prevent NPE in DistributedUpdateProcessorTest AfterClass when 
mockito assumption fails in BeforeClass

(cherry picked from commit cde00b9a84f3d57252d34daaa77f2b56cf9802cb)


> Use timed-out lock in DistributedUpdateProcessor
> 
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, 8.0
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 7.7, 8.0, 8.1
>
> Attachments: SOLR-12833-noint.patch, SOLR-12833.patch, 
> SOLR-12833.patch, threadDump.txt
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a synchronized block that blocks other update requests whose IDs fall 
> in the same hash bucket. The update waits forever until it gets the lock at 
> the synchronized block, which can be a problem in some cases.
>  
> Some add/update requests (for example updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which would make the request 
> time out and fail.
> Clients may retry the same requests multiple times or for several minutes, 
> which would make things worse.
> The server side receives all the update requests, but all except one can do 
> nothing and have to wait. This wastes precious memory and CPU resources.
> We have seen cases where 2000+ threads are blocked at the synchronized lock 
> while only a few updates are making progress. Each thread takes 3+ MB of 
> memory, which causes OOMs.
> Also, if the update can't get the lock in the expected time range, it's 
> better to fail fast.
>  
> We can have one configuration in solrconfig.xml: 
> updateHandler/versionLock/timeInMill, so users can specify how long they want 
> to wait for the version bucket lock.
> The default value can be -1, so it behaves the same as today - waiting 
> forever until it gets the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835806#comment-16835806
 ] 

ASF subversion and git services commented on SOLR-12833:


Commit cde00b9a84f3d57252d34daaa77f2b56cf9802cb in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cde00b9 ]

SOLR-12833: prevent NPE in DistributedUpdateProcessorTest AfterClass when 
mockito assumption fails in BeforeClass


> Use timed-out lock in DistributedUpdateProcessor
> 
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, 8.0
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 7.7, 8.0, 8.1
>
> Attachments: SOLR-12833-noint.patch, SOLR-12833.patch, 
> SOLR-12833.patch, threadDump.txt
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a synchronized block that blocks other update requests whose IDs fall 
> in the same hash bucket. The update waits forever until it gets the lock at 
> the synchronized block, which can be a problem in some cases.
>  
> Some add/update requests (for example updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which would make the request 
> time out and fail.
> Clients may retry the same requests multiple times or for several minutes, 
> which would make things worse.
> The server side receives all the update requests, but all except one can do 
> nothing and have to wait. This wastes precious memory and CPU resources.
> We have seen cases where 2000+ threads are blocked at the synchronized lock 
> while only a few updates are making progress. Each thread takes 3+ MB of 
> memory, which causes OOMs.
> Also, if the update can't get the lock in the expected time range, it's 
> better to fail fast.
>  
> We can have one configuration in solrconfig.xml: 
> updateHandler/versionLock/timeInMill, so users can specify how long they want 
> to wait for the version bucket lock.
> The default value can be -1, so it behaves the same as today - waiting 
> forever until it gets the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (32bit/jdk1.8.0_201) - Build # 243 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/243/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttp2SolrClient.testReliability

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:54195/solr/collection1/select?q=*%3A*=javabin=2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:54195/solr/collection1/select?q=*%3A*=javabin=2
at 
__randomizedtesting.SeedInfo.seed([3125228B40FDD0C:C2DA8F6E15690CA5]:0)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:406)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:739)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:605)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:581)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.client.solrj.TestLBHttp2SolrClient.testReliability(TestLBHttp2SolrClient.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Comment Edited] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835799#comment-16835799
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-8785 at 5/8/19 6:26 PM:
--

bq. This reverts commit 2af15a6e725b2c548bb8ded2ba67935ef592823d. We are 
currently releasing this branch and it's unclear if we respin. In the case of a 
respin we can backport this commit easily from the stable branch.

Please feel free to commit this to the release branch. In case of a re-spin, 
I'll pick this change up.


was (Author: ichattopadhyaya):
bq. This reverts commit 2af15a6e725b2c548bb8ded2ba67935ef592823d. We are 
currently
releasing this branch and it's unclear if we respin. In the case of a respin
we can backport this commit easily from the stable branch.

Please feel free to commit this to the release branch. In case of a re-spin, 
I'll pick this change up.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835799#comment-16835799
 ] 

Ishan Chattopadhyaya commented on LUCENE-8785:
--

bq. This reverts commit 2af15a6e725b2c548bb8ded2ba67935ef592823d. We are 
currently
releasing this branch and it's unclear if we respin. In the case of a respin
we can backport this commit easily from the stable branch.

Please feel free to commit this to the release branch. In case of a re-spin, 
I'll pick this change up.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835796#comment-16835796
 ] 

ASF subversion and git services commented on SOLR-13445:


Commit 2ec14f0323e0ed28d36ac984f79c00890b26271d in lucene-solr's branch 
refs/heads/branch_8x from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2ec14f0 ]

SOLR-13445: Fix precommit


> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory. This results in a cascading failure.
> This issue tries to solve this problem by adding:
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences on the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.
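
For illustration, a minimal SolrJ sketch of a query using the shards.preference 
parameter described above; the ZooKeeper address, collection name and the 
system property name "zone" are assumptions for the example, and the exact 
preference syntax should be checked against the final documentation.

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

static long countWithZonePreference() throws Exception {
  try (CloudSolrClient client = new CloudSolrClient.Builder(
      Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
    SolrQuery q = new SolrQuery("*:*");
    // Prefer replicas on nodes started with the same -Dzone=... value as the
    // node serving the request ("zone" is an illustrative system property name).
    q.set("shards.preference", "node.sysprop:sysprop.zone");
    return client.query("myCollection", q).getResults().getNumFound();
  }
}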



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835795#comment-16835795
 ] 

ASF subversion and git services commented on SOLR-13445:


Commit 81cfbcd0096b85d98c38dec038e2934bfaa271ca in lucene-solr's branch 
refs/heads/master from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=81cfbcd ]

SOLR-13445: Fix precommit


> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory. This results in a cascading failure.
> This issue tries to solve this problem by adding:
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences on the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re:Socket Timeouts

2019-05-08 Thread Christine Poerschke (BLOOMBERG/ LONDON)
The issue of "timeouts values from many places" resonates with me too. And 
 being configurable from solr.xml and per request handler 
particularly springs to mind.

Christine

http://lucene.apache.org/solr/guide/7_7/format-of-solr-xml.html#the-shardhandlerfactory-element

http://lucene.apache.org/solr/guide/7_7/distributed-requests.html#configuring-the-shardhandlerfactory

From: dev@lucene.apache.org At: 05/05/19 18:35:51  To: dev@lucene.apache.org
Subject: Socket Timeouts

I'm working with a client that's trying to process a lot of data (billions of 
docs) via a streaming expression, and the initial query is (not surprisingly) 
taking a long time. Lots of various types of timeouts have been cropping up and 
I've found myself thinking I solved some only to discover that the settings in 
solr.xml are far less wide-reaching than I thought initially. The present 5% 
scale cluster seems to hit one particular timeout about 50% of the time, which 
has made it particularly confusing. I'm guessing it probably depends on 
something like how busy the virtualization in Amazon is - just barely making it 
when it gets more resources and timing out if anything is starved. 

As I look around the code base I'm finding a LOT of places where timeouts on 
SolrClients and CloudSolrClients are just arbitrarily set to one-off constant 
values. The one bugging me right now is 

public abstract class SolrClientBuilder<B extends SolrClientBuilder<B>> {

  protected HttpClient httpClient;
  protected ResponseParser responseParser;
  protected Integer connectionTimeoutMillis = 15000;
  protected Integer socketTimeoutMillis = 12;

Which I am unable to change because of this code in SolrStream:

  /**
  * Opens the stream to a single Solr instance.
  **/
  public void open() throws IOException {
if(cache == null) {
  client = new HttpSolrClient.Builder(baseUrl).build();
} else {
  client = cache.getHttpSolrClient(baseUrl);
}

I need to make this particular case configurable, so that I can get results 
from a very long-running query, but I sense that there is a much wider problem 
in that we don't seem to have any organized plan for how socket timeouts are 
set/managed in the code.
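
For what it's worth, a minimal sketch of what I mean - assuming the 
SolrClientBuilder setters (withConnectionTimeout/withSocketTimeout) remain 
available where the stream builds its client, and that the values come from 
configuration rather than constants:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

// timeouts passed in from solr.xml / stream context instead of hard-coded defaults
private SolrClient openClient(String baseUrl, int connectTimeoutMs, int socketTimeoutMs) {
  return new HttpSolrClient.Builder(baseUrl)
      .withConnectionTimeout(connectTimeoutMs)
      .withSocketTimeout(socketTimeoutMs)
      .build();
}

Something along those lines in SolrStream.open() would at least let a long 
export finish, but it still leaves the bigger question of where those numbers 
should live.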

What thoughts have people had on this front? 

-Gus

-- 
http://www.needhamsoftware.com (work)
http://www.the111shift.com (play)



[jira] [Commented] (SOLR-13439) Make collection properties easier and safer to use in code

2019-05-08 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835757#comment-16835757
 ] 

Gus Heck commented on SOLR-13439:
-

Need was probably the wrong word... want :) would have been more correct. I 
prefer that since then it can be a single line and we don't have to do the 
cache lookup for an object that we are already handling.

As for consistency/frequency of access, let's consider some cases (time in 
minutes):
 # Case 1:
 ## T+0 A watch is set, cache is populated from zk
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed exactly once. This is UNCHANGED vs current
 # Case 2:
 ## T+0 A watch is set, cache is populated from zk
 ## T+15 properties are accessed with getCollectionProperties() 
 ## T+30 the watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed exactly once. This is UNCHANGED vs current
 # Case 3:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+5 a watch is set, cache is already populated, zk is not accessed
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case the data access frequency for zk is once instead of twice. 
This is LESS than current.
 # Case 4:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+10 the cache expires
 ## T+20 a watch is set, cache is populated from zk
 ## T+30 watch is unregistered
 ## T+40 cache expires
 ## In this case zookeeper is accessed twice. This is UNCHANGED vs current. 
 # Case 5:
 ## T+0 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+1 a call to getCollectionProperties() is made, cache is already 
populated, zk is not accessed
 ## T+2 a call to getCollectionProperties() is made, cache is already 
populated, zk is not accessed
 ## T+12 the cache expires
 ## T+30 a call to getCollectionProperties() is made, cache is populated from zk
 ## T+40 the cache expires
 ## In this case zookeeper is accessed twice instead of four times. This is LESS 
than current.

I will grant you that it's hard to predict when the load on zookeeper will 
be less, but in no case will it be more, so unless you mean to stress test 
zookeeper this is not much of a problem. If I've missed a case, let me know.

The existing code (without this patch) isn't really consistent either. Code 
that calls getCollectionProperties() is either fast or slow depending on 
whether or not someone else has set a watch on that collection. I think the 
current inconsistency is worse because something that was fast  due to the 
action of unrelated code (getCollectionProperties()) can become surprisingly 
slow, whereas after my patch, setting the watch may become surprisingly fast 
due to the effect of unrelated code.
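
To make the two access patterns concrete, here is a rough sketch - method names 
as on master, but treat the exact signatures as illustrative rather than 
authoritative:

import java.util.Map;
import org.apache.solr.common.cloud.CollectionPropsWatcher;
import org.apache.solr.common.cloud.ZkStateReader;

static void propsAccessPatterns(ZkStateReader reader, String collection) {
  // Pattern A: just read. With the patch this is always safe to call; the value
  // comes from the time-limited cache and only a cache miss goes to ZooKeeper.
  Map<String, String> props = reader.getCollectionProperties(collection);
  System.out.println(props.getOrDefault("someProperty", "default"));

  // Pattern B: register a watcher only when code genuinely needs to react to changes.
  CollectionPropsWatcher watcher = collectionProps -> {
    // react to the change; returning true removes the watch, false keeps it
    return false;
  };
  reader.registerCollectionPropsWatcher(collection, watcher);
  // ... and when the owning component shuts down:
  reader.removeCollectionPropsWatcher(collection, watcher);
}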

> Make collection properties easier and safer to use in code
> --
>
> Key: SOLR-13439
> URL: https://issues.apache.org/jira/browse/SOLR-13439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13439.patch
>
>
> (breaking this out from SOLR-13420, please read there for further background)
> Before this patch the api is quite confusing (IMHO):
>  # any code that wanted to know what the properties for a collection are 
> could call zkStateReader.getCollectionProperties(collection) but this was a 
> dangerous and trappy API because that was a query to zookeeper every time. If 
> a naive user auto-completed that in their IDE without investigating, heavy 
> use of zookeeper would ensue.
>  # To "do it right" for any code that might get called on a per-doc or per 
> request basis one had to cause caching by registering a watcher. At which 
> point the getCollectionProperties(collection) magically becomes safe to use, 
> but the watcher pattern probably looks familiar and induces a user who hasn't 
> read the solr code closely to create their own cache and update it when their 
> watcher is notified. If the caching side effect of watches isn't understood 
> this will lead to many in-memory copies of collection properties maintained 
> in user code.
>  # This also creates a task to be scheduled on a thread (PropsNotification) 
> and induces an extra thread-scheduling lag before the changes can be observed 
> by user code.
>  # The code that cares about collection properties needs to have a lifecycle 
> tied to either a collection or some other object with an even more 
> ephemeral life cycle such as an URP. The user now also has to remember to 
> ensure the watch is unregistered, or there is a leak.
> After this patch
>  # Calls to 

[jira] [Commented] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835756#comment-16835756
 ] 

ASF subversion and git services commented on SOLR-13445:


Commit 8a1b966165339f017e0f1afb736b0afb939a0510 in lucene-solr's branch 
refs/heads/branch_8x from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8a1b966 ]

SOLR-13445: Preferred replicas on nodes with same system properties as the 
query master


> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory. This results in a cascading failure.
> This issue tries to solve this problem by adding:
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences on the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-13445.
-
   Resolution: Fixed
Fix Version/s: 8.2
   master (9.0)

> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory. This results in a cascading failure.
> This issue tries to solve this problem by adding:
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences on the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835754#comment-16835754
 ] 

ASF subversion and git services commented on SOLR-13445:


Commit 6b5b74bc9c9576913a5124eec138938e09037dad in lucene-solr's branch 
refs/heads/master from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6b5b74b ]

SOLR-13445: Preferred replicas on nodes with same system properties as the 
query master


> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory. This results in a cascading failure.
> This issue tries to solve this problem by adding:
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences on the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-08 Thread Kevin Risden
+1 SUCCESS! [1:15:45.039228]

Kevin Risden


On Wed, May 8, 2019 at 11:12 AM David Smiley 
wrote:

> +1
> SUCCESS! [1:29:43.016321]
>
> Thanks for doing the release Ishan!
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>>
>> You can run the smoke tester directly with this command:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>>
>> Here's my +1
>> SUCCESS! [0:46:38.948020]
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


RE: 8.0 jobs disabled on ASF Jenkins

2019-05-08 Thread Uwe Schindler
Hi,

That was me. I was not aware that the Ref Guide was not yet released. I renamed 
the 8.0 jobs to 8.1 - and this included the ref guide!

Uwe

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Steve Rowe 
> Sent: Wednesday, May 8, 2019 6:41 PM
> To: Lucene Dev 
> Subject: Re: 8.0 jobs disabled on ASF Jenkins
> 
> I don't recall deleting the job, but maybe I did?  Anyway, I restored it by
> cloning from the 8.1 job and switching the branch name to branch_8_0, then
> manually kicked off the first run, which has succeeded.
> 
> --
> Steve
> 
> > On May 8, 2019, at 7:03 PM, Cassandra Targett 
> wrote:
> >
> > Someone appears to have deleted the 8.0 Ref Guide job. Does anyone
> recall doing that (maybe I missed an announcement)?
> >
> > Since the 8.0 Ref Guide isn’t out yet, I’d like it back but have no ability 
> > to
> recreate it, and am not sure how it’s set up anyway.
> >
> > Thanks,
> > Cassandra
> > On Mar 19, 2019, 8:52 AM -0500, Uwe Schindler ,
> wrote:
> >> I did the same for the Policeman Jenkins last weekend when I updated JDK
> versions.
> >>
> >> Uwe
> >>
> >> -
> >> Uwe Schindler
> >> Achterdiek 19, D-28357 Bremen
> >> http://www.thetaphi.de
> >> eMail: u...@thetaphi.de
> >>
> >>> -Original Message-
> >>> From: Adrien Grand 
> >>> Sent: Tuesday, March 19, 2019 1:55 PM
> >>> To: Lucene Dev 
> >>> Subject: 8.0 jobs disabled on ASF Jenkins
> >>>
> >>> FYI I disabled 8.0 jobs on ASF Jenkins except the one about the reference
> >>> guide.
> >>>
> >>> --
> >>> Adrien
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 8.0 jobs disabled on ASF Jenkins

2019-05-08 Thread Steve Rowe
I don't recall deleting the job, but maybe I did?  Anyway, I restored it by 
cloning from the 8.1 job and switching the branch name to branch_8_0, then 
manually kicked off the first run, which has succeeded.

--
Steve

> On May 8, 2019, at 7:03 PM, Cassandra Targett  wrote:
> 
> Someone appears to have deleted the 8.0 Ref Guide job. Does anyone recall 
> doing that (maybe I missed an announcement)? 
> 
> Since the 8.0 Ref Guide isn’t out yet, I’d like it back but have no ability 
> to recreate it, and am not sure how it’s set up anyway.
> 
> Thanks,
> Cassandra
> On Mar 19, 2019, 8:52 AM -0500, Uwe Schindler , wrote:
>> I did the same for the Policeman Jenkins last weekend when I updated JDK 
>> versions.
>> 
>> Uwe
>> 
>> -
>> Uwe Schindler
>> Achterdiek 19, D-28357 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>> 
>>> -Original Message-
>>> From: Adrien Grand 
>>> Sent: Tuesday, March 19, 2019 1:55 PM
>>> To: Lucene Dev 
>>> Subject: 8.0 jobs disabled on ASF Jenkins
>>> 
>>> FYI I disabled 8.0 jobs on ASF Jenkins except the one about the reference
>>> guide.
>>> 
>>> --
>>> Adrien
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.7.2

2019-05-08 Thread Ishan Chattopadhyaya
I would like to backport SOLR-13410, as without this the ADDROLE of
"overseer" is effectively broken. Please let me know if that is fine.

On Sat, May 4, 2019 at 2:22 AM Jan Høydahl  wrote:
>
> Sure, go ahead!
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 3. mai 2019 kl. 17:53 skrev Andrzej Białecki 
> :
>
> Hi,
>
> I would like to back-port the recent changes in the re-opened SOLR-12833, 
> since the increased memory consumption adversely affects existing 7x users.
>
> On 3 May 2019, at 10:38, Jan Høydahl  wrote:
>
> To not confuse two releases at the same time, I'll delay the first 7.7.2 RC 
> until after a successful 8.1 vote.
> Uwe, can you re-enable the Jenkins 7.7 jobs to make sure we have a healthy 
> branch_7_7?
> Feel free to push important bug fixes to the branch in the meantime, 
> announcing them in this thread.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 30. apr. 2019 kl. 18:19 skrev Ishan Chattopadhyaya 
> :
>
> +1 Jan for May 7th.
> Hopefully, 8.1 would be already out by then (or close to being there).
>
> On Tue, Apr 30, 2019 at 1:33 PM Bram Van Dam  wrote:
>
>
> On 29/04/2019 23:33, Jan Høydahl wrote:
>
> I'll vounteer as RM for 7.7.2 and aim at first RC on Tuesday May 7th
>
>
> Thank you!
>
>
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13306) Add a request parameter to execute a streaming expression locally

2019-05-08 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck resolved SOLR-13306.
-
   Resolution: Fixed
Fix Version/s: 8.2
   master (9.0)

Added a small section in docs too.

> Add a request parameter to execute a streaming expression locally
> -
>
> Key: SOLR-13306
> URL: https://issues.apache.org/jira/browse/SOLR-13306
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.0
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13306.patch
>
>
> In some cases it is known (due to routing configuration) that all documents 
> required for a streaming expression are co-located on the same server. In 
> this case it is inefficient to send JSON over the wire, and it would be more 
> efficient to issue the same expression to N servers, thereby saving transport 
> and merge costs. Details, Patch and example to follow
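
As a rough illustration of how a client could ask for local execution once this is in
place (the parameter name {{streamLocalOnly}} below is only a placeholder assumption;
the actual parameter name in the committed patch may differ), a SolrJ sketch:

{code:java}
// Sketch only: "streamLocalOnly" is a placeholder for the new request parameter.
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class LocalStreamSketch {
  public static void main(String[] args) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("expr",
        "search(myCollection, q=\"*:*\", fl=\"id\", sort=\"id asc\", qt=\"/export\")");
    // Ask this node to evaluate the expression against its local replicas only,
    // instead of fanning the inner search out over the wire.
    params.set("streamLocalOnly", "true");
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      GenericSolrRequest req = new GenericSolrRequest(SolrRequest.METHOD.POST, "/stream", params);
      System.out.println(client.request(req));
    }
  }
}
{code}

The same request would then be issued to each of the N servers hosting the co-located
documents, and the client merges the per-node results itself.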



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13306) Add a request parameter to execute a streaming expression locally

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835726#comment-16835726
 ] 

ASF subversion and git services commented on SOLR-13306:


Commit 76b854cb4fce759c2b312a16126c4c0be0f7086a in lucene-solr's branch 
refs/heads/master from Gus Heck
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=76b854c ]

SOLR-13306 Add a request parameter to execute a streaming expression locally


> Add a request parameter to execute a streaming expression locally
> -
>
> Key: SOLR-13306
> URL: https://issues.apache.org/jira/browse/SOLR-13306
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.0
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Attachments: SOLR-13306.patch
>
>
> In some cases it is known (due to routing configuration) that all documents 
> required for a streaming expression are co-located on the same server. In 
> this case it is inefficient to send JSON over the wire, and it would be more 
> efficient to issue the same expression to N servers, thereby saving transport 
> and merge costs. Details, Patch and example to follow



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: 8.0 jobs disabled on ASF Jenkins

2019-05-08 Thread Cassandra Targett
Someone appears to have deleted the 8.0 Ref Guide job. Does anyone recall doing 
that (maybe I missed an announcement)?

Since the 8.0 Ref Guide isn’t out yet, I’d like it back but have no ability to 
recreate it, and am not sure how it’s set up anyway.

Thanks,
Cassandra
On Mar 19, 2019, 8:52 AM -0500, Uwe Schindler , wrote:
> I did the same for the Policeman Jenkins last weekend when I updated JDK 
> versions.
>
> Uwe
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Adrien Grand 
> > Sent: Tuesday, March 19, 2019 1:55 PM
> > To: Lucene Dev 
> > Subject: 8.0 jobs disabled on ASF Jenkins
> >
> > FYI I disabled 8.0 jobs on ASF Jenkins except the one about the reference
> > guide.
> >
> > --
> > Adrien
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


[JENKINS] Lucene-Solr-8.1-Windows (64bit/jdk-10.0.1) - Build # 95 - Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.1-Windows/95/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest

Error Message:
committed = 143196160 should be < max = 143130624

Stack Trace:
java.lang.IllegalArgumentException: committed = 143196160 should be < max = 
143130624
at __randomizedtesting.SeedInfo.seed([883D07D7349D74DC]:0)
at 
java.management/java.lang.management.MemoryUsage.(MemoryUsage.java:166)
at 
java.management/sun.management.MemoryPoolImpl.getCollectionUsage0(Native Method)
at 
java.management/sun.management.MemoryPoolImpl.getCollectionUsage(MemoryPoolImpl.java:266)
at 
com.codahale.metrics.jvm.MemoryUsageGaugeSet.getMetrics(MemoryUsageGaugeSet.java:96)
at 
org.apache.solr.metrics.SolrMetricManager.registerAll(SolrMetricManager.java:516)
at 
org.apache.solr.cloud.autoscaling.sim.SimCloudManager.(SimCloudManager.java:197)
at 
org.apache.solr.cloud.autoscaling.sim.SimCloudManager.createCluster(SimCloudManager.java:277)
at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.setupCluster(ScheduledMaintenanceTriggerTest.java:78)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13573 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
   [junit4]   2> 708251 INFO  
(SUITE-ScheduledMaintenanceTriggerTest-seed#[883D07D7349D74DC]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-8.1-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest_883D07D7349D74DC-001\init-core-data-001
   [junit4]   2> 708253 WARN  
(SUITE-ScheduledMaintenanceTriggerTest-seed#[883D07D7349D74DC]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=4 numCloses=4
   [junit4]   2> 708253 INFO  
(SUITE-ScheduledMaintenanceTriggerTest-seed#[883D07D7349D74DC]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 708254 INFO  
(SUITE-ScheduledMaintenanceTriggerTest-seed#[883D07D7349D74DC]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 708255 

Re: Writing Unit test for SOLR Issue

2019-05-08 Thread David Smiley
Absolutely; there are plenty of PR based contributions.  Just reference the
exact issue in the PR title, e.g. SOLR-13331

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Fri, May 3, 2019 at 4:18 AM Thomas Wöckinger 
wrote:

> Now that all Apache repos have moved to GitHub, is it possible to do a pull
> request?
>
> I don't want to crunch all changes into one commit.
>
> Best,
>
> Tom
>
> On Thu, May 2, 2019 at 3:48 PM Thomas Wöckinger <
> thomas.woeckin...@gmail.com> wrote:
>
>> In the middle of writing a test base, I realized that the
>> SolrJettyTestBase is using an EmbeddedSolrServer as its fallback.
>>
>> So two questions:
>>
>> 1.: Should I introduce a new EmbeddedSolrServerTestBase to clarify which
>> implementation is used behind the scenes?
>> 2.: I think it is a good idea to refactor EmbeddedSolrServer to use
>> different codecs besides JavaBinCodec?
>>
>> Best,
>>
>> Tom
>>
>>
>> On Thu, May 2, 2019 at 1:01 PM Thomas Wöckinger <
>> thomas.woeckin...@gmail.com> wrote:
>>
>>> Ok, thx for your fast response, I will start now; hope to be finished in
>>> two days.
>>>
>>> Best,
>>>
>>> Tom
>>>
>>> On Thu, May 2, 2019 at 12:59 PM Jason Gerlowski 
>>> wrote:
>>>
 Hi Thomas,

 No need to open a new issue.  I added a few tests when I committed a
 fix on SOLR-13331 a few weeks back.  I backported this change and it
 should be fixed in 7.7.2 when that is released.

 The project could always use additional tests though, so if you want
 to add an EmbeddedSolrServer test, just upload a patch to SOLR-13331,
 and I'll take a look.  (Might help if you tag me in your post there).

 Best,

 Jason

 On Thu, May 2, 2019 at 5:36 AM Thomas Wöckinger
  wrote:
 >
 > As I already commented on the issue SOLR-13331, I am starting to write
 tests on this issue using EmbeddedSolrServer; I didn't have time the last few
 weeks. Should I open a new issue for this?
 >
 > I think backporting this to 7.x is also a good idea!
 >
 > On Tue, Mar 26, 2019 at 3:46 PM Jason Gerlowski <
 gerlowsk...@gmail.com> wrote:
 >>
 >> I'm only passingly familiar with EmbeddedSolrServer.  But if you can
 >> reproduce the problem using EmbeddedSolrServer, then that'd be a
 great
 >> place to start for a test.  If you aren't able to reproduce the
 >> problem with EmbeddedSolrServer though, you'll probably need to use
 >> HttpSolrClient and one of the other test bases.  Other test base
 >> options are RestTestBase or SolrJettyTestBase (see SolrJ's SchemaTest
 >> or TestBatchUpdate for examples of each of these.)
 >>
 >> On Tue, Mar 26, 2019 at 10:03 AM Thomas Wöckinger
 >>  wrote:
 >> >
 >> > I know SolrJ pretty well, so should I write the test against
 EmbeddedSolrServer, or is there a different base class for such tests?
 >> >
 >> > On Tue, Mar 26, 2019 at 2:55 PM Jason Gerlowski <
 gerlowsk...@gmail.com> wrote:
 >> >>
 >> >> Hi Thomas,
 >> >>
 >> >> I see what you mean; the utilities used by that test as-is rely on
 >> >> XML.  If you want to send the atomic-update via Javabin, the best
 >> >> option is probably to write a small testcase using SolrJ. Javabin
 is
 >> >> the default wire format in SolrJ, so it should do what you want.
 >> >>
 >> >> If you haven't used SolrJ much before, then this should give you a
 >> >> good overview:
 https://lucene.apache.org/solr/guide/7_7/using-solrj.html.
 >> >> As far as performing atomic-updates specifically, Yonik has an
 example
 >> >> on his blog post here that does an atomic update in SolrJ:
 >> >> http://yonik.com/solr/atomic-updates/ . Hopefully those two are
 enough
 >> >> to get you started.
 >> >>
 >> >> Lastly, I'll assign SOLR-13331 to myself and can help you with
 review
 >> >> once you take a first crack at a test.  Feel free to bring up any
 >> >> other questions or places where you get stuck on the JIRA.  (I'm
 more
 >> >> likely to see the notifications over there once I assign myself.)
 >> >>
 >> >> Best,
 >> >>
 >> >> Jason
 >> >>
 >> >> On Tue, Mar 26, 2019 at 7:30 AM Thomas Wöckinger
 >> >>  wrote:
 >> >> >
 >> >> > Following problem:
 >> >> >
 >> >> > TestHarness is using XMLLoader to test the whole test case, so
 it is not possible to test with ByteArrayUtf8CharSequence because it will
 be converted to String before.
 >> >> >
 >> >> > Can you guide me on creating a TestHarness which uses
 JavaBinCodec?
 >> >> >
 >> >> > Thx Tom
 >> >> >
 >> >> > On Mon, Mar 25, 2019 at 8:42 PM Erick Erickson <
 erickerick...@gmail.com> wrote:
 >> >> >>
 >> >> >> Take a look at
 …/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdatesTest.java
 >> >> >>
 >> >> >> If you don’t want 

Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-08 Thread David Smiley
+1
SUCCESS! [1:29:43.016321]

Thanks for doing the release Ishan!

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> Here's my +1
> SUCCESS! [0:46:38.948020]
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS-MAVEN] Lucene-Solr-Maven-master #2558: POMs out of sync

2019-05-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2558/

No tests ran.

Build Log:
[...truncated 18105 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:673: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:408:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:1648:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:581:
 Error deploying artifact 'org.apache.lucene:lucene-test-framework:jar': Error 
deploying artifact: Error transferring file

Total time: 9 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-13-ea+18) - Build # 7929 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7929/
Java: 64bit/jdk-13-ea+18 -XX:-UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DistributedUpdateProcessorTest

Error Message:
SOLR-11606: ByteBuddy used by Mockito is not working with this JVM version.

Stack Trace:
org.junit.AssumptionViolatedException: SOLR-11606: ByteBuddy used by Mockito is 
not working with this JVM version.
at __randomizedtesting.SeedInfo.seed([FADF99DF852A745]:0)
at 
com.carrotsearch.randomizedtesting.RandomizedTest.assumeNoException(RandomizedTest.java:742)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:376)
at 
org.apache.solr.update.processor.DistributedUpdateProcessorTest.beforeClass(DistributedUpdateProcessorTest.java:60)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.IllegalArgumentException: Unknown Java version: 13
at 
net.bytebuddy.ClassFileVersion.ofJavaVersion(ClassFileVersion.java:210)
at 
net.bytebuddy.ClassFileVersion$VersionLocator$ForJava9CapableVm.locate(ClassFileVersion.java:462)
at net.bytebuddy.ClassFileVersion.ofThisVm(ClassFileVersion.java:223)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:374)
... 24 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DistributedUpdateProcessorTest

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([FADF99DF852A745]:0)
at 
org.apache.solr.update.processor.DistributedUpdateProcessorTest.AfterClass(DistributedUpdateProcessorTest.java:68)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 

[JENKINS-EA] Lucene-Solr-8.1-Linux (64bit/jdk-13-ea+18) - Build # 300 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.1-Linux/300/
Java: 64bit/jdk-13-ea+18 -XX:+UseCompressedOops -XX:+UseSerialGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DistributedUpdateProcessorTest

Error Message:
SOLR-11606: ByteBuddy used by Mockito is not working with this JVM version.

Stack Trace:
org.junit.AssumptionViolatedException: SOLR-11606: ByteBuddy used by Mockito is 
not working with this JVM version.
at __randomizedtesting.SeedInfo.seed([73ACF037460F0152]:0)
at 
com.carrotsearch.randomizedtesting.RandomizedTest.assumeNoException(RandomizedTest.java:742)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:374)
at 
org.apache.solr.update.processor.DistributedUpdateProcessorTest.beforeClass(DistributedUpdateProcessorTest.java:60)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.IllegalArgumentException: Unknown Java version: 13
at 
net.bytebuddy.ClassFileVersion.ofJavaVersion(ClassFileVersion.java:210)
at 
net.bytebuddy.ClassFileVersion$VersionLocator$ForJava9CapableVm.locate(ClassFileVersion.java:462)
at net.bytebuddy.ClassFileVersion.ofThisVm(ClassFileVersion.java:223)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:372)
... 24 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DistributedUpdateProcessorTest

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([73ACF037460F0152]:0)
at 
org.apache.solr.update.processor.DistributedUpdateProcessorTest.AfterClass(DistributedUpdateProcessorTest.java:68)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 

[jira] [Commented] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-08 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835629#comment-16835629
 ] 

Atri Sharma commented on LUCENE-8796:
-

bq. We could potentially reduce the number of comparisons (on average) using 
the property that binary search uses the same number of comparisons to search for a 
value in arrays of length 2^n through 2^(n+1) - 1. Could we adjust the lower bound of 
the search space based on that?

Something like:


{code:java}
int lowerBound = 0;
while (bound < length && docs[bound] < target) {
  lowerBound += bound;
  bound = Math.min((bound + 1) * 2 - 1, length);
}
i = Arrays.binarySearch(docs, lowerBound, lowerBound + Math.min(bound, length), target);
{code}

This might be wrong (I have not run a bunch of tests), but gives the general 
idea.

> Use exponential search in IntArrayDocIdSet advance method
> -
>
> Key: LUCENE-8796
> URL: https://issues.apache.org/jira/browse/LUCENE-8796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> Chatting with [~jpountz] , he suggested to improve IntArrayDocIdSet by making 
> its advance method use exponential search instead of binary search. This 
> should help performance of queries including conjunctions: given that 
> ConjunctionDISI uses leap frog, it advances through doc ids in small steps, 
> hence exponential search should be faster when advancing on average compared 
> to binary search.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-08 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835628#comment-16835628
 ] 

Luca Cavanna commented on LUCENE-8796:
--

You are right, [~ysee...@gmail.com]. I will make that change and re-run the 
benchmarks.

> Use exponential search in IntArrayDocIdSet advance method
> -
>
> Key: LUCENE-8796
> URL: https://issues.apache.org/jira/browse/LUCENE-8796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> Chatting with [~jpountz] , he suggested to improve IntArrayDocIdSet by making 
> its advance method use exponential search instead of binary search. This 
> should help performance of queries including conjunctions: given that 
> ConjunctionDISI uses leap frog, it advances through doc ids in small steps, 
> hence exponential search should be faster when advancing on average compared 
> to binary search.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1843 - Still Failing

2019-05-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1843/

3 tests failed.
FAILED:  
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed

Error Message:
Expected metric minimums for prefix SECURITY./authentication/pki.: 
{failMissingCredentials=0, authenticated=12, passThrough=0, 
failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=3804171, 
failWrongCredentials=0, requestTimes=428, requests=4, errors=0}

Stack Trace:
java.lang.AssertionError: Expected metric minimums for prefix 
SECURITY./authentication/pki.: {failMissingCredentials=0, authenticated=12, 
passThrough=0, failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=3804171, 
failWrongCredentials=0, requestTimes=428, requests=4, errors=0}
at 
__randomizedtesting.SeedInfo.seed([3B1597EDEE2B1697:3D3D95838C9C8024]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:129)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertPkiAuthMetricsMinimums(SolrCloudAuthTestCase.java:74)
at 
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed(JWTAuthPluginIntegrationTest.java:173)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-08 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835613#comment-16835613
 ] 

Atri Sharma commented on LUCENE-8796:
-

+1, nice change!

A few thoughts:

1) We could potentially reduce the number of comparisons (on average) using the 
property that binary search uses the same number of comparisons to search for a 
value in arrays of length 2^n through 2^(n+1) - 1. Could we adjust the lower bound 
of the search space based on that?

2) Could we improve things here for equal values?

> Use exponential search in IntArrayDocIdSet advance method
> -
>
> Key: LUCENE-8796
> URL: https://issues.apache.org/jira/browse/LUCENE-8796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> Chatting with [~jpountz] , he suggested to improve IntArrayDocIdSet by making 
> its advance method use exponential search instead of binary search. This 
> should help performance of queries including conjunctions: given that 
> ConjunctionDISI uses leap frog, it advances through doc ids in small steps, 
> hence exponential search should be faster when advancing on average compared 
> to binary search.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13304) PreanalyzedField#createField swallows Exception

2019-05-08 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835606#comment-16835606
 ] 

Gus Heck commented on SOLR-13304:
-

Looks like I forgot to come back to this. When I went to commit it I noticed 
that it breaks a test for PreAnalyzeFieldUpdateRequestProcessor, which was 
relying on its exception-swallowing behavior. I moved the exception-ignoring 
behavior into the URP. In addition to catching exceptions and converting them 
to log messages (by design), that URP has some other slightly odd behaviors 
which may deserve adjustment, but URP-related stuff can be discussed in another 
ticket. Attaching a patch with adjustments to the URP. The only user-visible 
change is that you will now get a slightly different log message for a null 
field value vs. an exception parsing the pre-analyzed data.
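
For reference, a minimal sketch of the direction described above, with {{createField}}
surfacing the failure instead of returning null; this is an approximation written from
this description, not the attached patch:

{code:java}
// Approximation only (not the attached patch): rethrow instead of logging and
// returning null, so a malformed pre-analyzed value fails the update request.
@Override
public IndexableField createField(SchemaField field, Object value) {
  try {
    return fromString(field, String.valueOf(value));
  } catch (Exception e) {
    throw new org.apache.solr.common.SolrException(
        org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST,
        "Error parsing pre-analyzed field '" + field.getName() + "'", e);
  }
}
{code}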

> PreanalyzedField#createField swallows Exception
> ---
>
> Key: SOLR-13304
> URL: https://issues.apache.org/jira/browse/SOLR-13304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13304.patch, SOLR-13304.patch
>
>
> The following code allows one to believe that an ill formatted pre-analyzed 
> field has successfully been written unless one is actively monitoring the 
> logs: 
> {code}@Override
> public IndexableField createField(SchemaField field, Object value) {
>   IndexableField f = null;
>   try {
> f = fromString(field, String.valueOf(value));
>   } catch (Exception e) {
> log.warn("Error parsing pre-analyzed field '" + field.getName() + "'", e);
> return null;
>   }
>   return f;
> }{code}
> I believe this should throw an error just like a poorly formatted date or 
> other invalid value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13304) PreanalyzedField#createField swallows Exception

2019-05-08 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-13304:

Attachment: SOLR-13304.patch

> PreanalyzedField#createField swallows Exception
> ---
>
> Key: SOLR-13304
> URL: https://issues.apache.org/jira/browse/SOLR-13304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13304.patch, SOLR-13304.patch
>
>
> The following code allows one to believe that an ill formatted pre-analyzed 
> field has successfully been written unless one is actively monitoring the 
> logs: 
> {code}@Override
> public IndexableField createField(SchemaField field, Object value) {
>   IndexableField f = null;
>   try {
> f = fromString(field, String.valueOf(value));
>   } catch (Exception e) {
> log.warn("Error parsing pre-analyzed field '" + field.getName() + "'", e);
> return null;
>   }
>   return f;
> }{code}
> I believe this should throw an error just like a poorly formatted date or 
> other invalid value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13454) Investigate ReindexCollectionTest failures

2019-05-08 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-13454:
-

 Summary: Investigate ReindexCollectionTest failures
 Key: SOLR-13454
 URL: https://issues.apache.org/jira/browse/SOLR-13454
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Assignee: Erick Erickson


This _looks_ like it might be another example of commits not quite happening 
correctly, see SOLR-11035. The problem is I can’t get it to fail locally after 
2,000 iterations.

So I’m going to add a bit to the bandaid to allow tests to conditionally fail 
if the bandaid would have made it pass. That way we can positively detect that 
the bandaid is indeed what is masking the failure, rather than change code and hope.

This _shouldn’t_ add any noise to the Jenkins lists, as the test won’t fail in 
cases where it didn’t before.

In case people wonder what the heck I’m doing.

BTW, if we ever really understand/fix the underlying cause, we should make the 
bandaid code fail and see, then remove it if so.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-08 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835597#comment-16835597
 ] 

Yonik Seeley commented on LUCENE-8796:
--

Hmmm, that looks like it's searching the whole space each time instead of 
starting at the current point?

Presumably this:
{code}
  while(bound < length && docs[bound] < target) {
{code}
Should be something like this:
{code}
  while(i+bound < length && docs[i+bound] < target) {
{code}
And also adjust the bounds of the following binary search to match as well.
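
For illustration, a small self-contained sketch of an advance that gallops from the
current position and then binary-searches only the bracketed window; this is just the
idea spelled out under the stated assumptions ({{docs}} sorted ascending, current
position {{i}} with {{docs[i] < target}}), not the code in the attached PR:

{code:java}
import java.util.Arrays;

final class ExponentialAdvanceSketch {
  /**
   * Returns the first index >= i+1 whose value is >= target, or length if none.
   * Assumes docs[0..length) is sorted ascending and docs[i] < target.
   */
  static int advance(int[] docs, int length, int i, int target) {
    int bound = 1;
    // Gallop from the current position until the target is bracketed.
    while (i + bound < length && docs[i + bound] < target) {
      bound *= 2;
    }
    // The answer lies in (i + bound/2, i + bound]; clamp the window to length.
    int from = i + bound / 2 + 1;
    int to = Math.min(i + bound + 1, length);
    int idx = Arrays.binarySearch(docs, from, to, target);
    return idx >= 0 ? idx : -idx - 1; // insertion point == first value >= target
  }
}
{code}

Since doc ids in the set are unique, the index returned by binarySearch (or its
insertion point) is exactly the first doc >= target.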


> Use exponential search in IntArrayDocIdSet advance method
> -
>
> Key: LUCENE-8796
> URL: https://issues.apache.org/jira/browse/LUCENE-8796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> Chatting with [~jpountz] , he suggested to improve IntArrayDocIdSet by making 
> its advance method use exponential search instead of binary search. This 
> should help performance of queries including conjunctions: given that 
> ConjunctionDISI uses leap frog, it advances through doc ids in small steps, 
> hence exponential search should be faster when advancing on average compared 
> to binary search.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13428) Take the WARN message out of the logs when optimizing.

2019-05-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-13428.
---
   Resolution: Fixed
Fix Version/s: 8.2
   master (9.0)

> Take the WARN message out of the logs when optimizing.
> --
>
> Key: SOLR-13428
> URL: https://issues.apache.org/jira/browse/SOLR-13428
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13428.patch
>
>
> I think this is both unnecessary and produces needless angst. Users can't 
> get themselves into the situation where they have oversize segments any more 
> unless they take explicit action.  And since the big red "optimize" button 
> is gone, we can reasonably expect that they've at least read the ref 
> guide to even know there's an optimize option that produces oversize segments.
> Also, update the ref guide, particularly the "Index Replication" section 
> where it mentions optimization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13428) Take the WARN message out of the logs when optimizing.

2019-05-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13428:
--
Attachment: SOLR-13428.patch

> Take the WARN message out of the logs when optimizing.
> --
>
> Key: SOLR-13428
> URL: https://issues.apache.org/jira/browse/SOLR-13428
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-13428.patch
>
>
> I think this is both unnecessary and produces needless angst. Users can't 
> get themselves into the situation where they have oversize segments any more 
> unless they take explicit action.  And since the big red "optimize" button 
> is gone, we can reasonably expect that they've at least read the ref 
> guide to even know there's an optimize option that produces oversize segments.
> Also, update the ref guide, particularly the "Index Replication" section 
> where it mentions optimization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+18) - Build # 24051 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24051/
Java: 64bit/jdk-13-ea+18 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DistributedUpdateProcessorTest

Error Message:
SOLR-11606: ByteBuddy used by Mockito is not working with this JVM version.

Stack Trace:
org.junit.AssumptionViolatedException: SOLR-11606: ByteBuddy used by Mockito is 
not working with this JVM version.
at __randomizedtesting.SeedInfo.seed([F73FE42E8B1567BC]:0)
at 
com.carrotsearch.randomizedtesting.RandomizedTest.assumeNoException(RandomizedTest.java:742)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:376)
at 
org.apache.solr.update.processor.DistributedUpdateProcessorTest.beforeClass(DistributedUpdateProcessorTest.java:60)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.IllegalArgumentException: Unknown Java version: 13
at 
net.bytebuddy.ClassFileVersion.ofJavaVersion(ClassFileVersion.java:210)
at 
net.bytebuddy.ClassFileVersion$VersionLocator$ForJava9CapableVm.locate(ClassFileVersion.java:462)
at net.bytebuddy.ClassFileVersion.ofThisVm(ClassFileVersion.java:223)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
org.apache.solr.SolrTestCaseJ4.assumeWorkingMockito(SolrTestCaseJ4.java:374)
... 24 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DistributedUpdateProcessorTest

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([F73FE42E8B1567BC]:0)
at 
org.apache.solr.update.processor.DistributedUpdateProcessorTest.AfterClass(DistributedUpdateProcessorTest.java:68)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)

[jira] [Commented] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-08 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835542#comment-16835542
 ] 

Luca Cavanna commented on LUCENE-8796:
--

I have made the change and played with luceneutil to run some benchmarks. I 
opened a PR here: https://github.com/apache/lucene-solr/pull/667.

Luceneutil does not currently benchmark the queries that should be affected by 
this change, hence I added benchmarks for numeric range queries, prefix queries 
and wildcard queries in conjunction with term queries (low, medium and high 
frequency). See the changes I made to my luceneutil fork: 
[https://github.com/mikemccand/luceneutil/compare/master...javanna:conjunctions]
 .  Also, for the benchmarks I temporarily modified DocIdSetBuilder#grow to 
never call upgradeToBitSet (on both baseline and modified version), so that the 
updated code is exercised as much as possible during the benchmarks run, 
otherwise in many cases we would use bitsets instead and the changed code would 
not be exercised at all.

I ran the wikimedium10m benchmarks a few times; here is probably the run with 
the least noise. Results show a small improvement for some queries and no 
regressions in general:
 
Report after iter 19:
 Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
 WildcardConjMedTerm 75.49 (2.2%) 72.79 (2.0%) -3.6% ( -7% - 0%)
 OrHighNotMed 607.01 (5.7%) 593.10 (4.4%) -2.3% ( -11% - 8%)
 WildcardConjHighTerm 64.00 (1.7%) 62.55 (1.4%) -2.3% ( -5% - 0%)
 Fuzzy2 20.14 (3.4%) 19.72 (4.6%) -2.1% ( -9% - 6%)
 HighTerm 1174.41 (4.7%) 1150.11 (4.2%) -2.1% ( -10% - 7%)
 OrHighLow 483.40 (5.1%) 473.69 (6.9%) -2.0% ( -13% - 10%)
 OrNotHighLow 526.75 (3.6%) 516.47 (3.6%) -2.0% ( -8% - 5%)
 OrNotHighHigh 600.38 (4.9%) 590.21 (3.7%) -1.7% ( -9% - 7%)
 HighTermMonthSort 110.05 (11.7%) 108.58 (11.5%) -1.3% ( -21% - 24%)
 OrHighMed 107.83 (2.6%) 106.48 (4.7%) -1.3% ( -8% - 6%)
 PrefixConjMedTerm 56.98 (2.5%) 56.33 (1.7%) -1.1% ( -5% - 3%)
 AndHighLow 432.27 (3.6%) 427.46 (3.2%) -1.1% ( -7% - 5%)
 PrefixConjLowTerm 44.43 (2.8%) 43.98 (1.8%) -1.0% ( -5% - 3%)
 MedTerm 1409.97 (5.5%) 1396.33 (4.9%) -1.0% ( -10% - 9%)
 HighSloppyPhrase 11.98 (4.3%) 11.87 (5.1%) -0.9% ( -9% - 8%)
 OrNotHighMed 614.19 (4.6%) 608.74 (3.8%) -0.9% ( -8% - 7%)
 Respell 58.11 (2.4%) 57.61 (2.4%) -0.9% ( -5% - 3%)
 LowTerm 1342.33 (4.8%) 1330.86 (4.0%) -0.9% ( -9% - 8%)
 PrefixConjHighTerm 68.50 (2.9%) 67.93 (1.8%) -0.8% ( -5% - 3%)
 OrHighNotHigh 566.30 (5.2%) 561.88 (4.5%) -0.8% ( -9% - 9%)
 WildcardConjLowTerm 32.75 (2.5%) 32.56 (2.1%) -0.6% ( -5% - 4%)
 PKLookup 131.80 (2.4%) 131.28 (2.3%) -0.4% ( -5% - 4%)
 OrHighHigh 29.90 (3.4%) 29.79 (5.3%) -0.4% ( -8% - 8%)
 OrHighNotLow 497.65 (6.6%) 495.84 (5.2%) -0.4% ( -11% - 12%)
 AndHighMed 175.08 (3.5%) 174.58 (3.0%) -0.3% ( -6% - 6%)
 LowSpanNear 15.17 (1.8%) 15.13 (2.5%) -0.2% ( -4% - 4%)
 Fuzzy1 71.14 (5.9%) 70.97 (6.3%) -0.2% ( -11% - 12%)
 LowSloppyPhrase 35.23 (2.0%) 35.16 (2.6%) -0.2% ( -4% - 4%)
 LowPhrase 74.10 (1.7%) 73.98 (1.8%) -0.2% ( -3% - 3%)
 HighPhrase 34.18 (2.1%) 34.13 (2.0%) -0.1% ( -4% - 3%)
 Prefix3 45.33 (2.3%) 45.28 (2.1%) -0.1% ( -4% - 4%)
 MedPhrase 28.30 (2.1%) 28.27 (1.7%) -0.1% ( -3% - 3%)
 MedSloppyPhrase 6.80 (3.6%) 6.80 (3.2%) -0.0% ( -6% - 6%)
 AndHighHigh 53.79 (3.9%) 53.79 (4.0%) -0.0% ( -7% - 8%)
 MedSpanNear 61.78 (2.2%) 61.83 (1.7%) 0.1% ( -3% - 4%)
 Wildcard 37.83 (2.5%) 37.91 (1.7%) 0.2% ( -3% - 4%)
 IntNRQConjHighTerm 20.17 (3.8%) 20.24 (4.9%) 0.3% ( -8% - 9%)
 HighTermDayOfYearSort 53.55 (7.8%) 53.76 (7.3%) 0.4% ( -13% - 16%)
 HighSpanNear 5.39 (2.6%) 5.42 (2.6%) 0.5% ( -4% - 5%)
 IntNRQConjLowTerm 19.69 (4.3%) 19.86 (4.3%) 0.9% ( -7% - 9%)
 IntNRQConjMedTerm 15.93 (4.5%) 16.12 (5.4%) 1.2% ( -8% - 11%)
 IntNRQ 114.28 (10.3%) 116.41 (14.0%) 1.9% ( -20% - 29%)

 

 

> Use exponential search in IntArrayDocIdSet advance method
> -
>
> Key: LUCENE-8796
> URL: https://issues.apache.org/jira/browse/LUCENE-8796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> While chatting with [~jpountz], he suggested improving IntArrayDocIdSet by 
> making its advance method use exponential search instead of binary search. 
> This should help the performance of queries that include conjunctions: since 
> ConjunctionDISI uses leap frog, it advances through doc IDs in small steps, 
> so exponential search should on average be faster than binary search when 
> advancing.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] javanna opened a new pull request #667: Use exponential search in IntArrayDocIdSetIterator#advance

2019-05-08 Thread GitBox
javanna opened a new pull request #667: Use exponential search in 
IntArrayDocIdSetIterator#advance
URL: https://github.com/apache/lucene-solr/pull/667
 
 
   As described in https://issues.apache.org/jira/browse/LUCENE-8796 .


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-08 Thread Luca Cavanna (JIRA)
Luca Cavanna created LUCENE-8796:


 Summary: Use exponential search in IntArrayDocIdSet advance method
 Key: LUCENE-8796
 URL: https://issues.apache.org/jira/browse/LUCENE-8796
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Luca Cavanna


While chatting with [~jpountz], he suggested improving IntArrayDocIdSet by making 
its advance method use exponential search instead of binary search. This should 
help the performance of queries that include conjunctions: since ConjunctionDISI 
uses leap frog, it advances through doc IDs in small steps, so exponential search 
should on average be faster than binary search when advancing.
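To make the idea concrete, here is a minimal sketch of exponential search over a 
sorted int[] of doc IDs; it is illustrative only, not the code in the PR:

{code}
final class ExponentialSearchSketch {
  /** Returns the index of the first element >= target in docs[from..], or docs.length if none. */
  static int advance(int[] docs, int from, int target) {
    // grow the window exponentially from the current position ...
    int bound = 1;
    while (from + bound < docs.length && docs[from + bound] < target) {
      bound <<= 1;
    }
    // ... then binary-search only inside [from + bound/2, from + bound]
    int lo = from + (bound >> 1);
    int hi = Math.min(docs.length - 1, from + bound);
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      if (docs[mid] < target) {
        lo = mid + 1;
      } else {
        hi = mid - 1;
      }
    }
    return lo;
  }

  public static void main(String[] args) {
    int[] docs = {1, 4, 9, 17, 33, 60, 61, 90};
    System.out.println(advance(docs, 0, 35)); // prints 5, i.e. doc 60
  }
}
{code}

The win comes from the fact that under leap frog the target is usually close to 
the current position, so the search window stays small instead of spanning the 
whole remaining array.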

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-05-08 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835508#comment-16835508
 ] 

Atri Sharma commented on LUCENE-8757:
-

bq. Are the work units tackled in order for each query?  I.e. is the queue a 
FIFO queue?  If so, the sorting can be useful since IndexSearcher would work 
first on the hardest/slowest work units, the "long poles" for the concurrent 
search?

Yes, the leaf slices are tackled in order in IndexSearcher, i.e. threads are 
created for work units in the same order in which slices() created the work 
units. So with a sort, what you said applies: the larger work units get 
scheduled first.

> Better Segment To Thread Mapping Algorithm
> --
>
> Key: LUCENE-8757
> URL: https://issues.apache.org/jira/browse/LUCENE-8757
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8757.patch, LUCENE-8757.patch, LUCENE-8757.patch
>
>
> The current segments to threads allocation algorithm always allocates one 
> thread per segment. This is detrimental to performance in case of skew in 
> segment sizes since small segments also get their dedicated thread. This can 
> lead to performance degradation due to context switching overheads.
>  
> A better algorithm which is cognizant of size skew would have better 
> performance for realistic scenarios



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-05-08 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835502#comment-16835502
 ] 

Michael McCandless commented on LUCENE-8757:


Are the work units tackled in order for each query?  I.e. is the queue a FIFO 
queue?  If so, the sorting can be useful since {{IndexSearcher}} would work 
first on the hardest/slowest work units, the "long poles" for the concurrent 
search?

> Better Segment To Thread Mapping Algorithm
> --
>
> Key: LUCENE-8757
> URL: https://issues.apache.org/jira/browse/LUCENE-8757
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8757.patch, LUCENE-8757.patch, LUCENE-8757.patch
>
>
> The current segments to threads allocation algorithm always allocates one 
> thread per segment. This is detrimental to performance in case of skew in 
> segment sizes since small segments also get their dedicated thread. This can 
> lead to performance degradation due to context switching overheads.
>  
> A better algorithm which is cognizant of size skew would have better 
> performance for realistic scenarios



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-8.x-Linux (32bit/jdk1.8.0_201) - Build # 55 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-8.x-Linux/55/
Java: 32bit/jdk1.8.0_201 -client -XX:+UseParallelGC

11 tests failed.
FAILED:  
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed

Error Message:
Expected metric minimums for prefix SECURITY./authentication/pki.: 
{failMissingCredentials=0, authenticated=12, passThrough=0, 
failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=6147105, 
failWrongCredentials=0, requestTimes=289, requests=4, errors=0}

Stack Trace:
java.lang.AssertionError: Expected metric minimums for prefix 
SECURITY./authentication/pki.: {failMissingCredentials=0, authenticated=12, 
passThrough=0, failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=6147105, 
failWrongCredentials=0, requestTimes=289, requests=4, errors=0}
at 
__randomizedtesting.SeedInfo.seed([558B906285D13852:53A3920CE766AEE1]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:129)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertPkiAuthMetricsMinimums(SolrCloudAuthTestCase.java:74)
at 
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed(JWTAuthPluginIntegrationTest.java:173)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835500#comment-16835500
 ] 

Michael McCandless commented on LUCENE-8785:


Thank you [~simonw]!  Love how open-source works ;)  Lucene gets better.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 120 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/120/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

12 tests failed.
FAILED:  
org.apache.lucene.index.TestConcurrentMergeScheduler.testFlushExceptions

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([5ABBEB5BEE3A945D:EED1B881E7DA0269]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.lucene.index.TestConcurrentMergeScheduler.testFlushExceptions(TestConcurrentMergeScheduler.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed

Error Message:
Expected metric minimums for prefix SECURITY./authentication/pki.: 
{failMissingCredentials=0, authenticated=12, passThrough=0, 
failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=5992769, 
failWrongCredentials=0, requestTimes=337, requests=4, errors=0}

Stack Trace:
java.lang.AssertionError: Expected metric minimums for prefix 
SECURITY./authentication/pki.: {failMissingCredentials=0, authenticated=12, 
passThrough=0, 

[jira] [Commented] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-05-08 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835498#comment-16835498
 ] 

Michael McCandless commented on LUCENE-8757:


Whoa, fast iterations over here!

I think there is an important justification for the 2nd criterion (the number of 
segments in each work unit / slice). If you have an index with some large 
segments and a long tail of small segments (which easily happens if your machine 
has substantial CPU concurrency and you use multiple threads), then, since there 
is a fixed cost for visiting each segment, putting too many small segments into 
one work unit multiplies those fixed costs, and that one work unit can become 
too slow even though it's not actually going to visit many documents.

I think we should keep it?

Re: the choice of the constants – I ran some performance tests quite a while 
ago on our production data/queries and a machine with sizable concurrency 
({{i3.16xlarge}}) and found those two constants to be a sweet spot at the time.

But let's also remember: this is simply a default segment -> work units 
assignment, and expert users can always continue to override.  Good defaults 
are important ;)

> Better Segment To Thread Mapping Algorithm
> --
>
> Key: LUCENE-8757
> URL: https://issues.apache.org/jira/browse/LUCENE-8757
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8757.patch, LUCENE-8757.patch, LUCENE-8757.patch
>
>
> The current segments to threads allocation algorithm always allocates one 
> thread per segment. This is detrimental to performance in case of skew in 
> segment sizes since small segments also get their dedicated thread. This can 
> lead to performance degradation due to context switching overheads.
>  
> A better algorithm which is cognizant of size skew would have better 
> performance for realistic scenarios



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835493#comment-16835493
 ] 

ASF subversion and git services commented on SOLR-13449:


Commit 2c2d396b2961e9952cc34d3e66c4906b406f5941 in lucene-solr's branch 
refs/heads/branch_8_1 from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2c2d396 ]

SOLR-13449: SolrClientNodeStateProvider always retries on requesting metrics 
from other nodes


> SolrClientNodeStateProvider always retries on requesting metrics from other 
> nodes
> -
>
> Key: SOLR-13449
> URL: https://issues.apache.org/jira/browse/SOLR-13449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: failure.txt
>
>
> Even in the case of a successful call, SolrClientNodeStateProvider always 
> retries the metrics request. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-08 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-13449.
-
Resolution: Fixed

> SolrClientNodeStateProvider always retries on requesting metrics from other 
> nodes
> -
>
> Key: SOLR-13449
> URL: https://issues.apache.org/jira/browse/SOLR-13449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: failure.txt
>
>
> Even in the case of a successful call, SolrClientNodeStateProvider always 
> retries the metrics request. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835492#comment-16835492
 ] 

ASF subversion and git services commented on SOLR-13453:


Commit fb00a0569319dc883e407e92a5784750b852205f in lucene-solr's branch 
refs/heads/branch_8_1 from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fb00a05 ]

SOLR-13453: Marking mentioned tests as AwaitsFix


> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests for sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-05-08 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835491#comment-16835491
 ] 

Atri Sharma commented on LUCENE-8757:
-

[~simonw] The reason the sort was added was to have a consistency guarantee 
from the slicing algorithm, i.e. two queries with the exact same distribution of 
segments should get the same number of slices, irrespective of the order in 
which the segments are traversed by the method. Consider a distribution of 8 
segments where 6 segments have 10,000 documents each and two segments have 
130,000 documents each. For the below order of traversal (each value represents 
the maxDoc of a segment):

{10_000, 130_000, 10_000, 10_000, 10_000, 10_000, 10_000, 130_000}.

The slicing algorithm will create one slice consisting of all segments (since 
the last segment's addition is what causes the maxDocs limit to be breached).

 
If the segments were sorted, the order would be:

{130_000, 130_000, 10_000, 10_000, 10_000, 10_000, 10_000, 10_000}

 

This would lead to two slices being created.

Thoughts?



bq. also want to suggest to beef up testing a bit

Thanks, added the test. Will raise another iteration once the above discussion 
concludes.

 

> Better Segment To Thread Mapping Algorithm
> --
>
> Key: LUCENE-8757
> URL: https://issues.apache.org/jira/browse/LUCENE-8757
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8757.patch, LUCENE-8757.patch, LUCENE-8757.patch
>
>
> The current segments to threads allocation algorithm always allocates one 
> thread per segment. This is detrimental to performance in case of skew in 
> segment sizes since small segments also get their dedicated thread. This can 
> lead to performance degradation due to context switching overheads.
>  
> A better algorithm which is cognizant of size skew would have better 
> performance for realistic scenarios



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835489#comment-16835489
 ] 

ASF subversion and git services commented on SOLR-13453:


Commit 5a35ba41f2fa55ab52c4cf91353937f35b097b04 in lucene-solr's branch 
refs/heads/master from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5a35ba4 ]

SOLR-13453: Marking mentioned tests as AwaitsFix


> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests for sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835490#comment-16835490
 ] 

ASF subversion and git services commented on SOLR-13453:


Commit d9fbcc6b85d993400fd1138c247913876e661f14 in lucene-solr's branch 
refs/heads/branch_8x from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d9fbcc6 ]

SOLR-13453: Marking mentioned tests as AwaitsFix


> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests for sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835486#comment-16835486
 ] 

Cao Manh Dat commented on SOLR-13445:
-

Adding a minor change to the patch because of a TestHttpShardHandlerFactory failure.
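For context, a request using the new preference could be expressed roughly as 
below via SolrJ; the collection name, the ZooKeeper address and the 
{{sysprop.zone}} property are assumptions for illustration, and the exact 
parameter spelling is whatever the final patch settles on:

{code}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class SyspropPreferenceExample {
  public static void main(String[] args) throws Exception {
    // zk address and collection name are made up for this sketch
    try (CloudSolrClient client =
        new CloudSolrClient.Builder(Collections.singletonList("localhost:9983"),
            Optional.empty()).build()) {
      SolrQuery query = new SolrQuery("*:*");
      // prefer replicas whose node was started with the same -Dzone=... value
      // as the node coordinating this request
      query.set("shards.preference", "node.sysprop:sysprop.zone");
      System.out.println(client.query("techproducts", query).getResults().getNumFound());
    }
  }
}
{code}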

> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory, resulting in a cascading failure.
> This issue tries to solve this problem by adding
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences for the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-08 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-13445:

Attachment: SOLR-13445.patch

> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory, resulting in a cascading failure.
> This issue tries to solve this problem by adding
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences for the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-08 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-13453:
---

 Summary: JWTAuthPluginIntegrationTest and 
TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider 
behave nicely
 Key: SOLR-13453
 URL: https://issues.apache.org/jira/browse/SOLR-13453
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat


SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
provider stop retrying once the metrics are successfully grabbed. 
Unexpectedly, JWTAuthPluginIntegrationTest and 
TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs in 
the tests for sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-05-08 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835481#comment-16835481
 ] 

Simon Willnauer commented on LUCENE-8757:
-

Thanks for the additional iteration. Now that we simplified this, can we remove 
the sorting? I don't necessarily see how the sort makes things simpler. If we 
see a segment > threshold we can just add it as its own group? I thought you did 
that already, hence my comment about the assertion. WDYT?

I also want to suggest beefing up testing a bit with a randomized version, like 
this:
{code}
diff --git 
a/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java 
b/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
index 7c63a817adb..76ccca64ee7 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
@@ -1933,6 +1933,14 @@ public abstract class LuceneTestCase extends Assert {
 ret = random.nextBoolean()
 ? new AssertingIndexSearcher(random, r, ex)
 : new AssertingIndexSearcher(random, r.getContext(), ex);
+  } else if (random.nextBoolean()) {
+int maxDocPerSlice = 1 + random.nextInt(10);
+ret = new IndexSearcher(r, ex) {
+  @Override
+  protected LeafSlice[] slices(List leaves) {
+return slices(leaves, maxDocPerSlice);
+  }
+};
   } else {
 ret = random.nextBoolean()
 ? new IndexSearcher(r, ex)
{code}



> Better Segment To Thread Mapping Algorithm
> --
>
> Key: LUCENE-8757
> URL: https://issues.apache.org/jira/browse/LUCENE-8757
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8757.patch, LUCENE-8757.patch, LUCENE-8757.patch
>
>
> The current segments to threads allocation algorithm always allocates one 
> thread per segment. This is detrimental to performance in case of skew in 
> segment sizes since small segments also get their dedicated thread. This can 
> lead to performance degradation due to context switching overheads.
>  
> A better algorithm which is cognizant of size skew would have better 
> performance for realistic scenarios



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7840) BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 1 MUST/FILTER clause and 0==minShouldMatch

2019-05-08 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835473#comment-16835473
 ] 

Simon Willnauer commented on LUCENE-7840:
-

LGTM

> BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 
> 1 MUST/FILTER clause and 0==minShouldMatch
> ---
>
> Key: LUCENE-7840
> URL: https://issues.apache.org/jira/browse/LUCENE-7840
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-7840.patch, LUCENE-7840.patch, LUCENE-7840.patch
>
>
> I haven't thought this through completely, let alone written up a patch / test 
> case, but IIUC...
> We should be able to optimize {{BooleanQuery.rewriteNoScoring()}} so that 
> (after converting MUST clauses to FILTER clauses) we can check for the common 
> case of {{0 == getMinimumNumberShouldMatch()}} and throw away any SHOULD 
> clauses as long as there is at least one FILTER clause.
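To illustrate the equivalence being relied on (a sketch, not the proposed rewrite 
code): with {{0 == minimumNumberShouldMatch}} and at least one required clause, 
the SHOULD clause changes scores but not which documents match, so in a 
non-scoring context the two queries below select the same documents:

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class NonScoringRewriteSketch {
  // MUST + optional SHOULD: the SHOULD clause only boosts scores here
  static Query scoring() {
    return new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "lucene")), BooleanClause.Occur.MUST)
        .add(new TermQuery(new Term("body", "search")), BooleanClause.Occur.SHOULD)
        .build();
  }

  // when scores are not needed, the SHOULD clause can simply be dropped
  static Query nonScoringEquivalent() {
    return new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "lucene")), BooleanClause.Occur.FILTER)
        .build();
  }
}
{code}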



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835472#comment-16835472
 ] 

ASF subversion and git services commented on LUCENE-8785:
-

Commit bd663649a48971f0a2fc2ada82471961e5a2507b in lucene-solr's branch 
refs/heads/branch_8_1 from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bd66364 ]

Revert "LUCENE-8785: Ensure threadstates are locked before iterating (#664)"

This reverts commit 2af15a6e725b2c548bb8ded2ba67935ef592823d. We are currently
releasing this branch and it's unclear if we respin. In the case of a respin
we can backport this commit easily from the stable branch.


> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8785.
-
Resolution: Fixed

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8785:

Fix Version/s: (was: 8.0.1)
   (was: 8.1)
   (was: 7.7.1)
   8.2
   7.7.2
   8.1.1

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835467#comment-16835467
 ] 

ASF subversion and git services commented on LUCENE-8785:
-

Commit d2d3371a354e9534f09e067bfb0a690c3f2260ab in lucene-solr's branch 
refs/heads/branch_7_7 from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d2d3371 ]

LUCENE-8785: Ensure threadstates are locked before iterating (#664)

Ensure new threadstates are locked before retrieving the
number of active threadstates. This causes assertion errors
and potentially broken field attributes in the IndexWriter when
IndexWriter#deleteAll is called while actively indexing.


> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.1, 8.0.1, 8.1, master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835458#comment-16835458
 ] 

ASF subversion and git services commented on LUCENE-8785:
-

Commit 2af15a6e725b2c548bb8ded2ba67935ef592823d in lucene-solr's branch 
refs/heads/branch_8_1 from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2af15a6 ]

LUCENE-8785: Ensure threadstates are locked before iterating (#664)

Ensure new threadstates are locked before retrieving the
number of active threadstates. This causes assertion errors
and potentially broken field attributes in the IndexWriter when
IndexWriter#deleteAll is called while actively indexing.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.1, 8.0.1, 8.1, master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835454#comment-16835454
 ] 

ASF subversion and git services commented on LUCENE-8785:
-

Commit a1560f26c1c245f8e7a4377155ee8bcaa805e554 in lucene-solr's branch 
refs/heads/branch_8x from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a1560f2 ]

LUCENE-8785: Ensure threadstates are locked before iterating (#664)

Ensure new threadstates are locked before retrieving the
number of active threadstates. Failing to do so causes assertion errors
and potentially broken field attributes in the IndexWriter when
IndexWriter#deleteAll is called while actively indexing.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.1, 8.0.1, 8.1, master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8785) TestIndexWriterDelete.testDeleteAllNoDeadlock failure

2019-05-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835451#comment-16835451
 ] 

ASF subversion and git services commented on LUCENE-8785:
-

Commit e8d88a5b54b268b370f4bbc6f8a61148a62067c9 in lucene-solr's branch 
refs/heads/master from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e8d88a5 ]

LUCENE-8785: Ensure threadstates are locked before iterating (#664)

Ensure new threadstates are locked before retrieving the
number of active threadstates. Failing to do so causes assertion errors
and potentially broken field attributes in the IndexWriter when
IndexWriter#deleteAll is called while actively indexing.

> TestIndexWriterDelete.testDeleteAllNoDeadlock failure
> -
>
> Key: LUCENE-8785
> URL: https://issues.apache.org/jira/browse/LUCENE-8785
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.6
> Environment: OpenJDK 1.8.0_202
>Reporter: Michael McCandless
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 7.7.1, 8.0.1, 8.1, master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I was running Lucene's core tests on an {{i3.16xlarge}} EC2 instance (64 
> cores), and hit this random yet spooky failure:
> {noformat}
>    [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=952BE262BA547C1 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ar-YE -Dtests.timezone=Europe/Lisbon -Dtests.as\
> serts=true -Dtests.file.encoding=US-ASCII
>    [junit4] ERROR   0.16s J3 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>    [junit4]    > Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=36, name=Thread-2, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1:3A4B5138AB66FD97]:0)
>    [junit4]    > Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: field number 0 is already mapped to field 
> name "null", not "content"
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([952BE262BA547C1]:0)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
>    [junit4]    > Caused by: java.lang.IllegalArgumentException: field number 
> 0 is already mapped to field name "null", not "content"
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$FieldNumbers.verifyConsistent(FieldInfos.java:310)
>    [junit4]    >        at 
> org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:415)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
>    [junit4]    >        at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
>    [junit4]    >        at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
>    [junit4]    >        at 
> org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
>    [junit4]    >        at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
>    [junit4]    >        at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326){noformat}
> It does *not* reproduce unfortunately ... but maybe there is some subtle 
> thread safety issue in this code ... this is a hairy part of Lucene ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] s1monw merged pull request #664: LUCENE-8785: Ensure threadstates are locked before iterating

2019-05-08 Thread GitBox
s1monw merged pull request #664: LUCENE-8785: Ensure threadstates are locked 
before iterating
URL: https://github.com/apache/lucene-solr/pull/664
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 530 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/530/
Java: 64bit/jdk1.8.0_201 -XX:+UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed

Error Message:
Expected metric minimums for prefix SECURITY./authentication/pki.: 
{failMissingCredentials=0, authenticated=12, passThrough=0, 
failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=5820448, 
failWrongCredentials=0, requestTimes=415, requests=4, errors=0}

Stack Trace:
java.lang.AssertionError: Expected metric minimums for prefix 
SECURITY./authentication/pki.: {failMissingCredentials=0, authenticated=12, 
passThrough=0, failWrongCredentials=0, requests=12, errors=0}, but got: 
{failMissingCredentials=0, authenticated=4, passThrough=0, totalTime=5820448, 
failWrongCredentials=0, requestTimes=415, requests=4, errors=0}
at 
__randomizedtesting.SeedInfo.seed([445EB6C317988D14:4276B4AD752F1BA7]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:129)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertPkiAuthMetricsMinimums(SolrCloudAuthTestCase.java:74)
at 
org.apache.solr.security.JWTAuthPluginIntegrationTest.createCollectionUpdateAndQueryDistributed(JWTAuthPluginIntegrationTest.java:173)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-08 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834809#comment-16834809
 ] 

Ishan Chattopadhyaya edited comment on SOLR-13449 at 5/8/19 8:28 AM:
-

[~caomanhdat], if you're convinced that this is a test issue, please feel free 
to @Ignore the test. I'll spin the RC once you're done.


was (Author: ichattopadhyaya):
[~caomanhdat], if you're convinced that this is a test issue, please feel free 
to ignore the test. I'll spin the RC once you're done.

> SolrClientNodeStateProvider always retries on requesting metrics from other 
> nodes
> -
>
> Key: SOLR-13449
> URL: https://issues.apache.org/jira/browse/SOLR-13449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: failure.txt
>
>
> Even in the case of a successful call, SolrClientNodeStateProvider always 
> retries the request for metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-08 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835423#comment-16835423
 ] 

Ishan Chattopadhyaya commented on SOLR-13449:
-

This is causing failures in branch_8x (Build: 
https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/119/) and master (Build: 
https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/119/).

> SolrClientNodeStateProvider always retries on requesting metrics from other 
> nodes
> -
>
> Key: SOLR-13449
> URL: https://issues.apache.org/jira/browse/SOLR-13449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: failure.txt
>
>
> Even in the case of a successful call, SolrClientNodeStateProvider always 
> retries the request for metrics.
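
A purely illustrative sketch of the kind of bug described above (hypothetical names
and a plain java.util.concurrent.Callable, not the actual
SolrClientNodeStateProvider code): a retry loop that never returns early after a
successful call keeps re-issuing the same request, which is wasteful even though
the end result looks correct.

    import java.util.Map;
    import java.util.concurrent.Callable;

    final class MetricsFetcher {

      // Buggy variant: even when doRequest succeeds, the loop keeps going and
      // the metrics request is sent again on every remaining iteration.
      static Map<String, Object> fetchBuggy(Callable<Map<String, Object>> doRequest,
                                            int maxRetries) throws Exception {
        Map<String, Object> result = null;
        Exception lastFailure = null;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
          try {
            result = doRequest.call();
            // missing early return here -> unnecessary retries after success
          } catch (Exception e) {
            lastFailure = e;
          }
        }
        if (result != null) {
          return result;
        }
        throw lastFailure;
      }

      // Fixed variant: return as soon as a call succeeds, retry only on failure.
      static Map<String, Object> fetchFixed(Callable<Map<String, Object>> doRequest,
                                            int maxRetries) throws Exception {
        Exception lastFailure = null;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
          try {
            return doRequest.call();
          } catch (Exception e) {
            lastFailure = e;
          }
        }
        throw lastFailure;
      }
    }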



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.2) - Build # 5133 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5133/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseParallelGC

14 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<154> but was:<152>

Stack Trace:
java.lang.AssertionError: expected:<154> but was:<152>
at 
__randomizedtesting.SeedInfo.seed([C04557270A46C201:481168FDA4BAAFF9]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-13440) Support saving/restoring state of the SimCloudManager for repeatable simulations

2019-05-08 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835398#comment-16835398
 ] 

Andrzej Bialecki  commented on SOLR-13440:
--

Updated patch:

* support for saving intermediate steps in the simulation, including cumulative 
statistics
* additional consistency checks in {{SimCloudManager}}, {{SnapshotCloudManager}} 
and related components, to make sure the internal state makes sense
* a fix for a serious bug in {{SimClusterStateProvider}} when initializing from an 
existing {{ClusterState}}
* assorted bugfixes and refactorings

> Support saving/restoring state of the SimCloudManager for repeatable 
> simulations
> 
>
> Key: SOLR-13440
> URL: https://issues.apache.org/jira/browse/SOLR-13440
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13440.patch, SOLR-13440.patch
>
>
> In order to run simulated experiments that test variations in autoscaling 
> configs and their impact on the cluster layout we need to be able to start 
> from a known well-defined state.
> Currently the {{bin/solr autoscaling -simulate}} tool supports getting the 
> initial state from an actual running cluster. This issue proposes adding 
> support for saving and restoring this state from local files for running 
> repeatable experiments.
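
As a loose sketch of the save/restore idea (plain JDK I/O and made-up names; the
actual patch works through SimCloudManager and SnapshotCloudManager), the simulator
state could be written out as a set of files in a local directory and read back to
seed a new, repeatable run:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical snapshot helper: each piece of state (cluster state,
    // autoscaling config, node metrics, ...) is a JSON string keyed by file name.
    final class SimSnapshotFiles {

      static void save(Path dir, Map<String, String> state) throws IOException {
        Files.createDirectories(dir);
        for (Map.Entry<String, String> e : state.entrySet()) {
          Files.write(dir.resolve(e.getKey()),
                      e.getValue().getBytes(StandardCharsets.UTF_8));
        }
      }

      static Map<String, String> load(Path dir) throws IOException {
        Map<String, String> state = new HashMap<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.json")) {
          for (Path p : files) {
            state.put(p.getFileName().toString(),
                      new String(Files.readAllBytes(p), StandardCharsets.UTF_8));
          }
        }
        return state;
      }
    }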



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13440) Support saving/restoring state of the SimCloudManager for repeatable simulations

2019-05-08 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13440:
-
Attachment: SOLR-13440.patch

> Support saving/restoring state of the SimCloudManager for repeatable 
> simulations
> 
>
> Key: SOLR-13440
> URL: https://issues.apache.org/jira/browse/SOLR-13440
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13440.patch, SOLR-13440.patch
>
>
> In order to run simulated experiments that test variations in autoscaling 
> configs and their impact on the cluster layout we need to be able to start 
> from a known well-defined state.
> Currently the {{bin/solr autoscaling -simulate}} tool supports getting the 
> initial state from an actual running cluster. This issue proposes adding 
> support for saving and restoring this state from local files for running 
> repeatable experiments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul commented on a change in pull request #666: SOLR-13437: fork noggit code into Solr

2019-05-08 Thread GitBox
noblepaul commented on a change in pull request #666: SOLR-13437: fork noggit 
code into Solr
URL: https://github.com/apache/lucene-solr/pull/666#discussion_r281940354
 
 

 ##
 File path: lucene/tools/forbiddenApis/solr.txt
 ##
 @@ -55,3 +55,8 @@ 
com.google.common.base.Preconditions#checkNotNull(java.lang.Object,java.lang.Obj
 @defaultMessage Use methods in java.util.Comparator instead
 com.google.common.collect.Ordering
 
+@defaultMessage Use corresponding classes in package 
org.apache.solr.common.json instead
 
 Review comment:
   sure, thanks. I didn't know it was possible


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 94 - Failure

2019-05-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/94/

No tests ran.

Build Log:
[...truncated 23881 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2531 links (2070 relative) to 3358 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.2.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: 

[JENKINS] Lucene-Solr-8.1-Linux (32bit/jdk1.8.0_201) - Build # 299 - Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.1-Linux/299/
Java: 32bit/jdk1.8.0_201 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testGammaDistribution

Error Message:
0.8167160287603006 0.8316832500407094

Stack Trace:
java.lang.AssertionError: 0.8167160287603006 0.8316832500407094
at 
__randomizedtesting.SeedInfo.seed([31E78EBC2A3E2E05:C9DA51209468412]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testGammaDistribution(MathExpressionTest.java:4590)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16768 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> 313963 INFO  
(SUITE-MathExpressionTest-seed#[31E78EBC2A3E2E05]-worker) [] 
o.a.s.SolrTestCaseJ4 

[GitHub] [lucene-solr] uschindler commented on a change in pull request #666: SOLR-13437: fork noggit code into Solr

2019-05-08 Thread GitBox
uschindler commented on a change in pull request #666: SOLR-13437: fork noggit 
code into Solr
URL: https://github.com/apache/lucene-solr/pull/666#discussion_r281935086
 
 

 ##
 File path: lucene/tools/forbiddenApis/solr.txt
 ##
 @@ -55,3 +55,8 @@ 
com.google.common.base.Preconditions#checkNotNull(java.lang.Object,java.lang.Obj
 @defaultMessage Use methods in java.util.Comparator instead
 com.google.common.collect.Ordering
 
+@defaultMessage Use corresponding classes in package 
org.apache.solr.common.json instead
 
 Review comment:
   I'd propose to just add `org.noggit.**`; this disallows the whole package.
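
   For reference, if that suggestion is taken, the resulting stanza in
   lucene/tools/forbiddenApis/solr.txt would presumably look roughly like this
   (sketch only; the exact entries are up to the patch):

       @defaultMessage Use corresponding classes in package org.apache.solr.common.json instead
       org.noggit.**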


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (32bit/jdk1.8.0_201) - Build # 242 - Still Unstable!

2019-05-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/242/
Java: 32bit/jdk1.8.0_201 -client -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Error from server at http://127.0.0.1:53953/solr/authCollection: Error from 
server at null: Expected mime type application/octet-stream but got text/html. 
   Error 401 require 
authentication  HTTP ERROR 401 Problem 
accessing /solr/authCollection_shard2_replica_n2/select. Reason: 
require authenticationhttp://eclipse.org/jetty;>Powered 
by Jetty:// 9.4.14.v20181114

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:53953/solr/authCollection: Error from server at 
null: Expected mime type application/octet-stream but got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/authCollection_shard2_replica_n2/select. Reason:
require authenticationhttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.14.v20181114




at 
__randomizedtesting.SeedInfo.seed([110350F33EE05ACA:AD6D26E19AB3D9B0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)