[jira] [Comment Edited] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125371#comment-15125371
 ] 

Tomek Rękawek edited comment on OAK-3937 at 1/31/16 9:04 PM:
-

My observation is that on PostgreSQL, if a bulk INSERT hits a conflict, 
{{BatchUpdateException#getUpdateCounts}} returns a positive update count 
even for rows that haven't been successfully created. The method 
{{RDBOddity#batchFailingInsertResult()}} in the attached [^rdb-oddity.patch] 
demonstrates this behaviour.

[^OAK-3937.patch] fixes this issue by ignoring the {{BatchUpdateException}} 
on PostgreSQL.


was (Author: tomek.rekawek):
My observation is that on PostgreSQL, if a bulk INSERT hits a conflict, 
{{BatchUpdateException#getUpdateCounts}} returns a positive update count even 
for rows that haven't been successfully created. The attached patch ignores 
the {{BatchUpdateException#getUpdateCounts}} values on PostgreSQL.

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch, rdb-oddity.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (OAK-3959) RDBDocumentStore: always upsert in the bulk updates

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3959:
---
Comment: was deleted

(was: Patch attached.)

> RDBDocumentStore: always upsert in the bulk updates
> ---
>
> Key: OAK-3959
> URL: https://issues.apache.org/jira/browse/OAK-3959
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Tomek Rękawek
> Fix For: 1.4
>
> Attachments: OAK-3959.patch
>
>
> In PostgreSQL, if we try to insert two documents with the same ID 
> concurrently, both operations may fail (and no document is inserted). 
> Therefore we can't assume that all new documents are inserted after the first 
> round of the bulk {{RDBDocumentStore#createOrUpdate()}} - the upsert flag 
> should always be set to true.





[jira] [Updated] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Attachment: rdb-oddity.patch

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch, rdb-oddity.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Resolved] (OAK-3959) RDBDocumentStore: always upsert in the bulk updates

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek resolved OAK-3959.

Resolution: Duplicate

OAK-3937 contains a more precise description of the problem and also a better 
workaround.

> RDBDocumentStore: always upsert in the bulk updates
> ---
>
> Key: OAK-3959
> URL: https://issues.apache.org/jira/browse/OAK-3959
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Tomek Rękawek
> Fix For: 1.4
>
> Attachments: OAK-3959.patch
>
>
> In PostgreSQL, if we try to insert two documents with the same ID 
> concurrently, both operations may fail (and no document is inserted). 
> Therefore we can't assume that all new documents are inserted after the first 
> round of the bulk {{RDBDocumentStore#createOrUpdate()}} - the upsert flag 
> should always be set to true.





[jira] [Updated] (OAK-3938) Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3938:
---
Attachment: rdb-oddity.patch

> Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()
> 
>
> Key: OAK-3938
> URL: https://issues.apache.org/jira/browse/OAK-3938
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: rdb-oddity.patch
>
>
> Happens with RDBMK only.
> {noformat}
> batchUpdateCachedDocument[RDBFixture: 
> RDB-H2(file)](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
>   Time elapsed: 0.01 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.batchUpdateCachedDocument(MultiDocumentStoreTest.java:333)
> {noformat}





[jira] [Commented] (OAK-3938) Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125509#comment-15125509
 ] 

Tomek Rękawek commented on OAK-3938:


It seems that bulk UPDATEs on Oracle always return -2 (SUCCESS_NO_INFO), even 
if no rows have been modified. The unit test {{RDBOddity.batchUpdateResult()}} 
in the attached [^rdb-oddity.patch] demonstrates this behaviour (it fails only 
on Oracle).
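A sketch of the consequence (hypothetical names, not code from the patch): when a driver returns {{SUCCESS_NO_INFO}} for a batched UPDATE, the caller can no longer distinguish "row updated" from "no row matched" - the value has to be treated as unknown rather than as a success:

```java
import java.sql.Statement;

// Possible per-row outcomes of a batched UPDATE, as seen by the caller.
enum RowOutcome { UPDATED, NOT_UPDATED, UNKNOWN }

// Hypothetical classifier for a single entry of getUpdateCounts().
class BatchOutcome {
    static RowOutcome classify(int updateCount) {
        if (updateCount == Statement.SUCCESS_NO_INFO) {
            // Oracle reports -2 even when zero rows matched, so the only
            // safe interpretation is "unknown": the row must be re-checked.
            return RowOutcome.UNKNOWN;
        }
        if (updateCount == Statement.EXECUTE_FAILED || updateCount == 0) {
            return RowOutcome.NOT_UPDATED;
        }
        return RowOutcome.UPDATED;
    }
}
```

This is why the follow-up patch disables bulk updates on Oracle instead of trying to interpret the counts.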

> Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()
> 
>
> Key: OAK-3938
> URL: https://issues.apache.org/jira/browse/OAK-3938
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: rdb-oddity.patch
>
>
> Happens with RDBMK only.
> {noformat}
> batchUpdateCachedDocument[RDBFixture: 
> RDB-H2(file)](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
>   Time elapsed: 0.01 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.batchUpdateCachedDocument(MultiDocumentStoreTest.java:333)
> {noformat}





[jira] [Commented] (OAK-3879) Lucene index / compatVersion 2: search for 'abc!' does not work

2016-01-31 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125507#comment-15125507
 ] 

Vikas Saurabh commented on OAK-3879:


So, here's \[0] a small test for the case I mentioned above. There are a couple 
of things to notice:
* The plans for the two approaches don't line up:
** The query using a range expression like _\[d TO f }_ reaches Lucene as 
_full:metadata/title:\[d TO f} :fulltext:\[d TO f} ft:\("TO" "\[d" 
"f}")_
** The query with explicit property constraints reaches Lucene as 
_+metadata/title:\[d TO \*] +metadata/title:\[\* TO f}_
* The results for the two cases differ because the property-specific 
comparisons don't match up. I'm not sure, but most probably this is because 
property values aren't analyzed.

There may be other cases that use range queries. More importantly, a property 
range query isn't equivalent to a range query over analyzed values - the two 
can serve different use cases.

\[0]
{code:java}
@Test
public void rangeQueries() throws Exception {
    Tree idx = createIndex("test1", of("propa", "propb"));
    Tree props = TestUtil.newRulePropTree(idx,
            NodeTypeConstants.NT_OAK_UNSTRUCTURED);
    enableForFullText(props, "metadata/title",
            false).setProperty(LuceneIndexConstants.PROP_ANALYZED, true);

    root.commit();

    // create test data
    Tree test = root.getTree("/").addChild("test");
    for (String ch : "abcdefghijklmnopqrstuvwxyz".split("")) {
        if (ch.length() == 0) continue;

        usc(test, ch + "1").addChild("metadata").setProperty("title",
                ch + " title");
        usc(test, ch + "2").addChild("metadata").setProperty("title",
                "title " + ch + " title");
    }
    root.commit();

    String rangeQuery = "//element(*, " + NodeTypeConstants.NT_OAK_UNSTRUCTURED
            + ")[jcr:contains(., '[d TO f}')]";
    assertOrderedQuery(rangeQuery, asList("/test/d1", "/test/d2", "/test/e1",
            "/test/e2"), XPATH, true);

    String explicitQuery = "select [jcr:path] from ["
            + NodeTypeConstants.NT_OAK_UNSTRUCTURED
            + "] where [metadata/title] >= 'd' AND [metadata/title] < 'f'";
    assertOrderedQuery(explicitQuery, asList("/test/d1", "/test/e1"), SQL2,
            true);
}
{code}

> Lucene index / compatVersion 2: search for 'abc!' does not work
> ---
>
> Key: OAK-3879
> URL: https://issues.apache.org/jira/browse/OAK-3879
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Chetan Mehrotra
> Fix For: 1.3.15
>
> Attachments: OAK-3879-v1.patch
>
>
> When using a Lucene fulltext index with compatVersion 2, then the following 
> query does not return any results. When using compatVersion 1, the correct 
> result is returned.
> {noformat}
> SELECT * FROM [nt:unstructured] AS c 
> WHERE CONTAINS(c.[jcr:description], 'abc!') 
> AND ISDESCENDANTNODE(c, '/content')
> {noformat}
> With compatVersion 1 and 2, searching for just 'abc' works. Also, searching 
> with '=' instead of 'contains' works.





[jira] [Commented] (OAK-3938) Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125511#comment-15125511
 ] 

Tomek Rękawek commented on OAK-3938:


Attached [^OAK-3938.patch] disables the bulk updates on Oracle until we find an 
atomic way to check whether a row has been updated in the bulk operation.

> Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()
> 
>
> Key: OAK-3938
> URL: https://issues.apache.org/jira/browse/OAK-3938
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3938.patch, rdb-oddity.patch
>
>
> Happens with RDBMK only.
> {noformat}
> batchUpdateCachedDocument[RDBFixture: 
> RDB-H2(file)](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
>   Time elapsed: 0.01 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.batchUpdateCachedDocument(MultiDocumentStoreTest.java:333)
> {noformat}





[jira] [Updated] (OAK-3938) Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3938:
---
Attachment: OAK-3938.patch

> Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()
> 
>
> Key: OAK-3938
> URL: https://issues.apache.org/jira/browse/OAK-3938
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3938.patch, rdb-oddity.patch
>
>
> Happens with RDBMK only.
> {noformat}
> batchUpdateCachedDocument[RDBFixture: 
> RDB-H2(file)](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
>   Time elapsed: 0.01 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.batchUpdateCachedDocument(MultiDocumentStoreTest.java:333)
> {noformat}





[jira] [Updated] (OAK-2714) Test failures on Jenkins

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test 
  || Builds || Fixture  || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir  
   | 81  | DOCUMENT_RDB | 1.7   |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035  
   | 76, 128 | SEGMENT_MK , DOCUMENT_RDB  | 1.6   |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop  
   | 64  | ?| ? |
| 
org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore
 | 52, 181, 399 |  SEGMENT_MK, DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar 
   | 41  | ?| ? |
| 
org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded
  | 29  | ?| ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite 
   | 35  | SEGMENT_MK   | ? |
| 
org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex
 | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter 
| 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 
121, 157, 396 | DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110, 
382 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | 
DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | 
DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | 
SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 
151, 490, 656, 679 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | 
DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163, 656 | SEGMENT_MK, 
DOCUMENT_RDB, DOCUMENT_NS | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243, 400 
| DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | 
DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | 
DOCUMENT_RDB | 1.6 |
| 
org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths
 | 361 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 
361, 608 | DOCUMENT_NS, SEGMENT_MK | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodesSameParent | 427, 428 | 
DOCUMENT_NS, SEGMENT_MK | 1.7 |
| Build crashes: malloc(): memory corruption | 477 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.oak.upgrade.cli.SegmentToJdbcTest.validateMigration | 
486 | DOCUMENT_NS | 1.7| 
| org.apache.jackrabbit.j2ee.TomcatIT.testTomcat | 489, 493, 597, 648 | 
DOCUMENT_NS, SEGMENT_MK | 1.7 | 
| org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest | 490, 
623, 624, 656, 679 | DOCUMENT_RDB | 1.7 |
| 
org.apache.jackrabbit.oak.plugins.index.solr.server.EmbeddedSolrServerProviderTest.testEmbeddedSolrServerInitialization
 | 490, 656, 679 | DOCUMENT_RDB | 1.7 |
| 
org.apache.jackrabbit.oak.run.osgi.PropertyIndexReindexingTest.propertyIndexState
 | 492 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.j2ee.TomcatIT | 589 | SEGMENT_MK | 1.8 |
| 
org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStoreRestart
 | 621 | DOCUMENT_NS | 1.8 |
| 
org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest.testSynonymsFileCreation
 | 627 | DOCUMENT_RDB |1.7 |
| org.apache.jackrabbit.oak.spi.security.authorization.cug.impl.* | 648 | 
SEGMENT_MK, DOCUMENT_NS | 1.8 | 
| org.apache.jackrabbit.oak.remote.http.handler.RemoteServerIT | 643 | 
DOCUMENT_NS | 1.7, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest | 
663 | SEGMENT_MK | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.blob.RDBBlobStoreTest | 673, 674 | 
SEGMENT_MK | 1.8 | 
| org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest | 
648, 679 | SEGMENT_MK, DOCUMENT_NS | 1.8 | 
| org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest | 689 
| 


[jira] [Assigned] (OAK-3938) Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()

2016-01-31 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reassigned OAK-3938:
---

Assignee: Julian Reschke

> Occasional failure in MultiDocumentStoreTest.batchUpdateCachedDocument()
> 
>
> Key: OAK-3938
> URL: https://issues.apache.org/jira/browse/OAK-3938
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
> Fix For: 1.4
>
> Attachments: OAK-3938.patch, rdb-oddity.patch
>
>
> Happens with RDBMK only.
> {noformat}
> batchUpdateCachedDocument[RDBFixture: 
> RDB-H2(file)](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
>   Time elapsed: 0.01 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.batchUpdateCachedDocument(MultiDocumentStoreTest.java:333)
> {noformat}





[jira] [Commented] (OAK-2761) Persistent cache: add data in a different thread

2016-01-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125849#comment-15125849
 ] 

Thomas Mueller commented on OAK-2761:
-

[~tomek.rekawek], see the comments above on how to implement the "different 
thread" logic.

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4, 1.3.15
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data in 
> a separate thread. If there is too much data added, then some of the data is 
> not stored. If possible, the data that was not referenced a lot, and / or old 
> revisions of documents (if new revisions are available).





[jira] [Commented] (OAK-2761) Persistent cache: add data in a different thread

2016-01-31 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125848#comment-15125848
 ] 

Thomas Mueller commented on OAK-2761:
-

The "different thread" logic is implemented in the (related) TCPBroadcaster 
class, using:

{noformat}
ArrayBlockingQueue<ByteBuffer> sendBuffer
...
// entries are added to this buffer in the main thread
// (the thread that must not be blocked):
@Override
public void send(ByteBuffer buff) {
    ByteBuffer b = ByteBuffer.allocate(buff.remaining());
    b.put(buff);
    b.flip();
    while (sendBuffer.size() > MAX_BUFFER_SIZE) {
        sendBuffer.poll();
    }
    try {
        sendBuffer.add(b);
    } catch (IllegalStateException e) {
        // ignore - might happen once in a while,
        // if the buffer was not yet full just before, but now
        // many threads concurrently tried to add
    }
}

// the thread that sends (writes):
void send() {
    while (isRunning()) {
        try {
            ByteBuffer buff = sendBuffer.poll(10, TimeUnit.MILLISECONDS);
            if (buff != null && isRunning()) {
                sendBuffer(buff);
            }
        } catch (InterruptedException e) {
            // ignore
        }
    }
}
...
{noformat}

As for threading, I have used an explicit new thread. I think that's much 
better than using a thread pool or similar, because we have full control over 
how the thread is started and stopped. As we have seen with the 
AsyncIndexUpdate thread, relying on an external thread pool is dangerous: 
stopping the pool can call Thread.interrupt (which results in all kinds of 
problems), the pool may be shut down too late, or shutting it down may not 
wait for all running threads to stop (no Thread.join). Also, you can give the 
thread a nice, human-readable name, which is not easy with a thread pool. So I 
have used:

{noformat}
sendThread = new Thread(new Runnable() {
    @Override
    public void run() {
        send();
    }
}, "Oak TCPBroadcaster: send #" + id);
sendThread.setDaemon(true);
sendThread.start();
...
@Override
public void close() {
    if (isRunning()) {
        LOG.debug("Stopping");
        synchronized (stop) {
            stop.set(true);
            stop.notifyAll();
        }
        ...
        try {
            sendThread.join();
        } catch (InterruptedException e) {
            // ignore
        }
{noformat}
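The drop-oldest buffering idea above can be shown as a self-contained sketch (simplified and with hypothetical names; the real TCPBroadcaster carries ByteBuffers and adds socket I/O and shutdown handling):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Sketch of the non-blocking producer side: when the buffer is full,
// the oldest entries are discarded so the caller never blocks.
class DropOldestBuffer {
    static final int MAX_BUFFER_SIZE = 4;
    final ArrayBlockingQueue<String> sendBuffer =
            new ArrayBlockingQueue<>(MAX_BUFFER_SIZE + 8);

    void send(String message) {
        // make room by discarding the oldest entries; this is best-effort,
        // not atomic under contention - same trade-off as the original
        while (sendBuffer.size() >= MAX_BUFFER_SIZE) {
            sendBuffer.poll();
        }
        try {
            sendBuffer.add(message);
        } catch (IllegalStateException e) {
            // ignore: the buffer filled up concurrently, drop this message
        }
    }
}
```

Sending ten messages through a buffer capped at four keeps only the four newest; losing data under pressure is the accepted price for never blocking the foreground thread.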

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4, 1.3.15
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> 

[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2016-01-31 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2761:

Fix Version/s: 1.3.15
   1.4

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.4, 1.3.15
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data in 
> a separate thread. If there is too much data added, then some of the data is 
> not stored. If possible, the data that was not referenced a lot, and / or old 
> revisions of documents (if new revisions are available).
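The approach described above — handing writes to a separate thread and discarding some entries when too much data arrives — can be sketched with a bounded queue whose `offer` never blocks the foreground thread. This is a simplification under assumed names and sizes, not the actual persistent-cache code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: cache writes go through a bounded queue drained by a background
// thread; when the queue is full the entry is dropped instead of blocking
// the caller.
public class AsyncCacheWriter {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    private final AtomicInteger dropped = new AtomicInteger();
    private final Thread writer;
    private volatile boolean running = true;

    public AsyncCacheWriter() {
        writer = new Thread(() -> {
            while (running || !queue.isEmpty()) {
                try {
                    String entry = queue.poll(10, TimeUnit.MILLISECONDS);
                    if (entry != null) {
                        persist(entry);
                    }
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "persistent-cache-writer");
        writer.setDaemon(true);
        writer.start();
    }

    public boolean put(String entry) {
        boolean accepted = queue.offer(entry); // never blocks the caller
        if (!accepted) {
            dropped.incrementAndGet();         // too much data: drop it
        }
        return accepted;
    }

    public int getDroppedCount() {
        return dropped.get();
    }

    private void persist(String entry) {
        // real code would write the entry to the cache store here
    }

    public void close() {
        running = false;
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

A real implementation would also decide *which* entries to drop (rarely referenced data, old document revisions), as the issue suggests.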



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3923) Async indexing delayed by 30 minutes because stop order is incorrect

2016-01-31 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125857#comment-15125857
 ] 

Chetan Mehrotra commented on OAK-3923:
--

Updated the close call logic to support multiple calls to the {{close}} method with 
r1727893

> Async indexing delayed by 30 minutes because stop order is incorrect
> 
>
> Key: OAK-3923
> URL: https://issues.apache.org/jira/browse/OAK-3923
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Chetan Mehrotra
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.15
>
> Attachments: OAK-3923-v1.patch
>
>
> The stop order of Oak components is incorrect, and this can lead to an async 
> indexing delay of 30 minutes, because the indexing lease is not removed. The 
> problem is that the node store is stopped before the async index is stopped, 
> so that async indexing can still be in progress, and then when async indexing 
> is done, the lease can not be removed because the node store is not available.
> From the log file:
> {noformat}
> error.log:
> 21.01.2016 11:53:56.898 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak-tarmk-standby BundleEvent STOPPED
> 21.01.2016 11:53:56.900 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak-solr-osgi Service 
> [org.apache.jackrabbit.oak.plugins.index.solr.osgi.SolrIndexEditorProviderService,571,
>  [org.apache.jackrabbit.oak.plugins.index.IndexEditorProvider]] ServiceEvent 
> UNREGISTERING
> ...
> 21.01.2016 11:53:56.930 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak-lucene BundleEvent STOPPING
> 21.01.2016 11:53:56.930 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak-lucene BundleEvent STOPPED
> 21.01.2016 11:53:56.931 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak-core Service 
> [org.apache.jackrabbit.oak.plugins.index.property.PropertyIndexProvider,405, 
> [org.apache.jackrabbit.oak.spi.query.QueryIndexProvider]] ServiceEvent 
> UNREGISTERING
> ...
> 21.01.2016 11:53:56.936 *INFO* [FelixStartLevel] 
> com.adobe.granite.repository.impl.SlingRepositoryManager stop: Repository 
> still running, forcing shutdown
> ...
> 21.01.2016 11:53:56.960 *WARN* [FelixStartLevel] 
> org.apache.jackrabbit.oak.osgi.OsgiWhiteboard Error unregistering service: 
> com.adobe.granite.repository.impl.SlingRepositoryManager$1@7c052458 of type 
> java.util.concurrent.Executor
> java.lang.IllegalStateException: Service already unregistered.
>   at 
> org.apache.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:136)
>   at 
> org.apache.jackrabbit.oak.osgi.OsgiWhiteboard$1.unregister(OsgiWhiteboard.java:81)
>   at 
> org.apache.jackrabbit.oak.spi.whiteboard.CompositeRegistration.unregister(CompositeRegistration.java:43)
>   at org.apache.jackrabbit.oak.Oak$6.close(Oak.java:592)
> ...
> 21.01.2016 11:56:50.985 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak-core Service [763, 
> [org.apache.jackrabbit.oak.plugins.segment.SegmentStoreProvider]] 
> ServiceEvent UNREGISTERING
>  
> debug.log:
> 21.01.2016 11:56:51.964 *WARN* [sling-default-4] 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
> update failed
> java.lang.IllegalStateException: service must be activated when used
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.getNodeStore(SegmentNodeStoreService.java:233)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.getNodeStore(SegmentNodeStoreService.java:92)
>   at 
> org.apache.jackrabbit.oak.spi.state.ProxyNodeStore.getRoot(ProxyNodeStore.java:36)
>   at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate$AsyncUpdateCallback.close(AsyncIndexUpdate.java:266)
>   at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:451)
>   at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:351)
>  
> error.log:
> 21.01.2016 11:56:51.965 *ERROR* [sling-default-4] 
> org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
> execution of 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@1706b18c : service 
> must be activated when used
> java.lang.IllegalStateException: service must be activated when used
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.getNodeStore(SegmentNodeStoreService.java:233)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.getNodeStore(SegmentNodeStoreService.java:92)
>   at 
> 
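The stop-order problem in this issue — the node store being stopped before the async indexer, so the lease cannot be released — boils down to closing components in reverse start order. A minimal sketch of that idea (illustrative only, not the actual Oak/OSGi wiring):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: track components as they start and close them in reverse order,
// so the async indexer is closed while the node store is still available.
public class ShutdownOrder {
    private final Deque<AutoCloseable> components = new ArrayDeque<>();

    public void register(AutoCloseable c) {
        components.push(c); // most recently started closes first
    }

    public void shutdown() {
        while (!components.isEmpty()) {
            try {
                components.pop().close();
            } catch (Exception e) {
                // log and keep closing the remaining components
            }
        }
    }
}
```

With the node store registered first and the indexer second, `shutdown()` closes the indexer first, letting it remove its lease before the store goes away.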

[jira] [Resolved] (OAK-3879) Lucene index / compatVersion 2: search for 'abc!' does not work

2016-01-31 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3879.
--
Resolution: Fixed

Thanks [~catholicon] for the examples. Makes sense to then have them excluded 
from the escape list.

So changed the escape list to the one below with r1727895
{code}
+private static final char[] LUCENE_QUERY_OPERATORS = {':' , '/', '!', '&', 
'|'};
{code}
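A hedged sketch of what escaping against such an operator list could look like — this is illustrative only, not the actual Oak implementation, and the helper name is invented:

```java
// Sketch: escape only a fixed list of Lucene query operators; any
// character excluded from the list passes through unescaped.
public class QueryEscaper {
    private static final char[] LUCENE_QUERY_OPERATORS =
            {':', '/', '!', '&', '|'};

    public static String escape(String query) {
        StringBuilder sb = new StringBuilder();
        for (char c : query.toCharArray()) {
            for (char op : LUCENE_QUERY_OPERATORS) {
                if (c == op) {
                    sb.append('\\'); // prefix operators with a backslash
                    break;
                }
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```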

> Lucene index / compatVersion 2: search for 'abc!' does not work
> ---
>
> Key: OAK-3879
> URL: https://issues.apache.org/jira/browse/OAK-3879
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Thomas Mueller
>Assignee: Chetan Mehrotra
> Fix For: 1.3.15
>
> Attachments: OAK-3879-v1.patch
>
>
> When using a Lucene fulltext index with compatVersion 2, then the following 
> query does not return any results. When using compatVersion 1, the correct 
> result is returned.
> {noformat}
> SELECT * FROM [nt:unstructured] AS c 
> WHERE CONTAINS(c.[jcr:description], 'abc!') 
> AND ISDESCENDANTNODE(c, '/content')
> {noformat}
> With compatVersion 1 and 2, searching for just 'abc' works. Also, searching 
> with '=' instead of 'contains' works.





[jira] [Commented] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125327#comment-15125327
 ] 

Tomek Rękawek commented on OAK-3937:


The attached patch enables autoCommit for the bulk UPDATE method (not for the 
bulk INSERT). It also removes one redundant commit() (invoked after a batch 
SELECT operation).

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Comment Edited] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125327#comment-15125327
 ] 

Tomek Rękawek edited comment on OAK-3937 at 1/31/16 1:23 PM:
-

The attached patch fixes the issue on PostgreSQL, by enabling autoCommit for 
the bulk UPDATE method (not for the bulk INSERT). It also removes one redundant 
commit() (invoked after a batch SELECT operation).


was (Author: tomek.rekawek):
The attached patch enables autoCommit for the bulk UPDATE method (not for the 
bulk INSERT). It also removes one redundant commit() (invoked after a batch 
SELECT operation).

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Commented] (OAK-3961) Cold Standby revisit timeout setup

2016-01-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125328#comment-15125328
 ] 

Alex Parvulescu commented on OAK-3961:
--

removed the global timeouts, and refactored all the tests with a more aggressive 
timeout value http://svn.apache.org/viewvc?rev=1727813&view=rev.
I'm now curious to see if this is too low for the CI infra.

> Cold Standby revisit timeout setup
> --
>
> Key: OAK-3961
> URL: https://issues.apache.org/jira/browse/OAK-3961
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> The timeout settings are too large and inefficient, making all the tests very 
> slow. On top of this, the current timeout is being enforced in 2 places, which, 
> as it turns out, doesn't play too well with the sync mechanism:
> * one is via the _ReadTimeoutHandler_ in the _StandbyClient_
> * second is in the _SegmentLoaderHandler_
> as it turns out the first one is a global kill switch, and it will fail any 
> transaction larger than the set value (_all_ of the sync cycle), which is not 
> what I meant to do with it, so I'll remove it and only leave the second one, 
> which is a timeout per request (segment or binary).





[jira] [Created] (OAK-3961) Cold Standby revisit timeout setup

2016-01-31 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-3961:


 Summary: Cold Standby revisit timeout setup
 Key: OAK-3961
 URL: https://issues.apache.org/jira/browse/OAK-3961
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: tarmk-standby
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu


The timeout settings are too large and inefficient, making all the tests very 
slow. On top of this, the current timeout is being enforced in 2 places, which, 
as it turns out, doesn't play too well with the sync mechanism:
* one is via the _ReadTimeoutHandler_ in the _StandbyClient_
* second is in the _SegmentLoaderHandler_
as it turns out the first one is a global kill switch, and it will fail any 
transaction larger than the set value (_all_ of the sync cycle), which is not 
what I meant to do with it, so I'll remove it and only leave the second one, 
which is a timeout per request (segment or binary).





[jira] [Updated] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Attachment: OAK-3937.patch

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Resolved] (OAK-3810) Log messages related to AsyncIndexUpdate leaseTimeOut impact

2016-01-31 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3810.
--
Resolution: Duplicate

> Log messages related to AsyncIndexUpdate leaseTimeOut impact
> 
>
> Key: OAK-3810
> URL: https://issues.apache.org/jira/browse/OAK-3810
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.2.9, 1.0.25, 1.3.12
>Reporter: Thierry Ygé
>
> Currently if the async index is not running it might be due to the 
> leaseTimeOut check. 
> As there are no log messages, it is difficult to analyze why the indexing is 
> not running.
> It would really help to add some log messages about this.





[jira] [Comment Edited] (OAK-3961) Cold Standby revisit timeout setup

2016-01-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125328#comment-15125328
 ] 

Alex Parvulescu edited comment on OAK-3961 at 1/31/16 1:45 PM:
---

* removed the global timeouts, and refactored all the tests with a more 
aggressive timeout value http://svn.apache.org/viewvc?rev=1727813&view=rev
* fixed a tiny compilation issue in oak-run with 
http://svn.apache.org/viewvc?rev=1727816&view=rev

I'm now curious to see if this is too low for the CI infra.


was (Author: alex.parvulescu):
removed the global timeouts, and refactored all the tests with a more aggressive 
timeout value http://svn.apache.org/viewvc?rev=1727813&view=rev.
I'm now curious to see if this is too low for the CI infra.

> Cold Standby revisit timeout setup
> --
>
> Key: OAK-3961
> URL: https://issues.apache.org/jira/browse/OAK-3961
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> The timeout settings are too large and inefficient, making all the tests very 
> slow. On top of this, the current timeout is being enforced in 2 places, which, 
> as it turns out, doesn't play too well with the sync mechanism:
> * one is via the _ReadTimeoutHandler_ in the _StandbyClient_
> * second is in the _SegmentLoaderHandler_
> as it turns out the first one is a global kill switch, and it will fail any 
> transaction larger than the set value (_all_ of the sync cycle), which is not 
> what I meant to do with it, so I'll remove it and only leave the second one, 
> which is a timeout per request (segment or binary).





[jira] [Comment Edited] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123799#comment-15123799
 ] 

Tomek Rękawek edited comment on OAK-3937 at 1/31/16 2:00 PM:
-

It seems that in the current trunk the problem only exists on PostgreSQL.


was (Author: tomek.rekawek):
It seems that in the current trunk the problem only exists on PostgreSQL. 
It can be fixed by setting autoCommit() for the bulk update connection.

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Updated] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Attachment: OAK-3937.patch

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Commented] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125371#comment-15125371
 ] 

Tomek Rękawek commented on OAK-3937:


My observation is that on PostgreSQL, if we have a bulk INSERT and there's 
a conflict, {{BatchUpdateException#getUpdateCounts}} returns a positive 
update count even for rows that haven't been successfully created. The attached 
patch ignores the {{BatchUpdateException#getUpdateCounts}} values on PostgreSQL.
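The interpretation logic behind such a fix can be sketched without a database: given the counts from {{BatchUpdateException#getUpdateCounts}}, decide which rows succeeded, treating the counts as unreliable on PostgreSQL. Class and method names here are invented for illustration, not the actual Oak RDB code:

```java
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Sketch: map batch INSERT update counts to the indexes of rows that
// succeeded. On PostgreSQL the counts may be positive even for rows that
// were not created, so they are ignored there and the caller must
// re-check which rows actually exist.
public class BatchResultInterpreter {
    public static List<Integer> succeededRows(int[] updateCounts, boolean isPostgres) {
        List<Integer> succeeded = new ArrayList<>();
        if (isPostgres) {
            return succeeded; // counts unreliable: report nothing
        }
        for (int i = 0; i < updateCounts.length; i++) {
            // a non-negative count or SUCCESS_NO_INFO (-2) means the row
            // was applied; EXECUTE_FAILED (-3) means it was not
            if (updateCounts[i] >= 0 || updateCounts[i] == Statement.SUCCESS_NO_INFO) {
                succeeded.add(i);
            }
        }
        return succeeded;
    }
}
```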

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Updated] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Attachment: OAK-3937.patch

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Updated] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Attachment: (was: OAK-3937.patch)

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3937.patch
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Assigned] (OAK-3368) Speed up ExternalPrivateStoreIT and ExternalSharedStoreIT

2016-01-31 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu reassigned OAK-3368:


Assignee: Alex Parvulescu  (was: Manfred Baedke)

> Speed up ExternalPrivateStoreIT and ExternalSharedStoreIT
> -
>
> Key: OAK-3368
> URL: https://issues.apache.org/jira/browse/OAK-3368
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: tarmk-standby
>Reporter: Marcel Reutegger
>Assignee: Alex Parvulescu
>
> Both tests run for more than 5 minutes. Most of the time the tests are 
> somehow stuck in shutting down the server.





[jira] [Resolved] (OAK-3368) Speed up ExternalPrivateStoreIT and ExternalSharedStoreIT

2016-01-31 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3368.
--
   Resolution: Duplicate
Fix Version/s: (was: 1.4)

I'm revisiting the test setup as a part of OAK-3961, a fix for this will be 
included there.

> Speed up ExternalPrivateStoreIT and ExternalSharedStoreIT
> -
>
> Key: OAK-3368
> URL: https://issues.apache.org/jira/browse/OAK-3368
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: tarmk-standby
>Reporter: Marcel Reutegger
>Assignee: Alex Parvulescu
>
> Both tests run for more than 5 minutes. Most of the time the tests are 
> somehow stuck in shutting down the server.





[jira] [Issue Comment Deleted] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Comment: was deleted

(was: The attached patch fixes the issue on PostgreSQL, by enabling autoCommit 
for the bulk UPDATE method (not for the bulk INSERT). It also removes one 
redundant commit() (invoked after a batch SELECT operation).)

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Updated] (OAK-3937) Batch createOrUpdate() may fail with primary key violation

2016-01-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3937:
---
Attachment: (was: OAK-3937.patch)

> Batch createOrUpdate() may fail with primary key violation
> --
>
> Key: OAK-3937
> URL: https://issues.apache.org/jira/browse/OAK-3937
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
> Fix For: 1.4
>
>
> In some cases the batch createOrUpdate() method may fail on RDBMK with a 
> primary key violation exception.
> {noformat}
> java.lang.AssertionError: 
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: 
> "PRIMARY_KEY_1 ON PUBLIC.DSTEST_NODES(ID) VALUES ('1:/node-40', 118)"; SQL 
> statement:
> insert into dstest_NODES(ID, MODIFIED, HASBINARY, DELETEDONCE, MODCOUNT, 
> CMODCOUNT, DSIZE, DATA, BDATA) values (?, ?, ?, ?, ?, ?, ?, ?, ?) [23505-185]
> {noformat}
> See the currently disabled test 
> {{MultiDocumentStoreTest.concurrentBatchUpdate()}}.





[jira] [Comment Edited] (OAK-3961) Cold Standby revisit timeout setup

2016-01-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125328#comment-15125328
 ] 

Alex Parvulescu edited comment on OAK-3961 at 1/31/16 3:09 PM:
---

* -removed the global timeouts-, refactored all the tests with a more aggressive 
timeout value [r1727813|http://svn.apache.org/viewvc?rev=1727813&view=rev]
* fixed a tiny compilation issue in oak-run with 
[r1727816|http://svn.apache.org/viewvc?rev=1727816&view=rev]
* had to re-add the global timeout handler (it was the only way to control the 
timeout on the initial connection to a server that has a blacklist; otherwise 
it would hang), but to keep things consistent I'm removing it once the initial 
sync conversation happens (the timeout handler will only control the initial 
head request); this allowed enabling the _FailoverIPRangeTest_ test 
[r1727831|http://svn.apache.org/viewvc?rev=1727831&view=rev], 
[r1727832|http://svn.apache.org/viewvc?rev=1727832&view=rev]

I'm now curious to see if this is too low for the CI infra.


was (Author: alex.parvulescu):
* -removed the global timeouts-, and refactored all the tests with a more 
aggressive timeout value http://svn.apache.org/viewvc?rev=1727813&view=rev.
* fixed a tiny compilation issue in oak-run with 
http://svn.apache.org/viewvc?rev=1727816&view=rev
* had to re-add the global timeout handler (it was the only way to control the 
timeout on the initial connection to a server that has a blacklist; otherwise 
it would hang), but to keep things consistent I'm removing it once the initial 
sync conversation happens (the timeout handler will only control the initial 
head request); this allowed enabling the _FailoverIPRangeTest_ test 
http://svn.apache.org/viewvc?rev=1727831&view=rev, 
http://svn.apache.org/viewvc?rev=1727832&view=rev

I'm now curious to see if this is too low for the CI infra.

> Cold Standby revisit timeout setup
> --
>
> Key: OAK-3961
> URL: https://issues.apache.org/jira/browse/OAK-3961
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> The timeout settings are too large and inefficient, making all the tests very 
> slow. On top of this, the current timeout is being enforced in two places, 
> which, as it turns out, doesn't play too well with the sync mechanism:
> * one is via the _ReadTimeoutHandler_ in the _StandbyClient_
> * the second is in the _SegmentLoaderHandler_
> As it turns out, the first one is a global kill switch: it will fail any 
> transaction longer than the set value (_all_ of the sync cycle), which is not 
> what I meant to do with it, so I'll remove it and only leave the second one, 
> which is a timeout per request (segment or binary).





[jira] [Comment Edited] (OAK-3961) Cold Standby revisit timeout setup

2016-01-31 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125328#comment-15125328
 ] 

Alex Parvulescu edited comment on OAK-3961 at 1/31/16 3:06 PM:
---

* -removed the global timeouts-, and refactored all the tests with a more 
aggressive timeout value http://svn.apache.org/viewvc?rev=1727813&view=rev
* fixed a tiny compilation issue in oak-run with 
http://svn.apache.org/viewvc?rev=1727816&view=rev
* had to re-add the global timeout handler (it was the only way to control the 
timeout on the initial connection to a server that has a blacklist; otherwise 
it would hang), but to keep things consistent I'm removing it once the initial 
sync conversation happens (the timeout handler will only control the initial 
head request). This allowed enabling the _FailoverIPRangeTest_ test 
http://svn.apache.org/viewvc?rev=1727831&view=rev, 
http://svn.apache.org/viewvc?rev=1727832&view=rev

I'm now curious to see if this is too low for the CI infra.


was (Author: alex.parvulescu):
* removed the global timeouts, and refactored all the tests with a more 
aggressive timeout value http://svn.apache.org/viewvc?rev=1727813&view=rev
* fixed a tiny compilation issue in oak-run with 
http://svn.apache.org/viewvc?rev=1727816=rev

I'm now curious to see if this is too low for the CI infra.

> Cold Standby revisit timeout setup
> --
>
> Key: OAK-3961
> URL: https://issues.apache.org/jira/browse/OAK-3961
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> The timeout settings are too large and inefficient, making all the tests very 
> slow. On top of this, the current timeout is being enforced in two places, 
> which, as it turns out, doesn't play too well with the sync mechanism:
> * one is via the _ReadTimeoutHandler_ in the _StandbyClient_
> * the second is in the _SegmentLoaderHandler_
> As it turns out, the first one is a global kill switch: it will fail any 
> transaction longer than the set value (_all_ of the sync cycle), which is not 
> what I meant to do with it, so I'll remove it and only leave the second one, 
> which is a timeout per request (segment or binary).


