[jira] [Commented] (OAK-8606) Build Jackrabbit Oak #2359 failed

2019-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924691#comment-16924691
 ] 

Hudson commented on OAK-8606:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#2360|https://builds.apache.org/job/Jackrabbit%20Oak/2360/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2360/console]

> Build Jackrabbit Oak #2359 failed
> -
>
> Key: OAK-8606
> URL: https://issues.apache.org/jira/browse/OAK-8606
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2359 has failed.
> First failed run: [Jackrabbit Oak 
> #2359|https://builds.apache.org/job/Jackrabbit%20Oak/2359/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2359/console]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8606) Build Jackrabbit Oak #2359 failed

2019-09-06 Thread Hudson (Jira)
Hudson created OAK-8606:
---

 Summary: Build Jackrabbit Oak #2359 failed
 Key: OAK-8606
 URL: https://issues.apache.org/jira/browse/OAK-8606
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #2359 has failed.
First failed run: [Jackrabbit Oak 
#2359|https://builds.apache.org/job/Jackrabbit%20Oak/2359/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2359/console]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8605) Read the journal lazily in the SplitPersistence

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/OAK-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924612#comment-16924612
 ] 

Tomek Rękawek commented on OAK-8605:


Fixed for trunk in [r1866535|https://svn.apache.org/r1866535].

> Read the journal lazily in the SplitPersistence
> ---
>
> Key: OAK-8605
> URL: https://issues.apache.org/jira/browse/OAK-8605
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Tomek Rękawek
>Priority: Major
> Fix For: 1.18.0
>
>
> The SplitPersistence reads the whole read-only journal at once, during 
> initialisation. We may read it in a lazy way, to avoid fetching the whole 
> append blob from Azure, which may be slow (see OAK-8604).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (OAK-8604) Keep the last journal entry in metadata

2019-09-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/OAK-8604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek reassigned OAK-8604:
--

Assignee: Tomek Rękawek

> Keep the last journal entry in metadata
> ---
>
> Key: OAK-8604
> URL: https://issues.apache.org/jira/browse/OAK-8604
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
>Priority: Major
> Fix For: 1.18.0
>
>
> In oak-segment-azure, reading the whole journal may be a time-consuming 
> task, as the blob appends are a bit slow to read. We may try to keep the most 
> recent journal entry as metadata and fall back to the actual blob contents 
> if JournalReader.nextLine() is invoked subsequently.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8604) Keep the last journal entry in metadata

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/OAK-8604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924610#comment-16924610
 ] 

Tomek Rękawek commented on OAK-8604:


Fixed for trunk in [r1866534|https://svn.apache.org/r1866534].

> Keep the last journal entry in metadata
> ---
>
> Key: OAK-8604
> URL: https://issues.apache.org/jira/browse/OAK-8604
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Tomek Rękawek
>Priority: Major
> Fix For: 1.18.0
>
>
> In oak-segment-azure, reading the whole journal may be a time-consuming 
> task, as the blob appends are a bit slow to read. We may try to keep the most 
> recent journal entry as metadata and fall back to the actual blob contents 
> if JournalReader.nextLine() is invoked subsequently.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (OAK-8604) Keep the last journal entry in metadata

2019-09-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/OAK-8604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek resolved OAK-8604.

Resolution: Fixed

> Keep the last journal entry in metadata
> ---
>
> Key: OAK-8604
> URL: https://issues.apache.org/jira/browse/OAK-8604
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
>Priority: Major
> Fix For: 1.18.0
>
>
> In oak-segment-azure, reading the whole journal may be a time-consuming 
> task, as the blob appends are a bit slow to read. We may try to keep the most 
> recent journal entry as metadata and fall back to the actual blob contents 
> if JournalReader.nextLine() is invoked subsequently.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8605) Read the journal lazily in the SplitPersistence

2019-09-06 Thread Jira
Tomek Rękawek created OAK-8605:
--

 Summary: Read the journal lazily in the SplitPersistence
 Key: OAK-8605
 URL: https://issues.apache.org/jira/browse/OAK-8605
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-azure
Reporter: Tomek Rękawek
 Fix For: 1.18.0


The SplitPersistence reads the whole read-only journal at once, during 
initialisation. We may read it in a lazy way, to avoid fetching the whole 
append blob from Azure, which may be slow (see OAK-8604).
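The lazy reading described above can be sketched as follows. This is an illustrative stand-in, not the actual SplitPersistence SPI: `LazyJournal` and the `Supplier`-based fetch are hypothetical names.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

// Minimal sketch of the lazy approach: defer the expensive journal fetch
// until the first read. The Supplier stands in for the slow append-blob
// read from Azure; this is illustrative, not the actual Oak API.
class LazyJournal implements Iterable<String> {

    private final Supplier<List<String>> fetcher; // e.g. reads the whole blob
    private List<String> lines;                   // null until first access

    LazyJournal(Supplier<List<String>> fetcher) {
        this.fetcher = fetcher;
    }

    private synchronized List<String> lines() {
        if (lines == null) {
            lines = fetcher.get(); // the slow remote read happens here, once
        }
        return lines;
    }

    @Override
    public Iterator<String> iterator() {
        return lines().iterator();
    }
}
```

Callers that never iterate the journal then never pay for the blob read at all.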



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8604) Keep the last journal entry in metadata

2019-09-06 Thread Jira
Tomek Rękawek created OAK-8604:
--

 Summary: Keep the last journal entry in metadata
 Key: OAK-8604
 URL: https://issues.apache.org/jira/browse/OAK-8604
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-azure
Reporter: Tomek Rękawek
 Fix For: 1.18.0


In oak-segment-azure, reading the whole journal may be a time-consuming 
task, as the blob appends are a bit slow to read. We may try to keep the most 
recent journal entry as metadata and fall back to the actual blob contents if 
JournalReader.nextLine() is invoked subsequently.
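The fallback scheme described above can be sketched as follows. All names here are illustrative assumptions, not the actual oak-segment-azure API:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

// Sketch: serve the most recent journal entry from cheap metadata and only
// fetch the full (slow) append blob when older entries are requested.
// CachedHeadJournalReader is a hypothetical stand-in for JournalReader.
class CachedHeadJournalReader {

    private final String headEntry;                 // cached in blob metadata
    private final Supplier<List<String>> fullFetch; // reads the whole blob
    private Iterator<String> rest;                  // initialised on demand
    private boolean headReturned;

    CachedHeadJournalReader(String headEntry, Supplier<List<String>> fullFetch) {
        this.headEntry = headEntry;
        this.fullFetch = fullFetch;
    }

    /** Returns the next journal line, newest first, or null when exhausted. */
    String nextLine() {
        if (!headReturned) {
            headReturned = true;
            return headEntry; // served from metadata, no blob read needed
        }
        if (rest == null) {
            // Fall back to the actual blob contents only on a subsequent call.
            List<String> all = fullFetch.get();
            rest = all.subList(1, all.size()).iterator(); // skip cached head
        }
        return rest.hasNext() ? rest.next() : null;
    }
}
```

Consumers that only need the journal head (the common case) never touch the blob.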



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (OAK-8204) OutOfMemoryError: create nodes

2019-09-06 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-8204.
-
Resolution: Cannot Reproduce

> OutOfMemoryError: create nodes
> --
>
> Key: OAK-8204
> URL: https://issues.apache.org/jira/browse/OAK-8204
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2
>Reporter: zhouxu
>Priority: Major
>
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>  at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
>  at java.lang.StringBuilder.<init>(StringBuilder.java:101)
>  at org.apache.jackrabbit.oak.commons.PathUtils.concat(PathUtils.java:318)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNodeDoc(DocumentNodeState.java:507)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:256)
>  at 
> org.apache.jackrabbit.oak.plugins.memory.MutableNodeState.setChildNode(MutableNodeState.java:111)
>  at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:343)
>  at 
> org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.setChildNode(AbstractDocumentNodeBuilder.java:56)
>  at 
> org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeAdded(ApplyDiff.java:80)
>  at 
> org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:412)
>  at 
> org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at 
> org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at 
> org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at 
> org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at 
> org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at 
> org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.plugins.document.ModifiedDocumentNodeState.


--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (OAK-6165) Create compound index on _sdType and _sdMaxRevTime (RDB)

2019-09-06 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-6165.
-
Fix Version/s: (was: 1.18.0)
   Resolution: Implemented

> Create compound index on _sdType and _sdMaxRevTime (RDB)
> 
>
> Key: OAK-6165
> URL: https://issues.apache.org/jira/browse/OAK-6165
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: rdbmk
>Reporter: Chetan Mehrotra
>Assignee: Julian Reschke
>Priority: Major
>
> Clone of OAK-6129 for RDB, i.e. create an index on _sdType and 
> _sdMaxRevTime. This is required to run the queries issued by the Revision GC 
> efficiently.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8602) javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded

2019-09-06 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924265#comment-16924265
 ] 

Julian Reschke commented on OAK-8602:
-

Well: java.lang.OutOfMemoryError: GC overhead limit exceeded

What is Oak supposed to do here?

> javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded
> -
>
> Key: OAK-8602
> URL: https://issues.apache.org/jira/browse/OAK-8602
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.14.0
> Environment: I am running the Oak instance on my local Windows machine 
> and the version of Oak is 1.14.0. The DocumentNodeStore is MongoDB and the 
> IndexProvider is SolrIndexProvider with async index.
>Reporter: zhouxu
>Priority: Major
>
> This problem occurs while I am importing a large amount of data into the 
> repository. I use JMeter to import data in 5 threads, and the max loop count 
> of each thread is 20. When the sample count reaches about 2, Oak throws 
> javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded. The 
> complete exception information is as follows:
> javax.jcr.RepositoryException: OakOak0001: GC overhead limit 
> exceeded
>  at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:250)
>  at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:213)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:669)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:495)
>  at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:420)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:273)
>  at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:417)
>  at cn.amberdata.afc.domain.permission.AfACL.grantJcrPermit(AfACL.java:215)
>  at cn.amberdata.afc.domain.permission.AfACL.grant(AfACL.java:182)
>  at cn.amberdata.afc.domain.permission.AfACL.grantPermit(AfACL.java:235)
>  at 
> cn.amberdata.afc.domain.object.AfPersistentObject.setACL(AfPersistentObject.java:381)
>  at 
> cn.amberdata.afc.common.util.ACLUtils.doSetAclToTargetFormSource(ACLUtils.java:62)
>  at 
> cn.amberdata.afc.common.util.ACLUtils.setAclToTargetFormSource(ACLUtils.java:40)
>  at 
> cn.amberdata.common.core.persistence.dao.impl.AsyncTaskDaoImpl.extendsParentAcl(AsyncTaskDaoImpl.java:105)
>  at sun.reflect.GeneratedMethodAccessor167.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>  at java.lang.reflect.Method.invoke(Unknown Source)
>  at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
>  at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
>  at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
>  at 
> org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
>  at java.util.concurrent.FutureTask.run$$$capture(Unknown Source)
>  at java.util.concurrent.FutureTask.run(Unknown Source)
>  at java.lang.Thread.run(Unknown Source)
>  Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakOak0001: 
> GC overhead limit exceeded
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.mergeFailed(DocumentNodeStoreBranch.java:342)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.access$600(DocumentNodeStoreBranch.java:56)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch$InMemory.merge(DocumentNodeStoreBranch.java:554)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:196)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:120)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:170)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1875)
>  at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:251)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:346)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:493)
>  ... 20 more
>  Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>  at java.util.Vector.<init>(Unknown Source)
>  at java.util.Vector.<init>(Unknown Source)
>  at java.util.Vector.<init>(Unknown Source)
>  at java.util.Stack.<init>(Unknown Source)
>  at 

[jira] [Commented] (OAK-8589) NPE in IndexDefintionBuilder with existing property rule without "name" property

2019-09-06 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924230#comment-16924230
 ] 

Thomas Mueller commented on OAK-8589:
-

Looks good to me!

> NPE in IndexDefintionBuilder with existing property rule without "name" 
> property
> 
>
> Key: OAK-8589
> URL: https://issues.apache.org/jira/browse/OAK-8589
> Project: Jackrabbit Oak
>  Issue Type: Improvement
> Environment: Inde
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.18.0
>
>
> {{IndexDefinitionBuilder#findExisting}} throws NPE when 
> {{IndexDefinitionBuilder}} is initialized with an existing index that has a 
> property rule without {{name}} property defined.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8603) Composite Node Store + Counter Index: allow indexing from scratch / reindex

2019-09-06 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-8603:

Attachment: OAK-8603.patch

> Composite Node Store + Counter Index: allow indexing from scratch / reindex
> ---
>
> Key: OAK-8603
> URL: https://issues.apache.org/jira/browse/OAK-8603
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: composite, indexing
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
> Attachments: OAK-8603.patch
>
>
> When using the composite node store with a read-only portion of the 
> repository, the counter index does not allow indexing from scratch / 
> reindexing.
> Indexing from scratch is needed in case the async checkpoint is lost. A 
> reindex is started by setting the "reindex" flag to true.
> Currently the failure is:
> {noformat}
> 05.09.2019 09:29:21.892 *WARN* [async-index-update-async] 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
> update is still failing
> java.lang.UnsupportedOperationException: This builder is read-only.
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.unsupported(ReadOnlyBuilder.java:44)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:189)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:34)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leaveNew(NodeCounterEditor.java:162)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leave(NodeCounterEditor.java:114)
>  [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:73)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:59)
>  [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
>   at 
> 

[jira] [Updated] (OAK-8593) Enable a transient cluster-node to connect as invisible to oak discovery

2019-09-06 Thread Stefan Egli (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8593:
-
Description: 
Currently, if Oak is initialized with read-write access to the DocumentStore 
(Mongo/RDB), e.g. through oak-run, it is made visible to upper layers (e.g. sling 
discovery) via the oak discovery cluster view. This can cause problems in the 
actual cluster, with the differing topology view, the capabilities advertised, 
and in some circumstances the instance being made a leader.

In an offline discussion with [~mreutegg] and [~egli] it was decided that it's 
best to expose a flag {{invisible}} in the clusterNodes collection/table with 
which these instances can connect; these would then be ignored by the oak 
discovery.

  was:
Currently, if Oak is initialized with read-write access to the DocumentStore 
(Mongo/RDB) e.g through oak-run, it is made visible to upper layers (e.g. sling 
recovery) via the oak discovery cluster view. This can cause problems in the 
actual cluster with the differing topology view and the capabilities advertised 
and in some circumstances being made a leader.

In an offline discussion with [~mreutegg] and [~egli] it was decided that its 
best to expose a flag \{{invisible}} in the cluserNodes collection/table with 
which these instances can connect and then these would be ignored by the oak 
discovery.


> Enable a transient cluster-node to connect as invisible to oak discovery
> 
>
> Key: OAK-8593
> URL: https://issues.apache.org/jira/browse/OAK-8593
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Major
>
> Currently, if Oak is initialized with read-write access to the DocumentStore 
> (Mongo/RDB), e.g. through oak-run, it is made visible to upper layers (e.g. 
> sling discovery) via the oak discovery cluster view. This can cause problems 
> in the actual cluster, with the differing topology view, the capabilities 
> advertised, and in some circumstances the instance being made a leader.
> In an offline discussion with [~mreutegg] and [~egli] it was decided that 
> it's best to expose a flag {{invisible}} in the clusterNodes collection/table 
> with which these instances can connect; these would then be ignored by the 
> oak discovery.
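The proposed behaviour can be sketched as follows. `ClusterNodeInfo` and `DiscoveryView` here are hypothetical stand-ins for the real clusterNodes entry and discovery logic, assumed only for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: discovery ignores cluster nodes that registered themselves with
// invisible=true. This is a stand-in model, not Oak's actual classes.
class ClusterNodeInfo {
    final int clusterId;
    final boolean invisible;

    ClusterNodeInfo(int clusterId, boolean invisible) {
        this.clusterId = clusterId;
        this.invisible = invisible;
    }
}

class DiscoveryView {
    // Only visible instances contribute to the topology / cluster view.
    static List<Integer> visibleIds(List<ClusterNodeInfo> nodes) {
        return nodes.stream()
                .filter(n -> !n.invisible) // transient tools opt out here
                .map(n -> n.clusterId)
                .collect(Collectors.toList());
    }
}
```

A transient tool such as oak-run would register with `invisible=true` and so never appear in the view, never be elected leader.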



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8591) Conflict exception on commit

2019-09-06 Thread Stefan Egli (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924133#comment-16924133
 ] 

Stefan Egli commented on OAK-8591:
--

The core fix ({{doc.isVisible -> isVisible}}) makes sense. What I was wondering 
is whether the fix has any implications elsewhere. E.g. {{isValidRevision}} 
also gets used by {{getLatestValue}}, which in turn is used by 
{{getNodeAtRevision}}: I'm wondering if this fix also changes (fixes) the 
behaviour of {{getNodeAtRevision}}?

> Conflict exception on commit
> 
>
> Key: OAK-8591
> URL: https://issues.apache.org/jira/browse/OAK-8591
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
>
> Under some conditions it may happen that a commit to the DocumentStore fails 
> with a ConflictException even though there is no conflicting change. This 
> includes regular commits as well as branch commits.
> The actual problem is with NodeDocument.getNewestRevision(). This method is 
> used in Commit.checkConflicts() to find out when the most recent change 
> happened on a node and find conflicts. The returned value is incorrect and 
> may e.g. return null even though the node exists. This will then trigger the 
> ConflictException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8603) Composite Node Store + Counter Index: allow indexing from scratch / reindex

2019-09-06 Thread Thomas Mueller (Jira)
Thomas Mueller created OAK-8603:
---

 Summary: Composite Node Store + Counter Index: allow indexing from 
scratch / reindex
 Key: OAK-8603
 URL: https://issues.apache.org/jira/browse/OAK-8603
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: composite, indexing
Reporter: Thomas Mueller
Assignee: Thomas Mueller


When using the composite node store with a read-only portion of the repository, 
the counter index does not allow indexing from scratch / reindexing.

Indexing from scratch is needed in case the async checkpoint is lost. A reindex 
is started by setting the "reindex" flag to true.

Currently the failure is:

{noformat}
05.09.2019 09:29:21.892 *WARN* [async-index-update-async] 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate [async] The index 
update is still failing
java.lang.UnsupportedOperationException: This builder is read-only.
at 
org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.unsupported(ReadOnlyBuilder.java:44)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:189)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.state.ReadOnlyBuilder.child(ReadOnlyBuilder.java:34)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.getBuilder(NodeCounterEditor.java:184)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leaveNew(NodeCounterEditor.java:162)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.index.counter.NodeCounterEditor.leave(NodeCounterEditor.java:114)
 [org.apache.jackrabbit.oak-core:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:73)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:59) 
[org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:129)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 
org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
 [org.apache.jackrabbit.oak-store-spi:1.16.0.R1866113]
at 

[jira] [Created] (OAK-8602) javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded

2019-09-06 Thread zhouxu (Jira)
zhouxu created OAK-8602:
---

 Summary: javax.jcr.RepositoryException: OakOak0001: GC overhead 
limit exceeded
 Key: OAK-8602
 URL: https://issues.apache.org/jira/browse/OAK-8602
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.14.0
 Environment: I am running the Oak instance on my local Windows machine 
and the version of Oak is 1.14.0. The DocumentNodeStore is MongoDB and the 
IndexProvider is SolrIndexProvider with async index.
Reporter: zhouxu


This problem occurs while I am importing a large amount of data into the 
repository. I use JMeter to import data in 5 threads, and the max loop count of 
each thread is 20. When the sample count reaches about 2, Oak throws 
javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded. The 
complete exception information is as follows:

javax.jcr.RepositoryException: OakOak0001: GC overhead limit 
exceeded
 at 
org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:250)
 at 
org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:213)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:669)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:495)
 at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:420)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:273)
 at org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:417)
 at cn.amberdata.afc.domain.permission.AfACL.grantJcrPermit(AfACL.java:215)
 at cn.amberdata.afc.domain.permission.AfACL.grant(AfACL.java:182)
 at cn.amberdata.afc.domain.permission.AfACL.grantPermit(AfACL.java:235)
 at 
cn.amberdata.afc.domain.object.AfPersistentObject.setACL(AfPersistentObject.java:381)
 at 
cn.amberdata.afc.common.util.ACLUtils.doSetAclToTargetFormSource(ACLUtils.java:62)
 at 
cn.amberdata.afc.common.util.ACLUtils.setAclToTargetFormSource(ACLUtils.java:40)
 at 
cn.amberdata.common.core.persistence.dao.impl.AsyncTaskDaoImpl.extendsParentAcl(AsyncTaskDaoImpl.java:105)
 at sun.reflect.GeneratedMethodAccessor167.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
 at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
 at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
 at 
org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
 at java.util.concurrent.FutureTask.run$$$capture(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakOak0001: GC 
overhead limit exceeded
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.mergeFailed(DocumentNodeStoreBranch.java:342)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.access$600(DocumentNodeStoreBranch.java:56)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch$InMemory.merge(DocumentNodeStoreBranch.java:554)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:196)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:120)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:170)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1875)
 at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:251)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:346)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:493)
 ... 20 more
 Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
 at java.util.Vector.<init>(Unknown Source)
 at java.util.Vector.<init>(Unknown Source)
 at java.util.Vector.<init>(Unknown Source)
 at java.util.Stack.<init>(Unknown Source)
 at org.bson.AbstractBsonWriter.<init>(AbstractBsonWriter.java:38)
 at org.bson.AbstractBsonWriter.<init>(AbstractBsonWriter.java:50)
 at org.bson.BsonDocumentWriter.<init>(BsonDocumentWriter.java:44)
 at org.bson.BsonDocumentWrapper.getUnwrapped(BsonDocumentWrapper.java:194)
 at org.bson.BsonDocumentWrapper.isEmpty(BsonDocumentWrapper.java:115)
 at 
com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(BulkWriteBatch.java:395)
 at 
com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(BulkWriteBatch.java:377)
 at