[jira] [Created] (OAK-8785) the mailing list website is inaccessible

2019-11-24 Thread zhouxu (Jira)
zhouxu created OAK-8785:
---

 Summary: the mailing list website is inaccessible
 Key: OAK-8785
 URL: https://issues.apache.org/jira/browse/OAK-8785
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: zhouxu


This website is inaccessible:

[http://jackrabbit.510166.n4.nabble.com/Jackrabbit-Dev-f523400.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OAK-8784) Can Oak update attribute values in batches with an UPDATE statement?

2019-11-24 Thread zhouxu (Jira)
zhouxu created OAK-8784:
---

 Summary: Can Oak update attribute values in batches with an UPDATE 
statement? 
 Key: OAK-8784
 URL: https://issues.apache.org/jira/browse/OAK-8784
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: 1.14
Reporter: zhouxu
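For what it is worth, JCR-SQL2 in Oak is a query language only; there is no UPDATE statement. Bulk property updates are typically done by querying the affected nodes and committing the changes in batches via session.save(). A minimal sketch follows; the statement, property name, value, and batch size are illustrative assumptions, not taken from this issue:

```java
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

// Sketch only: JCR-SQL2 has no UPDATE, so bulk updates go through the
// node API, committed in batches with session.save().
public class BulkPropertyUpdate {
    public static void updateInBatches(Session session, String sql2, int batchSize) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        NodeIterator nodes = qm.createQuery(sql2, Query.JCR_SQL2).execute().getNodes();
        int pending = 0;
        while (nodes.hasNext()) {
            Node node = nodes.nextNode();
            node.setProperty("da_string", "new-value"); // hypothetical property and value
            if (++pending == batchSize) {
                session.save();                         // commit this batch
                pending = 0;
            }
        }
        if (pending > 0) {
            session.save();                             // commit the remainder
        }
    }
}
```

Committing every few hundred or thousand nodes keeps the transient space, and therefore memory use, bounded during large updates.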








[jira] [Updated] (OAK-8770) Can't get FacetResult

2019-11-17 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8770:

Description: 
Hello experts,

    I cannot get a FacetResult with Solr; my code is below. Debugging
org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndex, I can see
that the facet result is present in the Solr response
(queryResponse.getFacetFields()). This question has puzzled me for many days.

    Can anyone tell me how the Solr facet data is supposed to end up in the
FacetResult?

    1. Code that reads the facet result:

    String sql = "select [jcr:path], [jcr:primaryType], [rep:facet(jcr:primaryType)], "
        + "id, da_string, da_long, da_date, da_boolean, da_double "
        + "from da_document where contains([jcr:primaryType], 'da_document') "
        + "order by da_long asc";

    Query q = qm.createQuery(sql, Query.JCR_SQL2);
    QueryResult result = q.execute();
    FacetResult facetResult = new FacetResult(result);
    Set<String> dimensions = facetResult.getDimensions(); // { "jcr:primaryType" }
    List<FacetResult.Facet> facets = facetResult.getFacets("jcr:primaryType"); // dimension matches the rep:facet column
    for (FacetResult.Facet facet : facets) {
        String label = facet.getLabel();
        int count = facet.getCount();
    }

    2. SolrQueryIndex code:

    List<FacetField> returnedFieldFacet = queryResponse.getFacetFields();



> Can't get FacetResult
> 
>
> Key: OAK-8770
> URL: https://issues.apache.org/jira/browse/OAK-8770
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: 1.14
>Affects Versions: 1.14.0
>Reporter: zhouxu
>Priority: Blocker
>





[jira] [Updated] (OAK-8770) Can't get FacetResult

2019-11-17 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8770:

Description: 
Hello experts,

    I cannot get a FacetResult with Solr; my code is below. Debugging
org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndex, I can see
that the facet result is present in the Solr response
(queryResponse.getFacetFields()).

This question has puzzled me for many days.

    1. Code that reads the facet result:

    String sql = "select [jcr:path], [jcr:primaryType], [rep:facet(jcr:primaryType)], "
        + "id, da_string, da_long, da_date, da_boolean, da_double "
        + "from da_document where contains([jcr:primaryType], 'da_document') "
        + "order by da_long asc";

    Query q = qm.createQuery(sql, Query.JCR_SQL2);
    QueryResult result = q.execute();
    FacetResult facetResult = new FacetResult(result);
    Set<String> dimensions = facetResult.getDimensions(); // { "jcr:primaryType" }
    List<FacetResult.Facet> facets = facetResult.getFacets("jcr:primaryType"); // dimension matches the rep:facet column
    for (FacetResult.Facet facet : facets) {
        String label = facet.getLabel();
        int count = facet.getCount();
    }

2. SolrQueryIndex code:
    List<FacetField> returnedFieldFacet = queryResponse.getFacetFields();



> Can't get FacetResult
> 
>
> Key: OAK-8770
> URL: https://issues.apache.org/jira/browse/OAK-8770
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: 1.14
>Affects Versions: 1.14.0
>Reporter: zhouxu
>Priority: Major
>





[jira] [Updated] (OAK-8770) Can't get FacetResult

2019-11-17 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8770:

Priority: Blocker  (was: Major)

> Can't get FacetResult
> 
>
> Key: OAK-8770
> URL: https://issues.apache.org/jira/browse/OAK-8770
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: 1.14
>Affects Versions: 1.14.0
>Reporter: zhouxu
>Priority: Blocker
>
> Hello experts,
>     I cannot get a FacetResult with Solr; my code is below. Debugging
> org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndex, I can see
> that the facet result is present in the Solr response
> (queryResponse.getFacetFields()).
> This question has puzzled me for many days.
>     1. Code that reads the facet result:
>     String sql = "select [jcr:path], [jcr:primaryType], [rep:facet(jcr:primaryType)], "
>         + "id, da_string, da_long, da_date, da_boolean, da_double "
>         + "from da_document where contains([jcr:primaryType], 'da_document') "
>         + "order by da_long asc";
>     Query q = qm.createQuery(sql, Query.JCR_SQL2);
>     QueryResult result = q.execute();
>     FacetResult facetResult = new FacetResult(result);
>     Set<String> dimensions = facetResult.getDimensions(); // { "jcr:primaryType" }
>     List<FacetResult.Facet> facets = facetResult.getFacets("jcr:primaryType");
>     for (FacetResult.Facet facet : facets) {
>        String label = facet.getLabel();
>        int count = facet.getCount();
>    }
> 2. SolrQueryIndex code:
>    List<FacetField> returnedFieldFacet = queryResponse.getFacetFields();





[jira] [Updated] (OAK-8770) Can't get FacetResult

2019-11-17 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8770:

Description: 
Hello experts,

    I cannot get a FacetResult with Solr; my code is below. Debugging
org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndex, I can see
that the facet result is present in the Solr response
(queryResponse.getFacetFields()). This question has puzzled me for many days.

    1. Code that reads the facet result:

    String sql = "select [jcr:path], [jcr:primaryType], [rep:facet(jcr:primaryType)], "
        + "id, da_string, da_long, da_date, da_boolean, da_double "
        + "from da_document where contains([jcr:primaryType], 'da_document') "
        + "order by da_long asc";

    Query q = qm.createQuery(sql, Query.JCR_SQL2);
    QueryResult result = q.execute();
    FacetResult facetResult = new FacetResult(result);
    Set<String> dimensions = facetResult.getDimensions(); // { "jcr:primaryType" }
    List<FacetResult.Facet> facets = facetResult.getFacets("jcr:primaryType"); // dimension matches the rep:facet column
    for (FacetResult.Facet facet : facets) {
        String label = facet.getLabel();
        int count = facet.getCount();
    }

2. SolrQueryIndex code:
    List<FacetField> returnedFieldFacet = queryResponse.getFacetFields();



> Can't get FacetResult
> 
>
> Key: OAK-8770
> URL: https://issues.apache.org/jira/browse/OAK-8770
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: 1.14
>Affects Versions: 1.14.0
>Reporter: zhouxu
>Priority: Blocker
>





[jira] [Created] (OAK-8770) Can't get FacetResult

2019-11-17 Thread zhouxu (Jira)
zhouxu created OAK-8770:
---

 Summary: Can't get FacetResult
 Key: OAK-8770
 URL: https://issues.apache.org/jira/browse/OAK-8770
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: 1.14
Affects Versions: 1.14.0
Reporter: zhouxu


Hello experts,

I cannot get a FacetResult with Solr; my code is below. Debugging
org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndex, I can see
that the facet result is present in the Solr response
(queryResponse.getFacetFields()).

1. Code that reads the facet result:

String sql = "select [jcr:path], [jcr:primaryType], [rep:facet(jcr:primaryType)], "
    + "id, da_string, da_long, da_date, da_boolean, da_double "
    + "from da_document where contains([jcr:primaryType], 'da_document') "
    + "order by da_long asc";

Query q = qm.createQuery(sql, Query.JCR_SQL2);
QueryResult result = q.execute();
FacetResult facetResult = new FacetResult(result);
Set<String> dimensions = facetResult.getDimensions(); // { "jcr:primaryType" }
List<FacetResult.Facet> facets = facetResult.getFacets("jcr:primaryType"); // dimension matches the rep:facet column
for (FacetResult.Facet facet : facets) {
    String label = facet.getLabel();
    int count = facet.getCount();
}

2. SolrQueryIndex code:

List<FacetField> returnedFieldFacet = queryResponse.getFacetFields();





[jira] [Created] (OAK-8757) oak with solr index query problem

2019-11-11 Thread zhouxu (Jira)
zhouxu created OAK-8757:
---

 Summary: oak with solr index query problem
 Key: OAK-8757
 URL: https://issues.apache.org/jira/browse/OAK-8757
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: zhouxu


Hello experts,

   We use Solr as the query engine. When a WHERE clause is present, the
JCR-SQL2 statement is converted to Solr syntax incorrectly: Oak property names
are not mapped to the fields actually stored in Solr. For example, the Oak
property da_title is not converted to the Solr field da_title_string_sort for
the query.
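To illustrate the expected behavior, the conversion should rewrite Oak property names to the field names actually stored in Solr. The sketch below is purely illustrative: the mapping mimics the reporter's schema and is not taken from Oak's SolrQueryIndex.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the property-name rewriting the reporter expects;
// the mapping entries are assumptions based on this report, not Oak code.
public class SolrFieldMapper {
    private final Map<String, String> oakToSolr = new HashMap<>();

    public SolrFieldMapper() {
        // Example from the report: Oak property -> Solr stored field.
        oakToSolr.put("da_title", "da_title_string_sort");
    }

    /** Returns the Solr field for an Oak property, or the property name itself if unmapped. */
    public String toSolrField(String oakProperty) {
        return oakToSolr.getOrDefault(oakProperty, oakProperty);
    }
}
```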

 





[jira] [Updated] (OAK-8732) Does oak have a performance test report? How about performance in the case of billions of nodes

2019-11-11 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8732:

Priority: Blocker  (was: Major)

> Does oak have a performance test report? How about performance in the case of 
> billions of  nodes
> 
>
> Key: OAK-8732
> URL: https://issues.apache.org/jira/browse/OAK-8732
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: 1.14
>Reporter: zhouxu
>Priority: Blocker
>






[jira] [Updated] (OAK-8731) Can oak support big data? Billions of nodes

2019-11-11 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8731:

Priority: Blocker  (was: Major)

> Can oak support big data? Billions of nodes
> ---
>
> Key: OAK-8731
> URL: https://issues.apache.org/jira/browse/OAK-8731
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: 1.14
>Reporter: zhouxu
>Priority: Blocker
>






[jira] [Created] (OAK-8753) OakMerge0004: Two cluster nodes import data concurrently

2019-11-08 Thread zhouxu (Jira)
zhouxu created OAK-8753:
---

 Summary: OakMerge0004: Two cluster nodes import data concurrently
 Key: OAK-8753
 URL: https://issues.apache.org/jira/browse/OAK-8753
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: zhouxu


Hello experts,

  To speed up node import, I started two cluster nodes backed by the same
MongoDB. One of them throws an exception like this:

Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0004: 
OakMerge0004: Following exceptions occurred during the ... ConflictException: The 
node 
4:/jcr:system/rep:permissionStore/default/a9f591bf-45ca-48ce-afce-cf43dacedd3a 
was changed in revision ...

Can someone help us? Thanks.
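When two cluster nodes write to overlapping paths (here both touch entries under /jcr:system/rep:permissionStore), concurrent merges can legitimately fail with a conflict, and the usual remedy is to refresh the session and retry the commit. Below is a generic retry sketch with illustrative names, not Oak API; with Oak/JCR you would catch the CommitFailedException-based exception around session.save() and call session.refresh(false) before retrying.

```java
import java.util.concurrent.Callable;

// Generic retry-on-conflict helper; names are illustrative, not Oak API.
// With Oak/JCR: op ~ session.save(), refresh ~ session.refresh(false).
public class RetryOnConflict {
    public static <T> T withRetry(Callable<T> op, Runnable refresh, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {  // with Oak: catch the commit/merge conflict exception
                last = e;
                refresh.run();       // pick up the other cluster node's changes, then retry
            }
        }
        throw last;                  // every attempt conflicted
    }
}
```

Partitioning the import so that each cluster node writes under its own subtree also reduces the chance of conflicts in the first place.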





[jira] [Created] (OAK-8732) Does oak have a performance test report? How about performance in the case of billions of nodes

2019-10-31 Thread zhouxu (Jira)
zhouxu created OAK-8732:
---

 Summary: Does oak have a performance test report? How about 
performance in the case of billions of  nodes
 Key: OAK-8732
 URL: https://issues.apache.org/jira/browse/OAK-8732
 Project: Jackrabbit Oak
  Issue Type: Test
  Components: 1.14
Reporter: zhouxu








[jira] [Created] (OAK-8731) Can oak support big data? Billions of nodes

2019-10-31 Thread zhouxu (Jira)
zhouxu created OAK-8731:
---

 Summary: Can oak support big data? Billions of nodes
 Key: OAK-8731
 URL: https://issues.apache.org/jira/browse/OAK-8731
 Project: Jackrabbit Oak
  Issue Type: Wish
  Components: 1.14
Reporter: zhouxu








[jira] [Updated] (OAK-8602) javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded

2019-09-09 Thread zhouxu (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8602:

Attachment: 5.jpg
4.jpg
3.jpg
2.jpg
1.jpg

> javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded
> -
>
> Key: OAK-8602
> URL: https://issues.apache.org/jira/browse/OAK-8602
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.14.0
> Environment: I am running the Oak instance on my local Windows machine 
> and the version of Oak is 1.14.0. The DocumentNodeStore uses MongoDB and the 
> IndexProvider is SolrIndexProvider with an async index.
>Reporter: zhouxu
>Priority: Major
> Attachments: 1.jpg, 2.jpg, 3.jpg, 4.jpg, 5.jpg
>
>
> This problem occurs while I am importing a large amount of data into the 
> repository. I use JMeter to import data in 5 threads, and the max loop count 
> of each thread is 20. When the sample count reaches about 2, Oak throws 
> javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded. The 
> complete exception information is as follows:
> javax.jcr.RepositoryException: OakOak0001: GC overhead limit 
> exceeded
>  at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:250)
>  at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:213)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:669)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:495)
>  at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:420)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:273)
>  at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:417)
>  at cn.amberdata.afc.domain.permission.AfACL.grantJcrPermit(AfACL.java:215)
>  at cn.amberdata.afc.domain.permission.AfACL.grant(AfACL.java:182)
>  at cn.amberdata.afc.domain.permission.AfACL.grantPermit(AfACL.java:235)
>  at 
> cn.amberdata.afc.domain.object.AfPersistentObject.setACL(AfPersistentObject.java:381)
>  at 
> cn.amberdata.afc.common.util.ACLUtils.doSetAclToTargetFormSource(ACLUtils.java:62)
>  at 
> cn.amberdata.afc.common.util.ACLUtils.setAclToTargetFormSource(ACLUtils.java:40)
>  at 
> cn.amberdata.common.core.persistence.dao.impl.AsyncTaskDaoImpl.extendsParentAcl(AsyncTaskDaoImpl.java:105)
>  at sun.reflect.GeneratedMethodAccessor167.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>  at java.lang.reflect.Method.invoke(Unknown Source)
>  at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
>  at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
>  at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
>  at 
> org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
>  at java.util.concurrent.FutureTask.run$$$capture(Unknown Source)
>  at java.util.concurrent.FutureTask.run(Unknown Source)
>  at java.lang.Thread.run(Unknown Source)
>  Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakOak0001: 
> GC overhead limit exceeded
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.mergeFailed(DocumentNodeStoreBranch.java:342)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.access$600(DocumentNodeStoreBranch.java:56)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch$InMemory.merge(DocumentNodeStoreBranch.java:554)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:196)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:120)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:170)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1875)
>  at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:251)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:346)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:493)
>  ... 20 more
>  Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>  at java.util.Vector.<init>(Unknown Source)
>  at java.util.Vector.<init>(Unknown Source)
>  at java.util.Vector.<init>(Unknown Source)
>  at java.util.Stack.<init>(Unknown Source

[jira] [Commented] (OAK-8602) javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded

2019-09-09 Thread zhouxu (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925488#comment-16925488
 ] 

zhouxu commented on OAK-8602:
-

After this problem occurred, I tried to use Plumbr to collect the JVM dump 
information. To reproduce it quickly, I set the JVM parameters to 
JAVA_OPTS=-Xms128m -Xmx512m; the same problem occurred again.

I also found some answers on the Internet, but they did not help.

Actually, the way we use Oak may be questionable.

In this case, I call a backend API to save nodes. The Oak repository is created 
when the web application starts up and is shut down when the web application 
context is destroyed.

Each request creates one node, acquiring a session at the start of the request 
and releasing it at the end, so every node created costs one session 
acquire/release cycle. Maybe that is incorrect.

When I run a JUnit test locally, I open the repository in setUp() and close it 
in tearDown(). There I create hundreds of thousands of nodes in one session, 
and it works well. [~reschke]
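The per-request session pattern described above can be contrasted with batched saves. As a sketch (pure Java with illustrative names; with JCR, writer would add or modify nodes and flush would be session.save()), committing every batchSize items keeps the unsaved transient state small without paying a full commit per node:

```java
import java.util.List;
import java.util.function.Consumer;

// Sketch of a batched bulk import; names are illustrative, not Oak API.
// With JCR/Oak, flush ~ session.save(): periodic saves bound the transient
// space, while one save per node maximizes merge overhead.
public class BatchedImport {
    /** Applies writer to every item, flushing every batchSize items; returns the flush count. */
    public static <T> int importAll(List<T> items, Consumer<T> writer, Runnable flush, int batchSize) {
        int pending = 0, flushes = 0;
        for (T item : items) {
            writer.accept(item);     // with JCR: parent.addNode(...), node.setProperty(...)
            if (++pending == batchSize) {
                flush.run();         // with JCR: session.save()
                pending = 0;
                flushes++;
            }
        }
        if (pending > 0) {           // commit whatever is left over
            flush.run();
            flushes++;
        }
        return flushes;
    }
}
```

In the JMeter scenario above, saving every few hundred nodes per thread would bound the transient state that otherwise accumulates in memory between saves.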

> javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded
> -
>
> Key: OAK-8602
> URL: https://issues.apache.org/jira/browse/OAK-8602
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.14.0
> Environment: I am running the Oak instance on my local Windows machine 
> and the version of Oak is 1.14.0. The DocumentNodeStore uses MongoDB and the 
> IndexProvider is SolrIndexProvider with an async index.
>Reporter: zhouxu
>Priority: Major
>
> This problem occurs while I am importing a large amount of data into the 
> repository. I use JMeter to import data in 5 threads, and the max loop count 
> of each thread is 20. When the sample count reaches about 2, Oak throws 
> javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded. The 
> complete exception information is as follows:
> javax.jcr.RepositoryException: OakOak0001: GC overhead limit 
> exceeded
>  at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:250)
>  at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:213)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:669)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:495)
>  at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:420)
>  at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:273)
>  at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:417)
>  at cn.amberdata.afc.domain.permission.AfACL.grantJcrPermit(AfACL.java:215)
>  at cn.amberdata.afc.domain.permission.AfACL.grant(AfACL.java:182)
>  at cn.amberdata.afc.domain.permission.AfACL.grantPermit(AfACL.java:235)
>  at 
> cn.amberdata.afc.domain.object.AfPersistentObject.setACL(AfPersistentObject.java:381)
>  at 
> cn.amberdata.afc.common.util.ACLUtils.doSetAclToTargetFormSource(ACLUtils.java:62)
>  at 
> cn.amberdata.afc.common.util.ACLUtils.setAclToTargetFormSource(ACLUtils.java:40)
>  at 
> cn.amberdata.common.core.persistence.dao.impl.AsyncTaskDaoImpl.extendsParentAcl(AsyncTaskDaoImpl.java:105)
>  at sun.reflect.GeneratedMethodAccessor167.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>  at java.lang.reflect.Method.invoke(Unknown Source)
>  at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
>  at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
>  at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
>  at 
> org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
>  at java.util.concurrent.FutureTask.run$$$capture(Unknown Source)
>  at java.util.concurrent.FutureTask.run(Unknown Source)
>  at java.lang.Thread.run(Unknown Source)
>  Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakOak0001: 
> GC overhead limit exceeded
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.mergeFailed(DocumentNodeStoreBranch.java:342)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.access$600(DocumentNodeStoreBranch.java:56)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch$InMemory.merge(DocumentNodeStoreBranch.java:554)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:196)
> at 

[jira] [Created] (OAK-8602) javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded

2019-09-05 Thread zhouxu (Jira)
zhouxu created OAK-8602:
---

 Summary: javax.jcr.RepositoryException: OakOak0001: GC overhead 
limit exceeded
 Key: OAK-8602
 URL: https://issues.apache.org/jira/browse/OAK-8602
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.14.0
 Environment: I am running the Oak instance on my local Windows machine 
and the version of Oak is 1.14.0. The DocumentNodeStore uses MongoDB and the 
IndexProvider is SolrIndexProvider with an async index.
Reporter: zhouxu


This problem occurs while I am importing a large amount of data into the 
repository. I use JMeter to import data in 5 threads, and the max loop count of 
each thread is 20. When the sample count reaches about 2, Oak throws 
javax.jcr.RepositoryException: OakOak0001: GC overhead limit exceeded. The 
complete exception information is as follows:

javax.jcr.RepositoryException: OakOak0001: GC overhead limit 
exceeded
 at 
org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:250)
 at 
org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:213)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:669)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:495)
 at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:420)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:273)
 at org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:417)
 at cn.amberdata.afc.domain.permission.AfACL.grantJcrPermit(AfACL.java:215)
 at cn.amberdata.afc.domain.permission.AfACL.grant(AfACL.java:182)
 at cn.amberdata.afc.domain.permission.AfACL.grantPermit(AfACL.java:235)
 at 
cn.amberdata.afc.domain.object.AfPersistentObject.setACL(AfPersistentObject.java:381)
 at 
cn.amberdata.afc.common.util.ACLUtils.doSetAclToTargetFormSource(ACLUtils.java:62)
 at 
cn.amberdata.afc.common.util.ACLUtils.setAclToTargetFormSource(ACLUtils.java:40)
 at 
cn.amberdata.common.core.persistence.dao.impl.AsyncTaskDaoImpl.extendsParentAcl(AsyncTaskDaoImpl.java:105)
 at sun.reflect.GeneratedMethodAccessor167.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
 at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
 at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
 at 
org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
 at java.util.concurrent.FutureTask.run$$$capture(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakOak0001: GC 
overhead limit exceeded
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.mergeFailed(DocumentNodeStoreBranch.java:342)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.access$600(DocumentNodeStoreBranch.java:56)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch$InMemory.merge(DocumentNodeStoreBranch.java:554)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:196)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:120)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:170)
 at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1875)
 at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:251)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:346)
 at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:493)
 ... 20 more
 Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
 at java.util.Vector.<init>(Unknown Source)
 at java.util.Vector.<init>(Unknown Source)
 at java.util.Vector.<init>(Unknown Source)
 at java.util.Stack.<init>(Unknown Source)
 at org.bson.AbstractBsonWriter.<init>(AbstractBsonWriter.java:38)
 at org.bson.AbstractBsonWriter.<init>(AbstractBsonWriter.java:50)
 at org.bson.BsonDocumentWriter.<init>(BsonDocumentWriter.java:44)
 at org.bson.BsonDocumentWrapper.getUnwrapped(BsonDocumentWrapper.java:194)
 at org.bson.BsonDocumentWrapper.isEmpty(BsonDocumentWrapper.java:115)
 at 
com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(BulkWriteBatch.java:395)
 at 
com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(BulkWriteBatch.java:377)
 at 
org.bso

[jira] [Created] (OAK-8553) paging query with solr very slowly

2019-08-18 Thread zhouxu (JIRA)
zhouxu created OAK-8553:
---

 Summary: paging query with solr very slowly
 Key: OAK-8553
 URL: https://issues.apache.org/jira/browse/OAK-8553
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-search
Affects Versions: 1.14.0
Reporter: zhouxu


We create nodes synchronously so that Solr indexes them, but paging queries in a 
directory with 150,000 nodes are particularly slow. The SQL looks like this:
String sql = "select [jcr:path], 
[jcr:primaryType],[rep:facet(jcr:primaryType)], [jcr:created], [jcr:createdBy], 
 s_object_name, da_title, da_undeclare_record_num,  da_record_num, da_officer, 
s_modify_date,  s_md5, s_content_size,da_record_manager from [oak:Unstructured] 
where contains([jcr:primaryType],'oak:Unstructured OR da_document OR da_record 
OR da_dept_folder OR da_folder') and ISCHILDNODE([/test])";

Our test results are as follows (there are currently 180,000 nodes in the 
directory):

1. Paging query for rows 0-50, run five times: 637 ms, 63 ms, 91 ms, 783 ms, 
28 ms. This speed is acceptable.

2. Paging query for rows 50,000-50,050: 179,000 ms, 162,836 ms, 161,049 ms, 
164,220 ms. This is very slow; Oak seems to fetch the first 50,000 records 
iteratively.

Is there any way to solve this performance problem? Would it be OK to use 
Solr's paging query syntax directly?
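One JCR-side option, if the test is not already doing this, is to set the offset and limit on the query object itself rather than skipping rows in the result iterator. This is a sketch against the JCR 2.0 API; whether Oak can push the offset down to Solr depends on the index configuration:

```java
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class PagedQuery {
    // Sketch: fetch one page of results using JCR's offset/limit
    // instead of iterating past the preceding rows client-side.
    static QueryResult fetchPage(Session session, String sql,
                                 long page, long pageSize) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(sql, Query.JCR_SQL2);
        q.setOffset(page * pageSize); // index of the first row to return
        q.setLimit(pageSize);         // rows per page
        return q.execute();
    }
}
```

Even with an offset, deep pages can stay expensive if the query engine has to materialize the preceding rows; Solr itself offers cursor-based paging for that case.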



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (OAK-8409) oak integration Solr without osgi

2019-06-18 Thread zhouxu (JIRA)
zhouxu created OAK-8409:
---

 Summary: oak integration Solr without osgi
 Key: OAK-8409
 URL: https://issues.apache.org/jira/browse/OAK-8409
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: 1.14
Reporter: zhouxu


1. We try to create the repository with the code below [1]. 
2. We deployed a remote Solr server and put the files (schema.xml, 
solrconfig.xml) from the oak-solr-core artifact into the Solr server. 
3. We create an oak:QueryIndexDefinition node like this: 
 Node index = root.getNode("oak:index"); 
 Node solr = index.addNode("solr", "oak:QueryIndexDefinition"); 
 solr.setProperty("type", "solr"); 
 solr.setProperty("async", "async"); 
 solr.setProperty("reindex", true); 

Problem: if I commit new nodes, they are not sent to the Solr server for 
indexing. 

create repository code [1]: 

RemoteSolrServerProvider remoteSolrServerProvider = null;
String solrURL = "http://localhost:8983/solr/oak";
SolrServerConfiguration remoteSolrServerConfiguration =
        new RemoteSolrServerConfiguration(null, null, 1, 1, null, 10, 10, solrURL);
try {
    remoteSolrServerProvider = remoteSolrServerConfiguration.getProvider();
} catch (IllegalAccessException | InvocationTargetException | InstantiationException e) {
    e.printStackTrace();
}

OakSolrConfigurationProvider configurationProvider = new OakSolrConfigurationProvider() {
    @Override
    public OakSolrConfiguration getConfiguration() {
        return new DefaultSolrConfiguration() {
            @Override
            public int getRows() {
                return 50;
            }
        };
    }
};

SolrQueryIndexProvider solrQueryIndexProvider =
        new SolrQueryIndexProvider(remoteSolrServerProvider, configurationProvider);

if (repository == null) {
    repository = new Jcr(new Oak(documentNodeStore))
            .with(solrQueryIndexProvider)
            .with(new NodeStateSolrServersObserver())
            .with(new SolrIndexEditorProvider(remoteSolrServerProvider, configurationProvider))
            .with(new SolrIndexInitializer(false))
            .createRepository();
}
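One possible cause, sketched under the assumption that nothing else in the non-OSGi setup runs the asynchronous indexer: the index definition is marked async=async, so no content reaches Solr unless an async index update is actually scheduled. In an embedded setup this can be enabled on the Oak instance (withAsyncIndexing(String, long) is available in recent Oak versions; the lane name and interval below are illustrative, and the other variables are the ones from the snippet above):

```java
// Sketch: enable the async index update loop when embedding Oak outside OSGi.
// "async" is the indexing lane name matching the index definition's async
// property; 5 is the update interval in seconds (an illustrative value).
Repository repository = new Jcr(new Oak(documentNodeStore)
        .withAsyncIndexing("async", 5))
        .with(solrQueryIndexProvider)
        .with(new NodeStateSolrServersObserver())
        .with(new SolrIndexEditorProvider(remoteSolrServerProvider, configurationProvider))
        .with(new SolrIndexInitializer(false))
        .createRepository();
```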




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8221) Failure to do anything, throw CommitFailedException: OakMerge0004

2019-05-19 Thread zhouxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu resolved OAK-8221.
-
Resolution: Fixed

> Failure to do anything, throw CommitFailedException: OakMerge0004
> -
>
> Key: OAK-8221
> URL: https://issues.apache.org/jira/browse/OAK-8221
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: zhouxu
>Priority: Major
>
> Hello expert:
>   1. To use Oak in my project simply add a dependency to 
> {{org.apache.jackrabbit:oak-jcr:1.10.2}} and to {{javax.jcr:jcr:2.0。}}
>   2. we construct a Repository instance,use mongodb like this:
>      MongoClient  
> mongoClient=getMongoClient(mongodbIP,mongodbPort,dbName,userName,password);
>      DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
>      .setMongoDB(mongoClient, dbName, 0)
>      .setBlobStore(new FileBlobStore("D:\\amberdata\\FileStore"))
>       .build();
>     Repository repository = new Jcr(new 
> Oak(documentNodeStore)).createRepository();
>  3.Every developer uses the oak of a local application to connect to the same 
> mongodb,
> A few days later,We failed to unregister node type dw_unit_detail which we 
> registered,it takes a long time,and throw CommitFailedException: 
> OakMerge0004,like this:
>     javax.jcr.InvalidItemStateException: Failed to unregister node type 
> dw_unit_detail
> at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:240)
>  at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.unregisterNodeType(ReadWriteNodeTypeManager.java:186)
>  at com.datamber.afc.domain.type.AfType.destroy(AfType.java:397)
>  at com.datamber.afc.domain.type.AfTypeTest.destroy(AfTypeTest.java:345)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>  at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>  at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0004: 
> OakMerge0004: Following exceptions occurred during the bulk update 
> operations: [org.apache.jackrabbit.oak.plugins.document.ConflictException: 
> The node 3:/jcr:system/jcr:nodeTypes/dw_customize_type was changed in revision
> r16a0699ff58-0-3 (not yet visible), which was applied after the base revision
> r16a0b08f9b6-0-1,r16a067a0c76-0-2,r16a0699faf1-0-3,r16a072d9ee8-0-4,r16a069a45a1-0-5,r16a06874e87-0-6,r16a069be50a-0-7,r16a073313ba-0-8,r16a073e63ae-0-9,r16a0769307d-0-a,r16a0ae97b21-0-b,r16a0a0b1912-0-c,r16a0a0fa4e7-0-d,r16a0a4bbc67-0-e,r16a0a4438fc-0-f,r16a0ae19945-0-10,
>  before
> r16a0b099334-0-1] (retries 5, 303136 ms)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:218)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:127)
>  at 
> org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:170)
>  at 
> org.apache.jackrabbit.

[jira] [Commented] (OAK-8221) Failure to do anything, throw CommitFailedException: OakMerge0004

2019-05-13 Thread zhouxu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16838399#comment-16838399
 ] 

zhouxu commented on OAK-8221:
-

Again, we have this problem. Can you tell me how to use 
DocumentNodeStoreService in a production environment? It would be better to 
have code or a reference link. Thanks a lot.





[jira] [Reopened] (OAK-8221) Failure to do anything, throw CommitFailedException: OakMerge0004

2019-05-13 Thread zhouxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu reopened OAK-8221:
-


[jira] [Commented] (OAK-8204) OutOfMemoryError: create nodes

2019-04-28 Thread zhouxu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827995#comment-16827995
 ] 

zhouxu commented on OAK-8204:
-

1. Create the repository

To use Oak in our project we simply add a dependency on 
{{org.apache.jackrabbit:oak-jcr:1.0.0}} and on {{javax.jcr:jcr:2.0}}:
DB db = new MongoClient("127.0.0.1", 27017).getDB("oak");
DocumentNodeStore ns = newMongoDocumentNodeStoreBuilder()
    .setMongoDB(db, "oak", 0)
    .build();
Repository repo = new Jcr(new Oak(ns)).createRepository();


 
2. Create 10 folders, and create 200,000 nodes per folder.
Session session = repository.login(new SimpleCredentials("admin", 
"admin".toCharArray()));
String folderName = null;
String docName = null;
Node root = session.getRootNode();
for (int i = 1; i <= 10; i++) {
    folderName = "J" + i;

    if (root.hasNode(folderName)) {
        throw new Exception(folderName + " node already exists!");
    }
    Node node = root.addNode(folderName, JcrConstants.NT_FOLDER);
    session.save();
    System.out.println("create folder " + folderName + " success===");

    for (int j = 1; j <= 200000; j++) {
        docName = folderName + "_" + j + ".pdf";
        Node fileNode = node.addNode(docName, JcrConstants.NT_FILE);
        // add content node
        // ...
        session.save();
    }
}
 

> OutOfMemoryError: create nodes
> --
>
> Key: OAK-8204
> URL: https://issues.apache.org/jira/browse/OAK-8204
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2
>Reporter: zhouxu
>Priority: Major
>
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>  at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
>  at java.lang.StringBuilder.<init>(StringBuilder.java:101)
>  at org.apache.jackrabbit.oak.commons.PathUtils.concat(PathUtils.java:318)
>  at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNodeDoc(DocumentNodeState.java:507)
>  at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:256)
>  at org.apache.jackrabbit.oak.plugins.memory.MutableNodeState.setChildNode(MutableNodeState.java:111)
>  at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:343)
>  at org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.setChildNode(AbstractDocumentNodeBuilder.java:56)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeAdded(ApplyDiff.java:80)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:412)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.plugins.document.ModifiedDocumentNodeState.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8285) statistics the total number of documents return -1

2019-04-28 Thread zhouxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8285:

Priority: Blocker  (was: Major)

> statistics the  total number of documents return -1
> ---
>
> Key: OAK-8285
> URL: https://issues.apache.org/jira/browse/OAK-8285
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2
>Reporter: zhouxu
>Priority: Blocker
>
> We use oak1.10.2 inside a web application to store documents 
> How can we program statistics such as : 
> - the total number of documents 
> - compute the number of documents per property values for a given property ?
> we try these code,but it return -1
> QueryManager qm = session.getWorkspace().getQueryManager(); 
> Query q = qm.createQuery("SELECT * FROM [nt:file] WHERE LOCALNAME() LIKE 
> '%.txt' and [\"jcr:createdBy\"] = 'anonymous'", Query.JCR_SQL2); 
> QueryResult qr = q.execute(); 
> long stat = qr.getRows().getSize(); 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8285) statistics the total number of documents return -1

2019-04-28 Thread zhouxu (JIRA)
zhouxu created OAK-8285:
---

 Summary: statistics the  total number of documents return -1
 Key: OAK-8285
 URL: https://issues.apache.org/jira/browse/OAK-8285
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.10.2
Reporter: zhouxu


We use Oak 1.10.2 inside a web application to store documents. 

How can we compute statistics such as: 
- the total number of documents 
- the number of documents per property value, for a given property? 

We tried this code, but it returns -1: 

QueryManager qm = session.getWorkspace().getQueryManager(); 
Query q = qm.createQuery("SELECT * FROM [nt:file] WHERE LOCALNAME() LIKE 
'%.txt' and [\"jcr:createdBy\"] = 'anonymous'", Query.JCR_SQL2); 
QueryResult qr = q.execute(); 
long stat = qr.getRows().getSize(); 
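getRows() returns a RangeIterator, and RangeIterator.getSize() is allowed to return -1 when the result size is not known up front. A reliable (if potentially slow) way to count is to iterate the rows; a sketch against the JCR API:

```java
import javax.jcr.RepositoryException;
import javax.jcr.query.QueryResult;
import javax.jcr.query.RowIterator;

public class ResultCounter {
    // Sketch: count query results by iteration, since getSize() may
    // legitimately return -1 when the size is unknown to the query engine.
    static long countRows(QueryResult qr) throws RepositoryException {
        RowIterator rows = qr.getRows();
        long count = 0;
        while (rows.hasNext()) {
            rows.nextRow();
            count++;
        }
        return count;
    }
}
```

For large repositories a counting query like this still has to visit every matching row, so a query index that supports counting (or maintaining the statistic at write time) may be preferable.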



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8221) Failure to do anything,throw CommitFailedException: OakMerge0004

2019-04-12 Thread zhouxu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816032#comment-16816032
 ] 

zhouxu commented on OAK-8221:
-

We can schedule revision garbage collection every night.

Is there an example of how to configure DocumentNodeStore for production 
environments? Is there an open-source system using Oak that we could refer to?
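A nightly revision garbage collection run can be triggered programmatically against the DocumentNodeStore. This is a sketch: the 24-hour maximum revision age and the fixed-rate scheduling below are illustrative choices, not a recommended production configuration, and `documentNodeStore` is the store built earlier in this issue:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;
import org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector;

// Sketch: run revision GC once a day, collecting revisions older than 24h.
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    try {
        VersionGarbageCollector gc = documentNodeStore.getVersionGarbageCollector();
        gc.gc(24, TimeUnit.HOURS); // delete revisions older than 24 hours
    } catch (Exception e) {
        e.printStackTrace(); // log and retry on the next scheduled run
    }
}, 0, 24, TimeUnit.HOURS);
```

When Oak runs inside OSGi, DocumentNodeStoreService can schedule this job itself; the manual scheduling above is only needed in embedded setups like the one described here.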


[jira] [Created] (OAK-8221) Failure to do anything,throw CommitFailedException: OakMerge0004

2019-04-10 Thread zhouxu (JIRA)
zhouxu created OAK-8221:
---

 Summary: Failure to do anything,throw CommitFailedException: 
OakMerge0004
 Key: OAK-8221
 URL: https://issues.apache.org/jira/browse/OAK-8221
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: zhouxu


Hello expert:

  1. To use Oak in my project I simply add a dependency on 
{{org.apache.jackrabbit:oak-jcr:1.10.2}} and on {{javax.jcr:jcr:2.0}}.

  2. We construct a Repository instance using MongoDB like this:
     MongoClient mongoClient = getMongoClient(mongodbIP, mongodbPort, dbName, userName, password);
     DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
         .setMongoDB(mongoClient, dbName, 0)
         .setBlobStore(new FileBlobStore("D:\\amberdata\\FileStore"))
         .build();

     Repository repository = new Jcr(new Oak(documentNodeStore)).createRepository();

  3. Every developer runs Oak in a local application and connects to the same 
MongoDB. A few days later, we failed to unregister the node type dw_unit_detail 
which we had registered; it takes a long time and throws CommitFailedException: 
OakMerge0004, like this:

    javax.jcr.InvalidItemStateException: Failed to unregister node type dw_unit_detail
 at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:240)
 at org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.unregisterNodeType(ReadWriteNodeTypeManager.java:186)
 at com.datamber.afc.domain.type.AfType.destroy(AfType.java:397)
 at com.datamber.afc.domain.type.AfTypeTest.destroy(AfTypeTest.java:345)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
 at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
 at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
 at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
 at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0004: OakMerge0004: Following exceptions occurred during the bulk update operations: [org.apache.jackrabbit.oak.plugins.document.ConflictException: The node 3:/jcr:system/jcr:nodeTypes/dw_customize_type was changed in revision r16a0699ff58-0-3 (not yet visible), which was applied after the base revision
r16a0b08f9b6-0-1,r16a067a0c76-0-2,r16a0699faf1-0-3,r16a072d9ee8-0-4,r16a069a45a1-0-5,r16a06874e87-0-6,r16a069be50a-0-7,r16a073313ba-0-8,r16a073e63ae-0-9,r16a0769307d-0-a,r16a0ae97b21-0-b,r16a0a0b1912-0-c,r16a0a0fa4e7-0-d,r16a0a4bbc67-0-e,r16a0a4438fc-0-f,r16a0ae19945-0-10,
 before
r16a0b099334-0-1] (retries 5, 303136 ms)
 at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:218)
 at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:127)
 at org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:170)
 at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1848)
 at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:250)
 at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:261)
 at org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.unregisterNodeType(ReadWriteNodeTypeManager.java:182)
 ... 26 more
Caused by: 
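The "retries 5, 303136 ms" in the trace above means Oak already retried the conflicting merge several times internally before giving up. At the application level, a transient OakMerge0004 can sometimes still be absorbed by refreshing and retrying the commit. Below is a minimal, generic sketch using only JDK types; the helper name, backoff, and attempt limit are illustrative (not Oak API). In the JCR scenario above, the Callable would wrap session.save() and the recovery step would call session.refresh(true) before the next attempt.

```java
import java.util.concurrent.Callable;

// Illustrative retry-with-backoff helper (not part of the Oak API).
public class MergeRetry {

    public static <T> T withRetry(Callable<T> op, Runnable recover, int maxAttempts)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();                 // e.g. session.save() in a JCR context
            } catch (Exception e) {
                last = e;                         // remember the most recent failure
                recover.run();                    // e.g. session.refresh(true)
                Thread.sleep(100L * attempt);     // simple linear backoff between attempts
            }
        }
        throw last;                               // all attempts exhausted
    }
}
```

Whether a retry can succeed depends on the conflict: concurrent node-type registry changes from several developers' instances (as in point 3 above) will keep conflicting until the writers are serialized.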

[jira] [Updated] (OAK-8204) OutOfMemoryError: create nodes

2019-04-08 Thread zhouxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-8204:

Issue Type: Bug  (was: Improvement)

> OutOfMemoryError: create nodes
> --
>
> Key: OAK-8204
> URL: https://issues.apache.org/jira/browse/OAK-8204
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.10.2
>Reporter: zhouxu
>Priority: Major
>
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
>  at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
>  at java.lang.StringBuilder.<init>(StringBuilder.java:101)
>  at org.apache.jackrabbit.oak.commons.PathUtils.concat(PathUtils.java:318)
>  at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNodeDoc(DocumentNodeState.java:507)
>  at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:256)
>  at org.apache.jackrabbit.oak.plugins.memory.MutableNodeState.setChildNode(MutableNodeState.java:111)
>  at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:343)
>  at org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.setChildNode(AbstractDocumentNodeBuilder.java:56)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeAdded(ApplyDiff.java:80)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:412)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
>  at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
>  at org.apache.jackrabbit.oak.plugins.document.ModifiedDocumentNodeState.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8204) OutOfMemoryError: create nodes

2019-04-08 Thread zhouxu (JIRA)
zhouxu created OAK-8204:
---

 Summary: OutOfMemoryError: create nodes
 Key: OAK-8204
 URL: https://issues.apache.org/jira/browse/OAK-8204
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.10.2
Reporter: zhouxu


Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
 at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
 at java.lang.StringBuilder.<init>(StringBuilder.java:101)
 at org.apache.jackrabbit.oak.commons.PathUtils.concat(PathUtils.java:318)
 at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNodeDoc(DocumentNodeState.java:507)
 at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:256)
 at org.apache.jackrabbit.oak.plugins.memory.MutableNodeState.setChildNode(MutableNodeState.java:111)
 at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:343)
 at org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.setChildNode(AbstractDocumentNodeBuilder.java:56)
 at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeAdded(ApplyDiff.java:80)
 at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:412)
 at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
 at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
 at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
 at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
 at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
 at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:416)
 at org.apache.jackrabbit.oak.plugins.document.ModifiedDocumentNodeState.
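The repeated compareAgainstBaseState frames suggest one very large uncommitted change set being diffed in memory. A pattern that usually avoids this failure mode when creating many nodes is to commit in batches, so the transient space Oak has to diff and merge stays bounded. A sketch against the JCR API (the node names, node type, and batch size are illustrative, and this needs a live repository to run):

```java
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class BatchedImport {

    private static final int BATCH_SIZE = 1000; // tune to available heap

    public static void createNodes(Session session, Node parent, int total)
            throws RepositoryException {
        for (int i = 0; i < total; i++) {
            parent.addNode("node-" + i, "nt:unstructured");
            if ((i + 1) % BATCH_SIZE == 0) {
                session.save(); // flush the transient space periodically
            }
        }
        session.save(); // commit the remainder
    }
}
```

Increasing the heap only postpones the error; keeping each pending change set small addresses the cause.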



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8154) how to create system user with password

2019-03-21 Thread zhouxu (JIRA)
zhouxu created OAK-8154:
---

 Summary: how to create system user with password
 Key: OAK-8154
 URL: https://issues.apache.org/jira/browse/OAK-8154
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: zhouxu


Hello expert!

     I created a system user named dmadmin successfully, but I do not know how to 
set the user's password. The code looks like this:

JackrabbitSession jackrabbitSession = (JackrabbitSession) jcrSession;
UserManager userManager = jackrabbitSession.getUserManager();
User user = userManager.createSystemUser("dmadmin", null);
user.setProperty("description", new StringValue("This is a super user"));
user.setProperty("role", new StringValue("super_user"));
jackrabbitSession.save();
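Note that in the Jackrabbit API the second argument of createSystemUser is an intermediate path, not a password: system users are deliberately created without one (they are intended for service authentication rather than password login). If a password is required, a regular user can be created instead; a sketch reusing the sessions from the snippet above, with placeholder credentials:

```java
// Regular users (unlike system users) carry a password; the id/password are placeholders.
User user = userManager.createUser("dmadmin", "initialPassword");
user.setProperty("description",
        jcrSession.getValueFactory().createValue("This is a super user"));
user.setProperty("role",
        jcrSession.getValueFactory().createValue("super_user"));
jackrabbitSession.save();

// The password of a regular user can be changed later through the same API:
user.changePassword("newPassword");
jackrabbitSession.save();
```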



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8077) how to support node type template modification and removal

2019-02-25 Thread zhouxu (JIRA)
zhouxu created OAK-8077:
---

 Summary: how to support node type template modification and removal
 Key: OAK-8077
 URL: https://issues.apache.org/jira/browse/OAK-8077
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: api
Reporter: zhouxu


Hello everyone!

      We can use NodeTypeManager to create a custom node type, but we do not know 
how to modify an existing node type or remove an existing property definition from it.
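JCR 2.0 has no single "alter property" call. The usual route, sketched below, is to build a mutable NodeTypeTemplate from the registered definition, edit its property definition templates, and re-register it with allowUpdate = true. The property name da_obsolete is a made-up example, and Oak may still reject changes that would invalidate existing content:

```java
NodeTypeManager ntm = session.getWorkspace().getNodeTypeManager();

// A template created from an existing definition is a mutable copy of it.
NodeTypeTemplate tpl = ntm.createNodeTypeTemplate(ntm.getNodeType("dw_unit_detail"));

// Drop one property definition (hypothetical name; adjust to the real one).
List<?> propDefs = tpl.getPropertyDefinitionTemplates();
propDefs.removeIf(d -> "da_obsolete".equals(((PropertyDefinition) d).getName()));

// allowUpdate = true replaces the registered type in place.
ntm.registerNodeType(tpl, true);

// Removing a whole type (only possible when no content uses it) is a separate call:
ntm.unregisterNodeType("dw_unit_detail");
```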



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7999) system starts within the lease time 120s for local junit test

2019-01-22 Thread zhouxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-7999:

Description: 
1) First, we use the Oak API to get a DocumentNodeStore quickly; cluster node id 1 
is assigned.

2) Second, we get a DocumentNodeStore again within the lease time (120s), and a new 
cluster node id is assigned. It takes a long time to run a unit test.

*our get DocumentNodeStore code like this:*

Configuration configuration = new 
PropertiesConfiguration("repository.properties");
 dataStore = configuration.getString("repository.datastore");
 mongodbUri = configuration.getString("repository.mongodb.uri");
 mongodbName = configuration.getString("repository.mongodb.db");

DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
 .setMongoDB(mongodbUri, mongodbName, 0)
 .setBlobStore(new FileBlobStore("/Users/lili/Documents/dev/oak/file_data"))
 .build();

if(repository==null)

{ repository = new Jcr(new Oak(documentNodeStore)).createRepository(); }

*the logs like this:*

2019-01-23 11:04:39 afc_jcr 
[org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[629]-[INFO]
 Found an existing possibly active cluster node info (1) for this instance: 
mac:0616ba19daff//Users/lili/Documents/workspace/idea/AFC_JCR_V1.0, will try 
use it.
 2019-01-23 11:04:39 afc_jcr 
[org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[636]-[INFO]
 Waiting for cluster node 1's lease to expire: 110s left

  was:
1) First, we use the Oak API to get a DocumentNodeStore quickly; cluster node id 1 
is assigned.

2) Second, we get a DocumentNodeStore again within the lease time (120s), and a new 
cluster node id is assigned. It takes a long time to run a unit test.

our get DocumentNodeStore code like this:

Configuration configuration = new 
PropertiesConfiguration("repository.properties");
dataStore = configuration.getString("repository.datastore");
mongodbUri = configuration.getString("repository.mongodb.uri");
mongodbName = configuration.getString("repository.mongodb.db");

DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
 .setMongoDB(mongodbUri, mongodbName, 0)
 .setBlobStore(new FileBlobStore("/Users/lili/Documents/dev/oak/file_data"))
 .build();

if(repository==null){
 repository = new Jcr(new Oak(documentNodeStore)).createRepository();
}

the logs like this:

2019-01-23 11:04:39 afc_jcr 
[org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[629]-[INFO]
 Found an existing possibly active cluster node info (1) for this instance: 
mac:0616ba19daff//Users/zhouxu/Documents/workspace/idea/AFC_JCR_V1.0, will try 
use it.
2019-01-23 11:04:39 afc_jcr 
[org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[636]-[INFO]
 Waiting for cluster node 1's lease to expire: 110s left
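Two mitigations help here, sketched below against the DocumentNodeStore builder used above (method names should be verified against the Oak 1.10 API): dispose the store when the test JVM shuts down so its lease is released immediately, and/or pin a fixed cluster id so a restart reclaims the same slot instead of waiting out a stale lease.

```java
DocumentNodeStore store = newMongoDocumentNodeStoreBuilder()
    .setMongoDB(mongodbUri, mongodbName, 0)
    .setClusterId(1) // pin the id so a restarted test reuses the same slot
    .setBlobStore(new FileBlobStore("/Users/lili/Documents/dev/oak/file_data"))
    .build();

try {
    // ... run the test against the repository ...
} finally {
    store.dispose(); // releases the lease, so the next start does not
                     // have to wait for the 120s lease to expire
}
```

Killing the JVM without dispose() leaves the lease active, which is exactly the "Waiting for cluster node 1's lease to expire" situation in the log.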


> system starts within the lease time 120s for local junit test
> -
>
> Key: OAK-7999
> URL: https://issues.apache.org/jira/browse/OAK-7999
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: api
>Affects Versions: 1.10
>Reporter: zhouxu
>Priority: Blocker
>
> 1) First, we use the Oak API to get a DocumentNodeStore quickly; cluster node 
> id 1 is assigned.
> 2) Second, we get a DocumentNodeStore again within the lease time (120s), and 
> a new cluster node id is assigned. It takes a long time to run a unit test.
> *our get DocumentNodeStore code like this:*
> Configuration configuration = new 
> PropertiesConfiguration("repository.properties");
>  dataStore = configuration.getString("repository.datastore");
>  mongodbUri = configuration.getString("repository.mongodb.uri");
>  mongodbName = configuration.getString("repository.mongodb.db");
> DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
>  .setMongoDB(mongodbUri, mongodbName, 0)
>  .setBlobStore(new FileBlobStore("/Users/lili/Documents/dev/oak/file_data"))
>  .build();
> if(repository==null)
> { repository = new Jcr(new Oak(documentNodeStore)).createRepository(); }
> *the logs like this:*
> 2019-01-23 11:04:39 afc_jcr 
> [org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[629]-[INFO]
>  Found an existing possibly active cluster node info (1) for this instance: 
> mac:0616ba19daff//Users/lili/Documents/workspace/idea/AFC_JCR_V1.0, will try 
> use it.
>  2019-01-23 11:04:39 afc_jcr 
> [org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[636]-[INFO]
>  Waiting for cluster node 1's lease to expire: 110s left



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7999) system starts within the lease time 120s for local junit test

2019-01-22 Thread zhouxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouxu updated OAK-7999:

Priority: Blocker  (was: Major)

> system starts within the lease time 120s for local junit test
> -
>
> Key: OAK-7999
> URL: https://issues.apache.org/jira/browse/OAK-7999
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: api
>Affects Versions: 1.10
>Reporter: zhouxu
>Priority: Blocker
>
> 1) First, we use the Oak API to get a DocumentNodeStore quickly; cluster node 
> id 1 is assigned.
> 2) Second, we get a DocumentNodeStore again within the lease time (120s), and 
> a new cluster node id is assigned. It takes a long time to run a unit test.
> our get DocumentNodeStore code like this:
> Configuration configuration = new 
> PropertiesConfiguration("repository.properties");
> dataStore = configuration.getString("repository.datastore");
> mongodbUri = configuration.getString("repository.mongodb.uri");
> mongodbName = configuration.getString("repository.mongodb.db");
> DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
>  .setMongoDB(mongodbUri, mongodbName, 0)
>  .setBlobStore(new FileBlobStore("/Users/lili/Documents/dev/oak/file_data"))
>  .build();
> if(repository==null){
>  repository = new Jcr(new Oak(documentNodeStore)).createRepository();
> }
> the logs like this:
> 2019-01-23 11:04:39 afc_jcr 
> [org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[629]-[INFO]
>  Found an existing possibly active cluster node info (1) for this instance: 
> mac:0616ba19daff//Users/zhouxu/Documents/workspace/idea/AFC_JCR_V1.0, will 
> try use it.
> 2019-01-23 11:04:39 afc_jcr 
> [org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[636]-[INFO]
>  Waiting for cluster node 1's lease to expire: 110s left



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7999) system starts within the lease time 120s for local junit test

2019-01-22 Thread zhouxu (JIRA)
zhouxu created OAK-7999:
---

 Summary: system starts within the lease time 120s for local junit 
test
 Key: OAK-7999
 URL: https://issues.apache.org/jira/browse/OAK-7999
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: api
Affects Versions: 1.10
Reporter: zhouxu


1) First, we use the Oak API to get a DocumentNodeStore quickly; cluster node id 1 
is assigned.

2) Second, we get a DocumentNodeStore again within the lease time (120s), and a new 
cluster node id is assigned. It takes a long time to run a unit test.

our get DocumentNodeStore code like this:

Configuration configuration = new 
PropertiesConfiguration("repository.properties");
dataStore = configuration.getString("repository.datastore");
mongodbUri = configuration.getString("repository.mongodb.uri");
mongodbName = configuration.getString("repository.mongodb.db");

DocumentNodeStore documentNodeStore = newMongoDocumentNodeStoreBuilder()
 .setMongoDB(mongodbUri, mongodbName, 0)
 .setBlobStore(new FileBlobStore("/Users/lili/Documents/dev/oak/file_data"))
 .build();

if(repository==null){
 repository = new Jcr(new Oak(documentNodeStore)).createRepository();
}

the logs like this:

2019-01-23 11:04:39 afc_jcr 
[org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[629]-[INFO]
 Found an existing possibly active cluster node info (1) for this instance: 
mac:0616ba19daff//Users/zhouxu/Documents/workspace/idea/AFC_JCR_V1.0, will try 
use it.
2019-01-23 11:04:39 afc_jcr 
[org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo]-[main]-[waitForLeaseExpiry]-[636]-[INFO]
 Waiting for cluster node 1's lease to expire: 110s left



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)