[jira] [Commented] (OAK-2760) HttpServer in Oak creates multiple instance of ContentRepository

2015-04-16 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497672#comment-14497672
 ] 

Francesco Mari commented on OAK-2760:
-

I think that the best option would be to cache the {{ContentRepository}} in 
{{Jcr}} and make it available to its users. The {{Oak}} builder doesn't just 
change its own state, but also the state of the system embedding the repository, 
e.g. via an OSGi whiteboard. This would mean that every service would be 
registered anew at every invocation of {{Oak.createContentRepository()}}.
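
A minimal sketch of what such caching could look like, assuming a lazily 
initialized field inside {{Jcr}} (the field and method shape are illustrative, 
not the actual {{Jcr}} API; {{oak}} is the builder the {{Jcr}} instance wraps):

{code}
private ContentRepository contentRepository;

/**
 * Returns the cached ContentRepository, creating it on first access so that
 * repeated calls do not re-register services with the embedding system.
 */
public synchronized ContentRepository getOrCreateContentRepository() {
    if (contentRepository == null) {
        contentRepository = oak.createContentRepository();
    }
    return contentRepository;
}
{code}

With something like this in place, oak-run could hand the same instance to both 
the OakServlet and the WebDAV layer.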

> HttpServer in Oak creates multiple instance of ContentRepository
> 
>
> Key: OAK-2760
> URL: https://issues.apache.org/jira/browse/OAK-2760
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: run
>Reporter: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.0
>
>
> The HTTP server in oak-run constructs multiple instances of 
> ContentRepository [1]
> {code}
> Jcr jcr = new Jcr(oak);
> // 1 - OakServer
> ContentRepository repository = oak.createContentRepository();
> ServletHolder holder = new ServletHolder(new OakServlet(repository));
> context.addServlet(holder, path + "/*");
> // 2 - Webdav Server on JCR repository
> final Repository jcrRepository = jcr.createRepository();
> {code}
> In the above code a repository instance is created twice from the same Oak 
> instance: (1) in OakServer and (2) for WebDAV.
> [1] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java#L1125-1133



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2015-04-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1553:
---
Component/s: (was: mk)

> More sophisticated conflict resolution when concurrently adding nodes
> -
>
> Key: OAK-1553
> URL: https://issues.apache.org/jira/browse/OAK-1553
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk, segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: concurrency
> Fix For: 1.3.1
>
> Attachments: OAK-1553.patch
>
>
> {{MicroKernel.rebase}} currently specifies: "addExistingNode: A node has been 
> added that is different from a node of the same name that has been added to 
> the trunk."
> This is somewhat troublesome in the case where the same node with different 
> but non-conflicting child items is added concurrently:
> {code}
> f.add("fo").add("u1"); commit();
> f.add("fo").add("u2"); commit();
> {code}
> This currently fails with a conflict because {{fo}} is not the same node in 
> both cases. See the discussion at http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1327) Cleanup NodeStore and MK implementations

2015-04-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1327:
---
Component/s: (was: mk)

> Cleanup NodeStore and MK implementations
> 
>
> Key: OAK-1327
> URL: https://issues.apache.org/jira/browse/OAK-1327
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: core, segmentmk
>Reporter: angela
>  Labels: modularization
> Fix For: 1.4
>
> Attachments: OAK-1327.patch
>
>
> as discussed during the oak-call today, i would like to clean up the code base 
> before we officially release OAK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2382) Move NodeStore implementations to separate modules

2015-04-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2382:
---
Component/s: (was: mk)

> Move NodeStore implementations to separate modules
> --
>
> Key: OAK-2382
> URL: https://issues.apache.org/jira/browse/OAK-2382
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: core, segmentmk
>Reporter: angela
>  Labels: modularization
> Fix For: 1.4
>
>
> as discussed in the oak-call yesterday, i think we should take another look 
> at the modularization of the oak-core module.
> some time ago i proposed to move the NodeStore implementations into separate 
> modules.
> to begin with i just tried 2 separate modules:
> - oak-ns-document: everything below oak.plugins.document
> - oak-ns-segment: everything below oak.plugins.segment, plus the 
> segment-specific parts of oak.plugins.backup
> i found the following issues:
> - org.apache.jackrabbit.oak.plugins.cache is not part of the exported packages
> - oak.plugins.backup contains both public API and implementations without 
> separation
> - the following test-classes have a hard dependency on one or more ns 
> implementations: KernelNodeStoreCacheTest, ClusterPermissionsTest, 
> NodeStoreFixture
> to fix those we would need to be able to run the tests with the individual 
> nodestore modules and move those tests that are only intended to work with a 
> particular impl.
> such a move would not only prevent us from introducing unintended package 
> dependencies but would also reduce the number of dependencies present in 
> oak-core.
> as discussed yesterday we may want to pick this up again this year.
> see also http://markmail.org/message/6cpbyuthub4jxase for the whole 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1340) Backup and restore for the SQL DocumentStore

2015-04-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1340:
---
Component/s: (was: mk)
 rdbmk

> Backup and restore for the SQL DocumentStore
> 
>
> Key: OAK-1340
> URL: https://issues.apache.org/jira/browse/OAK-1340
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core, rdbmk
>Reporter: Alex Parvulescu
>  Labels: production, tools
> Fix For: 1.3.0
>
>
> Similar to OAK-1159 but specific to the SQL Document Store implementation.
> The backup could leverage the existing backup bits and back up to the file 
> system (sql-to-tarmk backup), but the restore functionality is missing 
> (tar-to-sql).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2776) Upgrade should allow to skip copying versions

2015-04-16 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497825#comment-14497825
 ] 

Chetan Mehrotra commented on OAK-2776:
--

Thanks [~jsedding] for all these patches in oak-upgrade. We will soon start 
having a look at those now that the 1.2 release is done. Thanks again for your 
contribution in this area!

> Upgrade should allow to skip copying versions
> -
>
> Key: OAK-2776
> URL: https://issues.apache.org/jira/browse/OAK-2776
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.2
>Reporter: Julian Sedding
> Attachments: OAK-2776.patch
>
>
> In some cases it is not necessary to copy version histories during an 
> upgrade. Skipping the copying of versions can result in a lot less content 
> that needs copying and thus a significant speedup.
> Additionally, OAK-2586 introduces the possibility to include and exclude 
> paths for an upgrade. Version histories should thus only be copied if their 
> respective versionable node is present in the copied part of the content, 
> which also reduces content being copied redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2778) DocumentNodeState is null for revision rx-x-x

2015-04-16 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-2778:
-

 Summary: DocumentNodeState is null for revision rx-x-x
 Key: OAK-2778
 URL: https://issues.apache.org/jira/browse/OAK-2778
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.2, 1.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.3.0


On a system running Oak 1.0.12 the following exception is seen repeatedly when 
the async index update tries to update a lucene index:

{noformat}
org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@6be42cde 
: DocumentNodeState is null for revision r14cbbd50ad2-0-1 of 
/oak:index/lucene/:data/_1co.cfe (aborting getChildNodes())
org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
DocumentNodeState is null for revision r14cbbd50ad2-0-1 of 
/oak:index/lucene/:data/_1co.cfe (aborting getChildNodes())
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$6.apply(DocumentNodeStore.java:925)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$6.apply(DocumentNodeStore.java:919)
at com.google.common.collect.Iterators$8.transform(Iterators.java:794)
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$ChildNodeEntryIterator.next(DocumentNodeState.java:618)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$ChildNodeEntryIterator.next(DocumentNodeState.java:587)
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
at com.google.common.collect.Iterators.addAll(Iterators.java:357)
at com.google.common.collect.Lists.newArrayList(Lists.java:146)
at com.google.common.collect.Iterables.toCollection(Iterables.java:334)
at com.google.common.collect.Iterables.toArray(Iterables.java:312)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.listAll(OakDirectory.java:69)
at org.apache.lucene.index.DirectoryReader.indexExists(DirectoryReader.java:339)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:720)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.getWriter(LuceneIndexEditorContext.java:134)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.addOrUpdate(LuceneIndexEditor.java:260)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:171)
at 
org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74)
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130)
at 
org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
{noformat}

A similar issue was already fixed with OAK-2420.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2228) Changing the query traversal limit should affect already started queries

2015-04-16 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2228:
-
Fix Version/s: 1.0.13

> Changing the query traversal limit should affect already started queries
> 
>
> Key: OAK-2228
> URL: https://issues.apache.org/jira/browse/OAK-2228
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.1.3, 1.0.13
>
>
> Therefore, changing the limit at runtime would stop long running queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2228) Changing the query traversal limit should affect already started queries

2015-04-16 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497905#comment-14497905
 ] 

Chetan Mehrotra commented on OAK-2228:
--

Merged to 1.0 with http://svn.apache.org/r1674041

> Changing the query traversal limit should affect already started queries
> 
>
> Key: OAK-2228
> URL: https://issues.apache.org/jira/browse/OAK-2228
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.1.3, 1.0.13
>
>
> Therefore, changing the limit at runtime would stop long running queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2749) Provide a "different lane" for slow indexers in async indexing

2015-04-16 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497968#comment-14497968
 ] 

Davide Giannella commented on OAK-2749:
---

[~chetanm]
{quote}
However, make enabling this configurable, as otherwise it would unnecessarily 
perform a diff even if no index is assigned to it. So have it configurable and 
leave it to the class initializing Oak to configure this mode.
{quote}

your comment had me thinking, and in (0) I tentatively added logic
which will skip all the {{AsyncIndexUpdate}} steps, including checkins,
if no suitable index definitions have been found.

(0) 
https://github.com/davidegiannella/jackrabbit-oak/commit/3451e2509fbb82dc800a075f97059615e3854b76

Could you please have a look? I'm not fully convinced by it, and we
should probably consider only the definitions that match the
provided {{IndexEditorProvider}}s. It takes inspiration from
{{IndexUpdate.collectIndexEditors()}}.
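
For illustration, a rough sketch of the kind of check meant here (a hypothetical 
helper, not the committed change; it only assumes the standard {{NodeState}} API 
and the {{async}} property on index definitions under {{oak:index}}):

{code}
import org.apache.jackrabbit.oak.api.PropertyState;
import org.apache.jackrabbit.oak.api.Type;
import org.apache.jackrabbit.oak.spi.state.ChildNodeEntry;
import org.apache.jackrabbit.oak.spi.state.NodeState;

// Returns true if at least one index definition is assigned to the given async
// lane, i.e. its "async" property contains the lane name (e.g. "async").
static boolean hasMatchingIndexDefinitions(NodeState root, String asyncName) {
    NodeState indexDefs = root.getChildNode("oak:index");
    for (ChildNodeEntry entry : indexDefs.getChildNodeEntries()) {
        PropertyState async = entry.getNodeState().getProperty("async");
        if (async == null) {
            continue; // synchronous index, not handled by AsyncIndexUpdate
        }
        for (int i = 0; i < async.count(); i++) {
            if (asyncName.equals(async.getValue(Type.STRING, i))) {
                return true;
            }
        }
    }
    return false;
}
{code}

If this returns false, the whole async cycle could be skipped for that lane.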





> Provide a "different lane" for slow indexers in async indexing
> --
>
> Key: OAK-2749
> URL: https://issues.apache.org/jira/browse/OAK-2749
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.3.0
>
> Attachments: OAK-2749-rc1.diff, OAK-2749-rc2.diff
>
>
> In the case of big repositories, asynchronous indexes like Lucene Property
> could lag behind, as slow indexes, for example Full Text, are taken
> care of in the same thread pool.
> Provide a separate thread pool in which such indexes can be
> registered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2755) Consolidated JMX view of all EventListener related statistics

2015-04-16 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2755.
--
Resolution: Fixed

Done in 
* trunk - http://svn.apache.org/r1674046
* 1.0 - http://svn.apache.org/r1674055

Still to be ported to 1.2

> Consolidated JMX view of all EventListener related statistics
> -
>
> Key: OAK-2755
> URL: https://issues.apache.org/jira/browse/OAK-2755
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: monitoring, observation
> Fix For: 1.0.13, 1.3.0
>
> Attachments: OAK-2755-2.patch, OAK-2755-3.patch, OAK-2755.patch, 
> consolidated-listener-stats-2.png, consolidated-listener-stats.png
>
>
> Oak Observation support exposes an {{EventListenerMBean}} [1] which provides 
> quite a bit of detail about registered observation listeners. However, in a 
> typical application there would be multiple listeners registered. To simplify 
> monitoring it would be helpful to have a _consolidated_ view of all 
> listener-related statistics.
> Further, the stats can also include some more details which are Oak specific:
> * Subtree paths to which the listener listens - by default the JCR API allows 
> a single path, however Oak allows a listener to register for multiple paths
> * Whether the listener is enabled to listen to cluster-local and 
> cluster-external changes
> * Size of the queue in BackgroundObserver
> * Distribution of change types present in the queue - Local, External, etc.
> [1] 
> https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-api/src/main/java/org/apache/jackrabbit/api/jmx/EventListenerMBean.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (OAK-2755) Consolidated JMX view of all EventListener related statistics

2015-04-16 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger reopened OAK-2755:
---

The implementation uses Java 7 features. Our current minimum Java version is 
1.6. See README.md and oak-parent/pom.xml.

> Consolidated JMX view of all EventListener related statistics
> -
>
> Key: OAK-2755
> URL: https://issues.apache.org/jira/browse/OAK-2755
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: monitoring, observation
> Fix For: 1.0.13, 1.3.0
>
> Attachments: OAK-2755-2.patch, OAK-2755-3.patch, OAK-2755.patch, 
> consolidated-listener-stats-2.png, consolidated-listener-stats.png
>
>
> Oak Observation support exposes an {{EventListenerMBean}} [1] which provides 
> quite a bit of detail about registered observation listeners. However, in a 
> typical application there would be multiple listeners registered. To simplify 
> monitoring it would be helpful to have a _consolidated_ view of all 
> listener-related statistics.
> Further, the stats can also include some more details which are Oak specific:
> * Subtree paths to which the listener listens - by default the JCR API allows 
> a single path, however Oak allows a listener to register for multiple paths
> * Whether the listener is enabled to listen to cluster-local and 
> cluster-external changes
> * Size of the queue in BackgroundObserver
> * Distribution of change types present in the queue - Local, External, etc.
> [1] 
> https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-api/src/main/java/org/apache/jackrabbit/api/jmx/EventListenerMBean.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2779) DocumentNodeStore should provide option to set initial cache size as percentage of MAX VM size

2015-04-16 Thread Will McGauley (JIRA)
Will McGauley created OAK-2779:
--

 Summary: DocumentNodeStore should provide option to set initial 
cache size as percentage of MAX VM size
 Key: OAK-2779
 URL: https://issues.apache.org/jira/browse/OAK-2779
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Affects Versions: 1.2
Reporter: Will McGauley
 Fix For: 1.2.1


Currently the DocumentNodeStore provides a way to configure various cache 
parameters, including the cache size and the distribution of that size to the 
various caches. The distribution among the caches is done as a percentage of 
the total cache size, which is very helpful, but the overall cache size can 
only be set as a literal value.

It would be helpful to derive a good default value from the available VM 
memory as a percentage, instead of a literal value. By doing this the cache 
size would not need to be set by each customer, and a better initial 
experience would be achieved.

I suggest that 25% of the max VM size would be a good starting point.
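
A minimal sketch of the idea (plain Java, not the actual DocumentNodeStore 
configuration API; the 25% default is the value suggested above):

{code}
// Derive the total cache size from a percentage of the maximum heap available
// to the VM, to be used only when no explicit size is configured.
int cachePercentage = 25;                              // suggested default
long maxMemory = Runtime.getRuntime().maxMemory();     // effectively -Xmx, in bytes
long cacheSizeBytes = maxMemory / 100 * cachePercentage;
{code}

The existing per-cache distribution percentages could then be applied to this 
derived total exactly as they are today.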



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2779) DocumentNodeStore should provide option to set initial cache size as percentage of MAX VM size

2015-04-16 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2779:
--
Fix Version/s: (was: 1.2.1)
   1.3.0

> DocumentNodeStore should provide option to set initial cache size as 
> percentage of MAX VM size
> --
>
> Key: OAK-2779
> URL: https://issues.apache.org/jira/browse/OAK-2779
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Affects Versions: 1.2
>Reporter: Will McGauley
> Fix For: 1.3.0
>
>
> Currently the DocumentNodeStore provides a way to configure various cache 
> parameters, including the cache size and the distribution of that size to the 
> various caches. The distribution among the caches is done as a percentage of 
> the total cache size, which is very helpful, but the overall cache size can 
> only be set as a literal value.
> It would be helpful to derive a good default value from the available VM 
> memory as a percentage, instead of a literal value. By doing this the cache 
> size would not need to be set by each customer, and a better initial 
> experience would be achieved.
> I suggest that 25% of the max VM size would be a good starting point.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2755) Consolidated JMX view of all EventListener related statistics

2015-04-16 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger reassigned OAK-2755:
-

Assignee: Marcel Reutegger  (was: Chetan Mehrotra)

Going to replace it with Guava {{Objects.equal()}}.
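
For reference, a minimal sketch of this kind of replacement (assuming the Java 7 
feature in question is {{java.util.Objects.equals()}}; variable names are 
placeholders):

{code}
Object oldValue = "a", newValue = "a";   // placeholder values

// Java 7 only (breaks the Java 6 build):
// boolean same = java.util.Objects.equals(oldValue, newValue);

// Guava replacement, available on Java 6:
boolean same = com.google.common.base.Objects.equal(oldValue, newValue);
{code}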

> Consolidated JMX view of all EventListener related statistics
> -
>
> Key: OAK-2755
> URL: https://issues.apache.org/jira/browse/OAK-2755
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>  Labels: monitoring, observation
> Fix For: 1.0.13, 1.3.0
>
> Attachments: OAK-2755-2.patch, OAK-2755-3.patch, OAK-2755.patch, 
> consolidated-listener-stats-2.png, consolidated-listener-stats.png
>
>
> Oak Observation support exposes an {{EventListenerMBean}} [1] which provides 
> quite a bit of detail about registered observation listeners. However, in a 
> typical application there would be multiple listeners registered. To simplify 
> monitoring it would be helpful to have a _consolidated_ view of all 
> listener-related statistics.
> Further, the stats can also include some more details which are Oak specific:
> * Subtree paths to which the listener listens - by default the JCR API allows 
> a single path, however Oak allows a listener to register for multiple paths
> * Whether the listener is enabled to listen to cluster-local and 
> cluster-external changes
> * Size of the queue in BackgroundObserver
> * Distribution of change types present in the queue - Local, External, etc.
> [1] 
> https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-api/src/main/java/org/apache/jackrabbit/api/jmx/EventListenerMBean.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2755) Consolidated JMX view of all EventListener related statistics

2015-04-16 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-2755.
---
Resolution: Fixed

Fixed in trunk: http://svn.apache.org/r1674065

and 1.0 branch: http://svn.apache.org/r1674066

> Consolidated JMX view of all EventListener related statistics
> -
>
> Key: OAK-2755
> URL: https://issues.apache.org/jira/browse/OAK-2755
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>  Labels: monitoring, observation
> Fix For: 1.0.13, 1.3.0
>
> Attachments: OAK-2755-2.patch, OAK-2755-3.patch, OAK-2755.patch, 
> consolidated-listener-stats-2.png, consolidated-listener-stats.png
>
>
> Oak Observation support exposes an {{EventListenerMBean}} [1] which provides 
> quite a bit of detail about registered observation listeners. However, in a 
> typical application there would be multiple listeners registered. To simplify 
> monitoring it would be helpful to have a _consolidated_ view of all 
> listener-related statistics.
> Further, the stats can also include some more details which are Oak specific:
> * Subtree paths to which the listener listens - by default the JCR API allows 
> a single path, however Oak allows a listener to register for multiple paths
> * Whether the listener is enabled to listen to cluster-local and 
> cluster-external changes
> * Size of the queue in BackgroundObserver
> * Distribution of change types present in the queue - Local, External, etc.
> [1] 
> https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-api/src/main/java/org/apache/jackrabbit/api/jmx/EventListenerMBean.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2780) DocumentMK.commit() does not check if node exists on property patch

2015-04-16 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-2780:
-

 Summary: DocumentMK.commit() does not check if node exists on 
property patch
 Key: OAK-2780
 URL: https://issues.apache.org/jira/browse/OAK-2780
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.2, 1.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.3.0, 1.2.1


This may result in commits that get applied even though there is a conflict 
when the feature flag for OAK-2673 is enabled. See report in OAK-2751.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2751) Test failures with EnableConcurrentAddRemove feature enabled on 1.0 branch

2015-04-16 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498051#comment-14498051
 ] 

Marcel Reutegger commented on OAK-2751:
---

This issue only occurs with the DocumentMK, which is also the reason why it 
doesn't happen on the 1.2 branch. In the 1.2 branch the RandomizedTest does use 
the DocumentMK, but rather a NodeStoreKernel on top of the DocumentNodeStore.

The DocumentMK.commit() method does not check if a node exists for a property 
patch. I created a separate issue to fix this: OAK-2780.

> Test failures with EnableConcurrentAddRemove feature enabled on 1.0 branch
> --
>
> Key: OAK-2751
> URL: https://issues.apache.org/jira/browse/OAK-2751
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>Priority: Blocker
> Fix For: 1.0.13
>
> Attachments: OAK-2751-set-prop-on-delete-should-conflict.patch
>
>
> With the merge of OAK-2673 some failures were seen in oak-it/mk which are not 
> seen on trunk. This required a further change in the patch to shut down the 
> feature completely.
> We still need to investigate why this happened and ensure that the tests pass 
> with this feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2780) DocumentMK.commit() does not check if node exists on property patch

2015-04-16 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498059#comment-14498059
 ] 

Vikas Saurabh commented on OAK-2780:


I think this issue should be marked for 1.0 branch as well.

> DocumentMK.commit() does not check if node exists on property patch
> ---
>
> Key: OAK-2780
> URL: https://issues.apache.org/jira/browse/OAK-2780
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.0, 1.2
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.3.0, 1.2.1
>
>
> This may result in commits that get applied even though there is a conflict 
> when the feature flag for OAK-2673 is enabled. See report in OAK-2751.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2776) Upgrade should allow to skip copying versions

2015-04-16 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498091#comment-14498091
 ] 

Julian Sedding commented on OAK-2776:
-

Thanks [~chetanm]. I'm looking forward to your feedback. Currently I am still 
testing the changes with some larger upgrades, so a little delay is no problem.

> Upgrade should allow to skip copying versions
> -
>
> Key: OAK-2776
> URL: https://issues.apache.org/jira/browse/OAK-2776
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.2
>Reporter: Julian Sedding
> Attachments: OAK-2776.patch
>
>
> In some cases it is not necessary to copy version histories during an 
> upgrade. Skipping the copying of versions can result in a lot less content 
> that needs copying and thus a significant speedup.
> Additionally, OAK-2586 introduces the possibility to include and exclude 
> paths for an upgrade. Version histories should thus only be copied if their 
> respective versionable node is present in the copied part of the content, 
> which also reduces content being copied redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-04-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest | 61, 63, 92, 94, 103 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 81, 87, 92, 95, 96 | DOCUMENT_NS, DOCUMENT_RDB (2) | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType | 69, 83, 97, 105 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation | 48, 55 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems | 41, 88 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |


  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest | 61, 63, 92, 94, 103 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 81, 87, 92, 95, 96 | DOCUMENT_NS, DOCUMENT_RDB (2) | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType | 69, 83, 97 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation | 48, 55 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems | 41, 88 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |



> Test failures on Jenkins
> 
>
> Key: OAK-2714
> URL: https://issues.apache.org/jira/browse/OAK-2714
> Project: Jackrabbit Oak
>  Issue Type: Bug
> Environment: Jenkins, Ubuntu: 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
>Reporter: Michael Dürig
>  Labels: CI, Jenkins
>   

[jira] [Resolved] (OAK-2751) Test failures with EnableConcurrentAddRemove feature enabled on 1.0 branch

2015-04-16 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-2751.
---
Resolution: Fixed

Now that OAK-2780 is fixed, the temporary fix (http://svn.apache.org/r1672665 ) 
on the 1.0 branch is not necessary anymore. Reverted it at: 
http://svn.apache.org/r1674089

> Test failures with EnableConcurrentAddRemove feature enabled on 1.0 branch
> --
>
> Key: OAK-2751
> URL: https://issues.apache.org/jira/browse/OAK-2751
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>Priority: Blocker
> Fix For: 1.0.13
>
> Attachments: OAK-2751-set-prop-on-delete-should-conflict.patch
>
>
> With the merge of OAK-2673 some failures were seen in oak-it/mk which are not 
> seen on trunk. This required a further change in the patch to shut down the 
> feature completely.
> We still need to investigate why this happened and ensure that the tests pass 
> with this feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2751) Test failures with EnableConcurrentAddRemove feature enabled on 1.0 branch

2015-04-16 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498051#comment-14498051
 ] 

Marcel Reutegger edited comment on OAK-2751 at 4/16/15 3:11 PM:


This issue only occurs with the DocumentMK, which is also the reason why it 
doesn't happen on the 1.2 branch. In the 1.2 branch the RandomizedTest doesn't 
use the DocumentMK, but rather a NodeStoreKernel on top of the 
DocumentNodeStore.

The DocumentMK.commit() method does not check if a node exists for a property 
patch. I created a separate issue to fix this: OAK-2780.


was (Author: mreutegg):
This issue only occurs with the DocumentMK, which is also the reason why it 
doesn't happen on the 1.2 branch. In the 1.2 branch the RandomizedTest does use 
the DocumentMK, but rather a NodeStoreKernel on top of the DocumentNodeStore.

The DocumentMK.commit() method does not check if a node exists for a property 
patch. I created a separate issue to fix this: OAK-2780.

> Test failures with EnableConcurrentAddRemove feature enabled on 1.0 branch
> --
>
> Key: OAK-2751
> URL: https://issues.apache.org/jira/browse/OAK-2751
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
>Priority: Blocker
> Fix For: 1.0.13
>
> Attachments: OAK-2751-set-prop-on-delete-should-conflict.patch
>
>
> With the merge of OAK-2673 some failures were seen in oak-it/mk which are not 
> seen on trunk. This required a further change in the patch to shut down the 
> feature completely.
> We still need to investigate why this happened and ensure that the tests pass 
> with this feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2780) DocumentMK.commit() does not check if node exists on property patch

2015-04-16 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-2780.
---
   Resolution: Fixed
Fix Version/s: (was: 1.2.1)
   1.2.2
   1.0.13

Fixed in trunk: http://svn.apache.org/r1674075
1.0 branch: http://svn.apache.org/r1674087
1.2 branch: http://svn.apache.org/r1674090

> DocumentMK.commit() does not check if node exists on property patch
> ---
>
> Key: OAK-2780
> URL: https://issues.apache.org/jira/browse/OAK-2780
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.0, 1.2
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.0.13, 1.3.0, 1.2.2
>
>
> This may result in commits that get applied even though there is a conflict 
> when the feature flag for OAK-2673 is enabled. See report in OAK-2751.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2781) log node type changes and the time needed to traverse the repository

2015-04-16 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-2781:
---

 Summary: log node type changes and the time needed to traverse the 
repository
 Key: OAK-2781
 URL: https://issues.apache.org/jira/browse/OAK-2781
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: oak-core
Affects Versions: 1.2, 1.0.12, 1.4
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Minor
 Fix For: 1.0.13, 1.3.0, 1.2.1






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2781) log node type changes and the time needed to traverse the repository

2015-04-16 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-2781.
-
Resolution: Fixed

> log node type changes and the time needed to traverse the repository
> 
>
> Key: OAK-2781
> URL: https://issues.apache.org/jira/browse/OAK-2781
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: oak-core
>Affects Versions: 1.0.12, 1.2, 1.4
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.0.13, 1.3.0, 1.2.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2782) Tika not able to load class in case of custom config

2015-04-16 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-2782:


 Summary: Tika not able to load class in case of custom config
 Key: OAK-2782
 URL: https://issues.apache.org/jira/browse/OAK-2782
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Affects Versions: 1.2, 1.0.12, 1.2.1
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.0.13, 1.3.0, 1.2.2


If a custom config file is used to configure Tika, then Tika is not able to load 
the configured parser classes.

For example, with a config having the following entry
{code:xml}
<parser class="org.apache.tika.parser.xml.DcXMLParser">
  <mime>application/xml</mime>
  <mime>image/svg+xml</mime>
</parser>
{code}

the following exception is thrown in an OSGi environment:

{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.tika.parser.xml.DcXMLParser not found by org.apache.tika.core [82]
at 
org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1558)
at 
org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at 
org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1998)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at 
org.apache.tika.config.ServiceLoader.getServiceClass(ServiceLoader.java:189)
at 
org.apache.tika.config.TikaConfig.parserFromDomElement(TikaConfig.java:318)
... 52 common frames omitted
{noformat}
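
One possible way to address this, sketched under the assumption that Tika's 
default ServiceLoader falls back to the thread context classloader (this is not 
necessarily the attached patch; {{configStream}} stands for an InputStream of 
the custom config file, and exception handling is omitted):

{code}
ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    // Use a classloader that can resolve the configured parser classes
    // while the custom Tika config is parsed.
    Thread.currentThread().setContextClassLoader(
            LuceneIndexEditorContext.class.getClassLoader());
    TikaConfig config = new TikaConfig(configStream);
} finally {
    Thread.currentThread().setContextClassLoader(original);
}
{code}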




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2782) Tika not able to load class in case of custom config

2015-04-16 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2782:
-
Attachment: OAK-2782-trunk.patch
OAK-2782-branch-1.0.patch

Attached patches for both trunk and the 1.0 branch.

[~alex.parvulescu] Can you have a look?

> Tika not able to load class in case of custom config
> 
>
> Key: OAK-2782
> URL: https://issues.apache.org/jira/browse/OAK-2782
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.0.12, 1.2, 1.2.1
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.0.13, 1.3.0, 1.2.2
>
> Attachments: OAK-2782-branch-1.0.patch, OAK-2782-trunk.patch
>
>
> If a custom config file is used to configure Tika, then Tika is not able to 
> load the configured parser classes.
> For example, with a config having the following entry
> {code:xml}
> <parser class="org.apache.tika.parser.xml.DcXMLParser">
>   <mime>application/xml</mime>
>   <mime>image/svg+xml</mime>
> </parser>
> {code}
> the following exception is thrown in an OSGi environment:
> {noformat}
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.tika.parser.xml.DcXMLParser not found by org.apache.tika.core [82]
>   at 
> org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1558)
>   at 
> org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
>   at 
> org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1998)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:270)
>   at 
> org.apache.tika.config.ServiceLoader.getServiceClass(ServiceLoader.java:189)
>   at 
> org.apache.tika.config.TikaConfig.parserFromDomElement(TikaConfig.java:318)
>   ... 52 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2783) Make LDAP connection pool 'testOnBorrow' configurable

2015-04-16 Thread Tobias Bocanegra (JIRA)
Tobias Bocanegra created OAK-2783:
-

 Summary: Make LDAP connection pool 'testOnBorrow' configurable
 Key: OAK-2783
 URL: https://issues.apache.org/jira/browse/OAK-2783
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: security
Affects Versions: 1.2
Reporter: Tobias Bocanegra
Assignee: Tobias Bocanegra
Priority: Minor
 Fix For: 1.2.1


Depending on the LDAP server configuration, it fails to connect, as the server 
doesn't allow the connection validation query.

It fails on 
{quote}
Caused by: java.util.NoSuchElementException: Could not create a validated 
object, cause: ValidateObject failed
at 
org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1233)
at 
org.apache.directory.ldap.client.api.LdapConnectionPool.getConnection(LdapConnectionPool.java:56)
at 
org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider.connect(LdapIdentityProvider.java:532)
... 92 common frames omitted
{quote}

Based on a customer's analysis of the Oak code, this is the reason it fails:

{quote}
I think I have found a solution for the problem. While the system is 
initializing the connection it tries to validate the connection. This is the 
reason for the strange search request:

SearchRequest
baseDn : ''
filter : '(objectClass=*)'
scope : base object

Because such requests are not allowed in the client's LDAP system, the 
connection is rejected (as invalid). Whether the connection should be validated 
is configurable. The class 
org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider
 contains this code

if (config.getAdminPoolConfig().getMaxActive() != 0) {
adminPool = new LdapConnectionPool(adminConnectionFactory);
adminPool.setTestOnBorrow(true);
adminPool.setMaxActive(config.getAdminPoolConfig().getMaxActive());
adminPool.setWhenExhaustedAction(GenericObjectPool.WHEN_EXHAUSTED_BLOCK);
}

A solution for our problem would most probably be to change the connection pool 
configuration to adminPool.setTestOnBorrow(false);
Sadly, this parameter does not come from the identity provider configuration.

Is there a way to change this parameter without creating our own 
implementation of the identity provider?
{quote}
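
A minimal sketch of the requested change, based on the snippet quoted above (the 
{{getTestOnBorrow()}} accessor on the pool configuration is hypothetical; only 
{{setTestOnBorrow()}} on the commons-pool based {{LdapConnectionPool}} is 
existing API):

{code}
if (config.getAdminPoolConfig().getMaxActive() != 0) {
    adminPool = new LdapConnectionPool(adminConnectionFactory);
    // Only validate pooled connections on borrow when the provider
    // configuration explicitly asks for it.
    adminPool.setTestOnBorrow(config.getAdminPoolConfig().getTestOnBorrow());
    adminPool.setMaxActive(config.getAdminPoolConfig().getMaxActive());
    adminPool.setWhenExhaustedAction(GenericObjectPool.WHEN_EXHAUSTED_BLOCK);
}
{code}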





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2783) Make LDAP connection pool 'testOnBorrow' configurable

2015-04-16 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2783:
---
Fix Version/s: (was: 1.2.1)
   1.2.2

> Make LDAP connection pool 'testOnBorrow' configurable
> -
>
> Key: OAK-2783
> URL: https://issues.apache.org/jira/browse/OAK-2783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.2
>Reporter: Tobias Bocanegra
>Assignee: Tobias Bocanegra
>Priority: Minor
> Fix For: 1.2.2
>
>
> Depending on the LDAP server configuration, it fails to connect, as the server 
> doesn't allow the connection validation query.
> It fails on 
> {quote}
> Caused by: java.util.NoSuchElementException: Could not create a validated 
> object, cause: ValidateObject failed
> at 
> org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1233)
> at 
> org.apache.directory.ldap.client.api.LdapConnectionPool.getConnection(LdapConnectionPool.java:56)
> at 
> org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider.connect(LdapIdentityProvider.java:532)
> ... 92 common frames omitted
> {quote}
> Based on a customer's analysis of the Oak code, this is the reason it fails:
> {quote}
>   I think I have found a solution for the problem. While the system is 
> initializing the connection it tries to validate the connection. This is the 
> reason for the strange search request:
> SearchRequest
> baseDn : ''
> filter : '(objectClass=*)'
> scope : base object
> Because such requests are not allowed in the client's LDAP system, the 
> connection is rejected (as invalid). Whether the connection should be 
> validated is configurable. The class 
> org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider
>  contains this code
> if (config.getAdminPoolConfig().getMaxActive() != 0) {
> adminPool = new LdapConnectionPool(adminConnectionFactory);
> adminPool.setTestOnBorrow(true);
> adminPool.setMaxActive(config.getAdminPoolConfig().getMaxActive());
> adminPool.setWhenExhaustedAction(GenericObjectPool.WHEN_EXHAUSTED_BLOCK);
> }
> A solution for our problem would most probably be to change the 
> connection pool configuration to adminPool.setTestOnBorrow(false);
> Sadly, this parameter does not come from the identity provider configuration.
> Is there a way to change this parameter without creating our own 
> implementation of the identity provider?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2783) Make LDAP connection pool 'testOnBorrow' configurable

2015-04-16 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra resolved OAK-2783.
---
Resolution: Fixed

Fixed in revision 1674150.

> Make LDAP connection pool 'testOnBorrow' configurable
> -
>
> Key: OAK-2783
> URL: https://issues.apache.org/jira/browse/OAK-2783
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.2
>Reporter: Tobias Bocanegra
>Assignee: Tobias Bocanegra
>Priority: Minor
> Fix For: 1.2.2
>
>
> Depending on the LDAP server configuration, it fails to connect, as the server 
> doesn't allow the connection validation query.
> It fails on 
> {quote}
> Caused by: java.util.NoSuchElementException: Could not create a validated 
> object, cause: ValidateObject failed
> at 
> org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1233)
> at 
> org.apache.directory.ldap.client.api.LdapConnectionPool.getConnection(LdapConnectionPool.java:56)
> at 
> org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider.connect(LdapIdentityProvider.java:532)
> ... 92 common frames omitted
> {quote}
> Based on a customer's analysis of the Oak code, this is the reason it fails:
> {quote}
>   I think I have found a solution for the problem. While the system is 
> initializing the connection it tries to validate the connection. This is the 
> reason for the strange search request:
> SearchRequest
> baseDn : ''
> filter : '(objectClass=*)'
> scope : base object
> Because such requests are not allowed in the client's LDAP system, the 
> connection is rejected (as invalid). Whether the connection should be 
> validated is configurable. The class 
> org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider
>  contains this code
> if (config.getAdminPoolConfig().getMaxActive() != 0) {
> adminPool = new LdapConnectionPool(adminConnectionFactory);
> adminPool.setTestOnBorrow(true);
> adminPool.setMaxActive(config.getAdminPoolConfig().getMaxActive());
> adminPool.setWhenExhaustedAction(GenericObjectPool.WHEN_EXHAUSTED_BLOCK);
> }
> A solution for our problem would most probably be to change the 
> connection pool configuration to adminPool.setTestOnBorrow(false);
> Sadly, this parameter does not come from the identity provider configuration.
> Is there a way to change this parameter without creating our own 
> implementation of the identity provider?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)