[jira] [Updated] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4779:
--
Labels: candidate_oak_1_4 resilience  (was: resilience)

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: candidate_oak_1_4, resilience
> Fix For: 1.6, 1.5.10
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.
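The failure mode can be illustrated with a small sketch: lease renewal must become conditional on no recovery being in progress. All names below are hypothetical stand-ins, not Oak's actual ClusterNodeInfo API.

```java
// Sketch: lease renewal that refuses to run while another cluster node
// holds a recovery lock on this entry. Hypothetical names, not Oak's API.
class ClusterNodeLease {
    private long leaseEndTime;
    private boolean recoveryLock; // set by the node performing recovery

    synchronized boolean renewLease(long now, long leaseDuration) {
        if (recoveryLock) {
            // Renewing here would race with the recovery in progress,
            // which is the bug this issue describes.
            return false;
        }
        leaseEndTime = now + leaseDuration;
        return true;
    }

    synchronized void setRecoveryLock(boolean locked) { recoveryLock = locked; }

    synchronized long getLeaseEndTime() { return leaseEndTime; }
}
```

In the real store, the check and the update would have to be a single atomic conditional update on the cluster node document, not two separate steps as in this single-JVM sketch.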



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-4779.
---
   Resolution: Fixed
Fix Version/s: 1.5.10

Applied Julian's patch to trunk: http://svn.apache.org/r1760486

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: candidate_oak_1_4, resilience
> Fix For: 1.6, 1.5.10
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.





[jira] [Assigned] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger reassigned OAK-4779:
-

Assignee: Marcel Reutegger  (was: Julian Reschke)

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.6
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.





[jira] [Comment Edited] (OAK-3730) RDBDocumentStore: implement RDB-specific VersionGC support for lookup of split documents

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484421#comment-15484421
 ] 

Julian Reschke edited comment on OAK-3730 at 9/13/16 6:33 AM:
--

trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.apache.org/r1718895]
1.4: [r1760408|http://svn.apache.org/r1760408] 
[r1718895|http://svn.apache.org/r1718895]
1.2: [r1760445|http://svn.apache.org/r1760445] 
[r1718962|http://svn.apache.org/r1718962]
1.0: [r1760483|http://svn.apache.org/r1760483] 
[r1718974|http://svn.apache.org/r1718974]




was (Author: reschke):
trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.ap/r1718895]
1.4: [r1760408|http://svn.apache.org/r1760408] [r1718895|http://svn.apac1718895]
1.2: [r1760445|http://svn.apache.org/r1760445] [r1718962|http://svn.apac1718962]
1.0: [r1718974|http://svn.apache.org/r1718974]


> RDBDocumentStore: implement RDB-specific VersionGC support for lookup of 
> split documents
> 
>
> Key: OAK-3730
> URL: https://issues.apache.org/jira/browse/OAK-3730
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.12
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4, 1.3.13, 1.0.26, 1.2.10
>
> Attachments: OAK-3730.diff, OAK-3730.diff
>
>
> This requires a change to the table layout (adding SD_TYPE and more), thus 
> needs thought about how to migrate from older versions of the persistence.
> Edit: the actual patch avoids a table change by restricting SQL queries to 
> IDs that can identify split documents. This approach does not rule out that 
> we introduce the SD_TYPE column at a later stage, though.
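The ID-based restriction mentioned in the edit can be sketched as follows; the ID shape used here is purely illustrative, not Oak's actual document ID scheme:

```java
// Sketch: select garbage-collection candidates by ID shape instead of an
// SD_TYPE column. The "depth:p/path" pattern is a hypothetical stand-in.
import java.util.List;
import java.util.stream.Collectors;

class SplitDocQuery {
    // Hypothetical convention: split documents carry a "p/" marker
    // right after the depth prefix, e.g. "2:p/content/foo".
    static boolean isSplitDocId(String id) {
        int colon = id.indexOf(':');
        return colon > 0 && id.startsWith("p/", colon + 1);
    }

    // In SQL this filter would become a LIKE clause on the ID column,
    // which is the point of the patch: no table change required.
    static List<String> splitDocCandidates(List<String> ids) {
        return ids.stream()
                  .filter(SplitDocQuery::isSplitDocId)
                  .collect(Collectors.toList());
    }
}
```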





[jira] [Commented] (OAK-4712) Publish S3DataStore stats in JMX MBean

2016-09-12 Thread Matt Ryan (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485434#comment-15485434
 ] 

Matt Ryan commented on OAK-4712:


[~amjain] The latest patch I submitted addresses some of the issues you raised. 
Rather than wait until all the issues have been addressed, I wanted to get this 
out for feedback.

The following changes were applied:
* Use a mocked S3DS in S3DataStoreStatsTest, where possible
* Remove usage of test file
* Remove dependency on S3 data store and associated properties - no S3 
configuration required

I wasn't able to simply mock S3DataStore in every test case.  Some of the tests 
require that I be able to control the backend (for example, checking whether a 
sync has completed).  For these I am still using a very simple S3DataStore 
subclass that allows me to replace the backend with something that makes 
testing more meaningful.
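The test strategy described here, a thin subclass whose backend can be swapped for a controllable one, can be sketched like this; the names are illustrative, not the actual oak-blob-cloud API:

```java
// Sketch: a data-store test double whose backend is replaceable, so a test
// can control sync state. Hypothetical shapes, not the real S3DataStore.
interface Backend {
    boolean isSynced(String path);
}

class TestableS3DataStore {
    private Backend backend;

    void setBackend(Backend b) { this.backend = b; }

    boolean syncCompleted(String path) {
        return backend != null && backend.isSynced(path);
    }
}
```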

I have not yet moved any of the tests into oak-blob-cloud.  I'm not sure it 
makes much sense to do so.  Perhaps I am missing something?  The purpose of the 
tests is to exercise the functionality being added, which is mostly in 
S3DataStoreStats, which lives in oak-core.  Each test creates an 
S3DataStoreStats object as the system under test.  I tried moving some things 
to oak-blob-cloud, but it seemed to me that any test I moved would end up being 
so limited in what it could do that it wouldn't really be able to test much.

I will continue working on the use of MemoryNodeStore instead of the node store 
mock, and the addition of more tests like those suggested.

> Publish S3DataStore stats in JMX MBean
> --
>
> Key: OAK-4712
> URL: https://issues.apache.org/jira/browse/OAK-4712
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob
>Reporter: Matt Ryan
>Assignee: Amit Jain
> Attachments: OAK-4712.2.diff, OAK-4712.3.diff, OAK-4712.4.diff, 
> OAK-4712.5.diff, OAK-4712.7.patch, OAK_4712_6_Amit.patch
>
>
> This feature is to publish statistics about the S3DataStore via a JMX MBean.  
> There are two statistics suggested:
> * Indicate the number of files actively being synchronized from the local 
> cache into S3
> * Given a path to a local file, indicate whether synchronization of file into 
> S3 has completed
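The two suggested statistics map naturally onto a small MBean interface. A sketch with illustrative names (the final Oak API may well differ):

```java
// Sketch: the two suggested S3 sync statistics as a JMX MBean. Only the
// two statistics come from the issue; all names here are invented.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

interface S3SyncStatsMBean {
    long getActiveSyncCount();            // files currently syncing to S3
    boolean isFileSynced(String path);    // has this local file reached S3?
}

class S3SyncStats implements S3SyncStatsMBean {
    private final Set<String> inFlight = ConcurrentHashMap.newKeySet();
    private final Set<String> completed = ConcurrentHashMap.newKeySet();

    void syncStarted(String path)  { inFlight.add(path); }

    void syncFinished(String path) { inFlight.remove(path); completed.add(path); }

    @Override public long getActiveSyncCount() { return inFlight.size(); }

    @Override public boolean isFileSynced(String path) { return completed.contains(path); }
}
```

Registered under a JMX ObjectName, both values would then be visible in any JMX console.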





[jira] [Updated] (OAK-4712) Publish S3DataStore stats in JMX MBean

2016-09-12 Thread Matt Ryan (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-4712:
---
Attachment: OAK-4712.7.patch

This patch applies some of the unit test changes suggested on the last patch.

> Publish S3DataStore stats in JMX MBean
> --
>
> Key: OAK-4712
> URL: https://issues.apache.org/jira/browse/OAK-4712
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob
>Reporter: Matt Ryan
>Assignee: Amit Jain
> Attachments: OAK-4712.2.diff, OAK-4712.3.diff, OAK-4712.4.diff, 
> OAK-4712.5.diff, OAK-4712.7.patch, OAK_4712_6_Amit.patch
>
>
> This feature is to publish statistics about the S3DataStore via a JMX MBean.  
> There are two statistics suggested:
> * Indicate the number of files actively being synchronized from the local 
> cache into S3
> * Given a path to a local file, indicate whether synchronization of file into 
> S3 has completed





[jira] [Comment Edited] (OAK-3730) RDBDocumentStore: implement RDB-specific VersionGC support for lookup of split documents

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484421#comment-15484421
 ] 

Julian Reschke edited comment on OAK-3730 at 9/12/16 8:32 PM:
--

trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.ap/r1718895]
1.4: [r1760408|http://svn.apache.org/r1760408] [r1718895|http://svn.apac1718895]
1.2: [r1760445|http://svn.apache.org/r1760445] [r1718962|http://svn.apac1718962]
1.0: [r1718974|http://svn.apache.org/r1718974]



was (Author: reschke):
trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.apache.org/r1718895]
1.4: [r1760408|http://svn.apache.org/r1760408] 
[r1718895|http://svn.apache.org/r1718895]
1.2: [r1718962|http://svn.apache.org/r1718962]
1.0: [r1718974|http://svn.apache.org/r1718974]


> RDBDocumentStore: implement RDB-specific VersionGC support for lookup of 
> split documents
> 
>
> Key: OAK-3730
> URL: https://issues.apache.org/jira/browse/OAK-3730
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.12
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4, 1.3.13, 1.0.26, 1.2.10
>
> Attachments: OAK-3730.diff, OAK-3730.diff
>
>
> This requires a change to the table layout (adding SD_TYPE and more), thus 
> needs thought about how to migrate from older versions of the persistence.
> Edit: the actual patch avoids a table change by restricting SQL queries to 
> IDs that can identify split documents. This approach does not rule out that 
> we introduce the SD_TYPE column at a later stage, though.





[jira] [Commented] (OAK-4798) Share request encoders and decoders across the server and the client

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484770#comment-15484770
 ] 

Francesco Mari commented on OAK-4798:
-

At r1760419 I split the {{RequestDecoder}} into multiple decoders, one for each 
corresponding request class. The tests in {{RequestDecoderTest}} have been 
moved to test classes named after the new decoders.

> Share request encoders and decoders across the server and the client
> 
>
> Key: OAK-4798
> URL: https://issues.apache.org/jira/browse/OAK-4798
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
>
> The encoders and decoders for the request and response objects used by the 
> server are currently private to the server implementation. These classes 
> should be made available to the client for further reuse.
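The intent, one codec class usable from both sides of the wire, can be sketched as follows; the message format and names are invented for illustration, not the segment-tar classes:

```java
// Sketch: a request codec shared by client (encode) and server (decode)
// instead of living privately in the server. Invented wire format.
class GetHeadRequest {
    final String clientId;
    GetHeadRequest(String clientId) { this.clientId = clientId; }
}

class GetHeadRequestCodec {
    static String encode(GetHeadRequest r) {
        return "h " + r.clientId;
    }

    static GetHeadRequest decode(String msg) {
        if (!msg.startsWith("h ")) {
            return null; // not a get-head request
        }
        return new GetHeadRequest(msg.substring(2));
    }
}
```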





[jira] [Commented] (OAK-4798) Share request encoders and decoders across the server and the client

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484765#comment-15484765
 ] 

Francesco Mari commented on OAK-4798:
-

At r1760411 I moved the requests, responses, decoders and encoders into the 
codec package. The tests have been moved to a corresponding package as well. 
Clients of those classes had to be adapted due to the change in package name.

> Share request encoders and decoders across the server and the client
> 
>
> Key: OAK-4798
> URL: https://issues.apache.org/jira/browse/OAK-4798
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
>
> The encoders and decoders for the request and response objects used by the 
> server are currently private to the server implementation. These classes 
> should be made available to the client for further reuse.





[jira] [Created] (OAK-4798) Share request encoders and decoders across the server and the client

2016-09-12 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-4798:
---

 Summary: Share request encoders and decoders across the server and 
the client
 Key: OAK-4798
 URL: https://issues.apache.org/jira/browse/OAK-4798
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar
Reporter: Francesco Mari
Assignee: Francesco Mari
 Fix For: Segment Tar 0.0.12


The encoders and decoders for the request and response objects used by the 
server are currently private to the server implementation. These classes should 
be made available to the client for further reuse.





[jira] [Comment Edited] (OAK-3730) RDBDocumentStore: implement RDB-specific VersionGC support for lookup of split documents

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484421#comment-15484421
 ] 

Julian Reschke edited comment on OAK-3730 at 9/12/16 4:56 PM:
--

trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.apache.org/r1718895]
1.4: [r1760408|http://svn.apache.org/r1760408] 
[r1718895|http://svn.apache.org/r1718895]
1.2: [r1718962|http://svn.apache.org/r1718962]
1.0: [r1718974|http://svn.apache.org/r1718974]



was (Author: reschke):
trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.apache.org/r1718895]
1.4: [r1718895|http://svn.apache.org/r1718895]
1.2: [r1718962|http://svn.apache.org/r1718962]
1.0: [r1718974|http://svn.apache.org/r1718974]


> RDBDocumentStore: implement RDB-specific VersionGC support for lookup of 
> split documents
> 
>
> Key: OAK-3730
> URL: https://issues.apache.org/jira/browse/OAK-3730
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.12
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4, 1.3.13, 1.0.26, 1.2.10
>
> Attachments: OAK-3730.diff, OAK-3730.diff
>
>
> This requires a change to the table layout (adding SD_TYPE and more), thus 
> needs thought about how to migrate from older versions of the persistence.
> Edit: the actual patch avoids a table change by restricting SQL queries to 
> IDs that can identify split documents. This approach does not rule out that 
> we introduce the SD_TYPE column at a later stage, though.





[jira] [Resolved] (OAK-4785) Move message parsing and handling in separate handlers

2016-09-12 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4785.
-
Resolution: Fixed

Thanks [~marett] and [~dulceanu] for your feedback; I committed the patch at 
r1760405.

> Move message parsing and handling in separate handlers
> --
>
> Key: OAK-4785
> URL: https://issues.apache.org/jira/browse/OAK-4785
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4785-01.patch
>
>
> {{StandbyServerHandler}} includes logic for parsing the input messages 
> (passed to the handler as strings), handling the recognised messages and 
> failing in case an error occurs in the process. These tasks should be split 
> into different handlers for ease of understanding and unit testing.
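The proposed decomposition, parse first and handle separately, can be sketched with two independent components, each trivially unit-testable in isolation (names and message format invented):

```java
// Sketch: split a monolithic string-based handler into a parser producing
// typed messages and a handler consuming them. Invented names and format.
class ParsedMessage {
    final String type;
    final String payload;
    ParsedMessage(String type, String payload) { this.type = type; this.payload = payload; }
}

class MessageParser {
    // "type:payload" -> typed message, or null when malformed
    static ParsedMessage parse(String raw) {
        int sep = raw.indexOf(':');
        if (sep <= 0) return null;
        return new ParsedMessage(raw.substring(0, sep), raw.substring(sep + 1));
    }
}

class MessageHandler {
    static String handle(ParsedMessage m) {
        if (m == null) return "error";       // failure handling isolated here
        return "handled " + m.type;
    }
}
```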





[jira] [Commented] (OAK-4785) Move message parsing and handling in separate handlers

2016-09-12 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484574#comment-15484574
 ] 

Andrei Dulceanu commented on OAK-4785:
--

[~frm], I like your idea; I looked over the patch, and it definitely makes 
sense to decompose the monolithic {{StandbyServerHandler}} into finer-grained 
sub-parts.

> Move message parsing and handling in separate handlers
> --
>
> Key: OAK-4785
> URL: https://issues.apache.org/jira/browse/OAK-4785
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4785-01.patch
>
>
> {{StandbyServerHandler}} includes logic for parsing the input messages 
> (passed to the handler as strings), handling the recognised messages and 
> failing in case an error occurs in the process. These tasks should be split 
> into different handlers for ease of understanding and unit testing.





[jira] [Resolved] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu resolved OAK-4738.
--
   Resolution: Fixed
Fix Version/s: (was: Segment Tar 0.0.20)
   Segment Tar 0.0.12

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch, OAK-4738-03.patch
>
>






[jira] [Resolved] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-09-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-4635.

Resolution: Fixed

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 
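The problematic policy is easy to reproduce in miniature. The sketch below models the pre-fix behaviour described above (one stripe per depth, clear the deepest stripe on overflow); it is illustrative, not the NodeCache code:

```java
// Sketch of the eviction policy this issue improves: one stripe per node
// depth; when capacity is exceeded, the deepest stripe is cleared wholesale.
// If that stripe holds most entries, the cache is left almost empty.
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class DepthStripedCache {
    private final TreeMap<Integer, Map<String, String>> stripes = new TreeMap<>();
    private final int capacity;
    private int size;

    DepthStripedCache(int capacity) { this.capacity = capacity; }

    void put(int depth, String key, String value) {
        if (size >= capacity) {
            // clear the stripe with the greatest depth
            size -= stripes.remove(stripes.lastKey()).size();
        }
        if (stripes.computeIfAbsent(depth, d -> new HashMap<>()).put(key, value) == null) {
            size++;
        }
    }

    int size() { return size; }
}
```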





[jira] [Comment Edited] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484257#comment-15484257
 ] 

Michael Dürig edited comment on OAK-4635 at 9/12/16 4:05 PM:
-

To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch, I suggest the following:
* Pro-actively remove mappings of old generations from the cache. -I will do 
this in a follow-up commit.- Done at 
http://svn.apache.org/viewvc?rev=1760400&view=rev.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. See OAK-4797.

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. As Java doesn't have memory-efficient structs, we suffer 
quite a bit of memory overhead from extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here, as effort, complexity and 
risk would be quite high and would block progress in other areas. In the end the 
extra memory spent here is a trade-off of our technology choice. 


was (Author: mduerig):
To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch, I suggest the following:
* Pro-actively remove mappings of old generations from the cache. I will do 
this in a follow-up commit.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. See OAK-4797.

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. As Java doesn't have memory-efficient structs, we suffer 
quite a bit of memory overhead from extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here, as effort, complexity and 
risk would be quite high and would block progress in other areas. In the end the 
extra memory spent here is a trade-off of our technology choice. 

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 





[jira] [Commented] (OAK-3730) RDBDocumentStore: implement RDB-specific VersionGC support for lookup of split documents

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484421#comment-15484421
 ] 

Julian Reschke commented on OAK-3730:
-

trunk: [r1760387|http://svn.apache.org/r1760387] 
[r1718895|http://svn.apache.org/r1718895]
1.4: [r1718895|http://svn.apache.org/r1718895]
1.2: [r1718962|http://svn.apache.org/r1718962]
1.0: [r1718974|http://svn.apache.org/r1718974]


> RDBDocumentStore: implement RDB-specific VersionGC support for lookup of 
> split documents
> 
>
> Key: OAK-3730
> URL: https://issues.apache.org/jira/browse/OAK-3730
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.12
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4, 1.3.13, 1.0.26, 1.2.10
>
> Attachments: OAK-3730.diff, OAK-3730.diff
>
>
> This requires a change to the table layout (adding SD_TYPE and more), thus 
> needs thought about how to migrate from older versions of the persistence.
> Edit: the actual patch avoids a table change by restricting SQL queries to 
> IDs that can identify split documents. This approach does not rule out that 
> we introduce the SD_TYPE column at a later stage, though.





[jira] [Issue Comment Deleted] (OAK-3730) RDBDocumentStore: implement RDB-specific VersionGC support for lookup of split documents

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3730:

Comment: was deleted

(was: trunk: http://svn.apache.org/r1718895
1.2: http://svn.apache.org/r1718962
1.0: http://svn.apache.org/r1718974
)

> RDBDocumentStore: implement RDB-specific VersionGC support for lookup of 
> split documents
> 
>
> Key: OAK-3730
> URL: https://issues.apache.org/jira/browse/OAK-3730
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.12
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.4, 1.3.13, 1.0.26, 1.2.10
>
> Attachments: OAK-3730.diff, OAK-3730.diff
>
>
> This requires a change to the table layout (adding SD_TYPE and more), thus 
> needs thought about how to migrate from older versions of the persistence.
> Edit: the actual patch avoids a table change by restricting SQL queries to 
> IDs that can identify split documents. This approach does not rule out that 
> we introduce the SD_TYPE column at a later stage, though.





[jira] [Comment Edited] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC driver

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484224#comment-15484224
 ] 

Julian Reschke edited comment on OAK-4794 at 9/12/16 3:25 PM:
--

trunk: [r1760373|http://svn.apache.org/r1760373]
1.4: [r1760386|http://svn.apache.org/r1760386]


was (Author: reschke):
trunk: [r1760373|http://svn.apache.org/r1760373]

> RDBDocumentStore: update PostgresQL JDBC driver
> ---
>
> Key: OAK-4794
> URL: https://issues.apache.org/jira/browse/OAK-4794
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.33, 1.2.18, 1.4.7, 1.5.9
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.10, 1.4.8
>
>






[jira] [Updated] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC driver

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4794:

Fix Version/s: 1.4.8

> RDBDocumentStore: update PostgresQL JDBC driver
> ---
>
> Key: OAK-4794
> URL: https://issues.apache.org/jira/browse/OAK-4794
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.33, 1.2.18, 1.4.7, 1.5.9
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.10, 1.4.8
>
>






[jira] [Updated] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4793:

Attachment: OAK-4793.diff

(work in progress)

> Check usage of DocumentStoreException in RDBDocumentStore
> -
>
> Key: OAK-4793
> URL: https://issues.apache.org/jira/browse/OAK-4793
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
> Attachments: OAK-4793.diff
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in RDBDocumentStore and make sure JDBC driver specific 
> exceptions are handled consistently and wrapped in a DocumentStoreException. 
> At the same time, cache consistency needs to be checked as well in case of a 
> driver exception. E.g. invalidate if necessary.
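The consistency requirement can be sketched as a wrap-and-invalidate pattern around each JDBC call; the names and shapes are illustrative, not the RDBDocumentStore code:

```java
// Sketch: wrap driver-specific SQLExceptions in a DocumentStoreException
// and invalidate the affected cache entry, as the issue asks. Illustrative.
import java.sql.SQLException;
import java.util.Map;

class DocumentStoreException extends RuntimeException {
    DocumentStoreException(Throwable cause) { super(cause); }
}

class RdbReadOps {
    static String read(String id, Map<String, String> cache) {
        try {
            return queryDatabase(id);
        } catch (SQLException e) {
            cache.remove(id);                    // keep the cache consistent
            throw new DocumentStoreException(e); // uniform exception type
        }
    }

    // stand-in for a JDBC call that fails
    private static String queryDatabase(String id) throws SQLException {
        throw new SQLException("connection reset");
    }
}
```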





[jira] [Comment Edited] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484257#comment-15484257
 ] 

Michael Dürig edited comment on OAK-4635 at 9/12/16 2:54 PM:
-

To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch, I suggest the following:
* Pro-actively remove mappings of old generations from the cache. I will do 
this in a follow-up commit.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. See 4797.

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. As Java doesn't have memory-efficient structs, we suffer 
quite a bit of memory overhead from extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here, as effort, complexity and 
risk would be quite high and would block progress in other areas. In the end the 
extra memory spent here is a trade-off of our technology choice. 


was (Author: mduerig):
To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch, I suggest the following:
* Pro-actively remove mappings of old generations from the cache. I will do 
this in a follow-up commit.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. I will file a separate issue for this.

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. As Java doesn't have memory-efficient structs, we suffer 
quite a bit of memory overhead from extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here, as effort, complexity and 
risk would be quite high and would block progress in other areas. In the end the 
extra memory spent here is a trade-off of our technology choice. 

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 





[jira] [Commented] (OAK-4785) Move message parsing and handling in separate handlers

2016-09-12 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484348#comment-15484348
 ] 

Timothee Maret commented on OAK-4785:
-

Fine with me.

> Move message parsing and handling in separate handlers
> --
>
> Key: OAK-4785
> URL: https://issues.apache.org/jira/browse/OAK-4785
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4785-01.patch
>
>
> {{StandbyServerHandler}} includes logic for parsing the input messages 
> (passed to the handler as strings), handling the recognised messages and 
> failing in case an error occurs in the process. These tasks should be split 
> into different handlers for ease of understanding and unit testing.





[jira] [Comment Edited] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484257#comment-15484257
 ] 

Michael Dürig edited comment on OAK-4635 at 9/12/16 2:54 PM:
-

To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch, I suggest the following:
* Pro-actively remove mappings of old generations from the cache. I will do 
this in a follow-up commit.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. See OAK-4797.

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. As Java doesn't have memory-efficient structs, we suffer 
quite a bit of memory overhead from extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here, as effort, complexity and 
risk would be quite high and would block progress in other areas. In the end the 
extra memory spent here is a trade-off of our technology choice. 


was (Author: mduerig):
To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch I suggest to
* Pro-actively remove mapping of old generations from the cache. I will do this 
in a follow up commit.
* Reduce the memory foot print of the keys (stable ids of node states) of the 
cache. See 4797.

Beyond this we could try coming up with a more memory efficient way to 
structure the cache. As Java doesn't have memory efficient structs, we suffer 
quite a bit of memory overhead from extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here as effort, complexity and 
risk would be quite high an would block progress in other areas. In the end the 
extra memory spent here is a trade-off of our technology choice. 

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4797) Optimise stable ids

2016-09-12 Thread JIRA
Michael Dürig created OAK-4797:
--

 Summary: Optimise stable ids 
 Key: OAK-4797
 URL: https://issues.apache.org/jira/browse/OAK-4797
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: Segment Tar 0.0.14


Currently {{SegmentNodeState#getStableId()}} returns a string with all its 
associated overhead:
* High memory requirements: 42 characters plus the overhead of a {{String}} 
instance, while the raw requirements are a mere 20 bytes (long msb, long lsb, int 
offset). The memory overhead is problematic as the stable id is used as the key in 
the node deduplication cache (see OAK-4635).
* High serialisation cost: I have seen {{getStableId()}} occurring in stack 
traces. This is to be expected, as that method is called quite often when 
comparing node states. 

This issue is to explore options for reducing both the CPU and memory overhead of 
stable ids. 
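For illustration, the 20 raw bytes could be held in a small value object instead 
of a string. The following is only a hypothetical sketch (the class name and 
fields are illustrative, not Oak API), with the string form materialised lazily:

```java
// Hypothetical sketch of a compact stable id: the 20 raw bytes
// (segment id msb/lsb plus a record offset) kept in primitive fields
// instead of a 42-character String. Not the actual Oak implementation.
final class CompactStableId {
    private final long msb;    // most significant bits of the segment id
    private final long lsb;    // least significant bits of the segment id
    private final int offset;  // record offset within the segment

    CompactStableId(long msb, long lsb, int offset) {
        this.msb = msb;
        this.lsb = lsb;
        this.offset = offset;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompactStableId)) return false;
        CompactStableId other = (CompactStableId) o;
        return msb == other.msb && lsb == other.lsb && offset == other.offset;
    }

    @Override
    public int hashCode() {
        int h = Long.hashCode(msb);
        h = 31 * h + Long.hashCode(lsb);
        h = 31 * h + offset;
        return h;
    }

    @Override
    public String toString() {
        // Only materialise the string form on demand, e.g. for logging.
        return new java.util.UUID(msb, lsb) + ":" + offset;
    }
}
```

Such a value object can serve directly as a cache key via {{equals}}/{{hashCode}}, 
avoiding both the string allocation and the serialisation cost.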



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484257#comment-15484257
 ] 

Michael Dürig commented on OAK-4635:


To address the OOME issue we have seen while evaluating this approach with the 
OAK-4635-6 branch I suggest to:
* Proactively remove mappings of old generations from the cache. I will do this 
in a follow-up commit.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. I will file a separate issue for this.

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. As Java doesn't have memory-efficient structs, we suffer 
quite a bit of memory overhead from the extra instances per mapping (keys and 
entries). However, I'm a bit reluctant to invest here, as effort, complexity and 
risk would be quite high and would block progress in other areas. In the end, the 
extra memory spent here is a trade-off of our technology choice. 

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4635) Improve cache eviction policy of the node deduplication cache

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484233#comment-15484233
 ] 

Michael Dürig commented on OAK-4635:


The bulk of the changes are in: http://svn.apache.org/viewvc?rev=1760372&view=rev

The former {{NodeCache}}, based on levels according to the depth of a node in 
the tree, has now been replaced by the {{PriorityCache}}. That cache associates a 
cost with each mapping such that mappings with higher costs have a lower chance 
of being evicted. Currently the number of child nodes of a node is used as the 
cost of that node in the cache. 
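For illustration, a cost-based eviction policy of this kind can be sketched as 
follows. This is a minimal, hypothetical sketch, not the actual {{PriorityCache}} 
implementation; the eviction scan is linear for brevity:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of cost-based eviction: each mapping carries a cost, and
// when the cache is full the mapping with the lowest cost is evicted first,
// so "expensive" entries (e.g. nodes with many children) survive longer.
class CostAwareCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final int cost;
        Entry(V value, int cost) { this.value = value; this.cost = cost; }
    }

    private final int capacity;
    private final Map<K, Entry<V>> entries = new HashMap<>();

    CostAwareCache(int capacity) { this.capacity = capacity; }

    void put(K key, V value, int cost) {
        if (!entries.containsKey(key) && entries.size() >= capacity) {
            evictCheapest();
        }
        entries.put(key, new Entry<>(value, cost));
    }

    V get(K key) {
        Entry<V> e = entries.get(key);
        return e == null ? null : e.value;
    }

    private void evictCheapest() {
        K cheapest = null;
        int minCost = Integer.MAX_VALUE;
        for (Map.Entry<K, Entry<V>> e : entries.entrySet()) {
            if (e.getValue().cost < minCost) {
                minCost = e.getValue().cost;
                cheapest = e.getKey();
            }
        }
        entries.remove(cheapest); // drop the lowest-cost mapping
    }
}
```

Compared to the depth-striped {{NodeCache}}, this avoids wiping out an entire 
stripe at once: eviction pressure falls on individual low-cost mappings instead.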

> Improve cache eviction policy of the node deduplication cache
> -
>
> Key: OAK-4635
> URL: https://issues.apache.org/jira/browse/OAK-4635
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: perfomance
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 100 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes as clearing it 
> would result in an almost empty cache. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4796) filter events before adding to ChangeProcessor's queue

2016-09-12 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-4796:


 Summary: filter events before adding to ChangeProcessor's queue
 Key: OAK-4796
 URL: https://issues.apache.org/jira/browse/OAK-4796
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Stefan Egli


Currently 
[ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335]
 is in charge of the event diffing and filtering, and does so in a pooled 
thread, i.e. asynchronously, at a later stage, independent of the commit. This 
has the advantage that the commit is fast, but it has the following potentially 
negative effects:
# Events (in the form of ContentChange objects) occupy a slot in the queue even 
if the listener is not interested in them - any commit lands on every 
listener's queue. This reduces the capacity of the queue for 'actual' events to 
be delivered. It therefore increases the risk that the queue fills up - and 
when full this has various consequences, such as losing the CommitInfo etc.
# Each event (i.e. ContentChange) must later be evaluated, and for that a diff 
must be calculated. Depending on runtime behavior, that diff might be expensive 
if it is no longer in the cache (documentMk specifically).

As an improvement, this diffing and filtering could be done at an earlier 
stage, nearer to the commit; if the filter would ignore the event, it would not 
have to be put into the queue at all, thus avoiding both the occupied slot and 
the potentially slow diffing later.

The suggestion is to implement this via the following algorithm:

* During the commit, in a {{Validator}}, the listeners' filters are evaluated - 
in an as-efficient-as-possible manner (the reason for doing this in a Validator 
is that it doesn't add overhead, as Oak already goes through all changes for 
other Validators). As a result, a _list of potentially affected observers_ is 
added to the {{CommitInfo}} (false positives are fine).
** Note that the above adds cost to the commit and must therefore be done 
carefully and measured.
** One potential measure could be to only do the filtering when a listener's 
queue is larger than a certain threshold (e.g. 10).
* The ChangeProcessor in {{contentChanged}} (the one created in 
[createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224])
 then checks the new CommitInfo's _list of potentially affected observers_ and, 
if it is not in the list, adds a {{NOOP}} token at the end of the queue. If 
there is already a NOOP there, the two are collapsed (this way, when a filter 
is not affected, it has a single NOOP at the end of the queue). If later on a 
non-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} for 
the newly added {{ContentChange}} object.
** To achieve that, the ContentChange object is extended to hold not only the 
"to" {{root}} pointer, but also the "from" {{previousRoot}} pointer, which 
currently is maintained implicitly.
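The NOOP-collapsing idea above can be sketched as follows. This is only an 
illustration with hypothetical names (not the actual ChangeProcessor code): 
filtered-out commits merely advance a trailing NOOP marker instead of occupying 
their own queue slots.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a per-listener observation queue where commits the listener's
// filter ignores are collapsed into a single trailing NOOP token. When a
// real change arrives, the NOOP's root becomes its previousRoot.
class ObservationQueueSketch {
    static final class Item {
        final String previousRoot; // "from" state
        String root;               // "to" state
        final boolean noop;
        Item(String previousRoot, String root, boolean noop) {
            this.previousRoot = previousRoot;
            this.root = root;
            this.noop = noop;
        }
    }

    private final Deque<Item> queue = new ArrayDeque<>();
    private String lastRoot = "r0"; // last root seen by this listener

    // A commit the listener's filter does not care about.
    void addFiltered(String newRoot) {
        Item last = queue.peekLast();
        if (last != null && last.noop) {
            last.root = newRoot;                 // collapse into existing NOOP
        } else {
            queue.addLast(new Item(lastRoot, newRoot, true));
        }
        lastRoot = newRoot;
    }

    // A commit that must be delivered as an event.
    void addChange(String newRoot) {
        String from = lastRoot;
        Item last = queue.peekLast();
        if (last != null && last.noop) {
            from = queue.pollLast().root;        // NOOP's root -> previousRoot
        }
        queue.addLast(new Item(from, newRoot, false));
        lastRoot = newRoot;
    }

    int size() { return queue.size(); }
}
```

In this sketch an arbitrary number of consecutive filtered commits occupy at 
most one queue slot, which is the capacity saving the proposal is after.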



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4796) filter events before adding to ChangeProcessor's queue

2016-09-12 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-4796:
-
Affects Version/s: 1.5.9

> filter events before adding to ChangeProcessor's queue
> --
>
> Key: OAK-4796
> URL: https://issues.apache.org/jira/browse/OAK-4796
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Affects Versions: 1.5.9
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: observation
> Fix For: 1.6
>
>
> Currently the 
> [ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335]
>  is in charge of doing the event diffing and filtering and does so in a 
> pooled Thread, ie asynchronously, at a later stage independent from the 
> commit. This has the advantage that the commit is fast, but has the following 
> potentially negative effects:
> # events (in the form of ContentChange Objects) occupy a slot of the queue 
> even if the listener is not interested in it - any commit lands on any 
> listener's queue. This reduces the capacity of the queue for 'actual' events 
> to be delivered. It therefore increases the risk that the queue fills - and 
> when full has various consequences such as losing the CommitInfo etc.
> # each event==ContentChange later on must be evaluated, and for that a diff 
> must be calculated. Depending on runtime behavior that diff might be 
> expensive if no longer in the cache (documentMk specifically).
> As an improvement, this diffing+filtering could be done at an earlier stage 
> already, nearer to the commit, and in case the filter would ignore the event, 
> it would not have to be put into the queue at all, thus avoiding occupying a 
> slot and later potentially slower diffing.
> The suggestion is to implement this via the following algorithm:
> * During the commit, in a {{Validator}} the listener's filters are evaluated 
> - in an as-efficient-as-possible manner (Reason for doing it in a Validator 
> is that this doesn't add overhead as oak already goes through all changes for 
> other Validators). As a result a _list of potentially affected observers_ is 
> added to the {{CommitInfo}} (false positives are fine).
> ** Note that the above adds cost to the commit and must therefore be 
> carefully done and measured
> ** One potential measure could be to only do filtering when listener's queues 
> are larger than a certain threshold (eg 10)
> * The ChangeProcessor in {{contentChanged}} (in the one created in 
> [createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224])
>  then checks the new commitInfo's _potentially affected observers_ list and 
> if it's not in the list, adds a {{NOOP}} token at the end of the queue. If 
> there's already a NOOP there, the two are collapsed (this way when a filter 
> is not affected it would have a NOOP at the end of the queue). If later on a 
> non-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} 
> for the newly added {{ContentChange}} obj.
> ** To achieve that, the ContentChange obj is extended to not only have the 
> "to" {{root}} pointer, but also the "from" {{previousRoot}} pointer which 
> currently is implicitly maintained.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4796) filter events before adding to ChangeProcessor's queue

2016-09-12 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli reassigned OAK-4796:


Assignee: Stefan Egli

> filter events before adding to ChangeProcessor's queue
> --
>
> Key: OAK-4796
> URL: https://issues.apache.org/jira/browse/OAK-4796
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Affects Versions: 1.5.9
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: observation
> Fix For: 1.6
>
>
> Currently the 
> [ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335]
>  is in charge of doing the event diffing and filtering and does so in a 
> pooled Thread, ie asynchronously, at a later stage independent from the 
> commit. This has the advantage that the commit is fast, but has the following 
> potentially negative effects:
> # events (in the form of ContentChange Objects) occupy a slot of the queue 
> even if the listener is not interested in it - any commit lands on any 
> listener's queue. This reduces the capacity of the queue for 'actual' events 
> to be delivered. It therefore increases the risk that the queue fills - and 
> when full has various consequences such as losing the CommitInfo etc.
> # each event==ContentChange later on must be evaluated, and for that a diff 
> must be calculated. Depending on runtime behavior that diff might be 
> expensive if no longer in the cache (documentMk specifically).
> As an improvement, this diffing+filtering could be done at an earlier stage 
> already, nearer to the commit, and in case the filter would ignore the event, 
> it would not have to be put into the queue at all, thus avoiding occupying a 
> slot and later potentially slower diffing.
> The suggestion is to implement this via the following algorithm:
> * During the commit, in a {{Validator}} the listener's filters are evaluated 
> - in an as-efficient-as-possible manner (Reason for doing it in a Validator 
> is that this doesn't add overhead as oak already goes through all changes for 
> other Validators). As a result a _list of potentially affected observers_ is 
> added to the {{CommitInfo}} (false positives are fine).
> ** Note that the above adds cost to the commit and must therefore be 
> carefully done and measured
> ** One potential measure could be to only do filtering when listener's queues 
> are larger than a certain threshold (eg 10)
> * The ChangeProcessor in {{contentChanged}} (in the one created in 
> [createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224])
>  then checks the new commitInfo's _potentially affected observers_ list and 
> if it's not in the list, adds a {{NOOP}} token at the end of the queue. If 
> there's already a NOOP there, the two are collapsed (this way when a filter 
> is not affected it would have a NOOP at the end of the queue). If later on a 
> non-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} 
> for the newly added {{ContentChange}} obj.
> ** To achieve that, the ContentChange obj is extended to not only have the 
> "to" {{root}} pointer, but also the "from" {{previousRoot}} pointer which 
> currently is implicitly maintained.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4796) filter events before adding to ChangeProcessor's queue

2016-09-12 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-4796:
-
Fix Version/s: 1.6

> filter events before adding to ChangeProcessor's queue
> --
>
> Key: OAK-4796
> URL: https://issues.apache.org/jira/browse/OAK-4796
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Affects Versions: 1.5.9
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: observation
> Fix For: 1.6
>
>
> Currently the 
> [ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335]
>  is in charge of doing the event diffing and filtering and does so in a 
> pooled Thread, ie asynchronously, at a later stage independent from the 
> commit. This has the advantage that the commit is fast, but has the following 
> potentially negative effects:
> # events (in the form of ContentChange Objects) occupy a slot of the queue 
> even if the listener is not interested in it - any commit lands on any 
> listener's queue. This reduces the capacity of the queue for 'actual' events 
> to be delivered. It therefore increases the risk that the queue fills - and 
> when full has various consequences such as losing the CommitInfo etc.
> # each event==ContentChange later on must be evaluated, and for that a diff 
> must be calculated. Depending on runtime behavior that diff might be 
> expensive if no longer in the cache (documentMk specifically).
> As an improvement, this diffing+filtering could be done at an earlier stage 
> already, nearer to the commit, and in case the filter would ignore the event, 
> it would not have to be put into the queue at all, thus avoiding occupying a 
> slot and later potentially slower diffing.
> The suggestion is to implement this via the following algorithm:
> * During the commit, in a {{Validator}} the listener's filters are evaluated 
> - in an as-efficient-as-possible manner (Reason for doing it in a Validator 
> is that this doesn't add overhead as oak already goes through all changes for 
> other Validators). As a result a _list of potentially affected observers_ is 
> added to the {{CommitInfo}} (false positives are fine).
> ** Note that the above adds cost to the commit and must therefore be 
> carefully done and measured
> ** One potential measure could be to only do filtering when listener's queues 
> are larger than a certain threshold (eg 10)
> * The ChangeProcessor in {{contentChanged}} (in the one created in 
> [createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224])
>  then checks the new commitInfo's _potentially affected observers_ list and 
> if it's not in the list, adds a {{NOOP}} token at the end of the queue. If 
> there's already a NOOP there, the two are collapsed (this way when a filter 
> is not affected it would have a NOOP at the end of the queue). If later on a 
> non-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} 
> for the newly added {{ContentChange}} obj.
> ** To achieve that, the ContentChange obj is extended to not only have the 
> "to" {{root}} pointer, but also the "from" {{previousRoot}} pointer which 
> currently is implicitly maintained.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4795) Node writer statistics is skewed

2016-09-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-4795.

Resolution: Fixed

Fixed at http://svn.apache.org/viewvc?rev=1760371&view=rev

> Node writer statistics is skewed
> 
>
> Key: OAK-4795
> URL: https://issues.apache.org/jira/browse/OAK-4795
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: monitoring
> Fix For: Segment Tar 0.0.12
>
>
> The statistics about writing nodes collected by the {{SegmentWriter}} 
> instances are a bit off. This was caused by the changes introduced with 
> OAK-4570. Starting with these changes, the base node states of a node being 
> written are also de-duplicated. Collecting the node writer stats does not, 
> however, differentiate e.g. the cache hits/misses between deduplication of 
> the base state and of the actual state being written. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC driver

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484224#comment-15484224
 ] 

Julian Reschke commented on OAK-4794:
-

trunk: [r1760373|http://svn.apache.org/r1760373]

> RDBDocumentStore: update PostgresQL JDBC driver
> ---
>
> Key: OAK-4794
> URL: https://issues.apache.org/jira/browse/OAK-4794
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.33, 1.2.18, 1.4.7, 1.5.9
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC driver

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-4794.
-
   Resolution: Fixed
Fix Version/s: 1.5.10

> RDBDocumentStore: update PostgresQL JDBC driver
> ---
>
> Key: OAK-4794
> URL: https://issues.apache.org/jira/browse/OAK-4794
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.33, 1.2.18, 1.4.7, 1.5.9
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.5.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC driver

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4794:

Summary: RDBDocumentStore: update PostgresQL JDBC driver  (was: 
RDBDocumentStore: update PostgresQL JDBC drivers)

> RDBDocumentStore: update PostgresQL JDBC driver
> ---
>
> Key: OAK-4794
> URL: https://issues.apache.org/jira/browse/OAK-4794
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.33, 1.2.18, 1.4.7, 1.5.9
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC drivers

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4794:

Issue Type: Technical task  (was: Task)
Parent: OAK-1266

> RDBDocumentStore: update PostgresQL JDBC drivers
> 
>
> Key: OAK-4794
> URL: https://issues.apache.org/jira/browse/OAK-4794
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.0.33, 1.2.18, 1.4.7, 1.5.9
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4795) Node writer statistics is skewed

2016-09-12 Thread JIRA
Michael Dürig created OAK-4795:
--

 Summary: Node writer statistics is skewed
 Key: OAK-4795
 URL: https://issues.apache.org/jira/browse/OAK-4795
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segment-tar
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: Segment Tar 0.0.12


The statistics about writing nodes collected by the {{SegmentWriter}} instances 
are a bit off. This was caused by the changes introduced with OAK-4570. Starting 
with these changes, the base node states of a node being written are also 
de-duplicated. Collecting the node writer stats does not, however, differentiate 
e.g. the cache hits/misses between deduplication of the base state and of the 
actual state being written. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4785) Move message parsing and handling in separate handlers

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484174#comment-15484174
 ] 

Francesco Mari commented on OAK-4785:
-

[~marett], I tried to maintain the same behaviour as the previous 
implementation, but we can decide to add more states as needed in the future.

> Move message parsing and handling in separate handlers
> --
>
> Key: OAK-4785
> URL: https://issues.apache.org/jira/browse/OAK-4785
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4785-01.patch
>
>
> {{StandbyServerHandler}} includes logic for parsing the input messages 
> (passed to the handler as strings), handling the recognised messages and 
> failing in case an error occurs in the process. These tasks should be split 
> in different handlers for ease of understanding and unit testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484136#comment-15484136
 ] 

Marcel Reutegger commented on OAK-4779:
---

+1 looks good to me.

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.6
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4794) RDBDocumentStore: update PostgresQL JDBC drivers

2016-09-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-4794:
---

 Summary: RDBDocumentStore: update PostgresQL JDBC drivers
 Key: OAK-4794
 URL: https://issues.apache.org/jira/browse/OAK-4794
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: rdbmk
Affects Versions: 1.5.9, 1.4.7, 1.2.18, 1.0.33
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4791) Enable animal sniffer plugin

2016-09-12 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484133#comment-15484133
 ] 

Marcel Reutegger commented on OAK-4791:
---

Restricted branches to Java 1.6 runtime library:

- 1.4: http://svn.apache.org/r1760365
- 1.2: http://svn.apache.org/r1760366
- 1.0: http://svn.apache.org/r1760367

> Enable animal sniffer plugin
> 
>
> Key: OAK-4791
> URL: https://issues.apache.org/jira/browse/OAK-4791
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.2.19, 1.0.34, 1.5.10, 1.4.8
>
>
> I would like to use the animal sniffer plugin to enforce usage of the 
> advertised minimal Java version. For trunk this is currently Java 7, while 
> the 1.0, 1.2 and 1.4 branches are on Java 6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4674) Log a message when asynchronous persistent cache is enabled

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484102#comment-15484102
 ] 

Tomek Rękawek commented on OAK-4674:


Fixed in [r1760363|https://svn.apache.org/r1760363]. The following message will be 
logged when the property is enabled.

{noformat}
The persistent cache writes will be asynchronous
{noformat}

Without the property, a similar message will be logged (but stating that the cache 
is synchronous).

> Log a message when asynchronous persistent cache is enabled
> ---
>
> Key: OAK-4674
> URL: https://issues.apache.org/jira/browse/OAK-4674
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, documentmk
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.6, 1.5.10
>
>
> It would be useful to log a message indicating that asynchronous persistent 
> cache (implemented in OAK-2761) is enabled by {{oak.cache.asynchronous}} 
> property (introduced in OAK-4123).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4674) Log a message when asynchronous persistent cache is enabled

2016-09-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek resolved OAK-4674.

   Resolution: Fixed
Fix Version/s: 1.5.10

> Log a message when asynchronous persistent cache is enabled
> ---
>
> Key: OAK-4674
> URL: https://issues.apache.org/jira/browse/OAK-4674
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache, documentmk
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.6, 1.5.10
>
>
> It would be useful to log a message indicating that asynchronous persistent 
> cache (implemented in OAK-2761) is enabled by {{oak.cache.asynchronous}} 
> property (introduced in OAK-4123).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4785) Move message parsing and handling in separate handlers

2016-09-12 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484096#comment-15484096
 ] 

Timothee Maret commented on OAK-4785:
-

[~frm] IMO the patch makes sense; the encoders/decoders/handlers become clearer. 
Also, the handling of the server {{state}} becomes clearer.

One suggestion: how about setting a specific state upon {{onUnsuccessfulStart}} 
and {{onStartTimeOut}}?
Currently it seems the state would resolve to {{STATUS_INITIALIZING}} (via the 
implementation of the {{getStatus}} method), which may not allow distinguishing 
between a failed initialization and a server still being initialized.

> Move message parsing and handling in separate handlers
> --
>
> Key: OAK-4785
> URL: https://issues.apache.org/jira/browse/OAK-4785
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4785-01.patch
>
>
> {{StandbyServerHandler}} includes logic for parsing the input messages 
> (passed to the handler as strings), handling the recognised messages and 
> failing in case an error occurs in the process. These tasks should be split 
> in different handlers for ease of understanding and unit testing.





[jira] [Updated] (OAK-4791) Enable animal sniffer plugin

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4791:
--
Fix Version/s: 1.0.34
   1.2.19

> Enable animal sniffer plugin
> 
>
> Key: OAK-4791
> URL: https://issues.apache.org/jira/browse/OAK-4791
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.2.19, 1.0.34, 1.5.10, 1.4.8
>
>
> I would like to use the animal sniffer plugin to enforce usage of the 
> advertised minimal Java version. For trunk this is currently Java 7, while 
> the 1.0, 1.2 and 1.4 branches are on Java 6.





[jira] [Updated] (OAK-4791) Enable animal sniffer plugin

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4791:
--
Fix Version/s: 1.4.8

Merged into branches:

- 1.4: http://svn.apache.org/r1760343
- 1.2: http://svn.apache.org/r1760356
- 1.0: http://svn.apache.org/r1760360

Please note that after the merge, the branches are still checked against 1.7 
signature. I will tighten it in separate commits if the build succeeds on a 
branch.

> Enable animal sniffer plugin
> 
>
> Key: OAK-4791
> URL: https://issues.apache.org/jira/browse/OAK-4791
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.10, 1.4.8
>
>
> I would like to use the animal sniffer plugin to enforce usage of the 
> advertised minimal Java version. For trunk this is currently Java 7, while 
> the 1.0, 1.2 and 1.4 branches are on Java 6.





[jira] [Commented] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484065#comment-15484065
 ] 

Francesco Mari commented on OAK-4738:
-

[~dulceanu], I committed your latest patch at r1760359.

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch, OAK-4738-03.patch
>
>






[jira] [Updated] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4793:

Issue Type: Technical task  (was: Task)
Parent: OAK-1266

> Check usage of DocumentStoreException in RDBDocumentStore
> -
>
> Key: OAK-4793
> URL: https://issues.apache.org/jira/browse/OAK-4793
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in RDBDocumentStore and make sure JDBC driver specific 
> exceptions are handled consistently and wrapped in a DocumentStoreException. 
> At the same time, cache consistency needs to be checked as well in case of a 
> driver exception. E.g. invalidate if necessary.





[jira] [Assigned] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reassigned OAK-4793:
---

Assignee: Julian Reschke  (was: Marcel Reutegger)

> Check usage of DocumentStoreException in RDBDocumentStore
> -
>
> Key: OAK-4793
> URL: https://issues.apache.org/jira/browse/OAK-4793
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in MongoDocumentStore and make sure MongoDB Java 
> driver specific exceptions are handled consistently and wrapped in a 
> DocumentStoreException. At the same time, cache consistency needs to be 
> checked as well in case of a driver exception. E.g. invalidate if necessary.





[jira] [Updated] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4793:

Description: With OAK-4771 the usage of DocumentStoreException was 
clarified in the DocumentStore interface. The purpose of this task is to check 
usage of the DocumentStoreException in RDBDocumentStore and make sure JDBC 
driver specific exceptions are handled consistently and wrapped in a 
DocumentStoreException. At the same time, cache consistency needs to be checked 
as well in case of a driver exception. E.g. invalidate if necessary.  (was: 
With OAK-4771 the usage of DocumentStoreException was clarified in the 
DocumentStore interface. The purpose of this task is to check usage of the 
DocumentStoreException in MongoDocumentStore and make sure MongoDB Java driver 
specific exceptions are handled consistently and wrapped in a 
DocumentStoreException. At the same time, cache consistency needs to be checked 
as well in case of a driver exception. E.g. invalidate if necessary.)
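A sketch of the handling this task asks for: a JDBC driver exception is caught, the affected cache entry is invalidated, and the {{SQLException}} is rethrown wrapped in a {{DocumentStoreException}}. The classes below are simplified stand-ins for the real Oak types, for illustration only:

```java
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RdbStoreSketch {
    // Simplified stand-in for Oak's DocumentStoreException.
    static class DocumentStoreException extends RuntimeException {
        DocumentStoreException(String message, Throwable cause) { super(message, cause); }
    }

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String readDocument(String id) {
        try {
            return queryDatabase(id);
        } catch (SQLException e) {
            // Cache consistency: invalidate the entry whose state is now unknown.
            cache.remove(id);
            throw new DocumentStoreException("failed to read " + id, e);
        }
    }

    // Placeholder for the actual JDBC access; always fails in this sketch.
    String queryDatabase(String id) throws SQLException {
        throw new SQLException("connection reset");
    }
}
```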

> Check usage of DocumentStoreException in RDBDocumentStore
> -
>
> Key: OAK-4793
> URL: https://issues.apache.org/jira/browse/OAK-4793
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in RDBDocumentStore and make sure JDBC driver specific 
> exceptions are handled consistently and wrapped in a DocumentStoreException. 
> At the same time, cache consistency needs to be checked as well in case of a 
> driver exception. E.g. invalidate if necessary.





[jira] [Created] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-09-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-4793:
---

 Summary: Check usage of DocumentStoreException in RDBDocumentStore
 Key: OAK-4793
 URL: https://issues.apache.org/jira/browse/OAK-4793
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.6


With OAK-4771 the usage of DocumentStoreException was clarified in the 
DocumentStore interface. The purpose of this task is to check usage of the 
DocumentStoreException in MongoDocumentStore and make sure MongoDB Java driver 
specific exceptions are handled consistently and wrapped in a 
DocumentStoreException. At the same time, cache consistency needs to be checked 
as well in case of a driver exception. E.g. invalidate if necessary.





[jira] [Updated] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4793:

Component/s: (was: mongomk)
 rdbmk

> Check usage of DocumentStoreException in RDBDocumentStore
> -
>
> Key: OAK-4793
> URL: https://issues.apache.org/jira/browse/OAK-4793
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in MongoDocumentStore and make sure MongoDB Java 
> driver specific exceptions are handled consistently and wrapped in a 
> DocumentStoreException. At the same time, cache consistency needs to be 
> checked as well in case of a driver exception. E.g. invalidate if necessary.





[jira] [Updated] (OAK-4771) Clarify exceptions in DocumentStore

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4771:

Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 resilience  
(was: resilience)

> Clarify exceptions in DocumentStore
> ---
>
> Key: OAK-4771
> URL: https://issues.apache.org/jira/browse/OAK-4771
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> resilience
> Fix For: 1.6, 1.5.10
>
>
> The current DocumentStore contract is rather vague about exceptions. The 
> class JavaDoc mentions implementation specific runtime exceptions, but does 
> not talk about the DocumentStoreException used by all the DocumentStore 
> implementations. We should make this explicit in all relevant methods.





[jira] [Updated] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-4738:
-
Attachment: OAK-4738-03.patch

[~frm], you're right, I removed all {{foobar}} occurrences and submitted a new 
patch. Could you take a look at it, please?

Regarding your other comment, I like your suggestion; I will come back to it 
later.

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch, OAK-4738-03.patch
>
>






[jira] [Commented] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484037#comment-15484037
 ] 

Francesco Mari commented on OAK-4738:
-

[~dulceanu], if you want to test specifically for a failure in the DNS 
resolution, you could refactor {{ClientIpFilter}} a bit and extract the calls 
to {{InetAddress.getByName()}} into their own interface, say 
{{InetAddressResolver}}. You could then write a test with a bogus implementation 
of {{InetAddressResolver}} and assert the behaviour of the {{ClientIpFilter}}.
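The extraction pattern suggested above could look roughly like this. The filter logic is simplified and the class names (other than {{InetAddressResolver}}) are invented; only the seam for injecting a bogus resolver matters:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class IpFilterSketch {
    // The extracted seam: production code delegates to the JDK,
    // tests can inject a resolver that fails on demand.
    interface InetAddressResolver {
        InetAddress getByName(String host) throws UnknownHostException;
    }

    static final InetAddressResolver DEFAULT = InetAddress::getByName;

    private final InetAddressResolver resolver;

    IpFilterSketch(InetAddressResolver resolver) { this.resolver = resolver; }

    // Returns true when the client address resolves from one of the allowed hosts.
    boolean isAllowed(InetAddress client, String... allowedHosts) {
        for (String host : allowedHosts) {
            try {
                if (resolver.getByName(host).equals(client)) {
                    return true;
                }
            } catch (UnknownHostException e) {
                // An unresolvable entry should not match any client; skip it.
            }
        }
        return false;
    }
}
```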

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch
>
>






[jira] [Commented] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484025#comment-15484025
 ] 

Francesco Mari commented on OAK-4738:
-

I guess this is not what the test is verifying anyway. The test seems to assume 
that 127.0.0.1 is not DNS-mapped to 'foobar', since every address in that list 
is supposed not to match the client's default address.

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch
>
>






[jira] [Commented] (OAK-4775) Investigate updating netty dependency

2016-09-12 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484020#comment-15484020
 ] 

Timothee Maret commented on OAK-4775:
-

Ok, if it is unlikely to require the two versions in the same deployment, then 
it makes no sense to extract a common dependency. 

> Investigate updating netty dependency
> -
>
> Key: OAK-4775
> URL: https://issues.apache.org/jira/browse/OAK-4775
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.10
>Reporter: Timothee Maret
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4775-01.patch
>
>
> As proposed by [~dulceanu] in [0].
> [0] 
> https://issues.apache.org/jira/browse/OAK-4737?focusedCommentId=15471964&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15471964





[jira] [Updated] (OAK-4785) Move message parsing and handling in separate handlers

2016-09-12 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-4785:

Attachment: OAK-4785-01.patch

In the attached patch I untangle {{StandbyServerHandler}} into smaller handlers. 
These handlers are simpler to understand and implement, and contribute to a 
leaner pipeline. Moreover, I added a bunch of unit tests for the handlers that 
are far more fine-grained than the ones currently implemented and don't require 
external resources (e.g. a file system for the {{FileStore}} or a port for the 
server).
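The split can be illustrated with plain classes. This is a sketch of the idea, not the attached patch: in the real code these would be Netty channel handlers, and the message format and names here are invented:

```java
import java.util.Optional;

public class MessageSplitSketch {
    // Typed message produced by the parsing stage.
    static class GetSegmentRequest {
        final String segmentId;
        GetSegmentRequest(String segmentId) { this.segmentId = segmentId; }
    }

    // Parsing concern: recognise the wire format, nothing else.
    static Optional<GetSegmentRequest> parse(String raw) {
        String prefix = "get segment ";
        if (raw != null && raw.startsWith(prefix)) {
            return Optional.of(new GetSegmentRequest(raw.substring(prefix.length())));
        }
        return Optional.empty(); // unrecognised input is rejected here, not in the handler
    }

    // Handling concern: act on an already-parsed, typed message.
    static String handle(GetSegmentRequest request) {
        return "sending segment " + request.segmentId;
    }
}
```

Each stage can then be unit-tested in isolation, without a running server.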

[~dulceanu], [~marett], do you think this makes sense?

> Move message parsing and handling in separate handlers
> --
>
> Key: OAK-4785
> URL: https://issues.apache.org/jira/browse/OAK-4785
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4785-01.patch
>
>
> {{StandbyServerHandler}} includes logic for parsing the input messages 
> (passed to the handler as strings), handling the recognised messages and 
> failing in case an error occurs in the process. These tasks should be split 
> in different handlers for ease of understanding and unit testing.





[jira] [Commented] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484001#comment-15484001
 ] 

Andrei Dulceanu commented on OAK-4738:
--

[~frm], I just thought that without it we won't have any means to test a 
standby failing due to host unavailability. If it is to be removed, should I 
replace it with {{localhost}}, or remove it completely along with the tests 
that expect the standby to fail?

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch
>
>






[jira] [Commented] (OAK-4775) Investigate updating netty dependency

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483995#comment-15483995
 ] 

Francesco Mari commented on OAK-4775:
-

[~marett], I wouldn't worry about the size of the dependencies, because it is 
unlikely that oak-segment-tar and oak-tarmk-standby will be deployed in the 
same system. The plan is to deprecate oak-tarmk-standby and oak-segment, so I 
don't expect these two modules to be used anymore. oak-segment-tar is supposed 
to be the last man standing.

> Investigate updating netty dependency
> -
>
> Key: OAK-4775
> URL: https://issues.apache.org/jira/browse/OAK-4775
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.10
>Reporter: Timothee Maret
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4775-01.patch
>
>
> As proposed by [~dulceanu] in [0].
> [0] 
> https://issues.apache.org/jira/browse/OAK-4737?focusedCommentId=15471964&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15471964





[jira] [Commented] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483991#comment-15483991
 ] 

Francesco Mari commented on OAK-4738:
-

[~dulceanu], can we actually remove this "foobar" completely? I think it's not 
really adding any value to the test, to be honest.

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch
>
>






[jira] [Resolved] (OAK-4791) Enable animal sniffer plugin

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-4791.
---
   Resolution: Fixed
Fix Version/s: 1.5.10

Enabled in trunk: http://svn.apache.org/r1760340

The plugin is active by default and disabled only when the build runs with the 
{{fast}} profile.

> Enable animal sniffer plugin
> 
>
> Key: OAK-4791
> URL: https://issues.apache.org/jira/browse/OAK-4791
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.10
>
>
> I would like to use the animal sniffer plugin to enforce usage of the 
> advertised minimal Java version. For trunk this is currently Java 7, while 
> the 1.0, 1.2 and 1.4 branches are on Java 6.





[jira] [Commented] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483885#comment-15483885
 ] 

Julian Reschke commented on OAK-4779:
-

(verified for RDBDS with DB2)

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.6
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.





[jira] [Updated] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4779:

Attachment: OAK-4779-2.patch

Patch with slightly extended diagnostics.

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.6
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.





[jira] [Updated] (OAK-4712) Publish S3DataStore stats in JMX MBean

2016-09-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-4712:
---
Attachment: OAK_4712_6_Amit.patch

[~mattvryan] I made some changes and am attaching the patch here.
* Used PathUtils to parse the input path - helps in validating the paths
* Minor code changes for conditions in a couple of places and minor doc changes
* Some changes in the S3DataStoreStatsTest (WIP)
** Removed usage of the test file

Regarding the test, there are some changes that I suggest:
* Move the parts that require configuration of the S3DS into a new test in the 
oak-blob-cloud module itself.
* Use a mocked S3DS in the S3DataStoreStatsTest class
** Instead of mocking the node store, use MemoryNodeStore.
** Add tests for multiple properties, a property with a binary value, and a 
couple of tests for different paths to exercise the logic in S3DataStoreStats
** Possibly also remove the JMX registration logic and add a test in 
SegmentNodeStoreServiceTest to test registration, i.e. work directly with the 
S3DataStoreStats object.

Let me know what you think.
 

> Publish S3DataStore stats in JMX MBean
> --
>
> Key: OAK-4712
> URL: https://issues.apache.org/jira/browse/OAK-4712
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob
>Reporter: Matt Ryan
>Assignee: Amit Jain
> Attachments: OAK-4712.2.diff, OAK-4712.3.diff, OAK-4712.4.diff, 
> OAK-4712.5.diff, OAK_4712_6_Amit.patch
>
>
> This feature is to publish statistics about the S3DataStore via a JMX MBean.  
> There are two statistics suggested:
> * Indicate the number of files actively being synchronized from the local 
> cache into S3
> * Given a path to a local file, indicate whether synchronization of file into 
> S3 has completed





[jira] [Resolved] (OAK-4792) Replace usage of AssertionError in ClusterNodeInfo

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-4792.
---
   Resolution: Fixed
Fix Version/s: 1.5.10

Done in trunk: http://svn.apache.org/r1760326

> Replace usage of AssertionError in ClusterNodeInfo
> --
>
> Key: OAK-4792
> URL: https://issues.apache.org/jira/browse/OAK-4792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.10
>
>
> As discussed in OAK-4770, I would like to replace usage of AssertionError in 
> ClusterNodeInfo with DocumentStoreException and clarify the contract 
> accordingly.





[jira] [Resolved] (OAK-4790) Compilation error with JDK 6 in FileIOUtils

2016-09-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-4790.

Resolution: Fixed

* 1.2 - http://svn.apache.org/viewvc?rev=1760324&view=rev
* 1.4 - http://svn.apache.org/viewvc?rev=1760323&view=rev

> Compilation error with JDK 6 in FileIOUtils
> ---
>
> Key: OAK-4790
> URL: https://issues.apache.org/jira/browse/OAK-4790
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: commons
>Affects Versions: 1.2.18, 1.4.7
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.2.19, 1.4.8
>
>
> FileIOUtils uses StandardCharsets, which is available only from Java 1.7, and 
> thus fails compilation on Java 1.6





[jira] [Updated] (OAK-4790) Compilation error with JDK 6 in FileIOUtils

2016-09-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-4790:
---
Fix Version/s: 1.4.8
   1.2.19

> Compilation error with JDK 6 in FileIOUtils
> ---
>
> Key: OAK-4790
> URL: https://issues.apache.org/jira/browse/OAK-4790
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: commons
>Affects Versions: 1.2.18, 1.4.7
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.2.19, 1.4.8
>
>
> FileIOUtils uses StandardCharsets, which is available only from Java 1.7, and 
> thus fails compilation on Java 1.6





[jira] [Created] (OAK-4792) Replace usage of AssertionError in ClusterNodeInfo

2016-09-12 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-4792:
-

 Summary: Replace usage of AssertionError in ClusterNodeInfo
 Key: OAK-4792
 URL: https://issues.apache.org/jira/browse/OAK-4792
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, documentmk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.6


As discussed in OAK-4770, I would like to replace usage of AssertionError in 
ClusterNodeInfo with DocumentStoreException and clarify the contract 
accordingly.





[jira] [Updated] (OAK-4790) Compilation error with JDK 6 in FileIOUtils

2016-09-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4790:

Affects Version/s: 1.4.7

> Compilation error with JDK 6 in FileIOUtils
> ---
>
> Key: OAK-4790
> URL: https://issues.apache.org/jira/browse/OAK-4790
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: commons
>Affects Versions: 1.2.18, 1.4.7
>Reporter: Amit Jain
>Assignee: Amit Jain
>
> FileIOUtils uses StandardCharsets, which is available only from Java 1.7, and 
> thus fails compilation on Java 1.6





[jira] [Created] (OAK-4791) Enable animal sniffer plugin

2016-09-12 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-4791:
-

 Summary: Enable animal sniffer plugin
 Key: OAK-4791
 URL: https://issues.apache.org/jira/browse/OAK-4791
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: parent
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.6


I would like to use the animal sniffer plugin to enforce usage of the 
advertised minimal Java version. For trunk this is currently Java 7, while the 
1.0, 1.2 and 1.4 branches are on Java 6.
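The enforcement described above is typically done with the animal-sniffer-maven-plugin checking the build against a published JDK API signature. A hedged sketch of such a configuration follows; the plugin and signature versions here are illustrative, not necessarily what was committed (the branches would use the {{java16}} signature, trunk {{java17}}):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.15</version>
  <configuration>
    <signature>
      <!-- Java 1.6 API signature for the 1.0/1.2/1.4 branches -->
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java16</artifactId>
      <version>1.1</version>
    </signature>
  </configuration>
  <executions>
    <execution>
      <id>check-api</id>
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```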





[jira] [Commented] (OAK-4770) Missing exception handling in ClusterNodeInfo.renewLease()

2016-09-12 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483575#comment-15483575
 ] 

Marcel Reutegger commented on OAK-4770:
---

Discussed with Stefan offline: the intention of the AssertionError was to 
break through try/catch blocks and terminate the current thread. But the lease 
update thread currently also catches Errors and will not terminate. I will 
therefore try to replace the usage of AssertionError with DocumentStoreException 
and update the contract of renewLease() accordingly.
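A sketch of the direction described above: renewLease() signals an expired lease with a (simplified stand-in for) DocumentStoreException rather than an AssertionError, so callers like the lease update thread can catch and handle it uniformly. The lease logic is heavily simplified and not the actual ClusterNodeInfo code:

```java
public class LeaseSketch {
    // Simplified stand-in for Oak's DocumentStoreException.
    static class DocumentStoreException extends RuntimeException {
        DocumentStoreException(String message) { super(message); }
    }

    private long leaseEndTime;

    // Illustrative renewLease(): refuses to renew an already-expired lease
    // with an exception instead of an AssertionError.
    boolean renewLease(long now, long leaseTimeMillis) {
        if (leaseEndTime != 0 && now > leaseEndTime) {
            throw new DocumentStoreException("lease expired, refusing to renew");
        }
        leaseEndTime = now + leaseTimeMillis;
        return true;
    }
}
```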

> Missing exception handling in ClusterNodeInfo.renewLease()
> --
>
> Key: OAK-4770
> URL: https://issues.apache.org/jira/browse/OAK-4770
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.6
>
>
> ClusterNodeInfo.renewLease() does not handle a potential 
> DocumentStoreException on {{findAndModify()}}. This may leave 
> {{previousLeaseEndTime}} in an inconsistent state and a subsequent 
> {{renewLease()}} call then considers the lease timed out. 





[jira] [Created] (OAK-4790) Compilation error with JDK 6 in FileIOUtils

2016-09-12 Thread Amit Jain (JIRA)
Amit Jain created OAK-4790:
--

 Summary: Compilation error with JDK 6 in FileIOUtils
 Key: OAK-4790
 URL: https://issues.apache.org/jira/browse/OAK-4790
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: commons
Affects Versions: 1.2.18
Reporter: Amit Jain
Assignee: Amit Jain


FileIOUtils uses StandardCharsets, which is available only from Java 1.7, and 
thus fails compilation on Java 1.6





[jira] [Closed] (OAK-4153) segment's compareAgainstBaseState wont call childNodeDeleted when deleting last and adding n nodes

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4153.
-

Bulk close for 1.4.7

> segment's compareAgainstBaseState wont call childNodeDeleted when deleting 
> last and adding n nodes
> --
>
> Key: OAK-4153
> URL: https://issues.apache.org/jira/browse/OAK-4153
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Affects Versions: 1.2.13, 1.0.29, 1.4.1, 1.5.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Critical
> Fix For: 1.5.1, 1.4.7, 1.2.19, 1.0.34
>
> Attachments: OAK-4153-2.patch, OAK-4153-3.patch, OAK-4153.patch, 
> OAK-4153.simplified.patch
>
>
> {{SegmentNodeState.compareAgainstBaseState}} fails to call 
> {{NodeStateDiff.childNodeDeleted}} when for the same parent the only child is 
> deleted and at the same time multiple new, different children are added.
> Reason is that the [current 
> code|https://github.com/apache/jackrabbit-oak/blob/a9ce70b61567ffe27529dad8eb5d38ced77cf8ad/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentNodeState.java#L558]
>  for '{{afterChildName == MANY_CHILD_NODES}}' *and* '{{beforeChildName == 
> ONE_CHILD_NODE}}' does not handle all cases: it assumes that 'after' contains 
> the 'before' child and doesn't handle the situation where the 'before' child 
> has gone.
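The missing case described above can be sketched in isolation (this is an 
illustration of the diff contract, not the actual {{SegmentNodeState}} code; 
the {{Diff}} interface and {{compare}} method are simplified stand-ins):

```java
import java.util.Map;

// Illustrative sketch of the one-child-before vs. many-children-after case:
// the fix must also report a deletion when 'after' no longer contains the
// single 'before' child, instead of assuming it is still present.
public class OneToManyDiff {
    public interface Diff {
        void childNodeAdded(String name, Object state);
        void childNodeChanged(String name, Object before, Object after);
        void childNodeDeleted(String name, Object state);
    }

    public static void compare(String beforeName, Object beforeState,
                               Map<String, Object> after, Diff diff) {
        boolean beforeSeen = false;
        for (Map.Entry<String, Object> e : after.entrySet()) {
            if (e.getKey().equals(beforeName)) {
                beforeSeen = true;
                if (!e.getValue().equals(beforeState)) {
                    diff.childNodeChanged(beforeName, beforeState, e.getValue());
                }
            } else {
                diff.childNodeAdded(e.getKey(), e.getValue());
            }
        }
        if (!beforeSeen) {
            // the case the issue reports as missing in the original code
            diff.childNodeDeleted(beforeName, beforeState);
        }
    }
}
```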





[jira] [Closed] (OAK-4675) SNFE thrown while testing FileStore.cleanup() running concurrently with writes

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4675.
-

Bulk close for 1.4.7

> SNFE thrown while testing FileStore.cleanup() running concurrently with writes
> --
>
> Key: OAK-4675
> URL: https://issues.apache.org/jira/browse/OAK-4675
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, segmentmk
>Affects Versions: 1.0.32, 1.4.6, 1.2.18, Segment Tar 0.0.8
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>  Labels: cleanup, gc
> Fix For: Segment Tar 0.0.10, 1.4.7, 1.2.19, 1.0.34
>
> Attachments: OAK-4675-01.patch, OAK-4675-02.patch, 
> OAK-4675-oak-segment-03.patch, test-case.patch
>
>
> {{SegmentNotFoundException}} is thrown from time to time in the following 
> scenario: plenty of concurrent writes (each creating a {{625 bytes}} blob) 
> interrupted by a cleanup. 
> Stack trace (including some debugging statements added by me):
> {code:java}
> Pre cleanup readers: []
> Before cleanup readers: [/Users/dulceanu/work/test-repo/data0a.tar]
> Initial size: 357.4 kB
> After cleanup readers: [/Users/dulceanu/work/test-repo/data0a.tar]
> After cleanup size: 357.4 kB
> Final size: 361.0 kB
> Exception in thread "pool-5-thread-74" 
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Cannot copy 
> record from a generation that has been gc'ed already
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.isOldGeneration(SegmentWriter.java:1207)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.writeNodeUncached(SegmentWriter.java:1096)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.writeNode(SegmentWriter.java:1013)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.writeNodeUncached(SegmentWriter.java:1074)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.writeNode(SegmentWriter.java:1013)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.writeNode(SegmentWriter.java:987)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.access$700(SegmentWriter.java:379)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$8.execute(SegmentWriter.java:337)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentBufferWriterPool.execute(SegmentBufferWriterPool.java:105)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter.writeNode(SegmentWriter.java:334)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:111)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:550)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:571)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:627)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.merge(SegmentNodeStore.java:287)
>   at 
> org.apache.jackrabbit.oak.segment.CompactionAndCleanupIT$1.run(CompactionAndCleanupIT.java:961)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.jackrabbit.oak.segment.SegmentNotFoundException: 
> Segment 4fb637cc-5013-4925-ab13-0629c4406481 not found
>   at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:1341)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:123)
>   at 
> org.apache.jackrabbit.oak.segment.RecordId.getSegment(RecordId.java:94)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentWriter$SegmentWriteOperation.isOldGeneration(SegmentWriter.java:1199)
>   ... 18 more
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Invalid segment format. Dumping segment 
> 4fb637cc-5013-4925-ab13-0629c4406481
>  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 0070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 

[jira] [Closed] (OAK-4320) Use the cache tracker in the RDB Document Store

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4320.
-

Bulk close for 1.4.7

> Use the cache tracker in the RDB Document Store
> ---
>
> Key: OAK-4320
> URL: https://issues.apache.org/jira/browse/OAK-4320
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, rdbmk
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.6, 1.4.7
>
> Attachments: OAK-4320-2.patch, OAK-4320.patch
>
>
> OAK-4112 introduced the {{CacheChangesTracker}} mechanism, inspired by the 
> {{QueryContext}} class already present in the RDB DS.
> The tracker works at the {{NodeDocumentCache}} level, within the methods 
> modifying the cache values, using the same {{NodeDocumentLocks}}, which may 
> prevent the potential concurrency problems described in the comments to OAK-3566.
> We should synchronise both approaches, so that the queries in Mongo and RDB 
> use the same logic to update their caches.





[jira] [Closed] (OAK-4671) Update 1.4 to JR 2.12.3

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4671.
-

Bulk close for 1.4.7

> Update 1.4 to JR 2.12.3
> ---
>
> Key: OAK-4671
> URL: https://issues.apache.org/jira/browse/OAK-4671
> Project: Jackrabbit Oak
>  Issue Type: Task
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>Priority: Blocker
> Fix For: 1.4.7
>
>






[jira] [Closed] (OAK-4382) Test failure in ExternalGroupPrincipalProviderTest.testFindPrincipalsByHintTypeGroup

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4382.
-

Bulk close for 1.4.7

> Test failure in 
> ExternalGroupPrincipalProviderTest.testFindPrincipalsByHintTypeGroup
> 
>
> Key: OAK-4382
> URL: https://issues.apache.org/jira/browse/OAK-4382
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: auth-external
>Affects Versions: 1.4.7, 1.2.19
>Reporter: Julian Reschke
>Assignee: angela
>Priority: Minor
> Fix For: 1.4.7, 1.2.19
>
>
> java.lang.AssertionError: 
> expected:<[org.apache.jackrabbit.oak.spi.security.principal.PrincipalImpl:a, 
> org.apache.jackrabbit.oak.spi.security.principal.PrincipalImpl:aa, 
> org.apache.jackrabbit.oak.spi.security.principal.PrincipalImpl:aaa]> but 
> was:<[org.apache.jackrabbit.oak.spi.security.authentication.external.impl.principal.ExternalGroupPrincipalProvider$ExternalGroupPrincipal:a]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.jackrabbit.oak.spi.security.authentication.external.impl.principal.ExternalGroupPrincipalProviderTest.testFindPrincipalsByHintTypeGroup(ExternalGroupPrincipalProviderTest.java:357)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)





[jira] [Closed] (OAK-4743) Update Oak 1.2 and Oak 1.4 to Jackrabbit 2.12.4

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4743.
-

Bulk close for 1.4.7

> Update Oak 1.2 and Oak 1.4 to Jackrabbit 2.12.4
> ---
>
> Key: OAK-4743
> URL: https://issues.apache.org/jira/browse/OAK-4743
> Project: Jackrabbit Oak
>  Issue Type: Task
>Affects Versions: 1.4.6, 1.2.18
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.4.7, 1.2.19
>
>






[jira] [Closed] (OAK-4679) Backport OAK-4119, OAK-4101, OAK-4087 and OAK-4344

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4679.
-

Bulk close for 1.4.7

> Backport OAK-4119, OAK-4101, OAK-4087 and OAK-4344
> --
>
> Key: OAK-4679
> URL: https://issues.apache.org/jira/browse/OAK-4679
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external, auth-ldap, core
>Reporter: Dominique Jäggi
>Assignee: Dominique Jäggi
>Priority: Blocker
> Fix For: 1.4.7, 1.2.19
>
> Attachments: OAK-4679-osgi-export.patch
>
>






[jira] [Closed] (OAK-4001) ExternalLoginModule: Make max sync attempts configurable

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4001.
-

Bulk close for 1.4.7

> ExternalLoginModule: Make max sync attempts configurable
> 
>
> Key: OAK-4001
> URL: https://issues.apache.org/jira/browse/OAK-4001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: angela
>Priority: Minor
> Fix For: 1.4.7, 1.2.19
>
>
> improvement request reflecting open TODO in 
> {{ExternalLoginModule#MAX_SYNC_ATTEMPTS}}





[jira] [Closed] (OAK-4005) LdapIdentityProvider.getEntries() is prone to OOME.

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4005.
-

Bulk close for 1.4.7

> LdapIdentityProvider.getEntries() is prone to OOME.
> ---
>
> Key: OAK-4005
> URL: https://issues.apache.org/jira/browse/OAK-4005
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-ldap
>Affects Versions: 1.2.11, 1.3.15, 1.0.27, 1.4.0
>Reporter: Manfred Baedke
>Assignee: Manfred Baedke
>Priority: Minor
> Fix For: 1.6, 1.4.7, 1.2.19
>
>
> The public methods LdapIdentityProvider.listUsers() and 
> LdapIdentityProvider.listGroups() both call 
> LdapIdentityProvider.getEntries(...), which tries to collect all matching 
> results from the backend in one LinkedList. Since typical LDAP directories 
> are very large, this will usually yield an OutOfMemoryError.
> We'd need a cursor with connection handling here.
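The cursor idea suggested above can be sketched as a lazy, paged iterator. 
This is an illustration only: the {{PagedCursor}} and {{PageFetcher}} names 
and the page-by-offset protocol are assumptions, not the real 
{{LdapIdentityProvider}} API, and real LDAP paging would use paged-results 
controls plus connection handling.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Sketch: stream results page by page instead of materializing every
// matching entry in one LinkedList, keeping memory usage bounded.
public class PagedCursor<T> implements Iterator<T> {
    public interface PageFetcher<T> {
        // Returns the next page of at most 'limit' items starting at
        // 'offset'; an empty list signals that the source is exhausted.
        List<T> fetch(int offset, int limit);
    }

    private final PageFetcher<T> fetcher;
    private final int pageSize;
    private List<T> page = new ArrayList<T>();
    private int indexInPage = 0;
    private int offset = 0;
    private boolean exhausted = false;

    public PagedCursor(PageFetcher<T> fetcher, int pageSize) {
        this.fetcher = fetcher;
        this.pageSize = pageSize;
    }

    public boolean hasNext() {
        if (indexInPage < page.size()) {
            return true;
        }
        if (exhausted) {
            return false;
        }
        page = fetcher.fetch(offset, pageSize);
        offset += page.size();
        indexInPage = 0;
        if (page.isEmpty()) {
            exhausted = true;
        }
        return !page.isEmpty();
    }

    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return page.get(indexInPage++);
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}
```

At any point only one page is held in memory, so listing a huge directory no 
longer requires heap proportional to the result size.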





[jira] [Closed] (OAK-4678) Backport OAK-4344 and OAK-4005

2016-09-12 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4678.
-

Bulk close for 1.4.7

> Backport OAK-4344 and OAK-4005
> --
>
> Key: OAK-4678
> URL: https://issues.apache.org/jira/browse/OAK-4678
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-ldap
>Reporter: Dominique Jäggi
>Assignee: Dominique Jäggi
> Fix For: 1.4.7, 1.2.19
>
>






[jira] [Comment Edited] (OAK-4775) Investigate updating netty dependency

2016-09-12 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483529#comment-15483529
 ] 

Timothee Maret edited comment on OAK-4775 at 9/12/16 8:50 AM:
--

[~frm] The Netty dependencies are embedded in the cold standby bundle (TarMk 
and SegmentTar).
The Netty dependencies account for ~4MB, so it would make sense to share them 
between TarMk and SegmentTar. This could be done by running Netty in its own 
bundle.

[~frm] wdyt ?

cc [~dulceanu]


was (Author: marett):
[~frm] The Netty dependencies are embedded in the cold standby bundle (TarMk 
and SegmentTar).
The Netty dependencies account for ~4MB, thus it'd make sense to share it 
between TarMk and SegmentTar. This could be done by running Netty in its own 
bundle.

[~frm] wdyt ?

> Investigate updating netty dependency
> -
>
> Key: OAK-4775
> URL: https://issues.apache.org/jira/browse/OAK-4775
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.10
>Reporter: Timothee Maret
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4775-01.patch
>
>
> As proposed by [~dulceanu] in [0].
> [0] 
> https://issues.apache.org/jira/browse/OAK-4737?focusedCommentId=15471964&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15471964





[jira] [Commented] (OAK-4775) Investigate updating netty dependency

2016-09-12 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483529#comment-15483529
 ] 

Timothee Maret commented on OAK-4775:
-

[~frm] The Netty dependencies are embedded in the cold standby bundle (TarMk 
and SegmentTar).
The Netty dependencies account for ~4MB, so it would make sense to share them 
between TarMk and SegmentTar. This could be done by running Netty in its own 
bundle.

[~frm] wdyt ?

> Investigate updating netty dependency
> -
>
> Key: OAK-4775
> URL: https://issues.apache.org/jira/browse/OAK-4775
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.10
>Reporter: Timothee Maret
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4775-01.patch
>
>
> As proposed by [~dulceanu] in [0].
> [0] 
> https://issues.apache.org/jira/browse/OAK-4737?focusedCommentId=15471964&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15471964





[jira] [Updated] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4779:
--
Attachment: OAK-4779.patch

Proposed fix.

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.6
>
> Attachments: OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.
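The description suggests the fix direction: lease renewal must be a 
conditional update that fails once recovery has started. A minimal in-memory 
sketch of that idea (the {{LeaseSketch}} class, its fields, and its methods 
are assumptions for illustration, not Oak's actual {{ClusterNodeInfo}} API, 
which performs the equivalent conditional update against the document store):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: a renewal that is atomic with a "being recovered" check, so a
// node that is currently being recovered cannot extend its own lease.
public class LeaseSketch {
    static final class LeaseState {
        final long leaseEnd;
        final boolean beingRecovered;
        LeaseState(long leaseEnd, boolean beingRecovered) {
            this.leaseEnd = leaseEnd;
            this.beingRecovered = beingRecovered;
        }
    }

    private final AtomicReference<LeaseState> state;

    LeaseSketch(long initialLeaseEnd) {
        state = new AtomicReference<LeaseState>(
                new LeaseState(initialLeaseEnd, false));
    }

    // Called by the recovering cluster node before it starts recovery.
    void markRecovering() {
        LeaseState s;
        do {
            s = state.get();
        } while (!state.compareAndSet(s, new LeaseState(s.leaseEnd, true)));
    }

    // Conditional renewal: succeeds only while no recovery is in progress.
    boolean renewLease(long now, long leaseTime) {
        LeaseState s = state.get();
        if (s.beingRecovered) {
            return false; // another node is recovering us; do not extend
        }
        return state.compareAndSet(s, new LeaseState(now + leaseTime, false));
    }
}
```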





[jira] [Resolved] (OAK-4749) Include initial cost in stats for observation processing

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-4749.
---
   Resolution: Fixed
Fix Version/s: 1.5.10

Applied patch to trunk: http://svn.apache.org/r1760304

> Include initial cost in stats for observation processing
> 
>
> Key: OAK-4749
> URL: https://issues.apache.org/jira/browse/OAK-4749
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: observation
> Fix For: 1.6, 1.5.10
>
> Attachments: OAK-4749.patch
>
>
> The jackrabbit-jcr-commons {{ListenerTracker}} collects timing for JCR event 
> listeners. It tracks producer (oak internal) and consumer (JCR EventListener) 
> time. The initial producer cost is currently not reflected in these stats, 
> because {{ChangeProcessor}} in oak-jcr does an initial {{hasNext()}} on the 
> {{EventIterator}} outside of the {{ListenerTracker}}. For some listeners this 
> initial producer time may even account for the entire cost when the event 
> filter rejects all changes.
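The measurement gap described above can be illustrated with a small wrapper 
(the {{TimedIterator}} class is a hypothetical stand-in, not the actual 
{{ListenerTracker}} API): if the initial {{hasNext()}} happens outside the 
timed wrapper, its producer cost is lost; wrapping the iterator before that 
first call captures it.

```java
import java.util.Iterator;

// Illustrative only: attributes time spent inside the underlying (producer)
// iterator, including the very first hasNext() call.
public class TimedIterator<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private long producerNanos;

    public TimedIterator(Iterator<T> delegate) {
        this.delegate = delegate;
    }

    public boolean hasNext() {
        long t0 = System.nanoTime();
        try {
            return delegate.hasNext();
        } finally {
            producerNanos += System.nanoTime() - t0;
        }
    }

    public T next() {
        long t0 = System.nanoTime();
        try {
            return delegate.next();
        } finally {
            producerNanos += System.nanoTime() - t0;
        }
    }

    public void remove() {
        delegate.remove();
    }

    public long getProducerNanos() {
        return producerNanos;
    }
}
```

When the event filter rejects everything, the first {{hasNext()}} returning 
false is the entire cost, which is exactly what an untimed initial call misses.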





[jira] [Resolved] (OAK-4769) Update Jackrabbit version to 2.13.3

2016-09-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-4769.
---
   Resolution: Fixed
Fix Version/s: 1.5.10

Done in trunk: http://svn.apache.org/r1760302

> Update Jackrabbit version to 2.13.3
> ---
>
> Key: OAK-4769
> URL: https://issues.apache.org/jira/browse/OAK-4769
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: parent
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.10
>
>






[jira] [Updated] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-4738:
-
Attachment: OAK-4738-02.patch

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch
>
>






[jira] [Commented] (OAK-4738) Fix unit tests in FailoverIPRangeTest

2016-09-12 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483430#comment-15483430
 ] 

Andrei Dulceanu commented on OAK-4738:
--

The reason why only one test in the whole class failed was a timeout during 
the DNS lookup of an invalid hostname. Making sure that lookup happens before 
any test runs solved the problem, since subsequent uses of the invalid 
hostname are served from the cache. 

[~frm], can you take a look at the latest patch, please?
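The warm-up described above can be sketched as follows (the {{DnsWarmup}} 
class and its method are illustrative, not the actual patch; in the test class 
this would typically run in a {{@BeforeClass}} method):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: resolve the hostname once up front. For an invalid hostname the
// negative result is cached by the JVM resolver (subject to the
// networkaddress.cache.negative.ttl security property), so later lookups
// fail fast instead of timing out mid-test.
public class DnsWarmup {
    static boolean warmUp(String host) {
        try {
            InetAddress.getByName(host); // may block up to the resolver timeout
            return true;
        } catch (UnknownHostException e) {
            return false; // negative result now served from the cache
        }
    }
}
```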

> Fix unit tests in FailoverIPRangeTest
> -
>
> Key: OAK-4738
> URL: https://issues.apache.org/jira/browse/OAK-4738
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
> Fix For: Segment Tar 0.0.20
>
> Attachments: OAK-4738-01.patch, OAK-4738-02.patch
>
>






[jira] [Resolved] (OAK-4736) Fix integration test in StandbyTestIT

2016-09-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu resolved OAK-4736.
--
Resolution: Fixed

Since this test has been passing consistently in Jenkins following the latest 
changes, I'm marking this one as resolved.

> Fix integration test in StandbyTestIT
> -
>
> Key: OAK-4736
> URL: https://issues.apache.org/jira/browse/OAK-4736
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.12
>
> Attachments: OAK-4736-01.patch
>
>



