[jira] [Comment Edited] (OAK-4969) ColdStandby does not fetch missing blobs

2016-10-28 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616810#comment-15616810
 ] 

Timothee Maret edited comment on OAK-4969 at 10/28/16 11:15 PM:


[~frm] thanks for your insight. I found a clean way to fix this. Actually, 
{{StandbyDiff}} makes it possible to intercept the exception at the required 
node level and attempt to load the node's binary properties.

I have attached a patch that contains
1. a test that reproduces the issue
2. the fix for trunk

Could you have a look at the patch?

This issue affects the previous standby implementation ({{oak-tarmk-standby}}). 
Should I open a separate issue to track the fix there (AFAIK, it is not 
strictly speaking a backport)?


was (Author: marett):
[~frm] thanks for your insight. I found a clean way to fix this. Actually the 
{{StandbyDiff}} allows to intercept the exception and attempt loading its 
binary properties.

I have attached a patch that contains
1. a test that reproduce the issue
2. the fix for the trunk

Could you have a look at the patch ?

This issue affects the previous standby implementation ({{oak-tarmk-standby}}). 
Should I open a separate issue for tracking the fix there (AFAIK, it is not 
strictly speaking a backport) ?

> ColdStandby does not fetch missing blobs
> 
>
> Key: OAK-4969
> URL: https://issues.apache.org/jira/browse/OAK-4969
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.10
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4969.patch
>
>
> 1. Set up two instances (primary, standby) configured with a custom blob 
> store (file blob store).
> 2. On the primary instance, set/update a BINARY property of an existing 
> resource with a binary larger than 2 MB.
> 3. Observe that the standby instance does not fetch the binary and instead 
> enters a loop detecting the missing binary when comparing node states.
> For example, the following stack trace is printed every 5 seconds on 
> the standby (the polling interval is 5 seconds). 
> {code}
> 19.10.2016 16:22:38.035 *DEBUG* [nioEventLoopGroup-1005-1] 
> org.apache.jackrabbit.oak.segment.standby.codec.ResponseDecoder Decoding 'get 
> head' response
> 19.10.2016 16:22:38.038 *DEBUG* [sling-default-81-Registered Service.607] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClient Channel closed
> 19.10.2016 16:22:40.241 *DEBUG* [sling-default-81-Registered Service.607] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClient Group shut down
> 19.10.2016 16:22:40.242 *ERROR* [sling-default-81-Registered Service.607] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> java.lang.RuntimeException: Error occurred while obtaining InputStream for 
> blobId [4dfc748c91d518c9221031ec6115fd7ac04fe27b#10]
>   at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:49)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.getNewStream(SegmentBlob.java:252)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.getNewStream(SegmentBlob.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.getInput(AbstractBlob.java:45)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.getInput(AbstractBlob.java:42)
>   at com.google.common.io.ByteStreams$3.openStream(ByteStreams.java:907)
>   at com.google.common.io.ByteSource.contentEquals(ByteSource.java:301)
>   at com.google.common.io.ByteStreams.equal(ByteStreams.java:661)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:68)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.equals(SegmentBlob.java:193)
>   at com.google.common.base.Objects.equal(Objects.java:55)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:53)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentPropertyState.equals(SegmentPropertyState.java:249)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:622)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:516)
>   at 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyDiff.process(StandbyDiff.java:216)
>   at 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyDiff.childNodeChanged(StandbyDiff.java:186)
>   at 
> org.apache.jackrabbit.oak.segment.MapRecord.compare(MapRecord.java:415)
>   at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:609)
>   at 
> 

[jira] [Updated] (OAK-4969) ColdStandby does not fetch missing blobs

2016-10-28 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4969:

Description: 
1. Set up two instances (primary, standby) configured with a custom blob 
store (file blob store).
2. On the primary instance, set/update a BINARY property of an existing 
resource with a binary larger than 2 MB.
3. Observe that the standby instance does not fetch the binary and instead 
enters a loop detecting the missing binary when comparing node states.


[jira] [Updated] (OAK-4969) ColdStandby does not fetch missing blobs

2016-10-28 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4969:

Flags: Patch


[jira] [Updated] (OAK-4969) ColdStandby does not fetch missing blobs

2016-10-28 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4969:

Attachment: OAK-4969.patch


[jira] [Commented] (OAK-4969) ColdStandby does not fetch missing blobs

2016-10-28 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616810#comment-15616810
 ] 

Timothee Maret commented on OAK-4969:
-

[~frm] thanks for your insight. I found a clean way to fix this. Actually, 
{{StandbyDiff}} makes it possible to intercept the exception and attempt to 
load the binary properties.

I have attached a patch that contains
1. a test that reproduces the issue
2. the fix for trunk

Could you have a look at the patch?

This issue affects the previous standby implementation ({{oak-tarmk-standby}}). 
Should I open a separate issue to track the fix there (AFAIK, it is not 
strictly speaking a backport)?


[jira] [Commented] (OAK-5033) JCR Query Row's public getScore() has hardcoded value of 0.01

2016-10-28 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615792#comment-15615792
 ] 

Vikas Saurabh commented on OAK-5033:


For the specific case of understanding how scoring works, check out 
{{oak:scoreExplanation}} (explained in the Oak Lucene documentation; I 
currently have poor internet connectivity, otherwise I would have linked it 
directly).

> JCR Query Row's public getScore() has hardcoded value of 0.01
> -
>
> Key: OAK-5033
> URL: https://issues.apache.org/jira/browse/OAK-5033
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: David Gonzalez
>Priority: Minor
>
> RowImpl.java [1] hardcodes the result of the public API getScore() to 0.01.
> This is problematic when trying to debug/understand the effects of index 
> boosting on search results. 
> There is documentation that helps illustrate how boosting works with respect 
> to Lucene, but it is generally inaccessible to the lay developer/QA. 
> Understanding how the boost affects the result score would be quite helpful 
> in understanding the effects of boost.
> Note: I understand this does not affect the internal scoring/ordering; 
> however, this is a public API and should return the expected value.
> [1] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/query/RowImpl.java#L84
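The reported behavior boils down to the following hypothetical sketch (the class name is made up for illustration; the constant mirrors the issue description, not the actual RowImpl source):

```java
// Hypothetical minimal sketch of the reported behavior, NOT the actual
// Oak RowImpl source: the public getScore() returns a constant, so index
// boosting never shows up in the value callers observe.
public class RowSketch {
    public double getScore() {
        return 0.01; // hardcoded, per the issue description
    }
}
```

Any two rows compared through this public API thus appear to have identical relevance, even when the underlying Lucene scores differ.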



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4042) Full text search doesn't work for prefix text containing GB-18030 characters

2016-10-28 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4042:
---
Fix Version/s: (was: 1.6)

> Full text search doesn't work for prefix text containing GB-18030 characters
> 
>
> Key: OAK-4042
> URL: https://issues.apache.org/jira/browse/OAK-4042
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>
> For a full text indexed field {{text}} and a node having
> {{/a/b/@text="some text normaltextsuffix and 中文标题suffix."}}, this node should 
> be returned for:
> {{SELECT * from \[nt:base] WHERE CONTAINS([text], '中文标题*')}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3207) Exercise for Password Expiry and History

2016-10-28 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3207:

Fix Version/s: (was: 1.6)

> Exercise for Password Expiry and History
> 
>
> Key: OAK-3207
> URL: https://issues.apache.org/jira/browse/OAK-3207
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: exercise
>Reporter: angela
>Assignee: Dominique Jäggi
>Priority: Minor
>
> The exercise about password expiry ({{L12_PasswordExpiryTest}}) is still 
> marked with TODOs, as I ran out of time when creating the first batch of 
> exercises. Nevertheless, it might be helpful to invent some exercises around 
> the password expiration feature (maybe in combination with the latest 
> password history feature) to train understanding of these features, how to 
> use them, and where the limitations are.
> [~djaeggi], I take the liberty of assigning this to you since you are the 
> original author of those features. Obviously no pressure whatsoever, just 
> in case you have some time left and wish to make more contributions :-)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4696) Improve logging for SyncHandler

2016-10-28 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4696.
-
   Resolution: Fixed
Fix Version/s: 1.5.14

Committed revision 1767025; I applied the patch with minor modifications as 
noted in the pull request.


> Improve logging for SyncHandler
> ---
>
> Key: OAK-4696
> URL: https://issues.apache.org/jira/browse/OAK-4696
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Affects Versions: 1.5.8
>Reporter: Konrad Windszus
>Assignee: angela
> Fix For: 1.5.14
>
>
> In {{ExternalLoginModule.syncUser}}, {{context.sync(..)}} is called 
> without evaluating the return value at all 
> (https://github.com/apache/jackrabbit-oak/blob/trunk/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L354).
>  The return value should be logged to find out which sync took place.
> Also, I wonder whether it would make sense to call {{root.commit()}} only 
> when the return value is not {{SyncResult.NOP}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4274) Memory-mapped files can't be explicitly unmapped

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615491#comment-15615491
 ] 

Tomek Rękawek commented on OAK-4274:


It's also worth noting the difference between UNIX and Windows. While UNIX 
allows removing an mmapped file, Windows throws an exception saying that the 
file is still in use.

For segment-tar [we saw|OAK-4828] such exceptions on Windows machines, 
which may mean that the file had not been released (possibly because of the 
buffer leakage mentioned by Michael/Francesco).

> Memory-mapped files can't be explicitly unmapped
> 
>
> Key: OAK-4274
> URL: https://issues.apache.org/jira/browse/OAK-4274
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, segmentmk
>Reporter: Francesco Mari
>  Labels: gc, resilience
> Fix For: 1.6, 1.5.14
>
>
> As described by [this JDK 
> bug|http://bugs.java.com/view_bug.do?bug_id=4724038], there is no way to 
> explicitly unmap memory mapped files. A memory mapped file is unmapped only 
> if the corresponding {{MappedByteBuffer}} is garbage collected by the JVM.
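A minimal sketch of the limitation (pure {{java.nio}}; the class name is made up for illustration): closing the channel does not release the mapping, because {{MappedByteBuffer}} exposes no unmap method.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // Map a small temp file, write an int through the mapping, read it back.
    static int roundTrip() {
        try {
            Path file = Files.createTempFile("mmap", ".bin");
            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                MappedByteBuffer buf =
                        ch.map(FileChannel.MapMode.READ_WRITE, 0, 16);
                buf.putInt(0, 42);
                // Closing the channel does NOT unmap the buffer; the mapping
                // stays alive until 'buf' is garbage collected. On Windows,
                // deleting 'file' while the mapping is live can fail with a
                // "file is in use" error; on UNIX the delete succeeds.
                return buf.getInt(0);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints 42
    }
}
```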



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4887) Query cost estimation: ordering by an unindexed property not reflected

2016-10-28 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615393#comment-15615393
 ] 

Thomas Mueller commented on OAK-4887:
-

Thanks a lot!

I believe the Lucene index implementation only returns one plan right now, so 
the query engine can't decide which Lucene index to use (damAssetLuceneCreated 
or damAssetLucene). It would probably be best if the Lucene index returned one 
plan per index, so that the query engine can decide which one is best to use. I 
don't think internal API changes are needed. After that, some smaller changes 
are probably needed in the query engine to account for the limit (if set).

> Query cost estimation: ordering by an unindexed property not reflected
> --
>
> Key: OAK-4887
> URL: https://issues.apache.org/jira/browse/OAK-4887
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.4.2
>Reporter: Alexander Klimetschek
>Assignee: Thomas Mueller
> Fix For: 1.6
>
>
> A query that orders by an unindexed property seems to have no effect on the 
> cost estimation, compared to the same query without the order by, although it 
> has a big impact on the execution performance for larger results/indexes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4075) Suggestion index update requires at least one non-hidden commit to get kick-started

2016-10-28 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-4075:
---
Fix Version/s: (was: 1.6)

> Suggestion index update requires at least one non-hidden commit to get 
> kick-started
> ---
>
> Key: OAK-4075
> URL: https://issues.apache.org/jira/browse/OAK-4075
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>
> This is related to OAK-4066. The fix over there to improve suggestion 
> generation was to build suggestions for any (visible) change in repository if 
> {{suggestUpdateFrequencyMinutes}} have elapsed.
> It might be useful to do better and not require even that commit. 
> {{AsyncIndexUpdate}} even today, does poll for visible changes - in absence 
> of those it doesn't invoke IndexEditors. To fix this issue, we might want to 
> open a hook from AsyncIndexUpdate to do such periodical index-related tasks 
> (I think it's easier to hook into polling done by async indexing than 
> creating a new timer altogether).





[jira] [Resolved] (OAK-5018) Warn traversal queries: false positives

2016-10-28 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-5018.
-
Resolution: Fixed

> Warn traversal queries: false positives
> ---
>
> Key: OAK-5018
> URL: https://issues.apache.org/jira/browse/OAK-5018
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.5.13
>
>
> OAK-4888 logs a warning by default if there are queries that traverse. There 
> are false positives, for example if "ischildnode" is used:
> {noformat}
> /jcr:root/content/*/jcr:content[@xyz]
> {noformat}
> This needs to be fixed. 
> Temporarily, I will change the "warn" to "info" until this is resolved.





[jira] [Commented] (OAK-5006) Persistent cache: improve concurrency

2016-10-28 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615363#comment-15615363
 ] 

Thomas Mueller commented on OAK-5006:
-

http://svn.apache.org/r1767018 (trunk) make cache size configurable

> Persistent cache: improve concurrency
> -
>
> Key: OAK-5006
> URL: https://issues.apache.org/jira/browse/OAK-5006
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.6
>
>
> In some highly concurrent benchmarks (100 threads or so), the persistent 
> cache is sometimes blocked, mainly in the (in-memory) cache. This should be 
> resolved. One problem seems to be fixed in a newer version of the MVStore 
> (the "file system cache"), and the other cache bottleneck should be solved 
> using a config option for the segmentCount.





[jira] [Updated] (OAK-3973) There should be a way to get filtered suggestions based on some path

2016-10-28 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3973:
---
Fix Version/s: (was: 1.6)

> There should be a way to get filtered suggestions based on some path
> 
>
> Key: OAK-3973
> URL: https://issues.apache.org/jira/browse/OAK-3973
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
> Attachments: OAK-3973.wip.patch
>
>
> Currently, suggestions are indexed at the index level - so, when querying for 
> suggestions, the result incorporates all documents indexed by that index.
> This doesn't allow filtering suggestions for a subset of the indexed 
> documents - at least we should be able to get filtered suggestions based on a 
> root path.
> This seems very similar to the functionality provided by Lucene via 
> LUCENE-5528, which was fixed in Lucene 4.8. Since Oak is still on 4.7, we 
> can't exploit that feature.





[jira] [Commented] (OAK-3973) There should be a way to get filtered suggestions based on some path

2016-10-28 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615329#comment-15615329
 ] 

Vikas Saurabh commented on OAK-3973:


The last thought around doing this cleanly was to use Lucene's 
AnalyzingInfixSuggester with its context support - but that only shipped in 
Lucene 4.8. So, this is essentially blocked by our bump to a newer Lucene (and 
of course some work around using it afterwards). In any case, I don't think 
that this issue needs to be done before 1.6... So, unscheduling it.

> There should be a way to get filtered suggestions based on some path
> 
>
> Key: OAK-3973
> URL: https://issues.apache.org/jira/browse/OAK-3973
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
> Fix For: 1.6
>
> Attachments: OAK-3973.wip.patch
>
>
> Currently, suggestions are indexed at the index level - so, when querying for 
> suggestions, the result incorporates all documents indexed by that index.
> This doesn't allow filtering suggestions for a subset of the indexed 
> documents - at least we should be able to get filtered suggestions based on a 
> root path.
> This seems very similar to the functionality provided by Lucene via 
> LUCENE-5528, which was fixed in Lucene 4.8. Since Oak is still on 4.7, we 
> can't exploit that feature.





[jira] [Comment Edited] (OAK-4943) Keep Lucene Commits so that if the Segments file gets corrupted recovery can be attempted.

2016-10-28 Thread Ian Boston (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615248#comment-15615248
 ] 

Ian Boston edited comment on OAK-4943 at 10/28/16 12:30 PM:


FYI: I am working on a Sling WebConsole UI to provide detail on all commits, 
including the age of each commit and functionality to analyse each commit, 
validate, rollback and download the commit. This is FYI as it doesn't fit 
inside the Oak project and may be better in Sling or a contrib repo.


was (Author: ianeboston):
FYI: I am working on a Sling WebConsole UI to provide detail on all commits, 
including age of each commit and functionality to analyse each commit, 
validate, rollback and download the commit.

> Keep Lucene Commits so that if the Segments file gets corrupted recovery can 
> be attempted.
> --
>
> Key: OAK-4943
> URL: https://issues.apache.org/jira/browse/OAK-4943
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.6
>Reporter: Ian Boston
>
> Currently, Lucene deletes all previous generations of the segments files, as 
> it uses the default IndexDeletionPolicy. Changing this to an 
> IndexDeletionPolicy that keeps a number of previous segments files will allow 
> an operator to find the most recent valid commit when the current segments 
> file reports corruption. The patch found at 
> https://github.com/apache/jackrabbit-oak/compare/trunk...ieb:KeepLuceneCommits
>  keeps 10 previous commits.
> A more sophisticated policy could be used to save commits non-linearly over 
> time.
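The retention idea can be illustrated with a small self-contained sketch (this 
is not the actual Lucene IndexDeletionPolicy API; class and method names are 
invented): given the generations of past commits in increasing order, keep the 
N most recent and report the rest as deletable.

```java
import java.util.*;

// Illustration only: a "keep last N commits" retention rule, as used by the
// linked patch (which keeps 10 previous commits).
class KeepLastNCommits {
    // Returns the generations that may be deleted, oldest first.
    static List<Long> commitsToDelete(List<Long> generations, int keep) {
        int cut = Math.max(0, generations.size() - keep);
        return new ArrayList<>(generations.subList(0, cut));
    }
}
```

A non-linear policy, as suggested above, would replace the simple cut-off with 
a rule that thins out older generations while keeping a few distant ones.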





[jira] [Commented] (OAK-4943) Keep Lucene Commits so that if the Segments file gets corrupted recovery can be attempted.

2016-10-28 Thread Ian Boston (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615248#comment-15615248
 ] 

Ian Boston commented on OAK-4943:
-

FYI: I am working on a Sling WebConsole UI to provide detail on all commits, 
including age of each commit and functionality to analyse each commit, 
validate, rollback and download the commit.

> Keep Lucene Commits so that if the Segments file gets corrupted recovery can 
> be attempted.
> --
>
> Key: OAK-4943
> URL: https://issues.apache.org/jira/browse/OAK-4943
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.6
>Reporter: Ian Boston
>
> Currently, Lucene deletes all previous generations of the segments files, as 
> it uses the default IndexDeletionPolicy. Changing this to an 
> IndexDeletionPolicy that keeps a number of previous segments files will allow 
> an operator to find the most recent valid commit when the current segments 
> file reports corruption. The patch found at 
> https://github.com/apache/jackrabbit-oak/compare/trunk...ieb:KeepLuceneCommits
>  keeps 10 previous commits.
> A more sophisticated policy could be used to save commits non-linearly over 
> time.





[jira] [Commented] (OAK-4943) Keep Lucene Commits so that if the Segments file gets corrupted recovery can be attempted.

2016-10-28 Thread Ian Boston (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15615246#comment-15615246
 ] 

Ian Boston commented on OAK-4943:
-

OAK-4943 is a better solution to the problem

> Keep Lucene Commits so that if the Segments file gets corrupted recovery can 
> be attempted.
> --
>
> Key: OAK-4943
> URL: https://issues.apache.org/jira/browse/OAK-4943
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.6
>Reporter: Ian Boston
>
> Currently, Lucene deletes all previous generations of the segments files, as 
> it uses the default IndexDeletionPolicy. Changing this to an 
> IndexDeletionPolicy that keeps a number of previous segments files will allow 
> an operator to find the most recent valid commit when the current segments 
> file reports corruption. The patch found at 
> https://github.com/apache/jackrabbit-oak/compare/trunk...ieb:KeepLuceneCommits
>  keeps 10 previous commits.
> A more sophisticated policy could be used to save commits non-linearly over 
> time.





[jira] [Updated] (OAK-5032) Update Groovy version in oak-run to 2.4.7

2016-10-28 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-5032:
-
Description: 
oak-run currently packages Groovy 2.3.1 while latest is 2.4.7. We should update 
to latest for new features and better performance

With invokedynamic support in the new version, a Groovy script [1] which was 
taking 1 hr on a big repository now takes 30 mins! 

[1] https://gist.github.com/chetanmeh/d7588d96a839dd2d26760913e4055215

  was:
oak-run currently packages Groovy 2.3.1 while latest is 2.4.7. We should update 
to latest for new features and better performance

With invokedynamic support in the new version, a Groovy script which was 
taking 1 hr on a big repository now takes 30 mins!


> Update Groovy version in oak-run to 2.4.7
> -
>
> Key: OAK-5032
> URL: https://issues.apache.org/jira/browse/OAK-5032
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.6, 1.5.13
>
>
> oak-run currently packages Groovy 2.3.1 while latest is 2.4.7. We should 
> update to latest for new features and better performance
> With invokedynamic support in the new version, a Groovy script [1] which was 
> taking 1 hr on a big repository now takes 30 mins! 
> [1] https://gist.github.com/chetanmeh/d7588d96a839dd2d26760913e4055215





[jira] [Resolved] (OAK-5032) Update Groovy version in oak-run to 2.4.7

2016-10-28 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5032.
--
   Resolution: Fixed
Fix Version/s: 1.5.13

* Switched to gmaven-plus plugin as gmaven does not work with new Groovy version
* Switched to 'indy' release to make use of invokedynamic support
* Updated jline to latest release

Done with 1767008

> Update Groovy version in oak-run to 2.4.7
> -
>
> Key: OAK-5032
> URL: https://issues.apache.org/jira/browse/OAK-5032
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.6, 1.5.13
>
>
> oak-run currently packages Groovy 2.3.1 while latest is 2.4.7. We should 
> update to latest for new features and better performance
> With invokedynamic support in the new version, a Groovy script which was 
> taking 1 hr on a big repository now takes 30 mins!





[jira] [Created] (OAK-5032) Update Groovy version in oak-run to 2.4.7

2016-10-28 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-5032:


 Summary: Update Groovy version in oak-run to 2.4.7
 Key: OAK-5032
 URL: https://issues.apache.org/jira/browse/OAK-5032
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: run
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.6


oak-run currently packages Groovy 2.3.1 while latest is 2.4.7. We should update 
to latest for new features and better performance

With invokedynamic support in the new version, a Groovy script which was 
taking 1 hr on a big repository now takes 30 mins!





[jira] [Comment Edited] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614762#comment-15614762
 ] 

Tomek Rękawek edited comment on OAK-4882 at 10/28/16 9:44 AM:
--

I committed the async queue simplification as discussed above. The commit 
also includes the heuristics suggested by Chetan.

Trunk: [r1766966|https://svn.apache.org/r1766966], 
[r1766992|https://svn.apache.org/r1766992].
1.4: [r1766970|https://svn.apache.org/r1766970].


was (Author: tomek.rekawek):
I committed the async queue simplification as discussed above. The commit 
also includes the heuristics suggested by Chetan.

Trunk: [r1766966|https://svn.apache.org/r1766966]
1.4: [r1766970|https://svn.apache.org/r1766970].

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4882.patch
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford losing the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.
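The proposed optimisation can be sketched as follows (invented names and tiny 
queue sizes for illustration; this is not the actual CacheActionDispatcher 
code): drain the oldest entries while holding the lock, but perform the 
expensive compaction into a single invalidate action only after releasing it.

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: compaction of drained entries runs outside the critical
// section, so other writers are not blocked while it happens.
class Dispatcher {
    static final int MAX_SIZE = 4; // real queue holds 1024 entries
    static final int DROP = 2;     // real dispatcher drops the oldest 256

    private final ReentrantLock lock = new ReentrantLock();
    private final ArrayDeque<String> queue = new ArrayDeque<>();
    List<String> lastInvalidated = List.of(); // result of the last compaction

    void add(String action) {
        List<String> drained = null;
        lock.lock();
        try {
            if (queue.size() >= MAX_SIZE) {
                drained = new ArrayList<>();
                for (int i = 0; i < DROP; i++) {
                    drained.add(queue.poll());
                }
            }
            queue.add(action);
        } finally {
            lock.unlock(); // lock released before the compaction below
        }
        if (drained != null) {
            // stand-in for building the single big invalidate action
            lastInvalidated = List.copyOf(drained);
        }
    }

    int size() { return queue.size(); }
}
```

In the real fix the compacted invalidate action would still have to be applied 
or re-queued; the sketch only shows moving that work outside the lock.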





[jira] [Resolved] (OAK-5031) Log configuration deprecation messages at WARN level

2016-10-28 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-5031.
-
   Resolution: Fixed
 Assignee: Francesco Mari  (was: Timothee Maret)
Fix Version/s: 1.5.13

Committed at r1766985.

> Log configuration deprecation messages at WARN level
> 
>
> Key: OAK-5031
> URL: https://issues.apache.org/jira/browse/OAK-5031
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.16
>Reporter: Timothee Maret
>Assignee: Francesco Mari
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-5031.patch
>
>
> The {{StandbyStoreServiceDeprecationError}} and 
> {{SegmentNodeStoreServiceDeprecationError}} detect the presence of deprecated 
> configurations and log a warning message at {{ERROR}} level.
> The level should be revised to {{WARN}} since having a deprecated 
> configuration does not break the instance (it's not an error).





[jira] [Resolved] (OAK-5029) Use head GC generation number to trigger cleanup on standby instance

2016-10-28 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-5029.
-
   Resolution: Fixed
 Assignee: Francesco Mari  (was: Timothee Maret)
Fix Version/s: 1.5.13
   1.6

Committed at r1766984.

> Use head GC generation number to trigger cleanup on standby instance 
> -
>
> Key: OAK-5029
> URL: https://issues.apache.org/jira/browse/OAK-5029
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.16
>Reporter: Timothee Maret
>Assignee: Francesco Mari
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-5029.patch
>
>
> With {{oak-segment-tar}}, the trigger for running {{cleanup}} on the standby 
> instance could be determined by the GC generation number of the head, which 
> is bound to increase every time a cleanup is run.
> {code}
> fileStore.getHead().getRecordId().getSegment().getGcGeneration();
> {code}
> The current trigger mechanism consists of detecting a 25% size increase over 
> a client cycle (typ. 5 sec).
> This would be dropped in favor of the new detection mechanism.
> The auto-compaction mode would remain configurable.
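A minimal sketch of the proposed trigger (hypothetical class and method names; 
only the {{getGcGeneration()}} call quoted above comes from the Oak API): 
remember the last generation observed and fire cleanup only when it increases.

```java
// Hypothetical sketch: cleanup fires only when the observed GC generation
// increases. The generation itself would come from
// fileStore.getHead().getRecordId().getSegment().getGcGeneration().
class CleanupTrigger {
    private int lastSeenGeneration = -1;

    boolean shouldCleanup(int currentGeneration) {
        // The first observation only records the baseline; no cleanup yet.
        boolean increased = lastSeenGeneration >= 0
                && currentGeneration > lastSeenGeneration;
        lastSeenGeneration = Math.max(lastSeenGeneration, currentGeneration);
        return increased;
    }
}
```

Compared with the 25% size heuristic, this reacts exactly once per garbage 
collection cycle on the primary, independent of how fast the store grows.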





[jira] [Commented] (OAK-4834) Make the role configurable for the SegmentNodeStore

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614812#comment-15614812
 ] 

Tomek Rękawek commented on OAK-4834:


Since we have the SegmentNodeStoreFactory supporting the role-setting, there's 
no need to provide the same feature in the SegmentNodeStoreService.

> Make the role configurable for the SegmentNodeStore
> ---
>
> Key: OAK-4834
> URL: https://issues.apache.org/jira/browse/OAK-4834
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, segmentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
>Priority: Minor
> Fix For: 1.6
>
> Attachments: OAK-4834.patch
>
>
> Right now the SegmentNodeStoreService can be configured with the secondary 
> boolean property. It'll be then registered as NodeStoreProvider with 
> {{role=secondary}}.
> Maybe we can make it more flexible and allow configuring any role, using the 
> {{nsProvider.role}} property?





[jira] [Resolved] (OAK-4834) Make the role configurable for the SegmentNodeStore

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek resolved OAK-4834.

Resolution: Won't Fix

> Make the role configurable for the SegmentNodeStore
> ---
>
> Key: OAK-4834
> URL: https://issues.apache.org/jira/browse/OAK-4834
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, segmentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
>Priority: Minor
> Fix For: 1.6
>
> Attachments: OAK-4834.patch
>
>
> Right now the SegmentNodeStoreService can be configured with the secondary 
> boolean property. It'll be then registered as NodeStoreProvider with 
> {{role=secondary}}.
> Maybe we can make it more flexible and allow configuring any role, using the 
> {{nsProvider.role}} property?





[jira] [Comment Edited] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-28 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614788#comment-15614788
 ] 

Chetan Mehrotra edited comment on OAK-4882 at 10/28/16 8:50 AM:


[~tomek.rekawek] It would be helpful to invest in a benchmark which can 
reproduce the issue and validate any approach against it. Otherwise we may 
hit the issue somewhere else. 

Also, I would feel more comfortable if this change got some more test runs 
before we merge it to the branch. 


was (Author: chetanm):
[~tomek.rekawek] It would be helpful to invest in a benchmark which can 
reproduce the issue and validate any approach against it. Otherwise we may 
hit the issue somewhere else. 

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4882.patch
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford losing the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.





[jira] [Commented] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-28 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614788#comment-15614788
 ] 

Chetan Mehrotra commented on OAK-4882:
--

[~tomek.rekawek] It would be helpful to invest in a benchmark which can 
reproduce the issue and validate any approach against it. Otherwise we may 
hit the issue somewhere else. 

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4882.patch
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford losing the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.





[jira] [Comment Edited] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614762#comment-15614762
 ] 

Tomek Rękawek edited comment on OAK-4882 at 10/28/16 8:43 AM:
--

I committed the async queue simplification as discussed above. The commit 
also includes the heuristics suggested by Chetan.

Trunk: [r1766966|https://svn.apache.org/r1766966]
1.4: [r1766970|https://svn.apache.org/r1766970].


was (Author: tomek.rekawek):
I committed the async queue simplification as discussed above. The commit 
also includes the heuristics suggested by Chetan.

Trunk: [r1766966|https://svn.apache.org/r1766966]

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4882.patch
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford losing the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.





[jira] [Commented] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614762#comment-15614762
 ] 

Tomek Rękawek commented on OAK-4882:


I committed the async queue simplification as discussed above. The commit 
also includes the heuristics suggested by Chetan.

Trunk: [r1766966|https://svn.apache.org/r1766966]

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4882.patch
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford losing the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.





[jira] [Updated] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4882:
---
Fix Version/s: 1.5.13

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4882.patch
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford losing the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.





[jira] [Commented] (OAK-5031) Log configuration deprecation messages at WARN level

2016-10-28 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15614687#comment-15614687
 ] 

Timothee Maret commented on OAK-5031:
-

[~frm] could you have a look at the patch?

> Log configuration deprecation messages at WARN level
> 
>
> Key: OAK-5031
> URL: https://issues.apache.org/jira/browse/OAK-5031
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.16
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.6
>
> Attachments: OAK-5031.patch
>
>
> The {{StandbyStoreServiceDeprecationError}} and 
> {{SegmentNodeStoreServiceDeprecationError}} detect the presence of deprecated 
> configurations and log a warning message at {{ERROR}} level.
> The level should be revised to {{WARN}} since having a deprecated 
> configuration does not break the instance (it's not an error).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-5031) Log configuration deprecation messages at WARN level

2016-10-28 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-5031:

Attachment: OAK-5031.patch

Attaching an svn-compatible patch, otherwise available at
https://github.com/tmaret/jackrabbit-oak/commit/9b47749402352058f13f0f61752b18828efd2f9d.diff

> Log configuration deprecation messages at WARN level
> 
>
> Key: OAK-5031
> URL: https://issues.apache.org/jira/browse/OAK-5031
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.16
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.6
>
> Attachments: OAK-5031.patch
>
>
> The {{StandbyStoreServiceDeprecationError}} and 
> {{SegmentNodeStoreServiceDeprecationError}} detect the presence of deprecated 
> configurations and log a warning message at {{ERROR}} level.
> The level should be revised to {{WARN}} since having a deprecated 
> configuration does not break the instance (it's not an error).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-5031) Log configuration deprecation messages at WARN level

2016-10-28 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-5031:

Flags: Patch

> Log configuration deprecation messages at WARN level
> 
>
> Key: OAK-5031
> URL: https://issues.apache.org/jira/browse/OAK-5031
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.16
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.6
>
> Attachments: OAK-5031.patch
>
>
> The {{StandbyStoreServiceDeprecationError}} and 
> {{SegmentNodeStoreServiceDeprecationError}} detect the presence of deprecated 
> configurations and log a warning message at {{ERROR}} level.
> The level should be revised to {{WARN}} since having a deprecated 
> configuration does not break the instance (it's not an error).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-5031) Log configuration deprecation messages at WARN level

2016-10-28 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-5031:

Summary: Log configuration deprecation messages at WARN level  (was: Log 
configuration deprecation error messages at WARN level)

> Log configuration deprecation messages at WARN level
> 
>
> Key: OAK-5031
> URL: https://issues.apache.org/jira/browse/OAK-5031
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.16
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.6
>
>
> The {{StandbyStoreServiceDeprecationError}} and 
> {{SegmentNodeStoreServiceDeprecationError}} detect the presence of deprecated 
> configurations and log a warning message at {{ERROR}} level.
> The level should be revised to {{WARN}} since having a deprecated 
> configuration does not break the instance (it's not an error).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-5030) Copying the versions store is slow and increase the repository size

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15614669#comment-15614669
 ] 

Tomek Rękawek commented on OAK-5030:


[~mduerig], it breaks the unit tests. The reason is that the 
{{isVersionHistoryExists}} method should check whether the version history 
exists in the target builder rather than in the source node state. Sometimes 
the history is not copied from the source repository, and in that case the 
method should return false.

If the reason for the slowness is calling getNodeState() on the builder, maybe 
the following patch would help: [^OAK-5030-2.patch]?
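The distinction can be sketched with stand-in types (a plain `Map` playing the role of Oak's `NodeBuilder`/`NodeState`; the names here are hypothetical, not the upgrade module's code): the existence check has to consult the target being built, because a history present in the source may deliberately not have been copied.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in sketch (a Map plays the role of Oak's NodeBuilder): the
// check must look at the target builder, not the source node state,
// because a version history may exist in the source without having
// been copied to the target yet.
class VersionCopySketch {
    static boolean isVersionHistoryExists(Map<String, Object> targetBuilder, String historyId) {
        return targetBuilder.containsKey(historyId);
    }
}
```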

> Copying the versions store is slow and increase the repository size
> ---
>
> Key: OAK-5030
> URL: https://issues.apache.org/jira/browse/OAK-5030
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Michael Dürig
>Assignee: Tomek Rękawek
>  Labels: perfomance
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-5030-2.patch, OAK-5030.patch
>
>
> This is a follow-up to OAK-4970: when defining a workspace name that doesn't 
> match the name of the workspace of the source repository (via the 
> {{WORKSPACE_NAME_PROP}} system property), upgrading is still very slow and 
> causes the repository size to grow way too much. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-5030) Copying the versions store is slow and increase the repository size

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-5030:
---
Attachment: OAK-5030-2.patch

> Copying the versions store is slow and increase the repository size
> ---
>
> Key: OAK-5030
> URL: https://issues.apache.org/jira/browse/OAK-5030
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Michael Dürig
>Assignee: Tomek Rękawek
>  Labels: perfomance
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-5030-2.patch, OAK-5030.patch
>
>
> This is a follow-up to OAK-4970: when defining a workspace name that doesn't 
> match the name of the workspace of the source repository (via the 
> {{WORKSPACE_NAME_PROP}} system property), upgrading is still very slow and 
> causes the repository size to grow way too much. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-5031) Log configuration deprecation error messages at WARN level

2016-10-28 Thread Timothee Maret (JIRA)
Timothee Maret created OAK-5031:
---

 Summary: Log configuration deprecation error messages at WARN level
 Key: OAK-5031
 URL: https://issues.apache.org/jira/browse/OAK-5031
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: segment-tar
Affects Versions: Segment Tar 0.0.16
Reporter: Timothee Maret
Assignee: Timothee Maret
 Fix For: 1.6


The {{StandbyStoreServiceDeprecationError}} and 
{{SegmentNodeStoreServiceDeprecationError}} detect the presence of deprecated 
configurations and log a warning message at {{ERROR}} level.

The level should be revised to {{WARN}} since having a deprecated configuration 
does not break the instance (it's not an error).
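As a sketch of the intent (using java.util.logging to stay self-contained, although Oak itself logs through slf4j; the class name and PID are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch only: a deprecated configuration keeps working,
// so it is reported as a warning; SEVERE/ERROR is reserved for
// conditions that actually break the instance.
class DeprecationLogSketch {
    private static final Logger LOG = Logger.getLogger("DeprecationLogSketch");

    static Level levelFor(boolean breaksInstance) {
        return breaksInstance ? Level.SEVERE : Level.WARNING;
    }

    static void logDeprecation(String pid) {
        LOG.log(levelFor(false), "Configuration {0} is deprecated; please migrate.", pid);
    }
}
```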



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-5030) Copying the versions store is slow and increase the repository size

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-5030:
---
Fix Version/s: 1.6

> Copying the versions store is slow and increase the repository size
> ---
>
> Key: OAK-5030
> URL: https://issues.apache.org/jira/browse/OAK-5030
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Michael Dürig
>Assignee: Tomek Rękawek
>  Labels: perfomance
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-5030.patch
>
>
> This is a follow-up to OAK-4970: when defining a workspace name that doesn't 
> match the name of the workspace of the source repository (via the 
> {{WORKSPACE_NAME_PROP}} system property), upgrading is still very slow and 
> causes the repository size to grow way too much. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-5030) Copying the versions store is slow and increase the repository size

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek reassigned OAK-5030:
--

Assignee: Tomek Rękawek

> Copying the versions store is slow and increase the repository size
> ---
>
> Key: OAK-5030
> URL: https://issues.apache.org/jira/browse/OAK-5030
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Michael Dürig
>Assignee: Tomek Rękawek
>  Labels: perfomance
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-5030.patch
>
>
> This is a follow-up to OAK-4970: when defining a workspace name that doesn't 
> match the name of the workspace of the source repository (via the 
> {{WORKSPACE_NAME_PROP}} system property), upgrading is still very slow and 
> causes the repository size to grow way too much. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-5015) Retry mechanism for failed async uploads

2016-10-28 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-5015.

   Resolution: Fixed
Fix Version/s: 1.5.13

Done with http://svn.apache.org/viewvc?rev=1766928&view=rev

> Retry mechanism for failed async uploads
> 
>
> Key: OAK-5015
> URL: https://issues.apache.org/jira/browse/OAK-5015
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.6, 1.5.13
>
>
> There needs to be a retry mechanism for failed uploads. The reasons for 
> failure could be similar to OAK-5005.
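A bounded retry with backoff, as a hedged sketch (the names and the linear backoff policy are illustrative; the actual fix committed above may differ):

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of retrying a failed asynchronous upload a
// bounded number of times, with a linearly growing pause in between.
class UploadRetrySketch {
    static <T> T retry(Callable<T> upload, int maxAttempts, long backoffMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return upload.call();
            } catch (Exception e) {
                last = (e instanceof RuntimeException)
                        ? (RuntimeException) e : new RuntimeException(e);
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(backoffMillis * attempt); // linear backoff
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break; // give up promptly if interrupted
                    }
                }
            }
        }
        throw last; // all attempts failed
    }
}
```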



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4933) Create a data store implementation that integrates with Microsoft Azure Blob Storage

2016-10-28 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15614460#comment-15614460
 ] 

Amit Jain commented on OAK-4933:


I forgot to mention this, but we probably won't even need to mock things, as 
jclouds has a transient (in-memory) provider which can be used for tests; see 
http://jclouds.apache.org/start/blobstore/

> Create a data store implementation that integrates with Microsoft Azure Blob 
> Storage
> 
>
> Key: OAK-4933
> URL: https://issues.apache.org/jira/browse/OAK-4933
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: blob, core
>Reporter: Matt Ryan
>
> This epic proposes the creation of a new type of Oak data store, 
> AzureDataStore, that offers an integration between Oak and Microsoft Azure 
> Blob Storage.  AzureDataStore would be very similar in purpose and 
> functionality to S3DataStore, with a different backend that uses Azure Blob 
> Storage instead of S3.
> Some initial exploration into this concept can be seen in my github here:  
> https://github.com/mattvryan/jackrabbit-oak/tree/azure-blob-store
> More info about Azure Blob Storage:
> * [Java 
> SDK|https://azure.microsoft.com/en-us/documentation/articles/storage-java-how-to-use-blob-storage/]
> * [Javadoc|http://azure.github.io/azure-storage-java/]
> * [Source|https://github.com/azure/azure-storage-java] (GitHub) - Microsoft's 
> Azure Storage Java SDK is open source and Apache licensed
> * [Package 
> info|https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage] on 
> mvnrepository.com
> As I see it, the following work would be required:
> * Create an AzureDataStore class that extends CachingDataStore
> * Create a new backend for Azure Blob Storage
> * Comprehensive unit testing of the new data store and backend classes
> * Create test "mocks" for the necessary Azure Storage classes to facilitate 
> unit testing (they are all final classes and cannot be mocked directly)
> * Create SharedAzureDataStore class
> * Create AzureDataStoreService class
> * Implement similar JMX metrics as exist for S3DataStore
> * Combine and refactor existing Oak code with newly added code to make best 
> reuse of existing code, etc.
> * Integration testing with system configured with AzureDataStore, comparison 
> w/ S3DataStore in terms of performance and correctness
> * Modify the Azure Storage SDK code to make it into a valid OSGi bundle, and 
> submit these changes back to that project (it is not currently an OSGi bundle 
> and therefore has to be embedded)
> This list isn't purported to be comprehensive, of course.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)