[jira] [Commented] (OAK-5090) Provide escape/unescape utility for Oak node names

2016-11-09 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653266#comment-15653266
 ] 

Alexander Klimetschek commented on OAK-5090:


Where should it go? IMO putting it into the jackrabbit-api Text class would be 
great in order to make it as visible as possible. The downside is that if the 
escaping needs to change in line with Oak versions, this would be a bit decoupled.
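A minimal sketch of what such a utility could look like, assuming percent-encoding and the name rules discussed in OAK-5089; `OakNameEscaper` and its methods are hypothetical illustrations, not an existing Jackrabbit or Oak API:

```java
// Hypothetical sketch, not an existing Jackrabbit/Oak API: percent-encode the
// JCR-illegal characters plus the extra whitespace characters Oak rejects.
class OakNameEscaper {

    // '%' itself must be escaped so unescaping stays unambiguous
    private static final String ILLEGAL = "%/:[]|*";

    private static String encode(char c) {
        // %XX for Latin-1 characters, %uXXXX for everything else
        return c <= 0xFF ? String.format("%%%02X", (int) c)
                         : String.format("%%u%04X", (int) c);
    }

    public static String escape(String name) {
        if (name.equals(".") || name.equals("..")) {
            // "." and ".." are illegal as entire names, so escape the dots
            return name.replace(".", "%2E");
        }
        // note: an empty name cannot be escaped into a legal one; callers
        // must handle that case separately
        StringBuilder sb = new StringBuilder(name.length());
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            boolean edge = (i == 0 || i == name.length() - 1);
            if (ILLEGAL.indexOf(c) >= 0
                    || (Character.isWhitespace(c) && (c != ' ' || edge))) {
                sb.append(encode(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

An unescape() counterpart would decode the %XX/%uXXXX sequences. Note that Character.isWhitespace() does not cover U+00A0 (no-break space), so the real rule set may need a broader check than this sketch uses.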

> Provide escape/unescape utility for Oak node names
> --
>
> Key: OAK-5090
> URL: https://issues.apache.org/jira/browse/OAK-5090
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Alexander Klimetschek
>
> JCR applications often use 
> [Text.escapeIllegalJcrChars()|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)],
>  but miss that Oak has additional restrictions. There should be a common 
> escape/unescape utility that covers all of them and ensures applications 
> don't have problems with special characters.
> From OAK-4857, and related to OAK-5089 (documentation for the same). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-5089) Document illegal item names in Oak

2016-11-09 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-5089:
---
Summary: Document illegal item names in Oak  (was: Document illegal node 
name chars in Oak)

> Document illegal item names in Oak
> --
>
> Key: OAK-5089
> URL: https://issues.apache.org/jira/browse/OAK-5089
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: doc
>Reporter: Alexander Klimetschek
>
> From OAK-4857. Oak, like Jackrabbit 2, has limits on valid node/property 
> names in addition to the illegal chars from the JCR spec. This isn't 
> documented yet.
> Here is what I know so far:
> * illegal node name if entire name is empty or {{.}} or {{..}}
> * no length limit (\?)
> * otherwise name can have all unicode chars except:
> * JCR illegal chars {{/ : \[ ] | *}}
> * 
> [Character.isWhitespace()|https://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#isWhitespace(char)],
>  except for regular space {{u20}} which is allowed, except first or last char





[jira] [Commented] (OAK-4857) Support space chars common in CJK inside node names

2016-11-09 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653259#comment-15653259
 ] 

Alexander Klimetschek commented on OAK-4857:


I created:
* OAK-5089 for documentation
* OAK-5090 for an escaping/unescaping utility

This is required even if this issue is addressed and special CJK spaces are 
supported (eventually), as long as there are other characters (like whitespace) 
that are invalid in Oak.


> Support space chars common in CJK inside node names
> ---
>
> Key: OAK-4857
> URL: https://issues.apache.org/jira/browse/OAK-4857
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.4.7, 1.5.10
>Reporter: Alexander Klimetschek
>Assignee: Marcel Reutegger
> Fix For: 1.6
>
> Attachments: OAK-4857-tests.patch
>
>
> Oak (like Jackrabbit) does not allow spaces commonly used in CJK like 
> {{u3000}} (ideographic space) or {{u00A0}} (no-break space) _inside_ a node 
> name, while allowing some of them (the non breaking spaces) at the _beginning 
> or end_.
> They should be supported for better globalization readiness, and filesystems 
> allow them, making common filesystem to JCR mappings unnecessarily hard. 
> Escaping would be an option for applications, but there is currently no 
> utility method for it 
> ([Text.escapeIllegalJcrChars|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)]
>  will not escape these spaces), nor is it documented for applications how to 
> do so.





[jira] [Updated] (OAK-5090) Provide escape/unescape utility for Oak node names

2016-11-09 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-5090:
---
Description: 
JCR applications often use 
[Text.escapeIllegalJcrChars()|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)],
 but miss that Oak has additional restrictions. There should be a common 
escape/unescape utility that covers all of them and ensures applications don't 
have problems with special characters.

From OAK-4857, and related to OAK-5089 (documentation for the same). 

> Provide escape/unescape utility for Oak node names
> --
>
> Key: OAK-5090
> URL: https://issues.apache.org/jira/browse/OAK-5090
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: commons
>Reporter: Alexander Klimetschek
>
> JCR applications often use 
> [Text.escapeIllegalJcrChars()|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)],
>  but miss that Oak has additional restrictions. There should be a common 
> escape/unescape utility that covers all of them and ensures applications 
> don't have problems with special characters.
> From OAK-4857, and related to OAK-5089 (documentation for the same). 





[jira] [Updated] (OAK-5089) Document illegal node name chars in Oak

2016-11-09 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-5089:
---
Description: 
From OAK-4857. Oak, like Jackrabbit 2, has limits on valid node/property names 
in addition to the illegal chars from the JCR spec. This isn't documented yet.

Here is what I know so far:
* illegal node name if entire name is empty or {{.}} or {{..}}
* no length limit (\?)
* otherwise name can have all unicode chars except:
* JCR illegal chars {{/ : \[ ] | *}}
* 
[Character.isWhitespace()|https://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#isWhitespace(char)],
 except for regular space {{u20}} which is allowed, except first or last char
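The rules listed above can be sketched as a validity check. This is only an illustration of the listed rules; it is not Oak's actual validation code, which may differ (for instance, Character.isWhitespace() does not match U+00A0):

```java
// Illustrative check for the name rules listed above; not Oak's real validator.
class OakNameRules {

    public static boolean isValidName(String name) {
        // the entire name must not be empty, "." or ".."
        if (name.isEmpty() || name.equals(".") || name.equals("..")) {
            return false;
        }
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if ("/:[]|*".indexOf(c) >= 0) {
                return false; // JCR-illegal characters
            }
            if (Character.isWhitespace(c)) {
                // only the regular space U+0020 is allowed, and not as the
                // first or last character
                if (c != ' ' || i == 0 || i == name.length() - 1) {
                    return false;
                }
            }
        }
        return true;
    }
}
```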

> Document illegal node name chars in Oak
> ---
>
> Key: OAK-5089
> URL: https://issues.apache.org/jira/browse/OAK-5089
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: doc
>Reporter: Alexander Klimetschek
>
> From OAK-4857. Oak, like Jackrabbit 2, has limits on valid node/property 
> names in addition to the illegal chars from the JCR spec. This isn't 
> documented yet.
> Here is what I know so far:
> * illegal node name if entire name is empty or {{.}} or {{..}}
> * no length limit (\?)
> * otherwise name can have all unicode chars except:
> * JCR illegal chars {{/ : \[ ] | *}}
> * 
> [Character.isWhitespace()|https://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#isWhitespace(char)],
>  except for regular space {{u20}} which is allowed, except first or last char





[jira] [Created] (OAK-5090) Provide escape/unescape utility for Oak node names

2016-11-09 Thread Alexander Klimetschek (JIRA)
Alexander Klimetschek created OAK-5090:
--

 Summary: Provide escape/unescape utility for Oak node names
 Key: OAK-5090
 URL: https://issues.apache.org/jira/browse/OAK-5090
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: commons
Reporter: Alexander Klimetschek








[jira] [Created] (OAK-5089) Document illegal node name chars in Oak

2016-11-09 Thread Alexander Klimetschek (JIRA)
Alexander Klimetschek created OAK-5089:
--

 Summary: Document illegal node name chars in Oak
 Key: OAK-5089
 URL: https://issues.apache.org/jira/browse/OAK-5089
 Project: Jackrabbit Oak
  Issue Type: Documentation
  Components: doc
Reporter: Alexander Klimetschek








[jira] [Updated] (OAK-4857) Support space chars common in CJK inside node names

2016-11-09 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-4857:
---
Fix Version/s: 1.6

> Support space chars common in CJK inside node names
> ---
>
> Key: OAK-4857
> URL: https://issues.apache.org/jira/browse/OAK-4857
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.4.7, 1.5.10
>Reporter: Alexander Klimetschek
>Assignee: Marcel Reutegger
> Fix For: 1.6
>
> Attachments: OAK-4857-tests.patch
>
>
> Oak (like Jackrabbit) does not allow spaces commonly used in CJK like 
> {{u3000}} (ideographic space) or {{u00A0}} (no-break space) _inside_ a node 
> name, while allowing some of them (the non breaking spaces) at the _beginning 
> or end_.
> They should be supported for better globalization readiness, and filesystems 
> allow them, making common filesystem to JCR mappings unnecessarily hard. 
> Escaping would be an option for applications, but there is currently no 
> utility method for it 
> ([Text.escapeIllegalJcrChars|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)]
>  will not escape these spaces), nor is it documented for applications how to 
> do so.





[jira] [Assigned] (OAK-4857) Support space chars common in CJK inside node names

2016-11-09 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek reassigned OAK-4857:
--

Assignee: Marcel Reutegger

> Support space chars common in CJK inside node names
> ---
>
> Key: OAK-4857
> URL: https://issues.apache.org/jira/browse/OAK-4857
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.4.7, 1.5.10
>Reporter: Alexander Klimetschek
>Assignee: Marcel Reutegger
> Fix For: 1.6
>
> Attachments: OAK-4857-tests.patch
>
>
> Oak (like Jackrabbit) does not allow spaces commonly used in CJK like 
> {{u3000}} (ideographic space) or {{u00A0}} (no-break space) _inside_ a node 
> name, while allowing some of them (the non breaking spaces) at the _beginning 
> or end_.
> They should be supported for better globalization readiness, and filesystems 
> allow them, making common filesystem to JCR mappings unnecessarily hard. 
> Escaping would be an option for applications, but there is currently no 
> utility method for it 
> ([Text.escapeIllegalJcrChars|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)]
>  will not escape these spaces), nor is it documented for applications how to 
> do so.





[jira] [Resolved] (OAK-5088) o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing records

2016-11-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-5088.
--
   Resolution: Fixed
Fix Version/s: 1.6

Applied the patch in r1769038

> o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing 
> records 
> ---
>
> Key: OAK-5088
> URL: https://issues.apache.org/jira/browse/OAK-5088
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.5.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.6, 1.5.14
>
> Attachments: OAK-5088.patch
>
>
> The 
> {{org.apache.jackrabbit.oak.plugins.blob.datastore.DataStoreBlobStore#getReference}}
>  method logs WARNING level in cases the {{encodedBlobId}} is not stored. 
> Those cases are expected according to the JavaDoc [0] and thus should not log 
> WARNING level messages.
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/2acda3156cfad9993310e7aa0492cdc0b65aa5f7/oak-blob/src/main/java/org/apache/jackrabbit/oak/spi/blob/BlobStore.java#L83-L87





[jira] [Updated] (OAK-5009) ExternalToExternalMigrationTest failures on Windows

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-5009:

Fix Version/s: (was: 1.5.14)
   (was: 1.6)
   1.5.13

> ExternalToExternalMigrationTest failures on Windows
> ---
>
> Key: OAK-5009
> URL: https://issues.apache.org/jira/browse/OAK-5009
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, segment-tar
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: test-failure
> Fix For: 1.5.13
>
>
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 0.463 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643533-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 13.021 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643996-0\segmentstore\data0a.tar
> Running 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 0.157 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657018-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 12.561 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657193-0\segmentstore\data0a.tar
> {noformat}





[jira] [Assigned] (OAK-5009) ExternalToExternalMigrationTest failures on Windows

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-5009:
---

Assignee: Thomas Mueller

> ExternalToExternalMigrationTest failures on Windows
> ---
>
> Key: OAK-5009
> URL: https://issues.apache.org/jira/browse/OAK-5009
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, segment-tar
>Reporter: Julian Reschke
>Assignee: Thomas Mueller
>  Labels: test-failure
> Fix For: 1.5.13
>
>
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 0.463 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643533-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 13.021 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643996-0\segmentstore\data0a.tar
> Running 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 0.157 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657018-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 12.561 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657193-0\segmentstore\data0a.tar
> {noformat}





[jira] [Resolved] (OAK-5009) ExternalToExternalMigrationTest failures on Windows

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-5009.
-
Resolution: Fixed

> ExternalToExternalMigrationTest failures on Windows
> ---
>
> Key: OAK-5009
> URL: https://issues.apache.org/jira/browse/OAK-5009
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, segment-tar
>Reporter: Julian Reschke
>  Labels: test-failure
> Fix For: 1.6, 1.5.14
>
>
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 0.463 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643533-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 13.021 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643996-0\segmentstore\data0a.tar
> Running 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 0.157 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657018-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 12.561 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657193-0\segmentstore\data0a.tar
> {noformat}





[jira] [Commented] (OAK-5009) ExternalToExternalMigrationTest failures on Windows

2016-11-09 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651326#comment-15651326
 ] 

Thomas Mueller commented on OAK-5009:
-

http://svn.apache.org/r1768995 (trunk)

[~tomek.rekawek] could you review my fix, and if needed fix my fix :-)

> ExternalToExternalMigrationTest failures on Windows
> ---
>
> Key: OAK-5009
> URL: https://issues.apache.org/jira/browse/OAK-5009
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, segment-tar
>Reporter: Julian Reschke
>  Labels: test-failure
> Fix For: 1.6, 1.5.14
>
>
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 0.463 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643533-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest)
>   Time elapsed: 13.021 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483643996-0\segmentstore\data0a.tar
> Running 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec 
> <<< FAILURE! - in 
> org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 0.157 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657018-0\segmentstore\data1a.tar
> blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest)
>   Time elapsed: 12.561 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> C:\tmp\1477483657193-0\segmentstore\data0a.tar
> {noformat}





[jira] [Resolved] (OAK-2072) Lucene: inconsistent usage of the config option "persistence"

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-2072.
-
Resolution: Fixed

> Lucene: inconsistent usage of the config option "persistence"
> -
>
> Key: OAK-2072
> URL: https://issues.apache.org/jira/browse/OAK-2072
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.5.13
>
>
> The Lucene index reader uses the configuration property "persistence", but 
> the editor (the component updating the index) does not. That leads to very 
> strange behavior if the property is missing, but the property "file" is set: 
> the reader would try to read from the file system, but those files are not 
> updated.





[jira] [Commented] (OAK-2072) Lucene: inconsistent usage of the config option "persistence"

2016-11-09 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651223#comment-15651223
 ] 

Thomas Mueller commented on OAK-2072:
-

http://svn.apache.org/r1768986 (trunk)

> Lucene: inconsistent usage of the config option "persistence"
> -
>
> Key: OAK-2072
> URL: https://issues.apache.org/jira/browse/OAK-2072
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.5.13
>
>
> The Lucene index reader uses the configuration property "persistence", but 
> the editor (the component updating the index) does not. That leads to very 
> strange behavior if the property is missing, but the property "file" is set: 
> the reader would try to read from the file system, but those files are not 
> updated.





[jira] [Updated] (OAK-2072) Lucene: inconsistent usage of the config option "persistence"

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2072:

Fix Version/s: 1.5.13

> Lucene: inconsistent usage of the config option "persistence"
> -
>
> Key: OAK-2072
> URL: https://issues.apache.org/jira/browse/OAK-2072
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.5.13
>
>
> The Lucene index reader uses the configuration property "persistence", but 
> the editor (the component updating the index) does not. That leads to very 
> strange behavior if the property is missing, but the property "file" is set: 
> the reader would try to read from the file system, but those files are not 
> updated.





[jira] [Assigned] (OAK-2072) Lucene: inconsistent usage of the config option "persistence"

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-2072:
---

Assignee: Thomas Mueller

> Lucene: inconsistent usage of the config option "persistence"
> -
>
> Key: OAK-2072
> URL: https://issues.apache.org/jira/browse/OAK-2072
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>
> The Lucene index reader uses the configuration property "persistence", but 
> the editor (the component updating the index) does not. That leads to very 
> strange behavior if the property is missing, but the property "file" is set: 
> the reader would try to read from the file system, but those files are not 
> updated.





[jira] [Updated] (OAK-4026) Index and query analysis tool

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-4026:

Fix Version/s: (was: 1.6)

> Index and query analysis tool
> -
>
> Key: OAK-4026
> URL: https://issues.apache.org/jira/browse/OAK-4026
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>
> Defining the right indexes is quite tricky. We should have a tool that lists 
> duplicate indexes, large indexes, indexes with incorrect or questionable 
> configuration.
> The tool should also be able to analyze query log files and suggest (best 
> case) which indexes to create or at least which queries are slow and need 
> indexes.
> The tool could be part of oak-run, so that it can be run against an existing 
> repository. The output should be plain text or JSON (to be consumed by a GUI 
> tool).





[jira] [Updated] (OAK-2072) Lucene: inconsistent usage of the config option "persistence"

2016-11-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2072:

Fix Version/s: (was: 1.6)

> Lucene: inconsistent usage of the config option "persistence"
> -
>
> Key: OAK-2072
> URL: https://issues.apache.org/jira/browse/OAK-2072
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Priority: Minor
>
> The Lucene index reader uses the configuration property "persistence", but 
> the editor (the component updating the index) does not. That leads to very 
> strange behavior if the property is missing, but the property "file" is set: 
> the reader would try to read from the file system, but those files are not 
> updated.





[jira] [Updated] (OAK-5088) o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing records

2016-11-09 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-5088:

Attachment: OAK-5088.patch

Attaching a patch that applies the change proposed offline by [~chetanm].

[~chetanm] could you have a look?

> o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing 
> records 
> ---
>
> Key: OAK-5088
> URL: https://issues.apache.org/jira/browse/OAK-5088
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.5.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.5.14
>
> Attachments: OAK-5088.patch
>
>
> The 
> {{org.apache.jackrabbit.oak.plugins.blob.datastore.DataStoreBlobStore#getReference}}
>  method logs WARNING level in cases the {{encodedBlobId}} is not stored. 
> Those cases are expected according to the JavaDoc [0] and thus should not log 
> WARNING level messages.
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/2acda3156cfad9993310e7aa0492cdc0b65aa5f7/oak-blob/src/main/java/org/apache/jackrabbit/oak/spi/blob/BlobStore.java#L83-L87





[jira] [Updated] (OAK-5088) o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing records

2016-11-09 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-5088:

Flags: Patch

> o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing 
> records 
> ---
>
> Key: OAK-5088
> URL: https://issues.apache.org/jira/browse/OAK-5088
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.5.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: 1.5.14
>
>
> The 
> {{org.apache.jackrabbit.oak.plugins.blob.datastore.DataStoreBlobStore#getReference}}
>  method logs WARNING level in cases the {{encodedBlobId}} is not stored. 
> Those cases are expected according to the JavaDoc [0] and thus should not log 
> WARNING level messages.
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/2acda3156cfad9993310e7aa0492cdc0b65aa5f7/oak-blob/src/main/java/org/apache/jackrabbit/oak/spi/blob/BlobStore.java#L83-L87





[jira] [Created] (OAK-5088) o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs WARNING for missing records

2016-11-09 Thread Timothee Maret (JIRA)
Timothee Maret created OAK-5088:
---

 Summary: o.a.j.o.p.b.d.DataStoreBlobStore#getReference logs 
WARNING for missing records 
 Key: OAK-5088
 URL: https://issues.apache.org/jira/browse/OAK-5088
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.5.12
Reporter: Timothee Maret
Assignee: Timothee Maret
 Fix For: 1.5.14


The 
{{org.apache.jackrabbit.oak.plugins.blob.datastore.DataStoreBlobStore#getReference}}
 method logs at WARNING level in cases where the {{encodedBlobId}} is not 
stored. Those cases are expected according to the JavaDoc [0] and thus should 
not log WARNING level messages.

[0] 
https://github.com/apache/jackrabbit-oak/blob/2acda3156cfad9993310e7aa0492cdc0b65aa5f7/oak-blob/src/main/java/org/apache/jackrabbit/oak/spi/blob/BlobStore.java#L83-L87
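The intended behavior can be illustrated with a small stand-in (names here are hypothetical, and java.util.logging is used only to keep the sketch self-contained; this is not the real DataStoreBlobStore code):

```java
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

// Stand-in for the lookup in DataStoreBlobStore#getReference: a missing record
// is an expected case per the BlobStore JavaDoc, so it is logged at a debug
// level instead of WARNING. Names are illustrative, not the real API.
class BlobReferenceSketch {

    private static final Logger LOG =
            Logger.getLogger(BlobReferenceSketch.class.getName());

    public static String getReference(Map<String, String> records,
                                      String encodedBlobId) {
        String reference = records.get(encodedBlobId);
        if (reference == null) {
            // expected situation: no record stored for this blob id
            LOG.log(Level.FINE, "No reference found for blob id {0}", encodedBlobId);
        }
        return reference;
    }
}
```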








[jira] [Updated] (OAK-4757) Adjust default timeout values for MongoDocumentStore

2016-11-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4757:
--
Fix Version/s: 1.4.10

Merged into 1.4 branch: http://svn.apache.org/r1768920

> Adjust default timeout values for MongoDocumentStore
> 
>
> Key: OAK-4757
> URL: https://issues.apache.org/jira/browse/OAK-4757
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: resilience
> Fix For: 1.6, 1.5.10, 1.4.10
>
> Attachments: OAK-4757.patch, OAK-4757.patch
>
>
> Some default timeout values of the MongoDB Java driver do not work well 
> with the lease time we use in Oak.
> By default there is no socket timeout set, and the driver waits for a new 
> connection for up to 120 seconds, which is too long for lease update 
> operations. See also OAK-4739.
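For illustration, the timeouts in question can also be expressed as MongoDB connection-string options; the values below are placeholders, not the defaults Oak actually chooses:

```
mongodb://localhost:27017/oak?socketTimeoutMS=60000&connectTimeoutMS=15000&waitQueueTimeoutMS=60000
```

Here socketTimeoutMS bounds individual reads/writes on a connection, connectTimeoutMS bounds the initial connect, and waitQueueTimeoutMS bounds how long a thread waits for a free connection from the pool (the 120-second wait mentioned above).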





[jira] [Updated] (OAK-4770) Missing exception handling in ClusterNodeInfo.renewLease()

2016-11-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4770:
--
   Labels: resilience  (was: candidate_oak_1_4 resilience)
Fix Version/s: 1.4.10

Merged into 1.4 branch: http://svn.apache.org/r1768862 and 
http://svn.apache.org/r1768911

> Missing exception handling in ClusterNodeInfo.renewLease()
> --
>
> Key: OAK-4770
> URL: https://issues.apache.org/jira/browse/OAK-4770
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.6, 1.5.10, 1.4.10
>
>
> ClusterNodeInfo.renewLease() does not handle a potential 
> DocumentStoreException on {{findAndModify()}}. This may leave 
> {{previousLeaseEndTime}} in an inconsistent state and a subsequent 
> {{renewLease()}} call then considers the lease timed out. 
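The fix described above can be sketched as follows ({{LeaseStore}} and the {{RuntimeException}} stand in for Oak's DocumentStore and {{DocumentStoreException}}; field and method shapes are illustrative): the local {{previousLeaseEndTime}} is rolled back when the store update fails, so a subsequent retry does not see an inconsistent state.

```java
// Hypothetical sketch of exception-safe lease renewal, not the actual
// ClusterNodeInfo code.
class LeaseRenewer {
    interface LeaseStore {
        void findAndModify(long newLeaseEnd); // may throw RuntimeException
    }

    long previousLeaseEndTime;

    LeaseRenewer(long initialLeaseEnd) {
        this.previousLeaseEndTime = initialLeaseEnd;
    }

    boolean renewLease(LeaseStore store, long now, long leaseDuration) {
        long newEnd = now + leaseDuration;
        long saved = previousLeaseEndTime;
        previousLeaseEndTime = newEnd;     // optimistic local update
        try {
            store.findAndModify(newEnd);   // persist the new lease end
            return true;
        } catch (RuntimeException e) {     // stand-in for DocumentStoreException
            previousLeaseEndTime = saved;  // roll back so a retry sees consistent state
            return false;
        }
    }
}
```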





[jira] [Updated] (OAK-4792) Replace usage of AssertionError in ClusterNodeInfo

2016-11-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4792:
--
   Labels:   (was: candidate_oak_1_4)
Fix Version/s: 1.4.10

Merged into 1.4 branch: http://svn.apache.org/r1768904

> Replace usage of AssertionError in ClusterNodeInfo
> --
>
> Key: OAK-4792
> URL: https://issues.apache.org/jira/browse/OAK-4792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.10, 1.4.10
>
>
> As discussed in OAK-4770, I would like to replace usage of AssertionError in 
> ClusterNodeInfo with DocumentStoreException and clarify the contract 
> accordingly.





[jira] [Updated] (OAK-5071) Persistent cache: use the asynchronous mode by default

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-5071:
---
Fix Version/s: 1.4.10

> Persistent cache: use the asynchronous mode by default
> --
>
> Key: OAK-5071
> URL: https://issues.apache.org/jira/browse/OAK-5071
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: cache, documentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13, 1.4.10
>
>
> OAK-4882 resolves most of the problems with asynchronous queue for the 
> persistent cache. Let's use the async mode by default.





[jira] [Commented] (OAK-5071) Persistent cache: use the asynchronous mode by default

2016-11-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650487#comment-15650487
 ] 

Tomek Rękawek commented on OAK-5071:


This has been partially backported to 1.4. The asynchronous mode isn't default 
yet, but the changes related to DIFF are introduced now.

[r1768897|https://svn.apache.org/r1768897].

> Persistent cache: use the asynchronous mode by default
> --
>
> Key: OAK-5071
> URL: https://issues.apache.org/jira/browse/OAK-5071
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: cache, documentmk
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.13
>
>
> OAK-4882 resolves most of the problems with asynchronous queue for the 
> persistent cache. Let's use the async mode by default.





[jira] [Assigned] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2016-11-09 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh reassigned OAK-3001:
--

Assignee: Vikas Saurabh

> Simplify JournalGarbageCollector using a dedicated timestamp property
> -
>
> Key: OAK-3001
> URL: https://issues.apache.org/jira/browse/OAK-3001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>Assignee: Vikas Saurabh
>Priority: Critical
>  Labels: scalability
> Fix For: 1.6
>
>
> This subtask is about spawning out a 
> [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
>  from [~chetanm] re JournalGC:
> {quote}
> Further looking at JournalGarbageCollector ... it would be simpler if you 
> record the journal entry timestamp as an attribute in JournalEntry document 
> and then you can delete all the entries which are older than some time by a 
> simple query. This would avoid fetching all the entries to be deleted on the 
> Oak side
> {quote}
> and a corresponding 
> [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
>  from myself:
> {quote}
> Re querying by timestamp: that would indeed be simpler. With the current set 
> of DocumentStore API however, I believe this is not possible. But: 
> [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
>  comes quite close: it would probably just require the opposite of that 
> method too: 
> {code}
> public <T extends Document> List<T> query(Collection<T> collection,
>   String fromKey,
>   String toKey,
>   String indexedProperty,
>   long endValue,
>   int limit) {
> {code}
> .. or what about generalizing this method to have both a {{startValue}} and 
> an {{endValue}} - with {{-1}} indicating when one of them is not used?
> {quote}
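The start/end-bounded query suggested in the quote can be sketched on a toy in-memory model (illustrative only, not the DocumentStore API): journal entries are modeled as (id, timestamp) pairs, and {{-1}} marks an unused bound.

```java
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.stream.Collectors;

// Toy model of a timestamp-bounded query over JournalEntry-like documents:
// return the ids of all entries whose indexed value lies in [startValue, endValue),
// where -1 means the bound is not used.
class TimestampQuery {
    static List<String> query(NavigableMap<String, Long> docs, long startValue, long endValue) {
        return docs.entrySet().stream()
                .filter(e -> (startValue == -1 || e.getValue() >= startValue)
                          && (endValue == -1 || e.getValue() < endValue))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

With such a query, garbage collection deletes everything older than a cut-off timestamp without fetching the entries first.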





[jira] [Updated] (OAK-4796) filter events before adding to ChangeProcessor's queue

2016-11-09 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-4796:
-
Fix Version/s: 1.6

> filter events before adding to ChangeProcessor's queue
> --
>
> Key: OAK-4796
> URL: https://issues.apache.org/jira/browse/OAK-4796
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Affects Versions: 1.5.9
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: observation
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4796.changeSet.patch, OAK-4796.patch
>
>
> Currently the 
> [ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335]
>  is in charge of doing the event diffing and filtering and does so in a 
> pooled Thread, i.e. asynchronously, at a later stage independent from the 
> commit. This has the advantage that the commit is fast, but has the following 
> potentially negative effects:
> # events (in the form of ContentChange Objects) occupy a slot of the queue 
> even if the listener is not interested in it - any commit lands on any 
> listener's queue. This reduces the capacity of the queue for 'actual' events 
> to be delivered. It therefore increases the risk that the queue fills - and 
> when full has various consequences such as losing the CommitInfo etc.
> # each event==ContentChange later on must be evaluated, and for that a diff 
> must be calculated. Depending on runtime behavior that diff might be 
> expensive if no longer in the cache (documentMk specifically).
> As an improvement, this diffing+filtering could be done at an earlier stage 
> already, nearer to the commit, and in case the filter would ignore the event, 
> it would not have to be put into the queue at all, thus avoiding occupying a 
> slot and later potentially slower diffing.
> The suggestion is to implement this via the following algorithm:
> * During the commit, in a {{Validator}} the listener's filters are evaluated 
> - in an as-efficient-as-possible manner (Reason for doing it in a Validator 
> is that this doesn't add overhead as oak already goes through all changes for 
> other Validators). As a result a _list of potentially affected observers_ is 
> added to the {{CommitInfo}} (false positives are fine).
> ** Note that the above adds cost to the commit and must therefore be 
> carefully done and measured
> ** One potential measure could be to only do filtering when listener's queues 
> are larger than a certain threshold (eg 10)
> * The ChangeProcessor in {{contentChanged}} (in the one created in 
> [createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224])
>  then checks the new commitInfo's _potentially affected observers_ list and 
> if it's not in the list, adds a {{NOOP}} token at the end of the queue. If 
> there's already a NOOP there, the two are collapsed (this way when a filter 
> is not affected it would have a NOOP at the end of the queue). If later on a 
> no-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} 
> for the newly added {{ContentChange}} obj.
> ** To achieve that, the ContentChange obj is extended to not only have the 
> "to" {{root}} pointer, but also the "from" {{previousRoot}} pointer which 
> currently is implicitly maintained.
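The commit-time filter evaluation described above can be sketched with a toy model (names are illustrative, not Oak's observation API): each listener registers a path filter, and the validator records the potentially affected observers for the commit info; false positives are acceptable.

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Toy prefilter: given the paths changed by a commit and each observer's
// path filter, compute the set of potentially affected observers.
class Prefilter {
    static Set<String> affectedObservers(Set<String> changedPaths,
                                         Map<String, String> observerPathFilters) {
        Set<String> affected = new LinkedHashSet<>();
        for (Map.Entry<String, String> e : observerPathFilters.entrySet()) {
            for (String path : changedPaths) {
                // Cheap prefix check; false positives are fine, false negatives are not.
                if (path.startsWith(e.getValue())) {
                    affected.add(e.getKey());
                    break;
                }
            }
        }
        return affected;
    }
}
```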





[jira] [Updated] (OAK-4916) Add support for excluding commits to BackgroundObserver

2016-11-09 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-4916:
-
Fix Version/s: 1.6

> Add support for excluding commits to BackgroundObserver
> ---
>
> Key: OAK-4916
> URL: https://issues.apache.org/jira/browse/OAK-4916
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core
>Affects Versions: 1.5.11
>Reporter: Stefan Egli
>Assignee: Stefan Egli
> Fix For: 1.6, 1.5.13
>
> Attachments: FilteringObserver.patch, OAK-4916.patch, 
> OAK-4916.v2.patch
>
>
> As part of pre-filtering commits it would be useful to have support in the 
> BackgroundObserver (in general) that would allow to exclude certain commits 
> from being added to the (BackgroundObserver's) queue, thus keeping the queue 
> smaller. The actual filtering is up to subclasses.
> The suggested implementation is as follows:
> * a new method {{isExcluded}} is introduced which represents a subclass hook 
> for filtering
> * excluded commits are not added to the queue
> * when multiple commits are excluded subsequently, this is collapsed
> * the first non-excluded commit (ContentChange) added to the queue is marked 
> with the last non-excluded root state as the 'previous root'
> * downstream Observers are notified of the exclusion of a commit via a 
> special CommitInfo {{NOOP_CHANGE}}: this instructs it to exclude this change 
> while at the same time 'fast-forwarding' the root state to the new one.
> ** this extra token is one way of solving the problem that 
> {{Observer.contentChanged}} represents a diff between two states but does not 
> transport the 'from' state explicitly - that is implicitly taken from the 
> previous call to {{contentChanged}}. Thus using such a gap token 
> ({{NOOP_CHANGE}}) seems to be the only way to instruct Observers to skip a 
> change.
> To repeat: whoever extends BackgroundObserver with filtering must be aware of 
> the new {{NOOP_CHANGE}} token. Anyone not doing filtering will not get any 
> {{NOOP_CHANGE}} tokens though.
> NOTE: See [comment further 
> below|https://issues.apache.org/jira/browse/OAK-4916?focusedCommentId=15572165&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15572165]
>  with a new suggested approach, which doesn't use NOOP_CHANGE but introduces 
> a new FilteringAwareObserver instead.
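The collapsing behaviour described above can be sketched with a toy queue (illustrative names; the real BackgroundObserver works on NodeState roots and CommitInfo objects): consecutive excluded commits collapse into a single NOOP placeholder whose root is fast-forwarded, so the next real change still sees the correct previous root.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy queue illustrating NOOP collapsing: each entry is [marker-or-change, rootState].
class FilteringQueue {
    static final String NOOP = "NOOP";
    final Deque<String[]> queue = new ArrayDeque<>();

    void contentChanged(String change, String root, boolean excluded) {
        if (excluded) {
            // Collapse consecutive exclusions: keep one NOOP, fast-forwarded
            // to the newest root state.
            if (!queue.isEmpty() && NOOP.equals(queue.peekLast()[0])) {
                queue.peekLast()[1] = root;
            } else {
                queue.addLast(new String[] { NOOP, root });
            }
        } else {
            queue.addLast(new String[] { change, root });
        }
    }
}
```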





[jira] [Updated] (OAK-4907) Collect changes (paths, nts, props..) of a commit in a validator

2016-11-09 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-4907:
-
Fix Version/s: 1.6

> Collect changes (paths, nts, props..) of a commit in a validator
> 
>
> Key: OAK-4907
> URL: https://issues.apache.org/jira/browse/OAK-4907
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core
>Affects Versions: 1.5.11
>Reporter: Stefan Egli
>Assignee: Stefan Egli
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4907.patch, OAK-4907.v2.patch
>
>
> It would be useful to collect a set of changes of a commit (eg in a 
> validator) that could later be used in an Observer for eg prefiltering.
> Such a change collector should collect paths, nodetypes, properties, 
> node-names (and perhaps more at a later stage) of all changes and store the 
> result in the CommitInfo's CommitContext.
> Note that this is a result of 
> [discussions|https://issues.apache.org/jira/browse/OAK-4796?focusedCommentId=15550962&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15550962]
>  around design in OAK-4796
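A minimal sketch of such a collector (illustrative shape, not Oak's actual ChangeSet API): it accumulates the paths, node types and property names touched by one commit so an Observer can later prefilter without computing a diff.

```java
import java.util.HashSet;
import java.util.Set;

// Toy change collector: records what a commit touched, for later prefiltering.
class ChangeCollector {
    final Set<String> paths = new HashSet<>();
    final Set<String> nodeTypes = new HashSet<>();
    final Set<String> propertyNames = new HashSet<>();

    // Would be invoked from a Validator callback for each changed property.
    void propertyChanged(String parentPath, String nodeType, String propertyName) {
        paths.add(parentPath);
        nodeTypes.add(nodeType);
        propertyNames.add(propertyName);
    }
}
```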





[jira] [Updated] (OAK-4908) Best-effort prefiltering in ChangeProcessor based on ChangeSet

2016-11-09 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-4908:
-
Fix Version/s: 1.6

> Best-effort prefiltering in ChangeProcessor based on ChangeSet
> --
>
> Key: OAK-4908
> URL: https://issues.apache.org/jira/browse/OAK-4908
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, jcr
>Affects Versions: 1.5.11
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: review
> Fix For: 1.6, 1.5.13
>
> Attachments: OAK-4908.patch, OAK-4908.v2.patch, OAK-4908.v3.patch, 
> OAK-4908.v4.patch, OAK-4908.v5.patch
>
>
> This is a subtask as a result of 
> [discussions|https://issues.apache.org/jira/browse/OAK-4796?focusedCommentId=15550962&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15550962]
>  around design in OAK-4796:
> Based on the ChangeSet provided with OAK-4907 in the CommitContext, the 
> ChangeProcessor should do a best-effort prefiltering of commits before they 
> get added to the (BackgroundObserver's) queue.
> This consists of the following parts:
> * -the support for optionally excluding commits from being added to the queue 
> in the BackgroundObserver- EDIT: factored that out into OAK-4916
> * -the BackgroundObserver signaling downstream Observers that a change should 
> be excluded via a {{NOOP_CHANGE}} CommitInfo- EDIT: factored that out into 
> OAK-4916
> * the ChangeProcessor using OAK-4907's ChangeSet of the CommitContext for 
> best-effort prefiltering - and handling the {{NOOP_CHANGE}} CommitInfo 
> introduced in OAK-4916





[jira] [Updated] (OAK-4756) A parallel approach to garbage collection

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4756:
---
Issue Type: New Feature  (was: Wish)

> A parallel approach to garbage collection
> -
>
> Key: OAK-4756
> URL: https://issues.apache.org/jira/browse/OAK-4756
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>  Labels: gc
>
> Assuming that:
> # Logical record IDs are implemented.
> # TAR files are ordered in reverse chronological order.
> # When reading segments, TAR files are consulted in order.
> # Segments in recent TAR files shadow segments in older TAR files with the 
> same segment ID.
> A new algorithm for garbage collection can be implemented:
> # Define the input for the garbage collection process. The input consists of 
> the current set of TAR files and a set of record IDs representing the GC 
> roots.
> # Traverse the GC roots and mark the records that are still in use. The mark 
> phase traverses the record graph and produces a list of record IDs. These 
> record IDs are referenced directly or indirectly by the given set of GC roots 
> and need to be kept. The list of record IDs is ordered by segment ID first 
> and record number next. This way, it is possible to process this list in one 
> pass and figure out which segment and which record should be saved at the end 
> of the garbage collection.
> # Remove unused records from segments and rewrite them in a new set of TAR 
> files. The list produced in the previous step is traversed. For each 
> segment encountered, a new segment is created containing only the records 
> that were marked in the previous phase. This segment is then saved in a new 
> set of TAR files. The set of new TAR files is the result of the garbage 
> collection process. 
> # Add the new TAR files to the system. The system will append the new TAR 
> files to the segment store. The segments in these TAR files will shadow the 
> ones in older TAR files.
> # Remove TAR files from the old generation. It is safe to do so because the 
> new set of TAR files are currently shadowing the initial set of TAR files.
> While the garbage collection process is running, the system can work as usual 
> by starting a fresh TAR file. The result of the garbage collection is made 
> visible atomically only at the end, when the new TAR files are integrated 
> into the running system.
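The ordering required by the mark phase above can be sketched as follows (a toy model; record IDs are reduced to (segment ID, record number) pairs): sorting by segment ID first and record number next lets the sweep rewrite segments in a single pass.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;

// Toy mark-phase output: sort marked record IDs by segment ID, then record number.
class MarkPhase {
    static List<long[]> sortMarked(Collection<long[]> marked) { // [segmentId, recordNumber]
        List<long[]> sorted = new ArrayList<>(marked);
        sorted.sort(Comparator.<long[]>comparingLong(r -> r[0])
                              .thenComparingLong(r -> r[1]));
        return sorted;
    }
}
```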





[jira] [Updated] (OAK-4967) Offload the node deduplication cache offheap partially

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4967:
---
Labels: performance  (was: )

> Offload the node deduplication cache offheap partially
> --
>
> Key: OAK-4967
> URL: https://issues.apache.org/jira/browse/OAK-4967
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Alex Parvulescu
>  Labels: performance
>
> The node deduplication cache defaults to {{8388608}} entries [0], which means 
> it can grow up to {{1.6GB}} on its own. It would be interesting 
> to look into offloading some of the items off-heap by configuration, to reduce 
> the effect a full compaction might have on a running instance (and possibly 
> in general reduce the on-heap footprint).
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/WriterCacheManager.java#L70





[jira] [Updated] (OAK-4949) SegmentWriter buffers child node list changes

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4949:
---
Labels: performance  (was: )

> SegmentWriter buffers child node list changes
> -
>
> Key: OAK-4949
> URL: https://issues.apache.org/jira/browse/OAK-4949
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>  Labels: performance
>
> The {{SegmentWriter}} currently buffers the list of child nodes changed on a 
> nodestate update [0] (new node or updated node). This can be problematic in a 
> scenario where there are a large number of children added to a node (i.e. 
> unique index size seen to spike above {{10MM}} in one case).
> To have a reference for the impact of this, at the {{SegmentWriter}} level, 
> for a list of map entries of almost {{3MM}} items, I saw it take up around 
> {{245MB}} heap.
> This issue serves to track a possible improvement here in how we handle this 
> update scenario.
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentWriter.java#L516





[jira] [Updated] (OAK-4582) Split Segment in a read-only and a read-write implementations

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4582:
---
Labels: technical_debt  (was: )

> Split Segment in a read-only and a read-write implementations
> -
>
> Key: OAK-4582
> URL: https://issues.apache.org/jira/browse/OAK-4582
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>  Labels: technical_debt
>
> {{Segment}} is central to the working of the Segment Store, but it currently 
> serves two purposes:
> # It is a temporary storage location for the currently written segment, 
> waiting to be full and flushed to disk.
> # It is a way to parse serialized segments read from disk.
> To distinguish these two use cases, I suggest promoting {{Segment}} to the 
> status of an interface, and creating two different implementations: a 
> read-only and a read-write segment.
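The proposed split can be sketched as follows (names are illustrative, not the actual oak-segment-tar types): a writable in-memory implementation that buffers the segment being written, and an immutable implementation for segments parsed from disk.

```java
import java.io.ByteArrayOutputStream;

// Illustrative split of the two Segment use cases.
interface Segment {
    int size();
}

// Temporary storage for the segment currently being written, until flushed.
class WritableSegment implements Segment {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    void write(byte[] record) {
        buffer.write(record, 0, record.length);
    }

    public int size() {
        return buffer.size();
    }

    // Flushing yields the immutable, on-disk representation.
    ReadOnlySegment flush() {
        return new ReadOnlySegment(buffer.toByteArray());
    }
}

// Parses/holds a serialized segment read from disk; immutable.
class ReadOnlySegment implements Segment {
    private final byte[] data;

    ReadOnlySegment(byte[] data) {
        this.data = data;
    }

    public int size() {
        return data.length;
    }
}
```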





[jira] [Updated] (OAK-4756) A parallel approach to garbage collection

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4756:
---
Labels: gc  (was: )

> A parallel approach to garbage collection
> -
>
> Key: OAK-4756
> URL: https://issues.apache.org/jira/browse/OAK-4756
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>  Labels: gc
>
> Assuming that:
> # Logical record IDs are implemented.
> # TAR files are ordered in reverse chronological order.
> # When reading segments, TAR files are consulted in order.
> # Segments in recent TAR files shadow segments in older TAR files with the 
> same segment ID.
> A new algorithm for garbage collection can be implemented:
> # Define the input for the garbage collection process. The input consists of 
> the current set of TAR files and a set of record IDs representing the GC 
> roots.
> # Traverse the GC roots and mark the records that are still in use. The mark 
> phase traverses the record graph and produces a list of record IDs. These 
> record IDs are referenced directly or indirectly by the given set of GC roots 
> and need to be kept. The list of record IDs is ordered by segment ID first 
> and record number next. This way, it is possible to process this list in one 
> pass and figure out which segment and which record should be saved at the end 
> of the garbage collection.
> # Remove unused records from segments and rewrite them in a new set of TAR 
> files. The list produced in the previous step is traversed. For each 
> segment encountered, a new segment is created containing only the records 
> that were marked in the previous phase. This segment is then saved in a new 
> set of TAR files. The set of new TAR files is the result of the garbage 
> collection process. 
> # Add the new TAR files to the system. The system will append the new TAR 
> files to the segment store. The segments in these TAR files will shadow the 
> ones in older TAR files.
> # Remove TAR files from the old generation. It is safe to do so because the 
> new set of TAR files are currently shadowing the initial set of TAR files.
> While the garbage collection process is running, the system can work as usual 
> by starting a fresh TAR file. The result of the garbage collection is made 
> visible atomically only at the end, when the new TAR files are integrated 
> into the running system.





[jira] [Updated] (OAK-4274) Memory-mapped files can't be explicitly unmapped

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4274:
---
Issue Type: Improvement  (was: Bug)

> Memory-mapped files can't be explicitly unmapped
> 
>
> Key: OAK-4274
> URL: https://issues.apache.org/jira/browse/OAK-4274
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, segmentmk
>Reporter: Francesco Mari
>  Labels: gc, resilience
>
> As described by [this JDK 
> bug|http://bugs.java.com/view_bug.do?bug_id=4724038], there is no way to 
> explicitly unmap memory mapped files. A memory mapped file is unmapped only 
> if the corresponding {{MappedByteBuffer}} is garbage collected by the JVM.





[jira] [Updated] (OAK-4649) Move index files outside of the TAR files

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4649:
---
Labels: technical_debt  (was: )

> Move index files outside of the TAR files
> -
>
> Key: OAK-4649
> URL: https://issues.apache.org/jira/browse/OAK-4649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>  Labels: technical_debt
>
> TAR files currently embed three indexes: an index of the segments contained 
> in the TAR files, a graph index and an index of external binary references.
> Index files are checked for consistency purposes at the startup of the 
> system. Normally, if an index file is corrupted it is recreated. Since the 
> index file is contained inside the TAR file, recreating it implies 
> rewriting the whole TAR file and appending the new index. 
> This process creates unnecessary backups, since the biggest part of the TAR 
> file is effectively immutable. Moreover, because index files are stored in 
> the TAR files, we can't treat TAR files as true read-only files. There is 
> always the possibility that they have to be opened again in write mode for 
> the recovery of the index file.
> I propose to move those index files outside of the TAR files. TAR files will 
> end up being truly read-only files containing immutable data, and index files 
> will be granted their own physical files on the file system. Since index 
> files are derived data, they can then be recreated at will without impacting 
> the read-only part of the segment store.





[jira] [Updated] (OAK-2113) TarMK cold standby: ensure client integrity

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2113:
---
Labels: resilience  (was: )

> TarMK cold standby: ensure client integrity 
> 
>
> Key: OAK-2113
> URL: https://issues.apache.org/jira/browse/OAK-2113
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Manfred Baedke
>Priority: Minor
>  Labels: resilience
>
> TarMK cold standby needs measures to ensure the integrity of each segment on 
> the slave and that all segments that are reachable on the master from a given 
> segment are available on the slave.
> To ensure the integrity of a given segment on the slave, we can just use the 
> checksum stored in the segment, so there is no need to change the 
> communication protocol for this.
> To ensure that all segments that are reachable from a given segment are the 
> same on the master and on the slave, we need a new request to calculate a 
> suitable checksum on the master and send it back to the slave.
> If missing or broken segments are detected, the slave will pull them from the 
> master again. 
> Both measures combined should be scheduled to run regularly. 
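The per-segment integrity check can be sketched as follows (CRC32 is used here for illustration; the actual checksum stored in Oak segments is an assumption of this sketch): a mismatch between the slave's computed checksum and the value received from the master marks the segment for re-pull.

```java
import java.util.zip.CRC32;

// Illustrative per-segment integrity check for a cold-standby slave.
class SegmentIntegrity {
    static long checksum(byte[] segment) {
        CRC32 crc = new CRC32();
        crc.update(segment);
        return crc.getValue();
    }

    // True when the slave's copy does not match the master's checksum
    // and the segment must be pulled again.
    static boolean needsRepull(byte[] slaveCopy, long masterChecksum) {
        return checksum(slaveCopy) != masterChecksum;
    }
}
```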





[jira] [Updated] (OAK-3679) Rollback to timestamp

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3679:
---
Labels: tooling  (was: )

> Rollback to timestamp
> -
>
> Key: OAK-3679
> URL: https://issues.apache.org/jira/browse/OAK-3679
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core, documentmk, segment-tar
>Reporter: Thomas Mueller
>  Labels: tooling
>
> We should have a feature to roll back to a certain point in time. The use 
> cases are: 
> * undo a failed, large operation (for example upgrade, migration, installing 
> a package),
> * on a copy of the repository, switch to an old state for reading old content
> * recover from a corruption (for example corruption due to incorrect 
> "discovery" state, such as concurrent async index updates).





[jira] [Updated] (OAK-4146) Improve tarmkrecovery docs

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4146:
---
Labels: documentation  (was: )

> Improve tarmkrecovery docs
> --
>
> Key: OAK-4146
> URL: https://issues.apache.org/jira/browse/OAK-4146
> Project: Jackrabbit Oak
>  Issue Type: Documentation
>  Components: run, segment-tar, segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
>  Labels: documentation
>
> Add some helper steps on output and what you can actually do with it:
> {quote}
> 1. Run tarmkrecovery command
> {code:none}
> nohup java -Xmx2048m -jar oak-run-*.jar tarmkrecovery repository/segmentstore 
> &> tarmkrecovery.log &
> {code}
> 2. Take the output of the tarmkrecovery, take the top 10 items output 
> (excluding "Current head revision line") then reverse the order of those and 
> format them to journal.log file format (revision:offset root) and put those 
> values in a fresh journal.log in that format
> For example:
> {code:none}
> 6ee64a26-491e-4630-ac2e-bdad1f27e73a:257016 root
> 5ee64a26-491e-4630-ac2e-bdad1f27e73b:257111 root
> {code}
> 3. After setting up the new journal.log then run this command on the 
> segmentstore
> {code:none}
> nohup java -Xmx2048m -jar oak-run-*.jar check -p repository/segmentstore -d 
> &> check.log &
> {code}
> 4. That command will give you output of which of those 10 items in the 
> journal.log are good. Now remove all lines from the journal that come after 
> the last known good revision.
> {quote}





[jira] [Updated] (OAK-1905) SegmentMK: Arch segment(s)

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1905:
---
Labels: perfomance  (was: )

> SegmentMK: Arch segment(s)
> --
>
> Key: OAK-1905
> URL: https://issues.apache.org/jira/browse/OAK-1905
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Jukka Zitting
>Priority: Minor
>  Labels: perfomance
>
> There are a lot of constants and other commonly occurring names, values and 
> other data in a typical repository. To optimize storage space and access 
> speed, it would be useful to place such data in one or more constant "arch 
> segments" that are always cached in memory.





[jira] [Updated] (OAK-3893) SegmentWriter records cache could use thinner keys

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3893:
---
Labels: performance  (was: )

> SegmentWriter records cache could use thinner keys
> --
>
> Key: OAK-3893
> URL: https://issues.apache.org/jira/browse/OAK-3893
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-tar
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
>  Labels: performance
> Attachments: OAK-3893.patch
>
>
> The SegmentWriter keeps a records deduplication cache (the 'records' map) 
> that maintains two types of mappings:
> * template -> recordid
> * string -> recordid
> For the first one (template -> recordid) we can come up with a thinner 
> representation of a template (a hash function that is fast and not very 
> collision-prone) so we don't have to keep a reference to each template object.
> The same applies to the second one: similar to what is happening in the 
> StringsCache now, we could keep the string value up to a certain size and, 
> beyond that, hash it and use the hash as the key in the deduplication map.
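A minimal sketch of the string-side idea, assuming a size threshold and a simple 64-bit hash; the threshold value, the hash function, and the class/method names are all illustrative, not what oak-segment-tar actually uses:

```java
import java.util.HashMap;
import java.util.Map;

public class ThinKeyCacheSketch {

    // Assumed cut-off; not Oak's actual value.
    static final int STRING_KEY_THRESHOLD = 250;

    // Small values are deduplicated by exact key (no collisions possible);
    // large values only by a thin 64-bit hash, so the cache never retains
    // a reference to the full large string.
    final Map<String, String> byValue = new HashMap<>();
    final Map<Long, String> byHash = new HashMap<>();

    // Derive a thin 64-bit key instead of holding on to the full value.
    static long thinKey(String value) {
        long h = 1125899906842597L; // arbitrary prime seed
        for (int i = 0; i < value.length(); i++) {
            h = 31 * h + value.charAt(i);
        }
        return h;
    }

    void put(String value, String recordId) {
        if (value.length() <= STRING_KEY_THRESHOLD) {
            byValue.put(value, recordId);
        } else {
            byHash.put(thinKey(value), recordId);
        }
    }

    String get(String value) {
        return value.length() <= STRING_KEY_THRESHOLD
                ? byValue.get(value)
                : byHash.get(thinKey(value));
    }
}
```

The trade-off is that hash collisions on large values could return a wrong record id, so a real implementation would need a hash that is fast and not very collision-prone, as the issue notes, or a verification step on lookup.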



--


[jira] [Updated] (OAK-4866) Design and implement a proper backup and restore API

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4866:
---
Labels: production  (was: )

> Design and implement a proper backup and restore API
> 
>
> Key: OAK-4866
> URL: https://issues.apache.org/jira/browse/OAK-4866
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>  Labels: production
>
> The current backup and restore API in {{org.apache.jackrabbit.oak.backup}} 
> refers to classes and interfaces that should remain private to the 
> oak-segment-tar bundle. This is, in fact, the reason why that package was not 
> exported as part of the effort for OAK-4843. The current backup and restore 
> API should be redesigned so that it can be used from outside oak-segment-tar 
> without exporting implementation details.



--


[jira] [Updated] (OAK-4994) Implement additional record types

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4994:
---
Labels: tooling  (was: )

> Implement additional record types
> -
>
> Key: OAK-4994
> URL: https://issues.apache.org/jira/browse/OAK-4994
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>  Labels: tooling
>
> The records written in the segment store should be augmented with additional 
> types. In OAK-2498 the following additional types were identified:
> - List of property names. A list of strings, where every string is a property 
> name, is referenced by the template record.
> - List of lists of values. This list is pointed to by the node record and 
> contains the values for single- and multi-value properties of that node. 
> The double indirection is needed to support multi-value properties.
> - Map from string to node. This map is referenced by the template and 
> represents the child relationship between nodes.
> - Super root. This is a marker type identifying top-level records for the 
> repository super-roots.
> Just adding these types doesn't improve the situation for the segment store, 
> though. Bucket and block records are not easily parseable because they have a 
> variable length and their size is not specified in the record value itself. 
> For record types to be used effectively, the way we serialize certain kinds 
> of data has to be reviewed for further improvements.
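To make the parseability point concrete, here is a sketch of a length-prefixed record layout, where the size is carried in the record itself so a reader can recognize and skip records without external information. The type tags and header layout are illustrative assumptions, not the actual segment-tar record format:

```java
import java.nio.ByteBuffer;

public class LengthPrefixedRecordSketch {

    // Hypothetical type tags; not the actual segment-tar record type codes.
    static final byte TYPE_BLOCK = 1;
    static final byte TYPE_BUCKET = 2;

    // Write a record as [type:1][length:4][payload], so its size is
    // recoverable from the record value itself.
    static byte[] write(byte type, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + payload.length);
        buf.put(type);
        buf.putInt(payload.length);
        buf.put(payload);
        return buf.array();
    }

    // Read the payload length back from the header alone, without
    // knowing anything else about the record.
    static int payloadLength(byte[] record) {
        return ByteBuffer.wrap(record, 1, 4).getInt();
    }
}
```

Under this scheme, bucket and block records would no longer need their size specified out-of-band, which is the gap the issue describes.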



--


[jira] [Updated] (OAK-4779) ClusterNodeInfo may renew lease while recovery is running

2016-11-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-4779:
--
   Labels: resilience  (was: candidate_oak_1_4 resilience)
Fix Version/s: 1.4.10

Merged into 1.4 branch: http://svn.apache.org/r1768868 and 
http://svn.apache.org/r1768870

> ClusterNodeInfo may renew lease while recovery is running
> -
>
> Key: OAK-4779
> URL: https://issues.apache.org/jira/browse/OAK-4779
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.6, 1.5.10, 1.4.10
>
> Attachments: OAK-4779-2.patch, OAK-4779.patch
>
>
> ClusterNodeInfo.renewLease() does not detect when it is being recovered by 
> another cluster node.



--