[jira] [Resolved] (OAK-4299) oak-run console should connect to repository in read-only mode by default

2016-04-25 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-4299.

   Resolution: Fixed
Fix Version/s: 1.5.2

[r1740950|https://svn.apache.org/r1740950]@trunk changes default behavior to 
use read-only mode. (Updated doc in 
[r1740951|https://svn.apache.org/r1740951]@trunk)

{{\-\-read\-write}} is now exposed instead of {{\-\-read\-only}} to connect in 
read-write mode.

As suggested, in read-only mode, we'd log
{noformat}
Repository connected in read-only mode. Use '--read-write' for write operations
{noformat}
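
For illustration, a minimal sketch (not the actual oak-run Console code; class and helper names are made up) of how a read-only-by-default switch like the one described above could be wired with a joptsimple-style parser:
{noformat}
import joptsimple.OptionParser;
import joptsimple.OptionSet;

// Hypothetical sketch: connect read-only unless --read-write is passed.
public class ConsoleOptionsSketch {
    public static void main(String[] args) {
        OptionParser parser = new OptionParser();
        parser.accepts("read-write", "connect to the repository in read-write mode");
        OptionSet options = parser.parse(args);

        boolean readWrite = options.has("read-write");
        if (!readWrite) {
            System.out.println(
                "Repository connected in read-only mode. Use '--read-write' for write operations");
        }
        // ... open the store writable only when readWrite is true ...
    }
}
{noformat}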

> oak-run console should connect to repository in read-only mode by default
> -
>
> Key: OAK-4299
> URL: https://issues.apache.org/jira/browse/OAK-4299
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: production, tools
> Fix For: 1.6, 1.5.2
>
>
> With OAK-4182 and OAK-4298, oak-run->console supports connecting to a 
> repository in read only mode. But, as [~edivad] suggested 
> [here|https://issues.apache.org/jira/browse/OAK-4182?focusedCommentId=15245661=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15245661],
>  it might be more useful to connect to repositories in read-only mode by 
> default and require an explicit read-write flag for write operations.
> This, of course, doesn't align with 'always read-write mode' that's 
> operational currently.
> Btw, since 1.5.2 hasn't been released yet, we can do this switch pretty 
> cleanly now. Post 1.5.2 (which would then have {{--read-only}}), we might 
> have to retain that flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-4299) oak-run console should connect to repository in read-only mode by default

2016-04-25 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257538#comment-15257538
 ] 

Chetan Mehrotra edited comment on OAK-4299 at 4/26/16 4:14 AM:
---

Yeah, this would be a change in expected behaviour. Maybe log a message at 
start-up when running in read-only mode, mentioning which flag to pass to 
switch to read-write mode.


was (Author: chetanm):
Yeah this would change in expected behaviour. May be log a message at start if 
running in read only and mentioning about what flag to pass to switch to read 
write mede

> oak-run console should connect to repository in read-only mode by default
> -
>
> Key: OAK-4299
> URL: https://issues.apache.org/jira/browse/OAK-4299
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: production, tools
> Fix For: 1.6
>
>
> With OAK-4182 and OAK-4298, oak-run->console supports connecting to a 
> repository in read only mode. But, as [~edivad] suggested 
> [here|https://issues.apache.org/jira/browse/OAK-4182?focusedCommentId=15245661=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15245661],
>  it might be more useful to connect to repositories in read-only mode by 
> default and require an explicit read-write flag for write operations.
> This, of course, doesn't align with 'always read-write mode' that's 
> operational currently.
> Btw, since 1.5.2 hasn't been released yet, we can do this switch pretty 
> cleanly now. Post 1.5.2 (which would then have {{--read-only}}), we might 
> have to retain that flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4299) oak-run console should connect to repository in read-only mode by default

2016-04-25 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257538#comment-15257538
 ] 

Chetan Mehrotra commented on OAK-4299:
--

Yeah, this would be a change in expected behaviour. Maybe log a message at 
start-up when running in read-only mode, mentioning which flag to pass to 
switch to read-write mode.

> oak-run console should connect to repository in read-only mode by default
> -
>
> Key: OAK-4299
> URL: https://issues.apache.org/jira/browse/OAK-4299
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: production, tools
> Fix For: 1.6
>
>
> With OAK-4182 and OAK-4298, oak-run->console supports connecting to a 
> repository in read only mode. But, as [~edivad] suggested 
> [here|https://issues.apache.org/jira/browse/OAK-4182?focusedCommentId=15245661=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15245661],
>  it might be more useful to connect to repositories in read-only mode by 
> default and require an explicit read-write flag for write operations.
> This, of course, doesn't align with 'always read-write mode' that's 
> operational currently.
> Btw, since 1.5.2 hasn't been released yet, we can do this switch pretty 
> cleanly now. Post 1.5.2 (which would then have {{--read-only}}), we might 
> have to retain that flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-1410) Allow automatic propagation of oak-whiteboard when adding security manager

2016-04-25 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra resolved OAK-1410.
---
Resolution: Won't Fix

> Allow automatic propagation of oak-whiteboard when adding security manager
> --
>
> Key: OAK-1410
> URL: https://issues.apache.org/jira/browse/OAK-1410
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Tobias Bocanegra
>Assignee: Tobias Bocanegra
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4048) [regression] SyncHandler.listIdentities() returns all users, not only external ones

2016-04-25 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra resolved OAK-4048.
---
   Resolution: Fixed
Fix Version/s: 1.6

Fixed in r1740879.

> [regression] SyncHandler.listIdentities() returns all users, not only 
> external ones
> ---
>
> Key: OAK-4048
> URL: https://issues.apache.org/jira/browse/OAK-4048
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Alexander Klimetschek
>Assignee: Tobias Bocanegra
> Fix For: 1.6
>
> Attachments: OAK-4048.patch
>
>
> OAK-3508 accidentally introduced a regression in 
> https://github.com/apache/jackrabbit-oak/commit/cc78f6fdd122d1c9f200b43fc2b9536518ea996b#diff-490ff25c104d019ee25f92b2b8bdbabd
> If {{getIdentityRef(auth)}} returns null, {{createSyncedIdentity}} needs to 
> return null.
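
As an aside, a heavily simplified sketch of the guard described above (stand-in types only, not the actual {{DefaultSyncContext}} code):
{noformat}
// Simplified illustration of the fix: if there is no external identity
// reference, createSyncedIdentity must report "no identity" (null) instead
// of wrapping a purely local user or group.
final class SyncedIdentitySketch {

    // stand-in for DefaultSyncContext.getIdentityRef(Authorizable):
    // returns null when the authorizable carries no external identity reference
    static String getIdentityRef(Object authorizable) {
        return null; // placeholder
    }

    // stand-in for DefaultSyncContext.createSyncedIdentity(Authorizable)
    static String createSyncedIdentity(Object authorizable) {
        String ref = getIdentityRef(authorizable);
        if (ref == null) {
            return null; // local identity: must not show up in listIdentities()
        }
        return ref;
    }
}
{noformat}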



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4268) Clarify API contract for SyncResult.getIdentity

2016-04-25 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra updated OAK-4268:
--
Assignee: angela  (was: Tobias Bocanegra)

> Clarify API contract for SyncResult.getIdentity
> ---
>
> Key: OAK-4268
> URL: https://issues.apache.org/jira/browse/OAK-4268
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: angela
>Assignee: angela
>
> [~tripod], I would appreciate it if you could add some clarifying javadoc to 
> {{SyncResult.getIdentity}} stating what the nature of the identity is 
> expected to be for the various statuses, and what it means if the identity 
> is {{null}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4303) Use the oak-segment-next in the oak-upgrade tests

2016-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256563#comment-15256563
 ] 

Michael Dürig commented on OAK-4303:


[~frm] put the respective fixture in place for running {{oak-segment-next}} 
(OAK-4259): {{-Dnsfixtures=SEGMENT_NEXT}}. 

[~tomek.rekawek], earlier, when I applied the changes from OAK-3348 to 
{{oak-segment}} directly in my local checkout, I saw the following tests 
failing:

oak-jcr:
{{UpgradeTest.upgradeFrom10}}

oak-upgrade:
{{AbstractOak2OakTest.validateMigration}}
{{RepositorySidegradeTest.verifyGenericProperties}}

Now that those changes have gone into the separate {{oak-segment-next}} module, 
these tests are green. However, all of these tests directly instantiate a store 
from {{oak-segment}}, and I'm not sure how to fit {{oak-segment-next}} into this 
picture. Is this something you could look into?
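
For context, a minimal sketch of how a test could pick the store implementation from the fixture property instead of hard-wiring {{oak-segment}} (only the {{nsfixtures}} system property comes from OAK-4259; the factory helpers below are made up):
{noformat}
import org.apache.jackrabbit.oak.spi.state.NodeStore;

// Illustrative only: select the store backend from the nsfixtures system
// property (e.g. -Dnsfixtures=SEGMENT_NEXT) rather than instantiating the
// oak-segment store directly in the test.
final class StoreFixtureSketch {

    static NodeStore createStore() {
        String fixtures = System.getProperty("nsfixtures", "");
        if (fixtures.contains("SEGMENT_NEXT")) {
            return newSegmentNextStore(); // hypothetical helper backed by oak-segment-next
        }
        return newSegmentStore();         // hypothetical helper backed by oak-segment
    }

    // Hypothetical factories; a real test would build the actual stores here.
    private static NodeStore newSegmentNextStore() {
        throw new UnsupportedOperationException("sketch only");
    }

    private static NodeStore newSegmentStore() {
        throw new UnsupportedOperationException("sketch only");
    }
}
{noformat}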

> Use the oak-segment-next in the oak-upgrade tests
> -
>
> Key: OAK-4303
> URL: https://issues.apache.org/jira/browse/OAK-4303
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next, upgrade
>Reporter: Tomek Rękawek
>  Labels: migration, upgrade
> Fix For: 1.6
>
>
> Since oak-segment will be deprecated, let's switch to oak-segment-next in 
> all oak-upgrade tests (apart from the specific test covering the "classic" 
> oak-segment support).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4259) Implement fixtures for running against oak-segment and/or oak-segment-next

2016-04-25 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4259.
-
Resolution: Fixed

Fixed in r1740858. Anyway, I will probably have to rename the fixture to 
something else as soon as OAK-4245 is resolved.

> Implement fixtures for running against oak-segment and/or oak-segment-next
> 
>
> Key: OAK-4259
> URL: https://issues.apache.org/jira/browse/OAK-4259
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>Priority: Blocker
>  Labels: testing
> Fix For: 1.6, 1.5.2
>
>
> We need fixtures to run UTs / ITs against either or both segment 
> implementations {{oak-segment}} and {{oak-segment-next}}. 
> Ideally we can enable them individually through e.g. environment variables. A 
> standard build would run against {{oak-segment}} so as not to affect others. 
> {{oak-segment-next}} could be enabled on request locally or for the CI. 
> Once we deprecate {{oak-segment}} we would switch the default fixture to 
> {{oak-segment-next}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4275) Backport OAK-4065 (Counter index can get out of sync) to 1.2 and 1.4

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4275:
--
Fix Version/s: (was: 1.4.3)
   1.4.2

> Backport OAK-4065 (Counter index can get out of sync) to 1.2 and 1.4
> 
>
> Key: OAK-4275
> URL: https://issues.apache.org/jira/browse/OAK-4275
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.4.2, 1.2.15
>
>
> Even though two workarounds exist (one is to rebuild the counter index, 
> another is to manually override the cost of an index), I think this needs to 
> be backported:
> * Once it occurs, some queries are very slow, and it's not always easy to 
> understand why.
> * The backport fixes the issue, and ensures people don't run into this issue 
> at all.
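
As a side note on the second workaround: one way to pin an index's cost estimate in Oak is to set an {{entryCount}} value on the index definition node. A minimal JCR sketch, assuming a hypothetical index at {{/oak:index/someIndex}} and an already open administrative {{Session}} (whether {{entryCount}} is the right knob depends on the index type):
{noformat}
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Illustrative only: pin the cost estimate of an index by setting entryCount
// on its definition node. Path and value are placeholders.
final class IndexCostOverrideSketch {

    static void overrideCost(Session session) throws RepositoryException {
        Node indexDef = session.getNode("/oak:index/someIndex"); // hypothetical index
        indexDef.setProperty("entryCount", 1000L); // low value => low estimated cost
        session.save();
    }
}
{noformat}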



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4306) Disable cleanup when compaction is paused

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4306:
--
Labels:   (was: candidate_oak_1_4)

> Disable cleanup when compaction is paused
> -
>
> Key: OAK-4306
> URL: https://issues.apache.org/jira/browse/OAK-4306
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Blocker
> Fix For: 1.6, 1.4.2, 1.5.2
>
>
> Some internal longevity tests revealed a significant performance degradation 
> related to the compaction cleanup phase, so it'd be good to disable cleanup 
> while compaction is paused.
> fyi [~mduerig], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4306) Disable cleanup when compaction is paused

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4306:
--
Priority: Blocker  (was: Major)

> Disable cleanup when compaction is paused
> -
>
> Key: OAK-4306
> URL: https://issues.apache.org/jira/browse/OAK-4306
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Blocker
>  Labels: candidate_oak_1_4
> Fix For: 1.6, 1.4.2, 1.5.2
>
>
> Some internal longevity tests revealed a significant performance degradation 
> related to the compaction cleanup phase, so it'd be good to disable cleanup 
> while compaction is paused.
> fyi [~mduerig], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4306) Disable cleanup when compaction is paused

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4306:
--
Fix Version/s: 1.6

> Disable cleanup when compaction is paused
> -
>
> Key: OAK-4306
> URL: https://issues.apache.org/jira/browse/OAK-4306
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>  Labels: candidate_oak_1_4
> Fix For: 1.6, 1.4.2, 1.5.2
>
>
> Some internal longevity tests revealed a significant performance degradation 
> related to the compaction cleanup phase, so it'd be good to disable cleanup 
> while compaction is paused.
> fyi [~mduerig], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4099) Lucene index appear to be corrupted with compaction enabled

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4099:
--
Priority: Blocker  (was: Minor)

> Lucene index appear to be corrupted with compaction enabled
> ---
>
> Key: OAK-4099
> URL: https://issues.apache.org/jira/browse/OAK-4099
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Blocker
>  Labels: candidate_oak_1_2, candidate_oak_1_4, resilience
> Fix For: 1.6, 1.4.2
>
> Attachments: OAK-4099-v1.patch
>
>
> While running on SegmentNodeStore with online compaction enabled, it can happen 
> that access to the Lucene index starts failing with a SegmentNotFoundException
> {noformat}
> Caused by: 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
> a949519a-8903-44f9-a17e-b6d83fb32186 not found
>at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:870)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:136)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
>at 
> org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:64)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.loadBlob(OakDirectory.java:259)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.readBytes(OakDirectory.java:307)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readBytes(OakDirectory.java:404)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readByte(OakDirectory.java:411)
>at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.loadBlock(BlockTreeTermsReader.java:2397)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.seekCeil(BlockTreeTermsReader.java:1973)
>at 
> org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:225)
>at 
> org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:78)
>at 
> org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
>at 
> org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
>at 
> org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
>at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:418)
>at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:636)
>at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:683)
>at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
> {noformat}
> The above segmentId was mentioned in the compaction log
> {noformat}
> 06.03.2016 02:03:30.706 *INFO* [TarMK flush thread 
> [/app/repository/segmentstore], active since Sun Mar 06 02:03:29 GMT 2016, 
> previous max duration 8218ms] 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data00233a.tar:
>37ec786e-a9f7-46eb-a3b5-ce5d4777ea01, 
> f36051fe-d8c4-46d1-ac1d-081946389eb6, fae91ff2-8ca6-4ac1-a8d8-d4bd09b7f6a6, 
> 16d87f09-721b-4155-a9c8-b8ecf471bfc3,
>e641f1a3-b323-44e6-aad0-7b894a1efb69, 
> edc9d141-6c05-42c9-a2a2-d7130fd9c826, b602372c-b17a-448a-a8e9-8bdccc64fb82, 
> acc2f032-07ba-46ed-a9c7-d3a05ab53d7a,
>a7323ed2-b2de-4006-ae51-e4f84165a0e4, 
> cb320c70-5ca9-4ed1-a972-e87a6bba9f9b, f45afd7e-5417-42dd-a2f7-4624f74b6c6e, 
> c66f66ef-cdd0-4327-abc6-bf910cb5768d,
>7f925a07-ff56-4613-ac8f-272a0e481926, 
> 4ad044ec-3b2d-4c3e-aeb0-d5f5a04bc23e, 82f1c3aa-2e0c-421c-a033-e4ffcb6002c7, 
> 1387655b-f633-4011-a55c-d9580e40929b,
>c50c94fc-2e8b-4904-a37f-0a33cc001312, 
> 7915e9ce-bb9d-4628-ad6f-e7f2844b2399, e7cd013b-a147-426a-af29-fa025058a08a, 
> f16d43b0-2113-4808-aea6-5910102e5c7d,
> ...
> *31edad2e-e14b-463d-a6af-540bac6009f1*,
> ...,
> *a949519a-8903-44f9-a17e-b6d83fb32186*,
> ...
> {noformat}
> *Note that the system recovered after a restart, so the corruption was transient*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4099) Lucene index appear to be corrupted with compaction enabled

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4099:
--
Fix Version/s: 1.4.2

> Lucene index appear to be corrupted with compaction enabled
> ---
>
> Key: OAK-4099
> URL: https://issues.apache.org/jira/browse/OAK-4099
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_2, candidate_oak_1_4, resilience
> Fix For: 1.6, 1.4.2
>
> Attachments: OAK-4099-v1.patch
>
>
> While running on SegmentNodeStore with online compaction enabled, it can happen 
> that access to the Lucene index starts failing with a SegmentNotFoundException
> {noformat}
> Caused by: 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
> a949519a-8903-44f9-a17e-b6d83fb32186 not found
>at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:870)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:136)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
>at 
> org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:64)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.loadBlob(OakDirectory.java:259)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.readBytes(OakDirectory.java:307)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readBytes(OakDirectory.java:404)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readByte(OakDirectory.java:411)
>at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.loadBlock(BlockTreeTermsReader.java:2397)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.seekCeil(BlockTreeTermsReader.java:1973)
>at 
> org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:225)
>at 
> org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:78)
>at 
> org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
>at 
> org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
>at 
> org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
>at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:418)
>at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:636)
>at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:683)
>at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
> {noformat}
> The above segmentId was mentioned in the compaction log
> {noformat}
> 06.03.2016 02:03:30.706 *INFO* [TarMK flush thread 
> [/app/repository/segmentstore], active since Sun Mar 06 02:03:29 GMT 2016, 
> previous max duration 8218ms] 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data00233a.tar:
>37ec786e-a9f7-46eb-a3b5-ce5d4777ea01, 
> f36051fe-d8c4-46d1-ac1d-081946389eb6, fae91ff2-8ca6-4ac1-a8d8-d4bd09b7f6a6, 
> 16d87f09-721b-4155-a9c8-b8ecf471bfc3,
>e641f1a3-b323-44e6-aad0-7b894a1efb69, 
> edc9d141-6c05-42c9-a2a2-d7130fd9c826, b602372c-b17a-448a-a8e9-8bdccc64fb82, 
> acc2f032-07ba-46ed-a9c7-d3a05ab53d7a,
>a7323ed2-b2de-4006-ae51-e4f84165a0e4, 
> cb320c70-5ca9-4ed1-a972-e87a6bba9f9b, f45afd7e-5417-42dd-a2f7-4624f74b6c6e, 
> c66f66ef-cdd0-4327-abc6-bf910cb5768d,
>7f925a07-ff56-4613-ac8f-272a0e481926, 
> 4ad044ec-3b2d-4c3e-aeb0-d5f5a04bc23e, 82f1c3aa-2e0c-421c-a033-e4ffcb6002c7, 
> 1387655b-f633-4011-a55c-d9580e40929b,
>c50c94fc-2e8b-4904-a37f-0a33cc001312, 
> 7915e9ce-bb9d-4628-ad6f-e7f2844b2399, e7cd013b-a147-426a-af29-fa025058a08a, 
> f16d43b0-2113-4808-aea6-5910102e5c7d,
> ...
> *31edad2e-e14b-463d-a6af-540bac6009f1*,
> ...,
> *a949519a-8903-44f9-a17e-b6d83fb32186*,
> ...
> {noformat}
> *Note that the system recovered after a restart, so the corruption was transient*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4306) Disable cleanup when compaction is paused

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4306:
--
Fix Version/s: 1.4.2

> Disable cleanup when compaction is paused
> -
>
> Key: OAK-4306
> URL: https://issues.apache.org/jira/browse/OAK-4306
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>  Labels: candidate_oak_1_4
> Fix For: 1.6, 1.4.2, 1.5.2
>
>
> Some internal longevity tests revealed a significant performance degradation 
> related to the compaction cleanup phase, so it'd be good to disable cleanup 
> while compaction is paused.
> fyi [~mduerig], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4305) BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins

2016-04-25 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256435#comment-15256435
 ] 

Julian Reschke commented on OAK-4305:
-

[~tomek.rekawek], that sounds a bit like OAK-4273; were these timeouts as well?

> BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins
> 
>
> Key: OAK-4305
> URL: https://issues.apache.org/jira/browse/OAK-4305
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Davide Giannella
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> The test 
> {{org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> }} has been failing quite frequently on jenkins.
> See epic for list. 
> Lately seen only on 1.4.
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/850/
> {noformat:title=failed tests from build 850}
> Test results:
> 3 tests failed.
> FAILED:  
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> Error Message:
> Thread hasn't finished in 10s
> Stack Trace:
> java.lang.AssertionError: Thread hasn't finished in 10s
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict(BulkCreateOrUpdateTest.java:300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> FAILED:  
> 

[jira] [Commented] (OAK-4245) Decide on a final name for oak-segment-next

2016-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256422#comment-15256422
 ] 

Michael Dürig commented on OAK-4245:


Please vote here for the new name: 
http://jackrabbit.markmail.org/thread/fusyi22ed7lpbd7i

> Decide on a final name for oak-segment-next
> ---
>
> Key: OAK-4245
> URL: https://issues.apache.org/jira/browse/OAK-4245
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
> Fix For: 1.6, 1.5.2
>
>
> We should come up with a definite and final name for {{oak-segment-next}}. 
> Making this a blocker to avoid releasing with the current WIP name. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4099) Lucene index appear to be corrupted with compaction enabled

2016-04-25 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-4099:
-
Labels: candidate_oak_1_2 candidate_oak_1_4 resilience  (was: 
candidate_oak_1_4 resilience)

> Lucene index appear to be corrupted with compaction enabled
> ---
>
> Key: OAK-4099
> URL: https://issues.apache.org/jira/browse/OAK-4099
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_2, candidate_oak_1_4, resilience
> Fix For: 1.6
>
> Attachments: OAK-4099-v1.patch
>
>
> While running on SegmentNodeStore with online compaction enabled, it can happen 
> that access to the Lucene index starts failing with a SegmentNotFoundException
> {noformat}
> Caused by: 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
> a949519a-8903-44f9-a17e-b6d83fb32186 not found
>at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:870)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:136)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
>at 
> org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:64)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.loadBlob(OakDirectory.java:259)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.readBytes(OakDirectory.java:307)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readBytes(OakDirectory.java:404)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readByte(OakDirectory.java:411)
>at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.loadBlock(BlockTreeTermsReader.java:2397)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.seekCeil(BlockTreeTermsReader.java:1973)
>at 
> org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:225)
>at 
> org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:78)
>at 
> org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
>at 
> org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
>at 
> org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
>at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:418)
>at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:636)
>at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:683)
>at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
> {noformat}
> The above segmentId was mentioned in the compaction log
> {noformat}
> 06.03.2016 02:03:30.706 *INFO* [TarMK flush thread 
> [/app/repository/segmentstore], active since Sun Mar 06 02:03:29 GMT 2016, 
> previous max duration 8218ms] 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data00233a.tar:
>37ec786e-a9f7-46eb-a3b5-ce5d4777ea01, 
> f36051fe-d8c4-46d1-ac1d-081946389eb6, fae91ff2-8ca6-4ac1-a8d8-d4bd09b7f6a6, 
> 16d87f09-721b-4155-a9c8-b8ecf471bfc3,
>e641f1a3-b323-44e6-aad0-7b894a1efb69, 
> edc9d141-6c05-42c9-a2a2-d7130fd9c826, b602372c-b17a-448a-a8e9-8bdccc64fb82, 
> acc2f032-07ba-46ed-a9c7-d3a05ab53d7a,
>a7323ed2-b2de-4006-ae51-e4f84165a0e4, 
> cb320c70-5ca9-4ed1-a972-e87a6bba9f9b, f45afd7e-5417-42dd-a2f7-4624f74b6c6e, 
> c66f66ef-cdd0-4327-abc6-bf910cb5768d,
>7f925a07-ff56-4613-ac8f-272a0e481926, 
> 4ad044ec-3b2d-4c3e-aeb0-d5f5a04bc23e, 82f1c3aa-2e0c-421c-a033-e4ffcb6002c7, 
> 1387655b-f633-4011-a55c-d9580e40929b,
>c50c94fc-2e8b-4904-a37f-0a33cc001312, 
> 7915e9ce-bb9d-4628-ad6f-e7f2844b2399, e7cd013b-a147-426a-af29-fa025058a08a, 
> f16d43b0-2113-4808-aea6-5910102e5c7d,
> ...
> *31edad2e-e14b-463d-a6af-540bac6009f1*,
> ...,
> *a949519a-8903-44f9-a17e-b6d83fb32186*,
> ...
> {noformat}
> *Note that the system recovered after a restart, so the corruption was transient*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4226) Improve testing of DefaultSyncContext

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4226.
-
Resolution: Fixed

> Improve testing of DefaultSyncContext
> -
>
> Key: OAK-4226
> URL: https://issues.apache.org/jira/browse/OAK-4226
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: auth-external
>Reporter: angela
>Assignee: angela
> Fix For: 1.5.2
>
>
> - additional test-coverage
> - more tests for edge cases and special scenario (see for example OAK-4224)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4099) Lucene index appear to be corrupted with compaction enabled

2016-04-25 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-4099:
-
Labels: candidate_oak_1_4 resilience  (was: resilience)

> Lucene index appear to be corrupted with compaction enabled
> ---
>
> Key: OAK-4099
> URL: https://issues.apache.org/jira/browse/OAK-4099
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_4, resilience
> Fix For: 1.6
>
> Attachments: OAK-4099-v1.patch
>
>
> While running on SegmentNodeStore with online compaction enabled, it can happen 
> that access to the Lucene index starts failing with a SegmentNotFoundException
> {noformat}
> Caused by: 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
> a949519a-8903-44f9-a17e-b6d83fb32186 not found
>at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:870)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:136)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
>at 
> org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
>at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:64)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.loadBlob(OakDirectory.java:259)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.readBytes(OakDirectory.java:307)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readBytes(OakDirectory.java:404)
>at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readByte(OakDirectory.java:411)
>at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.loadBlock(BlockTreeTermsReader.java:2397)
>at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.seekCeil(BlockTreeTermsReader.java:1973)
>at 
> org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:225)
>at 
> org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:78)
>at 
> org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
>at 
> org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
>at 
> org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
>at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:418)
>at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:636)
>at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:683)
>at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
> {noformat}
> The above segmentId was mentioned in the compaction log
> {noformat}
> 06.03.2016 02:03:30.706 *INFO* [TarMK flush thread 
> [/app/repository/segmentstore], active since Sun Mar 06 02:03:29 GMT 2016, 
> previous max duration 8218ms] 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data00233a.tar:
>37ec786e-a9f7-46eb-a3b5-ce5d4777ea01, 
> f36051fe-d8c4-46d1-ac1d-081946389eb6, fae91ff2-8ca6-4ac1-a8d8-d4bd09b7f6a6, 
> 16d87f09-721b-4155-a9c8-b8ecf471bfc3,
>e641f1a3-b323-44e6-aad0-7b894a1efb69, 
> edc9d141-6c05-42c9-a2a2-d7130fd9c826, b602372c-b17a-448a-a8e9-8bdccc64fb82, 
> acc2f032-07ba-46ed-a9c7-d3a05ab53d7a,
>a7323ed2-b2de-4006-ae51-e4f84165a0e4, 
> cb320c70-5ca9-4ed1-a972-e87a6bba9f9b, f45afd7e-5417-42dd-a2f7-4624f74b6c6e, 
> c66f66ef-cdd0-4327-abc6-bf910cb5768d,
>7f925a07-ff56-4613-ac8f-272a0e481926, 
> 4ad044ec-3b2d-4c3e-aeb0-d5f5a04bc23e, 82f1c3aa-2e0c-421c-a033-e4ffcb6002c7, 
> 1387655b-f633-4011-a55c-d9580e40929b,
>c50c94fc-2e8b-4904-a37f-0a33cc001312, 
> 7915e9ce-bb9d-4628-ad6f-e7f2844b2399, e7cd013b-a147-426a-af29-fa025058a08a, 
> f16d43b0-2113-4808-aea6-5910102e5c7d,
> ...
> *31edad2e-e14b-463d-a6af-540bac6009f1*,
> ...,
> *a949519a-8903-44f9-a17e-b6d83fb32186*,
> ...
> {noformat}
> *Note that the system recovered after a restart, so the corruption was transient*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4306) Disable cleanup when compaction is paused

2016-04-25 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-4306:
-
Labels: candidate_oak_1_4  (was: )

> Disable cleanup when compaction is paused
> -
>
> Key: OAK-4306
> URL: https://issues.apache.org/jira/browse/OAK-4306
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>  Labels: candidate_oak_1_4
> Fix For: 1.5.2
>
>
> Some internal longevity tests revealed a significant performance degradation 
> related to the compaction cleanup phase, so it'd be good to disable cleanup 
> while compaction is paused.
> fyi [~mduerig], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4299) oak-run console should connect to repository in read-only mode by default

2016-04-25 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256415#comment-15256415
 ] 

Vikas Saurabh commented on OAK-4299:


[~chetanm], thoughts on this -
bq. This, of course, doesn't align with 'always read-write mode' that's 
operational currently.

> oak-run console should connect to repository in read-only mode by default
> -
>
> Key: OAK-4299
> URL: https://issues.apache.org/jira/browse/OAK-4299
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: production, tools
> Fix For: 1.6
>
>
> With OAK-4182 and OAK-4298, oak-run->console supports connecting to a 
> repository in read only mode. But, as [~edivad] suggested 
> [here|https://issues.apache.org/jira/browse/OAK-4182?focusedCommentId=15245661=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15245661],
>  it might be more useful to connect to repositories in read-only mode by 
> default and require an explicit read-write flag for write operations.
> This, of course, doesn't align with 'always read-write mode' that's 
> operational currently.
> Btw, since 1.5.2 hasn't been released yet, we can do this switch pretty 
> cleanly now. Post 1.5.2 (which would then have {{--read-only}}), we might 
> have to retain that flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4306) Disable cleanup when compaction is paused

2016-04-25 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-4306:


 Summary: Disable cleanup when compaction is paused
 Key: OAK-4306
 URL: https://issues.apache.org/jira/browse/OAK-4306
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 1.5.2


Some internal longevity tests revealed a significant performance degradation 
related to the compaction cleanup phase, so it'd be good to disable cleanup 
while compaction is paused.

fyi [~mduerig], [~frm]
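
A minimal sketch of the proposed guard (names and logging are made up; this is not the actual SegmentMK code): skip the cleanup phase entirely while compaction is paused.
{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: run cleanup only when compaction is not paused.
final class CleanupGuardSketch {

    private static final Logger log = LoggerFactory.getLogger(CleanupGuardSketch.class);

    private volatile boolean pauseCompaction; // hypothetical pause flag

    void maybeCleanup() {
        if (pauseCompaction) {
            log.info("Compaction is paused, skipping cleanup");
            return;
        }
        cleanup();
    }

    private void cleanup() {
        // hypothetical stand-in for the store's cleanup phase
    }
}
{noformat}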



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4305) BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins

2016-04-25 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4305:

Component/s: rdbmk

> BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins
> 
>
> Key: OAK-4305
> URL: https://issues.apache.org/jira/browse/OAK-4305
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Davide Giannella
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> The test 
> {{org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> }} has been failing quite frequently on jenkins.
> See epic for list. 
> Lately seen only on 1.4.
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/850/
> {noformat:title=failed tests from build 850}
> Test results:
> 3 tests failed.
> FAILED:  
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> Error Message:
> Thread hasn't finished in 10s
> Stack Trace:
> java.lang.AssertionError: Thread hasn't finished in 10s
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict(BulkCreateOrUpdateTest.java:300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> FAILED:  
> org.apache.jackrabbit.oak.jcr.ConcurrentAddReferenceTest.addReferences[DocumentNodeStore[RDB]
>  on jdbc:h2:file:./target/oaktest]
> Error Message:

[jira] [Updated] (OAK-4305) BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins

2016-04-25 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-4305:

Priority: Minor  (was: Major)

> BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins
> 
>
> Key: OAK-4305
> URL: https://issues.apache.org/jira/browse/OAK-4305
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Davide Giannella
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.6
>
>
> The test 
> {{org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> }} has been failing quite frequently on jenkins.
> See epic for list. 
> Lately seen only on 1.4.
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/850/
> {noformat:title=failed tests from build 850}
> Test results:
> 3 tests failed.
> FAILED:  
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> Error Message:
> Thread hasn't finished in 10s
> Stack Trace:
> java.lang.AssertionError: Thread hasn't finished in 10s
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict(BulkCreateOrUpdateTest.java:300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> FAILED:  
> org.apache.jackrabbit.oak.jcr.ConcurrentAddReferenceTest.addReferences[DocumentNodeStore[RDB]
>  on jdbc:h2:file:./target/oaktest]
> 

[jira] [Assigned] (OAK-4305) BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella reassigned OAK-4305:
-

Assignee: Julian Reschke

[~reschke], tentatively assigning this to you. I didn't look at the code, but 
you normally take care of RDB.

> BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins
> 
>
> Key: OAK-4305
> URL: https://issues.apache.org/jira/browse/OAK-4305
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Davide Giannella
>Assignee: Julian Reschke
> Fix For: 1.6
>
>
> The test 
> {{org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> }} has been failing quite frequently on jenkins.
> See epic for list. 
> Lately seen only on 1.4.
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/850/
> {noformat:title=failed tests from build 850}
> Test results:
> 3 tests failed.
> FAILED:  
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
>  RDB-Derby(embedded)]
> Error Message:
> Thread hasn't finished in 10s
> Stack Trace:
> java.lang.AssertionError: Thread hasn't finished in 10s
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict(BulkCreateOrUpdateTest.java:300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> FAILED:  
> 

[jira] [Created] (OAK-4305) BulkCreateOrUpdateTest.testConcurrentWithConflict failing on jenkins

2016-04-25 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-4305:
-

 Summary: BulkCreateOrUpdateTest.testConcurrentWithConflict failing 
on jenkins
 Key: OAK-4305
 URL: https://issues.apache.org/jira/browse/OAK-4305
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: Davide Giannella
 Fix For: 1.6


The test 
{{org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture: RDB-Derby(embedded)]}} 
has been failing quite frequently on Jenkins.

See epic for list. 

Lately seen only on 1.4.

https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/850/

{noformat:title=failed tests from build 850}
Test results:
3 tests failed.
FAILED:  
org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict[RDBFixture:
 RDB-Derby(embedded)]

Error Message:
Thread hasn't finished in 10s

Stack Trace:
java.lang.AssertionError: Thread hasn't finished in 10s
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest.testConcurrentWithConflict(BulkCreateOrUpdateTest.java:300)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


FAILED:  
org.apache.jackrabbit.oak.jcr.ConcurrentAddReferenceTest.addReferences[DocumentNodeStore[RDB]
 on jdbc:h2:file:./target/oaktest]

Error Message:
expected:<100> but was:<99>

Stack Trace:
java.lang.AssertionError: expected:<100> but was:<99>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 

[jira] [Updated] (OAK-4281) Rework memory estimation for compaction

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4281:
---
Assignee: Alex Parvulescu

> Rework memory estimation for compaction
> ---
>
> Key: OAK-4281
> URL: https://issues.apache.org/jira/browse/OAK-4281
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: compaction, gc
> Fix For: 1.6
>
>
> As a result of OAK-3348 we need to partially rework the memory estimation 
> step done for deciding whether compaction can run or not. In {{oak-segment}} 
> there was a {{delta}} value derived from the compaction map. As the latter is 
> gone in {{oak-segment-next}} we need to decide whether there is another way 
> to derive this delta or whether we want to drop it entirely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4283) Align GCMonitor API with implementation

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4283:
--

Assignee: Michael Dürig

> Align GCMonitor API with implementation 
> 
>
> Key: OAK-4283
> URL: https://issues.apache.org/jira/browse/OAK-4283
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: api-change, compaction, gc
> Fix For: 1.6
>
>
> The argument taken by {{GCMonitor.compacted}} relates to parameters of the 
> compaction map. The latter has gone with OAK-3348. We need to come up with a 
> way to adjust this API accordingly. Also it might make sense to broaden the 
> scope of {{GCMonitor}} from its initial intent (logging) to a more general 
> one as this is how it is already used e.g. by the {{RefreshOnGC}} 
> implementation and for OAK-4096. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2714) Test failures on Jenkins

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2714:
--
Description: 
This issue is for tracking test failures seen on our Jenkins instance that 
might yet be transient. Once a failure happens too often, we should remove it 
here and create a dedicated issue for it. 

h2. Current issues

|| Test || Builds || Fixture || JVM || Branch ||
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodesSameParent | 427, 428, 758 | DOCUMENT_NS, SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 | |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodes\[DocumentNodeStore\[RDB\] on jdbc:h2:file:./target/oaktest\] | 829, 846 | | | 1.4 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT | 720, 818 | DOCUMENT_RDB | 1.7 | 1.5 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddReferenceTest.addReferences[DocumentNodeStore\[RDB] on jdbc:h2:file:./target/oaktest] | 778, 850 | RDB, NS | 1.7, 1.8 | 1.4 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder\[RDBDocumentStore on jdbc:h2:file:./target/oaktest] | 851 | | | 1.2 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest.testNoSuggestions | 783 | Segment, rdb | 1.7, 1.8 | 1.0 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest.testSuggestSql | 783 | Segment, rdb | 1.7, 1.8 | 1.0 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest.testSuggestXPath | 783 | Segment, rdb | 1.7, 1.8 | 1.0 |
| org.apache.jackrabbit.oak.osgi.OSGiIT | 767, 770 | SEGMENT_MK, DOCUMENT_RDB, DOCUMENT_NS | 1.7, 1.8 | |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163, 656 | SEGMENT_MK, DOCUMENT_RDB, DOCUMENT_NS | 1.5, 1.7 | |
| org.apache.jackrabbit.oak.osgi.OSGiIT.listServices | 851 | | | 1.2 |
| org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateClusterTest.testConcurrentWithConflict\[RDBFixture:RDB-Derby(embedded)] | 797, 823, 841, 842, 843, 847, 848, 850 | | | 1.4, 1.5 |
| org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest | 731, 732, 767 | DOCUMENT_RDB, DOCUMENT_NS | 1.7, 1.8 | |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceIT.testLargeStartStopFiesta | 803, 823, 849 | | | 1.5 |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 361, 608 | DOCUMENT_NS, SEGMENT_MK | 1.7, 1.8 | |
| org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreTest.recoverBranchCommit | 805 | | | 1.0 |
| org.apache.jackrabbit.oak.plugins.document.blob.RDBBlobStoreTest | 673, 674, 786, 787 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.7, 1.8 | |
| org.apache.jackrabbit.oak.plugins.document.blob.RDBBlobStoreTest.testUpdateAndDelete[MyFixture: RDB-Derby(embedded)] | 780, 785, 786, 787 | RDB, NS | 1.7, 1.8 | 1.5, 1.4 |
| org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest | 648, 679 | SEGMENT_MK, DOCUMENT_NS | 1.8 | |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 | |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexDefinitionTest | 770 | DOCUMENT_RDB, DOCUMENT_NS, SEGMENT_MK | 1.7, 1.8 | |
| org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest | 490, 623, 624, 656, 679 | DOCUMENT_RDB | 1.7 | |
| org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexHookIT.testPropertyAddition | 775, 782, 783, 789, 821, 832 | Segment, rdb | 1.7, 1.8 | 1.5, 1.2, 1.0 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrIndexQueryTestIT.testNativeMLTQuery | 783 | Segment, rdb | 1.7, 1.8 | 1.0 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrIndexQueryTestIT.testNativeMLTQueryWithStream | 783 | Segment, rdb | 1.7, 1.8 | 1.0 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151, 490, 656, 679 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.5, 1.7, 1.8 | |
| org.apache.jackrabbit.oak.plugins.index.solr.server.EmbeddedSolrServerProviderTest.testEmbeddedSolrServerInitialization | 490, 656, 679 | DOCUMENT_RDB | 1.7 | |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest | 663 | SEGMENT_MK | 1.7 | |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest.testSynonymsFileCreation | 627 | DOCUMENT_RDB | 1.7 | |
| org.apache.jackrabbit.oak.plugins.segment.ExternalBlobIT.testDataStoreBlob | 841 | | | 1.5 |
| org.apache.jackrabbit.oak.plugins.segment.ExternalBlobIT.testNullBlobId | 841 | | | 1.5 |
| org.apache.jackrabbit.oak.plugins.segment.ExternalBlobIT.testSize | 841 | | | 1.5 |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite\[usePersistedMap: false] | 841 | | | 1.5 |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite\[usePersistedMap: true] | 841 | | | 1.5 |
| org.apache.jackrabbit.oak.plugins.segment.standby.BrokenNetworkTest | 

[jira] [Assigned] (OAK-4282) Make the number of retained gc generations configurable

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4282:
--

Assignee: Michael Dürig

> Make the number of retained gc generations configurable
> --
>
> Key: OAK-4282
> URL: https://issues.apache.org/jira/browse/OAK-4282
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: cleanup, gc
> Fix For: 1.6
>
>
> The number of retained gc generations ("brutal" cleanup strategy) is 
> currently hard coded to 2. I think we need to make this configurable. 
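For illustration, a minimal sketch of what a configurable retention count could look like; the class and method names are assumptions made for this sketch, not the actual Oak API:

{code}
// Sketch only: a gc options holder with a configurable number of retained generations.
public class GcOptions {

    // The current behaviour corresponds to a hard coded value of 2.
    private int retainedGenerations = 2;

    public GcOptions setRetainedGenerations(int retainedGenerations) {
        if (retainedGenerations < 1) {
            throw new IllegalArgumentException("retainedGenerations must be >= 1");
        }
        this.retainedGenerations = retainedGenerations;
        return this;
    }

    // A segment survives "brutal" cleanup if its generation is recent enough.
    public boolean isRetained(int segmentGeneration, int currentGeneration) {
        return currentGeneration - segmentGeneration < retainedGenerations;
    }
}
{code}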



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4276) Refactor / rework compaction strategies

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4276:
--

Assignee: Michael Dürig

> Refactor / rework compaction strategies 
> 
>
> Key: OAK-4276
> URL: https://issues.apache.org/jira/browse/OAK-4276
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc, technical_debt
> Fix For: 1.6
>
>
> After the changes from OAK-3348 many if not all of the options in 
> {{CompactionStrategy}} do not apply any more. Specifically the new "brutal" 
> strategy is hard coded to always be in effect. We need to:
> * Decide which cleanup methods we want to keep supporting,
> * decide which options to expose through CompactionStrategy. E.g. 
> {{cloneBinaries}} was so far always set to {{false}} and I would opt to 
> remove the option, as implementing it might be tricky with the de-duplication 
> cache based compaction we now have,
> * optimally refactor {{CompactionStrategy}} into a proper abstract data type. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4293) Refactor / rework compaction gain estimation

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4293:
---
Assignee: Alex Parvulescu

> Refactor / rework compaction gain estimation 
> -
>
> Key: OAK-4293
> URL: https://issues.apache.org/jira/browse/OAK-4293
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: gc
> Fix For: 1.6
>
>
> I think we have to take another look at {{CompactionGainEstimate}} and see 
> whether we can come up with a more efficient way to estimate the compaction gain. 
> The current implementation is expensive wrt. IO, CPU and cache coherence. If 
> we want to keep an estimation step we IMO need to come up with a cheap way (at 
> least 2 orders of magnitude cheaper than compaction). Otherwise I would 
> actually propose to remove the current estimation approach entirely. 
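Purely as an illustration of how cheap such a step could be made, a size-only heuristic along these lines stays IO-light; {{approxLiveBytes}} and all names here are assumptions for the sketch, not the existing {{CompactionGainEstimate}} logic:

{code}
import java.io.File;

// Sketch only: estimate the reclaimable fraction from tar file sizes alone,
// assuming the caller can supply an approximation of the live data size.
public final class CheapGainEstimate {

    private CheapGainEstimate() {}

    // Returns the estimated reclaimable fraction in the range [0, 1].
    public static double estimateGain(File segmentStoreDir, long approxLiveBytes) {
        long total = 0;
        File[] files = segmentStoreDir.listFiles();
        if (files != null) {
            for (File file : files) {
                if (file.getName().endsWith(".tar")) {
                    total += file.length();
                }
            }
        }
        return total <= 0 ? 0.0 : Math.max(0.0, (total - approxLiveBytes) / (double) total);
    }
}
{code}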



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4285) Align cleanup of segment id tables with the new cleanup strategy

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4285:
---
Assignee: Alex Parvulescu

> Align cleanup of segment id tables with the new cleanup strategy 
> -
>
> Key: OAK-4285
> URL: https://issues.apache.org/jira/browse/OAK-4285
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: cleanup, gc
> Fix For: 1.6
>
>
> We need to align cleanup of the segment id tables with the new "brutal" 
> strategy introduced with OAK-3348. That is, we need to remove those segment 
> ids from the segment id tables whose segments have actually been gc'ed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4280) Compaction cannot be cancelled

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4280:
---
Assignee: Alex Parvulescu

> Compaction cannot be cancelled 
> ---
>
> Key: OAK-4280
> URL: https://issues.apache.org/jira/browse/OAK-4280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: compaction, gc
> Fix For: 1.6
>
>
> As a result of the de-duplication cache based online compaction approach from 
> OAK-3348 compaction cannot be cancelled any more (in the sense of OAK-3290). 
> As I assume we still need this feature we should look into ways to 
> re-implement it on top of the current approach. 
> Also I figure implementing a [partial compaction | 
> https://issues.apache.org/jira/browse/OAK-4122?focusedCommentId=15223924=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15223924]
>  approach on top of a commit scheduler (OAK-4122) would need a feature of 
> this sort. 
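As a sketch of the underlying technique (cooperative cancellation) rather than the actual compaction code, the copy loop would poll a caller-supplied flag between bounded units of work:

{code}
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: a compaction-like loop that can be cancelled between units of work.
public class CancellableTask {

    public boolean run(Iterable<String> workItems, AtomicBoolean cancelled) {
        for (String item : workItems) {
            if (cancelled.get()) {
                return false;          // give up early; the caller may retry later
            }
            process(item);             // one bounded unit of work (e.g. one node)
        }
        return true;                   // ran to completion
    }

    private void process(String item) {
        // placeholder for copying/compacting one item
    }

    public static void main(String[] args) {
        AtomicBoolean cancel = new AtomicBoolean(false);
        boolean done = new CancellableTask().run(Arrays.asList("a", "b", "c"), cancel);
        System.out.println("completed: " + done);
    }
}
{code}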



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4287) Disable / remove SegmentBufferWriter#checkGCGen

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4287:
--

Assignee: Michael Dürig

> Disable / remove SegmentBufferWriter#checkGCGen
> ---
>
> Key: OAK-4287
> URL: https://issues.apache.org/jira/browse/OAK-4287
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: assertion, compaction, gc
> Fix For: 1.6
>
>
> {{SegmentBufferWriter#checkGCGen}} is an after-the-fact check for back 
> references (see OAK-3348), logging a warning if it detects any. As this check 
> loads the segment it checks the reference for, it is somewhat expensive. We 
> should either come up with a cheaper way for this check or remove it (at 
> least disable it by default). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4295) Proper versioning of storage format

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4295:
---
Assignee: Francesco Mari

> Proper versioning of storage format
> ---
>
> Key: OAK-4295
> URL: https://issues.apache.org/jira/browse/OAK-4295
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: resilience, technical_debt
> Fix For: 1.6
>
>
> OAK-3348 introduced changes to the segment format (which has been bumped to 
> 12 with OAK-4232). However it also changes the format of the tar files (the 
> gc generation of the segments is written to the index file) which would also 
> require proper versioning.
> In an offline discussion [~frm] brought up the idea of adding a manifest file 
> to the store that would specify the format versions of the individual 
> components. 
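A minimal sketch of such a manifest, stored as a plain properties file; the file name and keys below are assumptions for the sketch, not a decided format:

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch only: read/write a manifest listing the format versions of the individual components.
public final class StoreManifest {

    private StoreManifest() {}

    public static void write(File storeDir, int segmentVersion, int tarIndexVersion)
            throws IOException {
        Properties props = new Properties();
        props.setProperty("segment.version", Integer.toString(segmentVersion));
        props.setProperty("tar.index.version", Integer.toString(tarIndexVersion));
        try (FileOutputStream out = new FileOutputStream(new File(storeDir, "manifest"))) {
            props.store(out, "segment store format versions");
        }
    }

    public static Properties read(File storeDir) throws IOException {
        Properties props = new Properties();
        File manifest = new File(storeDir, "manifest");
        if (manifest.exists()) {
            try (FileInputStream in = new FileInputStream(manifest)) {
                props.load(in);
            }
        }
        return props;
    }
}
{code}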



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4286) Rework failing tests in CompactionAndCleanupIT

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4286:
--

Assignee: Michael Dürig

> Rework failing tests in CompactionAndCleanupIT
> --
>
> Key: OAK-4286
> URL: https://issues.apache.org/jira/browse/OAK-4286
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: cleanup, compaction, gc, tests
> Fix For: 1.6
>
>
> The fix for OAK-3348 caused some of the tests in {{CompactionAndCleanupIT}} 
> to fail and I set them to ignored for the time being. We need to check whether 
> the test expectations still hold and rework them as required. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4288) TarReader.calculateForwardReferences only used by oak-run graph tool

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4288:
--

Assignee: Michael Dürig

> TarReader.calculateForwardReferences only used by oak-run graph tool
> 
>
> Key: OAK-4288
> URL: https://issues.apache.org/jira/browse/OAK-4288
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Minor
>  Labels: cleanup, gc, tooling
> Fix For: 1.6
>
>
> {{TarReader.calculateForwardReferences}} is not used for production but only 
> for tooling, so it would be good if we could remove that method from the 
> production code and put it into a tooling-specific module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4289) Remove the gc generation from the segment meta data

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4289:
--

Assignee: Michael Dürig

> Remove the gc generation from the segment meta data
> ---
>
> Key: OAK-4289
> URL: https://issues.apache.org/jira/browse/OAK-4289
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: run, segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Minor
>  Labels: compaction, gc, tooling
> Fix For: 1.6
>
>
> The segment meta info (OAK-3550) still contains the segment's gc generation. 
> Since with OAK-3348 the gc generation gets written to the segment header 
> directly, we should remove it from the segment meta info and update {{oak-run 
> graph}} accordingly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4290) Update segment parser to work with the new segment format

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-4290:
--

Assignee: Michael Dürig

> Update segment parser to work with the new segment format
> -
>
> Key: OAK-4290
> URL: https://issues.apache.org/jira/browse/OAK-4290
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: run, segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Minor
>  Labels: gc, tooling
> Fix For: 1.6
>
>
> {{SegmentParser}} does not correctly handle the record id added to the 
> segment node states: for those segment node states containing an actual record 
> id, the segment parser should process it and call the respective callbacks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4102) Break cyclic dependency of FileStore and SegmentTracker

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4102:
---
Assignee: Francesco Mari

> Break cyclic dependency of FileStore and SegmentTracker
> ---
>
> Key: OAK-4102
> URL: https://issues.apache.org/jira/browse/OAK-4102
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: technical_debt
> Fix For: 1.6
>
>
> {{SegmentTracker}} and {{FileStore}} are mutually dependent. 
> This is problematic and makes initialising instances of these classes 
> difficult: the {{FileStore}} constructor e.g. passes a not fully initialised 
> instance to the {{SegmentTracker}}, which in turn writes an initial node 
> state to the store. Notably using the not fully initialised {{FileStore}} 
> instance!
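As a generic sketch of one way to break such a constructor cycle (illustrative names only, not the actual classes), the back reference can be wired after both objects are fully constructed:

{code}
// Sketch only: break a constructor cycle by injecting the back reference late.
public class CycleExample {

    static class Tracker {
        private Store store;                // set after construction, not in the constructor

        void bind(Store store) {            // called once the store is fully built
            this.store = store;
        }

        void writeInitialState() {
            store.writeSegment("initial");  // safe: the store is complete by now
        }
    }

    static class Store {
        private final Tracker tracker;

        Store(Tracker tracker) {
            this.tracker = tracker;         // no call back into a half-built store
        }

        void writeSegment(String data) {
            System.out.println("writing " + data);
        }
    }

    public static void main(String[] args) {
        Tracker tracker = new Tracker();
        Store store = new Store(tracker);
        tracker.bind(store);                // explicit wiring step
        tracker.writeInitialState();
    }
}
{code}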



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4226) Improve testing of DefaultSyncContext

2016-04-25 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256308#comment-15256308
 ] 

angela commented on OAK-4226:
-

Committed revision 1740837: some improvements related to the expiration tests 
that seemed prone to failure.

> Improve testing of DefaultSyncContext
> -
>
> Key: OAK-4226
> URL: https://issues.apache.org/jira/browse/OAK-4226
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: auth-external
>Reporter: angela
>Assignee: angela
> Fix For: 1.5.2
>
>
> - additional test-coverage
> - more tests for edge cases and special scenarios (see for example OAK-4224)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4278) Fix backup and restore

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4278:
---
Assignee: Alex Parvulescu

> Fix backup and restore
> --
>
> Key: OAK-4278
> URL: https://issues.apache.org/jira/browse/OAK-4278
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
>  Labels: backup, restore, technical_debt
> Fix For: 1.6
>
>
> {{FileStoreBackup}} and {{FileStoreRestore}} are currently broken as a side 
> effect of the fix from OAK-3348. ({{FileStoreBackupTest}} is currently being 
> skipped). 
> In {{oak-segment}} backup and restore functionality relied on the 
> {{Compactor}} class. The latter is gone in {{oak-segment-next}} as it is not 
> needed for online compaction any more. 
> Instead of sharing functionality from compaction directly, I think backup and 
> restore should come up with their own implementation that can be individually 
> tweaked for the task. If there are commonalities with offline compaction, those 
> can still be shared through a common base class. See OAK-4279.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4292) Document Oak segment next

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4292:
---
Labels: documentation gc  (was: docuentation gc)

> Document Oak segment next
> -
>
> Key: OAK-4292
> URL: https://issues.apache.org/jira/browse/OAK-4292
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: doc, segment-next
>Reporter: Michael Dürig
>  Labels: documentation, gc
> Fix For: 1.6
>
>
> Document Oak Segment Next. Specifically:
> * New and changed configuration and monitoring options
> * Changes in gc (OAK-3348 et al.)
> * Changes in segment / tar format (OAK-3348)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4245) Decide on a final name for oak-segment-next

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4245:
---
Fix Version/s: 1.6

> Decide on a final name for oak-segment-next
> ---
>
> Key: OAK-4245
> URL: https://issues.apache.org/jira/browse/OAK-4245
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
> Fix For: 1.6, 1.5.2
>
>
> We should come up with a definitive and final name for {{oak-segment-next}}. 
> Making this a blocker to avoid releasing with the current WIP name. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4304) SegmentParserTest leaks temporary folders

2016-04-25 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4304.
-
Resolution: Fixed

Fixed in r1740830.

> SegmentParserTest leaks temporary folders
> -
>
> Key: OAK-4304
> URL: https://issues.apache.org/jira/browse/OAK-4304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: 1.6
>
>
> {{SegmentParserTest}} leaks temporary folders. The handling of temporary 
> folders is better done with the {{TemporaryFolder}} JUnit rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3690) Decouple SegmentBufferWriter from SegmentStore

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3690:
---
Fix Version/s: 1.6

> Decouple SegmentBufferWriter from SegmentStore
> --
>
> Key: OAK-3690
> URL: https://issues.apache.org/jira/browse/OAK-3690
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: technical_debt
> Fix For: 1.6
>
>
> Currently {{SegmentBufferWriter.flush()}} directly calls 
> {{SegmentStore.writeSegment()}} once the current segment does not have enough 
> space for the next record. We should try to cut this dependency as 
> {{SegmentBufferWriter}} should only be concerned with providing buffers for 
> segments. Actually writing these to the store should be handled by a higher 
> level component. 
> A number of deadlocks we have seen (e.g. OAK-2560, OAK-3179, OAK-3264) are one 
> manifestation of this troublesome dependency. 
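A generic sketch of the proposed cut (hypothetical names, not the Oak classes): the writer only fills buffers and hands completed ones to a callback supplied by a higher-level component, which then decides how to persist them:

{code}
import java.util.Arrays;

// Sketch only: the buffer writer no longer knows about the store; it reports full buffers.
public class BufferWriterSketch {

    interface FlushListener {
        void onFlush(byte[] segment);       // a higher-level component persists the segment
    }

    static class BufferWriter {
        private final FlushListener listener;
        private final byte[] buffer = new byte[256 * 1024];
        private int position = 0;

        BufferWriter(FlushListener listener) {
            this.listener = listener;
        }

        void write(byte[] record) {
            if (position + record.length > buffer.length) {
                listener.onFlush(Arrays.copyOf(buffer, position));
                position = 0;               // start a fresh buffer
            }
            System.arraycopy(record, 0, buffer, position, record.length);
            position += record.length;
        }
    }

    public static void main(String[] args) {
        BufferWriter writer = new BufferWriter(
                segment -> System.out.println("flushing " + segment.length + " bytes"));
        writer.write(new byte[100]);
    }
}
{code}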



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3695) Expose ratio between waste and real data

2016-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256272#comment-15256272
 ] 

Michael Dürig commented on OAK-3695:


Depending on what we come up with, OAK-4281 might be helpful here. 

> Expose ratio between waste and real data
> 
>
> Key: OAK-3695
> URL: https://issues.apache.org/jira/browse/OAK-3695
> Project: Jackrabbit Oak
>  Issue Type: Story
>  Components: segment-next
>Reporter: Valentin Olteanu
>  Labels: gc
> Fix For: 1.6
>
>
> As a user, I want to know the ratio (or precise absolute values) between 
> waste and real data on TarMK, so that I can decide if Revision GC needs to be 
> run. The measurement has to be done on a running repository and without 
> impacting the performance.
> This would also help measure the efficiency of Revision GC and see the effect 
> of improvements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3695) Expose ratio between waste and real data

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3695:
---
Fix Version/s: 1.6

> Expose ratio between waste and real data
> 
>
> Key: OAK-3695
> URL: https://issues.apache.org/jira/browse/OAK-3695
> Project: Jackrabbit Oak
>  Issue Type: Story
>  Components: segment-next
>Reporter: Valentin Olteanu
>  Labels: gc
> Fix For: 1.6
>
>
> As a user, I want to know the ratio (or precise absolute values) between 
> waste and real data on TarMK, so that I can decide if Revision GC needs to be 
> run. The measurement has to be done on a running repository and without 
> impacting the performance.
> This would also help measure the efficiency of Revision GC and see the effect 
> of improvements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4259) Implement fixtures for running against oak-segment and/or oak-segment-next

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4259:
---
Priority: Blocker  (was: Major)

> Implement fixtures for running against oak-segment and/or oak-segment-next
> 
>
> Key: OAK-4259
> URL: https://issues.apache.org/jira/browse/OAK-4259
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Priority: Blocker
>  Labels: testing
> Fix For: 1.6, 1.5.2
>
>
> We need fixtures to run UTs / ITs against either or both segment 
> implementations {{oak-segment}} and {{oak-segment-next}}. 
> Ideally we can enable them individually through e.g. environment variables. A 
> standard build would run against {{oak-segment}} so as not to affect others. 
> {{oak-segment-next}} could be enabled on request locally or for the CI. 
> Once we deprecate {{oak-segment}} we would switch the default fixture to 
> {{oak-segment-next}}. 
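A generic sketch of how such a switch often looks with JUnit's {{Parameterized}} runner; the {{fixtures}} property name and values are assumptions for this sketch:

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Sketch only: pick the segment fixtures to run from a system property.
@RunWith(Parameterized.class)
public class FixtureSelectionTest {

    private final String fixture;

    public FixtureSelectionTest(String fixture) {
        this.fixture = fixture;
    }

    @Parameters(name = "{0}")
    public static Collection<Object[]> fixtures() {
        // e.g. -Dfixtures=oak-segment,oak-segment-next; the default keeps the old behaviour
        String selected = System.getProperty("fixtures", "oak-segment");
        List<Object[]> result = new ArrayList<Object[]>();
        for (String name : selected.split(",")) {
            result.add(new Object[] {name.trim()});
        }
        return result;
    }

    @Test
    public void runsAgainstSelectedFixture() {
        System.out.println("running against " + fixture);
    }
}
{code}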



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3869) Refactor RecordWriter.write to always return a RecordId

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-3869.

   Resolution: Fixed
Fix Version/s: 1.6

Fixed as part of the fix for OAK-3348

> Refactor RecordWriter.write to always return a RecordId
> ---
>
> Key: OAK-3869
> URL: https://issues.apache.org/jira/browse/OAK-3869
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: technical_debt
> Fix For: 1.6
>
>
> I think it would be cleaner if {{RecordWriter.write}} always returned a 
> {{RecordId}} instead of depending on its type parametrisation, and I would like 
> to refactor it in that respect. 
> This is also a pre-requisite for my work on OAK-3348 and might also be for 
> OAK-3864. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4304) SegmentParserTest leaks temporary folders

2016-04-25 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-4304:
---

 Summary: SegmentParserTest leaks temporary folders
 Key: OAK-4304
 URL: https://issues.apache.org/jira/browse/OAK-4304
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segment-next
Reporter: Francesco Mari
Assignee: Francesco Mari
Priority: Minor
 Fix For: 1.6


{{SegmentParserTest}} leaks temporary folders. The handling of temporary 
folders is better done with the {{TemporaryFolder}} JUnit rule.
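For reference, the rule-based pattern looks like this (standard JUnit shown as a generic example, not the actual {{SegmentParserTest}} change):

{code}
import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TemporaryFolderExampleTest {

    // JUnit creates the folder before each test and deletes it afterwards,
    // so nothing is leaked even when a test fails.
    @Rule
    public final TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void usesManagedDirectory() throws Exception {
        File storeDir = folder.newFolder("segmentstore");
        assertTrue(storeDir.isDirectory());
    }
}
{code}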



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3696) Improve SegmentMK resilience

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3696:
---
Fix Version/s: 1.6

> Improve SegmentMK resilience
> 
>
> Key: OAK-3696
> URL: https://issues.apache.org/jira/browse/OAK-3696
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: segment-next
>Reporter: Michael Marth
>Assignee: Michael Dürig
>  Labels: resilience
> Fix For: 1.6
>
>
> Epic for collecting SegmentMK resilience improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4259) Implement fixtures for running against oak-segment and/or oak-segment-next

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4259:
---
Fix Version/s: 1.5.2

> Implement fixtures for running against oak-segment and/or oak-segment-next
> 
>
> Key: OAK-4259
> URL: https://issues.apache.org/jira/browse/OAK-4259
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: testing
> Fix For: 1.6, 1.5.2
>
>
> We need fixtures to run UTs / ITs against either or both segment 
> implementations {{oak-segment}} and {{oak-segment-next}}. 
> Ideally we can enable them individually through e.g. environment variables. A 
> standard build would run against {{oak-segment}} so as not to affect others. 
> {{oak-segment-next}} could be enabled on request locally or for the CI. 
> Once we deprecate {{oak-segment}} we would switch the default fixture to 
> {{oak-segment-next}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4259) Implement fixtures for running against oak-segment and/or oak-segment-next

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4259:
---
Fix Version/s: 1.6

> Implement fixtures for running against oak-segment and/or oak-segment-next
> 
>
> Key: OAK-4259
> URL: https://issues.apache.org/jira/browse/OAK-4259
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: testing
> Fix For: 1.6, 1.5.2
>
>
> We need fixtures to run UTs / ITs against either or both segment 
> implementations {{oak-segment}} and {{oak-segment-next}}. 
> Ideally we can enable them individually through e.g. environment variables. A 
> standard build would run against {{oak-segment}} so as not to affect others. 
> {{oak-segment-next}} could be enabled on request locally or for the CI. 
> Once we deprecate {{oak-segment}} we would switch the default fixture to 
> {{oak-segment-next}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2795) TarMK revision cleanup message too verbose.

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-2795.

   Resolution: Duplicate
Fix Version/s: (was: 1.6)

Resolving as duplicate. OAK-4165 should take care of this.

> TarMK revision cleanup message too verbose.
> ---
>
> Key: OAK-2795
> URL: https://issues.apache.org/jira/browse/OAK-2795
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-next
>Reporter: Zygmunt Wiercioch
>Priority: Minor
>  Labels: gc
>
> The segment GC message can be thousands of lines long on a system which 
> experienced a lot of activity.  For example, in my test, I had:
> {noformat}
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data8a.tar: 
>  4b6882e7-babe-4854-a966-c3a630c338c6, 506a2eb3-92a6-4b83-a320-5cc5ac304302, 
> f179e4b1-acec-4a5d-a686-bf5e97828563, 8c6b3a4d-c327-4701-affb-fcd055c4aa67, 
> {noformat}
> followed by approximately 40,000 more lines.
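Purely as an illustration of the kind of condensing OAK-4165 is after (not the actual change), the id list can be demoted to debug level while a one-line summary stays at info:

{code}
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: log a short summary by default, keep the full id list behind the debug level.
public class CleanupLogging {

    private static final Logger log = LoggerFactory.getLogger(CleanupLogging.class);

    void logCleaned(String tarFile, List<String> segmentIds) {
        log.info("Cleaned {} segments from {}", segmentIds.size(), tarFile);
        if (log.isDebugEnabled()) {
            log.debug("Cleaned segment ids from {}: {}", tarFile, segmentIds);
        }
    }
}
{code}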



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2795) TarMK revision cleanup message too verbose.

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2795:
---
Fix Version/s: 1.6

> TarMK revision cleanup message too verbose.
> ---
>
> Key: OAK-2795
> URL: https://issues.apache.org/jira/browse/OAK-2795
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-next
>Reporter: Zygmunt Wiercioch
>Priority: Minor
>  Labels: gc
> Fix For: 1.6
>
>
> The segment GC message can be thousands of lines long on a system which 
> experienced a lot of activity.  For example, in my test, I had:
> {noformat}
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data8a.tar: 
>  4b6882e7-babe-4854-a966-c3a630c338c6, 506a2eb3-92a6-4b83-a320-5cc5ac304302, 
> f179e4b1-acec-4a5d-a686-bf5e97828563, 8c6b3a4d-c327-4701-affb-fcd055c4aa67, 
> {noformat}
> followed by approximately 40,000 more lines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3349) Partial compaction

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3349:
---
Fix Version/s: 1.6

> Partial compaction
> --
>
> Key: OAK-3349
> URL: https://issues.apache.org/jira/browse/OAK-3349
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.6
>
>
> On big repositories compaction can take quite a while to run as it needs to 
> create a full deep copy of the current root node state. For such cases it 
> could be beneficial if we could partially compact the repository thus 
> splitting full compaction over multiple cycles. 
> Partial compaction would run compaction on a sub-tree just like we now run it 
> on the full tree. Afterwards it would create a new root node state by 
> referencing the previous root node state replacing said sub-tree with the 
> compacted one. 
> Todo: Assess feasibility and impact, implement a prototype.
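A rough sketch of that splicing step using Oak's {{NodeState}}/{{NodeBuilder}} API; the {{compact}} call below is a placeholder for whatever performs the sub-tree compaction, so treat this as an illustration of the idea rather than a working implementation:

{code}
import org.apache.jackrabbit.oak.spi.state.NodeBuilder;
import org.apache.jackrabbit.oak.spi.state.NodeState;

// Sketch only: compact a single sub-tree and splice it back under the current root.
public class PartialCompactionSketch {

    NodeState compactSubTree(NodeState currentRoot, String childName) {
        // Deep-copy (compact) just this child; everything else keeps referencing the old state.
        NodeState compactedChild = compact(currentRoot.getChildNode(childName));

        NodeBuilder builder = currentRoot.builder();
        builder.setChildNode(childName, compactedChild);
        return builder.getNodeState();
    }

    private NodeState compact(NodeState subTree) {
        // Placeholder: a real implementation would rewrite the sub-tree into new segments.
        return subTree;
    }
}
{code}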



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2833) Refactor TarMK

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2833:
---
Fix Version/s: 1.6

> Refactor TarMK
> --
>
> Key: OAK-2833
> URL: https://issues.apache.org/jira/browse/OAK-2833
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: technical_debt
> Fix For: 1.6
>
>
> Container issue for refactoring the TarMK to make it more testable, 
> maintainable, extensible and less entangled. 
> For example, the segment format should be readable and writeable through 
> standalone means so that tests, tools and production code can share this code. 
> Currently there is a lot of code duplication involved here. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2849) Improve revision gc on SegmentMK

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2849:
---
Fix Version/s: 1.6

> Improve revision gc on SegmentMK
> 
>
> Key: OAK-2849
> URL: https://issues.apache.org/jira/browse/OAK-2849
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.6
>
> Attachments: SegmentCompactionIT-conflicts.png
>
>
> This is a container issue for the ongoing effort to improve revision gc of 
> the SegmentMK. 
> I'm exploring 
> * ways to make the reference graph as exact as possible and necessary: it 
> should not contain segments that are not referenceable any more but must 
> contain all segments that are referenceable. 
> * ways to segregate the reference graph, reducing dependencies between certain 
> sets of segments as much as possible. 
> * reducing the number of in-memory references and their impact on gc as much 
> as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3036) DocumentRootBuilder: revisit update.limit default

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3036:
---
Fix Version/s: 1.6

> DocumentRootBuilder: revisit update.limit default
> -
>
> Key: OAK-3036
> URL: https://issues.apache.org/jira/browse/OAK-3036
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk, rdbmk, segment-next
>Reporter: Julian Reschke
>  Labels: performance
> Fix For: 1.6
>
>
> update.limit decides whether a commit is persisted using a branch or not. The 
> default is 1 (and can be overridden using the system property).
> A typical call pattern in JCR is to persist batches of ~1024 nodes. These 
> translate to more than 1 changes (see PackageImportIT), due to JCR 
> properties, and also indexing commit hooks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2896) Putting many elements into a map results in many small segments.

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2896:
---
Fix Version/s: 1.6

> Putting many elements into a map results in many small segments. 
> -
>
> Key: OAK-2896
> URL: https://issues.apache.org/jira/browse/OAK-2896
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Critical
>  Labels: performance
> Fix For: 1.6
>
> Attachments: OAK-2896.png, OAK-2896.xlsx, size-dist.png
>
>
> There is an issue with how the HAMT implementation 
> ({{SegmentWriter.writeMap()}}) interacts with the 256 segment references limit 
> when putting many entries into the map: this limit gets regularly reached 
> once the map contains about 200k entries. At that point segments get 
> prematurely flushed, resulting in more segments, thus more references and thus 
> even smaller segments. It is common for segments to be as small as 7k with a 
> tar file containing up to 35k segments. This is problematic as at this point 
> handling of the segment graph becomes expensive, both memory- and CPU-wise. I 
> have seen persisted segment graphs as big as 35M where the usual size is a 
> couple of KB. 
> As the HAMT map is used for storing children of a node this might have an 
> adverse effect on nodes with many child nodes. 
> The following code can be used to reproduce the issue: 
> {code}
> SegmentWriter writer = new SegmentWriter(segmentStore, getTracker(), V_11);
> MapRecord baseMap = null;
> for (;;) {
>     Map<String, RecordId> map = newHashMap();
>     for (int k = 0; k < 1000; k++) {
>         RecordId stringId = writer.writeString(String.valueOf(rnd.nextLong()));
>         map.put(String.valueOf(rnd.nextLong()), stringId);
>     }
>     Stopwatch w = Stopwatch.createStarted();
>     baseMap = writer.writeMap(baseMap, map);
>     System.out.println(baseMap.size() + " " + w.elapsed());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (OAK-4226) Improve testing of DefaultSyncContext

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reopened OAK-4226:
-

[~frm] experienced intermittent test-failures for 
DefaultSyncContextTest#testIsExpiredSyncedGroup and #testIsExpiredSyncedUser.

Reopening for investigation.

> Improve testing of DefaultSyncContext
> -
>
> Key: OAK-4226
> URL: https://issues.apache.org/jira/browse/OAK-4226
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: auth-external
>Reporter: angela
>Assignee: angela
> Fix For: 1.5.2
>
>
> - additional test-coverage
> - more tests for edge cases and special scenarios (see for example OAK-4224)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4295) Proper versioning of storage format

2016-04-25 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256255#comment-15256255
 ] 

Julian Sedding commented on OAK-4295:
-

Thanks for the explanation, this fully addresses my concerns.

+1 from my side.

> Proper versioning of storage format
> ---
>
> Key: OAK-4295
> URL: https://issues.apache.org/jira/browse/OAK-4295
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: resilience, technical_debt
> Fix For: 1.6
>
>
> OAK-3348 introduced changes to the segment format (which has been bumped to 
> 12 with OAK-4232). However it also changes the format of the tar files (the 
> gc generation of the segments is written to the index file) which would also 
> require proper versioning.
> In an offline discussion [~frm] brought up the idea of adding a manifest file 
> to the store that would specify the format versions of the individual 
> components. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4211) FileAccess.Mapped leaks file channels

2016-04-25 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-4211:

Component/s: segmentmk

> FileAccess.Mapped leaks file channels 
> --
>
> Key: OAK-4211
> URL: https://issues.apache.org/jira/browse/OAK-4211
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next, segmentmk
>Reporter: Michael Dürig
>Assignee: Francesco Mari
> Fix For: 1.6, 1.5.2
>
> Attachments: OAK-4211-02.patch, OAK-4211-03.patch, OAK-4211.patch
>
>
> {{FileAccess.Mapped}} seems to leak {{FileChannel}} instances. The file 
> channels are acquired in the constructor but never released. 
> FYI [~alex.parvulescu], [~frm]
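The usual shape of such a fix, as a generic sketch rather than the actual {{FileAccess.Mapped}} code: keep a handle to what was opened and release it in {{close()}}:

{code}
import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch only: the file opened for mapping is retained and closed, not leaked.
public class MappedAccess implements Closeable {

    private final RandomAccessFile file;
    private final MappedByteBuffer buffer;

    public MappedAccess(File f) throws IOException {
        this.file = new RandomAccessFile(f, "r");
        FileChannel channel = file.getChannel();
        this.buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    }

    public MappedByteBuffer buffer() {
        return buffer;
    }

    @Override
    public void close() throws IOException {
        file.close();   // also closes the channel; the existing mapping stays readable
    }
}
{code}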



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4211) FileAccess.Mapped leaks file channels

2016-04-25 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4211.
-
   Resolution: Fixed
Fix Version/s: 1.5.2

Fixed in r1740818.

> FileAccess.Mapped leaks file channels 
> --
>
> Key: OAK-4211
> URL: https://issues.apache.org/jira/browse/OAK-4211
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next, segmentmk
>Reporter: Michael Dürig
>Assignee: Francesco Mari
> Fix For: 1.6, 1.5.2
>
> Attachments: OAK-4211-02.patch, OAK-4211-03.patch, OAK-4211.patch
>
>
> {{FileAccess.Mapped}} seems to leak {{FileChannel}} instances. The file 
> channels are acquired in the constructor but never released. 
> FYI [~alex.parvulescu], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4295) Proper versioning of storage format

2016-04-25 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256248#comment-15256248
 ] 

Francesco Mari commented on OAK-4295:
-

The segment version will in any case be replicated in the segments themselves. The 
system will not rely solely on the manifest, given that it would be so easy to 
modify manually. Consistency checks will be performed as usual.

> Proper versioning of storage format
> ---
>
> Key: OAK-4295
> URL: https://issues.apache.org/jira/browse/OAK-4295
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: resilience, technical_debt
> Fix For: 1.6
>
>
> OAK-3348 introduced changes to the segment format (which has been bumped to 
> 12 with OAK-4232). However it also changes the format of the tar files (the 
> gc generation of the segments is written to the index file) which would also 
> require proper versioning.
> In an offline discussion [~frm] brought up the idea of adding a manifest file 
> to the store that would specify the format versions of the individual 
> components. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4295) Proper versioning of storage format

2016-04-25 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256222#comment-15256222
 ] 

Julian Sedding commented on OAK-4295:
-

If the intention is to create an easy way to find out what exact persistence 
version is being used, +1.

However, we should make sure that the implementation does not rely on this 
information, as changing it could then lead to an unreadable repository. For 
robustness we should definitely keep the version in the tar file header and use 
this in critical checks, just as it is now.

> Proper versioning of storage format
> ---
>
> Key: OAK-4295
> URL: https://issues.apache.org/jira/browse/OAK-4295
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: resilience, technical_debt
> Fix For: 1.6
>
>
> OAK-3348 introduced changes to the segment format (which has been bumped to 
> 12 with OAK-4232). However it also changes the format of the tar files (the 
> gc generation of the segments is written to the index file) which would also 
> require proper versioning.
> In an offline discussion [~frm] brought up the idea of adding a manifest file 
> to the store that would specify the format versions of the individual 
> components. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4148) RAT plugin complains about derby files

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4148:
--
Fix Version/s: 1.4.2

> RAT plugin complains about derby files
> --
>
> Key: OAK-4148
> URL: https://issues.apache.org/jira/browse/OAK-4148
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.2, 1.5.2
>
>
> When running checks for a release, rat complains about the following extra 
> files created by Derby:
> {noformat}
> derby.log
> foo/db.lck
> foo/log/README_DO_NOT_TOUCH_FILES.txt
> foo/README_DO_NOT_TOUCH_FILES.txt
> foo/seg0/README_DO_NOT_TOUCH_FILES.txt
> foo/service.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4136) release profile in maven

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4136:
--
Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0 
candidate_oak_1_2 candidate_oak_1_4)

> release profile in maven
> 
>
> Key: OAK-4136
> URL: https://issues.apache.org/jira/browse/OAK-4136
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.2, 1.5.1
>
> Attachments: derby-log-in-target-folder.patch
>
>
> Create a maven profile (ie {{-Prelease}}) which will take care of
> enabling all the required profiles as well as injecting the right
> maven variables.
> So far
> {noformat:title=profiles to be activated}
> pedantic
> integrationTesting
> unittesting
> rdb-derby
> {noformat}
> {noformat:title=variables to be injected}
> nsfixtures=
> rdb.jdbc-url=jdbc:derby:foo;create=true
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4148) RAT plugin complains about derby files

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4148:
--
Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0 
candidate_oak_1_2 candidate_oak_1_4)

> RAT plugin complains about derby files
> --
>
> Key: OAK-4148
> URL: https://issues.apache.org/jira/browse/OAK-4148
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.2, 1.5.2
>
>
> When running checks for a release, rat complains about the following extra 
> files created by Derby:
> {noformat}
> derby.log
> foo/db.lck
> foo/log/README_DO_NOT_TOUCH_FILES.txt
> foo/README_DO_NOT_TOUCH_FILES.txt
> foo/seg0/README_DO_NOT_TOUCH_FILES.txt
> foo/service.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4136) release profile in maven

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4136:
--
Fix Version/s: 1.4.2

> release profile in maven
> 
>
> Key: OAK-4136
> URL: https://issues.apache.org/jira/browse/OAK-4136
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.6, 1.4.2, 1.5.1
>
> Attachments: derby-log-in-target-folder.patch
>
>
> Create a maven profile (ie {{-Prelease}}) which will take care of
> enabling all the required profiles as well as injecting the right
> maven variables.
> So far
> {noformat:title=profiles to be activated}
> pedantic
> integrationTesting
> unittesting
> rdb-derby
> {noformat}
> {noformat:title=variables to be injected}
> nsfixtures=
> rdb.jdbc-url=jdbc:derby:foo;create=true
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4136) release profile in maven

2016-04-25 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256216#comment-15256216
 ] 

Davide Giannella commented on OAK-4136:
---

in 1.4 with http://svn.apache.org/r1740809

> release profile in maven
> 
>
> Key: OAK-4136
> URL: https://issues.apache.org/jira/browse/OAK-4136
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.6, 1.5.1
>
> Attachments: derby-log-in-target-folder.patch
>
>
> Create a Maven profile (i.e. {{-Prelease}}) which will take care of
> enabling all the required profiles as well as injecting the right
> Maven variables.
> So far
> {noformat:title=profiles to be activated}
> pedantic
> integrationTesting
> unittesting
> rdb-derby
> {noformat}
> {noformat:title=variables to be injected}
> nsfixtures=
> rdb.jdbc-url=jdbc:derby:foo;create=true
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4148) RAT plugin complains about derby files

2016-04-25 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256217#comment-15256217
 ] 

Davide Giannella commented on OAK-4148:
---

in 1.4 with http://svn.apache.org/r1740809

> RAT plugin complains about derby files
> --
>
> Key: OAK-4148
> URL: https://issues.apache.org/jira/browse/OAK-4148
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.6, 1.5.2
>
>
> When running checks for a release, the RAT plugin complains about the following 
> extra files created by Derby:
> {noformat}
> derby.log
> foo/db.lck
> foo/log/README_DO_NOT_TOUCH_FILES.txt
> foo/README_DO_NOT_TOUCH_FILES.txt
> foo/seg0/README_DO_NOT_TOUCH_FILES.txt
> foo/service.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4148) RAT plugin complains about derby files

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4148:
--
Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4  (was: )

> RAT plugin complains about derby files
> --
>
> Key: OAK-4148
> URL: https://issues.apache.org/jira/browse/OAK-4148
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.6, 1.5.2
>
>
> When running checks for a release, the RAT plugin complains about the following 
> extra files created by Derby:
> {noformat}
> derby.log
> foo/db.lck
> foo/log/README_DO_NOT_TOUCH_FILES.txt
> foo/README_DO_NOT_TOUCH_FILES.txt
> foo/seg0/README_DO_NOT_TOUCH_FILES.txt
> foo/service.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4275) Backport OAK-4065 (Counter index can get out of sync) to 1.2 and 1.4

2016-04-25 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4275:
--
Fix Version/s: (was: 1.4.2)
   1.4.3

> Backport OAK-4065 (Counter index can get out of sync) to 1.2 and 1.4
> 
>
> Key: OAK-4275
> URL: https://issues.apache.org/jira/browse/OAK-4275
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.2.15, 1.4.3
>
>
> Even though two workarounds exist (one is to rebuild the counter index, 
> the other is to manually override the cost of an index), I think this needs to 
> be backported:
> * Once it occurs, some queries are very slow, and it's not always easy to 
> understand why.
> * The backport fixes the issue, and ensures people don't run into this issue 
> at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4303) Use the oak-segment-next in the oak-upgrade tests

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4303:
---
Labels: migration upgrade  (was: )

> Use the oak-segment-next in the oak-upgrade tests
> -
>
> Key: OAK-4303
> URL: https://issues.apache.org/jira/browse/OAK-4303
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next, upgrade
>Reporter: Tomek Rękawek
>  Labels: migration, upgrade
> Fix For: 1.6
>
>
> Since oak-segment will be deprecated, let's switch to oak-segment-next in 
> all oak-upgrade tests (apart from the specific test covering the "classic" 
> oak-segment support).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4260) Define and implement migration from oak-segment to oak-segment-next

2016-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256148#comment-15256148
 ] 

Tomek Rękawek commented on OAK-4260:


Committed the change in [r1740787|http://svn.apache.org/r1740787]. There's one 
change since the previous comment: the segment-next format doesn't require any 
custom scheme prefix, while the classic format requires {{segment-old:}}. So the 
actual command to migrate an old repository to segment-next would be:

{noformat}
java -jar oak-upgrade.jar --mmap segment-old:crx-quickstart/repository 
crx-quickstart/new-repository
{noformat}

I created OAK-4303 to fix the failing tests that you've mentioned.

> Define and implement migration from oak-segment to oak-segment-next
> ---
>
> Key: OAK-4260
> URL: https://issues.apache.org/jira/browse/OAK-4260
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next, segmentmk, upgrade
>Reporter: Michael Dürig
>Assignee: Tomek Rękawek
>  Labels: migration
> Fix For: 1.6
>
>
> We need to come up with a plan, implementation and documentation for how we 
> deal with migrating from {{oak-segment}} to {{oak-segment-next}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4303) Use the oak-segment-next in the oak-upgrade tests

2016-04-25 Thread JIRA
Tomek Rękawek created OAK-4303:
--

 Summary: Use the oak-segment-next in the oak-upgrade tests
 Key: OAK-4303
 URL: https://issues.apache.org/jira/browse/OAK-4303
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: segment-next, upgrade
Reporter: Tomek Rękawek
 Fix For: 1.6


Since oak-segment will be deprecated, let's switch to oak-segment-next in 
all oak-upgrade tests (apart from the specific test covering the "classic" 
oak-segment support).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-4302) DefaultSyncContextTest contains duplicate test

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-4302.
-
   Resolution: Fixed
Fix Version/s: (was: 1.6)
   1.5.2

[~frm], thanks a lot for spotting and reporting

> DefaultSyncContextTest contains duplicate test
> --
>
> Key: OAK-4302
> URL: https://issues.apache.org/jira/browse/OAK-4302
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Francesco Mari
>Assignee: angela
>Priority: Trivial
> Fix For: 1.5.2
>
>
> Some test methods in {{DefaultSyncContextTests}} are the same when compared 
> line-by-line:
> * {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
> {{testCreateValueFromInputStream}};
> * {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.
> If these methods are actually testing different things because of a lucky 
> state of the instance variable and an even luckier running order, they should 
> be reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4233) Property index stored locally

2016-04-25 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256102#comment-15256102
 ] 

Davide Giannella commented on OAK-4233:
---

[~tomek.rekawek]
bq. OK, if everyone agrees that we should implement the Marcel's idea, let's do 
it. It'll help to solve the original issue anyway, as the index changes will be 
saved asynchronously.

+1 

[~chetanm]
bq. If we are switching to async mode then its better to make use of Lucene as 
that provides much compact storage and reduces load on "nodes" collection

+1 for reusing Lucene. I guess we could simply replace the
OakDirectory with something like an InMemoryDirectory; it might turn out
to be a simple extension (I didn't check the code). I'm slightly concerned
about the memory impact and about binary data: I don't know exactly what
ends up in the data store from Lucene, but should it stay in memory?

bq. To make the async index more closer to current state (per cluster node) we 
can have another in memory Lucene index which gets updated by the background 
observer based on diff from last checkpoint and current state. The query can 
then be a union of cursors from the two indexes (base persisted index and in 
memory index)

+1 for using observation rather than commit hooks (if I understood
your statement correctly). This should allow us to process only
successful commits, and we should be able to limit it to local events
only.
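
To make the "union of cursors" idea above concrete, here is a minimal sketch with 
hypothetical names (this is not the actual Oak QueryIndex/Cursor API, and a real 
cursor would be lazy rather than materializing the results): paths coming from the 
base persisted index and from the in-memory index are merged and de-duplicated, 
which is also where an explicit distinct matters if the query engine doesn't add 
one by itself.

{noformat:title=union of two index cursors (sketch)}
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

public class UnionCursorSketch {

    // Merge two result streams (base persisted index + in-memory index)
    // into one de-duplicated iterator over paths.
    static Iterator<String> union(Iterator<String> baseIndex, Iterator<String> inMemoryIndex) {
        Set<String> paths = new LinkedHashSet<>(); // keeps order, drops duplicates
        while (baseIndex.hasNext()) {
            paths.add(baseIndex.next());
        }
        while (inMemoryIndex.hasNext()) {
            paths.add(inMemoryIndex.next());
        }
        return paths.iterator();
    }

    public static void main(String[] args) {
        Iterator<String> base = Arrays.asList("/content/a", "/content/b").iterator();
        Iterator<String> local = Arrays.asList("/content/b", "/content/c").iterator();
        for (Iterator<String> it = union(base, local); it.hasNext();) {
            System.out.println(it.next()); // /content/a, /content/b, /content/c
        }
    }
}
{noformat}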

[~catholicon]
bq. I couldn't come up with a case of duplicate rows

I don't recall the query engine performing a distinct by default in
UnionQueryImpl, though I may well be remembering this wrong.



> Property index stored locally
> -
>
> Key: OAK-4233
> URL: https://issues.apache.org/jira/browse/OAK-4233
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, query
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.6
>
>
> When running Oak in a cluster, each write operation is expensive. After 
> performing some stress-tests with a geo-distributed Mongo cluster, we've 
> found out that updating property indexes is a large part of the overall 
> traffic. Let's try to create a new property-local index, that will save the 
> indexed data locally, without sharing it.
> Assumptions:
> -there's a new {{property-local}} index type for which the data are saved in 
> the local SegmentNodeStore instance created specifically for this purpose,
> -local changes are indexed using a new editor, based on the 
> {{PropertyIndexEditor}},
> -remote changes are extracted from the JournalEntries fetched in the 
> background read operation and indexed as well,
> -the new index type won't support uniqueness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4302) DefaultSyncContextTest contains duplicate tests

2016-04-25 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256097#comment-15256097
 ] 

angela commented on OAK-4302:
-

[~frm], you are perfectly right about {{testGetIdentityRefSyncGroup}} and 
{{testGetIdentityRefSyncUser}}; the latter should obviously test the 
identity-ref for a synced user. 

The other tests, however, are indeed slightly different, as they test different 
ways of creating a binary value; so false alarm, I'd say :-)

> DefaultSyncContextTest contains duplicate tests
> ---
>
> Key: OAK-4302
> URL: https://issues.apache.org/jira/browse/OAK-4302
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Francesco Mari
>Assignee: angela
> Fix For: 1.6
>
>
> Some test methods in {{DefaultSyncContextTests}} are the same when compared 
> line-by-line:
> * {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
> {{testCreateValueFromInputStream}};
> * {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.
> If these methods are actually testing different things because of a lucky 
> state of the instance variable and an even luckier running order, they should 
> be reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4302) DefaultSyncContextTest contains duplicate tests

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-4302:

Priority: Trivial  (was: Major)

> DefaultSyncContextTest contains duplicate tests
> ---
>
> Key: OAK-4302
> URL: https://issues.apache.org/jira/browse/OAK-4302
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Francesco Mari
>Assignee: angela
>Priority: Trivial
> Fix For: 1.6
>
>
> Some test methods in {{DefaultSyncContextTests}} are the same when compared 
> line-by-line:
> * {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
> {{testCreateValueFromInputStream}};
> * {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.
> If these methods are actually testing different things because of a lucky 
> state of the instance variable and an even luckier running order, they should 
> be reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4302) DefaultSyncContextTest contain duplicate test

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-4302:

Summary: DefaultSyncContextTest contain duplicate test  (was: 
DefaultSyncContextTest contains duplicate tests)

> DefaultSyncContextTest contain duplicate test
> -
>
> Key: OAK-4302
> URL: https://issues.apache.org/jira/browse/OAK-4302
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Francesco Mari
>Assignee: angela
>Priority: Trivial
> Fix For: 1.6
>
>
> Some test methods in {{DefaultSyncContextTests}} are the same when compared 
> line-by-line:
> * {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
> {{testCreateValueFromInputStream}};
> * {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.
> If these methods are actually testing different things because of a lucky 
> state of the instance variable and an even luckier running order, they should 
> be reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4302) DefaultSyncContextTest contains duplicate test

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-4302:

Summary: DefaultSyncContextTest contains duplicate test  (was: 
DefaultSyncContextTest contain duplicate test)

> DefaultSyncContextTest contains duplicate test
> --
>
> Key: OAK-4302
> URL: https://issues.apache.org/jira/browse/OAK-4302
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Francesco Mari
>Assignee: angela
>Priority: Trivial
> Fix For: 1.6
>
>
> Some test methods in {{DefaultSyncContextTests}} are the same when compared 
> line-by-line:
> * {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
> {{testCreateValueFromInputStream}};
> * {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.
> If these methods are actually testing different things because of a lucky 
> state of the instance variable and an even luckier running order, they should 
> be reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4302) DefaultSyncContextTest contains duplicate tests

2016-04-25 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reassigned OAK-4302:
---

Assignee: angela

> DefaultSyncContextTest contains duplicate tests
> ---
>
> Key: OAK-4302
> URL: https://issues.apache.org/jira/browse/OAK-4302
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Francesco Mari
>Assignee: angela
> Fix For: 1.6
>
>
> Some test methods in {{DefaultSyncContextTests}} are the same when compared 
> line-by-line:
> * {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
> {{testCreateValueFromInputStream}};
> * {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.
> If these methods are actually testing different things because of a lucky 
> state of the instance variable and an even luckier running order, they should 
> be reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4302) DefaultSyncContextTest contains duplicate tests

2016-04-25 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-4302:
---

 Summary: DefaultSyncContextTest contains duplicate tests
 Key: OAK-4302
 URL: https://issues.apache.org/jira/browse/OAK-4302
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-external
Reporter: Francesco Mari
 Fix For: 1.6


Some test methods in {{DefaultSyncContextTests}} are the same when compared 
line-by-line:

* {{testCreateValueFromBytesArray}}, {{testCreateValueFromBinary}} and 
{{testCreateValueFromInputStream}};
* {{testGetIdentityRefSyncGroup}} and {{testGetIdentityRefSyncUser}}.

If these methods are actually testing different things because of a lucky state 
of the instance variable and an even luckier running order, they should be 
reworked to be independent. Otherwise, some of them should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4112) Replace the query exclusive lock with a cache tracker

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-4112:
---
Attachment: OAK-4112-4.patch

> Replace the query exclusive lock with a cache tracker
> -
>
> Key: OAK-4112
> URL: https://issues.apache.org/jira/browse/OAK-4112
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk
>Reporter: Tomek Rękawek
>  Labels: performance
> Fix For: 1.6
>
> Attachments: OAK-4112-1.patch, OAK-4112-2.patch, OAK-4112-3.patch, 
> OAK-4112-4.patch, OAK-4112.patch
>
>
> The {{MongoDocumentStore#query()}} method uses an expensive 
> {{TreeLock#acquireExclusive}} method, introduced in OAK-1897 to avoid caching 
> outdated documents.
> It should be possible to avoid acquiring the exclusive lock by tracking the 
> cache changes that occur during the Mongo find() operation. When the find() 
> is done, we can update the cache with the received documents if they haven't 
> been invalidated in the meantime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-4211) FileAccess.Mapped leaks file channels

2016-04-25 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari reassigned OAK-4211:
---

Assignee: Francesco Mari

> FileAccess.Mapped leaks file channels 
> --
>
> Key: OAK-4211
> URL: https://issues.apache.org/jira/browse/OAK-4211
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next
>Reporter: Michael Dürig
>Assignee: Francesco Mari
> Fix For: 1.6
>
> Attachments: OAK-4211-02.patch, OAK-4211-03.patch, OAK-4211.patch
>
>
> {{FileAccess.Mapped}} seems to leak {{FileChannel}} instances. The file 
> channels are acquired in the constructor but never released. 
> FYI [~alex.parvulescu], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4211) FileAccess.Mapped leaks file channels

2016-04-25 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256053#comment-15256053
 ] 

Francesco Mari commented on OAK-4211:
-

I will commit the patch and resolve this issue. OAK-4274 will still be there as 
a reminder for this problem.

> FileAccess.Mapped leaks file channels 
> --
>
> Key: OAK-4211
> URL: https://issues.apache.org/jira/browse/OAK-4211
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next
>Reporter: Michael Dürig
> Fix For: 1.6
>
> Attachments: OAK-4211-02.patch, OAK-4211-03.patch, OAK-4211.patch
>
>
> {{FileAccess.Mapped}} seems to leak {{FileChannel}} instances. The file 
> channels are acquired in the constructor but never released. 
> FYI [~alex.parvulescu], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4294) Consider making FileStore.writer volatile

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4294:
---
Issue Type: Improvement  (was: Task)

> Consider making FileStore.writer volatile
> -
>
> Key: OAK-4294
> URL: https://issues.apache.org/jira/browse/OAK-4294
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-next
>Reporter: Michael Dürig
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.6
>
>
> That field is not volatile although it is accessed by different threads. We 
> should consider changing it to volatile.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4285) Align cleanup of segment id tables with the new cleanup strategy

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4285:
---
Description: We need to align cleanup of the segment id tables with the new 
"brutal" strategy introduced with OAK-3348. That is, we need to remove those 
segment ids from the segment id tables whose segments have actually been gc'ed. 
  (was: We need to align cleanup of the segment id tables with the new "brutal" 
strategy introduced with OAK-3348)

> Align cleanup of segment id tables with the new cleanup strategy 
> -
>
> Key: OAK-4285
> URL: https://issues.apache.org/jira/browse/OAK-4285
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: cleanup, gc
> Fix For: 1.6
>
>
> We need to align cleanup of the segment id tables with the new "brutal" 
> strategy introduced with OAK-3348. That is, we need to remove those segment 
> ids from the segment id tables whose segments have actually been gc'ed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4291) FileStore.flush prone to races leading to corruption

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4291:
---
Issue Type: Bug  (was: Task)

> FileStore.flush prone to races leading to corruption
> 
>
> Key: OAK-4291
> URL: https://issues.apache.org/jira/browse/OAK-4291
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next
>Reporter: Michael Dürig
>Priority: Critical
>  Labels: resilience
> Fix For: 1.6
>
>
> There is a small window in {{FileStore.flush}} that could lead to data 
> corruption: if we crash right after setting the persisted head but before any 
> delay-flushed {{SegmentBufferWriter}} instance flushes (see 
> {{SegmentBufferWriterPool.returnWriter()}}) then that data is lost although 
> it might already be referenced from the persisted head.
> We need to come up with a test case for this. 
> A possible fix would be to return a future from {{SegmentWriter.flush}} and 
> rely on a completion callback. Such a change would most likely also be useful 
> for OAK-3690. 
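
For what it's worth, a hedged sketch of the future/completion-callback idea from 
the description above (hypothetical names only, not the actual {{SegmentWriter}} 
API): the persisted head is only advanced once the flush future completes.

{noformat:title=completion callback on flush (sketch)}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FlushCallbackSketch {

    private final ExecutorService flushExecutor = Executors.newSingleThreadExecutor();

    // Hypothetical flush() that returns a future instead of blocking,
    // so callers can defer publishing the new head until the data is on disk.
    CompletableFuture<Void> flush() {
        return CompletableFuture.runAsync(
                () -> System.out.println("buffered segments flushed to disk"),
                flushExecutor);
    }

    void persistHead() {
        // Advance the persisted head only after the flush has completed.
        flush().thenRun(() -> System.out.println("persisted head updated"));
    }

    public static void main(String[] args) {
        FlushCallbackSketch sketch = new FlushCallbackSketch();
        sketch.persistHead();
        sketch.flushExecutor.shutdown();
    }
}
{noformat}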



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4284) Garbage left behind when compaction does not succeed

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4284:
---
Issue Type: Improvement  (was: Task)

> Garbage left behind when compaction does not succeed
> 
>
> Key: OAK-4284
> URL: https://issues.apache.org/jira/browse/OAK-4284
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: cleanup, compaction, gc
> Fix For: 1.6
>
>
> As a result of the new cleanup approach introduced with OAK-3348 ("brutal"), a 
> compaction cycle that is not successful (either because of cancellation or 
> because of giving up waiting for the lock) leaves garbage behind, which is 
> only cleaned up two generations later. 
> We should look into ways to remove such garbage more pro-actively. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4280) Compaction cannot be cancelled

2016-04-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-4280:
---
Issue Type: Improvement  (was: Task)

> Compaction cannot be cancelled 
> ---
>
> Key: OAK-4280
> URL: https://issues.apache.org/jira/browse/OAK-4280
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.6
>
>
> As a result of the de-duplication cache based online compaction approach from 
> OAK-3348 compaction cannot be cancelled any more (in the sense of OAK-3290). 
> As I assume we still need this feature we should look into ways to 
> re-implement it on top of the current approach. 
> Also I figure implementing a [partial compaction | 
> https://issues.apache.org/jira/browse/OAK-4122?focusedCommentId=15223924=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15223924]
>  approach on top of a commit scheduler (OAK-4122) would need a feature of 
> this sort. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4112) Replace the query exclusive lock with a cache tracker

2016-04-25 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256047#comment-15256047
 ] 

Marcel Reutegger commented on OAK-4112:
---

- NodeDocumentCache has an unused import of {{org.h2.command.dml.Call}}
- NodeDocumentCache.get() creates a {{wrappedLoader}} but it is not used. I 
guess it should be used for the get() call on the inner caches (see the sketch 
below). I'm a bit worried that we don't have a test that would have caught this.
- Synchronization on the BloomFilter now looks good to me
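
To illustrate the second point, a purely hypothetical sketch using Guava's 
{{Cache}} (not the actual {{NodeDocumentCache}} code): if a wrapping loader is 
built but the plain loader is handed to the inner cache, the wrapper's 
bookkeeping silently never runs, which is exactly the kind of bug a targeted 
test would catch.

{noformat:title=wrapped loader that is never used (sketch)}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicBoolean;

public class WrappedLoaderSketch {

    public static void main(String[] args) throws Exception {
        Cache<String, String> innerCache = CacheBuilder.newBuilder().build();
        AtomicBoolean bookkeepingRan = new AtomicBoolean(false);

        Callable<String> loader = () -> "document-from-store";
        Callable<String> wrappedLoader = () -> {
            bookkeepingRan.set(true);             // stands in for invalidation tracking
            return loader.call();
        };

        // Bug pattern: wrappedLoader exists, but the plain loader is passed on.
        innerCache.get("0:/", loader);
        System.out.println(bookkeepingRan.get()); // false -> bookkeeping skipped

        // Intended pattern: the inner cache get() receives the wrapped loader.
        innerCache.invalidate("0:/");
        innerCache.get("0:/", wrappedLoader);
        System.out.println(bookkeepingRan.get()); // true
    }
}
{noformat}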

> Replace the query exclusive lock with a cache tracker
> -
>
> Key: OAK-4112
> URL: https://issues.apache.org/jira/browse/OAK-4112
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk
>Reporter: Tomek Rękawek
>  Labels: performance
> Fix For: 1.6
>
> Attachments: OAK-4112-1.patch, OAK-4112-2.patch, OAK-4112-3.patch, 
> OAK-4112.patch
>
>
> The {{MongoDocumentStore#query()}} method uses an expensive 
> {{TreeLock#acquireExclusive}} method, introduced in OAK-1897 to avoid caching 
> outdated documents.
> It should be possible to avoid acquiring the exclusive lock by tracking the 
> cache changes that occur during the Mongo find() operation. When the find() 
> is done, we can update the cache with the received documents if they haven't 
> been invalidated in the meantime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4295) Proper versioning of storage format

2016-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256045#comment-15256045
 ] 

Michael Dürig commented on OAK-4295:


+1, I like this approach. This might also be helpful in migration (OAK-4260).

> Proper versioning of storage format
> ---
>
> Key: OAK-4295
> URL: https://issues.apache.org/jira/browse/OAK-4295
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: resilience, technical_debt
> Fix For: 1.6
>
>
> OAK-3348 introduced changes to the segment format (which has been bumped to 
> 12 with OAK-4232). However it also changes the format of the tar files (the 
> gc generation of the segments is written to the index file) which would also 
> require proper versioning.
> In an offline discussion [~frm] brought up the idea of adding a manifest file 
> to the store that would specify the format versions of the individual 
> components. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4274) Memory-mapped files can't be explicitly unmapped

2016-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256042#comment-15256042
 ] 

Michael Dürig commented on OAK-4274:


This might turn out to be worse than unmapping being deferred by JVM gc: client 
code holding a reference to a record "forever" will keep the containing tar 
file mapped, although the underlying segment might have been gc'ed already by 
revision gc. 

OTOH I don't think this is a problem re. memory consumption as the underlying 
OS will simply swap the actual memory content out. It might be a problem re. 
number of file handles. Definitely something to keep an eye on. Maybe 
[~jsedding]'s proposal with the phantom references gives us a way to better 
monitor this!?
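
A minimal sketch of what such phantom-reference based monitoring could look like 
(hypothetical class, not existing Oak code): the buffer cannot be resurrected 
through a phantom reference, but draining the reference queue tells us when the 
JVM has actually collected, and hence unmapped, a {{MappedByteBuffer}}.

{noformat:title=phantom-reference monitoring (sketch)}
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.nio.MappedByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MappedBufferMonitor {

    private final ReferenceQueue<MappedByteBuffer> queue = new ReferenceQueue<>();

    // Keep the phantom references strongly reachable, mapped to the tar file name.
    private final Map<Reference<MappedByteBuffer>, String> tracked = new ConcurrentHashMap<>();

    public void track(MappedByteBuffer buffer, String tarFile) {
        tracked.put(new PhantomReference<>(buffer, queue), tarFile);
    }

    /** Poll from a background thread to see which mappings were released. */
    public void drain() {
        Reference<? extends MappedByteBuffer> ref;
        while ((ref = queue.poll()) != null) {
            String tarFile = tracked.remove(ref);
            System.out.println("Mapping released for " + tarFile);
        }
    }
}
{noformat}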

> Memory-mapped files can't be explicitly unmapped
> 
>
> Key: OAK-4274
> URL: https://issues.apache.org/jira/browse/OAK-4274
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-next, segmentmk
>Reporter: Francesco Mari
>  Labels: gc, resilience
> Fix For: 1.6
>
>
> As described by [this JDK 
> bug|http://bugs.java.com/view_bug.do?bug_id=4724038], there is no way to 
> explicitly unmap memory mapped files. A memory mapped file is unmapped only 
> if the corresponding {{MappedByteBuffer}} is garbage collected by the JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4295) Proper versioning of storage format

2016-04-25 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256041#comment-15256041
 ] 

Francesco Mari commented on OAK-4295:
-

The idea is to have a manifest file in the top-level directory describing 
meta-information about the repository. For example, the version of the binary 
format used for the segments and the format of the TAR files are information 
that should be stored in the manifest file. This information applies to the 
repository as a whole, and it shouldn't be necessary to dig into TARs and 
segments to discover it. The manifest doesn't need to be a binary file; I 
think it should be encoded in some plain text format. JSON, YAML, and 
properties files are all good candidates.

Moreover, the manifest can make it easier to distinguish old repository folders 
from new ones. A repository created by the older {{oak-segment}} will not have a 
manifest file, while every repository created by {{oak-segment-next}} will have 
one. This simple detection algorithm can be useful in {{oak-run}} to 
auto-create a fixture for the right version of the repository.
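
To make this concrete, a manifest in plain properties format could look roughly 
like the following (file name and keys are made up for the example; only the 
segment format version 12 is taken from the issue description):

{noformat:title=possible manifest content (sketch)}
# <repository>/manifest
segment.version=12
tar.format.version=2
created.by=oak-segment-next
{noformat}

Detection would then boil down to checking whether the file exists: no manifest 
means an old oak-segment repository, a manifest means oak-segment-next, and its 
entries tell which component formats to expect.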

> Proper versioning of storage format
> ---
>
> Key: OAK-4295
> URL: https://issues.apache.org/jira/browse/OAK-4295
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segment-next
>Reporter: Michael Dürig
>  Labels: resilience, technical_debt
> Fix For: 1.6
>
>
> OAK-3348 introduced changes to the segment format (which has been bumped to 
> 12 with OAK-4232). However it also changes the format of the tar files (the 
> gc generation of the segments is written to the index file) which would also 
> require proper versioning.
> In an offline discussion [~frm] brought up the idea of adding a manifest file 
> to the store that would specify the format versions of the individual 
> components. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

