[jira] [Commented] (OAK-2049) ArrayIndexOutOfBoundsException in Segment.getRefId()

2014-09-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134443#comment-14134443
 ] 

Michael Dürig commented on OAK-2049:


At http://svn.apache.org/r1625158 I added a test case that reproduces the issue 
at the {{SegmentWriter}} level. I can confirm that the patch from [~tmueller] 
makes the test pass. 

Note that this test actually takes the issue to the extreme such that the 
segment writer's internal position goes negative. This will cause a subsequent 
assertion to fail: 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java#L322
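The failure mode described above can be illustrated with a minimal sketch. This is not the actual {{SegmentWriter}} code; the class and method names are hypothetical, and it only assumes the general layout in which records are written from the end of the buffer backwards, so an underflowing position shows up as a negative value that a bounds assertion should catch:

```java
public class PositionGuard {
    // Illustrative only: the writer allocates records by moving a position
    // pointer backwards from the end of the buffer. If size estimates are
    // wrong, the position can underflow and go negative.
    static int advance(int position, int recordSize) {
        int next = position - recordSize;
        if (next < 0) {
            // the kind of assertion the linked SegmentWriter line performs
            throw new IllegalStateException("position underflow: " + next);
        }
        return next;
    }

    public static void main(String[] args) {
        int pos = PositionGuard.advance(10, 4); // fits: 10 -> 6
        System.out.println(pos);
        try {
            PositionGuard.advance(pos, 8);      // would go to -2
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```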

> ArrayIndexOutOfBoundsException in Segment.getRefId()
> 
>
> Key: OAK-2049
> URL: https://issues.apache.org/jira/browse/OAK-2049
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1, 1.0.4
>Reporter: Andrew Khoury
>Assignee: Jukka Zitting
>Priority: Critical
> Fix For: 1.0.6
>
>
> It looks like there is some SegmentMK bug that causes the 
> {{Segment.getRefId()}} to throw an {{ArrayIndexOutOfBoundsException}} in some 
> fairly rare corner cases.
> The data was originally migrated into oak via the crx2oak tool mentioned 
> here: http://docs.adobe.com/docs/en/aem/6-0/deploy/upgrade.html
> That tool uses *oak-core-1.0.0* creating an oak instance.
> Similar to OAK-1566 this system was using FileDataStore with SegmentNodeStore.
> In this case the error is seen when running offline compaction using 
> oak-run-1.1-SNAPSHOT.jar (latest).
> {code:none}
> > java -Xmx4096m -jar oak-run-1.1-SNAPSHOT.jar compact 
> > /oak/crx-quickstart/repository/segmentstore
> Apache Jackrabbit Oak 1.1-SNAPSHOT
> Compacting /wcm/cq-author/crx-quickstart/repository/segmentstore
> before [data00055a.tar, data00064a.tar, data00045b.tar, data5a.tar, 
> data00018a.tar, data00022a.tar, data00047a.tar, data00037a.tar, 
> data00049a.tar, data00014a.tar, data00066a.tar, data00020a.tar, 
> data00058a.tar, data00065a.tar, data00069a.tar, data00012a.tar, 
> data9a.tar, data00060a.tar, data00041a.tar, data00016a.tar, 
> data00072a.tar, data00048a.tar, data00061a.tar, data00053a.tar, 
> data00038a.tar, data1a.tar, data00034a.tar, data3a.tar, 
> data00052a.tar, data6a.tar, data00027a.tar, data00031a.tar, 
> data00056a.tar, data00035a.tar, data00063a.tar, data00068a.tar, 
> data8v.tar, data00010a.tar, data00043b.tar, data00021a.tar, 
> data00017a.tar, data00024a.tar, data00054a.tar, data00051a.tar, 
> data00057a.tar, data00059a.tar, data00036a.tar, data00033a.tar, 
> data00019a.tar, data00046a.tar, data00067a.tar, data4a.tar, 
> data00044a.tar, data00013a.tar, data00070a.tar, data00026a.tar, 
> data2a.tar, data00011a.tar, journal.log, data00030a.tar, data00042a.tar, 
> data00025a.tar, data00062a.tar, data00023a.tar, data00071a.tar, 
> data00032b.tar, data00040a.tar, data00015a.tar, data00029a.tar, 
> data00050a.tar, data0a.tar, data7a.tar, data00028a.tar, 
> data00039a.tar]
> -> compacting
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 206
> at 
> org.apache.jackrabbit.oak.plugins.segment.Segment.getRefId(Segment.java:191)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Segment.internalReadRecordId(Segment.java:299)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:295)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:69)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:78)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getProperties(SegmentNodeState.java:150)
> at 
> org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:154)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Compactor$CompactDiff.childNodeAdded(Compactor.java:124)
> at 
> org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Compactor$CompactDiff.childNodeAdded(Compactor.java:124)
> at 
> org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Compactor$CompactDiff.childNodeAdded(Compactor.java:124)
> at 
> org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Compactor$CompactDiff.childNodeAdded(Compactor.java:124)
> at 
> org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Compactor$

[jira] [Commented] (OAK-2070) Segment corruption

2014-09-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134097#comment-14134097
 ] 

Michael Dürig commented on OAK-2070:


bq. Possibly a duplicate of OAK-2049

Yes, I think this one might actually have the same root cause. I have seen 
negative header lengths in {{Segment.prepare}} caused by the {{ids}} list 
containing multiple copies of the same id. This eventually results in 
completely wrong segment size estimates. 
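The effect of duplicate ids on the size estimate can be shown with a small sketch. The names here are illustrative, not the real {{Segment.prepare}} fields; the point is only that counting every occurrence of an id, instead of each distinct id once, skews the estimated header size:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.UUID;

public class RefCountEstimate {
    // Counts every occurrence: duplicates in the ids list are counted twice,
    // which is the kind of over/under-count that corrupts size estimates.
    static int naiveRefCount(List<UUID> ids) {
        return ids.size();
    }

    // Counts each distinct referenced segment once, matching the number of
    // entries the header actually needs.
    static int dedupedRefCount(List<UUID> ids) {
        return new HashSet<>(ids).size();
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID();
        UUID b = UUID.randomUUID();
        // the same id appearing twice in the list, as described above
        List<UUID> ids = Arrays.asList(a, b, a);
        System.out.println(naiveRefCount(ids));   // 3
        System.out.println(dedupedRefCount(ids)); // 2
    }
}
```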


[jira] [Commented] (OAK-2049) ArrayIndexOutOfBoundsException in Segment.getRefId()

2014-09-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134036#comment-14134036
 ] 

Michael Dürig commented on OAK-2049:


To make it fail we should probably add an assertion in 
{{SegmentWriter.flush}}. Something like

{code}
checkState(length <= buffer.length);
{code}

Putting this right before the segment size adjustment causes your test to fail 
reliably for me.
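For reference, {{checkState}} in the snippet above is Guava's precondition helper. A plain-Java equivalent of the suggested guard, with an illustrative buffer (the sizes here are made up, not taken from SegmentWriter), would be:

```java
public class CheckStateDemo {
    // Plain-Java equivalent of Guava's Preconditions.checkState
    static void checkState(boolean expression) {
        if (!expression) {
            throw new IllegalStateException();
        }
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[256];
        int length = 300; // a length exceeding the buffer must fail fast
        try {
            checkState(length <= buffer.length);
        } catch (IllegalStateException expected) {
            System.out.println("inconsistent length caught before flush");
        }
    }
}
```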


[jira] [Updated] (OAK-2093) RDBBlobStore failure because of missing lastmod column on datastore_data table

2014-09-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2093:

Attachment: OAK-2093.diff

Not fully tested.

> RDBBlobStore failure because of missing lastmod column on datastore_data table
> --
>
> Key: OAK-2093
> URL: https://issues.apache.org/jira/browse/OAK-2093
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Reporter: Julian Reschke
> Fix For: 1.1
>
> Attachments: OAK-2093.diff
>
>
> Revision 1571268 introduced a lastmod check on deleteChunks (see OAK-377). It 
> is applied to both tables, however only _meta has it.
> We either need to add the column (and maintain it), or change the code in 
> deleteChunks.
> [~tmueller] feedback appreciated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2050) Query engine: disable or restrict built-in full-text engine

2014-09-15 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133973#comment-14133973
 ] 

Thomas Mueller commented on OAK-2050:
-

Alex and I discussed this offline and came to the following conclusion: if no 
"real" full-text index is available, no results should be returned, and a 
warning is written to the log file saying that either no full-text index is 
available or the index is being re-built. That means the built-in full-text 
engine is disabled.
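The agreed behaviour can be sketched as follows. This is a hypothetical illustration, not the actual Oak query engine API; the method and parameter names are invented for the example:

```java
import java.util.Collections;
import java.util.List;
import java.util.logging.Logger;

public class FulltextFallback {
    private static final Logger LOG =
            Logger.getLogger(FulltextFallback.class.getName());

    // Sketch of the conclusion above: with no "real" full-text index,
    // return no results and log a warning naming the two likely causes,
    // instead of falling back to the memory-hungry built-in engine.
    static List<String> executeFulltext(String statement,
                                        boolean fulltextIndexAvailable) {
        if (!fulltextIndexAvailable) {
            LOG.warning("No results for full-text query '" + statement
                    + "': either no full-text index is available,"
                    + " or the index is being re-built");
            return Collections.emptyList();
        }
        // a configured Lucene/Solr index would be consulted here
        return Collections.singletonList("result");
    }
}
```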



> Query engine: disable or restrict built-in full-text engine
> ---
>
> Key: OAK-2050
> URL: https://issues.apache.org/jira/browse/OAK-2050
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.1, 1.0.6
>
> Attachments: ExternalBlobTest.java.patch
>
>
> There is a built-in full-text engine that is used for full-text queries if no 
> full-text index (Lucene, Solr) is available.
> This built-in engine tries to load binaries in memory, which can result in 
> out-of-memory.
> We either need a way to disable this built-in mechanism (in which case the 
> query engine would throw an exception for full-text queries), and / or change 
> the built-in mechanism so it doesn't result in out-of-memory.





[jira] [Updated] (OAK-2050) Query engine: disable or restrict built-in full-text engine

2014-09-15 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2050:

Fix Version/s: (was: 1.0.7)
   1.0.6






[jira] [Commented] (OAK-1768) DocumentNodeBuilder.setChildNode() runs OOM with large tree

2014-09-15 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133951#comment-14133951
 ] 

Marcel Reutegger commented on OAK-1768:
---

I think we can get rid of the requirement to update the _lastRevs of documents 
that were modified directly by a commit. For those documents the _lastRev can be 
calculated from the commit revision, which is recorded either on the document 
itself or on its commit root document. The overhead shouldn't be too big, since 
we have to go through the most recent revisions anyway to read the node.

This means we wouldn't have to track the _lastRevs of documents explicitly 
updated by the commit and only need to track _lastRev updates to ancestor 
documents that are not touched by the commit. This reduces the size of the 
UnsavedModifications significantly for cases where a large tree is added or 
removed. Another benefit is that we avoid redundancy. As mentioned already, in 
most cases the _lastRev can be calculated based on the changes on the document.
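The idea of deriving _lastRev instead of persisting it can be sketched roughly as follows. This is not the DocumentNodeStore implementation; it only assumes, for illustration, that the revisions seen on a document (or its commit root) are available as a revision-to-committed-flag map when the node is read:

```java
import java.util.Map;
import java.util.Optional;

public class LastRevResolver {
    // Hypothetical sketch: rather than writing _lastRev on every document a
    // commit touches, derive it as the newest committed revision among those
    // already inspected while resolving the node's current state.
    static Optional<Long> resolveLastRev(Map<Long, Boolean> revisions) {
        return revisions.entrySet().stream()
                .filter(Map.Entry::getValue)   // keep committed revisions only
                .map(Map.Entry::getKey)
                .max(Long::compare);           // newest committed one wins
    }

    public static void main(String[] args) {
        // revisions 1 and 2 committed, 3 still in progress
        Map<Long, Boolean> revisions = Map.of(1L, true, 2L, true, 3L, false);
        System.out.println(resolveLastRev(revisions).get());
    }
}
```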

> DocumentNodeBuilder.setChildNode() runs OOM with large tree
> ---
>
> Key: OAK-1768
> URL: https://issues.apache.org/jira/browse/OAK-1768
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.1
>
> Attachments: 
> 0001-OAK-1768-DocumentNodeBuilder.setChildNode-runs-OOM-w.patch
>
>
> This is a follow up issue for OAK-1760. There we implemented a workaround in 
> the upgrade code to prevent the OOME. For the 1.1 release we should implement 
> a proper fix in the DocumentNodeStore as suggested by Jukka in OAK-1760.





[jira] [Commented] (OAK-2070) Segment corruption

2014-09-15 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133895#comment-14133895
 ] 

Thomas Mueller commented on OAK-2070:
-

Possibly a duplicate of OAK-2049

> Segment corruption
> --
>
> Key: OAK-2070
> URL: https://issues.apache.org/jira/browse/OAK-2070
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Thomas Mueller
> Fix For: 1.1
>
> Attachments: SearchByteSeq.java, SegmentDump.java
>
>
> I got a strange case of a corrupt segment. The error message is 
> "java.lang.IllegalStateException: Segment 
> b6e0cdbc-0004-9b05-3927-000498000493 not found". The segment id looks wrong, 
> my guess is that the segment header got corrupt somehow. The checksum of the 
> entry is correct however. I wrote a simple segment header dump tool (I will 
> attach it), which printed: 
> {noformat}
> segmentId: b6fec1b4-f321-4b64-aeab-31c27a939287.4c65e101
> length: 4
> maxRef: 1
> checksum: 4c65e101
> magic: 0aK
> version: 0
> id count: 66
> root count: 24
> blob ref count: 0
> reserved: 0
> segment #0: e44fec74-3bc9-4cee-a864-5f5fb4485cb3
> segment #1: b6480852-9af2-4f21-a542-933cd554e9ed
> segment #2: bec8216a-64e3-4683-a28a-08fbd7ed91f9
> segment #3: b8067f5a-16af-429f-a1b5-c2f1ed1d3c59
> segment #4: 7c2e6167-b292-444a-a318-4f43692f40f2
> segment #5: 3b707f6e-d1e8-4bc0-ace5-67499ac4c369
> segment #6: 0c15bf0a-770d-4057-af6b-baeaeffde613
> segment #7: 97814ad5-3fc3-436e-a7c6-2aed13bcf529
> segment #8: b4babf94-588f-4b32-a8d6-08ba3351abf2
> segment #9: 2eab77d7-ee41-4519-ad14-c31f4f72d2e2
> segment #10: 57949f56-15d1-4f65-a107-d517a222cb11
> segment #11: d25bb0f8-3948-4d33-a35a-50a60a1e9f27
> segment #12: 58066744-e2d6-4f1c-a962-caae337f48db
> segment #13: b195e811-a984-4b3d-a4c8-5fb33572bfce
> segment #14: 384b360d-69ce-4df4-a419-65b80d39b40c
> segment #15: 325b6083-de56-4a4b-a6e5-2eac83e9d4aa
> segment #16: e3839e53-c516-4dce-a39d-b6241d47ef1d
> segment #17: a56f0a75-625b-4684-af43-9936d4d0c4bd
> segment #18: b7428409-8dc0-4c5b-a050-78c838db0ef1
> segment #19: 6ccd5436-37a1-4b46-a5a4-63f939f2de76
> segment #20: 4f28ca6e-a52b-4a06-acb0-19b710366c96
> segment #21: 137ec136-bcfb-405b-a5e3-59b37c54229c
> segment #22: 8a53ffe4-6941-4b9b-a517-9ab59acf9b63
> segment #23: effd3f42-dff3-4082-a0fc-493beabe8d42
> segment #24: 173ef78d-9f41-49fb-ac25-70daba943b37
> segment #25: 9d3ab04d-d540-4256-a28f-c71134d1bfd0
> segment #26: d1966946-45c7-4539-a8f2-9c7979b698c5
> segment #27: b2a72be0-ee4e-4f9d-a957-f84908f05040
> segment #28: 8eead39e-62f7-4b6c-ac12-5caee295b6e5
> segment #29: 19832493-415c-46d2-aa43-0dd95982d312
> segment #30: 462cd84b-3638-4c8c-a273-42b1be5d3df3
> segment #31: 4b4f6a2d-1281-475c-adbe-34457b6de60d
> segment #32: e78b13e2-039d-4f73-a3a8-d25da3b78a93
> segment #33: 03b652ac-623b-4d3d-a5f0-2e7dcd23fd88
> segment #34: 3ff1c4cf-cbd7-45bb-a23a-7e033c64fe87
> segment #35: ed9fb9ad-0a8c-4d37-ab95-c2211cde6fee
> segment #36: 505f766d-509d-4f1b-ad0b-dfd59ef6b6a1
> segment #37: 398f1e80-e818-41b7-a055-ed6255f6e01b
> segment #38: 74cd7875-df17-43af-a2cc-f3256b2213b9
> segment #39: ad884c29-61c9-4bd5-ad57-76ffa32ae3e8
> segment #40: 29cfb8c8-f759-41b2-ac71-d35eac0910ad
> segment #41: af5032e1-b473-47ad-a8e2-a17e770982e8
> segment #42: 37263fa3-3331-4e89-af1f-38972b65e058
> segment #43: 81baf70d-5529-416f-a8bd-9858d62fd2cc
> segment #44: aa3abdfb-4fc7-4c49-a628-fb097f8db872
> segment #45: 81e958b4-0493-4a7b-ab92-5f3cc13119e7
> segment #46: f03d0200-fe8e-4e93-a94a-97b9052f13ab
> segment #47: d9931e67-cc8c-4f45-a1b5-129a0b07e1fb
> segment #48: e32ad441-12d9-4263-a258-b81bd85f499e
> segment #49: 2093a577-e797-44b5-aa3e-45c08b1ad9c1
> segment #50: dadf47d7-9728-43dd-aa55-177ccbc45b81
> segment #51: 9db4b6f5-c21c-4e27-a4ab-9a0c419477b2
> segment #52: 2ca39d68-e7bf-4c83-aebe-f708fb23659e
> segment #53: 5dc42abe-e17b-41a3-a890-47c8319a
> segment #54: a817c7fb-4f05-429e-ab23-49d68e222e6c
> segment #55: 8a5de869-a8a8-42cb-ac78-ec22763fe558
> segment #56: cca0c5d8-9269-4b1f-ad3b-1ca666ca9cb9
> segment #57: ff299dda-00ce-4e01-ac1c-2289447500b7
> segment #58: 0c94d46e-7fae-474f-ac89-8ba87a64cb56
> segment #59: 890f7fcc-7c7e-4dc7-acc5-ed0927ef1a9f
> segment #60: 16f5de56-2aaa-4ca2-afa7-4bcb536b74b1
> segment #61: 97db1991-1c29-4f8c-ae05-7f79b712fb4c
> segment #62: 936aa29a-2679-4998-a159-d9a19069786a
> segment #63: 0d1abedd-8c20-4c1b-a9c8-f60d7cb94008
> segment #64: 1ffb6ac4-d7a9-44c5-aaa6-bf967e87dec0
> segment #65: b6e0cdbc-0004-9b05-3927-000498000493
> ERROR: not a segment?
> root record reference #0: 7766a
> root record reference #1: 76f93
> root record reference #2: 75e72
> root record reference #3: 758c6
> root record reference #4: 7529a
> root record reference #5: 74c59
> root record reference #6: 7464f
> root record reference #7: 740e0
> root record reference 

[jira] [Commented] (OAK-2049) ArrayIndexOutOfBoundsException in Segment.getRefId()

2014-09-15 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133893#comment-14133893
 ] 

Thomas Mueller commented on OAK-2049:
-

Partial test case (it doesn't actually fail, but it will hit the condition above 
and log a debug-level message if debug logging is enabled):

{noformat}
Index: src/test/java/org/apache/jackrabbit/oak/jcr/CRUDTest.java
===
--- src/test/java/org/apache/jackrabbit/oak/jcr/CRUDTest.java   (revision 
1624000)
+++ src/test/java/org/apache/jackrabbit/oak/jcr/CRUDTest.java   (working copy)
@@ -39,6 +39,32 @@
 public CRUDTest(NodeStoreFixture fixture) {
 super(fixture);
 }
+
+@Test
+public void testMultiValue() throws RepositoryException {
+Session session = getAdminSession();
+// Create
+Node testRoot = session.getRootNode().addNode("testRoot");
+long last = System.currentTimeMillis();
+for (int i = 0; i < 100; i++) {
+if (i % 1000 == 0) {
+long now = System.currentTimeMillis();
+if (now - last > 1000) {
+System.out.println(i);
+}
+}
+Node test = testRoot.addNode("test");
+for (int j = 0; j < 100; j++) {
+String s = "test" + i + "-" + j;
+test.setProperty("p" + j, new String[] { s, s, s, s, s, s, s,
+s, s, s, s, s, s, s });
+}
+session.save();
+test.remove();
+session.save();
+}
+session.save();
+}
 
 @Test
 public void testCRUD() throws RepositoryException {
{noformat}


[jira] [Comment Edited] (OAK-2049) ArrayIndexOutOfBoundsException in Segment.getRefId()

2014-09-15 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133891#comment-14133891
 ] 

Thomas Mueller edited comment on OAK-2049 at 9/15/14 1:40 PM:
--

Possibly the problem is in SegmentWriter.prepare, where "rootcount--" is called 
in a loop. In cases where the same record is referenced multiple times from the 
current record, and this record was so far a root record (for example the last 
record that was written, but not strictly necessary), I think this can end up 
with a wrong "rootcount", such that the segment is not flushed even though it 
should be.

Proposed patch (just the fix, no test case yet):

{noformat}
Index: 
src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java
===
--- src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java  
(revision 1624000)
+++ src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java  
(working copy)
@@ -46,6 +46,7 @@
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashSet;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
@@ -291,14 +292,32 @@
 refcount -= idcount;
 
 Set segmentIds = newIdentityHashSet();
+
+// The set of old record ids in this segment
+// that were previously root record ids, but will no longer be,
+// because the record to be written references them.
+// This needs to be a set, because the list of ids can
+// potentially reference the same record multiple times
+Set notRoots = new HashSet();
+
 for (RecordId recordId : ids) {
 SegmentId segmentId = recordId.getSegmentId();
 if (segmentId != segment.getSegmentId()) {
 segmentIds.add(segmentId);
 } else if (roots.containsKey(recordId)) {
-rootcount--;
+
+// log to verify OAK-2049 was hit
+// TODO this can be removed later on
+if (log.isDebugEnabled()) {
+if (notRoots.contains(recordId)) {
+log.debug("Multiple pointers to the same record");
+}
+}
+
+notRoots.add(recordId);
 }
 }
+rootcount -= notRoots.size();
 if (!segmentIds.isEmpty()) {
 for (int refid = 1; refid < refcount; refid++) {
 segmentIds.remove(segment.getRefId(refid));
{noformat}
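The essence of the bug and of the fix in the patch above can be reduced to a few lines. The names below are illustrative stand-ins for the SegmentWriter fields, not the real code: decrementing once per occurrence lets "rootcount" drift (even below zero) when the same root record is referenced twice, while deduplicating first decrements each distinct record exactly once:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RootCountDemo {
    // The buggy shape: "rootcount--" runs once per occurrence of the id,
    // so a record referenced twice is "un-rooted" twice.
    static int buggyRootCount(int rootcount, List<String> ids, Set<String> roots) {
        for (String id : ids) {
            if (roots.contains(id)) {
                rootcount--;
            }
        }
        return rootcount;
    }

    // The fixed shape from the patch: collect the ids in a set first, then
    // subtract the number of distinct records that stop being roots.
    static int fixedRootCount(int rootcount, List<String> ids, Set<String> roots) {
        Set<String> notRoots = new HashSet<>();
        for (String id : ids) {
            if (roots.contains(id)) {
                notRoots.add(id);
            }
        }
        return rootcount - notRoots.size();
    }

    public static void main(String[] args) {
        Set<String> roots = new HashSet<>(Arrays.asList("r1"));
        List<String> ids = Arrays.asList("r1", "r1"); // same record twice
        System.out.println(buggyRootCount(1, ids, roots)); // -1
        System.out.println(fixedRootCount(1, ids, roots)); // 0
    }
}
```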




[jira] [Commented] (OAK-2049) ArrayIndexOutOfBoundsException in Segment.getRefId()

2014-09-15 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133891#comment-14133891
 ] 

Thomas Mueller commented on OAK-2049:
-

Possibly the problem is in SegmentWriter.prepare, where "rootcount--" is called 
in a loop. In cases where the same record is referenced multiple times from the 
current record, and that record was written just before (i.e. it is the last 
record that was written), I think this can end up with a wrong "rootcount", such 
that the segment is not flushed even though it should be.

Proposed patch (just the fix, no test case yet):

{noformat}
Index: 
src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java
===
--- src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java  
(revision 1624000)
+++ src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentWriter.java  
(working copy)
@@ -46,6 +46,7 @@
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashSet;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
@@ -291,14 +292,32 @@
         refcount -= idcount;
 
         Set<SegmentId> segmentIds = newIdentityHashSet();
+
+        // The set of old record ids in this segment
+        // that were previously root record ids, but will no longer be,
+        // because the record to be written references them.
+        // This needs to be a set, because the list of ids can
+        // potentially reference the same record multiple times
+        Set<RecordId> notRoots = new HashSet<RecordId>();
+
         for (RecordId recordId : ids) {
             SegmentId segmentId = recordId.getSegmentId();
             if (segmentId != segment.getSegmentId()) {
                 segmentIds.add(segmentId);
             } else if (roots.containsKey(recordId)) {
-                rootcount--;
+
+                // log to verify OAK-2049 was hit
+                // TODO this can be removed later on
+                if (log.isDebugEnabled()) {
+                    if (notRoots.contains(recordId)) {
+                        log.debug("Multiple pointers to the same record");
+                    }
+                }
+
+                notRoots.add(recordId);
             }
         }
+        rootcount -= notRoots.size();
         if (!segmentIds.isEmpty()) {
             for (int refid = 1; refid < refcount; refid++) {
                 segmentIds.remove(segment.getRefId(refid));
{noformat}


> ArrayIndexOutOfBoundsException in Segment.getRefId()
> 
>
> Key: OAK-2049
> URL: https://issues.apache.org/jira/browse/OAK-2049
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1, 1.0.4
>Reporter: Andrew Khoury
>Assignee: Jukka Zitting
>Priority: Critical
> Fix For: 1.0.6
>
>
> It looks like there is some SegmentMK bug that causes the 
> {{Segment.getRefId()}} to throw an {{ArrayIndexOutOfBoundsException}} in some 
> fairly rare corner cases.
> The data was originally migrated into oak via the crx2oak tool mentioned 
> here: http://docs.adobe.com/docs/en/aem/6-0/deploy/upgrade.html
> That tool uses *oak-core-1.0.0* creating an oak instance.
> Similar to OAK-1566 this system was using FileDataStore with SegmentNodeStore.
> In this case the error is seen when running offline compaction using 
> oak-run-1.1-SNAPSHOT.jar (latest).
> {code:none}
> > java -Xmx4096m -jar oak-run-1.1-SNAPSHOT.jar compact 
> > /oak/crx-quickstart/repository/segmentstore
> Apache Jackrabbit Oak 1.1-SNAPSHOT
> Compacting /wcm/cq-author/crx-quickstart/repository/segmentstore
> before [data00055a.tar, data00064a.tar, data00045b.tar, data5a.tar, 
> data00018a.tar, data00022a.tar, data00047a.tar, data00037a.tar, 
> data00049a.tar, data00014a.tar, data00066a.tar, data00020a.tar, 
> data00058a.tar, data00065a.tar, data00069a.tar, data00012a.tar, 
> data9a.tar, data00060a.tar, data00041a.tar, data00016a.tar, 
> data00072a.tar, data00048a.tar, data00061a.tar, data00053a.tar, 
> data00038a.tar, data1a.tar, data00034a.tar, data3a.tar, 
> data00052a.tar, data6a.tar, data00027a.tar, data00031a.tar, 
> data00056a.tar, data00035a.tar, data00063a.tar, data00068a.tar, 
> data8v.tar, data00010a.tar, data00043b.tar, data00021a.tar, 
> data00017a.tar, data00024a.tar, data00054a.tar, data00051a.tar, 
> data00057a.tar, data00059a.tar, data00036a.tar, data00033a.tar, 
> data00019a.tar, data00046a.tar, data00067a.tar, data4a.tar, 
> data00044a.tar, data00013a.tar, data00070a.tar, data00026a.tar, 
> data2a.

[jira] [Commented] (OAK-2093) RDBBlobStore failure because of missing lastmod column on datastore_data table

2014-09-15 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133874#comment-14133874
 ] 

Thomas Mueller commented on OAK-2093:
-

I think we should have a test case, and fix the deleteChunks then, for example 
using a statement like this:

{noformat}
delete from <data table> where id in (<...>) and not exists(
    select * from <meta table> m where id = m.id and m.lastMod <= ?)
{noformat}

I think this statement should work with all databases, should be quite fast, 
and would be transactionally safe (no race condition possible). The statement 
would need to be tested on common databases.
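The statement above could be assembled with one bind parameter per chunk id plus a final parameter for the lastMod threshold. The following sketch is illustrative only: the table names DATASTORE_DATA and DATASTORE_META are assumed placeholders, not necessarily the actual RDBBlobStore table names.

```java
public class DeleteChunksSql {

    // Build the proposed delete statement for the given number of chunk ids.
    // The correlated NOT EXISTS mirrors the statement proposed in the comment
    // above; DATASTORE_DATA / DATASTORE_META are hypothetical table names.
    static String buildDelete(int idCount) {
        StringBuilder in = new StringBuilder();
        for (int i = 0; i < idCount; i++) {
            in.append(i == 0 ? "?" : ", ?");
        }
        return "delete from DATASTORE_DATA where id in (" + in
                + ") and not exists(select * from DATASTORE_META m"
                + " where id = m.id and m.lastMod <= ?)";
    }

    public static void main(String[] args) {
        System.out.println(buildDelete(3));
    }
}
```

With a java.sql.PreparedStatement, the chunk ids would be bound to parameters 1..idCount and the lastMod threshold to parameter idCount + 1.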

> RDBBlobStore failure because of missing lastmod column on datastore_data table
> --
>
> Key: OAK-2093
> URL: https://issues.apache.org/jira/browse/OAK-2093
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Reporter: Julian Reschke
> Fix For: 1.1
>
>
> Revision 1571268 introduced a lastmod check on deleteChunks (see OAK-377). It 
> is applied to both tables, however only _meta has it.
> We either need to add the column (and maintain it), or change the code in 
> deleteChunks.
> [~tmueller] feedback appreciated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2093) RDBBlobStore failure because of missing lastmod column on datastore_data table

2014-09-15 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2093:

Fix Version/s: 1.1

> RDBBlobStore failure because of missing lastmod column on datastore_data table
> --
>
> Key: OAK-2093
> URL: https://issues.apache.org/jira/browse/OAK-2093
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Reporter: Julian Reschke
> Fix For: 1.1
>
>
> Revision 1571268 introduced a lastmod check on deleteChunks (see OAK-377). It 
> is applied to both tables, however only _meta has it.
> We either need to add the column (and maintain it), or change the code in 
> deleteChunks.
> [~tmueller] feedback appreciated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2094) Oak Upgrade should depend on oak-jcr with a 'test' scope

2014-09-15 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-2094.
--
Resolution: Fixed

rev 1624317 on trunk and rev 1624319 on 1.0 branch

> Oak Upgrade should depend on oak-jcr with a 'test' scope
> 
>
> Key: OAK-2094
> URL: https://issues.apache.org/jira/browse/OAK-2094
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.0.5
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Fix For: 1.1, 1.0.6
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-1768) DocumentNodeBuilder.setChildNode() runs OOM with large tree

2014-09-15 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133774#comment-14133774
 ] 

Marcel Reutegger commented on OAK-1768:
---

Committed work in progress at http://svn.apache.org/r1624993

- Introduce different types of branch commits (regular and rebase commit) to 
reduce memory usage of a rebase
- Better hide UnsavedModifications as an implementation detail

> DocumentNodeBuilder.setChildNode() runs OOM with large tree
> ---
>
> Key: OAK-1768
> URL: https://issues.apache.org/jira/browse/OAK-1768
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.1
>
> Attachments: 
> 0001-OAK-1768-DocumentNodeBuilder.setChildNode-runs-OOM-w.patch
>
>
> This is a follow up issue for OAK-1760. There we implemented a workaround in 
> the upgrade code to prevent the OOME. For the 1.1 release we should implement 
> a proper fix in the DocumentNodeStore as suggested by Jukka in OAK-1760.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2099) AIOOBE in Segment#toString

2014-09-15 Thread JIRA
Michael Dürig created OAK-2099:
--

 Summary: AIOOBE in Segment#toString
 Key: OAK-2099
 URL: https://issues.apache.org/jira/browse/OAK-2099
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Michael Dürig


{{Segment#toString}} doesn't correct the offset of a record in a segment in the 
case where the total segment size is less than the maximal segment size. This 
usually leads to an AIOOBE at the point where the root refs / blob refs are 
listed. 
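The offset correction at issue can be sketched as follows. This is an illustrative model only, assuming record offsets address the tail of a maximal segment address space; the constant MAX_SEGMENT_SIZE and the formula are assumptions about the SegmentMK design, not code copied from the Oak sources.

```java
public class OffsetSketch {

    // assumed maximal segment address space (256 KiB)
    static final int MAX_SEGMENT_SIZE = 256 * 1024;

    // Translate a record offset into an index within a segment whose actual
    // size is segmentSize; without the subtraction, a short segment indexed
    // directly by 'offset' produces the reported ArrayIndexOutOfBoundsException.
    static int pos(int offset, int segmentSize) {
        int index = offset - (MAX_SEGMENT_SIZE - segmentSize);
        if (index < 0 || index >= segmentSize) {
            throw new ArrayIndexOutOfBoundsException(index);
        }
        return index;
    }

    public static void main(String[] args) {
        // for a 1 KiB segment, offset MAX_SEGMENT_SIZE - 1024 maps to index 0
        System.out.println(pos(MAX_SEGMENT_SIZE - 1024, 1024)); // 0
        System.out.println(pos(MAX_SEGMENT_SIZE - 1, 1024));    // 1023
    }
}
```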



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2098) oak-tarmk-failover doesn't compile - "cannot find symbol" in SegmentLoaderHandler

2014-09-15 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-2098.
-
Resolution: Fixed

Fixed by reverting the commit.

> oak-tarmk-failover doesn't compile - "cannot find symbol" in 
> SegmentLoaderHandler
> -
>
> Key: OAK-2098
> URL: https://issues.apache.org/jira/browse/OAK-2098
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run
>Affects Versions: 1.1
> Environment: java 7 or 6
>Reporter: Andrei Dulvac
>  Labels: failover, oak
>
> `$ mvn clean install` in the root of the project fails because 
> oak-tarmk-failover fails to build.
> Introduced by 41916b6. Ln 104:
> {code}
> catch (FileStore.FileStoreCorruptException e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2098) oak-tarmk-failover doesn't compile - "cannot find symbol" in SegmentLoaderHandler

2014-09-15 Thread Andrei Dulvac (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133743#comment-14133743
 ] 

Andrei Dulvac commented on OAK-2098:


Ping [~baedke] because 41916b6 introduced the issue. 

> oak-tarmk-failover doesn't compile - "cannot find symbol" in 
> SegmentLoaderHandler
> -
>
> Key: OAK-2098
> URL: https://issues.apache.org/jira/browse/OAK-2098
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run
>Affects Versions: 1.1
> Environment: java 7 or 6
>Reporter: Andrei Dulvac
>  Labels: failover, oak
>
> `$ mvn clean install` in the root of the project fails because 
> oak-tarmk-failover fails to build.
> Introduced by 41916b6. Ln 104:
> {code}
> catch (FileStore.FileStoreCorruptException e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2098) oak-tarmk-failover doesn't compile - "cannot find symbol" in SegmentLoaderHandler

2014-09-15 Thread Andrei Dulvac (JIRA)
Andrei Dulvac created OAK-2098:
--

 Summary: oak-tarmk-failover doesn't compile - "cannot find symbol" 
in SegmentLoaderHandler
 Key: OAK-2098
 URL: https://issues.apache.org/jira/browse/OAK-2098
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: run
Affects Versions: 1.1
 Environment: java 7 or 6
Reporter: Andrei Dulvac


`$ mvn clean install` in the root of the project fails because 
oak-tarmk-failover fails to build.

Introduced by 41916b6. Ln 104:
{code}
catch (FileStore.FileStoreCorruptException e) {
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)