[jira] [Resolved] (OAK-8012) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-8012.
---
   Resolution: Fixed
Fix Version/s: 1.11.0

Fixed in trunk: http://svn.apache.org/r1852493

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8012
> URL: https://issues.apache.org/jira/browse/OAK-8012
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
> Fix For: 1.12, 1.11.0
>
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8012) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755796#comment-16755796
 ] 

Marcel Reutegger commented on OAK-8012:
---

Added an ignored test to trunk: http://svn.apache.org/r1852492

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8012
> URL: https://issues.apache.org/jira/browse/OAK-8012
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
> Fix For: 1.12
>
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged.





[jira] [Created] (OAK-8012) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-8012:
-

 Summary: Unmerged branch changes visible after restart
 Key: OAK-8012
 URL: https://issues.apache.org/jira/browse/OAK-8012
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: documentmk
Affects Versions: 1.10.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.12


Changes persisted to a branch are visible after a restart even when the branch 
has not been merged.





[jira] [Commented] (OAK-8008) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755793#comment-16755793
 ] 

Marcel Reutegger commented on OAK-8008:
---

Oh, no. Wrong button; now the issue is closed.

I reverted all changes in trunk and removed the fix versions from this issue.

http://svn.apache.org/r1852491

The new issue is OAK-8012.

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8008
> URL: https://issues.apache.org/jira/browse/OAK-8008
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged. 





[jira] [Updated] (OAK-8008) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-8008:
--
Fix Version/s: (was: 1.11.0)
   (was: 1.12)

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8008
> URL: https://issues.apache.org/jira/browse/OAK-8008
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged. 





[jira] [Closed] (OAK-8008) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger closed OAK-8008.
-

I'll have to re-open this issue. The new tests in DocumentNodeStoreBranchTest 
make a potential backport difficult. The test class only exists back to the 
1.10 branch.

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8008
> URL: https://issues.apache.org/jira/browse/OAK-8008
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
> Fix For: 1.12, 1.11.0
>
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged. 





[jira] [Commented] (OAK-8005) Build Jackrabbit Oak #1904 failed

2019-01-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755254#comment-16755254
 ] 

Hudson commented on OAK-8005:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1911|https://builds.apache.org/job/Jackrabbit%20Oak/1911/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1911/console]

> Build Jackrabbit Oak #1904 failed
> -
>
> Key: OAK-8005
> URL: https://issues.apache.org/jira/browse/OAK-8005
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1904 has failed.
> First failed run: [Jackrabbit Oak 
> #1904|https://builds.apache.org/job/Jackrabbit%20Oak/1904/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1904/console]





[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Fix Version/s: 1.10.1

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.1
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).
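
The underlying pattern is plain Java, not Oak-specific: {{GZIPInputStream}} holds a native zlib {{Inflater}}, so it should be closed deterministically (e.g. with try-with-resources) rather than left to finalization. A minimal, self-contained sketch of the pattern; the class and method names here are illustrative, not taken from {{RDBDocumentSerializer}}:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCloseExample {

    // Compress a string to GZIP bytes.
    static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Decompress GZIP bytes; try-with-resources guarantees the
    // GZIPInputStream (and its native zlib Inflater) is released
    // promptly instead of waiting for the object to be finalized.
    static String gunzip(byte[] data) throws IOException {
        try (GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return new String(bos.toByteArray(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(gunzip(gzip("hello, oak")));
    }
}
```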





[jira] [Comment Edited] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755018#comment-16755018
 ] 

Julian Reschke edited comment on OAK-8007 at 1/29/19 5:51 PM:
--

trunk: [r1852451|http://svn.apache.org/r1852451]
1.10: [r1852465|http://svn.apache.org/r1852465]



was (Author: reschke):
trunk: [r1852451|http://svn.apache.org/r1852451]

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.1
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).





[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Labels: candidate_oak_1_8  (was: candidate_oak_1_10 candidate_oak_1_8)

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.12, 1.11.0, 1.10.1
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).





[jira] [Commented] (OAK-7994) Principal Management APIs don't allow for search pagination

2019-01-29 Thread angela (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755214#comment-16755214
 ] 

angela commented on OAK-7994:
-

 [~stillalex], 
- jackrabbit api: I agree that it's reasonable to first think about the 
improvement in terms of feasibility in Oak, and only update the Jackrabbit API 
once we have a better understanding of the changes needed.
- ordering: do you mean ascending/descending by principal name, or something 
else? 
- everyone: I wouldn't bother 'manually' injecting the everyone principal, but 
would rather mark that as a known limitation and hint that 
{{PrincipalManager.getEveryone()}} needs to be called if the everyone principal 
is not 'native' to a given implementation (e.g. in the default implementation 
that would mean: if no 'Group' exists, the query won't find/inject it).
- composite: yup, will think about it.
- different impls: I can help you with that if you wish; on the bright side, 
it may help us avoid tailoring it for a specific implementation.
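
The offset/limit semantics under discussion, and why a stable ordering is a precondition for them, can be sketched without any Oak API. This is not the actual {{PrincipalManager}} interface; the class, method, and principal names below are illustrative:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PrincipalPagingSketch {

    // Return one page of results. A stable total order (here: by name)
    // is a precondition -- without it, consecutive pages could overlap
    // or skip entries, which is the ordering concern raised above.
    static List<String> page(List<String> principals, long offset, long limit) {
        List<String> sorted = new ArrayList<>(principals);
        sorted.sort(Comparator.naturalOrder());
        List<String> result = new ArrayList<>();
        for (long i = offset; i < sorted.size() && result.size() < limit; i++) {
            result.add(sorted.get((int) i));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> all = List.of("everyone", "alice", "bob", "carol", "dave");
        System.out.println(page(all, 0, 2)); // [alice, bob]
        System.out.println(page(all, 2, 2)); // [carol, dave]
    }
}
```

A composite provider would additionally have to merge the already-sorted streams of several implementations before applying offset and limit, which is the "magic" mentioned in the blockers above.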

> Principal Management APIs don't allow for search pagination
> ---
>
> Key: OAK-7994
> URL: https://issues.apache.org/jira/browse/OAK-7994
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security, security-spi
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
>
> Current apis are limited to passing a search token, I'd like to also add 
> range (offset, limit).





[jira] [Commented] (OAK-8005) Build Jackrabbit Oak #1904 failed

2019-01-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755165#comment-16755165
 ] 

Hudson commented on OAK-8005:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1910|https://builds.apache.org/job/Jackrabbit%20Oak/1910/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1910/console]

> Build Jackrabbit Oak #1904 failed
> -
>
> Key: OAK-8005
> URL: https://issues.apache.org/jira/browse/OAK-8005
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1904 has failed.
> First failed run: [Jackrabbit Oak 
> #1904|https://builds.apache.org/job/Jackrabbit%20Oak/1904/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1904/console]





[jira] [Resolved] (OAK-8008) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-8008.
---
   Resolution: Fixed
Fix Version/s: 1.11.0

Fixed in trunk: http://svn.apache.org/r1852461

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8008
> URL: https://issues.apache.org/jira/browse/OAK-8008
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
> Fix For: 1.12, 1.11.0
>
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged. 





[jira] [Commented] (OAK-8008) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755137#comment-16755137
 ] 

Marcel Reutegger commented on OAK-8008:
---

Added an ignored test to trunk: http://svn.apache.org/r1852460

> Unmerged branch changes visible after restart
> -
>
> Key: OAK-8008
> URL: https://issues.apache.org/jira/browse/OAK-8008
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.10.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Critical
> Fix For: 1.12
>
>
> Changes persisted to a branch are visible after a restart even when the 
> branch has not been merged. 





[jira] [Comment Edited] (OAK-7994) Principal Management APIs don't allow for search pagination

2019-01-29 Thread Alex Deparvu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16754844#comment-16754844
 ] 

Alex Deparvu edited comment on OAK-7994 at 1/29/19 3:19 PM:


there are a few blockers here:
 - first, I didn't realize this will need a jackrabbit api update. I'll focus 
on the oak bits for now
 - second the results will have to have some sort of ordering which is not 
guaranteed by the current api (and impl)
 - third the way the everyone group is injected in the result set makes the 
pagination task pretty complicated
 - last, the composite provider will have to do some magic to maintain the 
expected results in order over the required range
 - [edit] another one: there are quite a few impls in oak (the principalImpl, 
the user mng, the external auth, the composite). I will not have the bandwidth 
to fix all of them in one go, so we'll probably need some sensible defaults in 
place while the code gets improved case by case

[~anchela] any thoughts?


was (Author: alex.parvulescu):
there are a few blockers here:
 - first, I didn't realize this will need a jackrabbit api update. I'll focus 
on the oak bits for now
 - second the results will have to have some sort of ordering which is not 
guaranteed by the current api (and impl)
 - third the way the everyone group is injected in the result set makes the 
pagination task pretty complicated
 - last, the composite provider will have to do some magic to maintain the 
expected results in order over the required range

[~anchela] any thoughts?

> Principal Management APIs don't allow for search pagination
> ---
>
> Key: OAK-7994
> URL: https://issues.apache.org/jira/browse/OAK-7994
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security, security-spi
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
>
> Current apis are limited to passing a search token, I'd like to also add 
> range (offset, limit).





[jira] [Resolved] (OAK-7904) Exporting query duration per index metrics with Sling Metrics / DropWizard

2019-01-29 Thread Paul Chibulcuteanu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Chibulcuteanu resolved OAK-7904.
-
Resolution: Fixed
  Assignee: Thomas Mueller  (was: Paul Chibulcuteanu)

Opened OAK-8011 for measuring the overhead of measuring. 
Fixed in trunk [r1852102|http://svn.apache.org/r1852102].

> Exporting query duration per index metrics with Sling Metrics / DropWizard
> --
>
> Key: OAK-7904
> URL: https://issues.apache.org/jira/browse/OAK-7904
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: indexing, query
>Reporter: Paul Chibulcuteanu
>Assignee: Thomas Mueller
>Priority: Major
> Fix For: 1.12, 1.9.10
>
> Attachments: OAK-7904.0.patch, OAK-7904.1.patch
>
>
> The purpose of this task is to evaluate & create a metric which calculates 
> the average query duration for each index.
> This metric can later be used to evaluate which index(es) need to be optimised.





[jira] [Commented] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-01-29 Thread Andrei Dulceanu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755098#comment-16755098
 ] 

Andrei Dulceanu commented on OAK-8006:
--

[~frm], thanks for providing the sketch for a test case. In 
[^OAK-8006-test.patch] I managed to create a test based on it, which actually 
exhibits the desired behaviour. This is the good news. The bad news is that the 
first proposed patch wasn't effective. Now the problem resurfaced higher in 
the call chain:
{noformat}
17:01:30.578 INFO  [standby-run-1] StandbyClientSyncExecution.java:193 Copying 
data segment dabdbb4c-4915-4466-a68a-02fdd9c286e7 from primary
17:01:30.590 INFO  [standby-run-1] StandbyClientSyncExecution.java:193 Copying 
data segment 009edfc9-1771-43f6-a18a-a26eb7bac3ad from primary
17:01:30.613 ERROR [standby-run-1] StandbyClientSync.java:183 Failed 
synchronizing state.
org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
009edfc9-1771-43f6-a18a-a26eb7bac3ad not found
at 
org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
at 
org.apache.jackrabbit.oak.segment.file.FileStore.lambda$14(FileStore.java:498)
at 
org.apache.jackrabbit.oak.segment.SegmentCache$EmptyCache.getSegment(SegmentCache.java:229)
at 
org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:498)
at 
org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153)
at org.apache.jackrabbit.oak.segment.Record.getSegment(Record.java:70)
at 
org.apache.jackrabbit.oak.segment.ListRecord.getEntries(ListRecord.java:99)
at 
org.apache.jackrabbit.oak.segment.ListRecord.getEntries(ListRecord.java:92)
at 
org.apache.jackrabbit.oak.segment.SegmentStream.read(SegmentStream.java:165)
at com.google.common.io.ByteStreams.read(ByteStreams.java:828)
at com.google.common.io.ByteStreams.readFully(ByteStreams.java:695)
at com.google.common.io.ByteStreams.readFully(ByteStreams.java:676)
at 
org.apache.jackrabbit.oak.segment.SegmentStream.getString(SegmentStream.java:104)
at 
org.apache.jackrabbit.oak.segment.Segment.readString(Segment.java:467)
at 
org.apache.jackrabbit.oak.segment.SegmentBlob.readLongBlobId(SegmentBlob.java:210)
at 
org.apache.jackrabbit.oak.segment.SegmentBlob.readBlobId(SegmentBlob.java:163)
at 
org.apache.jackrabbit.oak.segment.file.AbstractFileStore$3.consume(AbstractFileStore.java:262)
at 
org.apache.jackrabbit.oak.segment.Segment.forEachRecord(Segment.java:601)
at 
org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readBinaryReferences(AbstractFileStore.java:257)
at 
org.apache.jackrabbit.oak.segment.file.FileStore.writeSegment(FileStore.java:534)
at 
org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.copySegmentFromPrimary(StandbyClientSyncExecution.java:225)
at 
org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.copySegmentHierarchyFromPrimary(StandbyClientSyncExecution.java:194)
at 
org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.compareAgainstBaseState(StandbyClientSyncExecution.java:101)
at 
org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:76)
at 
org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:165)
at 
org.apache.jackrabbit.oak.segment.standby.StandbySegmentBlobTestIT.testSyncWithLongBlobId(StandbySegmentBlobTestIT.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 

[jira] [Updated] (OAK-8011) Benchmark on QUERY_DURATION metrics implemented in OAK-7904

2019-01-29 Thread Paul Chibulcuteanu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Chibulcuteanu updated OAK-8011:

Fix Version/s: 1.12

> Benchmark on QUERY_DURATION metrics implemented in OAK-7904
> ---
>
> Key: OAK-8011
> URL: https://issues.apache.org/jira/browse/OAK-8011
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: indexing, query
>Reporter: Paul Chibulcuteanu
>Priority: Major
> Fix For: 1.12
>
>
> As part of OAK-7904, there are some possible performance concerns about adding 
> additional metrics in code that is executed very frequently.
> See [~tmueller]'s comment: 
> {code}
> Some comments on overhead of measuring:
> We measure here very common, and very fast operations. I don't know how fast 
> next() could be, but if everything is in memory, it could be faster than 600 
> ns. The fastest measured operation I saw was processed at 0.091904 
> milliseconds, that is 91904 nanoseconds. This was divided by 256 measures, 
> so just 359 nanoseconds per operation.
> System.nanoTime() can be slower than that; according to this older article it 
> can be 650 nanoseconds. We need to call it twice to measure, so 1'300 
> nanoseconds. Meaning, measuring slows down the operation by a factor of 4.6 
> in the worst case seen so far.
> What we could do is use org.apache.jackrabbit.oak.stats Clock.Fast, which has 
> a much lower overhead than calling System.nanoTime(). The name "Fast" is 
> somewhat of a misnomer: the clock isn't really faster than other clocks, it's 
> just less overhead. So getting the current time is fast. Resolution is low, 
> but that wouldn't be a problem in our case, it's just that most of the time, 
> operations would be 0 ns, and rarely 100s of ns. On average, that would even 
> out (same as with the sampling it is using right now). The problems with 
> Clock.Fast are:
> Hard to get a hand on this instance.
> It uses a thread pool executor service, which is problematic. If the same 
> service is used by other threads that take milliseconds, then the clock is 
> extremely inaccurate. It would be better to use a simple, separate daemon 
> thread.
> {code}
> Since the metrics stats can be enabled or disabled, two separate benchmark 
> tests can be run:
> * specifying the _oak.query.timerDisabled_ system prop
> * without specifying the _oak.query.timerDisabled_ system prop





[jira] [Created] (OAK-8011) Benchmark on QUERY_DURATION metrics implemented in OAK-7904

2019-01-29 Thread Paul Chibulcuteanu (JIRA)
Paul Chibulcuteanu created OAK-8011:
---

 Summary: Benchmark on QUERY_DURATION metrics implemented in 
OAK-7904
 Key: OAK-8011
 URL: https://issues.apache.org/jira/browse/OAK-8011
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Paul Chibulcuteanu


As part of OAK-7904, there are some possible performance concerns about adding 
additional metrics in code that is executed very frequently.
See [~tmueller]'s comment: 

{code}
Some comments on overhead of measuring:

We measure here very common, and very fast operations. I don't know how fast 
next() could be, but if everything is in memory, it could be faster than 600 
ns. The fastest measured operation I saw was processed at 0.091904 
milliseconds, that is 91904 nanoseconds. This was divided by 256 measures, 
so just 359 nanoseconds per operation.

System.nanoTime() can be slower than that; according to this older article it 
can be 650 nanoseconds. We need to call it twice to measure, so 1'300 
nanoseconds. Meaning, measuring slows down the operation by a factor of 4.6 
in the worst case seen so far.

What we could do is use org.apache.jackrabbit.oak.stats Clock.Fast, which has a 
much lower overhead than calling System.nanoTime(). The name "Fast" is somewhat 
of a misnomer: the clock isn't really faster than other clocks, it's just less 
overhead. So getting the current time is fast. Resolution is low, but that 
wouldn't be a problem in our case, it's just that most of the time, operations 
would be 0 ns, and rarely 100s of ns. On average, that would even out (same as 
with the sampling it is using right now). The problems with Clock.Fast are:

Hard to get a hand on this instance.
It uses a thread pool executor service, which is problematic. If the same 
service is used by other threads that take milliseconds, then the clock is 
extremely inaccurate. It would be better to use a simple, separate daemon thread.
{code}
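
The "simple, separate daemon thread" idea can be sketched in a few lines: one dedicated daemon thread refreshes a volatile timestamp at a fixed resolution, so readers pay only a volatile read instead of a System.nanoTime() call. This is an illustration of the idea, not Oak's actual Clock.Fast implementation:

```java
public class DaemonClock {
    private volatile long nowNanos = System.nanoTime();

    // A dedicated daemon thread refreshes the cached timestamp.
    // Resolution is coarse (resolutionMillis), but readers avoid
    // the per-call cost of System.nanoTime().
    DaemonClock(long resolutionMillis) {
        Thread updater = new Thread(() -> {
            while (true) {
                nowNanos = System.nanoTime();
                try {
                    Thread.sleep(resolutionMillis);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "daemon-clock");
        updater.setDaemon(true);
        updater.start();
    }

    // Cheap read: a single volatile load.
    long nanoTime() {
        return nowNanos;
    }

    public static void main(String[] args) throws InterruptedException {
        DaemonClock clock = new DaemonClock(1);
        long before = clock.nanoTime();
        Thread.sleep(20);
        System.out.println("advanced: " + (clock.nanoTime() >= before));
    }
}
```

Using a dedicated thread avoids the shared-executor problem described above: nothing else can delay the updates, so the clock's error stays bounded by its own resolution.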

Since the metrics stats can be enabled or disabled, two separate benchmark 
tests can be run:
* specifying the _oak.query.timerDisabled_ system prop
* without specifying the _oak.query.timerDisabled_ system prop
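
The sampling approach described in the quoted comment (the measurement divided by 256) can be sketched as timing only one in every 256 operations, so the ~1'300 ns cost of two System.nanoTime() calls is amortized. The class name and constants below are illustrative, not Oak's actual stats code:

```java
public class SampledTimer {
    private static final int SAMPLE_MASK = 255; // time 1 of every 256 calls

    long calls;
    long sampledNanos;
    long sampledCount;

    // Run the operation; only every 256th invocation pays for two
    // System.nanoTime() calls, so the average measuring overhead per
    // operation is roughly 1/256th of the cost of a timed call.
    <T> T run(java.util.function.Supplier<T> op) {
        if ((calls++ & SAMPLE_MASK) != 0) {
            return op.get();            // fast path: no timing at all
        }
        long start = System.nanoTime();
        T result = op.get();
        sampledNanos += System.nanoTime() - start;
        sampledCount++;
        return result;
    }

    long averageNanos() {
        return sampledCount == 0 ? 0 : sampledNanos / sampledCount;
    }

    public static void main(String[] args) {
        SampledTimer timer = new SampledTimer();
        int[] sum = {0};
        for (int i = 0; i < 10_000; i++) {
            timer.run(() -> sum[0] += 1);
        }
        System.out.println("sampled " + timer.sampledCount
                + " of " + timer.calls + " calls");
    }
}
```

On average the sampled durations even out to the true per-operation cost, which is the same trade-off the existing sampling makes: low overhead at the price of noisier individual measurements.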





[jira] [Updated] (OAK-8011) Benchmark on QUERY_DURATION metrics implemented in OAK-7904

2019-01-29 Thread Paul Chibulcuteanu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Chibulcuteanu updated OAK-8011:

Component/s: query
 indexing

> Benchmark on QUERY_DURATION metrics implemented in OAK-7904
> ---
>
> Key: OAK-8011
> URL: https://issues.apache.org/jira/browse/OAK-8011
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: indexing, query
>Reporter: Paul Chibulcuteanu
>Priority: Major
>
> As part of OAK-7904, there are some possible performance concerns about adding 
> additional metrics in code that is executed very frequently.
> See [~tmueller]'s comment: 
> {code}
> Some comments on overhead of measuring:
> We measure here very common, and very fast operations. I don't know how fast 
> next() could be, but if everything is in memory, it could be faster than 600 
> ns. The fastest measured operation I saw was processed at 0.091904 
> milliseconds, that is 91904 nanoseconds. This was divided by 256 measures, 
> so just 359 nanoseconds per operation.
> System.nanoTime() can be slower than that; according to this older article it 
> can be 650 nanoseconds. We need to call it twice to measure, so 1'300 
> nanoseconds. Meaning, measuring slows down the operation by a factor of 4.6 
> in the worst case seen so far.
> What we could do is use org.apache.jackrabbit.oak.stats Clock.Fast, which has 
> a much lower overhead than calling System.nanoTime(). The name "Fast" is 
> somewhat of a misnomer: the clock isn't really faster than other clocks, it's 
> just less overhead. So getting the current time is fast. Resolution is low, 
> but that wouldn't be a problem in our case, it's just that most of the time, 
> operations would be 0 ns, and rarely 100s of ns. On average, that would even 
> out (same as with the sampling it is using right now). The problems with 
> Clock.Fast are:
> Hard to get a hand on this instance.
> It uses a thread pool executor service, which is problematic. If the same 
> service is used by other threads that take milliseconds, then the clock is 
> extremely inaccurate. It would be better to use a simple, separate daemon 
> thread.
> {code}
> Since the metrics stats can be enabled or disabled, two separate benchmark 
> tests can be run:
> * specifying the _oak.query.timerDisabled_ system prop
> * without specifying the _oak.query.timerDisabled_ system prop





[jira] [Updated] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-01-29 Thread Andrei Dulceanu (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-8006:
-
Attachment: OAK-8006-test.patch

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.10.1, 1.8.12
>
> Attachments: OAK-8006-test.patch, OAK-8006.patch
>
>
> When persisting a segment transferred from master, among others, the cold 
> standby needs to read the binary references from the segment. While this 
> usually doesn't involve any additional reads from any other segments, there 
> is a special case concerning binary IDs larger than 4092 bytes. These can 
> live in other segments (which got transferred prior to the current segment 
> and are already on the standby), but it might also be the case that the 
> binary ID is stored in the same segment. If this happens, the call to 
> {{blobId.getSegment()}} [0] triggers a new read of the current, un-persisted 
> segment. Thus, a {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.RecordId.getSegment(RecordId.java:98) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readLongBlobId(SegmentBlob.java:206)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readBlobId(SegmentBlob.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore$3.consume(AbstractFileStore.java:262)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.Segment.forEachRecord(Segment.java:601) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readBinaryReferences(AbstractFileStore.java:257)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.writeSegment(FileStore.java:533)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> 

[jira] [Commented] (OAK-8005) Build Jackrabbit Oak #1904 failed

2019-01-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755085#comment-16755085
 ] 

Hudson commented on OAK-8005:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1909|https://builds.apache.org/job/Jackrabbit%20Oak/1909/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1909/console]

> Build Jackrabbit Oak #1904 failed
> -
>
> Key: OAK-8005
> URL: https://issues.apache.org/jira/browse/OAK-8005
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1904 has failed.
> First failed run: [Jackrabbit Oak 
> #1904|https://builds.apache.org/job/Jackrabbit%20Oak/1904/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1904/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8010) Lucene index: Index definitions implications of isRegexp

2019-01-29 Thread David Gonzalez (JIRA)
David Gonzalez created OAK-8010:
---

 Summary: Lucene index: Index definitions implications of isRegexp
 Key: OAK-8010
 URL: https://issues.apache.org/jira/browse/OAK-8010
 Project: Jackrabbit Oak
  Issue Type: Documentation
  Components: search
Reporter: David Gonzalez


It would be good to put a sentence in the documentation (in the `isRegexp` 
section of [1]) stating whether there are any adverse implications of using 
`isRegexp` in conjunction with options like `property=true`, `facets=true`, or 
other configs.

 

The common use of `isRegexp` is to index many properties whose names (and 
number) vary. Any implications (or lack thereof) of this "broad brush" 
approach should be articulated to guide developers toward using `isRegexp` 
responsibly.

 

[1] 
https://jackrabbit.apache.org/oak/docs/query/lucene.html#property-definitions
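For context, a typical index definition using such a regex property rule looks 
roughly like the following sketch (hypothetical index and rule names, following 
the structure shown in [1]); whether combining this broad rule with other 
options has side effects is exactly what the documentation should spell out:

{noformat}
/oak:index/assetIndex
  - jcr:primaryType = oak:QueryIndexDefinition
  - type = "lucene"
  - async = "async"
  + indexRules
    + nt:base
      + properties
        + allProps
          - name = ".*"          // a regular expression, not a literal name
          - isRegexp = true      // matches every property on the node
          - propertyIndex = true
{noformat}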



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Labels: candidate_oak_1_10 candidate_oak_1_8  (was: candidate_oak_1_10)

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.12, 1.11.0
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).
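The fix is the usual try-with-resources idiom. A minimal sketch of the two 
patterns (hypothetical method names, not the actual {{RDBDocumentSerializer}} 
code): the leaky variant keeps the native zlib {{Inflater}} alive until 
finalization, while the fixed variant releases it deterministically.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCloseSketch {

    // Leaky variant: GZIPInputStream wraps a native Inflater whose off-heap
    // buffers are only released on close() (or when the object is finalized,
    // which may happen arbitrarily late under low GC pressure).
    static String decompressLeaky(byte[] data) throws IOException {
        InputStream in = new GZIPInputStream(new ByteArrayInputStream(data));
        return new String(in.readAllBytes(), StandardCharsets.UTF_8); // never closed
    }

    // Fixed variant: try-with-resources guarantees close(), which ends the
    // Inflater and frees the native memory deterministically.
    static String decompressClosed(byte[] data) throws IOException {
        try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    // Helper to produce gzip-compressed test data.
    static byte[] compress(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decompressClosed(compress("hello"))); // hello
    }
}
```

Both variants decompress correctly; the difference only shows up as native 
memory growth over many calls, which is why the leak is easy to miss in tests.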



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755018#comment-16755018
 ] 

Julian Reschke commented on OAK-8007:
-

trunk: [r1852451|http://svn.apache.org/r1852451]

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_10
> Fix For: 1.12, 1.11.0
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-8007.
-
   Resolution: Fixed
Fix Version/s: 1.11.0

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12, 1.11.0
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Labels: candidate_oak_1_10  (was: )

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_10
> Fix For: 1.12, 1.11.0
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8009) Remove unused code in NodeDocument

2019-01-29 Thread Marcel Reutegger (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-8009.
---
   Resolution: Fixed
Fix Version/s: 1.11.0

Done in trunk: http://svn.apache.org/r1852449

> Remove unused code in NodeDocument
> --
>
> Key: OAK-8009
> URL: https://issues.apache.org/jira/browse/OAK-8009
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.12, 1.11.0
>
>
> The NodeDocument class has some unused code which can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8009) Remove unused code in NodeDocument

2019-01-29 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-8009:
-

 Summary: Remove unused code in NodeDocument
 Key: OAK-8009
 URL: https://issues.apache.org/jira/browse/OAK-8009
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: documentmk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.12


The NodeDocument class has some unused code which can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754965#comment-16754965
 ] 

Julian Reschke commented on OAK-8007:
-

With some test coverage: 
https://issues.apache.org/jira/secure/attachment/12956709/OAK-8007.diff

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Attachment: OAK-8007.diff

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12
>
> Attachments: OAK-8007.diff, OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8008) Unmerged branch changes visible after restart

2019-01-29 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-8008:
-

 Summary: Unmerged branch changes visible after restart
 Key: OAK-8008
 URL: https://issues.apache.org/jira/browse/OAK-8008
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: documentmk
Affects Versions: 1.10.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.12


Changes persisted to a branch are visible after a restart even when the branch 
has not been merged. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8005) Build Jackrabbit Oak #1904 failed

2019-01-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754898#comment-16754898
 ] 

Hudson commented on OAK-8005:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1908|https://builds.apache.org/job/Jackrabbit%20Oak/1908/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1908/console]

> Build Jackrabbit Oak #1904 failed
> -
>
> Key: OAK-8005
> URL: https://issues.apache.org/jira/browse/OAK-8005
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1904 has failed.
> First failed run: [Jackrabbit Oak 
> #1904|https://builds.apache.org/job/Jackrabbit%20Oak/1904/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1904/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8001) Lucene index can be empty (no :data node) in composite node store setup

2019-01-29 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-8001:
---
Fix Version/s: 1.10.1

> Lucene index can be empty (no :data node) in composite node store setup
> ---
>
> Key: OAK-8001
> URL: https://issues.apache.org/jira/browse/OAK-8001
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.12, 1.11.0, 1.10.1
>
>
> In normal setups, even if no data is written to the index, an empty (valid) 
> lucene index is created - that avoids having to check if-index-exists 
> everywhere before opening the index.
> {{DefaultIndexWriter#close}} has an explicit comment stating that this is 
> intentional.
> In composite node stores though, if an index doesn't get any data to be 
> indexed, then {{MultiplexingIndexWriter}} never opens a {{DefaultIndexWriter}} 
> (for one or all mounts - depending on whether there were some writes and, if 
> so, which mount they hit).
> {{MultiplexingIndexWriter}} does delegate {{close}} to its opened writers, but 
> that doesn't bring {{DefaultIndexWriter#close}} into play 
> if no writer was opened for a given mount.
> This then leads to a situation in composite node stores where empty 
> indexes can have a missing {{:data}} node. In fact, this was one of the causes 
> of hitting OAK-7983 in an AEM-based project.
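The gap can be illustrated with a simplified model (hypothetical names, not the 
actual Oak classes): the default writer materializes the index node on close 
even when nothing was written, but a multiplexing writer that only closes its 
lazily opened delegates never triggers that for untouched mounts.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MultiplexCloseSketch {

    // Simplified stand-in for DefaultIndexWriter: close() always materializes
    // the index data node (":data"), even if nothing was ever written.
    static class DefaultWriter {
        final Set<String> store;
        final String mount;
        DefaultWriter(Set<String> store, String mount) {
            this.store = store;
            this.mount = mount;
        }
        void close() {
            store.add(mount + "/:data");
        }
    }

    // Simplified stand-in for MultiplexingIndexWriter: a delegate is opened
    // lazily on the first write for a mount, and close() only reaches the
    // delegates that were actually opened.
    static class MultiplexingWriter {
        final Set<String> store = new HashSet<>();
        final Map<String, DefaultWriter> opened = new HashMap<>();
        void write(String mount) {
            opened.computeIfAbsent(mount, m -> new DefaultWriter(store, m));
        }
        void close() {
            for (DefaultWriter w : opened.values()) {
                w.close(); // untouched mounts never get a close() call
            }
        }
    }

    public static void main(String[] args) {
        MultiplexingWriter writer = new MultiplexingWriter();
        writer.write("libs"); // only the "libs" mount sees a write
        writer.close();
        // The "content" mount got no writes, so no ":data" node was created.
        System.out.println(writer.store.contains("libs/:data"));    // true
        System.out.println(writer.store.contains("content/:data")); // false
    }
}
```

A fix along the lines described above would have close() iterate over all 
mounts, not just the opened delegates.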



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8001) Lucene index can be empty (no :data node) in composite node store setup

2019-01-29 Thread Vikas Saurabh (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754895#comment-16754895
 ] 

Vikas Saurabh commented on OAK-8001:


Backported to 1.10 at [r1852434|https://svn.apache.org/r1852434].

> Lucene index can be empty (no :data node) in composite node store setup
> ---
>
> Key: OAK-8001
> URL: https://issues.apache.org/jira/browse/OAK-8001
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.12, 1.11.0
>
>
> In normal setups, even if no data is written to the index, an empty (valid) 
> lucene index is created - that avoids having to check if-index-exists 
> everywhere before opening the index.
> {{DefaultIndexWriter#close}} has an explicit comment stating that this is 
> intentional.
> In composite node stores though, if an index doesn't get any data to be 
> indexed, then {{MultiplexingIndexWriter}} never opens a {{DefaultIndexWriter}} 
> (for one or all mounts - depending on whether there were some writes and, if 
> so, which mount they hit).
> {{MultiplexingIndexWriter}} does delegate {{close}} to its opened writers, but 
> that doesn't bring {{DefaultIndexWriter#close}} into play 
> if no writer was opened for a given mount.
> This then leads to a situation in composite node stores where empty 
> indexes can have a missing {{:data}} node. In fact, this was one of the causes 
> of hitting OAK-7983 in an AEM-based project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7994) Principal Management APIs don't allow for search pagination

2019-01-29 Thread Alex Deparvu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754844#comment-16754844
 ] 

Alex Deparvu commented on OAK-7994:
---

there are a few blockers here:
 - first, I didn't realize this would need a Jackrabbit API update. I'll focus 
on the Oak bits for now
 - second, the results will have to have some sort of ordering, which is not 
guaranteed by the current API (and implementation)
 - third, the way the everyone group is injected into the result set makes the 
pagination task pretty complicated
 - last, the composite provider will have to do some magic to maintain the 
expected results in order over the required range

[~anchela] any thoughts?
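The interplay of the blockers above can be sketched in a few lines (hypothetical 
method and principal names, not the Jackrabbit API): paging is only meaningful 
once a stable order is imposed on the merged stream, including the injected 
everyone principal.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PrincipalPagingSketch {

    // Impose a stable order, inject the synthetic "everyone" principal into
    // the stream, then apply offset/limit on the merged, ordered result.
    static List<String> findPrincipals(List<String> matches, boolean includeEveryone,
                                       long offset, long limit) {
        Stream<String> result = matches.stream();
        if (includeEveryone) {
            // Injection must happen before ordering and paging, otherwise
            // "everyone" can appear on the wrong page or be duplicated.
            result = Stream.concat(result, Stream.of("everyone"));
        }
        return result
            .sorted(Comparator.naturalOrder()) // paging needs a defined order
            .skip(offset)
            .limit(limit)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> page = findPrincipals(List.of("alice", "bob", "carol"), true, 1, 2);
        System.out.println(page); // [bob, carol]
    }
}
```

A composite provider would face the same constraint across its delegates: each 
delegate's results must be merged in the shared order before offset/limit can 
be applied to the combined range.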

> Principal Management APIs don't allow for search pagination
> ---
>
> Key: OAK-7994
> URL: https://issues.apache.org/jira/browse/OAK-7994
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: security, security-spi
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Major
>
> Current apis are limited to passing a search token, I'd like to also add 
> range (offset, limit).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-01-29 Thread Francesco Mari (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754821#comment-16754821
 ] 

Francesco Mari commented on OAK-8006:
-

[~dulceanu], I was able to sketch out a test case that activates the desired 
code path in {{SegmentBlob}}. The test case doesn't assert anything yet, but if 
you step into the call {{blob.length()}} with your debugger, you can see that 
the new code is called. Maybe we can build on top of that?

{noformat}
public class SegmentBlobTest {

    // `BLOB_SIZE` has to be chosen in such a way that it is:
    //
    // - Less than or equal to `minRecordLength` in the Blob Store. This is
    //   necessary in order to create a sufficiently long blob ID.
    // - Greater than or equal to `Segment.BLOB_ID_SMALL_LIMIT` in order to save
    //   the blob ID as a large, external blob ID.
    // - Greater than or equal to `Segment.MEDIUM_LIMIT`, otherwise the content
    //   of the binary will be written as a mere value record instead of a
    //   binary ID referencing an external binary.
    //
    // Since `Segment.MEDIUM_LIMIT` is the largest of the constants above, it is
    // sufficient to base `BLOB_SIZE` on `Segment.MEDIUM_LIMIT`.

    private static final int BLOB_SIZE = 2 * SegmentTestConstants.MEDIUM_LIMIT;

    private TemporaryFolder folder = new TemporaryFolder(new File("target"));

    private TemporaryBlobStore blobStore = new TemporaryBlobStore(folder) {

        @Override
        protected void configureDataStore(FileDataStore dataStore) {
            dataStore.setMinRecordLength(BLOB_SIZE);
        }

    };

    private TemporaryFileStore fileStore = new TemporaryFileStore(folder, blobStore, false);

    @Rule
    public RuleChain ruleChain = RuleChain.outerRule(folder)
        .around(blobStore)
        .around(fileStore);

    @Test
    public void testReadLongBlobId() throws Exception {
        SegmentNodeStore nodeStore = SegmentNodeStoreBuilders.builder(fileStore.fileStore()).build();
        Blob blob = nodeStore.createBlob(new NullInputStream(BLOB_SIZE));
        blob.length();
    }
}
{noformat}

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.10.1, 1.8.12
>
> Attachments: OAK-8006.patch
>
>
> When persisting a segment transferred from master, among others, the cold 
> standby needs to read the binary references from the segment. While this 
> usually doesn't involve any additional reads from any other segments, there 
> is a special case concerning binary IDs larger than 4092 bytes. These can 
> live in other segments (which got transferred prior to the current segment 
> and are already on the standby), but it might also be the case that the 
> binary ID is stored in the same segment. If this happens, the call to 
> {{blobId.getSegment()}} [0] triggers a new read of the current, un-persisted 
> segment. Thus, a {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193) 
> 

[jira] [Commented] (OAK-7997) Adding restrictions to ACLs yields empty results for queries in Jackrabbit Oak

2019-01-29 Thread Alex Deparvu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754813#comment-16754813
 ] 

Alex Deparvu commented on OAK-7997:
---

Patch looks good to me.

Some thoughts:
 - I think the LazyValue refactoring is making the patch a lot harder to read
 - SelectorImpl#evaluateTypeMatch is calling getReadOnlyTree twice, I think you 
should be able to use the same variable to fully benefit the lazy init tree 
value
 - there's some subtle diffs between NodeImpl#getMixinTypeNames and 
SelectorImpl#evaluateTypeMatch (mixin bits) post refactoring. is that by 
design? (is NodeImpl's canReadMixinTypes relevant in the SelectorImpl's 
context? not sure)


> Adding restrictions to ACLs yields empty results for queries in Jackrabbit Oak
> --
>
> Key: OAK-7997
> URL: https://issues.apache.org/jira/browse/OAK-7997
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query, security
>Affects Versions: 1.10.0, 1.8.10
>Reporter: Søren Jensen
>Assignee: angela
>Priority: Major
> Attachments: OAK-7997.patch, OAK-7997_2.patch
>
>
> Using Jackrabbit Oak, I've been attempting to configure security through 
> {{SecurityProvider}} and {{SecurityConfiguration}}s. In particular, I've been 
> using the restrictions, which generally work as expected. However, when 
> dealing with {{JCR-SQL2}} queries, more gets filtered out than expected.
> *Details*
> It can be reproduced with the repository below.
> {code:java}
> / 
>   node  [nt:unstructured]
>     subnode [nt:unstructured] {code}
> On {{node}}, I add an access control entry with privilege {{JCR_ALL}} for 
> {{user}}, together with a restriction for {{rep:glob}} -> {{""}}, such that 
> {{user}} does not have access to any children of {{node}} - in this case, only 
> {{subnode}}.
> It works as expected when using {{session.getNode}}:
>  * {{session.getNode("/node")}} returns the node
>  * {{session.getNode("/node/subnode")}} throws {{PathNotFoundException}} as 
> expected due to the restriction.
> However, when I execute the following {{JCR-SQL2}} query:
> {code:java}
> SELECT * FROM [nt:unstructured]{code}
> I get *no results back*. Here I would have expected to get {{/node}}, as it 
> is otherwise available when using {{session.getNode}}. Removing the 
> restriction yields the expected result of both _/node_ and _/node/subnode_.
> As discussed with [~anchela] on the _users_ mailing list, this may either be 
> an actual bug or a conscious decision - in which case it would be nice to 
> have it documented in the security documentation.
> *Code for reproducing:*
> The code for reproducing the error is shown below. The _restrictions_ map 
> below seems to be the problem, as this is what results in both _/node_ and 
> _/node/subnode_ being filtered out.
>  
> {code:java}
> public static void main(String[] args) throws Exception {
> Repository repository = new Jcr().with(new 
> MySecurityProvider()).createRepository();
> Session session = repository.login(new UserIdCredentials(""));// 
> principal is "SystemPrincipal.INSTANCE"
> // Create nodes
> Node node = session.getRootNode().addNode("node", "nt:unstructured");
> node.addNode("subnode", "nt:unstructured");
> // Add access control entry + restriction
> AccessControlManager acm = session.getAccessControlManager();
> JackrabbitAccessControlList acl = (JackrabbitAccessControlList) acm
> .getApplicablePolicies("/node").nextAccessControlPolicy();
> Privilege[] privileges = new 
> Privilege[]{acm.privilegeFromName(Privilege.JCR_ALL)};
> Map<String, Value> restrictions = new HashMap<String, Value>() 
> {{put("rep:glob", new StringValue(""));}};
> acl.addEntry(new PrincipalImpl("user"), privileges, true, restrictions);
> acm.setPolicy("/node", acl);
> session.save();
> // executes query
> RowIterator rows = repository.login(new 
> UserIdCredentials("user")).getWorkspace().getQueryManager()
> .createQuery("SELECT * FROM [nt:unstructured]", 
> Query.JCR_SQL2).execute().getRows();
> System.out.println("Number of rows: " + rows.getSize());  //Prints 0
> }
> {code}
> *Code for security configuration:*
> The above code makes use of "MySecurityProvider". I do not suspect this to be 
> the root cause, but please let me know if it can be helpful to have. The 
> security provider has the configuration set to 
> "ConfigurationParameters.EMPTY", and it uses all the default implementations 
> present within the Jackrabbit Oak project. The only exception is the 
> _AuthenticationConfiguration_ which uses a custom implementation using 
> pre-authentication:
>  
> {code:java}
> class MyAuthenticationConfiguration extends AuthenticationConfigurationImpl {
> public MyAuthenticationConfiguration(SecurityProvider 

[jira] [Commented] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-01-29 Thread Andrei Dulceanu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754812#comment-16754812
 ] 

Andrei Dulceanu commented on OAK-8006:
--

Backported to 1.10 at r1852429.

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.10.1, 1.8.12
>
> Attachments: OAK-8006.patch
>
>
> When persisting a segment transferred from master, among others, the cold 
> standby needs to read the binary references from the segment. While this 
> usually doesn't involve any additional reads from any other segments, there 
> is a special case concerning binary IDs larger than 4092 bytes. These can 
> live in other segments (which got transferred prior to the current segment 
> and are already on the standby), but it might also be the case that the 
> binary ID is stored in the same segment. If this happens, the call to 
> {{blobId.getSegment()}} [0] triggers a new read of the current, un-persisted 
> segment. Thus, a {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)

[jira] [Commented] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-01-29 Thread Andrei Dulceanu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754800#comment-16754800
 ] 

Andrei Dulceanu commented on OAK-8006:
--

[~frm], thanks for reviewing! I share your opinion about fixing this only in 
trunk and the 1.10 branch.

Fixed in trunk at r1852424.

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.10.1, 1.8.12
>
> Attachments: OAK-8006.patch
>
>
> When persisting a segment transferred from the master, the cold standby 
> needs, among other things, to read the binary references from the segment. 
> While this usually doesn't involve any additional reads from other segments, 
> there is a special case concerning binary IDs larger than 4092 bytes. These 
> can live in other segments (which were transferred prior to the current 
> segment and are already on the standby), but the binary ID may also be 
> stored in the same segment. If this happens, the call to 
> {{blobId.getSegment()}}[0] triggers a new read of the current, un-persisted 
> segment. Thus, a {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.RecordId.getSegment(RecordId.java:98) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readLongBlobId(SegmentBlob.java:206)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readBlobId(SegmentBlob.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore$3.consume(AbstractFileStore.java:262)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.Segment.forEachRecord(Segment.java:601) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readBinaryReferences(AbstractFileStore.java:257)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.writeSegment(FileStore.java:533)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> 
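The failure mode described above, where a consumer callback resolves a record reference through the regular segment-read path while the segment itself is still being persisted, can be sketched with a toy store. All names here are hypothetical; this is not Oak's API, and the reordering shown as "fixed" is only one way to avoid the self-read:

```java
import java.util.HashMap;
import java.util.Map;

public class SelfReadDuringWrite {
    // Hypothetical miniature of the standby's segment store: all reads go
    // through this map, so reading a segment that is still being written
    // fails, analogous to the SegmentNotFoundException in the report above.
    public static final Map<String, byte[]> store = new HashMap<>();

    public static byte[] read(String id) {
        byte[] data = store.get(id);
        if (data == null) {
            throw new IllegalStateException("Segment " + id + " not found");
        }
        return data;
    }

    // Broken ordering: resolve references (which may point back to the
    // segment itself) before the segment is visible to readers.
    public static void writeSegmentBroken(String id, byte[] data, String refId) {
        read(refId);          // throws when refId == id (a self-reference)
        store.put(id, data);
    }

    // Fixed ordering: make the in-flight segment reachable before resolving
    // references that may be self-references.
    public static void writeSegmentFixed(String id, byte[] data, String refId) {
        store.put(id, data);
        read(refId);          // now succeeds even when refId == id
    }

    public static void main(String[] args) {
        try {
            writeSegmentBroken("s1", new byte[]{1}, "s1");
        } catch (IllegalStateException e) {
            System.out.println("broken: " + e.getMessage());
        }
        writeSegmentFixed("s2", new byte[]{2}, "s2");
        System.out.println("fixed: ok");
    }
}
```

The sketch only illustrates why a self-referencing binary ID trips the read path; the actual patch attached to the issue may resolve the special case differently.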

[jira] [Commented] (OAK-8006) SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby

2019-01-29 Thread Francesco Mari (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754785#comment-16754785
 ] 

Francesco Mari commented on OAK-8006:
-

[~dulceanu], LGTM. Given the urgency of this issue, I would fix it immediately 
on trunk and backport it to 1.10, but I would hold off on backporting further 
until we come up with a way to reproduce this with a test case.

> SegmentBlob#readLongBlobId might cause SegmentNotFoundException on standby
> --
>
> Key: OAK-8006
> URL: https://issues.apache.org/jira/browse/OAK-8006
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, tarmk-standby
>Affects Versions: 1.6.0
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: cold-standby
> Fix For: 1.12, 1.10.1, 1.8.12
>
> Attachments: OAK-8006.patch
>
>
> When persisting a segment transferred from the master, the cold standby 
> needs, among other things, to read the binary references from the segment. 
> While this usually doesn't involve any additional reads from other segments, 
> there is a special case concerning binary IDs larger than 4092 bytes. These 
> can live in other segments (which were transferred prior to the current 
> segment and are already on the standby), but the binary ID may also be 
> stored in the same segment. If this happens, the call to 
> {{blobId.getSegment()}}[0] triggers a new read of the current, un-persisted 
> segment. Thus, a {{SegmentNotFoundException}} is thrown:
> {noformat}
> 22.01.2019 09:35:59.345 *ERROR* [standby-run-1] 
> org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed 
> synchronizing state.
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> d40a9da6-06a2-4dc0-ab91-5554a33c02b0 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:284)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$10(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>  [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721) 
> [com.adobe.granite.osgi.wrapper.guava:15.0.0.0002]
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:498)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.RecordId.getSegment(RecordId.java:98) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readLongBlobId(SegmentBlob.java:206)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.SegmentBlob.readBlobId(SegmentBlob.java:163)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore$3.consume(AbstractFileStore.java:262)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.Segment.forEachRecord(Segment.java:601) 
> [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readBinaryReferences(AbstractFileStore.java:257)
>  [org.apache.jackrabbit.oak-segment-tar:1.10.0]
> at 
> 

[jira] [Created] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-8007:
---

 Summary: RDBDocumentStore: potential off-heap memory leakage due 
to unclosed GzipInputStream
 Key: OAK-8007
 URL: https://issues.apache.org/jira/browse/OAK-8007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.12






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754726#comment-16754726
 ] 

Julian Reschke commented on OAK-8007:
-

Potential fix: 
https://issues.apache.org/jira/secure/attachment/12956671/OAK-8007.diff

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12
>
> Attachments: OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).
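The leak class referenced above is the well-known pattern where {{GZIPInputStream}} holds native zlib buffers until its finalizer runs. A minimal sketch of the remedy, closing the stream deterministically with try-with-resources (the {{decompress}} method name is hypothetical, not the actual {{RDBDocumentSerializer}} code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipClose {
    // try-with-resources guarantees GZIPInputStream.close() runs on every
    // path, releasing the native Inflater memory immediately instead of
    // waiting for the object to be finalized.
    public static String decompress(byte[] gzipped) throws IOException {
        try (GZIPInputStream in =
                 new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a small payload to exercise the close-on-exit path.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes)) {
            gz.write("hello".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println(decompress(bytes.toByteArray()));
    }
}
```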



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Attachment: OAK-8007.diff

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12
>
> Attachments: OAK-8007.diff
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8007) RDBDocumentStore: potential off-heap memory leakage due to unclosed GzipInputStream

2019-01-29 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8007:

Description: {{RDBDocumentSerializer}} does not close an instance of 
{{GZIPInputStream}}. This may cause leakage of off-heap memory until the 
object is finalized (see http://www.evanjones.ca/java-native-leak-bug.html, and 
thanks to [~jhoh] for pointing this out).

> RDBDocumentStore: potential off-heap memory leakage due to unclosed 
> GzipInputStream
> ---
>
> Key: OAK-8007
> URL: https://issues.apache.org/jira/browse/OAK-8007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12
>
>
> {{RDBDocumentSerializer}} does not close an instance of {{GZIPInputStream}}. 
> This may cause leakage of off-heap memory until the object is finalized (see 
> http://www.evanjones.ca/java-native-leak-bug.html, and thanks to [~jhoh] for 
> pointing this out).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)