[jira] [Commented] (OAK-8573) Build Jackrabbit Oak #2340 failed

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914809#comment-16914809
 ] 

Hudson commented on OAK-8573:
-

Build is still failing.
Failed run: [Jackrabbit Oak 
#2341|https://builds.apache.org/job/Jackrabbit%20Oak/2341/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2341/console]

> Build Jackrabbit Oak #2340 failed
> -
>
> Key: OAK-8573
> URL: https://issues.apache.org/jira/browse/OAK-8573
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2340 has failed.
> First failed run: [Jackrabbit Oak 
> #2340|https://builds.apache.org/job/Jackrabbit%20Oak/2340/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2340/console]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8576) Add instrumentation to AzureDataStore

2019-08-23 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8576:
--

 Summary: Add instrumentation to AzureDataStore
 Key: OAK-8576
 URL: https://issues.apache.org/jira/browse/OAK-8576
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: blob-cloud-azure
Affects Versions: 1.16.0
Reporter: Matt Ryan
Assignee: Matt Ryan


Add instrumentation to {{AzureDataStore}} to aid in understanding performance 
characteristics when used in production.  See 
[https://jackrabbit.apache.org/oak/docs/nodestore/document/metrics.html] for an 
example.
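
As a rough illustration only (the metric names, the helper class, and the point where it would be wired into the backend are assumptions, not the actual design), the instrumentation could be registered through Oak's {{StatisticsProvider}}, similar to the DocumentNodeStore metrics on the page linked above:

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.jackrabbit.oak.stats.MeterStats;
import org.apache.jackrabbit.oak.stats.StatisticsProvider;
import org.apache.jackrabbit.oak.stats.StatsOptions;
import org.apache.jackrabbit.oak.stats.TimerStats;

// Hypothetical helper: collects a call count and a latency timer for
// blob downloads from Azure storage. Metric names are illustrative.
public class AzureDataStoreStats {

    private final MeterStats downloadCount;
    private final TimerStats downloadDuration;

    public AzureDataStoreStats(StatisticsProvider statisticsProvider) {
        downloadCount = statisticsProvider.getMeter("AZURE_DS_DOWNLOAD_COUNT", StatsOptions.METRICS_ONLY);
        downloadDuration = statisticsProvider.getTimer("AZURE_DS_DOWNLOAD_TIME", StatsOptions.METRICS_ONLY);
    }

    // To be called around each download from the Azure backend.
    public void downloadCompleted(long elapsedNanos) {
        downloadCount.mark();
        downloadDuration.update(elapsedNanos, TimeUnit.NANOSECONDS);
    }
}
{code}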



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8577) Add instrumentation to S3DataStore

2019-08-23 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8577:
--

 Summary: Add instrumentation to S3DataStore
 Key: OAK-8577
 URL: https://issues.apache.org/jira/browse/OAK-8577
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: blob-cloud
Affects Versions: 1.16.0
Reporter: Matt Ryan
Assignee: Matt Ryan


Add instrumentation to {{S3DataStore}} to aid in understanding performance 
characteristics when used in production.  See 
[https://jackrabbit.apache.org/oak/docs/nodestore/document/metrics.html] for an 
example.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8298) [Direct Binary Access] Blobs that are directly uploaded are not tracked by BlobIdTracker

2019-08-23 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914686#comment-16914686
 ] 

Matt Ryan commented on OAK-8298:


Proposed to also backport this fix to 1.10.  I've submitted this proposal 
on-list and am waiting for feedback.

> [Direct Binary Access] Blobs that are directly uploaded are not tracked by 
> BlobIdTracker
> 
>
> Key: OAK-8298
> URL: https://issues.apache.org/jira/browse/OAK-8298
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.18.0
>
>
> Blobs that are uploaded to the content repository are supposed to be tracked 
> by the {{BlobIdTracker}} once the blob is saved.  This is done for blobs 
> uploaded the traditional way in {{DataStoreBlobStore.writeBlob()}} [0].
> Blobs uploaded directly do not go through this method, so the blob ID is 
> never added to the {{BlobIdTracker}}.  This has an impact on DSGC, as the 
> {{MarkSweepGarbageCollector}} relies on the {{BlobIdTracker}} to provide an 
> accurate accounting of the blob IDs in the blob store to determine which 
> ones to retain and which to delete.
> This should be a pretty easy fix.  All direct uploads pass through 
> {{DataStoreBlobStore.completeBlobUpload()}} [1], and we have at that point 
> access to the {{DataRecord}} created by the direct upload.  We can get the 
> blob ID from the {{DataRecord}} and add it to the {{BlobIdTracker}} there.
> [0] - 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L241]
> [1] - 
> [https://github.com/apache/jackrabbit-oak/blob/46cde5ee49622c94bc95648edf84cf4c00ae1d58/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L723]
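
A minimal sketch of the fix described in the quoted issue, assuming the tracker is reachable from {{DataStoreBlobStore}} via the {{BlobTrackingStore}} contract; the helper name and error handling are illustrative, not the committed change:

{code:java}
// Hypothetical helper inside DataStoreBlobStore, to be invoked from
// completeBlobUpload() once the DataRecord for the direct upload exists.
private void trackDirectUpload(DataRecord record) {
    BlobTracker tracker = getTracker();   // tracker registered via BlobTrackingStore
    if (tracker != null && record != null) {
        try {
            // register the blob id so DSGC sees directly uploaded binaries
            tracker.add(record.getIdentifier().toString());
        } catch (IOException e) {
            log.warn("Unable to track directly uploaded blob [{}]", record.getIdentifier(), e);
        }
    }
}
{code}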



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8298) [Direct Binary Access] Blobs that are directly uploaded are not tracked by BlobIdTracker

2019-08-23 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8298:
---
Fix Version/s: 1.10.5

> [Direct Binary Access] Blobs that are directly uploaded are not tracked by 
> BlobIdTracker
> 
>
> Key: OAK-8298
> URL: https://issues.apache.org/jira/browse/OAK-8298
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.18.0, 1.10.5
>
>
> Blobs that are uploaded to the content repository are supposed to be tracked 
> by the {{BlobIdTracker}} once the blob is saved.  This is done for blobs 
> uploaded the traditional way in {{DataStoreBlobStore.writeBlob()}} [0].
> Blobs uploaded directly do not go through this method, so the blob ID is 
> never added to the {{BlobIdTracker}}.  This has an impact on DSGC, as the 
> {{MarkSweepGarbageCollector}} relies on the {{BlobIdTracker}} to provide an 
> accurate accounting of the blob IDs in the blob store to determine which 
> ones to retain and which to delete.
> This should be a pretty easy fix.  All direct uploads pass through 
> {{DataStoreBlobStore.completeBlobUpload()}} [1], and we have at that point 
> access to the {{DataRecord}} created by the direct upload.  We can get the 
> blob ID from the {{DataRecord}} and add it to the {{BlobIdTracker}} there.
> [0] - 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L241]
> [1] - 
> [https://github.com/apache/jackrabbit-oak/blob/46cde5ee49622c94bc95648edf84cf4c00ae1d58/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L723]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8298) [Direct Binary Access] Blobs that are directly uploaded are not tracked by BlobIdTracker

2019-08-23 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8298:
---
Fix Version/s: 1.18.0

> [Direct Binary Access] Blobs that are directly uploaded are not tracked by 
> BlobIdTracker
> 
>
> Key: OAK-8298
> URL: https://issues.apache.org/jira/browse/OAK-8298
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.18.0
>
>
> Blobs that are uploaded to the content repository are supposed to be tracked 
> by the {{BlobIdTracker}} once the blob is saved.  This is done for blobs 
> uploaded the traditional way in {{DataStoreBlobStore.writeBlob()}} [0].
> Blobs uploaded directly do not go through this method, so the blob ID is 
> never added to the {{BlobIdTracker}}.  This has an impact on DSGC, as the 
> {{MarkSweepGarbageCollector}} relies on the {{BlobIdTracker}} to provide an 
> accurate accounting of the blob IDs in the blob store to determine which 
> ones to retain and which to delete.
> This should be a pretty easy fix.  All direct uploads pass through 
> {{DataStoreBlobStore.completeBlobUpload()}} [1], and we have at that point 
> access to the {{DataRecord}} created by the direct upload.  We can get the 
> blob ID from the {{DataRecord}} and add it to the {{BlobIdTracker}} there.
> [0] - 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L241]
> [1] - 
> [https://github.com/apache/jackrabbit-oak/blob/46cde5ee49622c94bc95648edf84cf4c00ae1d58/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L723]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8298) [Direct Binary Access] Blobs that are directly uploaded are not tracked by BlobIdTracker

2019-08-23 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8298:
---
Affects Version/s: 1.16.0
   1.10.4

> [Direct Binary Access] Blobs that are directly uploaded are not tracked by 
> BlobIdTracker
> 
>
> Key: OAK-8298
> URL: https://issues.apache.org/jira/browse/OAK-8298
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> Blobs that are uploaded to the content repository are supposed to be tracked 
> by the {{BlobIdTracker}} once the blob is saved.  This is done for blobs 
> uploaded the traditional way in {{DataStoreBlobStore.writeBlob()}} [0].
> Blobs uploaded directly do not go through this method, so the blob ID is 
> never added to the {{BlobIdTracker}}.  This has an impact on DSGC, as the 
> {{MarkSweepGarbageCollector}} relies on the {{BlobIdTracker}} to provide an 
> accurate accounting of the blob IDs in the blob store to determine which 
> ones to retain and which to delete.
> This should be a pretty easy fix.  All direct uploads pass through 
> {{DataStoreBlobStore.completeBlobUpload()}} [1], and we have at that point 
> access to the {{DataRecord}} created by the direct upload.  We can get the 
> blob ID from the {{DataRecord}} and add it to the {{BlobIdTracker}} there.
> [0] - 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L241]
> [1] - 
> [https://github.com/apache/jackrabbit-oak/blob/46cde5ee49622c94bc95648edf84cf4c00ae1d58/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L723]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8298) [Direct Binary Access] Blobs that are directly uploaded are not tracked by BlobIdTracker

2019-08-23 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914684#comment-16914684
 ] 

Matt Ryan commented on OAK-8298:


Fixed in revision 
[r1865795|https://svn.apache.org/viewvc?view=revision&revision=1865795].

> [Direct Binary Access] Blobs that are directly uploaded are not tracked by 
> BlobIdTracker
> 
>
> Key: OAK-8298
> URL: https://issues.apache.org/jira/browse/OAK-8298
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-cloud, blob-cloud-azure, blob-plugins
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Fix For: 1.18.0
>
>
> Blobs that are uploaded to the content repository are supposed to be tracked 
> by the {{BlobIdTracker}} once the blob is saved.  This is done for blobs 
> uploaded the traditional way in {{DataStoreBlobStore.writeBlob()}} [0].
> Blobs uploaded directly do not go through this method, so the blob ID is 
> never added to the {{BlobIdTracker}}.  This has an impact on DSGC, as the 
> {{MarkSweepGarbageCollector}} relies on the {{BlobIdTracker}} to provide an 
> accurate accounting of the blob IDs in the blob store to determine which 
> ones to retain and which to delete.
> This should be a pretty easy fix.  All direct uploads pass through 
> {{DataStoreBlobStore.completeBlobUpload()}} [1], and we have at that point 
> access to the {{DataRecord}} created by the direct upload.  We can get the 
> blob ID from the {{DataRecord}} and add it to the {{BlobIdTracker}} there.
> [0] - 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L241]
> [1] - 
> [https://github.com/apache/jackrabbit-oak/blob/46cde5ee49622c94bc95648edf84cf4c00ae1d58/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java#L723]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8575) Minimize network calls required when calling getRecord()

2019-08-23 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8575:
---
Affects Version/s: 1.16.0
   1.10.4

> Minimize network calls required when calling getRecord()
> 
>
> Key: OAK-8575
> URL: https://issues.apache.org/jira/browse/OAK-8575
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The {{getRecord()}} method first checks the existence of the blob and then 
> retrieves metadata to construct the resulting {{DataRecord}}.  If the metadata 
> call fails in a way we can handle when the blob does not exist, we can skip 
> the {{exists()}} call and just try to get the metadata, avoiding one network 
> API call.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8574) Minimize network calls required when completing direct upload

2019-08-23 Thread Matt Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914642#comment-16914642
 ] 

Matt Ryan commented on OAK-8574:


Note:  A fix for OAK-8575 would further reduce the number of API calls by one 
for both cases.

> Minimize network calls required when completing direct upload
> -
>
> Key: OAK-8574
> URL: https://issues.apache.org/jira/browse/OAK-8574
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The {{completeHttpUpload()}} method in the cloud data store backends can be 
> improved in terms of the number of cloud storage service API calls it makes.
> Suggestions include:
>  * Try a single {{getRecord()}} call at the beginning instead of calling 
> {{exists()}}.  If an exception is thrown, catch it - this means the record 
> doesn't exist and can be written.  If a record is returned, we don't write - 
> and instead return this record.
>  * Don't check for existence after writing the record; instead assume that 
> the record is written correctly if the SDK reports no error or exception.  
> Verify this behavior in unit tests.
>  * After writing the record, construct the record directly instead of calling 
> {{getRecord()}} to do it.
> This removes two network API calls if the record is written and one if the 
> record already exists.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8574) Minimize network calls required when completing direct upload

2019-08-23 Thread Matt Ryan (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Ryan updated OAK-8574:
---
Parent: OAK-8551
Issue Type: Technical task  (was: Improvement)

> Minimize network calls required when completing direct upload
> -
>
> Key: OAK-8574
> URL: https://issues.apache.org/jira/browse/OAK-8574
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob-cloud, blob-cloud-azure
>Affects Versions: 1.16.0, 1.10.4
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
>
> The {{completeHttpUpload()}} method in the cloud data store backends can be 
> improved in terms of the number of cloud storage service API calls it makes.
> Suggestions include:
>  * Try a single {{getRecord()}} call at the beginning instead of calling 
> {{exists()}}.  If an exception is thrown, catch it - this means the record 
> doesn't exist and can be written.  If a record is returned, we don't write - 
> and instead return this record.
>  * Don't check for existence after writing the record; instead assume that 
> the record is written correctly if the SDK reports no error or exception.  
> Verify this behavior in unit tests.
>  * After writing the record, construct the record directly instead of calling 
> {{getRecord()}} to do it.
> This removes two network API calls if the record is written and one if the 
> record already exists.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8575) Minimize network calls required when calling getRecord()

2019-08-23 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8575:
--

 Summary: Minimize network calls required when calling getRecord()
 Key: OAK-8575
 URL: https://issues.apache.org/jira/browse/OAK-8575
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: blob-cloud, blob-cloud-azure
Reporter: Matt Ryan
Assignee: Matt Ryan


The {{getRecord()}} method first checks the existence of the blob and then 
retrieves metadata to construct the resulting {{DataRecord}}.  If the metadata 
call fails in a way we can handle when the blob does not exist, we can skip 
the {{exists()}} call and just try to get the metadata, avoiding one network 
API call.
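
A sketch of the proposed flow, with made-up backend helper names ({{getBlobMetadata}}, {{toDataRecord}}, {{notFound}}) standing in for the real Azure/S3 SDK calls:

{code:java}
// Illustrative only: getRecord() without the up-front exists() call.
// A "not found" failure from the metadata request replaces the existence check.
public DataRecord getRecord(DataIdentifier identifier) throws DataStoreException {
    try {
        BlobMetadata metadata = getBlobMetadata(identifier);   // single network call
        return toDataRecord(identifier, metadata);
    } catch (BackendException e) {
        if (notFound(e)) {
            throw new DataStoreException("Record not found: " + identifier, e);
        }
        throw new DataStoreException("Could not read metadata for: " + identifier, e);
    }
}
{code}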



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8574) Minimize network calls required when completing direct upload

2019-08-23 Thread Matt Ryan (Jira)
Matt Ryan created OAK-8574:
--

 Summary: Minimize network calls required when completing direct 
upload
 Key: OAK-8574
 URL: https://issues.apache.org/jira/browse/OAK-8574
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob-cloud, blob-cloud-azure
Affects Versions: 1.10.4, 1.16.0
Reporter: Matt Ryan
Assignee: Matt Ryan


The {{completeHttpUpload()}} method in the cloud data store backends can be 
improved in terms of the number of cloud storage service API calls it makes.

Suggestions include:
 * Try a single {{getRecord()}} call at the beginning instead of calling 
{{exists()}}.  If an exception is thrown, catch it - this means the record 
doesn't exist and can be written.  If a record is returned, we don't write - 
and instead return this record.
 * Don't check for existence after writing the record; instead assume that the 
record is written correctly if the SDK reports no error or exception.  Verify 
this behavior in unit tests.
 * After writing the record, construct the record directly instead of calling 
{{getRecord()}} to do it.

This removes two network API calls if the record is written and one if the 
record already exists.
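
A rough sketch of the completion flow these suggestions add up to; the helper names ({{identifierFromToken}}, {{commitUpload}}, {{newDataRecord}}) are placeholders, not the actual backend API:

{code:java}
// Illustrative only: completeHttpUpload() with the reduced number of
// cloud storage API calls described above.
public DataRecord completeHttpUpload(String uploadToken) throws DataStoreException {
    DataIdentifier id = identifierFromToken(uploadToken);
    try {
        // 1. One getRecord() up front instead of exists(): if the record is
        //    already there, return it without writing anything.
        return getRecord(id);
    } catch (DataStoreException recordMissing) {
        // 2. Finalize the upload and trust the SDK result rather than
        //    re-checking existence afterwards (covered by unit tests).
        commitUpload(uploadToken);
        // 3. Build the DataRecord directly from what we already know instead
        //    of making a trailing getRecord() round trip.
        return newDataRecord(id, lengthFromToken(uploadToken), System.currentTimeMillis());
    }
}
{code}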



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8572) oak-upgrade's IndexAccessor lives in o.a.j.core

2019-08-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/OAK-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914552#comment-16914552
 ] 

Tomek Rękawek commented on OAK-8572:


The change LGTM. The ignored test fails for the "Short parent, long name" 
parameter, but this is consistent before and after the patch.

> oak-upgrade's IndexAccessor lives in o.a.j.core
> ---
>
> Key: OAK-8572
> URL: https://issues.apache.org/jira/browse/OAK-8572
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: OAK-8572.diff
>
>
> This causes problems with Javadoc generation, as the doc tool is now confused 
> about the core package showing up both in Jackrabbit classic and Oak.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8573) Build Jackrabbit Oak #2340 failed

2019-08-23 Thread Hudson (Jira)
Hudson created OAK-8573:
---

 Summary: Build Jackrabbit Oak #2340 failed
 Key: OAK-8573
 URL: https://issues.apache.org/jira/browse/OAK-8573
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #2340 has failed.
First failed run: [Jackrabbit Oak 
#2340|https://builds.apache.org/job/Jackrabbit%20Oak/2340/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2340/console]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8486) update jackson-databind dependency to 2.9.9.1

2019-08-23 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8486:

Labels: candidate_oak_1_6  (was: )

> update jackson-databind dependency to 2.9.9.1
> -
>
> Key: OAK-8486
> URL: https://issues.apache.org/jira/browse/OAK-8486
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_6
> Fix For: 1.16.0, 1.10.4, 1.8.16
>
>
> https://github.com/FasterXML/jackson-databind/issues/2334



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8560) Update jackson-databind dependency to 2.9.9.3

2019-08-23 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8560:

Labels: candidate_oak_1_6  (was: )

> Update jackson-databind dependency to 2.9.9.3
> -
>
> Key: OAK-8560
> URL: https://issues.apache.org/jira/browse/OAK-8560
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Affects Versions: 1.16.0, 1.8.15, 1.10.4
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_6
> Fix For: 1.18.0, 1.8.16, 1.10.5
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (OAK-8560) Update jackson-databind dependency to 2.9.9.3

2019-08-23 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914294#comment-16914294
 ] 

Julian Reschke edited comment on OAK-8560 at 8/23/19 3:22 PM:
--

trunk: [r1865752|http://svn.apache.org/r1865752]
1.10: [r1865756|http://svn.apache.org/r1865756]


was (Author: reschke):
trunk: [r1865752|http://svn.apache.org/r1865752]

> Update jackson-databind dependency to 2.9.9.3
> -
>
> Key: OAK-8560
> URL: https://issues.apache.org/jira/browse/OAK-8560
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Affects Versions: 1.16.0, 1.8.15, 1.10.4
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Blocker
> Fix For: 1.18.0, 1.8.16, 1.10.5
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-7473) [BlobGC] MarkSweepGarbageCollector does not always use the blobGcMaxAgeInSecs config

2019-08-23 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7473:

Description: 
I wanted to use the _blobGcMaxAgeInSecs_ config of the SegmentNodeStoreService 
to reduce the max age of the blobs for GC. It looks like the 
MarkSweepGarbageCollector[1] does no longer support the blobGcMaxAgeInSecs 
configuration of the SegmentNodeStoreService[3]. Instead a max age of 24h is 
hardcoded (added with OAK-5546 and revision r1811532 [3]) in the constructor 
used by the SegmentNodeStoreService.

Looks like a regression from OAK-5546, or is there a specific reason for no 
longer supporting _blobGcMaxAgeInSecs_ config?

[1] 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/MarkSweepGarbageCollector.java#L188]
[2] 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentNodeStoreService.java#L690]
[3] http://svn.apache.org/viewvc?rev=1811532&view=rev

 

  was:
I wanted to use the _blobGcMaxAgeInSecs_ config of the SegmentNodeStoreService 
to reduce the max age of the blobs for GC. It looks like the 
MarkSweepGarbageCollector[1] does no longer support the blobGcMaxAgeInSecs 
configuration of the SegmentNodeStoreService[3]. Instead a max age of 24h is 
hardcoded (added with OAK-5546 and revision r1811532 [3]) in the custructor 
used by the SegmentNodeStoreService.

Looks like a regression from OAK-5546, or is there a specific reason for no 
longer supporting _blobGcMaxAgeInSecs_ config?

[1] 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/MarkSweepGarbageCollector.java#L188]
[2] 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentNodeStoreService.java#L690]
[3] http://svn.apache.org/viewvc?rev=1811532&view=rev

 


> [BlobGC] MarkSweepGarbageCollector does not always use the blobGcMaxAgeInSecs 
> config
> 
>
> Key: OAK-7473
> URL: https://issues.apache.org/jira/browse/OAK-7473
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.9.0
>Reporter: Marc Pfaff
>Assignee: Francesco Mari
>Priority: Major
> Fix For: 1.10.0, 1.9.1
>
>
> I wanted to use the _blobGcMaxAgeInSecs_ config of the 
> SegmentNodeStoreService to reduce the max age of the blobs for GC. It looks 
> like the MarkSweepGarbageCollector[1] does no longer support the 
> blobGcMaxAgeInSecs configuration of the SegmentNodeStoreService[3]. Instead a 
> max age of 24h is hardcoded (added with OAK-5546 and revision r1811532 [3]) 
> in the constructor used by the SegmentNodeStoreService.
> Looks like a regression from OAK-5546, or is there a specific reason for no 
> longer supporting _blobGcMaxAgeInSecs_ config?
> [1] 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/MarkSweepGarbageCollector.java#L188]
> [2] 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentNodeStoreService.java#L690]
> [3] http://svn.apache.org/viewvc?rev=1811532&view=rev
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (OAK-8560) Update jackson-databind dependency to 2.9.9.3

2019-08-23 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-8560.
-
Resolution: Fixed

trunk: [r1865752|http://svn.apache.org/r1865752]

> Update jackson-databind dependency to 2.9.9.3
> -
>
> Key: OAK-8560
> URL: https://issues.apache.org/jira/browse/OAK-8560
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Affects Versions: 1.16.0, 1.8.15, 1.10.4
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Blocker
> Fix For: 1.18.0, 1.8.16, 1.10.5
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8552) Minimize network calls required when creating a direct download URI

2019-08-23 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914292#comment-16914292
 ] 

Thomas Mueller commented on OAK-8552:
-

* #getReference
** My vote would go to "introduce a new cleaner API Blob#isInlined (name can be 
changed)". It might be more work, but I think the solution would be much clearer.
* #exists
** I think the exists method implies a network access (unless inlined).
** I'm afraid I don't currently understand why "remove the existence check" 
would "lead to perceived performance drop"... I don't understand how "remove 
the existence check" would work at all... I mean, it's reverting OAK-7998, 
right? If we revert OAK-7998 (which might be OK), then I assume we need to 
solve the root problem of OAK-7998 in some other way.

> Minimize network calls required when creating a direct download URI
> ---
>
> Key: OAK-8552
> URL: https://issues.apache.org/jira/browse/OAK-8552
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: blob-cloud, blob-cloud-azure
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Attachments: OAK-8552_ApiChange.patch
>
>
> We need to isolate and try to optimize network calls required to create a 
> direct download URI.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8560) Update jackson-databind dependency to 2.9.9.3

2019-08-23 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8560:

Summary: Update jackson-databind dependency to 2.9.9.3  (was: Update 
jackson-databind dependency to 2.9.10)

> Update jackson-databind dependency to 2.9.9.3
> -
>
> Key: OAK-8560
> URL: https://issues.apache.org/jira/browse/OAK-8560
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Affects Versions: 1.16.0, 1.8.15, 1.10.4
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Blocker
> Fix For: 1.18.0, 1.8.16, 1.10.5
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-4508) LoggingDocumentStore: better support for multiple instances

2019-08-23 Thread Thomas Mueller (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-4508:

Summary: LoggingDocumentStore: better support for multiple instances  (was: 
LoggingDocumentStore: better support for multple instances)

> LoggingDocumentStore: better support for multiple instances
> ---
>
> Key: OAK-4508
> URL: https://issues.apache.org/jira/browse/OAK-4508
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Julian Reschke
>Priority: Minor
>
> It would be cool if the logging could be configured to use a specific prefix 
> instead of "ds." - this would make it more useful when debugging code that 
> involves two different ds instances in the same VM.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8572) oak-upgrade's IndexAccessor lives in o.a.j.core

2019-08-23 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914205#comment-16914205
 ] 

Julian Reschke commented on OAK-8572:
-

Proposed change:  [^OAK-8572.diff]  - using reflection. Unfortunately this 
doesn't currently seem to have test coverage, as 
{{LongNameTest.testUpgradeToDocStore}} is ignored.

Another alternative would be to exclude the class from Javadoc generation, but 
I haven't figured out how (I can suppress packages, but not single classes).

[~tomek.rekawek]?

> oak-upgrade's IndexAccessor lives in o.a.j.core
> ---
>
> Key: OAK-8572
> URL: https://issues.apache.org/jira/browse/OAK-8572
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: OAK-8572.diff
>
>
> This causes problems with Javadoc generation, as the doc tool is now confused 
> about the core package showing up both in Jackrabbit classic and Oak.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OAK-8572) oak-upgrade's IndexAccessor lives in o.a.j.core

2019-08-23 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8572:

Attachment: OAK-8572.diff

> oak-upgrade's IndexAccessor lives in o.a.j.core
> ---
>
> Key: OAK-8572
> URL: https://issues.apache.org/jira/browse/OAK-8572
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: OAK-8572.diff
>
>
> This causes problems with Javadoc generation, as the doc tool is now confused 
> about the core package showing up both in Jackrabbit classic and Oak.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (OAK-8572) oak-upgrade's IndexAccessor lives in o.a.j.core

2019-08-23 Thread Julian Reschke (Jira)
Julian Reschke created OAK-8572:
---

 Summary: oak-upgrade's IndexAccessor lives in o.a.j.core
 Key: OAK-8572
 URL: https://issues.apache.org/jira/browse/OAK-8572
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Reporter: Julian Reschke


This causes problems with Javadoc generation, as the doc tool is now confused 
about the core package showing up both in Jackrabbit classic and Oak.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8552) Minimize network calls required when creating a direct download URI

2019-08-23 Thread Amit Jain (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914136#comment-16914136
 ] 

Amit Jain commented on OAK-8552:


[~mattvryan]

Regarding the 2 issues above
 * #getReference - Tried a few optimizations:
 ** Removed the need to get a DataRecord instance by using a dummy DataRecord 
instance [1]. It fails the segment standby test case(s) because of the 
expectation that a non-null getReference means the blob is available locally 
[2]. That can be fixed, but the solution seems hacky.
 ** Another option would be to introduce a new, cleaner API Blob#isInlined (the 
name can be changed) as outlined in the patch [^OAK-8552_ApiChange.patch]. The 
changes touch a lot of places but are quite trivial. Test cases still need to 
be added.
 * #exists check -
 ** IIUC, the need to check existence is because of asynchronous uploads; if 
so, one option is to disable those and remove the existence check. It would 
lead to a perceived performance drop: perceived because the JCR call returns 
quickly, but the time for a binary to reach the cloud backend would be the 
same as with synchronous uploads, or even a little worse.
 ** Another option is to introduce an in-memory cache in the Backend for the 
ids that were uploaded (a rough sketch follows the references below). Using 
the BlobTracker does not work because it was only introduced for DSGC, and it 
does not wait until after the asynchronous upload to add the id. Also, if DSGC 
is meant to run outside the server (i.e. in oak-run), the BlobTracker would 
most likely be disabled.

[~tmueller] wdyt?

[1]
{code:java}
Index: oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
--- oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java  (revision b4e0a5ba954b7de4b508aa197847223800f1c320)
+++ oak-blob-plugins/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/DataStoreBlobStore.java  (date 1566293954000)
@@ -52,6 +52,8 @@
 import com.google.common.io.Closeables;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.io.IOUtils;
+import org.apache.jackrabbit.core.data.AbstractDataRecord;
+import org.apache.jackrabbit.core.data.AbstractDataStore;
 import org.apache.jackrabbit.core.data.DataIdentifier;
 import org.apache.jackrabbit.core.data.DataRecord;
 import org.apache.jackrabbit.core.data.DataStore;
@@ -312,16 +314,34 @@
             return null;
         }
 
-        DataRecord record;
-        try {
-            record = delegate.getRecordIfStored(new DataIdentifier(blobId));
-            if (record != null) {
-                return record.getReference();
-            } else {
-                log.debug("No blob found for id [{}]", blobId);
-            }
-        } catch (DataStoreException e) {
-            log.warn("Unable to access the blobId for  [{}]", blobId, e);
+        // Get reference without possible round-tripping using a dummy data record
+        if (delegate instanceof AbstractDataStore) {
+            return new AbstractDataRecord((AbstractDataStore) delegate, new DataIdentifier(blobId)) {
+
+                @Override public long getLength() {
+                    return 0;
+                }
+
+                @Override public InputStream getStream() {
+                    return null;
+                }
+
+                @Override public long getLastModified() {
+                    return 0;
+                }
+            }.getReference();
+        } else {
+            DataRecord record;
+            try {
+                record = delegate.getRecordIfStored(new DataIdentifier(blobId));
+                if (record != null) {
+                    return record.getReference();
+                } else {
+                    log.debug("No blob found for id [{}]", blobId);
+                }
+            } catch (DataStoreException e) {
+                log.warn("Unable to access the blobId for  [{}]", blobId, e);
+            }
         }
         return null;
     }
{code}
[2] 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/standby/client/RemoteBlobProcessor.java#L78-L89]
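
A rough sketch of the in-memory cache option mentioned under #exists, using Guava (already on the classpath); the size bound and expiry are illustrative assumptions, not a worked-out design:

{code:java}
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Illustrative only: remember ids whose asynchronous upload has been
// scheduled, so the backend can skip a network exists() call for them.
public class RecentUploadCache {

    private final Cache<String, Boolean> scheduled = CacheBuilder.newBuilder()
            .maximumSize(10_000)                       // assumed bound
            .expireAfterWrite(30, TimeUnit.MINUTES)    // assumed upload window
            .build();

    public void uploadScheduled(String blobId) {
        scheduled.put(blobId, Boolean.TRUE);
    }

    public boolean recentlyScheduled(String blobId) {
        return scheduled.getIfPresent(blobId) != null;
    }
}
{code}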

> Minimize network calls required when creating a direct download URI
> ---
>
> Key: OAK-8552
> URL: https://issues.apache.org/jira/browse/OAK-8552
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: blob-cloud, blob-cloud-azure
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Attachments: OAK-8552_ApiChange.patch
>
>
> We need to isolate and try to optimize network calls required to create a 
> direct download URI.

[jira] [Updated] (OAK-8552) Minimize network calls required when creating a direct download URI

2019-08-23 Thread Amit Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-8552:
---
Attachment: OAK-8552_ApiChange.patch

> Minimize network calls required when creating a direct download URI
> ---
>
> Key: OAK-8552
> URL: https://issues.apache.org/jira/browse/OAK-8552
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: blob-cloud, blob-cloud-azure
>Reporter: Matt Ryan
>Assignee: Matt Ryan
>Priority: Major
> Attachments: OAK-8552_ApiChange.patch
>
>
> We need to isolate and try to optimize network calls required to create a 
> direct download URI.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OAK-8481) RDB*Store: update postgresql jdbc driver reference to 42.4.6

2019-08-23 Thread GianMaria Romanato (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914004#comment-16914004
 ] 

GianMaria Romanato commented on OAK-8481:
-

It's 42.2.6, as 42.4.6 does not exist and the pom.xml refers to 42.2.6.

> RDB*Store: update postgresql jdbc driver reference to 42.4.6
> 
>
> Key: OAK-8481
> URL: https://issues.apache.org/jira/browse/OAK-8481
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.16.0, 1.10.4, 1.8.16
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)