[jira] [Updated] (OAK-3917) SuggestionHelper creating unnecessary temporary directories

2016-10-04 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3917:
-
Labels: candidate_oak_1_2  (was: )

> SuggestionHelper creating unnecessary temporary directories
> ---
>
> Key: OAK-3917
> URL: https://issues.apache.org/jira/browse/OAK-3917
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Vikas Saurabh
>  Labels: candidate_oak_1_2
> Fix For: 1.4, 1.3.15
>
> Attachments: OAK-3917.patch
>
>
> SuggestHelper creates a temporary directory [1], most likely as a workaround
> to allow use of {{OakDirectory}}. Over time the number of these directories can
> grow unbounded (as they are never cleaned up) and the following exception is seen.
> {noformat}
> java.lang.IllegalStateException: Failed to create directory within 1 
> attempts (tried 145369739-0 to 145369739-)
> at com.google.common.io.Files.createTempDir(Files.java:608)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.util.SuggestHelper.getLookup(SuggestHelper.java:107)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.<init>(IndexNode.java:106)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.open(IndexNode.java:69)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.findIndexNode(IndexTracker.java:162)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.acquireIndexNode(IndexTracker.java:137)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.getPlans(LucenePropertyIndex.java:222)
> {noformat}
> [1] 
> https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.3.14/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/util/SuggestHelper.java#L107
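
For illustration, a minimal sketch of the failure mode described above, assuming a fresh Guava temp directory is created per lookup and never removed; the cleanup in the finally block is only one possible mitigation, not the actual OAK-3917 fix, and the class and variable names are made up.

{code}
import com.google.common.io.Files;

import java.io.File;

// Sketch only: a throwaway directory per call, as described above, leaves the
// system temp folder to grow unbounded unless something deletes it again.
public class TempDirSketch {

    public static void main(String[] args) {
        File suggesterDir = Files.createTempDir(); // e.g. /tmp/<timestamp>-0
        try {
            System.out.println("would build the suggester in " + suggesterDir);
        } finally {
            // hypothetical mitigation: remove the directory once it is no longer needed
            File[] children = suggesterDir.listFiles();
            if (children != null) {
                for (File child : children) {
                    child.delete();
                }
            }
            suggesterDir.delete();
        }
    }
}
{code}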





[jira] [Updated] (OAK-4868) Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 0.0.14 released

2016-10-04 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-4868:
---
Description: 
SegmentS3DataStoreBlobGCIT in oak-it needs to be updated with changes similar
to those done for OAK-4848.
This would require oak-segment-tar to be updated to 0.0.14 once released.

  was:
SegmentDataStoreBlobGCIT and its S3 counterpart in oak-it needs to be updated 
with changes similar to done for OAK-4848.
This would require oak-segment-tar updated to Oak 1.5.11 once released.


> Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 0.0.14 
> released
> --
>
> Key: OAK-4868
> URL: https://issues.apache.org/jira/browse/OAK-4868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.5.12
>
>
> SegmentS3DataStoreBlobGCIT in oak-it needs to be updated with changes similar
> to those done for OAK-4848.
> This would require oak-segment-tar to be updated to 0.0.14 once released.





[jira] [Commented] (OAK-4868) Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 0.0.14 released

2016-10-04 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547490#comment-15547490
 ] 

Amit Jain commented on OAK-4868:


[~frm] Sorry, it's the other way round: the oak-segment-tar version has to be updated
before the changes can be done in oak-it. I have updated the issue accordingly.

> Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 0.0.14 
> released
> --
>
> Key: OAK-4868
> URL: https://issues.apache.org/jira/browse/OAK-4868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.5.12
>
>
> SegmentDataStoreBlobGCIT and its S3 counterpart in oak-it needs to be updated 
> with changes similar to done for OAK-4848.
> This would require oak-segment-tar updated to Oak 1.5.11 once released.





[jira] [Updated] (OAK-4868) Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 0.0.14 released

2016-10-04 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-4868:
---
Summary: Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 
0.0.14 released  (was: Update SegmentDataStoreBlobGCTest in oak-segment-tar 
with changes done for OAK-4858)

> Update SegmentS3DataStoreBlobGCTest in oak-it once oak-segment-tar 0.0.14 
> released
> --
>
> Key: OAK-4868
> URL: https://issues.apache.org/jira/browse/OAK-4868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.5.12
>
>
> SegmentDataStoreBlobGCIT and its S3 counterpart in oak-it needs to be updated 
> with changes similar to done for OAK-4848.
> This would require oak-segment-tar updated to Oak 1.5.11 once released.





[jira] [Updated] (OAK-4868) Update SegmentDataStoreBlobGCTest in oak-segment-tar with changes done for OAK-4858

2016-10-04 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-4868:
---
Fix Version/s: (was: Segment Tar 0.0.14)
   1.5.12

> Update SegmentDataStoreBlobGCTest in oak-segment-tar with changes done for 
> OAK-4858
> ---
>
> Key: OAK-4868
> URL: https://issues.apache.org/jira/browse/OAK-4868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.5.12
>
>
> SegmentDataStoreBlobGCIT and its S3 counterpart in oak-it needs to be updated 
> with changes similar to done for OAK-4848.
> This would require oak-segment-tar updated to Oak 1.5.11 once released.





[jira] [Updated] (OAK-4868) Update SegmentDataStoreBlobGCTest in oak-segment-tar with changes done for OAK-4858

2016-10-04 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-4868:
---
Component/s: (was: segment-tar)

> Update SegmentDataStoreBlobGCTest in oak-segment-tar with changes done for 
> OAK-4858
> ---
>
> Key: OAK-4868
> URL: https://issues.apache.org/jira/browse/OAK-4868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: Segment Tar 0.0.14
>
>
> SegmentDataStoreBlobGCIT and its S3 counterpart in oak-it needs to be updated 
> with changes similar to done for OAK-4848.
> This would require oak-segment-tar updated to Oak 1.5.11 once released.





[jira] [Commented] (OAK-4857) Support space chars common in CJK inside node names

2016-10-04 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547113#comment-15547113
 ] 

Alexander Klimetschek commented on OAK-4857:


OS filesystems:
* https://en.wikipedia.org/wiki/Filename#Comparison_of_filename_limitations
* http://www.comentum.com/File-Systems-HFS-FAT-UFS.html
* Notably, OS X HFS+ only has / as a restricted character.

Cross-platform compatible filename tips from Apple: 
https://support.apple.com/en-us/HT202808

Dropbox: https://www.dropbox.com/en/help/145

Box:
* 
https://community.box.com/t5/Box-Sync/What-do-problem-file-notifications-in-Sync-4-0-mean/ta-p/82#name
* 
https://community.box.com/t5/Box-Sync/Which-File-Types-Are-Ignored-Or-Blocked-In-Box-Sync/ta-p/117

Microsoft OneDrive/Sharepoint: https://support.microsoft.com/en-us/kb/2933738

Adobe Creative Cloud Files: 
https://helpx.adobe.com/creative-cloud/kb/arent-my-files-syncing.html

Apple iCloud (Drive): ???

Google Drive: ???

Google Cloud Platform:
* https://cloud.google.com/storage/docs/naming
* 
https://cloud.google.com/storage/docs/gsutil/addlhelp/Filenameencodingandinteroperabilityproblems
 (slightly tooling specific)

Amazon S3: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html


> Support space chars common in CJK inside node names
> ---
>
> Key: OAK-4857
> URL: https://issues.apache.org/jira/browse/OAK-4857
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.4.7, 1.5.10
>Reporter: Alexander Klimetschek
> Attachments: OAK-4857-tests.patch
>
>
> Oak (like Jackrabbit) does not allow spaces commonly used in CJK like 
> {{u3000}} (ideographic space) or {{u00A0}} (no-break space) _inside_ a node 
> name, while allowing some of them (the non breaking spaces) at the _beginning 
> or end_.
> They should be supported for better globalization readiness, and filesystems 
> allow them, making common filesystem to JCR mappings unnecessarily hard. 
> Escaping would be an option for applications, but there is currently no 
> utility method for it 
> ([Text.escapeIllegalJcrChars|https://jackrabbit.apache.org/api/2.8/org/apache/jackrabbit/util/Text.html#escapeIllegalJcrChars(java.lang.String)]
>  will not escape these spaces), nor is it documented for applications how to 
> do so.
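
As an illustration of the escaping option mentioned above, a hypothetical application-side helper (not part of Jackrabbit's Text class) that percent-escapes the two space characters named in the description; the class name and the escaping scheme are assumptions.

{code}
// Hypothetical helper: escapes U+3000 (ideographic space) and U+00A0 (no-break
// space) so they survive as node name characters until Oak supports them natively.
public class CjkSpaceEscaper {

    public static String escape(String name) {
        StringBuilder sb = new StringBuilder(name.length());
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (c == '\u3000' || c == '\u00A0') {
                sb.append('%').append(String.format("%04X", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("file\u3000name")); // prints file%3000name
    }
}
{code}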





[jira] [Updated] (OAK-4888) Warn or fail queries above a configurable cost value

2016-10-04 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-4888:
---
Description: 
*Problem:* It's easy to run into performance problems with queries that are not 
backed by an index or miss the right one. Developers writing these queries 
typically do not have the real large production data, and thus don't see that a 
query would scale badly, and would not see any traversal warnings, as these 
only happen with a large number of results.

*Proposal:* Oak's query engine already calculates a cost estimate to make a 
decision which index to use, or even if there is any at all. This cost estimate 
could be used to find out if a query is not supported by an index or with one 
that is not suitable enough (e.g. ordering by property that is not indexed)

If a query is above a certain cost value, a big warning could be put out or 
even the query could be made to fail (maybe a per query option, that you might 
want to have to "fail" by default to ensure people are not overlooking the 
problem). The cost limit should be configurable, as it might depend on the 
hardware power.

  was:
*Problem:* It's easy to run into performance problems with queries that are not 
backed by an index or miss the right one. Developers writing these queries 
typically do not have the real large production data, and thus don't see that a 
query would scale badly, and would not see any traversal warnings, as these 
only happen with a large number of results.

*Proposal:* Oak's query engine already calculates a cost estimate to make a 
decision which index to use, or even if there is any at all. This cost estimate 
could be used to find out if a query is not supported by an index or with one 
that is not suitable enough (e.g. ordering by property that is not indexed)

If a query is above a certain cost value, a big warning could be put out or 
even the query could be made to fail (maybe a per query option, that you might 
want to have to "fail" by default to ensure people are not missing the 
problem). The cost limit should be configurable, as it might depend on the 
hardware power.


> Warn or fail queries above a configurable cost value
> 
>
> Key: OAK-4888
> URL: https://issues.apache.org/jira/browse/OAK-4888
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Alexander Klimetschek
>
> *Problem:* It's easy to run into performance problems with queries that are 
> not backed by an index or miss the right one. Developers writing these 
> queries typically do not have the real large production data, and thus don't 
> see that a query would scale badly, and would not see any traversal warnings, 
> as these only happen with a large number of results.
> *Proposal:* Oak's query engine already calculates a cost estimate to make a 
> decision which index to use, or even if there is any at all. This cost 
> estimate could be used to find out if a query is not supported by an index or 
> with one that is not suitable enough (e.g. ordering by property that is not 
> indexed)
> If a query is above a certain cost value, a big warning could be put out or 
> even the query could be made to fail (maybe a per query option, that you 
> might want to have to "fail" by default to ensure people are not overlooking 
> the problem). The cost limit should be configurable, as it might depend on 
> the hardware power.
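
To make the proposal concrete, a hedged sketch of what such a cost gate could look like; none of these names (CostGuard, costThreshold, failAboveThreshold) are existing Oak APIs, and where the check would hook into the query engine is left open.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustration only: warn above a configurable cost threshold, or reject the
// query entirely when the (hypothetical) "fail" mode is enabled.
public class CostGuard {

    private static final Logger LOG = LoggerFactory.getLogger(CostGuard.class);

    private final double costThreshold;
    private final boolean failAboveThreshold;

    public CostGuard(double costThreshold, boolean failAboveThreshold) {
        this.costThreshold = costThreshold;
        this.failAboveThreshold = failAboveThreshold;
    }

    public void check(String statement, double estimatedCost) {
        if (estimatedCost <= costThreshold) {
            return;
        }
        if (failAboveThreshold) {
            throw new IllegalStateException("Query cost " + estimatedCost
                    + " exceeds the configured threshold " + costThreshold + ": " + statement);
        }
        LOG.warn("Query cost {} exceeds the configured threshold {}: {}",
                estimatedCost, costThreshold, statement);
    }
}
{code}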





[jira] [Updated] (OAK-4888) Warn or fail queries above a configurable cost value

2016-10-04 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-4888:
---
Description: 
*Problem:* It's easy to run into performance problems with queries that are not 
backed by an index or miss the right one. Developers writing these queries 
typically do not have the real large production data, and thus don't see that a 
query would scale badly, and would not see any traversal warnings, as these 
only happen with a large number of results.

*Proposal:* Oak's query engine already calculates a cost estimate to make a 
decision which index to use, or even if there is any at all. This cost estimate 
could be used to find out if a query is not supported by an index or with one 
that is not suitable enough (e.g. ordering by property that is not indexed)

If a query is above a certain cost value, a big warning could be put out or 
even the query could be made to fail (maybe a per query option, that you might 
want to have to "fail" by default to ensure people are not missing the 
problem). The cost limit should be configurable, as it might depend on the 
hardware power.

  was:
*Problem:* It's easy to run into performance problems with queries that are not 
backed by an index or miss the right one. Developers writing these queries 
typically do not have the real large production data, and thus don't see that a 
query would scale badly, and would not see any traversal warnings, as these 
only happen with a large number of results.

*Proposal:* Oak's query engine already calculates a cost estimate to make a 
decision which index to use, or even if there is any at all. This cost estimate 
could be used to find out if a query is not supported by an index or with one 
that is not suitable enough (e.g. ordering by property that is not indexed)

If a query is above cost value, a big warning could be put out or even the 
query could be made to fail (maybe a per query option, that you might want to 
have to "fail" by default to ensure people are not missing the problem). The 
cost limit should be configurable, as it might depend on the hardware power.


> Warn or fail queries above a configurable cost value
> 
>
> Key: OAK-4888
> URL: https://issues.apache.org/jira/browse/OAK-4888
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Alexander Klimetschek
>
> *Problem:* It's easy to run into performance problems with queries that are 
> not backed by an index or miss the right one. Developers writing these 
> queries typically do not have the real large production data, and thus don't 
> see that a query would scale badly, and would not see any traversal warnings, 
> as these only happen with a large number of results.
> *Proposal:* Oak's query engine already calculates a cost estimate to make a 
> decision which index to use, or even if there is any at all. This cost 
> estimate could be used to find out if a query is not supported by an index or 
> with one that is not suitable enough (e.g. ordering by property that is not 
> indexed)
> If a query is above a certain cost value, a big warning could be put out or 
> even the query could be made to fail (maybe a per query option, that you 
> might want to have to "fail" by default to ensure people are not missing the 
> problem). The cost limit should be configurable, as it might depend on the 
> hardware power.





[jira] [Created] (OAK-4887) Query cost estimation: ordering by an unindexed property not reflected

2016-10-04 Thread Alexander Klimetschek (JIRA)
Alexander Klimetschek created OAK-4887:
--

 Summary: Query cost estimation: ordering by an unindexed property 
not reflected
 Key: OAK-4887
 URL: https://issues.apache.org/jira/browse/OAK-4887
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: query
Affects Versions: 1.4.2
Reporter: Alexander Klimetschek


A query that orders by an unindexed property seems to get the same cost estimate
as the same query without the ORDER BY, although the ordering has a big impact on
execution performance for larger results/indexes.
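
A hedged reproduction sketch using plain JCR, relying on Oak's "explain" prefix for JCR-SQL2 and the "plan" column it reports; the property name unindexedProp and the session setup are assumptions.

{code}
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

// Compare the reported plan for the same statement with and without an
// ORDER BY on an unindexed property ("unindexedProp" is a made-up name).
public class ExplainOrderBy {

    public static void printPlan(Session session, String statement) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery("explain " + statement, Query.JCR_SQL2);
        RowIterator rows = q.execute().getRows();
        while (rows.hasNext()) {
            Row row = rows.nextRow();
            System.out.println(row.getValue("plan").getString());
        }
    }

    public static void compare(Session session) throws Exception {
        printPlan(session, "SELECT * FROM [nt:base] WHERE [jcr:title] = 'x'");
        printPlan(session, "SELECT * FROM [nt:base] WHERE [jcr:title] = 'x' "
                + "ORDER BY [unindexedProp]");
    }
}
{code}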





[jira] [Created] (OAK-4888) Warn or fail queries above a configurable cost value

2016-10-04 Thread Alexander Klimetschek (JIRA)
Alexander Klimetschek created OAK-4888:
--

 Summary: Warn or fail queries above a configurable cost value
 Key: OAK-4888
 URL: https://issues.apache.org/jira/browse/OAK-4888
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Alexander Klimetschek


*Problem:* It's easy to run into performance problems with queries that are not 
backed by an index or miss the right one. Developers writing these queries 
typically do not have the real large production data, and thus don't see that a 
query would scale badly, and would not see any traversal warnings, as these 
only happen with a large number of results.

*Proposal:* Oak's query engine already calculates a cost estimate to make a 
decision which index to use, or even if there is any at all. This cost estimate 
could be used to find out if a query is not supported by an index or with one 
that is not suitable enough (e.g. ordering by property that is not indexed)

If a query is above cost value, a big warning could be put out or even the 
query could be made to fail (maybe a per query option, that you might want to 
have to "fail" by default to ensure people are not missing the problem). The 
cost limit should be configurable, as it might depend on the hardware power.





[jira] [Updated] (OAK-4887) Query cost estimation: ordering by an unindexed property not reflected

2016-10-04 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-4887:
---
Issue Type: Improvement  (was: Bug)

> Query cost estimation: ordering by an unindexed property not reflected
> --
>
> Key: OAK-4887
> URL: https://issues.apache.org/jira/browse/OAK-4887
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.4.2
>Reporter: Alexander Klimetschek
>
> A query that orders by an unindexed property seems to get the same cost estimate
> as the same query without the ORDER BY, although the ordering has a big impact on
> execution performance for larger results/indexes.





[jira] [Commented] (OAK-4844) Analyse effects of simplified record ids

2016-10-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546587#comment-15546587
 ] 

Alex Parvulescu commented on OAK-4844:
--

{noformat}
Total size:
17 GB in  72845 data segments
768 KB in  3 bulk segments
3 GB in maps (46450859 leaf and branch records)
1 GB in lists (55469092 list and bucket records)
3 GB in values (value and block records of 70765681 properties, 
3429/378684/0/1214419 small/medium/long/external blobs, 46258452/1862224/159 
small/medium/long strings)
161 MB in templates (16772712 template records)
2 GB in nodes (251591739 node records)
{noformat}

> Analyse effects of simplified record ids
> 
>
> Key: OAK-4844
> URL: https://issues.apache.org/jira/browse/OAK-4844
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: performance
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4844-02.patch, OAK-4844-3.patch, OAK-4844.patch
>
>
> OAK-4631 introduced a simplified serialisation for record ids. This causes 
> their footprint on disk to increase from 3 bytes to 18 bytes. OAK-4631 has 
> some initial analysis on the effect this is having on repositories as a 
> whole. 
> I'm opening this issue as a dedicated task to further look into mitigation 
> strategies (if necessary). 





[jira] [Commented] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance

2016-10-04 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546504#comment-15546504
 ] 

Timothee Maret commented on OAK-4883:
-

[~alexparvulescu], [~frm] debugging the code with a default primary instance
(file node store, no custom blob store), it appears that the reason there's no
{{BlobGCMBean}} is that

{code}
store.getBlobStore()
{code}

returns {{null}} and therefore the check at [0] can't be satisfied, which causes
the {{BlobGCMBean}} not to be registered. This occurs with a standby instance
as well. Looking at the code and the change at [1], it seems to me that the
logic for the primary case has not changed and that the only way to have a
{{BlobGCMBean}} registered is to have a custom blob store configured.

Before digging further, could you help me understand whether it is expected
that {{store.getBlobStore()}} may return null in some setups?

[0] 
https://github.com/apache/jackrabbit-oak/blob/2b6c2f5340f3b6485dda5c493f6343d232c883e9/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentNodeStoreService.java#L646
[1] 
https://github.com/tmaret/jackrabbit-oak/commit/982bc43374a4564726b8b77454c3803b691bac24
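
For readers without the code at hand, a simplified paraphrase of the guard at [0] (not the exact SegmentNodeStoreService code): with the default setup getBlobStore() returns null, so the instanceof check fails and no blob GC MBean is ever registered.

{code}
import org.apache.jackrabbit.oak.spi.blob.BlobStore;
import org.apache.jackrabbit.oak.spi.blob.GarbageCollectableBlobStore;

// Simplified paraphrase of the registration logic discussed above.
public class BlobGcRegistrationSketch {

    interface MBeanRegistrar {
        void registerBlobGCMBean(GarbageCollectableBlobStore blobStore);
    }

    static void maybeRegister(BlobStore blobStore, MBeanRegistrar registrar) {
        // blobStore is null in the default setup, so this branch is never taken
        if (blobStore instanceof GarbageCollectableBlobStore) {
            registrar.registerBlobGCMBean((GarbageCollectableBlobStore) blobStore);
        }
    }
}
{code}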

> Missing BlobGarbageCollection MBean on standby instance
> ---
>
> Key: OAK-4883
> URL: https://issues.apache.org/jira/browse/OAK-4883
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, segmentmk
>Reporter: Alex Parvulescu
>Assignee: Timothee Maret
>  Labels: gc
>
> The {{BlobGarbageCollection}} MBean is no longer available on a standby 
> instance, this affects non-shared datastore setups (on a shared datastore 
> you'd only need to run blob gc on the primary).
> This change was introduced by OAK-4089 (and backported to 1.4 branch with 
> OAK-4093).





[jira] [Commented] (OAK-4886) The cold standby client should not load from the server a locally available segment

2016-10-04 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545965#comment-15545965
 ] 

Francesco Mari commented on OAK-4886:
-

I simplified the code at r1763302 by using a single set to track both the 'visited'
and the 'local' segments, since a visited segment becomes a local one after
being transferred from the primary instance.
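
A minimal sketch of the idea, with assumed names rather than the actual standby client code: a single set records segments that are already local, whether they were present before or have just been transferred, so they are never requested from the primary again.

{code}
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class SegmentFetchTracker {

    private final Set<UUID> local = new HashSet<>();

    /** Returns true if the segment still has to be fetched from the primary. */
    public boolean markForFetch(UUID segmentId) {
        // add() returns false when the segment is already known to be local
        return local.add(segmentId);
    }
}
{code}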

> The cold standby client should not load from the server a locally available 
> segment
> ---
>
> Key: OAK-4886
> URL: https://issues.apache.org/jira/browse/OAK-4886
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
>
> If a new segment is loaded from the server, and that segment references other 
> segments that are locally available, the standby client will load those 
> segments anyway. Segments that are already available locally should not be 
> loaded from the server again.





[jira] [Resolved] (OAK-4886) The cold standby client should not load from the server a locally available segment

2016-10-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4886.
-
Resolution: Fixed

Fixed at r1763299.

> The cold standby client should not load from the server a locally available 
> segment
> ---
>
> Key: OAK-4886
> URL: https://issues.apache.org/jira/browse/OAK-4886
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
>
> If a new segment is loaded from the server, and that segment references other 
> segments that are locally available, the standby client will load those 
> segments anyway. Segments that are already available locally should not be 
> loaded from the server again.





[jira] [Created] (OAK-4886) The cold standby client should not load from the server a locally available segment

2016-10-04 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-4886:
---

 Summary: The cold standby client should not load from the server a 
locally available segment
 Key: OAK-4886
 URL: https://issues.apache.org/jira/browse/OAK-4886
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segment-tar
Reporter: Francesco Mari
Assignee: Francesco Mari
 Fix For: Segment Tar 0.0.14


If a new segment is loaded from the server, and that segment references other 
segments that are locally available, the standby client will load those 
segments anyway. Segments that are already available locally should not be 
loaded from the server again.





[jira] [Resolved] (OAK-4884) Test failure: org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2

2016-10-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4884.
-
Resolution: Fixed

The latest five builds on the Jenkins instance I use don't show any failure 
related to cold standby. Before this fix, some cold standby test was randomly 
failing every two or three builds. I will resolve this issue for the moment, 
but will keep an eye open for future random failures related to connection 
problems.

> Test failure: 
> org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2
> -
>
> Key: OAK-4884
> URL: https://issues.apache.org/jira/browse/OAK-4884
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
>
> The test 
> {{org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2}}
>  fails intermittently with the following error:
> {noformat}
> expected: org.apache.jackrabbit.oak.segment.SegmentNodeState<{ root = { ... } 
> }> but was: org.apache.jackrabbit.oak.segment.SegmentNodeState<{ root = { ... 
> } }>
> {noformat}
> The cause of the issue seems to be the proxy holding on to a port after it's
> been asked to disconnect:
> {noformat}
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method) ~[na:1.7.0_80]
>   at sun.nio.ch.Net.bind(Net.java:463) ~[na:1.7.0_80]
>   at sun.nio.ch.Net.bind(Net.java:455) ~[na:1.7.0_80]
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
> ~[na:1.7.0_80]
>   at 
> io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:501)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1218)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:505)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:490)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:965) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:210) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:353) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
>  ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
>  ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_80]
> {noformat}
> After that, the client fails when trying to connect to the server through the 
> proxy:
> {noformat}
> 09:39:11.795 ERROR [main] StandbyClientSync.java:160Failed 
> synchronizing state.
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: /127.0.0.1:41866
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
> ~[na:1.7.0_80]
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) 
> ~[na:1.7.0_80]
>   at 
> io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:628) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:552)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:466) 

[jira] [Assigned] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance

2016-10-04 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret reassigned OAK-4883:
---

Assignee: Timothee Maret

> Missing BlobGarbageCollection MBean on standby instance
> ---
>
> Key: OAK-4883
> URL: https://issues.apache.org/jira/browse/OAK-4883
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, segmentmk
>Reporter: Alex Parvulescu
>Assignee: Timothee Maret
>  Labels: gc
>
> The {{BlobGarbageCollection}} MBean is no longer available on a standby 
> instance, this affects non-shared datastore setups (on a shared datastore 
> you'd only need to run blob gc on the primary).
> This change was introduced by OAK-4089 (and backported to 1.4 branch with 
> OAK-4093).





[jira] [Created] (OAK-4885) RDB*Store: update mysql JDBC driver reference to 5.1.40

2016-10-04 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-4885:
---

 Summary: RDB*Store: update mysql JDBC driver reference to 5.1.40
 Key: OAK-4885
 URL: https://issues.apache.org/jira/browse/OAK-4885
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Trivial


See .





[jira] [Commented] (OAK-4884) Test failure: org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2

2016-10-04 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545205#comment-15545205
 ] 

Francesco Mari commented on OAK-4884:
-

In r1763269 I made bind, connect and close synchronous for the standby client,
server and proxy. I also added a bunch of debug messages along the way and
reorganised some code. I moved NetworkErrorProxy to the
org.apache.jackrabbit.oak.segment.standby package, so that its debug messages
are logged as part of our tests. I also refined the useProxy method in
BrokenNetworkTest regarding the acquisition and release of the standby
client, server and proxy. I am aware that this change will increase the
build time by one or two minutes, but I am going to keep it this way as long as
we are seeing intermittent failures in the cold standby tests.
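
A hedged sketch of the "synchronous bind/close" idea with Netty 4.0; the bootstrap and channel setup are assumed and this is not the actual NetworkErrorProxy code. sync() blocks until the operation has completed, so a port is really bound before a test proceeds and really released before the next bind.

{code}
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;

public class SyncLifecycle {

    public static Channel bindSync(ServerBootstrap bootstrap, int port)
            throws InterruptedException {
        // blocks until the bind has either succeeded or failed
        return bootstrap.bind(port).sync().channel();
    }

    public static void closeSync(Channel channel) throws InterruptedException {
        // blocks until the channel (and its port) has actually been released
        channel.close().sync();
    }
}
{code}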

> Test failure: 
> org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2
> -
>
> Key: OAK-4884
> URL: https://issues.apache.org/jira/browse/OAK-4884
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
>
> The test 
> {{org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2}}
>  fails intermittently with the following error:
> {noformat}
> expected: org.apache.jackrabbit.oak.segment.SegmentNodeState<{ root = { ... } 
> }> but was: org.apache.jackrabbit.oak.segment.SegmentNodeState<{ root = { ... 
> } }>
> {noformat}
> The cause of the issue seems to be the proxy holding on to a port after it's
> been asked to disconnect:
> {noformat}
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method) ~[na:1.7.0_80]
>   at sun.nio.ch.Net.bind(Net.java:463) ~[na:1.7.0_80]
>   at sun.nio.ch.Net.bind(Net.java:455) ~[na:1.7.0_80]
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
> ~[na:1.7.0_80]
>   at 
> io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:501)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1218)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:505)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:490)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:965) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:210) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:353) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
>  ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) 
> ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
>  ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_80]
> {noformat}
> After that, the client fails when trying to connect to the server through the 
> proxy:
> {noformat}
> 09:39:11.795 ERROR [main] StandbyClientSync.java:160Failed 
> synchronizing state.
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: /127.0.0.1:41866
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
> ~[na:1.7.0_80]
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) 
> ~[na:1.7.0_80]
>   at 
> io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
>  ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
>   at 
> 

[jira] [Comment Edited] (OAK-2498) Root record references provide too little context for parsing a segment

2016-10-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544848#comment-15544848
 ] 

Alex Parvulescu edited comment on OAK-2498 at 10/4/16 12:07 PM:


results:
{noformat}
Total size:
20 GB in  82284 data segments
768 KB in  3 bulk segments
4 GB in maps (46450859 leaf and branch records)
1 GB in lists (55469092 list and bucket records)
3 GB in values (value and block records of 70765667 properties, 
3429/378684/0/1214419 small/medium/long/external blobs, 46258452/1862224/159 
small/medium/long strings)
194 MB in templates (16772712 template records)
3 GB in nodes (251591739 node records)
{noformat}


was (Author: alex.parvulescu):
sure, I'll post the results here

> Root record references provide too little context for parsing a segment
> ---
>
> Key: OAK-2498
> URL: https://issues.apache.org/jira/browse/OAK-2498
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: tools
> Fix For: Segment Tar 0.0.14
>
>
> According to the [documentation | 
> http://jackrabbit.apache.org/oak/docs/nodestore/segmentmk.html] the root 
> record references in a segment header provide enough context for parsing all 
> records within this segment without any external information. 
> Turns out this is not true: a root record reference might e.g. point to a list
> record, and the items in that list are record ids of unknown type. So even though
> those records might live in the same segment, we can't parse them as we don't
> know their type.





[jira] [Created] (OAK-4884) Test failure: org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2

2016-10-04 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-4884:
---

 Summary: Test failure: 
org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2
 Key: OAK-4884
 URL: https://issues.apache.org/jira/browse/OAK-4884
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segment-tar
Reporter: Francesco Mari
Assignee: Francesco Mari
 Fix For: Segment Tar 0.0.14


The test 
{{org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxyFlippedIntermediateByteChange2}}
 fails intermittently with the following error:

{noformat}
expected: org.apache.jackrabbit.oak.segment.SegmentNodeState<{ root = { ... } 
}> but was: org.apache.jackrabbit.oak.segment.SegmentNodeState<{ root = { ... } 
}>
{noformat}

The cause of the issue seems to be the proxy holding on to a port after it's been
asked to disconnect:

{noformat}
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.7.0_80]
at sun.nio.ch.Net.bind(Net.java:463) ~[na:1.7.0_80]
at sun.nio.ch.Net.bind(Net.java:455) ~[na:1.7.0_80]
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
~[na:1.7.0_80]
at 
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:501) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1218)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:505)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:490)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:965) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:210) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:353) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
 ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
 ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
 ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_80]
{noformat}

After that, the client fails when trying to connect to the server through the 
proxy:

{noformat}
09:39:11.795 ERROR [main] StandbyClientSync.java:160Failed 
synchronizing state.
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 
/127.0.0.1:41866
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
~[na:1.7.0_80]
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) 
~[na:1.7.0_80]
at 
io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:628) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:552)
 ~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:466) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:438) 
~[netty-transport-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
 ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
 ~[netty-common-4.0.41.Final.jar:4.0.41.Final]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_80]
{noformat}





[jira] [Updated] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance

2016-10-04 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-4883:
-
Description: 
The {{BlobGarbageCollection}} MBean is no longer available on a standby 
instance, this affects non-shared datastore setups (on a shared datastore you'd 
only need to run blob gc on the primary).
This change was introduced by OAK-4089 (and backported to 1.4 branch with 
OAK-4093).

  was:
The {{BlobGarbageCollection}} MBean is no longer available on a standby 
instance, this affects non-shared datastore setups (on a shared datastore you'd 
only need to run blob gc on the primary).
This change was introduced by OAK-4089.


> Missing BlobGarbageCollection MBean on standby instance
> ---
>
> Key: OAK-4883
> URL: https://issues.apache.org/jira/browse/OAK-4883
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, segmentmk
>Reporter: Alex Parvulescu
>  Labels: gc
>
> The {{BlobGarbageCollection}} MBean is no longer available on a standby 
> instance, this affects non-shared datastore setups (on a shared datastore 
> you'd only need to run blob gc on the primary).
> This change was introduced by OAK-4089 (and backported to 1.4 branch with 
> OAK-4093).





[jira] [Updated] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance

2016-10-04 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-4883:
-
Labels: gc  (was: )

> Missing BlobGarbageCollection MBean on standby instance
> ---
>
> Key: OAK-4883
> URL: https://issues.apache.org/jira/browse/OAK-4883
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar, segmentmk
>Reporter: Alex Parvulescu
>  Labels: gc
>
> The {{BlobGarbageCollection}} MBean is no longer available on a standby 
> instance, this affects non-shared datastore setups (on a shared datastore 
> you'd only need to run blob gc on the primary).
> This change was introduced by OAK-4089 (and backported to 1.4 branch with 
> OAK-4093).





[jira] [Commented] (OAK-4868) Update SegmentDataStoreBlobGCTest in oak-segment-tar with changes done for OAK-4858

2016-10-04 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544852#comment-15544852
 ] 

Francesco Mari commented on OAK-4868:
-

[~amitjain], oak-segment-tar now depends on Oak 1.5.11. Feel free to make the 
appropriate adjustments to SegmentDataStoreBlobGCIT and related tests.

> Update SegmentDataStoreBlobGCTest in oak-segment-tar with changes done for 
> OAK-4858
> ---
>
> Key: OAK-4868
> URL: https://issues.apache.org/jira/browse/OAK-4868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob, segment-tar
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: Segment Tar 0.0.14
>
>
> SegmentDataStoreBlobGCIT and its S3 counterpart in oak-it needs to be updated 
> with changes similar to done for OAK-4848.
> This would require oak-segment-tar updated to Oak 1.5.11 once released.





[jira] [Commented] (OAK-2498) Root record references provide too little context for parsing a segment

2016-10-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544848#comment-15544848
 ] 

Alex Parvulescu commented on OAK-2498:
--

sure, I'll post the results here

> Root record references provide too little context for parsing a segment
> ---
>
> Key: OAK-2498
> URL: https://issues.apache.org/jira/browse/OAK-2498
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: tools
> Fix For: Segment Tar 0.0.14
>
>
> According to the [documentation | 
> http://jackrabbit.apache.org/oak/docs/nodestore/segmentmk.html] the root 
> record references in a segment header provide enough context for parsing all 
> records within this segment without any external information. 
> Turns out this is not true: a root record reference might e.g. point to a list
> record, and the items in that list are record ids of unknown type. So even though
> those records might live in the same segment, we can't parse them as we don't
> know their type.





[jira] [Resolved] (OAK-4880) StandbySyncExecution logs too many INFO lines

2016-10-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4880.
-
Resolution: Fixed
  Assignee: Francesco Mari  (was: Timothee Maret)

Committed at r1763254.

> StandbySyncExecution logs too many INFO lines
> -
>
> Key: OAK-4880
> URL: https://issues.apache.org/jira/browse/OAK-4880
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4880.patch
>
>
> The logs for loading/inspecting/marking segments are currently at INFO level,
> which generates a considerable amount of log output. The level should be moved to
> debug or trace.





[jira] [Resolved] (OAK-4856) List checkpoints (segment-tar)

2016-10-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4856.
-
Resolution: Fixed
  Assignee: Francesco Mari  (was: Michael Dürig)

Fixed at r1763252 and r1763253.

> List checkpoints (segment-tar)
> --
>
> Key: OAK-4856
> URL: https://issues.apache.org/jira/browse/OAK-4856
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar
>Reporter: Marcel Reutegger
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4856.patch
>
>
> This is the same as OAK-4850 but for the segment-tar module when the Oak 
> dependency is updated to 1.5.11.





[jira] [Resolved] (OAK-4875) Update dependency on Oak modules to version 1.5.11

2016-10-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4875.
-
Resolution: Fixed

Fixed at r1763252.

> Update dependency on Oak modules to version 1.5.11
> --
>
> Key: OAK-4875
> URL: https://issues.apache.org/jira/browse/OAK-4875
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
>
> The dependency on the rest of the Oak components should be updated to version
> 1.5.11.





[jira] [Commented] (OAK-4877) Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi configuration is used against oak-segment-tar

2016-10-04 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544817#comment-15544817
 ] 

Timothee Maret commented on OAK-4877:
-

Thanks [~frm]!

> Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi 
> configuration is used against oak-segment-tar
> -
>
> Key: OAK-4877
> URL: https://issues.apache.org/jira/browse/OAK-4877
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4877.patch
>
>
> Similarly to OAK-4702, OSGI configurations for the PID 
> {{org.apache.jackrabbit.oak.plugins.segment.standby.store.StandbyStoreService}}
>  should be intercepted and a meaningful message should be printed in the logs.
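
A minimal sketch of such an interceptor using OSGi Declarative Services annotations; this is an assumed shape, not the attached OAK-4877 patch. The component only activates when a configuration bound to the obsolete PID exists, and then logs a clear error.

{code}
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Component(
        configurationPid = "org.apache.jackrabbit.oak.plugins.segment.standby.store.StandbyStoreService",
        configurationPolicy = ConfigurationPolicy.REQUIRE)
public class ObsoleteStandbyConfigDetector {

    private static final Logger LOG =
            LoggerFactory.getLogger(ObsoleteStandbyConfigDetector.class);

    @Activate
    void activate() {
        LOG.error("A configuration for the obsolete oak-segment StandbyStoreService PID "
                + "was found. Please migrate it to the oak-segment-tar standby service.");
    }
}
{code}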





[jira] [Resolved] (OAK-4877) Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi configuration is used against oak-segment-tar

2016-10-04 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-4877.
-
Resolution: Fixed
  Assignee: Francesco Mari  (was: Timothee Maret)

Committed at r1763247.

> Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi 
> configuration is used against oak-segment-tar
> -
>
> Key: OAK-4877
> URL: https://issues.apache.org/jira/browse/OAK-4877
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Francesco Mari
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4877.patch
>
>
> Similarly to OAK-4702, OSGI configurations for the PID 
> {{org.apache.jackrabbit.oak.plugins.segment.standby.store.StandbyStoreService}}
>  should be intercepted and a meaningful message should be printed in the logs.





[jira] [Commented] (OAK-3588) SegmentNodeStore allow tweaking of lock fairness

2016-10-04 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544803#comment-15544803
 ] 

Chetan Mehrotra commented on OAK-3588:
--

Just to see if this change causes any adverse impact on the longevity runs we
run internally, unless you consider the change safe enough to be backported to
the branches.

> SegmentNodeStore allow tweaking of lock fairness
> 
>
> Key: OAK-3588
> URL: https://issues.apache.org/jira/browse/OAK-3588
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.4, 1.3.10, 1.2.8, 1.0.24
>
> Attachments: OAK-3588.patch
>
>
> This will expose the merge semaphore's fairness policy, as the semaphore is
> currently configured as _NonfairSync_ (the default value).
> Exposing this option will be beneficial in collecting more stats around thread
> starvation and what the best policy is (fair vs. non-fair).
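
For context, a small illustration of what exposing the fairness policy amounts to; the system property name is an assumption, not necessarily the committed OAK-3588 change. java.util.concurrent.Semaphore is non-fair unless the two-argument constructor is used.

{code}
import java.util.concurrent.Semaphore;

public class CommitSemaphoreSketch {

    // hypothetical switch; OAK-3588 may expose the setting differently
    private static final boolean FAIR =
            Boolean.getBoolean("oak.segmentNodeStore.commitFairLock");

    // permits = 1 mirrors the single merge/commit semaphore mentioned above
    private final Semaphore commitSemaphore = new Semaphore(1, FAIR);

    public Semaphore getCommitSemaphore() {
        return commitSemaphore;
    }
}
{code}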





[jira] [Closed] (OAK-4807) SecondaryStoreConfigIT intermittently failing due to incorrect MongoURL

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4807.
-

Bulk close for 1.5.11

> SecondaryStoreConfigIT intermittently failing due to incorrect MongoURL
> ---
>
> Key: OAK-4807
> URL: https://issues.apache.org/jira/browse/OAK-4807
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.6, 1.5.11
>
>
> {{SecondaryStoreConfigIT}} is currently failing with the following exception
> while obtaining an admin session:
> {noformat}
> Caused by: javax.jcr.LoginException: java.lang.IllegalArgumentException: 
> 0x3fd76c17134c7563656e6534365365676d656e74496e666f03342e3702ec010008026f73054c696e75780b6a6176612e76656e646f72124f7261636c6520436f72706f726174696f6e0c6a6176612e76657273696f6e08312e382e305f36360e6c7563656e652e76657273696f6e0c312e362d534e415053484f54076f732e6172636805616d64363406736f7572636505666c7573680a6f732e76657273696f6e11332e31332e302d33372d67656e657269630974696d657374616d700d3134373437393935353633065f362e636665065f362e636673055f362e736969753b9ab8d3d55929425cba257b25e6#251
> at 
> org.apache.jackrabbit.oak.commons.StringUtils.getHexDigit(StringUtils.java:79)
> at 
> org.apache.jackrabbit.oak.commons.StringUtils.convertHexToBytes(StringUtils.java:58)
> at 
> org.apache.jackrabbit.oak.spi.blob.AbstractBlobStore.getBlobLength(AbstractBlobStore.java:517)
> at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.length(BlobStoreBlob.java:57)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.<init>(OakDirectory.java:330)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.<init>(OakDirectory.java:507)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4847) Support any types of node builders in the initializers

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4847.
-

Bulk close for 1.5.11

> Support any types of node builders in the initializers
> --
>
> Key: OAK-4847
> URL: https://issues.apache.org/jira/browse/OAK-4847
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.11
>
>
> If one of the following classes:
> * org.apache.jackrabbit.oak.plugins.nodetype.write.InitialContent
> * org.apache.jackrabbit.oak.security.privilege.PrivilegeInitializer
> * org.apache.jackrabbit.oak.security.user.UserInitializer
> tries to initialize a NodeBuilder that is not derived from 
> MemoryNodeState or ModifiedNodeState, it breaks with an exception:
> {noformat}
> java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:7
> 7)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeStore.merge(MemoryNo
> deStore.java:126)
>   at
> org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:249)
> (...)
>   at
> org.apache.jackrabbit.oak.plugins.nodetype.write.NodeTypeRegistry.regis
> terBuiltIn(NodeTypeRegistry.java:86)
>   at
> org.apache.jackrabbit.oak.plugins.nodetype.write.InitialContent.initial
> ize(InitialContent.java:120)
> {noformat}
> It can be fixed by wrapping the node state passed to the MemoryNodeStore 
> constructor in a MemoryNodeState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4854) Simplify TarNodeStore

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4854.
-

Bulk close for 1.5.11

> Simplify TarNodeStore
> -
>
> Key: OAK-4854
> URL: https://issues.apache.org/jira/browse/OAK-4854
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.11
>
>
> TarNodeStore in oak-upgrade can be simplified by extending from 
> ProxyNodeStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4043) Oak run checkpoints needs to account for multiple index lanes

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4043.
-

Bulk close for 1.5.11

> Oak run checkpoints needs to account for multiple index lanes
> -
>
> Key: OAK-4043
> URL: https://issues.apache.org/jira/browse/OAK-4043
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Critical
> Fix For: 1.6, 1.4.8, 1.5.11
>
> Attachments: OAK-4043.patch
>
>
> Oak run {{checkpoints rm-unreferenced}} [0] is currently hardcoded to a 
> single checkpoint reference (the default one). Now it is possible to add 
> multiple lanes (OAK-2749), which we already did in AEM, but the checkpoint 
> tool is blissfully unaware of this and it might trigger a full reindex 
> following offline compaction.
> fyi [~edivad], [~chetanm]
> [0] https://github.com/apache/jackrabbit-oak/tree/trunk/oak-run#checkpoints



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4848) Improve oak-blob-cloud tests

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4848.
-

Bulk close for 1.5.11

> Improve oak-blob-cloud tests
> 
>
> Key: OAK-4848
> URL: https://issues.apache.org/jira/browse/OAK-4848
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Blocker
>  Labels: technical_debt
> Fix For: 1.5.11
>
>
> Most S3 DataStore tests inherit from Jackrabbit 2's TestCaseBase. To extend 
> them we should copy the class into Oak and let the S3DataStore tests inherit 
> from that. In doing so we can also use the DataStore initialization part 
> already available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4774) Check usage of DocumentStoreException in MongoDocumentStore

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4774.
-

Bulk close for 1.5.11

> Check usage of DocumentStoreException in MongoDocumentStore
> ---
>
> Key: OAK-4774
> URL: https://issues.apache.org/jira/browse/OAK-4774
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> resilience
> Fix For: 1.6, 1.5.11
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in MongoDocumentStore and make sure MongoDB Java 
> driver specific exceptions are handled consistently and wrapped in a 
> DocumentStoreException. At the same time, cache consistency needs to be 
> checked as well in case of a driver exception. E.g. invalidate if necessary.
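A minimal sketch of the kind of wrapping this task describes. The wrapper class, the invalidate() hook and the DocumentStoreException constructors used here are assumptions for illustration, not the actual MongoDocumentStore code.

{code}
import java.util.concurrent.Callable;

import com.mongodb.MongoException;

import org.apache.jackrabbit.oak.plugins.document.DocumentStoreException;

// Illustrative only: run a backend call, drop the possibly stale cache entry on
// failure and re-throw the driver exception as a DocumentStoreException.
public class MongoCallConverter {

    public <T> T call(String id, Callable<T> op) {
        try {
            return op.call();
        } catch (MongoException e) {
            // the operation may have succeeded on the server even though the
            // driver reported a failure, so invalidate before propagating
            invalidate(id);
            throw new DocumentStoreException("MongoDB operation failed for " + id, e);
        } catch (Exception e) {
            throw new DocumentStoreException(e);
        }
    }

    private void invalidate(String id) {
        // hypothetical hook; the real store would evict its document cache entry here
    }
}
{code}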



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4842) Upgrade breaks if there's no SearchManager configured in repository.xml

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4842.
-

Bulk close for 1.5.11

> Upgrade breaks if there's no SearchManager configured in repository.xml
> ---
>
> Key: OAK-4842
> URL: https://issues.apache.org/jira/browse/OAK-4842
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Affects Versions: 1.5.10
>Reporter: Tomek Rękawek
> Fix For: 1.6, 1.5.11
>
>
> If the optional SearchManager section in repository.xml is empty, the 
> upgrade fails with a NullPointerException. Also, the {{--skip-name-check}} 
> option (which would prevent this) is not accepted by the option parser.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4806) Remove usage of Tree in LuceneIndexEditor

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4806.
-

Bulk close for 1.5.11

> Remove usage of Tree in LuceneIndexEditor
> -
>
> Key: OAK-4806
> URL: https://issues.apache.org/jira/browse/OAK-4806
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: performance
> Fix For: 1.6, 1.5.11
>
>
> {{LuceneIndexEditor}} currently creates 2 Tree instances for determining the 
> IndexRule. [~ianeboston] highlighted this on the list [1]; this is something 
> we should avoid, so we should remove the usage of the Tree API.
> This was done earlier to simplify future support for conditional rules 
> (OAK-2281), which might need access to ancestors, something that is not 
> possible with the NodeState API. As that is not going to be done, we can get 
> rid of the Tree construction in the editor.
> [1] 
> https://lists.apache.org/thread.html/7d51b45296f5801c3b510a30a4847ce297707fb4e0d4c2cefe19be62@%3Coak-dev.jackrabbit.apache.org%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4580) Update to Mongo Java Driver 3.2.x

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4580.
-

Bulk close for 1.5.11

> Update to Mongo Java Driver 3.2.x
> -
>
> Key: OAK-4580
> URL: https://issues.apache.org/jira/browse/OAK-4580
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.6, 1.5.11
>
>
> Oak currently uses version 2.14, which does not support some of the new 
> features available in 3.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4734) AsyncIndexUpdateClusterTestIT fails occasionally

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4734.
-

Bulk close for 1.5.11

> AsyncIndexUpdateClusterTestIT fails occasionally
> 
>
> Key: OAK-4734
> URL: https://issues.apache.org/jira/browse/OAK-4734
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.5.9
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.5.11
>
> Attachments: OAK-4734.patch
>
>
> Seen on travis: https://travis-ci.org/apache/jackrabbit-oak/builds/156735673
> {noformat}
> missingCheckpointDueToEventualConsistency(org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdateClusterTestIT)
>   Time elapsed: 6.159 sec  <<< FAILURE!
> java.lang.AssertionError: Reindexing should not happen
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdateClusterTestIT.after(AsyncIndexUpdateClusterTestIT.java:84)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4832) Upgrade breaks if the SecurityManager section in repository.xml is empty

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4832.
-

Bulk close for 1.5.11

> Upgrade breaks if the SecurityManager section in repository.xml is empty
> 
>
> Key: OAK-4832
> URL: https://issues.apache.org/jira/browse/OAK-4832
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Affects Versions: 1.0.33, 1.4.7, 1.2.19, 1.5.10
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.4.8, 1.5.11
>
>
> If the optional SecurityManager section in repository.xml is empty, 
> the upgrade [fails with a 
> NullPointerException|http://jackrabbit.markmail.org/thread/2hhhrorj2huyy3lo]:
> {noformat}
> Caused by: javax.jcr.RepositoryException: Failed to copy content
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:525)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.upgrade(OakUpgrade.java:65)
>at 
> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:53)
>... 1 more
> Caused by: java.lang.NullPointerException
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.mapSecurityConfig(RepositoryUpgrade.java:615)
>at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:388)
>... 3 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4867) Avoid queries for first level previous documents during GC

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4867.
-

Bulk close for 1.5.11

> Avoid queries for first level previous documents during GC
> --
>
> Key: OAK-4867
> URL: https://issues.apache.org/jira/browse/OAK-4867
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: performance
> Fix For: 1.6, 1.5.11
>
>
> Revision GC in a first phase finds all documents for deleted nodes and their 
> previous documents. Reading the previous documents can be avoided in some 
> cases. The ids of first level previous documents can be derived directly from 
> the previous map in the main document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4850) List checkpoints

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4850.
-

Bulk close for 1.5.11

> List checkpoints
> 
>
> Key: OAK-4850
> URL: https://issues.apache.org/jira/browse/OAK-4850
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core, documentmk, segmentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.4.8, 1.5.11
>
> Attachments: OAK-4850.patch
>
>
> Introduce a new method on {{NodeStore}} that lists the currently valid 
> checkpoints:
> {code}
> /**
>  * Returns all currently valid checkpoints.
>  *
>  * @return valid checkpoints.
>  */
> @Nonnull
> Iterable<String> checkpoints();
> {code}
> The NodeStore interface already has methods to create and release a 
> checkpoint, as well as retrieving the root state for a checkpoint, but it is 
> currently not possible to _list_ checkpoints. Using the checkpoint facility 
> as designed right now can lead to a situation where a checkpoint is orphaned. 
> That is, some code created a checkpoint but was unable to store the reference 
> because the system e.g. crashed. Orphaned checkpoints can affect garbage 
> collection because they prevent it from cleaning up old data. Right now, this 
> requires users to run tools like oak-run to get rid of those checkpoints.
> As suggested in OAK-4826, client code should be able to automatically clean 
> up unused checkpoints. This requires a method to list existing checkpoints.
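A minimal usage sketch of the proposed method, assuming it sits on NodeStore next to the existing checkpointInfo() and release() methods; the printed keys match the metadata recorded since OAK-2314.

{code}
import java.util.Map;

import org.apache.jackrabbit.oak.spi.state.NodeStore;

// Illustrative only: enumerate the currently valid checkpoints of a store
// through the proposed NodeStore#checkpoints() method.
public final class CheckpointLister {

    private CheckpointLister() {}

    public static void list(NodeStore store) {
        for (String checkpoint : store.checkpoints()) {
            Map<String, String> info = store.checkpointInfo(checkpoint);
            System.out.println(checkpoint
                    + " creator=" + info.get("creator")
                    + " name=" + info.get("name"));
        }
    }
}
{code}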



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4815) ReferenceIndex slowdown due to OAK-3403

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4815.
-

Bulk close for 1.5.11

> ReferenceIndex slowdown due to OAK-3403
> ---
>
> Key: OAK-4815
> URL: https://issues.apache.org/jira/browse/OAK-4815
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.5.11
>
>
> In OAK-3403, a lookup using the ReferenceIndex now uses "new FilterImpl()", 
> which in turn uses "new QueryEngineSettings()". And this is slow, because it 
> is an AnnotatedStandardMBean, and instantiating such an object is quite slow. 
> (Actually it is also wrong, because the configured query engine settings are 
> not used in that case.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4863) Reduce query batch size for deleted documents

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4863.
-

Bulk close for 1.5.11

> Reduce query batch size for deleted documents
> -
>
> Key: OAK-4863
> URL: https://issues.apache.org/jira/browse/OAK-4863
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: resilience
> Fix For: 1.6, 1.5.11
>
>
> MongoVersionGCSupport uses the default batchSize when it queries for possibly 
> deleted documents. The default will initially read 100 documents and then as 
> many as fit into a 4MB response. Depending on the document size a couple of 
> thousand will fit in there and take time to process. It may happen that the 
> MongoDB cursor then times out and the VersionGC fails.
> An easy and safe solution is to reduce the batch size to a given number of 
> documents.
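For illustration, the change amounts to something like the following against the MongoDB Java driver 2.x API; the collection, query and batch size are placeholders, not the actual MongoVersionGCSupport code.

{code}
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

// Illustrative only: cap each MongoDB batch at a fixed number of documents so
// the cursor is refreshed often enough not to time out while GC processes it.
public final class BatchSizeExample {

    private static final int BATCH_SIZE = 100; // assumed value, not Oak's setting

    private BatchSizeExample() {}

    public static DBCursor queryPossiblyDeleted(DBCollection nodes, DBObject query) {
        return nodes.find(query).batchSize(BATCH_SIZE);
    }
}
{code}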



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4412) Lucene hybrid index

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4412.
-

Bulk close for 1.5.11

> Lucene hybrid index
> ---
>
> Key: OAK-4412
> URL: https://issues.apache.org/jira/browse/OAK-4412
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Tomek Rękawek
>Assignee: Chetan Mehrotra
>  Labels: docs-impacting
> Fix For: 1.6, 1.5.11
>
> Attachments: OAK-4412-v1.diff, OAK-4412.patch, hybrid-benchmark.sh, 
> hybrid-result-v1.txt
>
>
> When running Oak in a cluster, each write operation is expensive. After 
> performing some stress-tests with a geo-distributed Mongo cluster, we've 
> found out that updating property indexes is a large part of the overall 
> traffic.
> The asynchronous index would be an answer here (as the index update won't be 
> made in the client request thread), but AEM requires the updates to be 
> visible immediately in order to work properly.
> The idea here is to enhance the existing asynchronous Lucene index with a 
> synchronous, locally-stored counterpart that will persist only the data since 
> the last Lucene background reindexing job.
> The new index can be stored in memory or (if necessary) in MMAPed local 
> files. Once the "main" Lucene index is being updated, the local index will be 
> purged.
> Queries will use a union of results from the {{lucene}} and 
> {{lucene-memory}} indexes.
> The {{lucene-memory}} index, as a local stored entity, will be updated using 
> an observer, so it'll get both local and remote changes.
> The original idea has been suggested by [~chetanm] in the discussion for the 
> OAK-4233.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4826) Auto removal of orphaned checkpoints

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4826.
-

Bulk close for 1.5.11

> Auto removal of orphaned checkpoints
> 
>
> Key: OAK-4826
> URL: https://issues.apache.org/jira/browse/OAK-4826
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Marcel Reutegger
> Fix For: 1.6, 1.4.8, 1.5.11
>
> Attachments: OAK-4826.patch, OAK-4826.patch, OAK-4826.patch
>
>
> Currently, if in a running system there are some orphaned checkpoints present, 
> they prevent the revision gc (compaction for segment) from being 
> effective. 
> So far the practice has been to use the {{oak-run checkpoints rm-unreferenced}} 
> command to clean them up manually. This was kept manual as it was not 
> possible to determine whether a given checkpoint is in use or not. 
> rm-unreferenced works on the basis that checkpoints are only made by 
> AsyncIndexUpdate and hence can check whether a checkpoint is in use by cross 
> checking with the {{:async}} state. Doing it in auto mode is risky as the 
> {{checkpoint}} API can be used by any module.
> With OAK-2314 we also record some metadata like {{creator}} and {{name}}. 
> This can be used for auto cleanup. For example, in one running system the 
> following checkpoints are listed:
> {noformat}
> Mon Sep 19 18:02:09 EDT 2016  Sun Jun 16 18:02:09 EDT 2019
> r15744787d0a-1-1
>  
> creator=AsyncIndexUpdate
> name=fulltext-async
> thread=sling-default-4070-Registered Service.653
>  
> Mon Sep 19 18:02:09 EDT 2016  Sun Jun 16 18:02:09 EDT 2019
> r15744787d0a-0-1
>  
> creator=AsyncIndexUpdate
> name=async
> thread=sling-default-4072-Registered Service.656
>  
> Fri Aug 19 18:57:33 EDT 2016  Thu May 16 18:57:33 EDT 2019
> r156a50612e1-1-1
>  
> creator=AsyncIndexUpdate
> name=async
> thread=sling-default-10-Registered Service.654
>  
> Wed Aug 10 12:13:20 EDT 2016  Tue May 07 12:25:52 EDT 2019
> r156753ac38d-0-1
>  
> creator=AsyncIndexUpdate
> name=async
> thread=sling-default-6041-Registered Service.1966
> {noformat}
> As can be seen, the last 2 checkpoints are orphaned and would prevent 
> revision gc. For auto mode we can use the following heuristic (sketched below):
> # List all current checkpoints
> # Only keep the latest checkpoint for a given {{creator}} and {{name}} combo. 
> Other entries from the same pair which are older (by creation time) can be 
> considered orphaned and deleted
> This logic can be implemented in 
> {{org.apache.jackrabbit.oak.checkpoint.Checkpoints}} and can be invoked by 
> the revision GC logic (both in DocumentNodeStore and SegmentNodeStore) to 
> determine the base revision to keep.
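A minimal sketch of that heuristic. CheckpointInfo is a hypothetical value class introduced here only for illustration; it is not an Oak type.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: keep the newest checkpoint per (creator, name) pair and
// report every older entry from the same pair as an orphan candidate.
public final class OrphanedCheckpoints {

    // hypothetical value class, only for this sketch
    public static final class CheckpointInfo {
        final String id;
        final String creator;
        final String name;
        final long created;

        CheckpointInfo(String id, String creator, String name, long created) {
            this.id = id;
            this.creator = creator;
            this.name = name;
            this.created = created;
        }
    }

    private OrphanedCheckpoints() {}

    public static List<CheckpointInfo> findOrphans(List<CheckpointInfo> all) {
        Map<String, CheckpointInfo> newest = new HashMap<>();
        List<CheckpointInfo> orphans = new ArrayList<>();
        for (CheckpointInfo cp : all) {
            String key = cp.creator + '|' + cp.name;
            CheckpointInfo kept = newest.get(key);
            if (kept == null) {
                newest.put(key, cp);
            } else if (cp.created > kept.created) {
                orphans.add(kept);   // the entry kept so far is older
                newest.put(key, cp);
            } else {
                orphans.add(cp);     // older than the one already kept
            }
        }
        return orphans;
    }
}
{code}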



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4819) Improve revision GC resilience

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4819.
-

Bulk close for 1.5.11

> Improve revision GC resilience
> --
>
> Key: OAK-4819
> URL: https://issues.apache.org/jira/browse/OAK-4819
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: candidate_oak_1_0, resilience
> Fix For: 1.6, 1.4.8, 1.5.11, 1.2.20
>
>
> Revision GC may currently fail when a document with a malformed id is read 
> from the DocumentStore. E.g. a document stored accidentally in the nodes 
> collection or malformed for some other reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4830) StringUtils.estimateMemoryUsage() can throw NullPointerException

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4830.
-

Bulk close for 1.5.11

> StringUtils.estimateMemoryUsage() can throw NullPointerException
> 
>
> Key: OAK-4830
> URL: https://issues.apache.org/jira/browse/OAK-4830
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: commons
>Affects Versions: 1.5.10
>Reporter: Matt Ryan
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.5.11
>
> Attachments: OAK-4830.1.patch
>
>
> The {{estimateMemoryUsage()}} method in {{StringUtils}} in {{oak-commons}} 
> doesn't check if the string being passed in is null, so there's potential to 
> throw a {{NullPointerException}} if called with a null string.
> Suggest it returns 0 in this case.  I have a patch ready.
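A minimal sketch of the suggested null guard (not the attached patch; the size formula below is an assumption for illustration).

{code}
public final class StringUtilsSketch {

    private StringUtilsSketch() {}

    // Illustrative only: return 0 for a null argument instead of throwing an NPE.
    public static int estimateMemoryUsage(String s) {
        // 48 bytes of fixed per-object overhead is assumed for this sketch,
        // not necessarily what oak-commons actually uses
        return s == null ? 0 : 48 + s.length() * 2;
    }
}
{code}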



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4838) Move S3 classes to oak-blob-cloud module

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4838.
-

Bulk close for 1.5.11

> Move S3 classes to oak-blob-cloud module
> 
>
> Key: OAK-4838
> URL: https://issues.apache.org/jira/browse/OAK-4838
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Blocker
>  Labels: technical_debt
> Fix For: 1.5.11
>
>
> Some S3 related classes are present in oak-core module. These should be moved 
> to oak-blob-cloud.
> This would also flip the module dependencies to oak-core -> oak-blob-cloud



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4802) Basic cache consistency test on exception

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4802.
-

Bulk close for 1.5.11

> Basic cache consistency test on exception
> -
>
> Key: OAK-4802
> URL: https://issues.apache.org/jira/browse/OAK-4802
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> resilience
> Fix For: 1.6, 1.5.11
>
>
> OAK-4774 and OAK-4793 aim to check the cache behaviour of a DocumentStore 
> implementation when the underlying backend throws an exception even though 
> the operation succeeded, e.g. when the response cannot be sent back because 
> of a network issue.
> This issue will provide the DocumentStore-independent part of those tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4805) Misconfigured lucene index definition can render the whole system unusable

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4805.
-

Bulk close for 1.5.11

> Misconfigured lucene index definition can render the whole system unusable
> --
>
> Key: OAK-4805
> URL: https://issues.apache.org/jira/browse/OAK-4805
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>  Labels: candidate_oak_1_0
> Fix For: 1.6, 1.4.8, 1.5.11, 1.2.20
>
> Attachments: OAK-4805.patch
>
>
> A mis-configured index definition can throw an exception while collecting 
> plans. This causes any query (even unrelated ones) to stop working, as the 
> cost calculation logic would consult the badly constructed index definition. 
> Overall, a mis-configured index definition can practically grind the whole 
> system to a halt as the whole query framework stops working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4841) Error during MongoDB initialization

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4841.
-

Bulk close for 1.5.11

> Error during MongoDB initialization
> ---
>
> Key: OAK-4841
> URL: https://issues.apache.org/jira/browse/OAK-4841
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.5.6
>Reporter: Giovanni Landolina
>Assignee: Marcel Reutegger
> Fix For: 1.6, 1.5.11
>
>
> In my configuration, there's a NullPointerException when doing
> new DocumentMK.Builder().setMongoDB(db)
> due to the "serverStatus" command not returning the expected object.
> Here's my stacktrace:
> java.lang.NullPointerException
> at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
> at java.util.regex.Matcher.reset(Matcher.java:309)
> at java.util.regex.Matcher.<init>(Matcher.java:229)
> at java.util.regex.Pattern.matcher(Pattern.java:1093)
> at 
> org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.checkVersion(MongoDocumentStore.java:305)
> at 
> org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.<init>(MongoDocumentStore.java:236)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setMongoDB(DocumentMK.java:633)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setMongoDB(DocumentMK.java:655)
> The "serverStatus" command in MongoDocumentStore constructor returns:
> { "ok" : 0.0 , "errmsg" : "not authorized on  to execute command 
> { serverStatus: true }" , "code" : 13}
> Prior to version 1.5.6, the "buildInfo" command was called instead, which happens 
> to work from a mongo shell, while "serverStatus" doesn't.
> Seems related to OAK-4111.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4831) Don't break the upgrade tests if the directory can't be cleaned-up

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4831.
-

Bulk close for 1.5.11

> Don't break the upgrade tests if the directory can't be cleaned-up
> --
>
> Key: OAK-4831
> URL: https://issues.apache.org/jira/browse/OAK-4831
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.11
>
>
> The oak-upgrade tests create a lot of repositories. If a repository doesn't 
> release the directory it uses, [the whole test 
> fails|http://jackrabbit.markmail.org/thread/lmffrxvki7nxm3qy]. We should 
> handle it more gracefully:
> a) log the exact reason why the directory can't be removed,
> b) carry on with the tests,
> c) create directories in the Maven target directory, so they can be easily 
> removed with {{mvn clean}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4655) Enable configuring multiple segment nodestore instances in same setup

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4655.
-

Bulk close for 1.5.11

> Enable configuring multiple segment nodestore instances in same setup
> -
>
> Key: OAK-4655
> URL: https://issues.apache.org/jira/browse/OAK-4655
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar, segmentmk
>Reporter: Chetan Mehrotra
>Assignee: Tomek Rękawek
> Fix For: 1.6, 1.5.11
>
> Attachments: OAK-4655.v1.patch, OAK-4655.v2.patch
>
>
> With OAK-4369 and OAK-4490 it is now possible to configure a new 
> SegmentNodeStore to act as a secondary NodeStore (OAK-4180). Recently, for a few 
> other features, we see a requirement to configure a SegmentNodeStore just for 
> storage purposes. For example:
> # OAK-4180 - Enables use of SegmentNodeStore as a secondary store to 
> complement DocumentNodeStore
> #* Always uses the BlobStore from the primary DocumentNodeStore
> #* Compaction to be enabled
> # OAK-4654 - Enables use of SegmentNodeStore for a private mount in a 
> multiplexing NodeStore setup
> #* Might use its own BlobStore
> #* Compaction might be disabled as it would be read-only
> # OAK-4581 - Proposes to make use of SegmentNodeStore for storing an event queue 
> offline
> In all these setups we need to configure a SegmentNodeStore that has the 
> following aspects:
> # The NodeStore instance is not directly exposed but is exposed via the 
> {{NodeStoreProvider}} interface, with a {{role}} service property specifying the 
> intended usage
> # The NodeStore here is not fully functional, i.e. it would not be configured with 
> the standard observers, would not be used by the ContentRepository, etc.
> # It needs to be ensured that any JMX MBean registered accounts for the "role" so 
> that there is no collision
> With the existing SegmentNodeStoreService we can only configure one NodeStore. To 
> support the above cases we need an OSGi factory-configuration-based implementation 
> that enables creation of multiple SegmentNodeStore instances (each with a different 
> directory and different settings).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4858) Use Integer.getInteger() to read system property

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4858.
-

Bulk close for 1.5.11

> Use Integer.getInteger() to read system property
> 
>
> Key: OAK-4858
> URL: https://issues.apache.org/jira/browse/OAK-4858
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Trivial
> Fix For: 1.6, 1.5.11
>
>
> As suggested in OAK-4826, reading integer system properties in 
> AsyncIndexUpdate can be simplified with Integer.getInteger().
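For illustration, the simplification amounts to a single call; the property name and default value below are placeholders, not the actual AsyncIndexUpdate constants.

{code}
public final class SystemPropertyExample {

    // Illustrative only: Integer.getInteger() returns the default when the
    // property is missing or not a valid number.
    static final int QUEUE_SIZE = Integer.getInteger("oak.async.queueSize", 1024);

    private SystemPropertyExample() {}

    public static void main(String[] args) {
        System.out.println("queue size: " + QUEUE_SIZE);
    }
}
{code}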



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4840) Incorrect branch commit value

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4840.
-

Bulk close for 1.5.11

> Incorrect branch commit value
> -
>
> Key: OAK-4840
> URL: https://issues.apache.org/jira/browse/OAK-4840
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.4
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.6, 1.4.8, 1.5.11
>
>
> The commit tag for a branch commit is incorrect. The issue was introduced 
> with OAK-3646. A branch commit currently sets the entire revision vector as 
> the commit tag, while only the local branch revision is necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4712) Publish S3DataStore stats in JMX MBean

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4712.
-

Bulk close for 1.5.11

> Publish S3DataStore stats in JMX MBean
> --
>
> Key: OAK-4712
> URL: https://issues.apache.org/jira/browse/OAK-4712
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob
>Reporter: Matt Ryan
>Assignee: Amit Jain
> Fix For: 1.4.8, 1.5.11
>
> Attachments: OAK-4712.2.diff, OAK-4712.3.diff, OAK-4712.4.diff, 
> OAK-4712.5.diff, OAK-4712.7.patch, OAK-4712.8.patch, OAK_4712_6_Amit.patch
>
>
> This feature is to publish statistics about the S3DataStore via a JMX MBean.  
> There are two statistics suggested:
> * Indicate the number of files actively being synchronized from the local 
> cache into S3
> * Given a path to a local file, indicate whether synchronization of the file into 
> S3 has completed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-4793) Check usage of DocumentStoreException in RDBDocumentStore

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-4793.
-

Bulk close for 1.5.11

> Check usage of DocumentStoreException in RDBDocumentStore
> -
>
> Key: OAK-4793
> URL: https://issues.apache.org/jira/browse/OAK-4793
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> resilience
> Fix For: 1.6, 1.5.11
>
> Attachments: OAK-4793-1.diff, OAK-4793-2.diff, OAK-4793.diff, 
> OAK-4793.diff
>
>
> With OAK-4771 the usage of DocumentStoreException was clarified in the 
> DocumentStore interface. The purpose of this task is to check usage of the 
> DocumentStoreException in RDBDocumentStore and make sure JDBC driver specific 
> exceptions are handled consistently and wrapped in a DocumentStoreException. 
> At the same time, cache consistency needs to be checked as well in case of a 
> driver exception. E.g. invalidate if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3574) Query engine: support p=lowercase('x') and other function-based indexes

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3574.
-

Bulk close for 1.5.11

> Query engine: support p=lowercase('x') and other function-based indexes
> ---
>
> Key: OAK-3574
> URL: https://issues.apache.org/jira/browse/OAK-3574
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core, lucene, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.5.11
>
> Attachments: OAK-3574-lucene-2.patch, OAK-3574-lucene.patch
>
>
> Currently, the query engine and indexes don't support function-based indexes 
> for conditions of the form lowercase(property) = 'x'. This needs to be 
> supported, especially for the Lucene property index.
> Also important is lowercase(name()).
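For illustration, a JCR-SQL2 query that such a function-based index would need to serve, issued through the standard JCR query API; the node type and property names are examples only.

{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

// Illustrative only: the condition is a function of a property, which is the
// kind of constraint a function-based index should be able to answer.
public final class LowerCaseQueryExample {

    private LowerCaseQueryExample() {}

    public static QueryResult run(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "SELECT * FROM [nt:base] WHERE LOWER([jcr:title]) = 'oak'",
                Query.JCR_SQL2);
        return q.execute();
    }
}
{code}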



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3858) Review slow running tests

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3858.
-

Bulk close for 1.5.11

> Review slow running tests
> -
>
> Key: OAK-3858
> URL: https://issues.apache.org/jira/browse/OAK-3858
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Francesco Mari
>Assignee: Marcel Reutegger
>  Labels: test
> Fix For: 1.6, 1.5.11
>
> Attachments: module-build-times.png
>
>
> Some of the tests executed during a normal {{mvn clean test}} execution seem 
> to be very slow compared with the rest of the suite. On my machine, some 
> problematic tests are:
> {noformat}
> Running org.apache.jackrabbit.oak.spi.blob.FileBlobStoreTest
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.982 sec
> Running org.apache.jackrabbit.oak.plugins.document.BasicDocumentStoreTest
> Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.961 sec
> Running org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.076 sec
> Running org.apache.jackrabbit.oak.plugins.document.ConcurrentDocumentStoreTest
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.054 sec
> Running 
> org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 982.526 sec
> Running org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreTest
> Tests run: 53, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.132 sec
> Running org.apache.jackrabbit.oak.plugins.document.LastRevRecoveryAgentTest
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.068 sec
> Running 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStorePerformanceTest
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 10.006 sec
> Running org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreTest
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.017 sec
> Running org.apache.jackrabbit.oak.plugins.document.VersionGCWithSplitTest
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.128 sec
> Running 
> org.apache.jackrabbit.oak.security.authentication.ldap.LdapLoginStandaloneTest
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.96 sec
> {noformat}
> These tests should be analyzed for potential errors or moved to the 
> integration test phase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3588) SegmentNodeStore allow tweaking of lock fairness

2016-10-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544764#comment-15544764
 ] 

Michael Dürig commented on OAK-3588:


Yes, I got this part. My question was rather related to what we should look out 
for during these 1-2 releases [~chetanm] mentioned. 

> SegmentNodeStore allow tweaking of lock fairness
> 
>
> Key: OAK-3588
> URL: https://issues.apache.org/jira/browse/OAK-3588
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.4, 1.3.10, 1.2.8, 1.0.24
>
> Attachments: OAK-3588.patch
>
>
> This will expose the merge semaphore's fairness policy, as the semaphore is 
> currently configured as _NonfairSync_ (the default value).
> Opening this option will be beneficial in collecting more stats around thread 
> starvation and what the best policy is (fair vs. non-fair).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4879) Proper implementation of getOrCreateReferenceKey in CachingFDS

2016-10-04 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-4879:
--
Fix Version/s: (was: 1.5.11)
   1.6

> Proper implementation of getOrCreateReferenceKey in CachingFDS
> --
>
> Key: OAK-4879
> URL: https://issues.apache.org/jira/browse/OAK-4879
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.6, 1.2.20, 1.4.9
>
> Attachments: OAK-4879.patch
>
>
> When cacheSize is configured to be > 0 then {{CachingFDS}} is registered but 
> the implementation of {{getOrCreateReferenceKey}} is different from 
> {{FileDataStore}} since, it inherits from {{CachingDataStore}} which 
> implements a different method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2498) Root record references provide too little context for parsing a segment

2016-10-04 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544759#comment-15544759
 ] 

Francesco Mari commented on OAK-2498:
-

[~alex.parvulescu], can you run your benchmark on the latest trunk, as of 
r1763184?

> Root record references provide too little context for parsing a segment
> ---
>
> Key: OAK-2498
> URL: https://issues.apache.org/jira/browse/OAK-2498
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: tools
> Fix For: Segment Tar 0.0.14
>
>
> According to the [documentation | 
> http://jackrabbit.apache.org/oak/docs/nodestore/segmentmk.html] the root 
> record references in a segment header provide enough context for parsing all 
> records within this segment without any external information. 
> Turns out this is not true: a root record reference might point e.g. to a list 
> record. The items in that list are record ids of unknown type. So even though 
> those records might live in the same segment, we can't parse them as we don't 
> know their type. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544731#comment-15544731
 ] 

Tomek Rękawek commented on OAK-4882:


Thread dump, redacted by Chetan:

A read blocks on a cache load (multiple threads reading the same value):
{noformat}
"X.X.X.X [1471910520681] GET /... HTTP/1.1" prio=10 tid=0x7f9c9d296000 
nid=0x6b80 waiting for monitor entry [0x7f9d79058000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment.access(CacheLIRS.java:884)
- waiting to lock <0x0004030760a8> (a 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment.get(CacheLIRS.java:867)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS.getIfPresent(CacheLIRS.java:362)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.getIfPresent(NodeCache.java:127)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.get(NodeCache.java:142)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getNode(DocumentNodeStore.java:824)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNode(DocumentNodeState.java:253)
{noformat}

Then the thread that is doing the load is blocked on eviction:
{noformat}
"X.X.X.X [1471910536795] GET /... HTTP/1.1" prio=10 tid=0x7f9c9fafe800 
nid=0x4437 waiting for monitor entry [0x7f9dcddbf000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheActionDispatcher.add(CacheActionDispatcher.java:87)
- waiting to lock <0x00040293ada0> (a 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheActionDispatcher)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheWriteQueue.addPut(CacheWriteQueue.java:78)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.evicted(NodeCache.java:221)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder$1.evicted(DocumentMK.java:986)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder$1.evicted(DocumentMK.java:982)
at org.apache.jackrabbit.oak.cache.CacheLIRS.evicted(CacheLIRS.java:217)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment.evict(CacheLIRS.java:1214)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment.put(CacheLIRS.java:1131)
- locked <0x0004030760a8> (a 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment)
{noformat}

Here CacheActionDispatcher.add is synchronized and has 6 threads waiting. Now 
the thread that holds the lock is busy with eviction:
{noformat}
"X.X.X.X [1471909807710] POST /... HTTP/1.1" prio=10 tid=0x7f9c8d65b800 
nid=0x5937 runnable [0x7f9c43645000]
   java.lang.Thread.State: RUNNABLE
at java.util.HashMap.createEntry(HashMap.java:869)
at java.util.HashMap.addEntry(HashMap.java:856)
at java.util.HashMap.put(HashMap.java:484)
at java.util.HashSet.add(HashSet.java:217)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheWriteQueue.addInvalidate(CacheWriteQueue.java:61)
- locked <0x000402e522c0> (a 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheWriteQueue)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheActionDispatcher.cleanTheQueue(CacheActionDispatcher.java:103)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheActionDispatcher.add(CacheActionDispatcher.java:88)
- locked <0x00040293ada0> (a 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheActionDispatcher)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.async.CacheWriteQueue.addPut(CacheWriteQueue.java:78)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.evicted(NodeCache.java:221)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder$1.evicted(DocumentMK.java:986)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder$1.evicted(DocumentMK.java:982)
at org.apache.jackrabbit.oak.cache.CacheLIRS.evicted(CacheLIRS.java:217)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment.evict(CacheLIRS.java:1214)
at 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment.put(CacheLIRS.java:1131)
- locked <0x000402e55ad8> (a 
org.apache.jackrabbit.oak.cache.CacheLIRS$Segment)
at org.apache.jackrabbit.oak.cache.CacheLIRS.put(CacheLIRS.java:258)
at org.apache.jackrabbit.oak.cache.CacheLIRS.put(CacheLIRS.java:269)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.getIfPresent(NodeCache.java:133)
at 

[jira] [Commented] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544723#comment-15544723
 ] 

Tomek Rękawek commented on OAK-4882:


//cc: [~chetanm], [~tmueller]

> Bottleneck in the asynchronous persistent cache
> ---
>
> Key: OAK-4882
> URL: https://issues.apache.org/jira/browse/OAK-4882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, documentmk
>Affects Versions: 1.5.10, 1.4.8
>Reporter: Tomek Rękawek
> Fix For: 1.6
>
>
> The class responsible for accepting new cache operations which will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of a high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford to lose the 
> updates (as it may result in having stale entries in the cache), so all the 
> removed entries are compacted into one big invalidate action.
> The compaction action 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still holds the lock taken in the add() method, so threads that try to add 
> something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so 
> it won't hold the lock for the whole time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4882) Bottleneck in the asynchronous persistent cache

2016-10-04 Thread JIRA
Tomek Rękawek created OAK-4882:
--

 Summary: Bottleneck in the asynchronous persistent cache
 Key: OAK-4882
 URL: https://issues.apache.org/jira/browse/OAK-4882
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: cache, documentmk
Affects Versions: 1.4.8, 1.5.10
Reporter: Tomek Rękawek
 Fix For: 1.6


The class responsible for accepting new cache operations which will be handled 
asynchronously is 
[CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
 In case of a high load, when the queue is full (=1024 entries), the 
[add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
 method removes the oldest 256 entries. However, we can't afford to lose the 
updates (as it may result in having stale entries in the cache), so all the 
removed entries are compacted into one big invalidate action.

The compaction action 
([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
 still holds the lock taken in the add() method, so threads that try to add 
something to the queue have to wait until cleanTheQueue() ends.

Maybe we can optimise the CacheActionDispatcher#add->cleanTheQueue part, so it 
won't hold the lock for the whole time.
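One possible direction, sketched with a hypothetical class rather than the actual Oak CacheActionDispatcher: hold the lock only long enough to detach the oldest entries, and do the expensive compaction into a single invalidate action outside the critical section.

{code}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch only. The lock is held just to swap entries out of the
// queue; building the combined invalidate action happens outside of it.
public class BoundedActionQueue<T> {

    private static final int MAX_SIZE = 1024;
    private static final int DROP_COUNT = 256;

    private final Queue<T> queue = new ArrayDeque<>();

    public void add(T action) {
        List<T> dropped = null;
        synchronized (this) {
            if (queue.size() >= MAX_SIZE) {
                dropped = new ArrayList<>(DROP_COUNT);
                for (int i = 0; i < DROP_COUNT && !queue.isEmpty(); i++) {
                    dropped.add(queue.remove());
                }
            }
            queue.add(action);
        }
        if (dropped != null) {
            // done without holding the queue lock
            compactIntoInvalidate(dropped);
        }
    }

    private void compactIntoInvalidate(List<T> dropped) {
        // placeholder: the real dispatcher would fold these entries into one
        // combined invalidate action and hand it to the background writer
    }
}
{code}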



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4877) Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi configuration is used against oak-segment-tar

2016-10-04 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544716#comment-15544716
 ] 

Timothee Maret commented on OAK-4877:
-

[~frm] could you look at the patch? It essentially consumes the configuration 
for the old PID {{o.a.j.o.p.s.s.s.StandbyStoreService}} and prints a hint 
message on how to fix it (move the config to the new PID).
It shares most bits with the existing 
{{SegmentNodeStoreServiceDeprecationError}}.

> Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi 
> configuration is used against oak-segment-tar
> -
>
> Key: OAK-4877
> URL: https://issues.apache.org/jira/browse/OAK-4877
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4877.patch
>
>
> Similarly to OAK-4702, OSGi configurations for the PID 
> {{org.apache.jackrabbit.oak.plugins.segment.standby.store.StandbyStoreService}}
>  should be intercepted and a meaningful message should be printed in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4877) Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi configuration is used against oak-segment-tar

2016-10-04 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4877:

Flags: Patch

> Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi 
> configuration is used against oak-segment-tar
> -
>
> Key: OAK-4877
> URL: https://issues.apache.org/jira/browse/OAK-4877
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4877.patch
>
>
> Similarly to OAK-4702, OSGi configurations for the PID 
> {{org.apache.jackrabbit.oak.plugins.segment.standby.store.StandbyStoreService}}
>  should be intercepted and a meaningful message should be printed in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4877) Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi configuration is used against oak-segment-tar

2016-10-04 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4877:

Attachment: OAK-4877.patch

Attaching an svn-compatible patch, otherwise available at 
https://github.com/tmaret/jackrabbit-oak/commit/416ce7f7562c705d80f829ed329bbeb1bb5b4913.patch

> Fail loudly if an obsolete o.a.j.o.p.s.s.s.StandbyStoreService OSGi 
> configuration is used against oak-segment-tar
> -
>
> Key: OAK-4877
> URL: https://issues.apache.org/jira/browse/OAK-4877
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4877.patch
>
>
> Similarly to OAK-4702, OSGi configurations for the PID 
> {{org.apache.jackrabbit.oak.plugins.segment.standby.store.StandbyStoreService}}
>  should be intercepted and a meaningful message should be printed in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4881) Make merge semaphore in SegmentNodeStore fair by default

2016-10-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544704#comment-15544704
 ] 

Alex Parvulescu commented on OAK-4881:
--

bq. Further going forward OAK-4122 would replace the lock
this applies to {{segment-tar}} only.

> Make merge semaphore in SegmentNodeStore fair by default
> 
>
> Key: OAK-4881
> URL: https://issues.apache.org/jira/browse/OAK-4881
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, segmentmk
>Reporter: Chetan Mehrotra
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.6
>
>
> Currently the merge semaphore in SegmentNodeStore is by default non fair. 
> OAK-3588 provided a config option to make it fair.
> We should change the default to fair so as to ensure writer threads never get 
> starved. 
> Eventually this change would need to be backported to branches. Further going 
> forward OAK-4122 would replace the lock



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4881) Make merge semaphore in SegmentNodeStore fair by default

2016-10-04 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4881:
-
Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4  (was: )

> Make merge semaphore in SegmentNodeStore fair by default
> 
>
> Key: OAK-4881
> URL: https://issues.apache.org/jira/browse/OAK-4881
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, segmentmk
>Reporter: Chetan Mehrotra
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4
> Fix For: 1.6
>
>
> Currently the merge semaphore in SegmentNodeStore is by default non fair. 
> OAK-3588 provided a config option to make it fair.
> We should change the default to fair so as to ensure writer threads never get 
> starved. 
> Eventually this change would need to be backported to branches. Further going 
> forward OAK-4122 would replace the lock



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3588) SegmentNodeStore allow tweaking of lock fairness

2016-10-04 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544685#comment-15544685
 ] 

Chetan Mehrotra commented on OAK-3588:
--

Exactly! Opened OAK-4881 for that

> SegmentNodeStore allow tweaking of lock fairness
> 
>
> Key: OAK-3588
> URL: https://issues.apache.org/jira/browse/OAK-3588
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.4, 1.3.10, 1.2.8, 1.0.24
>
> Attachments: OAK-3588.patch
>
>
> This will expose the merge semaphore's fairness policy, as the semaphore is 
> currently configured as _NonfairSync_ (the default).
> Exposing this option will help in collecting more stats around thread 
> starvation and in deciding which policy works best (fair vs. non-fair).
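One common way to expose such a toggle is to read a flag (for example a system property) when constructing the semaphore. The sketch below is purely illustrative; in particular the property name is hypothetical and not necessarily the one introduced by the attached patch:

{noformat}
import java.util.concurrent.Semaphore;

public final class CommitSemaphoreFactory {

    // Hypothetical property name, for illustration only; the option actually
    // added by OAK-3588 may be named and wired differently.
    private static final String FAIR_LOCK_PROP = "oak.segmentNodeStore.fairCommitLock";

    private CommitSemaphoreFactory() {
    }

    static Semaphore createCommitSemaphore() {
        // Boolean.getBoolean defaults to false, i.e. the non-fair behaviour
        // that existed before; setting the property to "true" switches to the
        // fair variant.
        boolean fair = Boolean.getBoolean(FAIR_LOCK_PROP);
        return new Semaphore(1, fair);
    }
}
{noformat}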



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4881) Make merge semaphore in SegmentNodeStore fair

2016-10-04 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-4881:
-
Description: 
Currently the merge semaphore in SegmentNodeStore is non-fair by default. 
OAK-3588 provided a config option to make it fair.

We should change the default to fair to ensure that writer threads never get 
starved. 

Eventually this change would need to be backported to the branches. Further, going 
forward, OAK-4122 would replace the lock.

  was:
Currently the merge semaphore in SegmentNodeStore is non-fair by default. 
OAK-3588 provided a config option to make it fair.

We should change the default to fair to ensure that writer threads never get 
starved. 

Eventually this change would need to be backported to the branches.


> Make merge semaphore in SegmentNodeStore fair
> -
>
> Key: OAK-4881
> URL: https://issues.apache.org/jira/browse/OAK-4881
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar, segmentmk
>Reporter: Chetan Mehrotra
> Fix For: 1.6
>
>
> Currently the merge semaphore in SegmentNodeStore is non-fair by default. 
> OAK-3588 provided a config option to make it fair.
> We should change the default to fair to ensure that writer threads never get 
> starved. 
> Eventually this change would need to be backported to the branches. Further, going 
> forward, OAK-4122 would replace the lock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4881) Make merge semaphore in SegmentNodeStore fair

2016-10-04 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-4881:


 Summary: Make merge semaphore in SegmentNodeStore fair
 Key: OAK-4881
 URL: https://issues.apache.org/jira/browse/OAK-4881
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar, segmentmk
Reporter: Chetan Mehrotra
 Fix For: 1.6


Currently the merge semaphore in SegmentNodeStore is non-fair by default. 
OAK-3588 provided a config option to make it fair.

We should change the default to fair to ensure that writer threads never get 
starved. 

Eventually this change would need to be backported to the branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3588) SegmentNodeStore allow tweaking of lock fairness

2016-10-04 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544677#comment-15544677
 ] 

Alex Parvulescu commented on OAK-3588:
--

bq. Anything specific you want to look out for?
I think [~chetanm] wants to switch the default to {{true}}.

> SegmentNodeStore allow tweaking of lock fairness
> 
>
> Key: OAK-3588
> URL: https://issues.apache.org/jira/browse/OAK-3588
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.4, 1.3.10, 1.2.8, 1.0.24
>
> Attachments: OAK-3588.patch
>
>
> This will expose the merge semaphore's fairness policy, as the semaphore is 
> currently configured as _NonfairSync_ (the default).
> Exposing this option will help in collecting more stats around thread 
> starvation and in deciding which policy works best (fair vs. non-fair).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3588) SegmentNodeStore allow tweaking of lock fairness

2016-10-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544661#comment-15544661
 ] 

Michael Dürig commented on OAK-3588:


I think this would make sense. Anything specific you want to look out for? 

Eventually this should be superseded by OAK-4122. 

> SegmentNodeStore allow tweaking of lock fairness
> 
>
> Key: OAK-3588
> URL: https://issues.apache.org/jira/browse/OAK-3588
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.4, 1.3.10, 1.2.8, 1.0.24
>
> Attachments: OAK-3588.patch
>
>
> This will expose the merge semaphore's fairness policy, as the semaphore is 
> currently configured as _NonfairSync_ (the default).
> Exposing this option will help in collecting more stats around thread 
> starvation and in deciding which policy works best (fair vs. non-fair).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-4880) StandbySyncExecution logs too many INFO lines

2016-10-04 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544630#comment-15544630
 ] 

Timothee Maret commented on OAK-4880:
-

[~frm] could you look at the patch? It uses {{debug}} for logging the 
activities in {{StandbyClientSyncExecution}}.
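A minimal sketch of the kind of change being discussed, using plain SLF4J (illustrative only, not the content of the attached patch):

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SyncLoggingExample {

    private static final Logger LOG = LoggerFactory.getLogger(SyncLoggingExample.class);

    void onSegmentLoaded(String segmentId) {
        // Before: per-segment activity at INFO floods the logs when a sync
        // run touches many segments.
        // LOG.info("Loading segment {}", segmentId);

        // After: the same message at DEBUG; the parameterized form also avoids
        // building the message string when the level is disabled.
        LOG.debug("Loading segment {}", segmentId);
    }
}
{noformat}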

> StandbySyncExecution logs too many INFO lines
> -
>
> Key: OAK-4880
> URL: https://issues.apache.org/jira/browse/OAK-4880
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4880.patch
>
>
> The logging for loading/inspecting/marking segments is currently at INFO level, 
> which generates a considerable amount of log output. The level should be moved to 
> debug or trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4880) StandbySyncExecution logs too many INFO lines

2016-10-04 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4880:

Flags: Patch

> StandbySyncExecution logs too many INFO lines
> -
>
> Key: OAK-4880
> URL: https://issues.apache.org/jira/browse/OAK-4880
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4880.patch
>
>
> The logging for loading/inspecting/marking segments is currently at INFO level, 
> which generates a considerable amount of log output. The level should be moved to 
> debug or trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-4880) StandbySyncExecution logs too many INFO lines

2016-10-04 Thread Timothee Maret (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothee Maret updated OAK-4880:

Attachment: OAK-4880.patch

Attaching an svn-compatible patch; it is also available at 
https://github.com/tmaret/jackrabbit-oak/commit/b5575b67195fe83076a37dcbec1d6fceb7ada949.patch

> StandbySyncExecution logs too many INFO lines
> -
>
> Key: OAK-4880
> URL: https://issues.apache.org/jira/browse/OAK-4880
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: Segment Tar 0.0.12
>Reporter: Timothee Maret
>Assignee: Timothee Maret
> Fix For: Segment Tar 0.0.14
>
> Attachments: OAK-4880.patch
>
>
> The logging for loading/inspecting/marking segments is currently at INFO level, 
> which generates a considerable amount of log output. The level should be moved to 
> debug or trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-4880) StandbySyncExecution logs too many INFO lines

2016-10-04 Thread Timothee Maret (JIRA)
Timothee Maret created OAK-4880:
---

 Summary: StandbySyncExecution logs too many INFO lines
 Key: OAK-4880
 URL: https://issues.apache.org/jira/browse/OAK-4880
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segment-tar
Affects Versions: Segment Tar 0.0.12
Reporter: Timothee Maret
Assignee: Timothee Maret
 Fix For: Segment Tar 0.0.14


The logging for loading/inspecting/marking segments is currently at INFO level, 
which generates a considerable amount of log output. The level should be moved to 
debug or trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)