[jira] [Resolved] (OAK-2300) QueryEngine should also make use of jcr:primaryType property restriction for NodeType condition

2015-07-29 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2300.
--
   Resolution: Duplicate
Fix Version/s: (was: 1.3.4)

> QueryEngine should also make use of jcr:primaryType property restriction for 
> NodeType condition
> ---
>
> Key: OAK-2300
> URL: https://issues.apache.org/jira/browse/OAK-2300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Chetan Mehrotra
>Priority: Minor
>
> Queries generated by the Jackrabbit GQL support [1] specify the nodeType as a 
> property restriction:
> {noformat}
> SELECT [jcr:path],
>        [jcr:score],
>        *
> FROM   [nt:base] AS a
> WHERE  ISDESCENDANTNODE(a, '/content/dam')
> AND    CONTAINS([jcr:content/metadata/dc:format], 'image')
> AND    [jcr:primaryType] = 'dam:Asset'
> AND    CONTAINS(*, 'foo')
> {noformat}
> For the above query {{Filter.matchesAllTypes}} returns true, which is not the 
> intention. This causes issues with {{LucenePropertyIndex}}, as it determines 
> the applicable {{indexingRule}} based on the nodeType.
> It would be helpful if the QueryEngine took such property restrictions into 
> account, or we fix GQL!
> [1] 
> http://jackrabbit.apache.org/api/2.1/org/apache/jackrabbit/commons/query/GQL.html
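&nbsp;
The improvement asked for here can be sketched in a self-contained way: if the selector's node type is {{nt:base}} but the query carries an equality restriction on {{jcr:primaryType}}, the effective type used for picking an indexing rule can be narrowed to that value. The class and method names below are hypothetical, not Oak API:

```java
import java.util.Map;

// Hypothetical sketch (not Oak API): narrow the effective node type for
// index selection when a query on nt:base restricts jcr:primaryType.
class NodeTypeNarrowing {
    static String effectiveNodeType(String selectorNodeType,
                                    Map<String, String> propertyRestrictions) {
        String primaryType = propertyRestrictions.get("jcr:primaryType");
        if ("nt:base".equals(selectorNodeType) && primaryType != null) {
            // the restriction pins the type, so matchesAllTypes should be false
            return primaryType;
        }
        return selectorNodeType;
    }
}
```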



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3165) Redundant test for duplicate membership in Group.addMember

2015-07-29 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646764#comment-14646764
 ] 

Tobias Bocanegra commented on OAK-3165:
---

yes. you are correct. the {{MembershipWriter}} already checks if the user is 
part of the members.

(we could also remove the check and just search for the first property that has 
fewer values than the defined max size. and/or we could remember where the last 
member was removed from. however, keeping the check like this is more robust, I 
think)

> Redundant test for duplicate membership in Group.addMember
> --
>
> Key: OAK-3165
> URL: https://issues.apache.org/jira/browse/OAK-3165
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.4
>
> Attachments: OAK-3165.patch
>
>
> while looking at the {{MembershipWriter.addMember}} again I spotted the 
> following code:
> {code}
> int bestCount = membershipSizeThreshold;
> PropertyState bestProperty = null;
> Tree bestTree = null;
> while (trees.hasNext()) {
>     Tree t = trees.next();
>     PropertyState refs = t.getProperty(UserConstants.REP_MEMBERS);
>     if (refs != null) {
>         int numRefs = 0;
>         for (String ref : refs.getValue(Type.WEAKREFERENCES)) {
>             if (ref.equals(memberContentId)) {
>                 return false;
>             }
>             numRefs++;
>         }
>         if (numRefs < bestCount) {
>             bestCount = numRefs;
>             bestProperty = refs;
>             bestTree = t;
>         }
>     }
> }
> {code}
> the fact that the loop doesn't stop once the first {{bestProperty}} has been 
> found, and that there is an explicit test for the references already containing 
> the contentID of the member to be added, indicates to me that the additional 
> test for {{isDeclaredMember(newMember)}} in {{GroupImpl}} is redundant and 
> can be omitted.
> proposed improvement and some additional test attached as patch.
> [~tripod], i would appreciate if you could confirm that my reading of your 
> code is correct.





[jira] [Updated] (OAK-3165) Redundant test for duplicate membership in Group.addMember

2015-07-29 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3165:

Attachment: OAK-3165.patch

> Redundant test for duplicate membership in Group.addMember
> --
>
> Key: OAK-3165
> URL: https://issues.apache.org/jira/browse/OAK-3165
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.4
>
> Attachments: OAK-3165.patch
>
>
> while looking at the {{MembershipWriter.addMember}} again I spotted the 
> following code:
> {code}
> int bestCount = membershipSizeThreshold;
> PropertyState bestProperty = null;
> Tree bestTree = null;
> while (trees.hasNext()) {
>     Tree t = trees.next();
>     PropertyState refs = t.getProperty(UserConstants.REP_MEMBERS);
>     if (refs != null) {
>         int numRefs = 0;
>         for (String ref : refs.getValue(Type.WEAKREFERENCES)) {
>             if (ref.equals(memberContentId)) {
>                 return false;
>             }
>             numRefs++;
>         }
>         if (numRefs < bestCount) {
>             bestCount = numRefs;
>             bestProperty = refs;
>             bestTree = t;
>         }
>     }
> }
> {code}
> the fact that the loop doesn't stop once the first {{bestProperty}} has been 
> found, and that there is an explicit test for the references already containing 
> the contentID of the member to be added, indicates to me that the additional 
> test for {{isDeclaredMember(newMember)}} in {{GroupImpl}} is redundant and 
> can be omitted.
> proposed improvement and some additional test attached as patch.
> [~tripod], i would appreciate if you could confirm that my reading of your 
> code is correct.





[jira] [Created] (OAK-3165) Redundant test for duplicate membership in Group.addMember

2015-07-29 Thread angela (JIRA)
angela created OAK-3165:
---

 Summary: Redundant test for duplicate membership in Group.addMember
 Key: OAK-3165
 URL: https://issues.apache.org/jira/browse/OAK-3165
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela
 Fix For: 1.3.4


while looking at the {{MembershipWriter.addMember}} again I spotted the 
following code:

{code}
int bestCount = membershipSizeThreshold;
PropertyState bestProperty = null;
Tree bestTree = null;
while (trees.hasNext()) {
    Tree t = trees.next();
    PropertyState refs = t.getProperty(UserConstants.REP_MEMBERS);
    if (refs != null) {
        int numRefs = 0;
        for (String ref : refs.getValue(Type.WEAKREFERENCES)) {
            if (ref.equals(memberContentId)) {
                return false;
            }
            numRefs++;
        }
        if (numRefs < bestCount) {
            bestCount = numRefs;
            bestProperty = refs;
            bestTree = t;
        }
    }
}
{code}

the fact that the loop doesn't stop once the first {{bestProperty}} has been found, 
and that there is an explicit test for the references already containing the 
contentID of the member to be added, indicates to me that the additional test 
for {{isDeclaredMember(newMember)}} in {{GroupImpl}} is redundant and can be 
omitted.

proposed improvement and some additional test attached as patch.
[~tripod], i would appreciate if you could confirm that my reading of your code 
is correct.
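&nbsp;
The argument can be illustrated with a self-contained toy model of the writer (hypothetical names, not the real Oak classes): because the scan over the existing {{rep:members}} values already returns {{false}} on a duplicate, a separate declared-membership check before calling it adds nothing.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (not the real MembershipWriter): each inner list stands for
// one rep:members property, capped at a membership size threshold.
class SimpleMembershipWriter {
    private final List<List<String>> properties = new ArrayList<>();
    private final int threshold;

    SimpleMembershipWriter(int threshold) {
        this.threshold = threshold;
    }

    boolean addMember(String memberContentId) {
        List<String> best = null;
        int bestCount = threshold;
        for (List<String> refs : properties) {
            if (refs.contains(memberContentId)) {
                return false; // duplicate is caught here, as in the loop above
            }
            if (refs.size() < bestCount) {
                bestCount = refs.size();
                best = refs;
            }
        }
        if (best == null) {
            // all properties are full (or none exist yet): start a new one
            best = new ArrayList<>();
            properties.add(best);
        }
        best.add(memberContentId);
        return true;
    }
}
```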





[jira] [Commented] (OAK-3154) Improve SimpleNodeAggregator performance with a NodeState cache

2015-07-29 Thread Joel Richard (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646211#comment-14646211
 ] 

Joel Richard commented on OAK-3154:
---

[~alex.parvulescu], I was told that this class might not be used much longer. 
Is that the case?

> Improve SimpleNodeAggregator performance with a NodeState cache
> ---
>
> Key: OAK-3154
> URL: https://issues.apache.org/jira/browse/OAK-3154
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.3.3
>Reporter: Joel Richard
>  Labels: performance
>
> I have profiled a query where 16% of the query fetching time is spent inside 
> of SimpleNodeAggregator.isNodeType. In my case, a lot of nodes which are read 
> have overlapping paths.
> Because the nodes seem to be iterated alphabetically, it would be possible to 
> cache the previous NodeState chain in an array and reuse as much as possible 
> if the previous and current path overlap. This would significantly reduce the 
> query fetching time in cases where a lot of paths are similar. Since the 
> NodeState cache array can be reused for the whole query execution, the 
> possible overhead of it should be negligible.
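&nbsp;
The proposed cache can be sketched in a self-contained way (hypothetical names, not the actual SimpleNodeAggregator): keep the previous path's segments, and on the next lookup reuse the common-prefix ancestors instead of resolving the full chain again.

```java
// Hypothetical sketch of the proposed optimization: consecutive,
// alphabetically ordered paths share prefixes, so the already-resolved
// ancestor chain can be reused and only the divergent tail re-resolved.
class PathChainCache {
    private String[] prevSegments = new String[0];

    // Returns how many leading ancestors could be reused from the previous
    // call; only the remaining tail would need fresh NodeState lookups.
    int resolve(String path) {
        String[] segments = path.substring(1).split("/");
        int common = 0;
        while (common < segments.length
                && common < prevSegments.length
                && segments[common].equals(prevSegments[common])) {
            common++;
        }
        prevSegments = segments;
        return common;
    }
}
```

Since the cache only ever holds one segment array, reusing it for the whole query execution adds negligible overhead, as the description argues.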





[jira] [Commented] (OAK-3155) AsyncIndex stats do not capture execution for runs where no indexing is performed

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646144#comment-14646144
 ] 

Chetan Mehrotra commented on OAK-3155:
--

Linking to OAK-3164, as the testcase for this issue fails due to how checkpoints are 
managed within MemoryNodeStore.

> AsyncIndex stats do not capture execution for runs where no indexing is 
> performed
> -
>
> Key: OAK-3155
> URL: https://issues.apache.org/jira/browse/OAK-3155
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3155.patch
>
>
> {{AsyncIndexStats}} currently provides a time series of stats around when the 
> async indexing runs happened and how long they took. The current logic only 
> collects stats when actual indexing is performed; if no change happened in the 
> repository since the last run, the stats for that run are ignored.
> Due to this, a time series of when runs happened might not be accurate and 
> gives a false impression that async indexing is slow.
> As a fix we should record execution stats for every run.
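&nbsp;
The proposed fix amounts to something like the following sketch (hypothetical names, not the actual AsyncIndexStats): record a data point for every run, and keep a separate count of the runs that actually indexed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: record execution stats for every async indexing
// run, including no-op runs, so the time series shows the real cadence.
class AsyncRunStats {
    private final List<Long> durationsNanos = new ArrayList<>();
    private int indexingRuns = 0;

    void recordRun(long durationNanos, boolean indexingPerformed) {
        durationsNanos.add(durationNanos); // every run is recorded
        if (indexingPerformed) {
            indexingRuns++;
        }
    }

    int totalRuns() {
        return durationsNanos.size();
    }

    int indexingRuns() {
        return indexingRuns;
    }
}
```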





[jira] [Updated] (OAK-3164) MemoryNodeStore issues duplicate checkpoint

2015-07-29 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3164:
-
Attachment: OAK-3164.patch

[patch|^OAK-3164.patch] with a testcase and a fix which replaces the current naming 
logic with an increasing counter to ensure that names are not duplicated.

[~alex.parvulescu] [~mduerig] Can you review the patch?

> MemoryNodeStore issues duplicate checkpoint
> ---
>
> Key: OAK-3164
> URL: https://issues.apache.org/jira/browse/OAK-3164
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: e
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3164.patch
>
>
> {{MemoryNodeStore}} can possibly issue duplicate checkpoints. In the logic below, 
> if {{checkpoints}} has one entry, then a subsequent call to issue a checkpoint 
> would again set the name to {{checkpoint1}}. This causes an AsyncIndexUpdate 
> test to fail in some cases:
> {code}
> @Override
> public String checkpoint(long lifetime, @Nonnull Map<String, String> properties) {
>     checkArgument(lifetime > 0);
>     checkNotNull(properties);
>     String checkpoint = "checkpoint" + checkpoints.size();
>     checkpoints.put(checkpoint, new Checkpoint(getRoot(), properties));
>     return checkpoint;
> }
> {code}





[jira] [Created] (OAK-3164) MemoryNodeStore issues duplicate checkpoint

2015-07-29 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3164:


 Summary: MemoryNodeStore issues duplicate checkpoint
 Key: OAK-3164
 URL: https://issues.apache.org/jira/browse/OAK-3164
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: e
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.3.4


{{MemoryNodeStore}} can possibly issue duplicate checkpoints. In the logic below, if 
{{checkpoints}} has one entry, then a subsequent call to issue a checkpoint would 
again set the name to {{checkpoint1}}. This causes an AsyncIndexUpdate test to 
fail in some cases:

{code}
@Override
public String checkpoint(long lifetime, @Nonnull Map<String, String> properties) {
    checkArgument(lifetime > 0);
    checkNotNull(properties);
    String checkpoint = "checkpoint" + checkpoints.size();
    checkpoints.put(checkpoint, new Checkpoint(getRoot(), properties));
    return checkpoint;
}
{code}
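&nbsp;
The attached fix replaces the size-based name with an increasing counter. The difference can be shown with a self-contained sketch (simplified, not the real MemoryNodeStore):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch, not the real MemoryNodeStore: size-based naming can
// reuse a name once a checkpoint has been released, while a counter cannot.
class CheckpointNames {
    private final Map<String, String> checkpoints = new LinkedHashMap<>();
    private long counter = 0;

    // the buggy scheme: derives the name from the current map size
    String sizeBasedName() {
        return "checkpoint" + checkpoints.size();
    }

    // the fixed scheme: names are never reissued
    String checkpoint(String rootState) {
        String name = "checkpoint" + counter++;
        checkpoints.put(name, rootState);
        return name;
    }

    void release(String name) {
        checkpoints.remove(name);
    }
}
```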





[jira] [Updated] (OAK-2292) Use ConcurrentUpdateSolrServer for remote updates

2015-07-29 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated OAK-2292:
-
Fix Version/s: 1.0.18

> Use ConcurrentUpdateSolrServer for remote updates
> -
>
> Key: OAK-2292
> URL: https://issues.apache.org/jira/browse/OAK-2292
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: solr
>Affects Versions: 1.1.2
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.1.5, 1.0.18
>
>
> For remote Solr we are using the HttpSolrServer, which is fine for searching; 
> however, for indexing the ConcurrentUpdateSolrServer should be much more performant.
> I did a quick local test and the timing for indexing 19k docs goes down from 
> 3 mins to 5 seconds.
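&nbsp;
ConcurrentUpdateSolrServer gains its speed by buffering documents and sending them in batched requests from background threads, instead of one HTTP round trip per document as with HttpSolrServer. The batching effect alone can be sketched in a self-contained way (hypothetical class, no SolrJ dependency):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the batching idea behind ConcurrentUpdateSolrServer:
// many documents per request instead of one request per document.
class BatchingIndexClient {
    private final List<String> buffer = new ArrayList<>();
    private final int batchSize;
    private int requestsSent = 0;

    BatchingIndexClient(int batchSize) {
        this.batchSize = batchSize;
    }

    void add(String doc) {
        buffer.add(doc);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        requestsSent++; // one (simulated) HTTP request for the whole batch
        buffer.clear();
    }

    int requestsSent() {
        return requestsSent;
    }
}
```

With 19k documents and a batch size in the hundreds, the number of round trips drops by two to three orders of magnitude, which is consistent with the speedup reported above.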





[jira] [Resolved] (OAK-3160) Implement Session.hasPermission(String, String...) and support for additional actions

2015-07-29 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-3160.
-
   Resolution: Fixed
Fix Version/s: 1.3.4

Committed revision 1693270.


> Implement Session.hasPermission(String, String...) and support for additional 
> actions
> -
>
> Key: OAK-3160
> URL: https://issues.apache.org/jira/browse/OAK-3160
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, jcr
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.4
>
>
> see JCR-3885 for details. this is the Oak part of this improvement, as the 
> implementation of JackrabbitSession and the underlying action resolution 
> need to be adjusted in Oak as well.
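&nbsp;
Assuming the new varargs overload keeps the all-of semantics of {{Session.hasPermission}} (every listed action must be granted on the path), the action resolution reduces to something like this sketch (hypothetical helper, not the actual Oak code):

```java
import java.util.Set;

// Hypothetical helper mirroring the assumed semantics of
// JackrabbitSession.hasPermission(String, String...): true only if every
// requested action is granted on the path.
class ActionCheck {
    static boolean hasPermission(Set<String> grantedActions, String... actions) {
        if (actions.length == 0) {
            return false; // assumption: an empty action list grants nothing
        }
        for (String action : actions) {
            if (!grantedActions.contains(action)) {
                return false;
            }
        }
        return true;
    }
}
```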





[jira] [Updated] (OAK-3162) ArrayIndexOutOfBoundsException in Lucene indexer at time of index close

2015-07-29 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3162:
---
Environment: 1.0.16  (was: OAK-1.0.16)

> ArrayIndexOutOfBoundsException in Lucene indexer at time of index close
> ---
>
> Key: OAK-3162
> URL: https://issues.apache.org/jira/browse/OAK-3162
> Project: Jackrabbit Oak
>  Issue Type: Bug
> Environment: 1.0.16
>Reporter: Vikas Saurabh
>
> In our local environment using 1.0.16, we are seeing the following exception 
> constantly.
> {noformat}
> 28.07.2015 03:19:25.517 *ERROR* [pool-6-thread-3] 
> org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
> execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@1c7ba9ed : 1713
> java.lang.ArrayIndexOutOfBoundsException: 1713
> at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:147)
> at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:76)
> at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:67)
> at 
> org.apache.lucene.index.SegmentMerger.setDocMaps(SegmentMerger.java:352)
> at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:64)
> at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1941)
> at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3124)
> at 
> org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
> at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
> at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:159)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:186)
> at 
> org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
> at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
> at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
> at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
> at 
> org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
> (cc [~chetanm])





[jira] [Updated] (OAK-3162) ArrayIndexOutOfBoundsException in Lucene indexer at time of index close

2015-07-29 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3162:
---
Description: 
In our local environment using 1.0.16, we are seeing the following exception 
constantly.
{noformat}
28.07.2015 03:19:25.517 *ERROR* [pool-6-thread-3] 
org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@1c7ba9ed : 1713
java.lang.ArrayIndexOutOfBoundsException: 1713
at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:147)
at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:76)
at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:67)
at 
org.apache.lucene.index.SegmentMerger.setDocMaps(SegmentMerger.java:352)
at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:64)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1941)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3124)
at 
org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:159)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:186)
at 
org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
at 
org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}

(cc [~chetanm])

  was:
In our local environment using 1.0.16, we are seeing the following exception 
constantly. Along with that, the logs are also showing a "Reindexing will be 
performed" message over and over.
{noformat}
28.07.2015 03:19:25.517 *ERROR* [pool-6-thread-3] 
org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@1c7ba9ed : 1713
java.lang.ArrayIndexOutOfBoundsException: 1713
at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:147)
at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:76)
at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:67)
at 
org.apache.lucene.index.SegmentMerger.setDocMaps(SegmentMerger.java:352)
at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:64)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1941)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3124)
at 
org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:159)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:186)
at 
org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
at 
or

[jira] [Updated] (OAK-3162) ArrayIndexOutOfBoundsException in Lucene indexer at time of index close

2015-07-29 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3162:
-
Summary: ArrayIndexOutOfBoundsException in Lucene indexer at time of index 
close  (was: Seeing following exception during async index update )

> ArrayIndexOutOfBoundsException in Lucene indexer at time of index close
> ---
>
> Key: OAK-3162
> URL: https://issues.apache.org/jira/browse/OAK-3162
> Project: Jackrabbit Oak
>  Issue Type: Bug
> Environment: OAK-1.0.16
>Reporter: Vikas Saurabh
>
> In our local environment using 1.0.16, we are seeing the following exception 
> constantly. Along with that, the logs are also showing a "Reindexing will be 
> performed" message over and over.
> {noformat}
> 28.07.2015 03:19:25.517 *ERROR* [pool-6-thread-3] 
> org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
> execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@1c7ba9ed : 1713
> java.lang.ArrayIndexOutOfBoundsException: 1713
> at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:147)
> at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:76)
> at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:67)
> at 
> org.apache.lucene.index.SegmentMerger.setDocMaps(SegmentMerger.java:352)
> at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:64)
> at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1941)
> at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3124)
> at 
> org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
> at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
> at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:159)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:186)
> at 
> org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
> at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
> at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
> at 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
> at 
> org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
> (cc [~chetanm])





[jira] [Comment Edited] (OAK-3163) Improve binary comparison during repeated upgrades

2015-07-29 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645982#comment-14645982
 ] 

Julian Sedding edited comment on OAK-3163 at 7/29/15 12:45 PM:
---

Attached patch that leverages the content-identity comparison in 
{{AbstractBlob#equal}}. This speeds up comparison of binaries and thus the 
upgrade of repositories with a significant amount of binaries.

The copy phase while incrementally upgrading a repository with 2.6 million nodes 
was 4x faster with this patch (43 min vs 10 min).


was (Author: jsedding):
Attached patch that leverages the content-identity comparison in 
{{AbstractBlob#equal}}. This speeds up comparison of binaries and thus the 
upgrade of repositories with a significant amount of binaries.

> Improve binary comparison during repeated upgrades
> --
>
> Key: OAK-3163
> URL: https://issues.apache.org/jira/browse/OAK-3163
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.2.3, 1.3.3
>Reporter: Julian Sedding
>  Labels: performance
> Attachments: OAK-3163.patch
>
>
> OAK-2619 introduced the possibility to run the same upgrade repeatedly and 
> achieve incremental upgrades. This reduces the time taken for {{CommitHook}} 
> processing, because the diff is reduced.
> However, for incremental upgrades to be really useful, the content-copy phase 
> needs to be fast. An optimization that was proposed in OAK-2626 is currently 
> lost, due to the implementation of a better solution to part of the problem.





[jira] [Updated] (OAK-3163) Improve binary comparison during repeated upgrades

2015-07-29 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated OAK-3163:

Attachment: OAK-3163.patch

Attached patch that leverages the content-identity comparison in 
{{AbstractBlob#equal}}. This speeds up comparison of binaries and thus the 
upgrade of repositories with a significant amount of binaries.

> Improve binary comparison during repeated upgrades
> --
>
> Key: OAK-3163
> URL: https://issues.apache.org/jira/browse/OAK-3163
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.2.3, 1.3.3
>Reporter: Julian Sedding
>  Labels: performance
> Attachments: OAK-3163.patch
>
>
> OAK-2619 introduced the possibility to run the same upgrade repeatedly and 
> achieve incremental upgrades. This reduces the time taken for {{CommitHook}} 
> processing, because the diff is reduced.
> However, for incremental upgrades to be really useful, the content-copy phase 
> needs to be fast. An optimization that was proposed in OAK-2626 is currently 
> lost, due to the implementation of a better solution to part of the problem.





[jira] [Created] (OAK-3163) Improve binary comparison during repeated upgrades

2015-07-29 Thread Julian Sedding (JIRA)
Julian Sedding created OAK-3163:
---

 Summary: Improve binary comparison during repeated upgrades
 Key: OAK-3163
 URL: https://issues.apache.org/jira/browse/OAK-3163
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Affects Versions: 1.3.3, 1.2.3
Reporter: Julian Sedding


OAK-2619 introduced the possibility to run the same upgrade repeatedly and 
achieve incremental upgrades. This reduces the time taken for {{CommitHook}} 
processing, because the diff is reduced.

However, for incremental upgrades to be really useful, the content-copy phase 
needs to be fast. An optimization that was proposed in OAK-2626 is currently 
lost, due to the implementation of a better solution to part of the problem.
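&nbsp;
The optimization in question short-circuits on the blobs' content identity (e.g. a content hash) before falling back to streaming bytes. A self-contained sketch (hypothetical types, not the actual {{AbstractBlob}}):

```java
import java.util.Arrays;

// Hypothetical sketch of content-identity comparison: if both blobs
// expose the same identity, they are equal without reading any bytes;
// otherwise fall back to a byte-wise comparison.
class BlobCompare {
    static final class Blob {
        final String contentIdentity; // e.g. a content hash; may be null
        final byte[] data;

        Blob(String contentIdentity, byte[] data) {
            this.contentIdentity = contentIdentity;
            this.data = data;
        }
    }

    static boolean equal(Blob a, Blob b) {
        if (a.contentIdentity != null && a.contentIdentity.equals(b.contentIdentity)) {
            return true; // identity match: no bytes streamed
        }
        return Arrays.equals(a.data, b.data);
    }
}
```

For a content-addressed datastore the identity check is a cheap string comparison, which is what makes repeated upgrades of binary-heavy repositories so much faster.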





[jira] [Commented] (OAK-3161) Extend documentation for FileDataStore in http://jackrabbit.apache.org/oak/docs/osgi_config.html#Jackrabbit_2_-_FileDataStore

2015-07-29 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645978#comment-14645978
 ] 

Konrad Windszus commented on OAK-3161:
--

I just figured out that if the path is not set but the FileDataStore is used, the 
default (in AEM 6.1) for the datastore path is 
{{crx-quickstart/repository/repository/datastore}}, which looks like a bug to me.

> Extend documentation for FileDataStore in 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#Jackrabbit_2_-_FileDataStore
> -
>
> Key: OAK-3161
> URL: https://issues.apache.org/jira/browse/OAK-3161
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: doc
>Reporter: Konrad Windszus
>
> Currently in 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#Jackrabbit_2_-_FileDataStore
>  only the properties
> path, minRecordLength, maxCachedBinarySize, cacheSizeInMB are described.
> Some things are still missing though:
> - repositoryHome (what is the difference to path?)
> - encodeLengthInId
> - for path it is not clear what the default is (because I can leave that 
> property out, so either this property is not mandatory or it has a default!)
> There might be also some other properties which are not listed here. This is 
> especially important as there is no Manifest generated for that OSGi service 
> (where one could look up what can be configured). It is also hard to figure 
> out from looking at the code, since not only properties directly set on 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/AbstractDataStoreService.java
>  are supported, but also all fields accessible through setters on the 
> underlying Jackrabbit 2 Datastore 
> (https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/AbstractDataStoreService.java#L63).
> So probably it would be good to link the documentation of that here as well 
> (http://wiki.apache.org/jackrabbit/DataStore#Configuration).
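&nbsp;
For reference, a configuration for the Jackrabbit 2 FileDataStore in Oak typically looks like the sketch below (property names are the ones described on the page above; the values are examples only and the exact PID should be verified against your Oak version):

```
# OSGi config sketch for org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore
# (example values, not defaults)
path=./repository/datastore
minRecordLength=4096
maxCachedBinarySize=17408
cacheSizeInMB=128
```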





[jira] [Created] (OAK-3162) Seeing following exception during async index update

2015-07-29 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-3162:
--

 Summary: Seeing following exception during async index update 
 Key: OAK-3162
 URL: https://issues.apache.org/jira/browse/OAK-3162
 Project: Jackrabbit Oak
  Issue Type: Bug
 Environment: OAK-1.0.16
Reporter: Vikas Saurabh


In our local environment using 1.0.16, we are seeing the following exception 
constantly. Along with that, the logs are also showing a "Reindexing will be 
performed" message over and over.
{noformat}
28.07.2015 03:19:25.517 *ERROR* [pool-6-thread-3] org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@1c7ba9ed : 1713
java.lang.ArrayIndexOutOfBoundsException: 1713
at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:147)
at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:76)
at org.apache.lucene.index.MergeState$DocMap.build(MergeState.java:67)
at org.apache.lucene.index.SegmentMerger.setDocMaps(SegmentMerger.java:352)
at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:64)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1941)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3124)
at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:159)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:186)
at org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
at org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
at org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
at org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}

(cc [~chetanm])





[jira] [Created] (OAK-3161) Extend documentation for FileDataStore in http://jackrabbit.apache.org/oak/docs/osgi_config.html#Jackrabbit_2_-_FileDataStore

2015-07-29 Thread Konrad Windszus (JIRA)
Konrad Windszus created OAK-3161:


 Summary: Extend documentation for FileDataStore in 
http://jackrabbit.apache.org/oak/docs/osgi_config.html#Jackrabbit_2_-_FileDataStore
 Key: OAK-3161
 URL: https://issues.apache.org/jira/browse/OAK-3161
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: doc
Reporter: Konrad Windszus


Currently in 
http://jackrabbit.apache.org/oak/docs/osgi_config.html#Jackrabbit_2_-_FileDataStore
 only the properties
path, minRecordLength, maxCachedBinarySize, cacheSizeInMB are described.
Some things are still missing though:
- repositoryHome (what is the difference to path?)
- encodeLengthInId
- for path it is not clear what the default is (because I can leave that 
property out, so either this property is not mandatory or it has a default!)

There might also be some other properties which are not listed here. This is 
especially important as there is no Manifest generated for that OSGi service 
(where one could look up what can be configured). It is also hard to figure out 
from looking at the code, since not only properties directly set on 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/AbstractDataStoreService.java
 are supported, but also all fields accessible through setters on the 
underlying Jackrabbit 2 Datastore 
(https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/blob/datastore/AbstractDataStoreService.java#L63).
So probably it would be good to link the documentation of that here as well 
(http://wiki.apache.org/jackrabbit/DataStore#Configuration).





[jira] [Commented] (OAK-2707) Performance: Consider cache for SessionImpl#getNamespacePrefixes

2015-07-29 Thread Joel Richard (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645948#comment-14645948
 ] 

Joel Richard commented on OAK-2707:
---

Sling now caches the namespaces globally for the new JcrValueMap. 
Unfortunately, the problem remains for the JcrPropertyMap, which is part of the 
Sling API.
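A session-level cache as suggested in the issue could look roughly like this — a minimal plain-Java sketch with hypothetical names (`PrefixCache`, `invalidate`), not the actual Oak `SessionImpl` code:

```java
import java.util.function.Supplier;

// Toy session-scoped cache: resolve the prefixes once, reuse them for every
// subsequent call, and invalidate only when the namespace registry changes.
class PrefixCache {
    private final Supplier<String[]> loader;  // stands in for the registry lookup
    private String[] cached;

    PrefixCache(Supplier<String[]> loader) { this.loader = loader; }

    synchronized String[] getPrefixes() {
        if (cached == null) {
            cached = loader.get();  // hit the registry only on the first call
        }
        return cached.clone();
    }

    synchronized void invalidate() { cached = null; }  // on namespace change
}

public class PrefixCacheDemo {
    public static void main(String[] args) {
        int[] lookups = {0};
        PrefixCache cache = new PrefixCache(() -> {
            lookups[0]++;
            return new String[] {"jcr", "nt", "mix"};
        });
        // simulate the 5000-8000 calls of a single page render
        for (int i = 0; i < 5000; i++) cache.getPrefixes();
        System.out.println(lookups[0]);  // 1
    }
}
```

With such a cache, the 5000-8000 calls per page load mentioned in the description would hit the registry once per session instead of once per call.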

> Performance: Consider cache for SessionImpl#getNamespacePrefixes
> 
>
> Key: OAK-2707
> URL: https://issues.apache.org/jira/browse/OAK-2707
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Affects Versions: 1.1.7
>Reporter: Joel Richard
>Priority: Critical
>  Labels: performance
> Attachments: OAK-2707.patch, Screen Shot 2015-03-30 at 08.38.58.png
>
>
> Session#getNamespacePrefixes is heavily used in Sling (see 
> org.apache.sling.jcr.resource.JcrPropertyMap#escapeKeyName) and can easily be 
> called 5000-8000 times per page load.
> Therefore, it adds 10-15% page rendering overhead for Sling-based websites 
> (see attachment). Would it be possible to add either a global cache or at 
> least a session cache for this method?





[jira] [Assigned] (OAK-3160) Implement Session.hasPermission(String, String...) and support for additional actions

2015-07-29 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reassigned OAK-3160:
---

Assignee: angela

> Implement Session.hasPermission(String, String...) and support for additional 
> actions
> -
>
> Key: OAK-3160
> URL: https://issues.apache.org/jira/browse/OAK-3160
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, jcr
>Reporter: angela
>Assignee: angela
>
> see JCR-3885 for details. this is the Oak part of this improvement as the 
> implementation of JackrabbitSession and the underlying action resolution 
> needs to be adjusted in Oak as well.





[jira] [Created] (OAK-3160) Implement Session.hasPermission(String, String...) and support for additional actions

2015-07-29 Thread angela (JIRA)
angela created OAK-3160:
---

 Summary: Implement Session.hasPermission(String, String...) and 
support for additional actions
 Key: OAK-3160
 URL: https://issues.apache.org/jira/browse/OAK-3160
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Reporter: angela


see JCR-3885 for details. this is the Oak part of this improvement as the 
implementation of JackrabbitSession and the underlying action resolution needs 
to be adjusted in Oak as well.





[jira] [Commented] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645832#comment-14645832
 ] 

Chetan Mehrotra commented on OAK-3145:
--

bq. the whiteboard/tracker pattern doesn't seem to support the push model

The suggested approach is to always call {{Tracker.getServices()}} at the point 
of use instead of calling it once at init time. The call is cheap as it does 
not perform a lookup (pull) at call time; instead the registered services are 
tracked continuously via a ServiceTracker. For an example see 
{{WhiteboardIndexProvider}}
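As a rough illustration of that pull-each-time pattern, here is a toy model with made-up names (`ToyTracker`), not the actual Oak Whiteboard/Tracker API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy stand-in for a whiteboard tracker: consumers call getServices() on every
// use instead of caching the result once at init time, so late-registered
// services are picked up automatically.
class ToyTracker<T> {
    private final List<T> registered = new CopyOnWriteArrayList<>();

    void register(T service) { registered.add(service); }

    // Cheap call: returns the already-tracked set; no lookup happens here.
    List<T> getServices() { return new ArrayList<>(registered); }
}

public class TrackerDemo {
    public static void main(String[] args) {
        ToyTracker<String> tracker = new ToyTracker<>();
        tracker.register("descriptorsA");
        // A consumer that cached getServices() once at init would miss later ones:
        List<String> early = tracker.getServices();
        tracker.register("descriptorsB");  // late-joining service
        List<String> late = tracker.getServices();
        System.out.println(early.size() + " -> " + late.size());  // 1 -> 2
    }
}
```

The point of the pattern is that the caller re-reads the tracked set on each use, so a "push" callback is unnecessary for correctness.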

> Allow plugging in additional jcr-descriptors
> 
>
> Key: OAK-3145
> URL: https://issues.apache.org/jira/browse/OAK-3145
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.3
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3145.patch, OAK-3145.v2.patch
>
>
> The current way oak compiles the JCR descriptors is somewhat hard-coded and 
> does not allow an easy way to extend it. Currently oak-jcr's 
> {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
> {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
> {{JCRDescriptorsImpl}}). The result of this is then put into the 
> {{RepositoryImpl.descriptors}} field - which is subsequently used.
> While these descriptors can explicitly be overwritten via {{put}}, that all 
> has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
> There's no plug-in mechanism.
> Using the whiteboard pattern there should be a mechanism that allows to 
> provide services that implement {{Descriptors.class}} which could then be 
> woven into the {{GeneralDescriptors}} in use by {{ContentRepositorImpl}} for 
> example. The plugin mechanism would thus allow hooking in Descriptors as 'the 
> base' which are still overwritten by oak-core/oak-jcr's own defaults (to 
> ensure the final say is with oak-core/oak-jcr) - as opening up such a 
> *Descriptors-plugin mechanism could potentially be used by 3rd party code 
> too*.
> PS: this is a spin-off of OAK-2844





[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore

2015-07-29 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3159:
---
Fix Version/s: 1.3.6

> Extend documentation for SegmentNodeStoreService in 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
> ---
>
> Key: OAK-3159
> URL: https://issues.apache.org/jira/browse/OAK-3159
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: doc
>Reporter: Konrad Windszus
> Fix For: 1.3.6
>
>
> Currently the documentation at 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only 
> documents the properties
> # repository.home and
> # tarmk.size
> All the other properties, like customBlobStore and tarmk.mode, are not 
> documented. Please extend that. It would also be good if the table could be 
> extended with the supported type of each individual property.





[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore

2015-07-29 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3159:
---
Component/s: doc

> Extend documentation for SegmentNodeStoreService in 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
> ---
>
> Key: OAK-3159
> URL: https://issues.apache.org/jira/browse/OAK-3159
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: doc
>Reporter: Konrad Windszus
> Fix For: 1.3.6
>
>
> Currently the documentation at 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only 
> documents the properties
> # repository.home and
> # tarmk.size
> All the other properties, like customBlobStore and tarmk.mode, are not 
> documented. Please extend that. It would also be good if the table could be 
> extended with the supported type of each individual property.





[jira] [Commented] (OAK-1333) SegmentMK: Support for Blobs in external storage

2015-07-29 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645830#comment-14645830
 ] 

Konrad Windszus commented on OAK-1333:
--

Done in OAK-3159.

> SegmentMK: Support for Blobs in external storage
> 
>
> Key: OAK-1333
> URL: https://issues.apache.org/jira/browse/OAK-1333
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segmentmk
>Affects Versions: 0.14
>Reporter: Dominique Pfister
>Assignee: Chetan Mehrotra
> Fix For: 0.19
>
> Attachments: OAK-1333.patch, patch.txt, patch2.txt
>
>
> Currently, when putting a Blob into a PropertyState, the SegmentNodeStore 
> automatically consumes the Blob if it isn't of type SegmentBlob, and stores 
> its contents in the SegmentStore.
> It would be nice, if one could put _external_ Blobs as well, where the 
> SegmentStore would only store a reference to it.





[jira] [Created] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore

2015-07-29 Thread Konrad Windszus (JIRA)
Konrad Windszus created OAK-3159:


 Summary: Extend documentation for SegmentNodeStoreService in 
http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
 Key: OAK-3159
 URL: https://issues.apache.org/jira/browse/OAK-3159
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Konrad Windszus


Currently the documentation at 
http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only 
documents the properties
# repository.home and
# tarmk.size
All the other properties, like customBlobStore and tarmk.mode, are not 
documented. Please extend that. It would also be good if the table could be 
extended with the supported type of each individual property.





[jira] [Commented] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-29 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645829#comment-14645829
 ] 

Stefan Egli commented on OAK-3145:
--

Agreed, it should be closed, especially in its current state where it doesn't 
continuously monitor the registrations. I'll do that.

More generally speaking: the whiteboard/tracker pattern doesn't seem to support 
the push model (of being notified when a new service is registered) but rather 
relies on the pull model (where you have to call {{Tracker.getServices()}} 
periodically in case you want late-joining services to be taken into account). 
Or am I missing something?

> Allow plugging in additional jcr-descriptors
> 
>
> Key: OAK-3145
> URL: https://issues.apache.org/jira/browse/OAK-3145
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.3
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3145.patch, OAK-3145.v2.patch
>
>
> The current way oak compiles the JCR descriptors is somewhat hard-coded and 
> does not allow an easy way to extend it. Currently oak-jcr's 
> {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
> {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
> {{JCRDescriptorsImpl}}). The result of this is then put into the 
> {{RepositoryImpl.descriptors}} field - which is subsequently used.
> While these descriptors can explicitly be overwritten via {{put}}, that all 
> has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
> There's no plug-in mechanism.
> Using the whiteboard pattern there should be a mechanism that allows to 
> provide services that implement {{Descriptors.class}} which could then be 
> woven into the {{GeneralDescriptors}} in use by {{ContentRepositorImpl}} for 
> example. The plugin mechanism would thus allow hooking in Descriptors as 'the 
> base' which are still overwritten by oak-core/oak-jcr's own defaults (to 
> ensure the final say is with oak-core/oak-jcr) - as opening up such a 
> *Descriptors-plugin mechanism could potentially be used by 3rd party code 
> too*.
> PS: this is a spin-off of OAK-2844





[jira] [Commented] (OAK-1333) SegmentMK: Support for Blobs in external storage

2015-07-29 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645823#comment-14645823
 ] 

Michael Dürig commented on OAK-1333:


Please open a dedicated issue for the doc component for tracking this. 

> SegmentMK: Support for Blobs in external storage
> 
>
> Key: OAK-1333
> URL: https://issues.apache.org/jira/browse/OAK-1333
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segmentmk
>Affects Versions: 0.14
>Reporter: Dominique Pfister
>Assignee: Chetan Mehrotra
> Fix For: 0.19
>
> Attachments: OAK-1333.patch, patch.txt, patch2.txt
>
>
> Currently, when putting a Blob into a PropertyState, the SegmentNodeStore 
> automatically consumes the Blob if it isn't of type SegmentBlob, and stores 
> its contents in the SegmentStore.
> It would be nice, if one could put _external_ Blobs as well, where the 
> SegmentStore would only store a reference to it.





[jira] [Commented] (OAK-1333) SegmentMK: Support for Blobs in external storage

2015-07-29 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645791#comment-14645791
 ] 

Konrad Windszus commented on OAK-1333:
--

Can the documentation in 
http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore be 
adjusted to clarify when this boolean property customBlobStore needs to be set?

> SegmentMK: Support for Blobs in external storage
> 
>
> Key: OAK-1333
> URL: https://issues.apache.org/jira/browse/OAK-1333
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segmentmk
>Affects Versions: 0.14
>Reporter: Dominique Pfister
>Assignee: Chetan Mehrotra
> Fix For: 0.19
>
> Attachments: OAK-1333.patch, patch.txt, patch2.txt
>
>
> Currently, when putting a Blob into a PropertyState, the SegmentNodeStore 
> automatically consumes the Blob if it isn't of type SegmentBlob, and stores 
> its contents in the SegmentStore.
> It would be nice, if one could put _external_ Blobs as well, where the 
> SegmentStore would only store a reference to it.





[jira] [Created] (OAK-3158) IAE when specifying 2G cache for FileStore

2015-07-29 Thread Michael Dürig (JIRA)
Michael Dürig created OAK-3158:
--

 Summary: IAE when specifying 2G cache for FileStore
 Key: OAK-3158
 URL: https://issues.apache.org/jira/browse/OAK-3158
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Michael Dürig
Priority: Minor
 Fix For: 1.3.6


{{FileStore.newFileStore(dir).withCacheSize(2048)}} results in

{noformat}Max memory must not be negative
java.lang.IllegalArgumentException: Max memory must not be negative
at org.apache.jackrabbit.oak.cache.CacheLIRS.setMaxMemory(CacheLIRS.java:464)
at org.apache.jackrabbit.oak.cache.CacheLIRS.<init>(CacheLIRS.java:163)
at org.apache.jackrabbit.oak.cache.CacheLIRS$Builder.build(CacheLIRS.java:1537)
at org.apache.jackrabbit.oak.cache.CacheLIRS$Builder.build(CacheLIRS.java:1533)
at org.apache.jackrabbit.oak.plugins.segment.StringCache.<init>(StringCache.java:52)
at org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.<init>(SegmentTracker.java:126)
at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:343)
at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:84)
at org.apache.jackrabbit.oak.plugins.segment.file.FileStore$Builder.create(FileStore.java:294)
{noformat}

There is an integer overflow caused by using ints instead of longs to specify 
the cache size.
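The wrap-around is easy to reproduce in isolation: for 2048 MB the byte count is exactly 2^31, which overflows a Java int to a negative value. This is an illustrative sketch of the arithmetic, not the FileStore code itself:

```java
public class CacheSizeOverflow {
    public static void main(String[] args) {
        int cacheSizeMB = 2048;
        // int arithmetic wraps: 2048 * 1024 * 1024 == 2^31 overflows to -2^31
        int wrong = cacheSizeMB * 1024 * 1024;
        // widening to long before multiplying keeps the value intact
        long right = (long) cacheSizeMB * 1024 * 1024;
        System.out.println(wrong);   // -2147483648
        System.out.println(right);   // 2147483648
    }
}
```

The negative result is what trips the "Max memory must not be negative" check in CacheLIRS above.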

[~tmueller], could you have a look?





[jira] [Commented] (OAK-3155) AsyncIndex stats do not capture execution for runs where no indexing is performed

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645785#comment-14645785
 ] 

Chetan Mehrotra commented on OAK-3155:
--

bq. What will happen is the #start method will get called again on the next 
run, so the watch will reset making the intention to keep the stats running 
superfluous.

That would still be fine. There are two stats we collect:
# Execution count
# Time taken for each execution

In case of a missed checkpoint, the execution count would not get updated for 
that run and the execution time would also be assumed to be zero. In the next 
cycle, when the checkpoint can be obtained, the execution count would be 
increased and the time taken would only cover that run, as the watch is reset. 
So the end stats are still correct.

However, if we increment the count even when the checkpoint cannot be obtained, 
it gives a false impression that indexing is running fine, i.e. that the number 
of executions per minute is high, which is not the real case. Probably we 
should track checkpoint misses separately: count the number of checkpoint 
misses over a certain duration, say 1 min, and if the count is more than 5, log 
a warning/info message.
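A sketch of that idea, with hypothetical names and thresholds (this is not Oak code): count misses inside a sliding one-minute window and flag a warning once more than 5 accumulate.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical checkpoint-miss tracker: keeps miss timestamps from the last
// minute and reports when more than 5 fall inside that window.
class CheckpointMissTracker {
    private static final long WINDOW_MS = 60_000;
    private static final int THRESHOLD = 5;
    private final Deque<Long> misses = new ArrayDeque<>();

    /** Records a miss at {@code now}; returns true if a warning should be logged. */
    boolean recordMiss(long now) {
        misses.addLast(now);
        // drop misses that have fallen out of the one-minute window
        while (!misses.isEmpty() && now - misses.peekFirst() > WINDOW_MS) {
            misses.removeFirst();
        }
        return misses.size() > THRESHOLD;
    }
}

public class MissDemo {
    public static void main(String[] args) {
        CheckpointMissTracker tracker = new CheckpointMissTracker();
        boolean warn = false;
        for (int i = 0; i < 6; i++) {
            warn = tracker.recordMiss(i * 1000L);  // 6 misses within one minute
        }
        System.out.println(warn);  // true
    }
}
```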

> AsyncIndex stats do not capture execution for runs where no indexing is 
> performed
> -
>
> Key: OAK-3155
> URL: https://issues.apache.org/jira/browse/OAK-3155
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3155.patch
>
>
> {{AsyncIndexStats}} currently provides a time series of stats around when the 
> async indexing runs happened and how long they took. The current logic only 
> collects stats when actual indexing is performed; if no change happened in the 
> repository since the last run, the stats for that run are ignored.
> Due to this, a time series of when runs happened might not be accurate and can 
> give the false impression that async indexing is slow. 
> As a fix we should record execution stats for every run.





[jira] [Commented] (OAK-3157) Lucene suggestions don't work if suggested phrases don't return documents on :fulltext search

2015-07-29 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645750#comment-14645750
 ] 

Vikas Saurabh commented on OAK-3157:


I think we should do a phrase query on the {{:suggest}} field for validating 
access. The reason I feel so is:
# We've identified a list of suggestions.
# The suggestions are generated from Lucene docs based on their {{:suggest}} 
field.
# We need to identify whether these suggestions should be shown to the end-user 
based on access.
# A phrase query on {{:suggest}} should then give us the relevant docs which 
contain those suggestions, and hence those docs can be used for the access 
control check.

[~teofili], [~chetanm], can you guys please take a look?

> Lucene suggestions don't work if suggested phrases don't return documents on 
> :fulltext search
> -
>
> Key: OAK-3157
> URL: https://issues.apache.org/jira/browse/OAK-3157
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> In the current implementation, the 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  feature looks up suggestions based on indexed properties annotated with 
> {{useInSuggest}}. The implementation then tries to find a document which 
> contains this phrase (currently searched using full-text search) and is 
> readable by the user.
> If no such document results, the suggestions don't show up in the end result, 
> even though the documents from which these suggestions were generated were 
> indeed readable by the end-user.





[jira] [Resolved] (OAK-3147) Make it possible to collapse results under jcr:content nodes

2015-07-29 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-3147.
--
Resolution: Fixed

> Make it possible to collapse results under jcr:content nodes
> 
>
> Key: OAK-3147
> URL: https://issues.apache.org/jira/browse/OAK-3147
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.2.4, 1.3.4, 1.0.18
>
>
> In some use cases a lot of search results live in descendants of jcr:content 
> nodes although the caller is only interested in those jcr:content nodes as 
> results (post-filtering being applied using primary types or other 
> constraints), so the Solr query could be optimized by applying Solr's 
> Collapse query parser to collapse such documents, allowing huge improvements 
> in performance, especially since the query engine no longer has to filter 
> e.g. 90% of the nodes returned by the index.





[jira] [Updated] (OAK-3157) Lucene suggestions don't work if suggested phrases don't return documents on :fulltext search

2015-07-29 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3157:
---
Attachment: LuceneIndexSuggestionTest.java

Attaching [^LuceneIndexSuggestionTest.java]. {{testSuggestQuery}} test is for 
verifying this issue.

> Lucene suggestions don't work if suggested phrases don't return documents on 
> :fulltext search
> -
>
> Key: OAK-3157
> URL: https://issues.apache.org/jira/browse/OAK-3157
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> In the current implementation, the 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  feature looks up suggestions based on indexed properties annotated with 
> {{useInSuggest}}. The implementation then tries to find a document which 
> contains this phrase (currently searched using full-text search) and is 
> readable by the user.
> If no such document results, the suggestions don't show up in the end result, 
> even though the documents from which these suggestions were generated were 
> indeed readable by the end-user.





[jira] [Commented] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645746#comment-14645746
 ] 

Chetan Mehrotra commented on OAK-3145:
--

Missed one aspect: the tracker would need to be closed. Also, with the current 
approach any Descriptor provider should have been registered by the time 
{{t.getServices()}} is called. So that is something to keep in mind.

> Allow plugging in additional jcr-descriptors
> 
>
> Key: OAK-3145
> URL: https://issues.apache.org/jira/browse/OAK-3145
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.3
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3145.patch, OAK-3145.v2.patch
>
>
> The current way oak compiles the JCR descriptors is somewhat hard-coded and 
> does not allow an easy way to extend it. Currently oak-jcr's 
> {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
> {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
> {{JCRDescriptorsImpl}}). The result of this is then put into the 
> {{RepositoryImpl.descriptors}} field - which is subsequently used.
> While these descriptors can explicitly be overwritten via {{put}}, that all 
> has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
> There's no plug-in mechanism.
> Using the whiteboard pattern there should be a mechanism that allows to 
> provide services that implement {{Descriptors.class}} which could then be 
> woven into the {{GeneralDescriptors}} in use by {{ContentRepositorImpl}} for 
> example. The plugin mechanism would thus allow hooking in Descriptors as 'the 
> base' which are still overwritten by oak-core/oak-jcr's own defaults (to 
> ensure the final say is with oak-core/oak-jcr) - as opening up such a 
> *Descriptors-plugin mechanism could potentially be used by 3rd party code 
> too*.
> PS: this is a spin-off of OAK-2844





[jira] [Created] (OAK-3157) Lucene suggestions don't work if suggested phrases don't return documents on :fulltext search

2015-07-29 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-3157:
--

 Summary: Lucene suggestions don't work if suggested phrases don't 
return documents on :fulltext search
 Key: OAK-3157
 URL: https://issues.apache.org/jira/browse/OAK-3157
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Reporter: Vikas Saurabh


In the current implementation, the 
[suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
 feature looks up suggestions based on indexed properties annotated with 
{{useInSuggest}}. The implementation then tries to find a document which 
contains this phrase (currently searched using full-text search) and is 
readable by the user.

If no such document results, the suggestions don't show up in the end result, 
even though the documents from which these suggestions were generated were 
indeed readable by the end-user.





[jira] [Commented] (OAK-3155) AsyncIndex stats do not capture execution for runs where no indexing is performed

2015-07-29 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645741#comment-14645741
 ] 

Alex Parvulescu commented on OAK-3155:
--

looks good!

the only issue I see is around the '//Do not update the status as technically 
the run is not complete' behavior. What will happen is that the #start method 
will get called again on the next run, so the watch will reset, making the 
intention to keep the stats running superfluous. I think we should call 
#postAsyncRunStatsStatus anyway, even if it's a failed run; I wouldn't mix 
error handling with stats.

> AsyncIndex stats do not capture execution for runs where no indexing is 
> performed
> -
>
> Key: OAK-3155
> URL: https://issues.apache.org/jira/browse/OAK-3155
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3155.patch
>
>
> {{AsyncIndexStats}} currently provides a time series of stats around when the 
> async indexing runs happened and how long they took. The current logic only 
> collects stats when actual indexing is performed; if no change happened in the 
> repository since the last run, the stats for that run are ignored.
> Due to this, a time series of when runs happened might not be accurate and can 
> give the false impression that async indexing is slow. 
> As a fix we should record execution stats for every run.





[jira] [Updated] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-29 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3001:
-
Fix Version/s: (was: 1.3.4)
   1.3.5

> Simplify JournalGarbageCollector using a dedicated timestamp property
> -
>
> Key: OAK-3001
> URL: https://issues.apache.org/jira/browse/OAK-3001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Critical
>  Labels: scalability
> Fix For: 1.2.4, 1.3.5
>
>
> This subtask is about spawning out a 
> [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
>  from [~chetanm] re JournalGC:
> {quote}
> Further looking at JournalGarbageCollector ... it would be simpler if you 
> record the journal entry timestamp as an attribute in JournalEntry document 
> and then you can delete all the entries which are older than some time by a 
> simple query. This would avoid fetching all the entries to be deleted on the 
> Oak side
> {quote}
> and a corresponding 
> [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
>  from myself:
> {quote}
> Re querying by timestamp: that would indeed be simpler. With the current set 
> of DocumentStore API however, I believe this is not possible. But: 
> [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
>  comes quite close: it would probably just require the opposite of that 
> method too: 
> {code}
> public <T extends Document> List<T> query(Collection<T> collection,
>   String fromKey,
>   String toKey,
>   String indexedProperty,
>   long endValue,
>   int limit) {
> {code}
> .. or what about generalizing this method to have both a {{startValue}} and 
> an {{endValue}} - with {{-1}} indicating when one of them is not used?
> {quote}
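
The generalized lookup discussed in the quote can be sketched as follows (illustrative only; {{TimestampQueryDemo}} and the in-memory map are not part of the DocumentStore API), with {{-1}} meaning "no bound" on that side:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a range query over an indexed long property (e.g. a journal entry
// timestamp): both bounds are optional, -1 standing for "unbounded".
class TimestampQueryDemo {
    static List<String> query(Map<String, Long> docs, long startValue, long endValue) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e : docs.entrySet()) {
            long ts = e.getValue();
            boolean afterStart = (startValue == -1) || ts >= startValue;
            boolean beforeEnd = (endValue == -1) || ts <= endValue;
            if (afterStart && beforeEnd) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```

With such a method, garbage collection of old journal entries would reduce to a single bounded query instead of fetching every candidate document first.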





[jira] [Updated] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-29 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3145:
-
Attachment: OAK-3145.v2.patch

Good point [~chetanm], I've updated the patch: [^OAK-3145.v2.patch] - would 
push this in a few days unless there's any -1 review coming ;)

> Allow plugging in additional jcr-descriptors
> 
>
> Key: OAK-3145
> URL: https://issues.apache.org/jira/browse/OAK-3145
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.3
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3145.patch, OAK-3145.v2.patch
>
>
> The current way oak compiles the JCR descriptors is somewhat hard-coded and 
> does not allow an easy way to extend it. Currently oak-jcr's 
> {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
> {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
> {{JCRDescriptorsImpl}}). The result of this is then put into the 
> {{RepositoryImpl.descriptors}} field - which is subsequently used.
> While these descriptors can explicitly be overwritten via {{put}}, that all 
> has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
> There's no plug-in mechanism.
> Using the whiteboard pattern, there should be a mechanism that allows 
> providing services that implement {{Descriptors.class}}, which could then be 
> woven into the {{GeneralDescriptors}} in use by {{ContentRepositoryImpl}}, for 
> example. The plugin mechanism would thus allow hooking in Descriptors as 'the 
> base', which are still overwritten by oak-core/oak-jcr's own defaults (to 
> ensure the final say is with oak-core/oak-jcr) - as such an opened-up 
> *Descriptors-plugin mechanism could potentially be used by 3rd party code 
> too*.
> PS: this is a spin-off of OAK-2844





[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645712#comment-14645712
 ] 

Chetan Mehrotra commented on OAK-3156:
--

[~tmueller] [~teofili] Looks like we would need a way to convey to the QE 
that result rows for such queries are pseudo rows, so the QE should not apply 
its standard checks to them. Further, it's quite possible that the user does 
not have read permission for the node represented by '/', in which case the 
suggestor would not work either.

Any thoughts on how to handle such query results?

> Lucene suggestions index definition can't be restricted to a specific type of 
> node
> --
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> While performing a suggestor query like 
> {code}
> SELECT [rep:suggest()] as suggestion  FROM [nt:unstructured] WHERE 
> suggest('foo')
> {code}
> Suggestor does not provide any result. In current implementation, 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  in Oak work only for index definitions for {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
> <suggestIndex
>     jcr:primaryType="oak:QueryIndexDefinition"
>     async="async"
>     compatVersion="{Long}2"
>     type="lucene">
>     <indexRules jcr:primaryType="nt:unstructured">
>         <nt:base jcr:primaryType="nt:unstructured">
>             <properties jcr:primaryType="nt:unstructured">
>                 <description
>                     jcr:primaryType="nt:unstructured"
>                     analyzed="{Boolean}true"
>                     name="description"
>                     propertyIndex="{Boolean}true"
>                     useInSuggest="{Boolean}true"/>
>             </properties>
>         </nt:base>
>     </indexRules>
> </suggestIndex>
> {code}
> works, but if we change nodetype to {{nt:unstructured}} like:
> {code:xml}
> <suggestIndex
>     jcr:primaryType="oak:QueryIndexDefinition"
>     async="async"
>     compatVersion="{Long}2"
>     type="lucene">
>     <indexRules jcr:primaryType="nt:unstructured">
>         <nt:unstructured jcr:primaryType="nt:unstructured">
>             <properties jcr:primaryType="nt:unstructured">
>                 <description
>                     jcr:primaryType="nt:unstructured"
>                     analyzed="{Boolean}true"
>                     name="description"
>                     propertyIndex="{Boolean}true"
>                     useInSuggest="{Boolean}true"/>
>             </properties>
>         </nt:unstructured>
>     </indexRules>
> </suggestIndex>
> {code}
> it won't work.
> The issue is that the suggestor implementation essentially passes a pseudo 
> row with path=/:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
> ...
> queue.add(new LuceneResultRow(suggestedWords));
> ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
> this.path = "/";
> this.score = 1.0d;
> this.suggestWords = suggestWords;
> }
> {code}
> Due to path being set to "/", {{SelectorImpl}} later filters out the result 
> as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.





[jira] [Updated] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3156:
-
Description: 
While performing a suggestor query like 

{code}
SELECT [rep:suggest()] as suggestion  FROM [nt:unstructured] WHERE 
suggest('foo')
{code}

Suggestor does not provide any result. In current implementation, 
[suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
 in Oak work only for index definitions for {{nt:base}} nodetype.
So, an index definition like:
{code:xml}
<suggestIndex
    jcr:primaryType="oak:QueryIndexDefinition"
    async="async"
    compatVersion="{Long}2"
    type="lucene">
    <indexRules jcr:primaryType="nt:unstructured">
        <nt:base jcr:primaryType="nt:unstructured">
            <properties jcr:primaryType="nt:unstructured">
                <description
                    jcr:primaryType="nt:unstructured"
                    analyzed="{Boolean}true"
                    name="description"
                    propertyIndex="{Boolean}true"
                    useInSuggest="{Boolean}true"/>
            </properties>
        </nt:base>
    </indexRules>
</suggestIndex>
{code}
works, but if we change the nodetype to {{nt:unstructured}} like:
{code:xml}
<suggestIndex
    jcr:primaryType="oak:QueryIndexDefinition"
    async="async"
    compatVersion="{Long}2"
    type="lucene">
    <indexRules jcr:primaryType="nt:unstructured">
        <nt:unstructured jcr:primaryType="nt:unstructured">
            <properties jcr:primaryType="nt:unstructured">
                <description
                    jcr:primaryType="nt:unstructured"
                    analyzed="{Boolean}true"
                    name="description"
                    propertyIndex="{Boolean}true"
                    useInSuggest="{Boolean}true"/>
            </properties>
        </nt:unstructured>
    </indexRules>
</suggestIndex>
{code}
it won't work.

The issue is that the suggestor implementation essentially passes a pseudo row 
with path=/:
{code:title=LucenePropertyIndex.java}
private boolean loadDocs() {
...
queue.add(new LuceneResultRow(suggestedWords));
...
{code}
and
{code:title=LucenePropertyIndex.java}
LuceneResultRow(Iterable<String> suggestWords) {
this.path = "/";
this.score = 1.0d;
this.suggestWords = suggestWords;
}
{code}
Due to path being set to "/", {{SelectorImpl}} later filters out the result as 
{{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.
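
The filtering described above can be illustrated with a minimal sketch (hypothetical names, not the real {{SelectorImpl}} code): a selector restricted to {{nt:unstructured}} drops any row whose node's primary type does not match, which includes the pseudo row at "/" because the root's primary type is {{rep:root}}.

```java
import java.util.Map;

// Stand-in for repository content: path -> jcr:primaryType.
class SelectorFilterDemo {
    static final Map<String, String> PRIMARY_TYPES = Map.of(
            "/", "rep:root",
            "/content/a", "nt:unstructured");

    // A row survives only if its node's primary type matches the selector's
    // nodetype restriction (subtype relations ignored for brevity).
    static boolean matches(String path, String requiredType) {
        return requiredType.equals(PRIMARY_TYPES.get(path));
    }
}
```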

  was:
In current implementation, 
[suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
 in Oak work only for index definitions for {{nt:base}} nodetype.
So, an index definition like:
{code:xml}









{code}
works, but if we change nodetype to {{nt:unstructured}} like:
{code:xml}









{code}
, it won't work.

The issue is that suggestor implementation essentially is passing a pseudo row 
with path=/.:
{code:title=LucenePropertyIndex.java}
private boolean loadDocs() {
...
queue.add(new LuceneResultRow(suggestedWords));
...
{code}
and
{code:title=LucenePropertyIndex.java}
LuceneResultRow(Iterable suggestWords) {
this.path = "/";
this.score = 1.0d;
this.suggestWords = suggestWords;
}
{code}
Due to path being set to "/", {{SelectorImpl}} later filters out the result as 
{{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.


> Lucene suggestions index definition can't be restricted to a specific type of 
> node
> --
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> While performing a suggestor query like 
> {code}
> SELECT [rep:suggest()] as suggestion  FROM [nt:unstructured] WHERE 
> suggest('foo')
> {code}
> Suggestor does not provide any result. In current implementation, 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  in Oak work only for index definitions for {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> works, but if we change nodetype to {{nt:unstructured}} like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> , it won't work.
> The issue is that suggestor implementation essentially is passing a pseudo 
> row with path=/.:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
> ...
> queue.add(new LuceneResultRow(suggestedWords));

[jira] [Updated] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3156:
-
Summary: Lucene suggestions index definition can't be restricted to a 
specific type of node  (was: [OAK] Lucene suggestions index definition can't be 
restricted to a specific type of node)

> Lucene suggestions index definition can't be restricted to a specific type of 
> node
> --
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> In current implementation, 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  in Oak work only for index definitions for {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> works, but if we change nodetype to {{nt:unstructured}} like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> , it won't work.
> The issue is that suggestor implementation essentially is passing a pseudo 
> row with path=/.:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
> ...
> queue.add(new LuceneResultRow(suggestedWords));
> ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
> this.path = "/";
> this.score = 1.0d;
> this.suggestWords = suggestWords;
> }
> {code}
> Due to path being set to "/", {{SelectorImpl}} later filters out the result 
> as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.





[jira] [Commented] (OAK-2739) take appropriate action when lease cannot be renewed (in time)

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645698#comment-14645698
 ] 

Chetan Mehrotra commented on OAK-2739:
--

Thanks for the details. Now I understand the problem better. In that case, your 
proposed _prevention_-based approach would be the right one to go for. Such 
logic can be implemented in a DocumentStore wrapper or via some precondition 
check passed to the DocumentStore implementation.
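
A minimal sketch of that wrapper idea (names such as {{LeaseAwareStore}} are illustrative, not Oak's actual DocumentStore API): the precondition check refuses writes once the local lease has expired, so a stale cluster node stops participating instead of silently continuing.

```java
// Illustrative precondition check: once this cluster node's lease end time
// has passed, any further write attempt is rejected.
class LeaseAwareStore {
    private final long leaseEndTime; // epoch millis when the lease expires

    LeaseAwareStore(long leaseEndTime) {
        this.leaseEndTime = leaseEndTime;
    }

    // To be called before every update/merge against the backing store.
    void checkLease(long nowMillis) {
        if (nowMillis > leaseEndTime) {
            throw new IllegalStateException("lease expired; refusing write");
        }
    }
}
```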

> take appropriate action when lease cannot be renewed (in time)
> --
>
> Key: OAK-2739
> URL: https://issues.apache.org/jira/browse/OAK-2739
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: mongomk
>Affects Versions: 1.2
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: resilience
> Fix For: 1.3.5
>
>
> Currently, in an oak-cluster, when (e.g.) one oak-client stops renewing its 
> lease (ClusterNodeInfo.renewLease()), this will eventually be noticed by the 
> others in the same oak-cluster. Those then mark this client as {{inactive}}, 
> start recovering, and subsequently exclude that node from any further 
> merge etc. operations.
> Now, whatever the reason why that client stopped renewing the lease 
> (an exception, a deadlock, whatever) - that client still considers itself 
> {{active}} and continues to take part in cluster activity.
> This results in an unbalanced situation where that one client 'sees' 
> everybody as {{active}} while the others see it as {{inactive}}.
> If this ClusterNodeInfo state is to be something that can be built upon, and 
> to avoid any inconsistency due to unbalanced handling, the inactive node 
> should probably retire gracefully - or some other appropriate action should 
> be taken, rather than just continuing as today.
> This ticket is to keep track of ideas and actions taken wrt this.





[jira] [Reopened] (OAK-3147) Make it possible to collapse results under jcr:content nodes

2015-07-29 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili reopened OAK-3147:
--

Reopening, as I've noticed the Collapsing query parser's settings can be 
leveraged to allow:
- not indexing path_collapsed for nodes that are not descendants of jcr:content 
nodes
- better query-time performance
- more precise collapsing on jcr:content nodes based on min(depth)
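
For reference, Solr's Collapsing query parser is applied as a filter query; a sketch assuming fields named path_collapsed and depth (field names illustrative, taken from the list above):

```
fq={!collapse field=path_collapsed min=depth nullPolicy=expand}
```

With nullPolicy=expand, documents that lack a path_collapsed field are left uncollapsed, which fits only indexing that field for descendants of jcr:content nodes; min=depth keeps the shallowest document of each group, i.e. the jcr:content node itself.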

> Make it possible to collapse results under jcr:content nodes
> 
>
> Key: OAK-3147
> URL: https://issues.apache.org/jira/browse/OAK-3147
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.2.4, 1.3.4, 1.0.18
>
>
> In some use cases many search results live in descendants of jcr:content 
> nodes, although the caller is only interested in the jcr:content nodes 
> themselves as results (post-filtering being applied using primary types or 
> other constraints). The Solr query could be optimized by applying Solr's 
> Collapse query parser to collapse such documents, allowing huge improvements 
> in performance, especially since the query engine would no longer have to 
> filter out e.g. 90% of the nodes returned by the index.





[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-07-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.trivialUpdates | 236, 249, 253, 297 | DOCUMENT_RDB | 1.6, 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 |


  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test 
  || Builds || Fixture  || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir  
   | 81  | DOCUMENT_RDB | 1.7   |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035  
   | 76, 128 | SEGMENT_MK , DOCUMENT_RDB  | 1.6   |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop  
   | 64  | ?| ? |
| 
org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore
 | 52, 181 |  DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar 
   | 41  | ?| ? |
| 
org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded
  | 29  | ?| ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite 
   | 35  | SEGMENT_MK   | ? |
| 
org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex
 | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter 
| 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 
121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | 
DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | 
DOCUMENT_RDB | 1.7 |
| 

[jira] [Updated] (OAK-2878) Test failure: AutoCreatedItemsTest.autoCreatedItems

2015-07-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2878:
---
Description: 
{{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
on Jenkins: No default node type available for /testdata/property

{noformat}
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata/property
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
at 
org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
{noformat}

Failure seen at builds: 130, 134, 138, 249, 292, 297

See 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/

  was:
{{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
on Jenkins: No default node type available for /testdata/property

{noformat}
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata/property
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
at 
org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
{noformat}

Failure seen at builds: 130, 134, 138, 249, 292

See 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/


> Test failure: AutoCreatedItemsTest.autoCreatedItems
> ---
>
> Key: OAK-2878
> URL: https://issues.apache.org/jira/browse/OAK-2878
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
> Environment: Jenkins: 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
>Reporter: Michael Dürig
>Assignee: Julian Reschke
>  Labels: CI, jenkins
>
> {{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
> on Jenkins: No default node type available for /testdata/property
> {noformat}
> javax.jcr.nodetype.ConstraintViolationException: No default node type 
> available for /testdata/property
>   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
>   at 
> org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
>   at 
> org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
>  

[jira] [Updated] (OAK-3156) [OAK] Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3156:
---
Attachment: LuceneIndexSuggestionTest.java

Attaching [^\LuceneIndexSuggestionTest.java]. {{testSuggestQueryOnNonNtBase}} 
test is for verifying this issue.

> [OAK] Lucene suggestions index definition can't be restricted to a specific 
> type of node
> 
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> In current implementation, 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  in Oak work only for index definitions for {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> works, but if we change nodetype to {{nt:unstructured}} like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> , it won't work.
> The issue is that suggestor implementation essentially is passing a pseudo 
> row with path=/.:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
> ...
> queue.add(new LuceneResultRow(suggestedWords));
> ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
> this.path = "/";
> this.score = 1.0d;
> this.suggestWords = suggestWords;
> }
> {code}
> Due to path being set to "/", {{SelectorImpl}} later filters out the result 
> as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.





[jira] [Comment Edited] (OAK-3156) [OAK] Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645690#comment-14645690
 ] 

Vikas Saurabh edited comment on OAK-3156 at 7/29/15 8:26 AM:
-

Attaching [^LuceneIndexSuggestionTest.java]. {{testSuggestQueryOnNonNtBase}} 
test is for verifying this issue.


was (Author: catholicon):
Attaching [^\LuceneIndexSuggestionTest.java]. {{testSuggestQueryOnNonNtBase}} 
test is for verifying this issue.

> [OAK] Lucene suggestions index definition can't be restricted to a specific 
> type of node
> 
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
> Attachments: LuceneIndexSuggestionTest.java
>
>
> In current implementation, 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  in Oak work only for index definitions for {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> works, but if we change nodetype to {{nt:unstructured}} like:
> {code:xml}
>  jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> 
> 
> 
>  jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> 
> 
> 
> 
> {code}
> , it won't work.
> The issue is that suggestor implementation essentially is passing a pseudo 
> row with path=/.:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
> ...
> queue.add(new LuceneResultRow(suggestedWords));
> ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
> this.path = "/";
> this.score = 1.0d;
> this.suggestWords = suggestWords;
> }
> {code}
> Due to path being set to "/", {{SelectorImpl}} later filters out the result 
> as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.





[jira] [Updated] (OAK-3156) [OAK] Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3156:
---
Description: 
In current implementation, 
[suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
 in Oak work only for index definitions for {{nt:base}} nodetype.
So, an index definition like:
{code:xml}









{code}
works, but if we change nodetype to {{nt:unstructured}} like:
{code:xml}









{code}
, it won't work.

The issue is that suggestor implementation essentially is passing a pseudo row 
with path=/.:
{code:title=LucenePropertyIndex.java}
private boolean loadDocs() {
...
queue.add(new LuceneResultRow(suggestedWords));
...
{code}
and
{code:title=LucenePropertyIndex.java}
LuceneResultRow(Iterable<String> suggestWords) {
this.path = "/";
this.score = 1.0d;
this.suggestWords = suggestWords;
}
{code}
Due to path being set to "/", {{SelectorImpl}} later filters out the result as 
{{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.

  was:
In current implementation, 
[suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
 in Oak work only for index definitions for {{nt:base}} nodetype.
So, an index definition like:
{code:xml}
<lucene jcr:primaryType="oak:QueryIndexDefinition"
        async="async"
        compatVersion="{Long}2"
        type="lucene">
    <indexRules jcr:primaryType="nt:unstructured">
        <nt:base jcr:primaryType="nt:unstructured">
            <properties jcr:primaryType="nt:unstructured">
                <description jcr:primaryType="nt:unstructured"
                             analyzed="{Boolean}true"
                             name="description"
                             propertyIndex="{Boolean}true"
                             useInSuggest="{Boolean}true"/>
            </properties>
        </nt:base>
    </indexRules>
</lucene>
{code}
works, but if we change nodetype to {{dam:Asset}} like:
{code:xml}
<lucene jcr:primaryType="oak:QueryIndexDefinition"
        async="async"
        compatVersion="{Long}2"
        type="lucene">
    <indexRules jcr:primaryType="nt:unstructured">
        <dam:Asset jcr:primaryType="nt:unstructured">
            <properties jcr:primaryType="nt:unstructured">
                <description jcr:primaryType="nt:unstructured"
                             analyzed="{Boolean}true"
                             name="description"
                             propertyIndex="{Boolean}true"
                             useInSuggest="{Boolean}true"/>
            </properties>
        </dam:Asset>
    </indexRules>
</lucene>
{code}
it won't work.


> [OAK] Lucene suggestions index definition can't be restricted to a specific 
> type of node
> 
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>
> In current implementation, 
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
>  in Oak work only for index definitions for {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
> <lucene jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> <indexRules jcr:primaryType="nt:unstructured">
> <nt:base jcr:primaryType="nt:unstructured">
> <properties jcr:primaryType="nt:unstructured">
> <description jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> </properties>
> </nt:base>
> </indexRules>
> </lucene>
> {code}
> works, but if we change nodetype to {{nt:unstructured}} like:
> {code:xml}
> <lucene jcr:primaryType="oak:QueryIndexDefinition"
> async="async"
> compatVersion="{Long}2"
> type="lucene">
> <indexRules jcr:primaryType="nt:unstructured">
> <nt:unstructured jcr:primaryType="nt:unstructured">
> <properties jcr:primaryType="nt:unstructured">
> <description jcr:primaryType="nt:unstructured"
> analyzed="{Boolean}true"
> name="description"
> propertyIndex="{Boolean}true"
> useInSuggest="{Boolean}true"/>
> </properties>
> </nt:unstructured>
> </indexRules>
> </lucene>
> {code}
> it won't work.
> The issue is that the suggestor implementation essentially passes a pseudo
> row with path=/:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
> ...
> queue.add(new LuceneResultRow(suggestedWords));
> ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
> this.path = "/";
> this.score = 1.0d;
> this.suggestWords = suggestWords;
> }
> {code}
> Due to path being set to "/", {{SelectorImpl}} later filters out the result 
> as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3156) [OAK] Lucene suggestions index definition can't be restricted to a specific type of node

2015-07-29 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-3156:
--

 Summary: [OAK] Lucene suggestions index definition can't be 
restricted to a specific type of node
 Key: OAK-3156
 URL: https://issues.apache.org/jira/browse/OAK-3156
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Reporter: Vikas Saurabh


In current implementation, 
[suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
 in Oak work only for index definitions for {{nt:base}} nodetype.
So, an index definition like:
{code:xml}
<lucene jcr:primaryType="oak:QueryIndexDefinition"
        async="async"
        compatVersion="{Long}2"
        type="lucene">
    <indexRules jcr:primaryType="nt:unstructured">
        <nt:base jcr:primaryType="nt:unstructured">
            <properties jcr:primaryType="nt:unstructured">
                <description jcr:primaryType="nt:unstructured"
                             analyzed="{Boolean}true"
                             name="description"
                             propertyIndex="{Boolean}true"
                             useInSuggest="{Boolean}true"/>
            </properties>
        </nt:base>
    </indexRules>
</lucene>
{code}
works, but if we change nodetype to {{dam:Asset}} like:
{code:xml}
<lucene jcr:primaryType="oak:QueryIndexDefinition"
        async="async"
        compatVersion="{Long}2"
        type="lucene">
    <indexRules jcr:primaryType="nt:unstructured">
        <dam:Asset jcr:primaryType="nt:unstructured">
            <properties jcr:primaryType="nt:unstructured">
                <description jcr:primaryType="nt:unstructured"
                             analyzed="{Boolean}true"
                             name="description"
                             propertyIndex="{Boolean}true"
                             useInSuggest="{Boolean}true"/>
            </properties>
        </dam:Asset>
    </indexRules>
</lucene>
{code}
it won't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3139) SNFE in persisted compaction map when using CLEAN_OLD

2015-07-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-3139.

Resolution: Fixed

Turns out the problem was keeping a reference to a {{SegmentWriter}} that would 
only rarely be written to. As we are running with {{CLEAN_OLD}}, the id of its 
underlying segment could get cleaned from the {{SegmentIdTable}} after a while. 
Flushing the segment thereafter would still cause it to be persisted, but a 
subsequent cleanup run would immediately remove it again. 

Fixed at http://svn.apache.org/r1693195 by using a fresh {{SegmentWriter}} 
instance for each compaction round. 
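The failure mode and the shape of the fix can be sketched as follows (illustrative types only, not the actual {{SegmentWriter}}/{{SegmentIdTable}} code): a long-lived writer keeps referring to a segment id that {{CLEAN_OLD}} may have already evicted, while a per-round writer always holds a current id.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of why a long-lived writer breaks under CLEAN_OLD (illustrative, not
// the actual Oak classes). The id table only retains recently used segment
// ids, so a writer created long ago may reference an id that cleanup dropped.
public class CompactionWriterSketch {
    static class SegmentIdTable {
        final Set<Integer> live = new HashSet<>();
        int register(int id) { live.add(id); return id; }
        // CLEAN_OLD: evict every id older than the newest one we keep.
        void cleanOld(int newestKept) { live.removeIf(id -> id < newestKept); }
        boolean contains(int id) { return live.contains(id); }
    }

    static class Writer {
        final int segmentId;
        Writer(SegmentIdTable table, int id) { this.segmentId = table.register(id); }
    }

    public static void main(String[] args) {
        SegmentIdTable table = new SegmentIdTable();
        Writer longLived = new Writer(table, 1); // created once, rarely flushed
        table.cleanOld(2);                       // cleanup runs in the meantime
        System.out.println(table.contains(longLived.segmentId)); // false: SNFE territory

        Writer perRound = new Writer(table, 3);  // fresh writer for this round
        System.out.println(table.contains(perRound.segmentId));  // true
    }
}
```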

> SNFE in persisted compaction map when using CLEAN_OLD
> -
>
> Key: OAK-3139
> URL: https://issues.apache.org/jira/browse/OAK-3139
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Critical
>  Labels: compaction, gc
> Fix For: 1.3.4
>
> Attachments: OAK-3139.patch
>
>
> When using {{CLEAN_OLD}} it might happen that segments of the persisted 
> compaction map get collected. --The reason for this is that only the segment 
> containing the root of the map is pinned ({{SegmentId#pin}}), leaving other 
> segments of the compaction map eligible for collection once old enough.--
> {noformat}
> org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
> 95cbb3e2-3a8c-4976-ae5b-6322ff102731 not found
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:919)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:134)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
> at 
> org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:154)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:186)
> at 
> org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:118)
> at 
> org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:100)
> at 
> org.apache.jackrabbit.oak.plugins.segment.CompactionMap.get(CompactionMap.java:93)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.uncompact(SegmentWriter.java:1023)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1033)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.<init>(SegmentNodeStore.java:418)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:204)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3139) SNFE in persisted compaction map when using CLEAN_OLD

2015-07-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3139:
---
Description: 
When using {{CLEAN_OLD}} it might happen that segments of the persisted 
compaction map get collected. --The reason for this is that only the segment 
containing the root of the map is pinned ({{SegmentId#pin}}), leaving other 
segments of the compaction map eligible for collection once old enough.--

{noformat}
org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
95cbb3e2-3a8c-4976-ae5b-6322ff102731 not found
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:919)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:134)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
at 
org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:154)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:186)
at 
org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:118)
at 
org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:100)
at 
org.apache.jackrabbit.oak.plugins.segment.CompactionMap.get(CompactionMap.java:93)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.uncompact(SegmentWriter.java:1023)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1033)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.<init>(SegmentNodeStore.java:418)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:204)
{noformat}

  was:
When using {{CLEAN_OLD}} it might happen that segments of the persisted 
compaction map get collected. The reason for this is that only the segment 
containing the root of the map is pinned ({{SegmentId#pin}}), leaving other 
segments of the compaction map eligible for collection once old enough. 

{noformat}
org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
95cbb3e2-3a8c-4976-ae5b-6322ff102731 not found
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:919)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:134)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
at 
org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:154)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:186)
at 
org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:118)
at 
org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:100)
at 
org.apache.jackrabbit.oak.plugins.segment.CompactionMap.get(CompactionMap.java:93)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.uncompact(SegmentWriter.java:1023)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1033)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.<init>(SegmentNodeStore.java:418)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:204)
{noformat}


> SNFE in persisted compaction map when using CLEAN_OLD
> -
>
> Key: OAK-3139
> URL: https://issues.apache.org/jira/browse/OAK-3139
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Critical
>  Labels: compaction, gc
> Fix For: 1.3.4
>
> Attachments: OAK-3139.patch
>
>
> When using {{CLEAN_OLD}} it might happen that segments of the persisted 
> compaction map get collected. --The reason for this is that only the segment 
> containing the root of the map is pinned ({{SegmentId#pin}}), leaving other 
> segments of the compaction map eligible for collection once old enough.--
> {noformat}
> org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
> 95cbb3e2-3a8c-4976-ae5b-6322ff102731 not found
> at 
> org.apache.jackrabbit.oak.plug

[jira] [Commented] (OAK-2739) take appropriate action when lease cannot be renewed (in time)

2015-07-29 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645620#comment-14645620
 ] 

Stefan Egli commented on OAK-2739:
--

{quote}Currently background lease is updated periodically (every 1 sec) by a 
dedicated thread which just perform a single operation and not much. So even if 
there are issues in other parts this thread would continue to work (which might 
be wrong) and still update the lease every 1 sec.{quote}
That's not entirely correct: the lease is updated only every {{leaseTime / 2}} 
- by default every 30 sec. It _checks_ every 1 sec but will only _update_ the 
lease every 30 sec.
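The check-vs-renew distinction can be sketched like this (illustrative code, not the actual {{ClusterNodeInfo}} implementation; constants are assumptions for the example):

```java
// Sketch of the check-vs-renew distinction: the background task wakes roughly
// every second, but a renewal is only written once leaseTime / 2 has elapsed
// since the last one. Illustrative only, not the real ClusterNodeInfo code.
public class LeaseSketch {
    static final long LEASE_TIME_MS = 60_000; // assumed default lease time
    long leaseEndTime;                        // 0 until the first renewal

    // Called about once per second by the background lease-update task.
    boolean renewLeaseIfNeeded(long now) {
        if (now + LEASE_TIME_MS / 2 < leaseEndTime) {
            return false; // checked, but no update written to the store
        }
        leaseEndTime = now + LEASE_TIME_MS; // actual renewal, every ~30 sec
        return true;
    }

    public static void main(String[] args) {
        LeaseSketch s = new LeaseSketch();
        System.out.println(s.renewLeaseIfNeeded(0));      // true: first renewal
        System.out.println(s.renewLeaseIfNeeded(1_000));  // false: check only
        System.out.println(s.renewLeaseIfNeeded(31_000)); // true: half-lease elapsed
    }
}
```

So a stall shorter than {{leaseTime / 2}} is invisible to the other instances, while anything longer risks the lease expiring.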

{quote}So to me lease update does not look like an operation which would take 
long time and cause above mentioned issues. May be I am missing something 
here{quote}
Generally speaking there are many reasons why one particular instance would not 
update a lease in time:
# because it crashed
# because it can't talk to mongo anymore
# because the process was halted and continued (eg open/close laptop, process 
started in fg - Ctrl-Z, kill -STOP, other terminal surprises)
# because the memory is very low, thus very long GC cycles, preventing much 
from happening in the VM
# because the {{BackgroundLeaseUpdate}} task for some reason died
# because the {{BackgroundLeaseUpdate}} task for some reason is halted or runs 
into a deadlock
# because of something I forgot or some other yet to find-out VM mystery

In any case, the other instances have no way of figuring out the exact reason 
and can only assume that reason 1 happened. And if that's not the case, then 
this ticket is about finding a way to prevent the instance from continuing 
should it not be able to update the lease within e.g. 30 sec. I think it's fair 
to demand that an instance is always capable of updating the lease every 30 sec 
and, if it can't, that it shall remain silent once and for all. I'm not saying 
this situation is likely to occur very frequently - but if we're to build a 
reliable system then this part, in my view, is critical.

> take appropriate action when lease cannot be renewed (in time)
> --
>
> Key: OAK-2739
> URL: https://issues.apache.org/jira/browse/OAK-2739
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: mongomk
>Affects Versions: 1.2
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>  Labels: resilience
> Fix For: 1.3.5
>
>
> Currently, in an oak-cluster, when (e.g.) one oak-client stops renewing its 
> lease (ClusterNodeInfo.renewLease()), this will eventually be noticed by the 
> others in the same oak-cluster. Those then mark this client as {{inactive}}, 
> start recovery, and subsequently exclude that node from any further 
> merge etc. operations.
> Now, whatever the reason was why that client stopped renewing the lease 
> (could be an exception, deadlock, whatever) - that client itself still 
> considers itself as {{active}} and continues to take part in the cluster 
> action.
> This will result in an unbalanced situation where that one client 'sees' 
> everybody as {{active}} while the others see this one as {{inactive}}.
> If this ClusterNodeInfo state should be something that can be built upon, and 
> to avoid any inconsistency due to unbalanced handling, the inactive node 
> should probably retire gracefully - or any other appropriate action should be 
> taken, other than just continuing as today.
> This ticket is to keep track of ideas and actions taken wrt this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-29 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645590#comment-14645590
 ] 

Chetan Mehrotra commented on OAK-3145:
--

Overall the approach looks fine. Just a minor note: the new logic in the Oak 
class can be moved to {{AggregatingDescriptors}}, as is done in 
{{CompositeHook.compose}} and {{CompositeEditor.compose}}. Also, we could 
rename {{AggregatingDescriptors}} to {{CompositeDescriptors}} so that it 
follows the current convention.

> Allow plugging in additional jcr-descriptors
> 
>
> Key: OAK-3145
> URL: https://issues.apache.org/jira/browse/OAK-3145
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.3
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Minor
> Fix For: 1.3.4
>
> Attachments: OAK-3145.patch
>
>
> The current way oak compiles the JCR descriptors is somewhat hard-coded and 
> does not allow an easy way to extend it. Currently oak-jcr's 
> {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
> {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
> {{JCRDescriptorsImpl}}). The result of this is then put into the 
> {{RepositoryImpl.descriptors}} field - which is subsequently used.
> While these descriptors can explicitly be overwritten via {{put}}, that all 
> has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
> There's no plug-in mechanism.
> Using the whiteboard pattern there should be a mechanism that allows 
> providing services that implement {{Descriptors.class}}, which could then be 
> woven into the {{GeneralDescriptors}} in use by {{ContentRepositoryImpl}}, 
> for example. The plugin mechanism would thus allow hooking in Descriptors as 
> 'the base', which are still overwritten by oak-core/oak-jcr's own defaults 
> (to ensure the final say is with oak-core/oak-jcr) - as opening up such a 
> *Descriptors-plugin mechanism could potentially be used by 3rd party code 
> too*.
> PS: this is a spin-off of OAK-2844



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-29 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645579#comment-14645579
 ] 

Julian Reschke commented on OAK-3001:
-

I recommend going back to the API discussion on oak-dev; there are more things 
to consider, such as Java-based filtering and/or returning sparse results 
(which, for instance, have only certain system properties set).

> Simplify JournalGarbageCollector using a dedicated timestamp property
> -
>
> Key: OAK-3001
> URL: https://issues.apache.org/jira/browse/OAK-3001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Critical
>  Labels: scalability
> Fix For: 1.2.4, 1.3.4
>
>
> This subtask is about spawning out a 
> [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
>  from [~chetanm] re JournalGC:
> {quote}
> Further looking at JournalGarbageCollector ... it would be simpler if you 
> record the journal entry timestamp as an attribute in JournalEntry document 
> and then you can delete all the entries which are older than some time by a 
> simple query. This would avoid fetching all the entries to be deleted on the 
> Oak side
> {quote}
> and a corresponding 
> [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
>  from myself:
> {quote}
> Re querying by timestamp: that would indeed be simpler. With the current set 
> of DocumentStore API however, I believe this is not possible. But: 
> [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
>  comes quite close: it would probably just require the opposite of that 
> method too: 
> {code}
> public <T extends Document> List<T> query(Collection<T> collection,
>   String fromKey,
>   String toKey,
>   String indexedProperty,
>   long endValue,
>   int limit) {
> {code}
> .. or what about generalizing this method to have both a {{startValue}} and 
> an {{endValue}} - with {{-1}} indicating when one of them is not used?
> {quote}
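The proposed two-sided range lookup can be sketched as a simple filter (hypothetical; the real {{DocumentStore.query}} only supports a lower bound on the indexed property today, and {{-1}} as a "bound unused" sentinel is the suggestion from the quote above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Runnable sketch of the suggested generalization: filter indexed values by an
// optional lower and upper bound, where -1 disables that bound. Illustrative
// only; not the actual DocumentStore implementation.
public class RangeQuerySketch {
    static List<Long> query(List<Long> indexedValues, long startValue, long endValue) {
        List<Long> result = new ArrayList<>();
        for (long v : indexedValues) {
            if (startValue != -1 && v < startValue) continue; // below lower bound
            if (endValue != -1 && v > endValue) continue;     // above upper bound
            result.add(v);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Long> timestamps = Arrays.asList(10L, 20L, 30L);
        // Journal GC case: everything up to a cutoff, lower bound unused.
        System.out.println(query(timestamps, -1, 25)); // [10, 20]
        // Existing behaviour: lower bound only.
        System.out.println(query(timestamps, 15, -1)); // [20, 30]
    }
}
```

With such a bound on the journal-entry timestamp, old entries could be deleted with a single ranged query instead of fetching them all first.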



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)