[jira] [Resolved] (OAK-2495) Shared DataStore GC support for FileDataStore

2015-02-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-2495.

   Resolution: Fixed
Fix Version/s: 1.1.7

Committed changes to trunk with http://svn.apache.org/r1659459

> Shared DataStore GC support for FileDataStore
> -
>
> Key: OAK-2495
> URL: https://issues.apache.org/jira/browse/OAK-2495
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.1.7
>
>






[jira] [Resolved] (OAK-2514) Shared DataStore GC framework support

2015-02-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-2514.

   Resolution: Fixed
Fix Version/s: 1.1.7

Committed changes to trunk with commits
* http://svn.apache.org/r1659457
* http://svn.apache.org/r1659458


> Shared DataStore GC framework support
> -
>
> Key: OAK-2514
> URL: https://issues.apache.org/jira/browse/OAK-2514
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: core
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.1.7
>
>
> Add support for shared DataStore GC.





[jira] [Created] (OAK-2514) Shared DataStore GC framework support

2015-02-12 Thread Amit Jain (JIRA)
Amit Jain created OAK-2514:
--

 Summary: Shared DataStore GC framework support
 Key: OAK-2514
 URL: https://issues.apache.org/jira/browse/OAK-2514
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Amit Jain
Assignee: Amit Jain


Add support for shared DataStore GC.





[jira] [Resolved] (OAK-2493) DataStore GC: Fix incorrect tests

2015-02-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-2493.

Resolution: Fixed

OAK-2503 fixed the issue Julian was having.

> DataStore GC: Fix incorrect tests
> -
>
> Key: OAK-2493
> URL: https://issues.apache.org/jira/browse/OAK-2493
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.1.7
>
>
> The DataStore GC test for SegmentNodeStore is out of sync with the latest 
> changes and is incorrect. Also, the data store GC tests need tighter test 
> expectations.





[jira] [Resolved] (OAK-2503) DataStore: Cleanup tests

2015-02-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-2503.

   Resolution: Fixed
Fix Version/s: 1.1.7

> DataStore: Cleanup tests
> 
>
> Key: OAK-2503
> URL: https://issues.apache.org/jira/browse/OAK-2503
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.1.7
>
>
> Fix some problems in the datastore tests:
> * Post test cleanup of artifacts.
> * Move {{DataStoreUtils}} helper class to correct package.





[jira] [Comment Edited] (OAK-2513) algorithm with O(n!) in mongoMk rebase - not finishing in years

2015-02-12 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318755#comment-14318755
 ] 

Stefan Egli edited comment on OAK-2513 at 2/12/15 7:35 PM:
---

OAK-1768 introduced Branch.RebaseCommit/BranchCommitImpl so looks related


was (Author: egli):
OAK-1981 introduced Branch.RebaseCommit/BranchCommitImpl so looks related

> algorithm with O(n!) in mongoMk rebase - not finishing in years
> ---
>
> Key: OAK-2513
> URL: https://issues.apache.org/jira/browse/OAK-2513
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.9, 1.0.11
> Environment: oak 1.0.9.r1646300 and 1.0.11 (see further details as to 
> which stack is from which version below)
>Reporter: Stefan Egli
>Priority: Blocker
>
> In MongoMk/DocumentNodeStore there is an algorithm which is O(n!), meaning 
> that even with few entries (e.g. 12 or 13, as witnessed) it results in on the 
> order of billions of operations (consisting of map.containsKey and map.put 
> calls, for example) - i.e. it creates an almost infinite loop.
> This situation is seen in DocumentNodeStore.merge() and 
> DocumentNodeStore.readNode() when there is a branch with unsaved revisions 
> (and, as [~chetanm] pointed out, Oak only creates a branch for large commits).
> The assumptions behind it being an O(n!) algorithm are:
>  * document.Branch.rebase() constructs a RebaseCommit() object, passing it 
> the list of previous commits, and adds the resulting commit to that list of 
> previous commits
>  * this builds up a list of RebaseCommit() objects, each of which holds the 
> (n-1) previous objects
>  * later, in either document.Branch.applyTo() or isModified(), it recursively 
> goes through this list of previous commits - and since that list also 
> consists of RebaseCommit objects, each of which holds its (n-1) previous 
> objects, this results in n * (n-1) * (n-2) * (n-3) * ... such recursions
>  * note that during the 'almost endless loop' there is no activity against 
> the underlying store, e.g. mongod - but the cache.load() call nevertheless 
> keeps a lock for that amount of time (for the affected path at least)
> Since n! gets big already for small n (e.g. 8! = 40,320), you could start 
> seeing bursts of rebase operations becoming slow quite soon.
> Attaching stack traces where this has been witnessed, underlining the issue 
> below.
> with oak 1.0.11:
> {code}
>java.lang.Thread.State: RUNNABLE
>   at org.mapdb.BTreeMap.findChildren(BTreeMap.java:580)
>   at org.mapdb.BTreeMap.get(BTreeMap.java:607)
>   at org.mapdb.BTreeMap.containsKey(BTreeMap.java:1505)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$BranchCommitImpl.isModified(Branch.java:325)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch.getUnsavedLastRevision(Branch.java:242)
>   at 
> org.apache.jackrabbit.oak.plugins.document.NodeDocument.getNodeAtRevision(NodeDocument.java:896)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.readNode(DocumentNodeStore.java:929)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:706)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:703)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>   at 
> com.goo

[jira] [Commented] (OAK-2513) algorithm with O(n!) in mongoMk rebase - not finishing in years

2015-02-12 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318755#comment-14318755
 ] 

Stefan Egli commented on OAK-2513:
--

OAK-1981 introduced Branch.RebaseCommit/BranchCommitImpl so looks related

> algorithm with O(n!) in mongoMk rebase - not finishing in years
> ---
>
> Key: OAK-2513
> URL: https://issues.apache.org/jira/browse/OAK-2513
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.9, 1.0.11
> Environment: oak 1.0.9.r1646300 and 1.0.11 (see further details as to 
> which stack is from which version below)
>Reporter: Stefan Egli
>Priority: Blocker
>
> In MongoMk/DocumentNodeStore there is an algorithm which is O(n!), meaning 
> that even with few entries (e.g. 12 or 13, as witnessed) it results in on the 
> order of billions of operations (consisting of map.containsKey and map.put 
> calls, for example) - i.e. it creates an almost infinite loop.
> This situation is seen in DocumentNodeStore.merge() and 
> DocumentNodeStore.readNode() when there is a branch with unsaved revisions 
> (and, as [~chetanm] pointed out, Oak only creates a branch for large commits).
> The assumptions behind it being an O(n!) algorithm are:
>  * document.Branch.rebase() constructs a RebaseCommit() object, passing it 
> the list of previous commits, and adds the resulting commit to that list of 
> previous commits
>  * this builds up a list of RebaseCommit() objects, each of which holds the 
> (n-1) previous objects
>  * later, in either document.Branch.applyTo() or isModified(), it recursively 
> goes through this list of previous commits - and since that list also 
> consists of RebaseCommit objects, each of which holds its (n-1) previous 
> objects, this results in n * (n-1) * (n-2) * (n-3) * ... such recursions
>  * note that during the 'almost endless loop' there is no activity against 
> the underlying store, e.g. mongod - but the cache.load() call nevertheless 
> keeps a lock for that amount of time (for the affected path at least)
> Since n! gets big already for small n (e.g. 8! = 40,320), you could start 
> seeing bursts of rebase operations becoming slow quite soon.
> Attaching stack traces where this has been witnessed, underlining the issue 
> below.
> with oak 1.0.11:
> {code}
>java.lang.Thread.State: RUNNABLE
>   at org.mapdb.BTreeMap.findChildren(BTreeMap.java:580)
>   at org.mapdb.BTreeMap.get(BTreeMap.java:607)
>   at org.mapdb.BTreeMap.containsKey(BTreeMap.java:1505)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$BranchCommitImpl.isModified(Branch.java:325)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Branch.getUnsavedLastRevision(Branch.java:242)
>   at 
> org.apache.jackrabbit.oak.plugins.document.NodeDocument.getNodeAtRevision(NodeDocument.java:896)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.readNode(DocumentNodeStore.java:929)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:706)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:703)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(Local

[jira] [Created] (OAK-2513) algorithm with O(n!) in mongoMk rebase - not finishing in years

2015-02-12 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-2513:


 Summary: algorithm with O(n!) in mongoMk rebase - not finishing in 
years
 Key: OAK-2513
 URL: https://issues.apache.org/jira/browse/OAK-2513
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.0.11, 1.0.9
 Environment: oak 1.0.9.r1646300 and 1.0.11 (see further details as to 
which stack is from which version below)
Reporter: Stefan Egli
Priority: Blocker


In MongoMk/DocumentNodeStore there is an algorithm which is O(n!), meaning that 
even with few entries (e.g. 12 or 13, as witnessed) it results in on the order 
of billions of operations (consisting of map.containsKey and map.put calls, for 
example) - i.e. it creates an almost infinite loop.

This situation is seen in DocumentNodeStore.merge() and 
DocumentNodeStore.readNode() when there is a branch with unsaved revisions 
(and, as [~chetanm] pointed out, Oak only creates a branch for large commits).

The assumptions behind it being an O(n!) algorithm are:
 * document.Branch.rebase() constructs a RebaseCommit() object, passing it the 
list of previous commits, and adds the resulting commit to that list of 
previous commits
 * this builds up a list of RebaseCommit() objects, each of which holds the 
(n-1) previous objects
 * later, in either document.Branch.applyTo() or isModified(), it recursively 
goes through this list of previous commits - and since that list also consists 
of RebaseCommit objects, each of which holds its (n-1) previous objects, this 
results in n * (n-1) * (n-2) * (n-3) * ... such recursions (see the toy sketch 
below)
 * note that during the 'almost endless loop' there is no activity against the 
underlying store, e.g. mongod - but the cache.load() call nevertheless keeps a 
lock for that amount of time (for the affected path at least)

Since n! gets big already for small n (e.g. 8! = 40,320), you could start 
seeing bursts of rebase operations becoming slow quite soon.
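
As a rough illustration of the pattern described in the list above, here is a 
minimal, self-contained toy sketch (hypothetical classes, not Oak's actual 
Branch/RebaseCommit code): every rebase commit keeps the list of all previous 
commits, and isModified() descends into each of them again without any 
memoization, so the number of calls grows combinatorially with the number of 
rebase commits.

{code}
import java.util.ArrayList;
import java.util.List;

public class ToyRebaseRecursion {

    static long calls = 0;

    interface Commit {
        boolean isModified(String path);
    }

    // stands in for Branch.RebaseCommit: holds all commits created before it
    static class ToyRebaseCommit implements Commit {
        private final List<Commit> previous;

        ToyRebaseCommit(List<Commit> previous) {
            this.previous = new ArrayList<>(previous);
        }

        @Override
        public boolean isModified(String path) {
            calls++;
            for (Commit c : previous) {   // re-visits every previous commit,
                if (c.isModified(path)) { // which re-visits all of its previous
                    return true;          // commits in turn, and so on
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        List<Commit> commits = new ArrayList<>();
        for (int i = 0; i < 20; i++) {    // only 20 rebases ...
            commits.add(new ToyRebaseCommit(commits));
        }
        commits.get(commits.size() - 1).isModified("/foo");
        // ... already more than 500,000 isModified() calls
        System.out.println("isModified() calls: " + calls);
    }
}
{code}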

Attaching stack traces where this has been witnessed, underlining the issue 
below.

with oak 1.0.11:
{code}
   java.lang.Thread.State: RUNNABLE
at org.mapdb.BTreeMap.findChildren(BTreeMap.java:580)
at org.mapdb.BTreeMap.get(BTreeMap.java:607)
at org.mapdb.BTreeMap.containsKey(BTreeMap.java:1505)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$BranchCommitImpl.isModified(Branch.java:325)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch$RebaseCommit.isModified(Branch.java:366)
at 
org.apache.jackrabbit.oak.plugins.document.Branch.getUnsavedLastRevision(Branch.java:242)
at 
org.apache.jackrabbit.oak.plugins.document.NodeDocument.getNodeAtRevision(NodeDocument.java:896)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.readNode(DocumentNodeStore.java:929)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:706)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:703)
at 
com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
- locked <0x000552d56720> (a 
com.google.common.cache.LocalCache$StrongAccessEntry)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
at 
org.apache.jackrabbit.oak.plugins.docum

[jira] [Created] (OAK-2512) ACL filtering for faceted search

2015-02-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2512:


 Summary: ACL filtering for faceted search
 Key: OAK-2512
 URL: https://issues.apache.org/jira/browse/OAK-2512
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: query
Reporter: Tommaso Teofili
 Fix For: 1.2


Faceted search results should be secured via ACL checks, meaning that facets 
should only come from nodes and properties that the calling user is able to 
read.





[jira] [Created] (OAK-2511) Support for faceted search in Lucene index

2015-02-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2511:


 Summary: Support for faceted search in Lucene index
 Key: OAK-2511
 URL: https://issues.apache.org/jira/browse/OAK-2511
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: oak-lucene
Reporter: Tommaso Teofili
 Fix For: 1.2


A Lucene index should be able to provide faceted search results, based on the 
support provided in the query engine.





[jira] [Created] (OAK-2510) Support for faceted search in Solr index

2015-02-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2510:


 Summary: Support for faceted search in Solr index
 Key: OAK-2510
 URL: https://issues.apache.org/jira/browse/OAK-2510
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.2


A Solr index should be able to provide faceted search results, based on the 
support provided in the query engine.





[jira] [Created] (OAK-2509) Support for faceted search in query engine

2015-02-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2509:


 Summary: Support for faceted search in query engine
 Key: OAK-2509
 URL: https://issues.apache.org/jira/browse/OAK-2509
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: query
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.2


Support for queries like {{select [jcr:path], [facet(tags)] from [nt:base] 
where contains([jcr:title], 'oak')}} and their _xpath_ equivalents should be 
provided in the query engine; that way we remain compliant with the JCR spec 
and no Oak-specific API is needed.

Each row in the results would then contain the facets as a String value, which 
could then be adapted to a typed class (e.g. a Collection) via a dedicated 
method in a specific utility class (e.g. {{QueryUtils}} in _oak-commons_).
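
For illustration, consuming such a query through the plain JCR API might look 
roughly like the sketch below; note that the {{facet(...)}} column is the 
syntax proposed here (it does not exist yet) and the way the facet value is 
read back per row is an assumption.

{code}
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class FacetQuerySketch {

    public static void printFacets(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
                "select [jcr:path], [facet(tags)] from [nt:base] "
                        + "where contains([jcr:title], 'oak')",
                Query.JCR_SQL2);
        QueryResult result = query.execute();
        for (RowIterator rows = result.getRows(); rows.hasNext();) {
            Row row = rows.nextRow();
            // facets come back as a plain String per row and would be parsed
            // by a dedicated utility (e.g. the proposed QueryUtils helper)
            String rawFacets = row.getValue("facet(tags)").getString();
            System.out.println(row.getValue("jcr:path").getString()
                    + " -> " + rawFacets);
        }
    }
}
{code}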






[jira] [Assigned] (OAK-1736) Support for Faceted Search

2015-02-12 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili reassigned OAK-1736:


Assignee: Tommaso Teofili

> Support for Faceted Search
> --
>
> Key: OAK-1736
> URL: https://issues.apache.org/jira/browse/OAK-1736
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: oak-lucene, oak-solr, query
>Reporter: Thomas Mueller
>Assignee: Tommaso Teofili
> Fix For: 1.2
>
>
> Details to be defined.





[jira] [Comment Edited] (OAK-2395) RDB: MS SQL Server support

2015-02-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318303#comment-14318303
 ] 

Julian Reschke edited comment on OAK-2395 at 2/12/15 3:14 PM:
--

We probably need to switch to binary IDs here as well; the table creation 
passes, but in practice only 900 bytes == 450 characters (MS SQL Server uses 
UCS-2) are allowed in the primary key. See 
BasicDocumentStoreTest.testMaxIdAscii (see also 
https://technet.microsoft.com/en-us/library/ms191241%28v=sql.105%29.aspx).


was (Author: reschke):
we probably need to switch to binary IDs here as well; the table create passes, 
but in practice only 900 bytes == 450 characters (MS uses UCS-2) are allowed in 
the primary key. See BasicDocumentStoreTest.testMaxIdAscii.

> RDB: MS SQL Server support
> --
>
> Key: OAK-2395
> URL: https://issues.apache.org/jira/browse/OAK-2395
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: mongomk
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.2
>
> Attachments: OAK-2395-v0.1.patch, sqlserver.diff
>
>






[jira] [Resolved] (OAK-2508) ACL filtering on spellchecks

2015-02-12 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-2508.
--
Resolution: Fixed

fixed in r1659285

> ACL filtering on spellchecks
> 
>
> Key: OAK-2508
> URL: https://issues.apache.org/jira/browse/OAK-2508
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: oak-lucene, oak-solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.1.7
>
>
> As done for _rep:suggest_, spellcheck results should also be ACL filtered the 
> same way we do for suggestions, that is, by running a query for each 
> spellcheck result and checking whether a resulting node is accessible to the 
> calling user (via {{Filter#isAccessible}}).





[jira] [Commented] (OAK-2395) RDB: MS SQL Server support

2015-02-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318303#comment-14318303
 ] 

Julian Reschke commented on OAK-2395:
-

We probably need to switch to binary IDs here as well; the table creation 
passes, but in practice only 900 bytes == 450 characters (MS SQL Server uses 
UCS-2) are allowed in the primary key. See BasicDocumentStoreTest.testMaxIdAscii.
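
To make the size arithmetic concrete: an id of length L needs 2*L bytes as an 
NVARCHAR key (UCS-2), but only about L bytes when stored as binary, which is 
presumably why binary IDs would roughly double the usable key length. A small 
sketch (the id value is made up and the column types are illustrative only; the 
900-byte figure is the limit quoted above):

{code}
import java.nio.charset.StandardCharsets;

public class IdKeySizeSketch {

    public static void main(String[] args) {
        // an artificial 503-character ASCII id, longer than 450 characters
        String id = "3:/" + "x".repeat(500);

        int ucs2Bytes = id.length() * 2;                              // NVARCHAR key
        int binaryBytes = id.getBytes(StandardCharsets.UTF_8).length; // binary key

        System.out.println("UCS-2 bytes:  " + ucs2Bytes);   // 1006 -> over the limit
        System.out.println("binary bytes: " + binaryBytes); // 503  -> fits
        System.out.println("fits 900-byte key as NVARCHAR: " + (ucs2Bytes <= 900));
        System.out.println("fits 900-byte key as binary:   " + (binaryBytes <= 900));
    }
}
{code}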

> RDB: MS SQL Server support
> --
>
> Key: OAK-2395
> URL: https://issues.apache.org/jira/browse/OAK-2395
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: mongomk
>Reporter: Amit Jain
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.2
>
> Attachments: OAK-2395-v0.1.patch, sqlserver.diff
>
>






[jira] [Created] (OAK-2508) ACL filtering on spellchecks

2015-02-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2508:


 Summary: ACL filtering on spellchecks
 Key: OAK-2508
 URL: https://issues.apache.org/jira/browse/OAK-2508
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.1.7


As done for _rep:suggest_, spellcheck results should also be ACL filtered the 
same way we do for suggestions, that is, by running a query for each spellcheck 
result and checking whether a resulting node is accessible to the calling user 
(via {{Filter#isAccessible}}).
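
To make the approach concrete, here is a small sketch of the filtering loop 
described above; the functional parameters are hypothetical stand-ins for 
running the query and for {{Filter#isAccessible}}, not the actual 
oak-lucene/oak-solr code.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class SpellcheckAclFilterSketch {

    /**
     * Keeps only spellcheck results for which at least one node returned by a
     * query for that result is readable by the calling user.
     *
     * @param spellchecks   raw spellcheck results from the index
     * @param queryForPaths runs a query for a spellcheck result, returns node paths
     * @param isAccessible  stand-in for Filter#isAccessible(path)
     */
    static List<String> filterByAcl(List<String> spellchecks,
                                    Function<String, List<String>> queryForPaths,
                                    Predicate<String> isAccessible) {
        List<String> filtered = new ArrayList<>();
        for (String spellcheck : spellchecks) {
            for (String path : queryForPaths.apply(spellcheck)) {
                if (isAccessible.test(path)) {
                    filtered.add(spellcheck); // one readable hit keeps the result
                    break;
                }
            }
        }
        return filtered;
    }
}
{code}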





[jira] [Created] (OAK-2507) Truncate journal.log after off line compaction

2015-02-12 Thread Michael Dürig (JIRA)
Michael Dürig created OAK-2507:
--

 Summary: Truncate journal.log after off line compaction
 Key: OAK-2507
 URL: https://issues.apache.org/jira/browse/OAK-2507
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig


After offline compaction the repository contains a single revision. However, 
the journal.log file will still contain the trail of all revisions that were 
removed during the compaction process. I suggest we truncate journal.log so 
that it only contains the latest revision created during compaction.
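
A rough sketch of the proposed cleanup, assuming journal.log is a plain text 
file with one revision entry per line and the newest entry last; this only 
illustrates the idea, it is not the actual compaction code.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class TruncateJournalSketch {

    public static void main(String[] args) throws IOException {
        Path journal = Paths.get(args.length > 0 ? args[0] : "journal.log");
        List<String> lines = Files.readAllLines(journal, StandardCharsets.UTF_8);
        // walk backwards to find the latest non-empty revision entry ...
        for (int i = lines.size() - 1; i >= 0; i--) {
            String latest = lines.get(i).trim();
            if (!latest.isEmpty()) {
                // ... and rewrite the file so it contains only that entry
                Files.write(journal,
                        (latest + "\n").getBytes(StandardCharsets.UTF_8));
                break;
            }
        }
    }
}
{code}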





[jira] [Resolved] (OAK-2505) Rows parameter is provided but the value is not taken from configuration

2015-02-12 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-2505.
--
Resolution: Fixed

fixed in r1659264

> Rows parameter is provided but the value is not taken from configuration
> 
>
> Key: OAK-2505
> URL: https://issues.apache.org/jira/browse/OAK-2505
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: oak-solr
>Affects Versions: 1.0.11
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.0.12
>
>
> Due to a wrong [SVN merge|http://svn.apache.org/r1601556], OAK-1800 (fetching 
> a configurable number of rows) doesn't work as expected in branch_1.0 because 
> the _rows_ configuration parameter is correctly set and fetched but not used 
> in _SolrQueryIndex_.





[jira] [Updated] (OAK-2110) performance issues with VersionGarbageCollector

2015-02-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2110:

Fix Version/s: (was: 1.1.7)
   1.1.8

> performance issues with VersionGarbageCollector
> ---
>
> Key: OAK-2110
> URL: https://issues.apache.org/jira/browse/OAK-2110
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Reporter: Julian Reschke
> Fix For: 1.1.8
>
>
> This one currently special-cases Mongo. For other persistences, it
> - fetches *all* documents
> - filters by SD_TYPE
> - filters by lastmod of versions
> - deletes what remains
> This is not only inefficient but also fails with an OutOfMemoryError for any 
> larger repo.





[jira] [Updated] (OAK-1266) DocumentStore implementation for relational databases

2015-02-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-1266:

Component/s: rdbmk

> DocumentStore implementation for relational databases
> -
>
> Key: OAK-1266
> URL: https://issues.apache.org/jira/browse/OAK-1266
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: mongomk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.2
>
>
> There should be an alternative DocumentStore implementation that persists to 
> SQL databases rather than MongoDB.





[jira] [Created] (OAK-2506) port RDB support back to Oak 1.0

2015-02-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-2506:
---

 Summary: port RDB support back to Oak 1.0
 Key: OAK-2506
 URL: https://issues.apache.org/jira/browse/OAK-2506
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, parent, rdbmk, run
Affects Versions: 1.0.11
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Minor


This will affect:

- the parent POM (profiles for various DBs)
- adding RDBDocumentStore, RDBBlobStore + friends + tests
- rewiring DocumentNodeStoreService + fixtures
- oak-run support (repo instantiation)





[jira] [Commented] (OAK-2493) DataStore GC: Fix incorrect tests

2015-02-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318153#comment-14318153
 ] 

Julian Reschke commented on OAK-2493:
-

I haven't been able to reproduce the problem again, so whatever you did may 
have fixed it.

> DataStore GC: Fix incorrect tests
> -
>
> Key: OAK-2493
> URL: https://issues.apache.org/jira/browse/OAK-2493
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.1.7
>
>
> The DataStore GC test for SegmentNodeStore is out of sync with the latest 
> changes and is incorrect. Also, the data store GC tests need tighter test 
> expectations.





[jira] [Created] (OAK-2505) Rows parameter is provided but the value is not taken from configuration

2015-02-12 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2505:


 Summary: Rows parameter is provided but the value is not taken 
from configuration
 Key: OAK-2505
 URL: https://issues.apache.org/jira/browse/OAK-2505
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-solr
Affects Versions: 1.0.11
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.12


Due to a wrong [SVN merge|http://svn.apache.org/r1601556], OAK-1800 (fetching a 
configurable number of rows) doesn't work as expected in branch_1.0 because the 
_rows_ configuration parameter is correctly set and fetched but not used in 
_SolrQueryIndex_.
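
For illustration, the fix boils down to actually applying the configured value 
to the outgoing Solr query; a minimal sketch (method and parameter names are 
assumed, not copied from branch_1.0):

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class RowsConfigSketch {

    // configuredRows stands for the 'rows' value read from the Solr index
    // configuration; the point is that it must be passed on to the query
    // instead of being fetched and then ignored.
    static SolrQuery buildQuery(String queryString, int configuredRows) {
        SolrQuery query = new SolrQuery(queryString);
        query.setRows(configuredRows);
        return query;
    }
}
{code}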





[jira] [Commented] (OAK-2490) Make it possible to use the PermissionProvider from within query indexes

2015-02-12 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318007#comment-14318007
 ] 

Thomas Mueller commented on OAK-2490:
-

> It would be nice if we change the IndexPlan exposing this information to the 
> query engine

Yes, even though I don't think this is a high priority. Sure, access rights 
checks can be avoided in this case, but I would expect this information to be 
cached. There is a slight complication: the select list can contain properties 
other than those in the condition (select a from x where b = 1), so access 
rights checks for b alone are not enough in this case. If access rights are on 
the node level (all properties are readable, or none are) then this is not a 
problem.

> Make it possible to use the PermissionProvider from within query indexes
> 
>
> Key: OAK-2490
> URL: https://issues.apache.org/jira/browse/OAK-2490
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.1.7
>
> Attachments: OAK-2490.0.patch, OAK-2490.1.patch, OAK-2490.2.patch
>
>
> As discussed on OAK-2423 and OAK-2473, it's useful to have a 
> {{PermissionProvider}} available down in {{QueryIndex}} implementations and/or 
> {{SelectorImpl}}, depending on whether the ACL checks have to be done on a 
> per-implementation basis or generally.





[jira] [Updated] (OAK-2504) oak-run debug should list a breakdown of space usage per record type

2015-02-12 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2504:
---
Fix Version/s: (was: 1.1.7)
   1.1.8

> oak-run debug should list a breakdown of space usage per record type
> 
>
> Key: OAK-2504
> URL: https://issues.apache.org/jira/browse/OAK-2504
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: run
>Reporter: Michael Dürig
>Assignee: Michael Dürig
> Fix For: 1.1.8
>
>
> The current output of oak-run in debug run mode only gives a disk space 
> breakdown for bulk and data segments. For OAK-2408 it would be good if we 
> could have a finer-grained breakdown showing which record types take up how 
> much space.





[jira] [Updated] (OAK-2384) SegmentNotFoundException when keeping JCR Value references

2015-02-12 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2384:
---
Fix Version/s: (was: 1.1.7)
   1.1.8

> SegmentNotFoundException when keeping JCR Value references
> --
>
> Key: OAK-2384
> URL: https://issues.apache.org/jira/browse/OAK-2384
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Critical
>  Labels: gc
> Fix For: 1.0.12, 1.1.8
>
> Attachments: org.apache.sling.jcr.resource-2.4.0.jar
>
>
> With OAK-2192 revision gc started to remove segments older than a certain 
> threshold. The underlying assumption was that old sessions would call refresh 
> (i.e. auto refresh) anyway once they become active again. However, it turns 
> out that refreshing a session does not affect JCR values, as those are 
> directly tied to the underlying record. Accessing such a value after its 
> segment has been gc'ed results in a {{SegmentNotFoundException}}. 
> Keeping references to JCR values is an important use case for Sling's 
> {{JcrPropertyMap}}, which is widely used.





[jira] [Updated] (OAK-2262) Add metadata about the changed value to a PROPERTY_CHANGED event on a multivalued property

2015-02-12 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2262:
---
Fix Version/s: (was: 1.1.7)
   1.1.8

> Add metadata about the changed value to a PROPERTY_CHANGED event on a 
> multivalued property
> --
>
> Key: OAK-2262
> URL: https://issues.apache.org/jira/browse/OAK-2262
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, jcr
>Affects Versions: 1.1.2
>Reporter: Tommaso Teofili
>Assignee: Michael Dürig
>  Labels: observation
> Fix For: 1.1.8
>
>
> When getting _PROPERTY_CHANGED_ events on non-multivalued properties, only one 
> value can actually have changed, so handlers of such events do not need any 
> further information to process them and eventually work on the changed value. 
> On the other hand, _PROPERTY_CHANGED_ events on multivalued properties 
> (e.g. String[]) may relate to any of the values, which brings uncertainty for 
> event handlers processing such changes because there is no means of knowing 
> which property value has been changed, and therefore no way for them to react 
> accordingly.
> A workaround is to create Oak-specific _Observers_ which can deal with the 
> diff between the before and after state and create a specific event containing 
> the "diff"; however, this would add a non-trivial load to the repository 
> because of the _Observer_ itself and because of the additional events being 
> generated. It would be great if the 'default' events carried metadata, e.g. 
> the index of the changed value or similar information, that helps in 
> understanding which value has been changed (added, deleted, updated).





[jira] [Commented] (OAK-2504) oak-run debug should list a breakdown of space usage per record type

2015-02-12 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317903#comment-14317903
 ] 

Michael Dürig commented on OAK-2504:


Initial version at http://svn.apache.org/r1659183

> oak-run debug should list a breakdown of space usage per record type
> 
>
> Key: OAK-2504
> URL: https://issues.apache.org/jira/browse/OAK-2504
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: run
>Reporter: Michael Dürig
>Assignee: Michael Dürig
> Fix For: 1.1.7
>
>
> The current output of oak-run in debug run mode only gives a disk space 
> breakdown for bulk and data segments. For OAK-2408 it would be good if we 
> could have a finer-grained breakdown showing which record types take up how 
> much space.





[jira] [Created] (OAK-2504) oak-run debug should list a breakdown of space usage per record type

2015-02-12 Thread Michael Dürig (JIRA)
Michael Dürig created OAK-2504:
--

 Summary: oak-run debug should list a breakdown of space usage per 
record type
 Key: OAK-2504
 URL: https://issues.apache.org/jira/browse/OAK-2504
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: run
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 1.1.7


The current output of oak-run in debug run mode only gives a disk space 
breakdown for bulk and data segments. For OAK-2408 it would be good if we could 
have a finer-grained breakdown showing which record types take up how much 
space.
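
For illustration, the requested output is essentially a size-per-record-type 
aggregation; a toy sketch with made-up record types and sizes (the real numbers 
would come from reading the records of all segments in debug mode):

{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class RecordTypeBreakdownSketch {

    public static void main(String[] args) {
        Map<String, Long> bytesByType = new LinkedHashMap<>();
        // in the real tool this would iterate over all records of all segments
        add(bytesByType, "NODE", 1024);
        add(bytesByType, "TEMPLATE", 256);
        add(bytesByType, "VALUE", 4096);
        bytesByType.forEach((type, bytes) ->
                System.out.printf("%-10s %10d bytes%n", type, bytes));
    }

    static void add(Map<String, Long> acc, String type, long size) {
        acc.merge(type, size, Long::sum);
    }
}
{code}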





[jira] [Commented] (OAK-2006) Verify the maven baseline output and fix the warnings

2015-02-12 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317833#comment-14317833
 ] 

Michael Dürig commented on OAK-2006:


Note that the baseline feature could make the build somewhat non-deterministic, 
as it uses the local Maven cache to determine the last version to compare 
against. See http://markmail.org/message/6rnscyv3gurgfdpd

> Verify the maven baseline output and fix the warnings
> -
>
> Key: OAK-2006
> URL: https://issues.apache.org/jira/browse/OAK-2006
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Attachments: baseline-oak-core.patch
>
>
> Currently the maven baseline plugin only logs the package version mismatches; 
> it doesn't fail the build. It would be beneficial to start looking at the 
> output and possibly fix some of the warnings (increase the OSGi package 
> versions).


