[jira] [Commented] (OAK-1522) Provide PojoSR based RepositoryFactory implementation

2014-05-28 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010915#comment-14010915
 ] 

Chetan Mehrotra commented on OAK-1522:
--

Fixed the compilation issues (with JDK 7) and the test failure on Windows, and 
re-enabled the module. [~reschke] Can you check whether the module builds at your end?

 Provide PojoSR based RepositoryFactory implementation
 -

 Key: OAK-1522
 URL: https://issues.apache.org/jira/browse/OAK-1522
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: run
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1

 Attachments: OAK-1522.patch


 This feature provides support for configuring the repository through the built-in 
 OSGi support in a non-OSGi environment, using PojoSR [1].
 For more details refer to [2]. Note that PojoSR is being moved to Apache 
 Felix (FELIX-4445).
 [1] https://code.google.com/p/pojosr/
 [2] http://jackrabbit-oak.markmail.org/thread/7zpux64mj6vecwzf
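
For context, the standard JCR 2.0 way to obtain a {{Repository}} from a
{{RepositoryFactory}} is the {{ServiceLoader}} lookup sketched below; the
configuration key shown is a placeholder, not necessarily one understood by the
PojoSR based factory from this issue.
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

import javax.jcr.Repository;
import javax.jcr.RepositoryException;
import javax.jcr.RepositoryFactory;

public class RepositoryLookup {

    public static Repository lookup() throws RepositoryException {
        Map<String, String> parameters = new HashMap<String, String>();
        // Placeholder key; the keys accepted by the PojoSR based factory are
        // defined by the OAK-1522 implementation, not here.
        parameters.put("org.apache.jackrabbit.repository.home", "/path/to/repository");

        // Standard JCR 2.0 lookup: every RepositoryFactory on the classpath is
        // asked in turn; a factory returns null if it cannot handle the parameters.
        for (RepositoryFactory factory : ServiceLoader.load(RepositoryFactory.class)) {
            Repository repository = factory.getRepository(parameters);
            if (repository != null) {
                return repository;
            }
        }
        return null;
    }
}
{code}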



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1861) Limit memory usage of DocumentNodeStore.readChildren()

2014-05-28 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-1861:
-

 Summary: Limit memory usage of DocumentNodeStore.readChildren()
 Key: OAK-1861
 URL: https://issues.apache.org/jira/browse/OAK-1861
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger


There is still a TODO in DocumentNodeStore.readChildren() about memory usage. 
The name offset is already implemented and used when iterating over many child 
nodes. But there are still cases where the readChildren() method itself may use 
too much memory. This happens when there are a lot of documents for deleted 
child nodes. The for loop inside readChildren() keeps doubling the rawLimit 
until it is able to fetch the requested number of nodes, and each iteration 
starts again with an empty list of children. This should be improved to 
continue after the last returned document.
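
A minimal sketch of that improvement; the {{Source}} interface and its method
names are made up for illustration and are not the DocumentNodeStore/DocumentStore
API.
{code}
import java.util.ArrayList;
import java.util.List;

public class ReadChildrenSketch {

    /** Hypothetical data source, standing in for the document store. */
    interface Source {
        /** Returns up to batchSize document ids greater than fromId. */
        List<String> readBatch(String fromId, int batchSize);

        /** True if the document still represents an existing (not deleted) child. */
        boolean exists(String id);
    }

    static List<String> readChildren(Source source, String parentId, int limit) {
        List<String> children = new ArrayList<String>();
        String fromId = parentId;   // start of the child key range
        int batchSize = limit;      // fixed batch size, no doubling
        while (children.size() < limit) {
            List<String> batch = source.readBatch(fromId, batchSize);
            if (batch.isEmpty()) {
                break;              // nothing left to read
            }
            for (String id : batch) {
                if (children.size() < limit && source.exists(id)) {
                    children.add(id);   // keep what was already collected
                }
            }
            // continue after the last returned document instead of re-reading
            // the whole range with an empty children list
            fromId = batch.get(batch.size() - 1);
        }
        return children;
    }
}
{code}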



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1862) Checkpoints release method

2014-05-28 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-1862:


 Summary: Checkpoints release method
 Key: OAK-1862
 URL: https://issues.apache.org/jira/browse/OAK-1862
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu


I'd like to add an option to release a given checkpoint to the NodeStore APIs. 
This would help keep the checkpoints created by the async indexer in check.
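
A possible shape for such an addition; this is only a sketch of the idea, not
the final API.
{code}
import org.apache.jackrabbit.oak.spi.state.NodeState;

// Abbreviated sketch, not the actual org.apache.jackrabbit.oak.spi.state.NodeStore.
public interface NodeStore {

    // existing checkpoint related methods (abbreviated)
    String checkpoint(long lifetime);

    NodeState retrieve(String checkpoint);

    /**
     * Proposed: release the given checkpoint so that the revisions it pins can
     * be garbage collected. Returns true if the checkpoint existed and was
     * removed.
     */
    boolean release(String checkpoint);
}
{code}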



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1804) TarMK compaction

2014-05-28 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010990#comment-14010990
 ] 

Alex Parvulescu commented on OAK-1804:
--

bq. In revision 1597854 I adjusted the compaction code to avoid memory problems 
when doing a deep compaction of an existing repository.
I managed to run into a deadlock and I think this is the change responsible for 
it :)

{code}
pool-8-thread-5 - Thread t@192
   java.lang.Thread.State: BLOCKED
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:610)
- waiting to lock 6b766619 (a 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore) owned by TarMK flush 
thread: /../repository/segmentstore t@58
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:213)
- locked 79495e14 (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.checkpoint(SegmentNodeStore.java:209)
- locked 20066ede (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore)
at 
org.apache.jackrabbit.oak.spi.state.ProxyNodeStore.checkpoint(ProxyNodeStore.java:67)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:149)
- locked 658c13f (a 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate)

TarMK flush thread: /../repository/segmentstore - Thread t@58
   java.lang.Thread.State: BLOCKED
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeTemplate(SegmentWriter.java:845)
- waiting to lock 79495e14 (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter) owned by 
pool-8-thread-5 t@192
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:950)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:61)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.updated(SegmentNodeBuilder.java:46)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:205)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:489)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:364)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:408)
- locked 6b766619 (a 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
- locked 365727bc (a java.util.concurrent.atomic.AtomicReference)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:247)
{code}
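
For readers following the dump: the two threads take the same pair of monitors
in opposite order, the classic deadlock condition. A stripped-down illustration
(the class and method names in the comments refer to the real code, the rest is
a toy example):
{code}
public class LockOrderIllustration {

    private static final Object FILE_STORE = new Object();     // FileStore monitor
    private static final Object SEGMENT_WRITER = new Object(); // SegmentWriter monitor

    public static void main(String[] args) {
        // async index thread (pool-8-thread-5): SegmentWriter -> FileStore
        new Thread(new Runnable() {
            public void run() {
                synchronized (SEGMENT_WRITER) {      // SegmentWriter.flush()
                    synchronized (FILE_STORE) {      // FileStore.writeSegment()
                        // write the segment
                    }
                }
            }
        }).start();

        // TarMK flush thread: FileStore -> SegmentWriter
        new Thread(new Runnable() {
            public void run() {
                synchronized (FILE_STORE) {          // FileStore.flush() during compaction
                    synchronized (SEGMENT_WRITER) {  // SegmentWriter.writeTemplate()
                        // write the compacted template
                    }
                }
            }
        }).start();

        // Depending on the interleaving, each thread ends up holding one monitor
        // and waiting forever for the other, exactly as in the thread dump above.
    }
}
{code}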

 TarMK compaction
 

 Key: OAK-1804
 URL: https://issues.apache.org/jira/browse/OAK-1804
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: segmentmk
Reporter: Jukka Zitting
Assignee: Alex Parvulescu
  Labels: production, tools
 Fix For: 1.0.1, 1.1

 Attachments: SegmentNodeStore.java.patch, compaction.patch


 The TarMK would benefit from periodic compact operations that would 
 traverse and recreate (parts of) the content tree in order to optimize the 
 storage layout. More specifically, such compaction would:
 * Optimize performance by increasing locality and reducing duplication, both 
 of which improve the effectiveness of caching.
 * Allow the garbage collector to release more unused disk space by removing 
 references to segments where only a subset of content is reachable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1522) Provide PojoSR based RepositoryFactory implementation

2014-05-28 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010992#comment-14010992
 ] 

Julian Reschke commented on OAK-1522:
-

[~chetanm] looks good.

 Provide PojoSR based RepositoryFactory implementation
 -

 Key: OAK-1522
 URL: https://issues.apache.org/jira/browse/OAK-1522
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: run
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1

 Attachments: OAK-1522.patch


 This feature provides support for configuring the repository through the built-in 
 OSGi support in a non-OSGi environment, using PojoSR [1].
 For more details refer to [2]. Note that PojoSR is being moved to Apache 
 Felix (FELIX-4445).
 [1] https://code.google.com/p/pojosr/
 [2] http://jackrabbit-oak.markmail.org/thread/7zpux64mj6vecwzf



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1845) Add command to execute script

2014-05-28 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-1845.
--

   Resolution: Fixed
Fix Version/s: 1.1

The requirements are now addressed via the Groovy-based shell support, so an 
explicit eval command is not required unless we need to support evaluating 
scripts in languages other than Groovy. If such a need arises, a specific issue 
can be created.

 Add command to execute script
 -

 Key: OAK-1845
 URL: https://issues.apache.org/jira/browse/OAK-1845
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: run
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1


 Add a command which can execute an arbitrary script
 {noformat}
  script /path/to/script
  script -groovy println 'Hello'
  script print(hello)
  script EOT
 println(nodeStore.getRoot())
 EOT
 {noformat}
 For the last form the script would be assumed to be JavaScript by default. Later 
 we can enhance the Console to read multi-line strings using a here-string. 
 The ConsoleSession can be exposed as a bind variable and used by the script 
 to access the runtime.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (OAK-1805) Debugging console

2014-05-28 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra reassigned OAK-1805:


Assignee: Chetan Mehrotra

 Debugging console
 -

 Key: OAK-1805
 URL: https://issues.apache.org/jira/browse/OAK-1805
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: run
Reporter: Jukka Zitting
Assignee: Chetan Mehrotra
  Labels: production, tools
 Attachments: OAK-1805-groovysh.patch


 It would be nice for {{oak-run}} to come with a debugging console like the 
 {{cli}} mode in {{jackrabbit-standalone}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1805) Debugging console

2014-05-28 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-1805.
--

Resolution: Fixed

Required support added with http://svn.apache.org/r1597963

Also tested the shell support on Windows and Ubuntu. On Windows there is a 
minor issue: the previous command is not displayed with the up/down arrow keys. 
This is due to GROOVY-6453 [1] and requires an update of jline [2] to 2.12 
(currently 2.11 is used). For now one can use Ctrl+N and Ctrl+P instead of the 
up/down arrow keys.

[1] https://jira.codehaus.org/browse/GROOVY-6453
[2] https://github.com/jline/jline2

 Debugging console
 -

 Key: OAK-1805
 URL: https://issues.apache.org/jira/browse/OAK-1805
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: run
Reporter: Jukka Zitting
Assignee: Chetan Mehrotra
  Labels: production, tools
 Attachments: OAK-1805-groovysh.patch


 It would be nice for {{oak-run}} to come with a debugging console like the 
 {{cli}} mode in {{jackrabbit-standalone}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1804) TarMK compaction

2014-05-28 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011024#comment-14011024
 ] 

Alex Parvulescu commented on OAK-1804:
--

I managed to work around the deadlock by wrapping line 209 
( _store.getTracker().getWriter().flush();_ ) in a _synchronized (store)_ block:
{code}
synchronized (store) {
  store.getTracker().getWriter().flush();
}
{code}
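
If that holds up, it is presumably because the extra block makes the checkpoint
path acquire the FileStore monitor before the SegmentWriter monitor, i.e. the
same order the flush thread uses, so the cyclic wait from the dump above cannot
form; schematically (annotation only, same code as above):
{code}
synchronized (store) {                       // FileStore monitor taken first...
    store.getTracker().getWriter().flush();  // ...then the SegmentWriter monitor inside flush()
}
{code}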

[~jukkaz] any ideas?

 TarMK compaction
 

 Key: OAK-1804
 URL: https://issues.apache.org/jira/browse/OAK-1804
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: segmentmk
Reporter: Jukka Zitting
Assignee: Alex Parvulescu
  Labels: production, tools
 Fix For: 1.0.1, 1.1

 Attachments: SegmentNodeStore.java.patch, compaction.patch


 The TarMK would benefit from periodic compact operations that would 
 traverse and recreate (parts of) the content tree in order to optimize the 
 storage layout. More specifically, such compaction would:
 * Optimize performance by increasing locality and reducing duplication, both 
 of which improve the effectiveness of caching.
 * Allow the garbage collector to release more unused disk space by removing 
 references to segments where only a subset of content is reachable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1804) TarMK compaction

2014-05-28 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011043#comment-14011043
 ] 

Alex Parvulescu commented on OAK-1804:
--

... and another deadlock...
This happened while shutting down CQ, but from the stack traces it doesn't look 
related.

{code}

pool-8-thread-1 - Thread t@170
   java.lang.Thread.State: BLOCKED
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:610)
- waiting to lock 7aa244df (a 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore) owned by TarMK flush 
thread: /../repository/segmentstore t@58
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:213)
- locked 36c6a6bf (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:271)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:237)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeValueRecord(SegmentWriter.java:512)
- locked 36c6a6bf (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.internalWriteStream(SegmentWriter.java:765)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeStream(SegmentWriter.java:747)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeBlob(SegmentWriter.java:717)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeProperty(SegmentWriter.java:808)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeProperty(SegmentWriter.java:796)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1004)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeAdded(SegmentWriter.java:967)
at 
org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:393)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:964)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:992)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:973)
at 
org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:964)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:973)
at 
org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:964)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:61)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.init(SegmentNodeStore.java:309)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:132)
at 
org.apache.jackrabbit.oak.spi.state.ProxyNodeStore.merge(ProxyNodeStore.java:42)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:181)
- locked 7349fabe (a 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate)
at 
org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

TarMK flush thread: /../repository/segmentstore - Thread t@58
   java.lang.Thread.State: BLOCKED
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeValueRecord(SegmentWriter.java:506)
- waiting to lock 36c6a6bf (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter) owned by 
pool-8-thread-1 t@170
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.internalWriteStream(SegmentWriter.java:765)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeStream(SegmentWriter.java:747)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.clone(SegmentBlob.java:149)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:352)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at ...
{code}

[jira] [Updated] (OAK-1752) Node name queries should use an index

2014-05-28 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1752:
---

Fix Version/s: 1.1

 Node name queries should use an index
 -

 Key: OAK-1752
 URL: https://issues.apache.org/jira/browse/OAK-1752
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Alex Parvulescu
 Fix For: 1.1


 The node name queries currently don't use any index, making them really slow 
 and triggering a lot of traversal warnings.
 Simply adding node names to a property index would index too much content, 
 but as Lucene already indexes the node names, using this index would 
 be one viable option.
 {code}
 /jcr:root//*[fn:name() = 'jcr:content']
 /jcr:root//*[jcr:like(fn:name(), 'jcr:con%')] 
 {code}
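
For reference, the same constraint posed through the JCR query API in JCR-SQL2
(just an equivalent form of the first XPath query above, not part of the
proposed fix):
{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class NodeNameQueryExample {

    /** Finds all nodes named jcr:content, like the first XPath example above. */
    public static QueryResult findByName(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "SELECT * FROM [nt:base] WHERE NAME() = 'jcr:content'",
                Query.JCR_SQL2);
        return q.execute();
    }
}
{code}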



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1861) Limit memory usage of DocumentNodeStore.readChildren()

2014-05-28 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1861:
---

Fix Version/s: 1.1

 Limit memory usage of DocumentNodeStore.readChildren()
 --

 Key: OAK-1861
 URL: https://issues.apache.org/jira/browse/OAK-1861
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
 Fix For: 1.1


 There is still a TODO in DocumentNodeStore.readChildren() about memory usage. 
 The name offset is already implemented and used when iterating over many 
 child nodes. But there are still cases where the readChildren() method itself 
 may use too much memory. This happens when there are a lot of documents for 
 deleted child nodes. The for loop inside readChildren() keeps doubling the 
 rawLimit until it is able to fetch the requested number of nodes, and each 
 iteration starts again with an empty list of children. This should be improved 
 to continue after the last returned document.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1804) TarMK compaction

2014-05-28 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011056#comment-14011056
 ] 

Alex Parvulescu commented on OAK-1804:
--

Another one: even though it has the same root cause, I'm pasting it here as it 
looks like a different combination of locks.

{code}
pool-8-thread-1 - Thread t@171
   java.lang.Thread.State: BLOCKED
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:584)
- waiting to lock 2c1d063c (a 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore) owned by TarMK flush 
thread: /../repository/segmentstore t@58
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:106)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:97)
- locked 5621c650 (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentId)
at 
org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:68)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getChildNodeMap(SegmentNodeState.java:80)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getChildNode(SegmentNodeState.java:336)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.getRoot(SegmentNodeStore.java:117)
at 
org.apache.jackrabbit.oak.spi.state.ProxyNodeStore.getRoot(ProxyNodeStore.java:35)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.isAlreadyRunning(AsyncIndexUpdate.java:273)
at 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:144)
- locked 5bc6882c (a 
org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate)
at 
org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)

TarMK flush thread: /../repository/segmentstore - Thread t@58
   java.lang.Thread.State: BLOCKED
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:93)
- waiting to lock 5621c650 (a 
org.apache.jackrabbit.oak.plugins.segment.SegmentId) owned by pool-8-thread-1 
t@171
at 
org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:68)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:74)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getProperties(SegmentNodeState.java:142)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:343)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.compact(FileStore.java:371)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:411)
- locked 2c1d063c (a 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
- locked 19a94a0 (a java.util.concurrent.atomic.AtomicReference)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:247)
at java.lang.Thread.run(Thread.java:744)
{code}

 TarMK compaction
 

 Key: OAK-1804
 URL: https://issues.apache.org/jira/browse/OAK-1804
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: segmentmk
Reporter: Jukka Zitting
Assignee: Alex Parvulescu
  Labels: production, tools
 Fix For: 1.0.1, 1.1

 Attachments: SegmentNodeStore.java.patch, compaction.patch


 The TarMK would benefit from periodic compact operations that would 
 traverse and recreate (parts of) the content tree in order to optimize the 
 storage layout. More specifically, such compaction would:
 * Optimize performance by increasing locality and reducing duplication, both 
 of which improve the effectiveness of caching.
 * Allow the garbage collector to release more unused disk space by removing 
 references to segments where only a subset of content is reachable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1838) NodeStore API: divergence in contract and implementations

2014-05-28 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011069#comment-14011069
 ] 

Michael Dürig commented on OAK-1838:


I implemented something along the lines of point 1 above in my private [Github 
branch|https://github.com/mduerig/jackrabbit-oak/tree/OAK-1838]. 
{{NodeStore.merge}} and {{NodeStoreBranch.merge}} now take an 
{{EditorProvider}} instead of a {{CommitHook}}. To cover for raw 
{{CommitHooks}} I added {{HookEditor}}, which wraps the former into an 
{{Editor}}. This approach allows me to pass a {{NodeBuilder}} directly into the 
{{EditorProvider}} so that one will be used during pre-commit processing 
instead of a new one acquired from the after state of the {{CommitHook}}. 
Finally I replaced the thread-local-like approach in the document node store 
for passing the current branch around with passing it explicitly through this 
new mechanism.
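
A rough sketch of the changed signatures as I read them from the description
above; this is inferred, not copied from the branch.
{code}
import org.apache.jackrabbit.oak.api.CommitFailedException;
import org.apache.jackrabbit.oak.spi.commit.CommitHook;
import org.apache.jackrabbit.oak.spi.commit.CommitInfo;
import org.apache.jackrabbit.oak.spi.commit.DefaultEditor;
import org.apache.jackrabbit.oak.spi.commit.EditorProvider;
import org.apache.jackrabbit.oak.spi.state.NodeBuilder;
import org.apache.jackrabbit.oak.spi.state.NodeState;

// Sketch only; abbreviated and not the actual interfaces on the branch.
interface NodeStoreSketch {
    // was: NodeState merge(NodeBuilder builder, CommitHook hook, CommitInfo info)
    NodeState merge(NodeBuilder builder, EditorProvider provider, CommitInfo info)
            throws CommitFailedException;
}

/** Adapts a raw CommitHook so it can be plugged in where an Editor is expected. */
class HookEditor extends DefaultEditor {

    private final CommitHook hook;

    HookEditor(CommitHook hook) {
        this.hook = hook;
    }

    // On leaving the root the adapter would run hook.processCommit(before, after, info)
    // against the passed NodeBuilder's state; details omitted in this sketch.
}
{code}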

With these changes the (integration) tests pass, except for 
{{RepositoryGroupMemberUpgradeTest#verifyMemberOf}}, which needs further 
investigation. 

The advantage of this approach is that we can now document the memory 
limitation of the node builders passed to client code, while previously this 
was pretty much implied. More concretely this means node builders are generally 
not safe for a lot of changes, except for the one passed through the 
{{EditorProvider}}. 

Overall I don't like this approach very much. While it is a slight improvement 
over the current situation, it seems too complex and too fragile. Passing all 
that state around (i.e. the {{NodeBuilder}}) instead of relying on the pure 
(i.e. stateless) {{CommitHook.processCommit}} function complicates things too 
much IMO. 

I would very much like to see an effort on point 2 from above so we could 
weigh the pros and cons of both approaches against each other. 

[~mreutegg], [~jukkaz], review and comments appreciated. 

 NodeStore API: divergence in contract and implementations
 -

 Key: OAK-1838
 URL: https://issues.apache.org/jira/browse/OAK-1838
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Michael Dürig

 Currently there is a gap between what the API mandates and what the document 
 and kernel node stores implement. This hinders further evolution of the Oak 
 stack, as implementers must always be aware of non-explicit design 
 requirements. We should look into ways to close this gap by bringing 
 implementation and specification closer to each other. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1859) Migration from TarMK to MongoMK

2014-05-28 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1859:
---

Fix Version/s: 1.1

 Migration from TarMK to MongoMK
 ---

 Key: OAK-1859
 URL: https://issues.apache.org/jira/browse/OAK-1859
 Project: Jackrabbit Oak
  Issue Type: Wish
  Components: upgrade
Affects Versions: 1.0
Reporter: Przemo Pakulski
 Fix For: 1.1


 Is migration from TarMK to MongoMK possible?
 Let's say I initially used TarMK, then realized that Mongo is a better option; 
 is there any way to migrate the existing content to MongoDB?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1844) Verify resilience goals

2014-05-28 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1844:
---

Labels: resilience  (was: )

 Verify resilience goals
 ---

 Key: OAK-1844
 URL: https://issues.apache.org/jira/browse/OAK-1844
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Michael Dürig
  Labels: resilience
 Fix For: 1.1


 This is a container issue for verifying the resilience goals set out for Oak. 
 See https://wiki.apache.org/jackrabbit/Resilience for the work in progress of 
 such goals. Once we have an agreement on that, subtasks of this issue could 
 be used to track the verification process of each of the individual goals. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1448) Tool to detect and possibly fix permission store inconsistencies

2014-05-28 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1448:
---

Attachment: OAK-1448-2.patch

Slightly edited patch, collapsing the various execute methods in 
{{RepositoryManager}} into one by introducing the concept of a named management 
operation. 
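
Roughly, what a named management operation could look like; this is a sketch of
the idea only, not the code in the attached patch.
{code}
import java.util.concurrent.Callable;

// Sketch of the concept; the real implementation is in OAK-1448-2.patch.
public class NamedManagementOperationSketch {

    /** A long running repository management task with a human readable name. */
    static class ManagementOperation {

        private final String name;
        private final Callable<Long> task;

        ManagementOperation(String name, Callable<Long> task) {
            this.name = name;
            this.task = task;
        }

        String getName() {
            return name;
        }

        Long run() throws Exception {
            return task.call();
        }
    }

    /** Single entry point replacing the various per-operation execute methods. */
    static ManagementOperation execute(String name, Callable<Long> task) {
        return new ManagementOperation(name, task);
    }
}
{code}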

 Tool to detect and possibly fix permission store inconsistencies
 

 Key: OAK-1448
 URL: https://issues.apache.org/jira/browse/OAK-1448
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Michael Marth
Assignee: angela
Priority: Minor
  Labels: production, resilience, tools
 Fix For: 1.1

 Attachments: OAK-1448-2.patch, OAK-1448_.patch


 I think we should prepare for cases where the permission store (managed as a 
 tree mirrored to the content tree) goes out of sync with the content tree for 
 whatever reason.
 Ideally, that would be an online tool (maybe exposed via JMX) that goes back 
 through the MVCC revisions to find the offending commit (so that we have a 
 chance to reduce the number of such occurrences) and fixes the inconsistency 
 on head.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1863) Generic operation tasks should be able to return specific results

2014-05-28 Thread Michael Dürig (JIRA)
Michael Dürig created OAK-1863:
--

 Summary: Generic operation tasks should be able to return specific 
results
 Key: OAK-1863
 URL: https://issues.apache.org/jira/browse/OAK-1863
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 1.1


The generic interface for operation management tasks added with OAK-1160 so 
far does not provide a way for specific tasks to return a value apart from a 
generic status. With the consistency checking we are starting to add 
(OAK-1448), such a need starts to arise. 

To address this I propose to change the type of the tasks that can be passed 
to the constructor of {{ManagementOperation}}. The result type of the task 
(i.e. {{Callable}}) should change from {{Long}} to some generic container, 
which would carry the result of the task. 
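
A minimal sketch of that change; the container name and fields are placeholders.
{code}
import java.util.concurrent.Callable;

// Sketch of the proposal; names are placeholders, not the final API.
public class ManagementOperationResultSketch {

    /** Generic result carrier instead of a bare Long status. */
    static class Result<T> {

        final long status;   // the status code that is returned today
        final T value;       // task specific payload, e.g. a consistency report

        Result(long status, T value) {
            this.status = status;
            this.value = value;
        }
    }

    /**
     * The task passed to the ManagementOperation constructor would change from
     * a Callable returning Long to one returning such a container.
     */
    static <T> Result<T> run(Callable<Result<T>> task) throws Exception {
        return task.call();
    }
}
{code}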



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1448) Tool to detect and possibly fix permission store inconsistencies

2014-05-28 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011242#comment-14011242
 ] 

Michael Dürig commented on OAK-1448:


In order to be useful the consistency checker would need to return some kind 
of report. This is currently not possible as the task ({{Callable}}) passed to 
{{ManagementOperation}} can only return a {{Long}}. See OAK-1863, which 
addresses this. 

 Tool to detect and possibly fix permission store inconsistencies
 

 Key: OAK-1448
 URL: https://issues.apache.org/jira/browse/OAK-1448
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Michael Marth
Assignee: angela
Priority: Minor
  Labels: production, resilience, tools
 Fix For: 1.1

 Attachments: OAK-1448-2.patch, OAK-1448_.patch


 I think we should prepare for cases where the permission store (managed as a 
 tree mirrored to the content tree) goes out of sync with the content tree for 
 whatever reason.
 Ideally, that would be an online tool (maybe exposed via JMX) that goes back 
 through the MVCC revisions to find the offending commit (so that we have a 
 chance to reduce the number of such occurrences) and fixes the inconsistency 
 on head.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1804) TarMK compaction

2014-05-28 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011980#comment-14011980
 ] 

Jukka Zitting commented on OAK-1804:


I think we need to move the compaction operation from the flush method to a 
separate method (or even its own class), to be invoked either when explicitly 
requested or on a schedule by the flush thread. That way the code won't 
interfere with the complex synchronization logic in flush(). I'll follow up 
with the required changes tomorrow unless you beat me to this or some other 
solution. 
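
Schematically, something along these lines; an illustration of the split only,
not the actual FileStore code.
{code}
// Illustrative only; not the actual FileStore code.
public class CompactionSchedulingSketch {

    interface Store {
        void flush();    // keeps its existing synchronization, no compaction inside
        void compact();  // compaction as a separate, explicitly invokable step
    }

    /** Background tick: flush every time, compact only when it is due. */
    static void backgroundTick(Store store, boolean compactionDue) {
        store.flush();
        if (compactionDue) {
            store.compact();   // runs outside the flush() locking
        }
    }
}
{code}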

 TarMK compaction
 

 Key: OAK-1804
 URL: https://issues.apache.org/jira/browse/OAK-1804
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: segmentmk
Reporter: Jukka Zitting
Assignee: Alex Parvulescu
  Labels: production, tools
 Fix For: 1.0.1, 1.1

 Attachments: SegmentNodeStore.java.patch, compaction.patch


 The TarMK would benefit from periodic compact operations that would 
 traverse and recreate (parts of) the content tree in order to optimize the 
 storage layout. More specifically, such compaction would:
 * Optimize performance by increasing locality and reducing duplication, both 
 of which improve the effectiveness of caching.
 * Allow the garbage collector to release more unused disk space by removing 
 references to segments where only a subset of content is reachable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1864) Deadlock when creating nodes in TarMK

2014-05-28 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-1864:
---

Attachment: scalability_tdump.log

Thread dump from when the issue occurs.

 Deadlock when creating nodes in TarMK
 -

 Key: OAK-1864
 URL: https://issues.apache.org/jira/browse/OAK-1864
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Amit Jain
 Attachments: scalability_tdump.log


 Deadlock occurs when the flush thread runs {{FileStore#flush}} and 
 concurrently nodes are being written in TarMK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1864) Deadlock when creating nodes in TarMK

2014-05-28 Thread Amit Jain (JIRA)
Amit Jain created OAK-1864:
--

 Summary: Deadlock when creating nodes in TarMK
 Key: OAK-1864
 URL: https://issues.apache.org/jira/browse/OAK-1864
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Amit Jain
 Attachments: scalability_tdump.log

Deadlock occurs when the flush thread runs {{FileStore#flush}} and concurrently 
nodes are being written in TarMK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1864) Deadlock when creating nodes in TarMK

2014-05-28 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-1864.


Resolution: Duplicate

Being tracked with OAK-1804

 Deadlock when creating nodes in TarMK
 -

 Key: OAK-1864
 URL: https://issues.apache.org/jira/browse/OAK-1864
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Amit Jain
 Attachments: scalability_tdump.log


 Deadlock occurs when the flush thread runs {{FileStore#flush}} and 
 concurrently nodes are being written in TarMK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)