[jira] [Comment Edited] (OAK-2619) Repeated upgrades

2015-07-16 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629403#comment-14629403
 ] 

Julian Sedding edited comment on OAK-2619 at 7/16/15 8:11 AM:
--

Thanks [~baedke]. Adding the length check should not do any harm, except maybe 
when upgrading to SegmentNodeStore. It feels odd, however, that the upgrade 
code needs to take care of implementation details of the DocumentNodeStore. Not 
sure I recall my reasoning correctly, but I guess I figured that the first 
version of the upgrade code was written relatively early on and the 
DocumentNodeStore did not properly guard against this condition at the time. 
Maybe a test could clarify (and verify) the behaviour (see OAK-3111).


was (Author: jsedding):
Thanks [~baedke]. Adding the length check should not do any harm, except maybe 
when upgrading to SegmentNodeStore. It feels odd, however, that the upgrade 
code needs to take care of implementation details of the DocumentNodeStore. Not 
sure I recall my reasoning correctly, but I guess I figured that the first 
version of the upgrade code was written relatively early on and the 
DocumentNodeStore did not properly guard against this condition at the time. 
Maybe a test could clarify (and verify) the behaviour.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Fix For: 1.3.3

 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.
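
A minimal sketch of that migration loop (the {{upgradeDelta()}} helper below is 
hypothetical; it stands in for the oak-upgrade entry point, which would still 
traverse the full source repository but only write the delta):

{code}
public class RepeatedUpgrade {

    public static void main(String[] args) throws Exception {
        // 1. initial upgrade from a backup, a week before go-live
        upgradeDelta("/backups/jr2-repo", "/target/oak-repo");
        // 2. nightly runs: only the delta is written and the
        //    commit hooks only need to process the delta
        for (int night = 1; night <= 6; night++) {
            upgradeDelta("/backups/jr2-repo", "/target/oak-repo");
        }
        // 3. one final run right before go-live
        upgradeDelta("/backups/jr2-repo", "/target/oak-repo");
    }

    // Hypothetical stand-in for the oak-upgrade API.
    private static void upgradeDelta(String source, String target) {
        System.out.println("upgrading " + source + " -> " + target);
    }
}
{code}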





[jira] [Created] (OAK-3111) Reconsider check for max node name length

2015-07-16 Thread Julian Sedding (JIRA)
Julian Sedding created OAK-3111:
---

 Summary: Reconsider check for max node name length
 Key: OAK-3111
 URL: https://issues.apache.org/jira/browse/OAK-3111
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: upgrade
Reporter: Julian Sedding
Priority: Minor


In OAK-2619 the necessity of a check for node name length was briefly 
discussed. It may be worthwhile to write a test case for upgrading long node 
names and find out what happens with and without the check.
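
A sketch of such a test (plain JCR API; the 200-character name is just an 
arbitrary value beyond the limit discussed here):

{code}
import javax.jcr.Node;
import javax.jcr.Session;

public class LongNodeNameTest {

    // Tries to create a node with a very long name; the point of the test
    // is to observe whether DocumentNodeStore rejects it and whether the
    // upgrade code needs its own length check at all.
    static void createNodeWithLongName(Session session) throws Exception {
        StringBuilder name = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            name.append('x');
        }
        Node node = session.getRootNode().addNode(name.toString());
        session.save();
        System.out.println("created " + node.getPath());
    }
}
{code}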





[jira] [Created] (OAK-3112) Performance degradation of UnsavedModifications on MapDB

2015-07-16 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3112:
-

 Summary: Performance degradation of UnsavedModifications on MapDB
 Key: OAK-3112
 URL: https://issues.apache.org/jira/browse/OAK-3112
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.17, 1.0.16, 1.0.15, 1.0.14, 1.0.13, 1.0.12, 1.0.11, 
1.0.10, 1.0.9, 1.0.8, 1.0.7
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.0.18


UnsavedModifications performance degrades when used in combination with the 
MapDB backed MapFactory. Calls become more and more expensive the longer the 
instance is in use. This is caused by a limitation of MapDB, which does not 
remove empty BTree nodes.

A test performed with random paths added to the map and later removed again in 
a loop shows an increase to roughly 1 second to read keys present in the map 
when the underlying data file is about 50MB in size.
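
A minimal sketch of that test scenario, written directly against the MapDB 1.x 
API (file and map names are illustrative; this is not the actual Oak test):

{code}
import java.io.File;
import java.util.UUID;
import java.util.concurrent.ConcurrentNavigableMap;

import org.mapdb.DB;
import org.mapdb.DBMaker;

public class MapDBDegradationDemo {

    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("unsaved.db")).make();
        ConcurrentNavigableMap<String, Long> paths = db.getTreeMap("paths");
        for (int i = 1; i <= 1000000; i++) {
            String path = "/node/" + UUID.randomUUID();
            paths.put(path, (long) i);
            paths.remove(path); // MapDB does not reclaim the empty BTree nodes
            if (i % 100000 == 0) {
                long start = System.nanoTime();
                paths.containsKey("/node/probe");
                System.out.println(i + " iterations: lookup took "
                        + (System.nanoTime() - start) / 1000000 + " ms");
            }
        }
        db.close();
    }
}
{code}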





[jira] [Commented] (OAK-3112) Performance degradation of UnsavedModifications on MapDB

2015-07-16 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629427#comment-14629427
 ] 

Marcel Reutegger commented on OAK-3112:
---

This only affects the 1.0 branch where MapDB was introduced with OAK-1768. For 
1.2 the dependency was removed again with OAK-2131.

 Performance degradation of UnsavedModifications on MapDB
 

 Key: OAK-3112
 URL: https://issues.apache.org/jira/browse/OAK-3112
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.7, 1.0.8, 1.0.9, 1.0.10, 1.0.11, 1.0.12, 1.0.13, 
 1.0.14, 1.0.15, 1.0.16, 1.0.17
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.0.18


 UnsavedModifications performance degrades when used in combination with the 
 MapDB backed MapFactory. Calls become more and more expensive the longer the 
 instance is in use. This is caused by a limitation of MapDB, which does not 
 remove empty BTree nodes.
 A test performed with random paths added to the map and later removed again 
 in a loop shows an increase to roughly 1 second to read keys present in the 
 map when the underlying data file is about 50MB in size.





[jira] [Commented] (OAK-2948) Expose DefaultSyncHandler

2015-07-16 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629506#comment-14629506
 ] 

Konrad Windszus commented on OAK-2948:
--

The fix version is wrong. This change is only part of 1.3.2.

 Expose DefaultSyncHandler
 -

 Key: OAK-2948
 URL: https://issues.apache.org/jira/browse/OAK-2948
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-external
Reporter: Konrad Windszus
 Fix For: 1.3.0


 We do have the use case of extending the user sync. Unfortunately 
 {{DefaultSyncHandler}} is not exposed, so if you want to change one single 
 aspect of the user synchronisation you have to copy over the code from the 
 {{DefaultSyncHandler}}. Would it be possible to make that class part of the 
 exposed classes, so that deriving your own class from that DefaultSyncHandler 
 is possible?
 Very often company LDAPs are not very standardized. In our case we face the 
 issue that the membership is listed in a user attribute, rather than in a 
 group attribute.
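
For illustration, once the class is exported, extending it could look roughly 
like this (a sketch only; which methods to override depends on the aspect of 
the synchronisation being changed):

{code}
import org.apache.jackrabbit.oak.spi.security.authentication.external.impl.DefaultSyncHandler;

// Hypothetical subclass that would read group membership from a user
// attribute instead of a group attribute.
public class UserAttributeSyncHandler extends DefaultSyncHandler {
    // override only the membership-sync step here, keeping the rest
    // of the default behaviour
}
{code}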





[jira] [Commented] (OAK-1842) ISE: Unexpected value record type: f2 is thrown when FileBlobStore is used

2015-07-16 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629353#comment-14629353
 ] 

Francesco Mari commented on OAK-1842:
-

Linking this issue to OAK-3107, because it should be possible to save blob IDs 
longer than 4096 bytes for the {{FileBlobStore}} to be properly used.

 ISE: Unexpected value record type: f2 is thrown when FileBlobStore is used
 

 Key: OAK-1842
 URL: https://issues.apache.org/jira/browse/OAK-1842
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: blob
Affects Versions: 1.0
Reporter: Konrad Windszus
Assignee: Francesco Mari
  Labels: resilience
 Fix For: 1.3.3


 The stacktrace of the call shows something like
 {code}
 20.05.2014 11:13:07.428 *ERROR* [OsgiInstallerImpl] 
 com.adobe.granite.installer.factory.packages.impl.PackageTransformer Error 
 while processing install task.
 java.lang.IllegalStateException: Unexpected value record type: f2
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.length(SegmentBlob.java:101)
 at 
 org.apache.jackrabbit.oak.plugins.value.BinaryImpl.getSize(BinaryImpl.java:74)
 at 
 org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:435)
 at 
 org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:376)
 at 
 org.apache.jackrabbit.vault.packaging.impl.JcrPackageImpl.getPackage(JcrPackageImpl.java:324)
 {code}
 The blob store was configured correctly and, according to the log, also 
 correctly initialized
 {code}
 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService 
 Initializing SegmentNodeStore with BlobStore 
 [org.apache.jackrabbit.oak.spi.blob.FileBlobStore@7e3dec43]
 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService Component 
 still not activated. Ignoring the initialization call
 20.05.2014 11:11:07.077 *INFO* [FelixStartLevel] 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK opened: 
 crx-quickstart/repository/segmentstore (mmap=true)
 {code}
 Under which circumstances can the length within the SegmentBlob be invalid?
 This only happens if a File Blob Store is configured 
 (http://jackrabbit.apache.org/oak/docs/osgi_config.html). If a file datastore 
 is used, there is no such exception.





[jira] [Commented] (OAK-2619) Repeated upgrades

2015-07-16 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629403#comment-14629403
 ] 

Julian Sedding commented on OAK-2619:
-

Thanks [~baedke]. Adding the length check should not do any harm, except maybe 
when upgrading to SegmentNodeStore. It feels odd, however, that the upgrade 
code needs to take care of implementation details of the DocumentNodeStore. Not 
sure I recall my reasoning correctly, but I guess I figured that the first 
version of the upgrade code was written relatively early on and the 
DocumentNodeStore did not properly guard against this condition at the time. 
Maybe a test could clarify (and verify) the behaviour.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Fix For: 1.3.3

 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.





[jira] [Commented] (OAK-3112) Performance degradation of UnsavedModifications on MapDB

2015-07-16 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629463#comment-14629463
 ] 

Marcel Reutegger commented on OAK-3112:
---

Added a test to the 1.0 branch, which shows the degradation over time: 
http://svn.apache.org/r1691340

 Performance degradation of UnsavedModifications on MapDB
 

 Key: OAK-3112
 URL: https://issues.apache.org/jira/browse/OAK-3112
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.7, 1.0.8, 1.0.9, 1.0.10, 1.0.11, 1.0.12, 1.0.13, 
 1.0.14, 1.0.15, 1.0.16, 1.0.17
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.0.18


 UnsavedModifications performance degrades when used in combination with the 
 MapDB backed MapFactory. Calls become more and more expensive the longer the 
 instance is in use. This is caused by a limitation of MapDB, which does not 
 remove empty BTree nodes.
 A test performed with random paths added to the map and later removed again 
 in a loop shows an increase to roughly 1 second to read keys present in the 
 map when the underlying data file is about 50MB in size.





[jira] [Updated] (OAK-3107) SegmentWriter should be able to store blob IDs longer than 4096 bytes

2015-07-16 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-3107:

Attachment: OAK-3107-01.patch

The attached patch implements the encoding proposed in the description of the 
issue. I don't know if this is the optimal solution to the problem, but it 
shouldn't affect segments written with the previous version of the code while 
still allowing us to cope with large blob IDs.

 SegmentWriter should be able to store blob IDs longer than 4096 bytes
 -

 Key: OAK-3107
 URL: https://issues.apache.org/jira/browse/OAK-3107
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Francesco Mari
 Attachments: OAK-3107-01.patch


 The {{SegmentWriter}} is able to store blob IDs that are no longer than 4096 
 bytes, but some implementations of {{BlobStore}} may return blob IDs 4096 
 bytes long (or more).
 It should be possible to use a different encoding for long blob IDs. The blob 
 ID should be written as a string (using {{SegmentWriter#writeString}}), and 
 its reference ID embedded into a value record.
 The encoding in this case should be something like the following:
 {noformat}
 0 + 3bit + 3byte
 {noformat}
 where the three least significant bits of the first byte are actually 
 unused, and the three bytes are used to store the record ID of the string 
 representing the blob ID.
 This new encoding is necessary to maintain backwards compatibility with the 
 current way of storing blob IDs and to give a way to {{SegmentBlob}} to 
 recognise this new encoding.
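
For illustration, the proposed layout could be written and read roughly as 
follows (a sketch; the exact marker bit pattern and all names are assumptions, 
not the actual SegmentWriter code):

{code}
public final class LongBlobIdEncoding {

    // Hypothetical marker byte: leading 0 bit, three flag bits set,
    // three least significant bits unused.
    private static final int MARKER = 0x70; // 0111 0000

    // 1 marker byte + 3 bytes holding the record ID of the string
    // record (written via SegmentWriter#writeString) with the blob ID.
    static byte[] encode(int stringRecordId) {
        return new byte[] {
                (byte) MARKER,
                (byte) (stringRecordId >> 16),
                (byte) (stringRecordId >> 8),
                (byte) stringRecordId
        };
    }

    static int decode(byte[] record) {
        if ((record[0] & 0xff) != MARKER) {
            throw new IllegalStateException("Unexpected value record type: "
                    + Integer.toHexString(record[0] & 0xff));
        }
        return ((record[1] & 0xff) << 16)
                | ((record[2] & 0xff) << 8)
                | (record[3] & 0xff);
    }
}
{code}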





[jira] [Resolved] (OAK-3110) AsyncIndexer fails due to FileNotFoundException thrown by CopyOnWrite logic

2015-07-16 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3110.
--
Resolution: Fixed

Fixed by ensuring that COW can inform the COR using the same directory about 
the files it is working upon:
* trunk - 1691331,1691332,1691333
* 1.0 - http://svn.apache.org/r1691334
* 1.2 - http://svn.apache.org/r1691338

 AsyncIndexer fails due to FileNotFoundException thrown by CopyOnWrite logic
 ---

 Key: OAK-3110
 URL: https://issues.apache.org/jira/browse/OAK-3110
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.2.3, 1.3.3, 1.0.18

 Attachments: copier.log


 At times the CopyOnWrite reports following exception
 {noformat}
 15.07.2015 14:20:35.930 *WARN* [pool-58-thread-1] 
 org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate The async index 
 update failed
 org.apache.jackrabbit.oak.api.CommitFailedException: OakLucene0004: Failed to 
 close the Lucene index
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:204)
   at 
 org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
   at 
 org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
   at 
 org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
   at 
 org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
   at 
 org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
   at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.FileNotFoundException: _2s7.fdt
   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:261)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnWriteDirectory$COWLocalFileReference.fileLength(IndexCopier.java:837)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnWriteDirectory.fileLength(IndexCopier.java:607)
   at 
 org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:141)
   at 
 org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:529)
   at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502)
   at 
 org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:508)
   at 
 org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:618)
   at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3147)
   at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3123)
   at 
 org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:192)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:202)
   ... 10 common frames omitted
 {noformat}





[jira] [Commented] (OAK-2942) IllegalStateException thrown in Segment.pos()

2015-07-16 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629642#comment-14629642
 ] 

Alex Parvulescu commented on OAK-2942:
--

[~frm] does this still happen with the latest trunk? I could not reproduce it 
locally; the test finished OK and the latest message was 'Executing task 1049599'.

 IllegalStateException thrown in Segment.pos()
 -

 Key: OAK-2942
 URL: https://issues.apache.org/jira/browse/OAK-2942
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 1.2.2
Reporter: Francesco Mari
  Labels: resilience
 Fix For: 1.3.3

 Attachments: ObservationBusyTest.java


 When I tried to put Oak under stress to reproduce OAK-2731, I experienced an 
 {{IllegalStateException}} thrown by {{Segment.pos()}}. The full stack trace 
 is the following:
 {noformat}
 java.lang.IllegalStateException
   at com.google.common.base.Preconditions.checkState(Preconditions.java:134)
   at org.apache.jackrabbit.oak.plugins.segment.Segment.pos(Segment.java:194)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:337)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:70)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:447)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:446)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:471)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:527)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:205)
   at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:341)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:487)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
   at 
 org.apache.jackrabbit.oak.jcr.observation.ObservationBusyTest$1.run(ObservationBusyTest.java:145)
   ... 6 more
 {noformat}
 In addition, the TarMK flushing thread throws an {{OutOfMemoryError}}:
 {noformat}
 Exception in thread TarMK flush thread 
 [/var/folders/zw/qns3kln16ld99frxtp263c8cgn/T/junit2925373080495354479], 
 active since Mon Jun 01 18:48:19 CEST 2015, previous max duration 302ms 
 java.lang.OutOfMemoryError: Java heap space
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.createNewBuffer(SegmentWriter.java:91)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:240)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:596)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:411)
   at java.lang.Thread.run(Thread.java:695)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.BackgroundThread.run(BackgroundThread.java:70)
 {noformat}
 The attached test case {{ObservationBusyTest.java}} allows me to reproduce 
 the issue consistently.





[jira] [Created] (OAK-3113) ColdStandby should provide sync start and end timestamps

2015-07-16 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-3113:


 Summary: ColdStandby should provide sync start and end timestamps
 Key: OAK-3113
 URL: https://issues.apache.org/jira/browse/OAK-3113
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: tarmk-standby
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
Priority: Minor


This is for adding more info to the JMX stats that will help with automation.
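
A sketch of the kind of MBean additions this implies (the method names are 
assumptions, not the actual standby status MBean API):

{code}
public interface SyncTimestampsMBean {

    // epoch milliseconds of the moment the last sync cycle started
    long getSyncStartTimestamp();

    // epoch milliseconds of the moment the last sync cycle completed
    long getSyncEndTimestamp();
}
{code}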





[jira] [Commented] (OAK-2942) IllegalStateException thrown in Segment.pos()

2015-07-16 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629700#comment-14629700
 ] 

Francesco Mari commented on OAK-2942:
-

[~alex.parvulescu], I didn't check recently. I will run the test again and let 
you know as soon as possible.

 IllegalStateException thrown in Segment.pos()
 -

 Key: OAK-2942
 URL: https://issues.apache.org/jira/browse/OAK-2942
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 1.2.2
Reporter: Francesco Mari
  Labels: resilience
 Fix For: 1.3.3

 Attachments: ObservationBusyTest.java


 When I tried to put Oak under stress to reproduce OAK-2731, I experienced an 
 {{IllegalStateException}} thrown by {{Segment.pos()}}. The full stack trace 
 is the following:
 {noformat}
 java.lang.IllegalStateException
   at com.google.common.base.Preconditions.checkState(Preconditions.java:134)
   at org.apache.jackrabbit.oak.plugins.segment.Segment.pos(Segment.java:194)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:337)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:70)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:447)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:446)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:471)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:527)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:205)
   at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:341)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:487)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
   at 
 org.apache.jackrabbit.oak.jcr.observation.ObservationBusyTest$1.run(ObservationBusyTest.java:145)
   ... 6 more
 {noformat}
 In addition, the TarMK flushing thread throws an {{OutOfMemoryError}}:
 {noformat}
 Exception in thread TarMK flush thread 
 [/var/folders/zw/qns3kln16ld99frxtp263c8cgn/T/junit2925373080495354479], 
 active since Mon Jun 01 18:48:19 CEST 2015, previous max duration 302ms 
 java.lang.OutOfMemoryError: Java heap space
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.createNewBuffer(SegmentWriter.java:91)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:240)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:596)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:411)
   at java.lang.Thread.run(Thread.java:695)
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.BackgroundThread.run(BackgroundThread.java:70)
 {noformat}
 The attached test case {{ObservationBusyTest.java}} allows me to reproduce 
 the issue consistently.





[jira] [Commented] (OAK-3101) wrong use of jcr:score in Solr when sorting

2015-07-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629694#comment-14629694
 ] 

Tommaso Teofili commented on OAK-3101:
--

thanks [~edivad], used your test case to reproduce the issue and added a fix by 
mapping sorting on _jcr:score_ to _score_ (Solr's field for scoring).
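
For illustration, with plain SolrJ the mapped sort looks like this (the query 
itself is just an example):

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class ScoreSortExample {

    public static void main(String[] args) {
        SolrQuery query = new SolrQuery("*:*");
        // "score" is Solr's built-in, always sortable pseudo-field, whereas
        // sorting on a multivalued field like jcr\:score_string_sort fails
        query.addSort("score", SolrQuery.ORDER.desc);
        System.out.println(query); // roughly: q=*:*&sort=score desc
    }
}
{code}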

 wrong use of jcr:score in Solr when sorting
 ---

 Key: OAK-3101
 URL: https://issues.apache.org/jira/browse/OAK-3101
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: solr
Affects Versions: 1.2.2, 1.3.2, 1.0.17
Reporter: Davide Giannella
Assignee: Tommaso Teofili
 Fix For: 1.2.3, 1.3.3, 1.0.18


 When running a query with {{order by @jcr:score}} it gets wrongly
 translated into Solr and can sometimes cause exceptions on the
 Solr side
 {noformat}
 ERROR - 2015-06-22 14:27:34.729; org.apache.solr.common.SolrException; 
 org.apache.solr.common.SolrException: can not sort on multivalued field: 
 jcr\:score_string_sort
   at 
 org.apache.solr.schema.SchemaField.checkSortability(SchemaField.java:169)
   at org.apache.solr.schema.FieldType.getStringSort(FieldType.java:683)
   at org.apache.solr.schema.TextField.getSortField(TextField.java:97)
   at org.apache.solr.schema.SchemaField.getSortField(SchemaField.java:152)
   at 
 org.apache.solr.search.QueryParsing.parseSortSpec(QueryParsing.java:369)
   at org.apache.solr.search.QParser.getSort(QParser.java:247)
   at 
 org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:175)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:197)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
   at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
   at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Unknown Source)
 {noformat}





[jira] [Created] (OAK-3114) RDBDocumentStore: add BDATA DDL information to startup diagnostics

2015-07-16 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3114:
---

 Summary: RDBDocumentStore: add BDATA DDL information to startup 
diagnostics
 Key: OAK-3114
 URL: https://issues.apache.org/jira/browse/OAK-3114
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.0.17, 1.3.2, 1.2.2
Reporter: Julian Reschke
Assignee: Julian Reschke








[jira] [Commented] (OAK-2619) Repeated upgrades

2015-07-16 Thread Manfred Baedke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629734#comment-14629734
 ] 

Manfred Baedke commented on OAK-2619:
-

Hi [~jsedding],

bq. It feels odd, however, that the upgrade code needs to take care of 
implementation details of the DocumentNodeStore.

Absolutely. I'm even more reluctant to add some magic based on instanceof 
checks, so I think we should add an extra option for the maximum name length. 
WDYT?

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Fix For: 1.3.3

 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.





[jira] [Resolved] (OAK-3113) ColdStandby should provide sync start and end timestamps

2015-07-16 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3113.
--
   Resolution: Fixed
Fix Version/s: 1.3.3

fixed with http://svn.apache.org/r1691394

 ColdStandby should provide sync start and end timestamps
 

 Key: OAK-3113
 URL: https://issues.apache.org/jira/browse/OAK-3113
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: tarmk-standby
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
Priority: Minor
 Fix For: 1.3.3


 This is for adding more info to the JMX stats that will help with automation.





[jira] [Resolved] (OAK-3101) wrong use of jcr:score in Solr when sorting

2015-07-16 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-3101.
--
   Resolution: Fixed
Fix Version/s: 1.0.18
   1.3.3
   1.2.3

 wrong use of jcr:score in Solr when sorting
 ---

 Key: OAK-3101
 URL: https://issues.apache.org/jira/browse/OAK-3101
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: solr
Affects Versions: 1.2.2, 1.3.2, 1.0.17
Reporter: Davide Giannella
Assignee: Tommaso Teofili
 Fix For: 1.2.3, 1.3.3, 1.0.18


 When running a query with {{order by @jcr:score}} it gets wrongly
 translated into Solr and can sometimes cause exceptions on the
 Solr side
 {noformat}
 ERROR - 2015-06-22 14:27:34.729; org.apache.solr.common.SolrException; 
 org.apache.solr.common.SolrException: can not sort on multivalued field: 
 jcr\:score_string_sort
   at 
 org.apache.solr.schema.SchemaField.checkSortability(SchemaField.java:169)
   at org.apache.solr.schema.FieldType.getStringSort(FieldType.java:683)
   at org.apache.solr.schema.TextField.getSortField(TextField.java:97)
   at org.apache.solr.schema.SchemaField.getSortField(SchemaField.java:152)
   at 
 org.apache.solr.search.QueryParsing.parseSortSpec(QueryParsing.java:369)
   at org.apache.solr.search.QParser.getSort(QParser.java:247)
   at 
 org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:175)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:197)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
   at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
   at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Unknown Source)
 {noformat}





[jira] [Updated] (OAK-3114) RDBDocumentStore: add BDATA DDL information to startup diagnostics

2015-07-16 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3114:

Fix Version/s: 1.3.3

 RDBDocumentStore: add BDATA DDL information to startup diagnostics
 --

 Key: OAK-3114
 URL: https://issues.apache.org/jira/browse/OAK-3114
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.3.2, 1.0.17
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.3








[jira] [Created] (OAK-3115) Support memberOf attribute within the user entity to lookup memberships in the LdapIdentityProvider

2015-07-16 Thread Konrad Windszus (JIRA)
Konrad Windszus created OAK-3115:


 Summary: Support memberOf attribute within the user entity to 
lookup memberships in the LdapIdentityProvider
 Key: OAK-3115
 URL: https://issues.apache.org/jira/browse/OAK-3115
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-ldap
Affects Versions: 1.3.2
Reporter: Konrad Windszus


Some LDAP servers (e.g. OpenLDAP via 
http://www.openldap.org/doc/admin24/overlays.html) support a reverse lookup of 
group memberships, i.e. without an additional search the group membership can 
be determined just by looking at a specific attribute like memberOf.
It would be good if the {{LdapIdentityProvider}} supported that directly 
(instead of executing an expensive search).
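
For illustration, a plain JNDI read of that attribute (hypothetical server and 
DN; this is not Oak code) shows how one read of the user entry replaces a 
whole group search:

{code}
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class MemberOfLookup {

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389"); // hypothetical
        DirContext ctx = new InitialDirContext(env);
        // requires the server to expose memberOf (e.g. the OpenLDAP overlay)
        Attributes attrs = ctx.getAttributes(
                "uid=jdoe,ou=people,dc=example,dc=com", new String[] {"memberOf"});
        NamingEnumeration<?> groups = attrs.get("memberOf").getAll();
        while (groups.hasMore()) {
            System.out.println(groups.next()); // each value is a group DN
        }
        ctx.close();
    }
}
{code}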





[jira] [Updated] (OAK-3114) RDBDocumentStore: add BDATA DDL information to startup diagnostics

2015-07-16 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3114:

Fix Version/s: 1.0.18

 RDBDocumentStore: add BDATA DDL information to startup diagnostics
 --

 Key: OAK-3114
 URL: https://issues.apache.org/jira/browse/OAK-3114
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.3.2, 1.0.17
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.2.3, 1.3.3, 1.0.18








[jira] [Resolved] (OAK-3114) RDBDocumentStore: add BDATA DDL information to startup diagnostics

2015-07-16 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-3114.
-
Resolution: Fixed

 RDBDocumentStore: add BDATA DDL information to startup diagnostics
 --

 Key: OAK-3114
 URL: https://issues.apache.org/jira/browse/OAK-3114
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.3.2, 1.0.17
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.2.3, 1.3.3, 1.0.18








[jira] [Created] (OAK-3116) SolrQueryIndexProviderService should not blindly register a QueryIndexProvider

2015-07-16 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-3116:


 Summary: SolrQueryIndexProviderService should not blindly register 
a QueryIndexProvider
 Key: OAK-3116
 URL: https://issues.apache.org/jira/browse/OAK-3116
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: solr
Affects Versions: 1.0.17, 1.3.2, 1.2.2
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.2.3, 1.3.3, 1.0.18


{{SolrQueryIndexProviderService}} always registers a {{SolrQueryIndexProvider}} 
even if no server has been configured in the {{SolrServerProvider}} service.
Such a {{QueryIndexProvider}} should only be registered if an embedded or 
remote server gets configured in {{SolrServerProviderService}} (the default is 
'none', so no QIP would be registered in that case).
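
A sketch of the conditional registration idea in plain OSGi terms (names and 
the activation flow are illustrative, not the actual service code):

{code}
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

class ConditionalProviderRegistration {

    private volatile ServiceRegistration<?> registration;

    // only register the provider when the configured server type is
    // something other than 'none'
    void activate(BundleContext context, String serverType, Object provider) {
        if (!"none".equals(serverType)) {
            registration = context.registerService(
                    "org.apache.jackrabbit.oak.spi.query.QueryIndexProvider",
                    provider, null);
        }
    }

    void deactivate() {
        if (registration != null) {
            registration.unregister();
            registration = null;
        }
    }
}
{code}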





[jira] [Updated] (OAK-3114) RDBDocumentStore: add BDATA DDL information to startup diagnostics

2015-07-16 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3114:

Fix Version/s: 1.2.3

 RDBDocumentStore: add BDATA DDL information to startup diagnostics
 --

 Key: OAK-3114
 URL: https://issues.apache.org/jira/browse/OAK-3114
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.3.2, 1.0.17
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.2.3, 1.3.3








[jira] [Updated] (OAK-2727) NodeStateSolrServersObserver should be filtering path selectively

2015-07-16 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated OAK-2727:
-
Description: 
As discussed in OAK-2718 it'd be good to be able to selectively find Solr 
indexes by path, as done in Lucene index, see also OAK-2570.
This would avoid having to do full diffs.

  was:
As discussed in OAK-2718 it'd be good to be able to be able to selectively find 
Solr indexes by path, as done in Lucene index, see also OAK-2570.
This would avoid having to do full diffs.


 NodeStateSolrServersObserver should be filtering path selectively
 -

 Key: OAK-2727
 URL: https://issues.apache.org/jira/browse/OAK-2727
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: solr
Affects Versions: 1.1.8
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
  Labels: performance
 Fix For: 1.4


 As discussed in OAK-2718 it'd be good to be able to selectively find Solr 
 indexes by path, as done in Lucene index, see also OAK-2570.
 This would avoid having to do full diffs.





[jira] [Resolved] (OAK-3117) Support disabling group lookup in LDAP

2015-07-16 Thread Konrad Windszus (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konrad Windszus resolved OAK-3117.
--
Resolution: Invalid

 Support disabling group lookup in LDAP
 --

 Key: OAK-3117
 URL: https://issues.apache.org/jira/browse/OAK-3117
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-ldap
Affects Versions: 1.3.2
Reporter: Konrad Windszus

 Currently the LdapIdentityProvider together with the DefaultSyncHandler will 
 always perform a search to query the group memberships of a user. It would be 
 good if one could disable that (e.g. by leaving the {{group.baseDN}} empty or 
 by having a dedicated property for that).
 Reasoning:
 For some company LDAPs a search on group memberships is just very expensive. 
 Therefore the group membership is very often maintained somewhere else for 
 3rd-party systems.
 Such an option was also available in CRX2 by using {{autocreate=createUser}} 
 (http://docs.adobe.com/docs/en/crx/2-3/administering/ldap_authentication.html#Auto%20Creation)





[jira] [Updated] (OAK-2829) Comparing node states for external changes is too slow

2015-07-16 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2829:
---
Issue Type: New Feature  (was: Bug)

 Comparing node states for external changes is too slow
 --

 Key: OAK-2829
 URL: https://issues.apache.org/jira/browse/OAK-2829
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
  Labels: candidate_oak_1_0, scalability
 Fix For: 1.2.3, 1.3.2

 Attachments: CompareAgainstBaseStateTest.java, 
 OAK-2829-JournalEntry.patch, OAK-2829-gc-bug.patch, 
 OAK-2829-improved-doc-cache-invaliation.2.patch, 
 OAK-2829-improved-doc-cache-invaliation.patch, graph-1.png, graph.png


 Comparing node states for local changes has been improved already with 
 OAK-2669. But in a clustered setup generating events for external changes 
 cannot make use of the introduced cache and is therefore slower. This can 
 result in a growing observation queue, eventually reaching the configured 
 limit. See also OAK-2683.





[jira] [Updated] (OAK-2829) Comparing node states for external changes is too slow

2015-07-16 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2829:
---
Issue Type: Improvement  (was: New Feature)

 Comparing node states for external changes is too slow
 --

 Key: OAK-2829
 URL: https://issues.apache.org/jira/browse/OAK-2829
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
  Labels: candidate_oak_1_0, scalability
 Fix For: 1.2.3, 1.3.2

 Attachments: CompareAgainstBaseStateTest.java, 
 OAK-2829-JournalEntry.patch, OAK-2829-gc-bug.patch, 
 OAK-2829-improved-doc-cache-invaliation.2.patch, 
 OAK-2829-improved-doc-cache-invaliation.patch, graph-1.png, graph.png


 Comparing node states for local changes has been improved already with 
 OAK-2669. But in a clustered setup generating events for external changes 
 cannot make use of the introduced cache and is therefore slower. This can 
 result in a growing observation queue, eventually reaching the configured 
 limit. See also OAK-2683.





[jira] [Updated] (OAK-2619) Repeated upgrades

2015-07-16 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2619:
---
Labels: doc-impacting  (was: )

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
  Labels: doc-impacting
 Fix For: 1.3.3

 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.





[jira] [Commented] (OAK-2619) Repeated upgrades

2015-07-16 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629818#comment-14629818
 ] 

Michael Marth commented on OAK-2619:


[~baedke], would this need some documentation?
(adding the doc label anyway)

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
  Labels: doc-impacting
 Fix For: 1.3.3

 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.





[jira] [Updated] (OAK-3112) Performance degradation of UnsavedModifications on MapDB

2015-07-16 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-3112:
---
Labels: performance  (was: )

 Performance degradation of UnsavedModifications on MapDB
 

 Key: OAK-3112
 URL: https://issues.apache.org/jira/browse/OAK-3112
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.7, 1.0.8, 1.0.9, 1.0.10, 1.0.11, 1.0.12, 1.0.13, 
 1.0.14, 1.0.15, 1.0.16, 1.0.17
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
  Labels: performance
 Fix For: 1.0.18


 UnsavedModifications performance degrades when used in combination with the 
 MapDB backed MapFactory. Calls become more and more expensive the longer the 
 instance is in use. This is caused by a limitation of MapDB, which does not 
 remove empty BTree nodes.
 A test performed with random paths added to the map and later removed again 
 in a loop shows an increase to roughly 1 second to read keys present in the 
 map when the underlying data file is about 50MB in size.





[jira] [Updated] (OAK-3110) AsyncIndexer fails due to FileNotFoundException thrown by CopyOnWrite logic

2015-07-16 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-3110:
---
Labels: resilience  (was: )

 AsyncIndexer fails due to FileNotFoundException thrown by CopyOnWrite logic
 ---

 Key: OAK-3110
 URL: https://issues.apache.org/jira/browse/OAK-3110
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: resilience
 Fix For: 1.2.3, 1.3.3, 1.0.18

 Attachments: copier.log


 At times the CopyOnWrite reports following exception
 {noformat}
 15.07.2015 14:20:35.930 *WARN* [pool-58-thread-1] 
 org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate The async index 
 update failed
 org.apache.jackrabbit.oak.api.CommitFailedException: OakLucene0004: Failed to 
 close the Lucene index
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:204)
   at 
 org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:219)
   at 
 org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
   at 
 org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.updateIndex(AsyncIndexUpdate.java:366)
   at 
 org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:311)
   at 
 org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105)
   at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.FileNotFoundException: _2s7.fdt
   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:261)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnWriteDirectory$COWLocalFileReference.fileLength(IndexCopier.java:837)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnWriteDirectory.fileLength(IndexCopier.java:607)
   at 
 org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:141)
   at 
 org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:529)
   at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502)
   at 
 org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:508)
   at 
 org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:618)
   at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3147)
   at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3123)
   at 
 org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:192)
   at 
 org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:202)
   ... 10 common frames omitted
 {noformat}





[jira] [Created] (OAK-3117) Support disabling group lookup in LDAP

2015-07-16 Thread Konrad Windszus (JIRA)
Konrad Windszus created OAK-3117:


 Summary: Support disabling group lookup in LDAP
 Key: OAK-3117
 URL: https://issues.apache.org/jira/browse/OAK-3117
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-ldap
Affects Versions: 1.3.2
Reporter: Konrad Windszus


Currently the LdapIdentityProvider together with the DefaultSyncHandler will 
always perform a search to query the group memberships of a user. It would be 
good if one could disable that (e.g. by leaving the {{group.baseDN}} empty or 
by having a dedicated property for that).
Reasoning:
For some company LDAPs a search on group memberships is just very expensive. 
Therefore the group membership is very often maintained somewhere else for 
3rd-party systems.
Such an option was also available in CRX2 by using {{autocreate=createUser}} 
(http://docs.adobe.com/docs/en/crx/2-3/administering/ldap_authentication.html#Auto%20Creation)





[jira] [Commented] (OAK-3117) Support disabling group lookup in LDAP

2015-07-16 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629834#comment-14629834
 ] 

Konrad Windszus commented on OAK-3117:
--

Actually this is already possible through the property 
{{user.membershipNestingDepth=0}} on the DefaultSyncHandler.
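
For illustration, the programmatic equivalent of that OSGi property would be 
roughly the following (package location and setter name are assumptions based 
on {{DefaultSyncConfig}}):

{code}
import org.apache.jackrabbit.oak.spi.security.authentication.external.basic.DefaultSyncConfig;

public class NoGroupLookupConfig {

    static DefaultSyncConfig create() {
        DefaultSyncConfig config = new DefaultSyncConfig();
        // nesting depth 0: no group memberships are resolved during sync
        config.user().setMembershipNestingDepth(0);
        return config;
    }
}
{code}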

 Support disabling group lookup in LDAP
 --

 Key: OAK-3117
 URL: https://issues.apache.org/jira/browse/OAK-3117
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-ldap
Affects Versions: 1.3.2
Reporter: Konrad Windszus

 Currently the LdapIdentityProvider together with the DefaultSyncHandler will 
 always perform a search to query the group memberships of a user. It would be 
 good if one could disable that (e.g. by leaving the {{group.baseDN}} empty or 
 by having a dedicated property for that).
 Reasoning:
 For some company LDAPs a search on group memberships is just very expensive. 
 Therefore the group membership is very often maintained somewhere else for 
 3rd-party systems.
 Such an option was also available in CRX2 by using {{autocreate=createUser}} 
 (http://docs.adobe.com/docs/en/crx/2-3/administering/ldap_authentication.html#Auto%20Creation)





[jira] [Commented] (OAK-2445) Password History Support

2015-07-16 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629986#comment-14629986
 ] 

angela commented on OAK-2445:
-

thanks a lot... great patch and really a pleasure to look at!!! nevertheless, i 
think i found a little inconsistency/issue: the {{PasswordChangeAction}} is not 
connected with the configuration option to enable the history. so, IMO the test 
{{PasswordChangeAction#testPasswordHistory}} would fail if copied to 
{{PasswordHistoryTest}}.

 Password History Support
 

 Key: OAK-2445
 URL: https://issues.apache.org/jira/browse/OAK-2445
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Affects Versions: 1.1.5
Reporter: Dominique Jäggi
Assignee: Dominique Jäggi
 Fix For: 1.3.3

 Attachments: OAK-2445_-_Password_History_Support-2.patch, 
 OAK-2445_-_Password_History_Support-3.patch, 
 OAK-2445_-_Password_History_Support.patch


 This is a patch that enhances Oak security to support user password 
 history.
 details of the feature as per doc:
 {noformat}
 Password History
 
 ### General
 Oak provides functionality to remember a configurable number of
 passwords after password changes and to prevent a password from being
 set during a password change if it is found in said history.
 ### Configuration
 An administrator may enable password history via the
 _org.apache.jackrabbit.oak.security.user.UserConfigurationImpl_
 OSGi configuration. By default the history is disabled.
 The following configuration option is supported:
 - Maximum Password History Size (_passwordHistorySize_, number of passwords): 
 When greater than 0, enables password history and sets the feature to 
 remember the specified number of passwords for a user.
 ### How it works
 #### Recording of Passwords
 If the feature is enabled, then when a user changes her password the old 
 password hash is recorded in the password history.
 The old password hash is only recorded if a password was set (non-empty).
 Therefore setting a password for a user for the first time does not result
 in a history record, as there is no old password.
 The old password hash is copied to the password history *after* the provided 
 new password has been validated (e.g. via the _PasswordChangeAction_) but 
 *before* the new password hash is written to the user's _rep:password_ 
 property.
 The history operates as a FIFO list. A new password history record exceeding 
 the configured max history size results in the oldest recorded password being 
 removed from the history.
 Also, if the configuration parameter for the history size is changed to a 
 non-zero but smaller value than before, upon the next password change the 
 oldest records exceeding the new history size are removed.
 History password hashes are recorded as child nodes of a user's 
 _rep:pwd/rep:pwdHistory_
 node. The _rep:pwdHistory_ node is governed by the _rep:PasswordHistory_ node 
 type,
 its children by the _rep:PasswordHistoryEntry_ node type, allowing setting of 
 the
 _rep:password_ property and its record date via _jcr:created_.
 # Node Type rep:PasswordHistory and rep:PasswordHistoryEntry
 [rep:PasswordHistory]
 + * (rep:PasswordHistoryEntry) = rep:PasswordHistoryEntry protected
 [rep:PasswordHistoryEntry] > nt:hierarchyNode
 - rep:password (STRING) protected
 # Nodes rep:pwd and rep:pwdHistory
 [rep:Password]
 + rep:pwdHistory (rep:PasswordHistory) = rep:PasswordHistory protected
 ...
 [rep:User] > rep:Authorizable, rep:Impersonatable
 + rep:pwd (rep:Password) = rep:Password protected
 ...
 
 The _rep:pwdHistory_ and history entry nodes and the _rep:password_ property
 are defined protected in order to guard against the user modifying 
 (overcoming) her
 password history limitations. The new sub-nodes also have the advantage of 
 allowing
 repository consumers to e.g. register specific commit hooks / actions on such 
 a node.
 #### Evaluation of Password History
 Upon a user changing her password, and if the password history feature is 
 enabled (configured password history size > 0), the _PasswordChangeAction_ 
 checks whether any of the password hashes recorded in the history matches the 
 new password.
 If any record is a match, a _ConstraintViolationException_ is thrown and the
 user's password is *NOT* changed.
 #### Oak JCR XML Import
 When users are imported via the Oak JCR XML importer, password history is
 currently not supported (ignored).
 {noformat}
 [~anchela], if you could kindly review the patch.
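
A minimal sketch of the FIFO check described above (hypothetical names, plain 
strings instead of password hashes, and no JCR nodes; the patch itself stores 
hashes under _rep:pwd/rep:pwdHistory_):

{code}
import java.util.ArrayDeque;
import java.util.Deque;

class PasswordHistory {

    private final int maxSize; // passwordHistorySize; > 0 enables the feature
    private final Deque<String> history = new ArrayDeque<String>();

    PasswordHistory(int maxSize) {
        this.maxSize = maxSize;
    }

    // rejects a new password found in the history, then records the old
    // one and evicts the oldest entries beyond the configured size
    void checkAndRecord(String oldPassword, String newPassword) {
        if (history.contains(newPassword)) {
            throw new IllegalStateException(
                    "New password found in password history");
        }
        if (oldPassword != null && !oldPassword.isEmpty()) {
            history.addLast(oldPassword);
            while (history.size() > maxSize) {
                history.removeFirst(); // FIFO eviction
            }
        }
    }
}
{code}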





[jira] [Updated] (OAK-2948) Expose DefaultSyncHandler

2015-07-16 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra updated OAK-2948:
--
Fix Version/s: (was: 1.3.0)
   1.3.2

 Expose DefaultSyncHandler
 -

 Key: OAK-2948
 URL: https://issues.apache.org/jira/browse/OAK-2948
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-external
Reporter: Konrad Windszus
 Fix For: 1.3.2


 We do have the use case of extending the user sync. Unfortunately 
 {{DefaultSyncHandler}} is not exposed, so if you want to change one single 
 aspect of the user synchronisation you have to copy over the code from the 
 {{DefaultSyncHandler}}. Would it be possible to make that class part of the 
 exposed classes, so that deriving your own class from that DefaultSyncHandler 
 is possible?
 Very often company LDAPs are not very standardized. In our case we face the 
 issue that the membership is listed in a user attribute, rather than in a 
 group attribute.


