[jira] [Commented] (OAK-3813) Exception in datastore leads to async index stop indexing new content

2016-01-22 Thread Tom Blackford (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112788#comment-15112788
 ] 

Tom Blackford commented on OAK-3813:


That is possibly the case, but I can confirm that, as Alex states, the fragility 
in the indexing still exists in later Oak versions (in this case 1.2.9).

> Exception in datastore leads to async index stop indexing new content
> -
>
> Key: OAK-3813
> URL: https://issues.apache.org/jira/browse/OAK-3813
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.2
>Reporter: Alexander Klimetschek
>Priority: Critical
>
> We are using an S3 based datastore and that (for some other reasons) 
> sometimes starts to miss certain blobs and throws an exception, see below. 
> Unfortunately, it seems that this blocks the indexing of any new content - as 
> the index will try again and again to index that missing binary and fail at 
> the same point.
> It would be great if the indexing process could be more resilient against 
> errors like this. (I think the datastore implementation should probably not 
> propagate that exception to the outside but just log it, but that's a 
> separate issue).
> This is seen with oak 1.2.2. I had a look at the [latest version on 
> trunk|https://github.com/apache/jackrabbit-oak/blob/d5da738aa6b43424f84063322987b765aead7813/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L427-L431]
>  but it seems the behavior has not changed since then.
> {noformat}
> 17.12.2015 20:50:26.418 -0500 *ERROR* [pool-7-thread-5] 
> org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
> execution of 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@5cc5e2f6 : Error 
> occurred while obtaining InputStream for blobId 
> [2832539c16b1a2e5745370ee89e41ab562436c5f#109419]
> java.lang.RuntimeException: Error occurred while obtaining InputStream for 
> blobId [2832539c16b1a2e5745370ee89e41ab562436c5f#109419]
>   at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:49)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:84)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.loadBlob(OakDirectory.java:216)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.readBytes(OakDirectory.java:264)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readBytes(OakDirectory.java:350)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readByte(OakDirectory.java:356)
>   at org.apache.lucene.store.DataInput.readInt(DataInput.java:84)
>   at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:126)
>   at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader.<init>(Lucene41PostingsReader.java:75)
>   at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:430)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
>   at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:116)
>   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:96)
>   at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
>   at 
> org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:279)
>   at 
> org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3191)
>   at 
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3182)
>   at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3155)
>   at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3123)
>   at 
> org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988)
>   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932)
>   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:169)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:190)
>   at 
> org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:221)
>   at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
>   at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56

[jira] [Comment Edited] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112635#comment-15112635
 ] 

Timothee Maret edited comment on OAK-3761 at 1/22/16 4:33 PM:
--

[~alex.parvulescu] IMO the issue at stake is to figure out whether Oak wants to 
provide a general crypto API (and use it in its own implementation) or if it is 
fine otherwise.

How the default implementation of such a general crypto API is done is not so 
important as long as it is easy to swap it for an alternative one.
Customers may anyway either choose or be required to run a certified (e.g. FIPS) 
crypto module, or an otherwise proprietary module that can't be linked here.

Assuming the Apache Oak team is not fine with the default implementation, we 
could as well ditch it (provide no implementation at all) or we could base the 
default implementation on a 3rd party open source crypto library which is 
license compatible with Apache.


was (Author: marett):
[~alex.parvulescu] IMO the issue at stake is to figure out whether Oak wants to 
provide a general crypto API (and use it in its own implementation) or if it is 
fine otherwise.

How the default implementation of such a general crypto API is done is not so 
important as long as it is easy to swap the implementation for another one.
Customers may anyway either choose or be required to run a certified (e.g. FIPS) 
crypto module, or an otherwise proprietary module that can't be linked here.

Assuming the Apache Oak team is not fine with the default implementation, we 
could as well ditch it (provide no implementation at all) or we could base the 
default implementation on a 3rd party open source crypto library which is 
license compatible with Apache.

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-3761.patch, OAK-3761.patch
>
>
> As discussed in [0], this issue tracks adding a simple API and implementation 
> for encryption/decryption in Oak. 
> [0] 
> http://oak.markmail.org/search/?q=crypto#query:crypto+page:1+mid:iwsfd66lku2dzs2n+state:results



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112635#comment-15112635
 ] 

Timothee Maret commented on OAK-3761:
-

[~alex.parvulescu] IMO the issue at stake is to figure out whether Oak wants to 
provide a general crypto API (and use it in its own implementation) or if it is 
fine otherwise.

How the default implementation of such a general crypto API is done is not so 
important as long as it is easy to swap the implementation for another one.
Customers may anyway either choose or be required to run a certified (e.g. FIPS) 
crypto module, or an otherwise proprietary module that can't be linked here.

Assuming the Apache Oak team is not fine with the default implementation, we 
could as well ditch it (provide no implementation at all) or we could base the 
default implementation on a 3rd party open source crypto library which is 
license compatible with Apache.

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-3761.patch, OAK-3761.patch
>
>
> As discussed in [0], this issue tracks adding a simple API and implementation 
> for encryption/decryption in Oak. 
> [0] 
> http://oak.markmail.org/search/?q=crypto#query:crypto+page:1+mid:iwsfd66lku2dzs2n+state:results



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3871) ability to override ClusterNodeInfo#DEFAULT_LEASE_DURATION_MILLIS

2016-01-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3871:

Attachment: OAK-3871.diff

Minimal patch, not restricting the value (should it?).

Also, there are other constants which are tuned to work with the 120s default. 
Should we derive them? Allow setting them?
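
For illustration, a minimal sketch of how such an override could be read and 
clamped to a sane range; the property name and the bounds below are assumptions, 
not necessarily what OAK-3871.diff does:

{code}
// Hypothetical sketch only: read a lease-duration override from a system
// property and clamp it to a sane range. The property name and the bounds
// are assumptions for illustration, not the actual patch.
public final class LeaseDurationOverride {

    static final long DEFAULT_MILLIS = 120 * 1000;   // current 120s default
    static final long MIN_MILLIS = 30 * 1000;         // assumed lower bound
    static final long MAX_MILLIS = 5 * 60 * 1000;     // "up to 5min" from the proposal

    static long resolveLeaseDurationMillis() {
        String value = System.getProperty("oak.documentMK.leaseDurationMillis");
        if (value == null) {
            return DEFAULT_MILLIS;
        }
        try {
            long parsed = Long.parseLong(value);
            // restrict to the allowed range instead of failing hard
            return Math.max(MIN_MILLIS, Math.min(MAX_MILLIS, parsed));
        } catch (NumberFormatException e) {
            return DEFAULT_MILLIS;
        }
    }
}
{code}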

> ability to override ClusterNodeInfo#DEFAULT_LEASE_DURATION_MILLIS
> -
>
> Key: OAK-3871
> URL: https://issues.apache.org/jira/browse/OAK-3871
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Julian Reschke
> Fix For: 1.4
>
> Attachments: OAK-3871.diff
>
>
> This constant currently is set to 120s.
> In edge cases, it might be good to override the default (when essentially the 
> only other "fix" would be to turn off the lease check).
> Proposal: introduce a system property allowing to tune the lease time (but 
> restrict allowable values to a "sane" range, such as up to 5min).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2478) Move spellcheck config to own configuration node

2016-01-22 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-2478.

   Resolution: Invalid
Fix Version/s: (was: 1.4)

Resolving as invalid as there aren't any global configs for spellcheck. With 
OAK-2477, suggester's global configs have moved to a namespaced location, so once 
we include a new global config for spellcheck we'd probably remember to provide 
it with its own namespaced location.

> Move spellcheck config to own configuration node
> 
>
> Key: OAK-2478
> URL: https://issues.apache.org/jira/browse/OAK-2478
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Tommaso Teofili
>Assignee: Vikas Saurabh
>  Labels: docs-impacting, technical_debt
>
> Currently spellcheck configuration is controlled via properties defined on 
> main config / props node but it'd be good if we would have its own place to 
> configure the whole spellcheck feature to not mix up configuration of other 
> features / parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2477) Move suggester specific config to own configuration node

2016-01-22 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-2477.

   Resolution: Fixed
Fix Version/s: (was: 1.4)
   1.3.15

Implemented on trunk at [r1726237|http://svn.apache.org/r1726237].

Note that the old location is still respected, although the new location is 
preferred. I'll update the documentation accordingly.

> Move suggester specific config to own configuration node
> 
>
> Key: OAK-2477
> URL: https://issues.apache.org/jira/browse/OAK-2477
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Tommaso Teofili
>Assignee: Vikas Saurabh
>  Labels: docs-impacting, technical_debt
> Fix For: 1.3.15
>
> Attachments: OAK-2477.patch
>
>
> Currently suggester configuration is controlled via properties defined on 
> main config / props node but it'd be good if we would have its own place to 
> configure the whole suggest feature to not mix up configuration of other 
> features / parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3905) configurable atomic counter task timeout

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-3905.
---
   Resolution: Fixed
Fix Version/s: 1.3.15

in trunk at https://svn.apache.org/r1726230

> configurable atomic counter task timeout
> 
>
> Key: OAK-3905
> URL: https://issues.apache.org/jira/browse/OAK-3905
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.4
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.3.15
>
>
> For the timeout of the async task of the atomic counter:
> - use milliseconds instead of seconds
> - first schedule at 500ms
> - make it configurable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3843:

Comment: was deleted

(was: trunk: http://svn.apache.org/r1723584

(1.2 and 1.0 to follow later))

> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split it further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.
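
For illustration, a sketch of the kind of chunked read this implies; the class 
name, SQL, and batch size below are assumptions, not the actual 
RDBDocumentStoreJDBC change:

{code}
// Illustrative sketch: split a large key set into batches that stay below
// SQL Server's 2100-parameter limit and issue one query per batch.
// Class name, SQL, and batch size are assumptions, not the actual fix.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public final class ChunkedInClauseRead {

    private static final int MAX_PARAMS = 2000; // safely below the 2100 limit

    static List<String> readData(Connection con, List<String> ids) throws SQLException {
        List<String> result = new ArrayList<>();
        for (int from = 0; from < ids.size(); from += MAX_PARAMS) {
            List<String> chunk = ids.subList(from, Math.min(from + MAX_PARAMS, ids.size()));
            StringBuilder placeholders = new StringBuilder();
            for (int i = 0; i < chunk.size(); i++) {
                placeholders.append(i == 0 ? "?" : ", ?");
            }
            String sql = "select ID, DATA from NODES where ID in (" + placeholders + ")";
            try (PreparedStatement stmt = con.prepareStatement(sql)) {
                for (int i = 0; i < chunk.size(); i++) {
                    stmt.setString(i + 1, chunk.get(i));
                }
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        result.add(rs.getString("DATA"));
                    }
                }
            }
        }
        return result;
    }
}
{code}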



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3913) FileStoreIT#testRecovery fails on Windows

2016-01-22 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3913.
--
   Resolution: Fixed
Fix Version/s: 1.3.15

fix confirmed, marking as resolved.
thanks Julian!

> FileStoreIT#testRecovery fails on Windows
> -
>
> Key: OAK-3913
> URL: https://issues.apache.org/jira/browse/OAK-3913
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.3.15
>
>
> As mentioned on the list
> {code}
> testRecovery(org.apache.jackrabbit.oak.plugins.segment.file.FileStoreIT)  
> Time elapsed: 0.554 sec  <<< ERROR!
> java.io.IOException: Could not remove broken tar file 
> target\FileStoreIT1303808629179878423dir\data0a.tar
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.backupSafely(TarReader.java:237)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.collectFileEntries(TarReader.java:196)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.open(TarReader.java:125)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:434)
> {code}
> Possibly introduced by OAK-3890: the {{data0}} file is closed only at the very 
> end, blocking other FileStores from initializing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3913) FileStoreIT#testRecovery fails on Windows

2016-01-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112471#comment-15112471
 ] 

Julian Reschke commented on OAK-3913:
-

[~alex.parvulescu] Tested and approved. Thanks!

> FileStoreIT#testRecovery fails on Windows
> -
>
> Key: OAK-3913
> URL: https://issues.apache.org/jira/browse/OAK-3913
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> As mentioned on the list
> {code}
> testRecovery(org.apache.jackrabbit.oak.plugins.segment.file.FileStoreIT)  
> Time elapsed: 0.554 sec  <<< ERROR!
> java.io.IOException: Could not remove broken tar file 
> target\FileStoreIT1303808629179878423dir\data0a.tar
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.backupSafely(TarReader.java:237)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.collectFileEntries(TarReader.java:196)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.open(TarReader.java:125)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:434)
> {code}
> Possibly introduced by OAK-3890: the {{data0}} file is closed only at the very 
> end, blocking other FileStores from initializing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112454#comment-15112454
 ] 

Julian Reschke commented on OAK-3843:
-

trunk: http://svn.apache.org/r1723584
1.2: http://svn.apache.org/r1726217


> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split it further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3843:

Labels: candidate_oak_1_0  (was: candidate_oak_1_0 candidate_oak_1_2)

> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split it further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15105337#comment-15105337
 ] 

Julian Reschke edited comment on OAK-3843 at 1/22/16 2:26 PM:
--

trunk: http://svn.apache.org/r1723584

(1.2 and 1.0 to follow later)


was (Author: reschke):
trunk: http://svn.apache.org/r1723584

(1.2 and 1.0 to follow later)

> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split it further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3843:

Affects Version/s: (was: 1.2.9)
   1.2.10

> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split it further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3843:

Fix Version/s: 1.2.11

> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split it further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3913) FileStoreIT#testRecovery fails on Windows

2016-01-22 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112434#comment-15112434
 ] 

Alex Parvulescu commented on OAK-3913:
--

tentative fix with http://svn.apache.org/viewvc?rev=1726215&view=rev

[~julian.resc...@gmx.de] could you please test this out?

> FileStoreIT#testRecovery fails on Windows
> -
>
> Key: OAK-3913
> URL: https://issues.apache.org/jira/browse/OAK-3913
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> As mentioned on the list
> {code}
> testRecovery(org.apache.jackrabbit.oak.plugins.segment.file.FileStoreIT)  
> Time elapsed: 0.554 sec  <<< ERROR!
> java.io.IOException: Could not remove broken tar file 
> target\FileStoreIT1303808629179878423dir\data0a.tar
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.backupSafely(TarReader.java:237)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.collectFileEntries(TarReader.java:196)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader.open(TarReader.java:125)
> at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:434)
> {code}
> Possibly introduced by OAK-3890: the {{data0}} file is closed only at the very 
> end, blocking other FileStores from initializing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3913) FileStoreIT#testRecovery fails on Windows

2016-01-22 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-3913:


 Summary: FileStoreIT#testRecovery fails on Windows
 Key: OAK-3913
 URL: https://issues.apache.org/jira/browse/OAK-3913
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu


As mentioned on the list
{code}
testRecovery(org.apache.jackrabbit.oak.plugins.segment.file.FileStoreIT)  Time 
elapsed: 0.554 sec  <<< ERROR!
java.io.IOException: Could not remove broken tar file 
target\FileStoreIT1303808629179878423dir\data0a.tar
at 
org.apache.jackrabbit.oak.plugins.segment.file.TarReader.backupSafely(TarReader.java:237)
at 
org.apache.jackrabbit.oak.plugins.segment.file.TarReader.collectFileEntries(TarReader.java:196)
at 
org.apache.jackrabbit.oak.plugins.segment.file.TarReader.open(TarReader.java:125)
at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:434)
{code}

Possibly introduced by OAK-3890: the {{data0}} file is closed only at the very 
end, blocking other FileStores from initializing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3912) Segment bundle tests have wrong package name

2016-01-22 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3912.
--
   Resolution: Fixed
 Assignee: Alex Parvulescu
Fix Version/s: 1.3.15

fixed tests with http://svn.apache.org/viewvc?rev=1726210&view=rev

> Segment bundle tests have wrong package name
> 
>
> Key: OAK-3912
> URL: https://issues.apache.org/jira/browse/OAK-3912
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
> Fix For: 1.3.15
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3912) Segment bundle tests have wrong package name

2016-01-22 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-3912:


 Summary: Segment bundle tests have wrong package name
 Key: OAK-3912
 URL: https://issues.apache.org/jira/browse/OAK-3912
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Alex Parvulescu
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112404#comment-15112404
 ] 

Alex Parvulescu commented on OAK-3761:
--

{quote}
Does Oak need automatic encryption or has other use case for encryption ?
{quote}
I don't think it's this simple. The way I see it (nothing against crypto 
support itself) is: does Oak need to _implement_ the crypto support itself, or 
can it simply 'enable' it on demand via a plugin mechanism and a loose 
dependency on an external library that provides this support?
Could we use it? Sure! Do we need to roll our own crypto? Not convinced.

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-3761.patch, OAK-3761.patch
>
>
> As discussed in [0], this issue tracks adding a simple API and implementation 
> for encryption/decryption in Oak. 
> [0] 
> http://oak.markmail.org/search/?q=crypto#query:crypto+page:1+mid:iwsfd66lku2dzs2n+state:results



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112371#comment-15112371
 ] 

Timothee Maret edited comment on OAK-3761 at 1/22/16 12:50 PM:
---

Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by applications leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think the Apache Oak team should decide whether it sees a need to support the 
use cases that motivate G1 by answering the question:

Does Oak need automatic encryption, or does it have other use cases for encryption?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no" or "maybe in the future", then I suggest we 
ditch the current issue OAK-3761 ; propose the {{SymmetricCipher}} API to 
Apache Sling ; add the {{CredentialStore}} API as part of OAK-3626 ; implement 
the {{CredentialStore}} API with a mechanism that does not necessarily involve 
encryption (AFAIU protected/unprotected is a rather loose definition) ; offer 
the ability to plug a custom implementation of the {{CredentialStore}} API 
which may involve encryption.

wdyt ?
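
For context, the symmetric authenticated encryption described above is available 
directly in the JDK; the sketch below shows the underlying primitive (AES/GCM via 
{{javax.crypto}}) purely as an illustration, not the API proposed in OAK-3761.patch:

{code}
// Illustration only: symmetric authenticated encryption with the plain JDK
// (AES/GCM). Not the SymmetricCipher API from OAK-3761.patch.
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public final class GcmSketch {

    private static final int IV_BYTES = 12;   // 96-bit IV, recommended for GCM
    private static final int TAG_BITS = 128;  // authentication tag length

    static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Returns the random IV prepended to the ciphertext so decrypt() is self-contained.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    // Decryption fails with an exception if the data or tag was tampered with.
    static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, ivAndCiphertext, 0, IV_BYTES));
        return cipher.doFinal(ivAndCiphertext, IV_BYTES, ivAndCiphertext.length - IV_BYTES);
    }
}
{code}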


was (Author: marett):
Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by application leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think Apache Oak team should decide whether it sees a need to support use 
cases that motivate G1., by answering the question:

Does Oak need automatic encryption or has other use case for encryption ?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no", then I suggest we ditch the current issue 
OAK-3761 ; propose the {{SymmetricCipher}} API to Apache Sling ; add the 
{{CredentialStore}} API as part of OAK-3626 ; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather lose definition) ; offer the ability to plug 
a custom implementation of the {{CredentialStore}} API which may involve 
encryption.

wdyt ?

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> A

[jira] [Comment Edited] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112371#comment-15112371
 ] 

Timothee Maret edited comment on OAK-3761 at 1/22/16 12:32 PM:
---

Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by application leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think Apache Oak team should decide whether there see a need to support use 
cases that motivate G1., by answering the question:

Does Oak need automatic encryption or has other use case for encryption ?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no", then I suggest we ditch the current issue 
OAK-3761 ; propose the {{SymmetricCipher}} API to Apache Sling ; add the 
{{CredentialStore}} API as part of OAK-3626 ; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather lose definition) ; offer the ability to plug 
a custom implementation of the {{CredentialStore}} API which may involve 
encryption.

wdyt ?


was (Author: marett):
Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for I. implementing the local version of 
the {{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by application leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think Apache Oak team should decide whether there see a need to support use 
cases that motivate G1., by answering the question:

Does Oak need automatic encryption or has other use case for encryption ?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no", then I suggest we ditch the current issue 
OAK-3761 ; propose the {{SymmetricCipher}} API to Apache Sling ; add the 
{{CredentialStore}} API as part of OAK-3626 ; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather lose definition) ; offer the ability to plug 
a custom implementation of the {{CredentialStore}} API which may involve 
encryption.

wdyt ?

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-37

[jira] [Comment Edited] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112371#comment-15112371
 ] 

Timothee Maret edited comment on OAK-3761 at 1/22/16 12:33 PM:
---

Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by application leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think Apache Oak team should decide whether it sees a need to support use 
cases that motivate G1., by answering the question:

Does Oak need automatic encryption or has other use case for encryption ?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no", then I suggest we ditch the current issue 
OAK-3761 ; propose the {{SymmetricCipher}} API to Apache Sling ; add the 
{{CredentialStore}} API as part of OAK-3626 ; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather lose definition) ; offer the ability to plug 
a custom implementation of the {{CredentialStore}} API which may involve 
encryption.

wdyt ?


was (Author: marett):
Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by application leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think Apache Oak team should decide whether there see a need to support use 
cases that motivate G1., by answering the question:

Does Oak need automatic encryption or has other use case for encryption ?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no", then I suggest we ditch the current issue 
OAK-3761 ; propose the {{SymmetricCipher}} API to Apache Sling ; add the 
{{CredentialStore}} API as part of OAK-3626 ; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather lose definition) ; offer the ability to plug 
a custom implementation of the {{CredentialStore}} API which may involve 
encryption.

wdyt ?

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-3761.pa

[jira] [Comment Edited] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112371#comment-15112371
 ] 

Timothee Maret edited comment on OAK-3761 at 1/22/16 12:32 PM:
---

Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below

h3. G0. Allow to unprotect OSGI configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specially cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for I. implementing the local version of 
the {{CredentialStore}} API (G0.).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by application leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be the {{SymmetricCipher}} (as in the patch) 
which would initially contain only support for symmetric authenticated 
encryption/decryption, but could be extended in the future with symmetric 
crypto features (e.g. stream ciphers, password based encryption) if there is a 
need for it.

I think Apache Oak team should decide whether there see a need to support use 
cases that motivate G1., by answering the question:

Does Oak need automatic encryption or has other use case for encryption ?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API ; 
expose the {{encryption}} signature in the API ; add the {{CredentialStore}} 
API as part of OAK-3626 ; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer to this is "no", then I suggest we ditch the current issue 
OAK-3761 ; propose the {{SymmetricCipher}} API to Apache Sling ; add the 
{{CredentialStore}} API as part of OAK-3626 ; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather lose definition) ; offer the ability to plug 
a custom implementation of the {{CredentialStore}} API which may involve 
encryption.

wdyt ?


was (Author: marett):
Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below.

h3. G0. Allow unprotecting OSGi configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specifically cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by applications leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be {{SymmetricCipher}} (as in the patch), 
which would initially only support symmetric authenticated 
encryption/decryption, but could be extended in the future with further 
symmetric crypto features (e.g. stream ciphers, password-based encryption) 
if there is a need for it.

I think the Apache Oak team should decide whether they see a need to support 
use cases that motivate G1, by answering the question:

Does Oak need automatic encryption, or does it have other use cases for 
encryption?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API; 
expose the {{encryption}} signature in the API; add the {{CredentialStore}} 
API as part of OAK-3626; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer is "no", then I suggest we ditch the current issue 
OAK-3761; propose the {{SymmetricCipher}} API to Apache Sling; add the 
{{CredentialStore}} API as part of OAK-3626; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather loose definition); offer the ability to 
plug in a custom implementation of the {{CredentialStore}} API which may 
involve encryption.

wdyt?

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-37

[jira] [Comment Edited] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112371#comment-15112371
 ] 

Timothee Maret edited comment on OAK-3761 at 1/22/16 12:30 PM:
---

Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below.

h3. G0. Allow unprotecting OSGi configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specifically cover fetching unprotected credentials from any 
sort of data store (local or remote).

h3. G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by applications leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be {{SymmetricCipher}} (as in the patch), 
which would initially only support symmetric authenticated 
encryption/decryption, but could be extended in the future with further 
symmetric crypto features (e.g. stream ciphers, password-based encryption) 
if there is a need for it.

I think the Apache Oak team should decide whether they see a need to support 
use cases that motivate G1, by answering the question:

Does Oak need automatic encryption, or does it have other use cases for 
encryption?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API; 
expose the {{encryption}} signature in the API; add the {{CredentialStore}} 
API as part of OAK-3626; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer is "no", then I suggest we ditch the current issue 
OAK-3761; propose the {{SymmetricCipher}} API to Apache Sling; add the 
{{CredentialStore}} API as part of OAK-3626; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather loose definition); offer the ability to 
plug in a custom implementation of the {{CredentialStore}} API which may 
involve encryption.

wdyt?


was (Author: marett):
Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below.

G0. Allow unprotecting OSGi configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specifically cover fetching unprotected credentials from any 
sort of data store (local or remote).

G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by applications leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be {{SymmetricCipher}} (as in the patch), 
which would initially only support symmetric authenticated 
encryption/decryption, but could be extended in the future with further 
symmetric crypto features (e.g. stream ciphers, password-based encryption) 
if there is a need for it.

I think the Apache Oak team should decide whether they see a need to support 
use cases that motivate G1, by answering the question:

Does Oak need automatic encryption, or does it have other use cases for 
encryption?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API; 
expose the {{encryption}} signature in the API; add the {{CredentialStore}} 
API as part of OAK-3626; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer is "no", then I suggest we ditch the current issue 
OAK-3761; propose the {{SymmetricCipher}} API to Apache Sling; add the 
{{CredentialStore}} API as part of OAK-3626; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather loose definition); offer the ability to 
plug in a custom implementation of the {{CredentialStore}} API which may 
involve encryption.

wdyt?

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-3761.patch, O

[jira] [Commented] (OAK-3761) Oak crypto API and implementation

2016-01-22 Thread Timothee Maret (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112371#comment-15112371
 ] 

Timothee Maret commented on OAK-3761:
-

Thanks for the detailed feedback [~chetanm]! I think we may be aiming at two 
different goals, which I'll detail below.

G0. Allow unprotecting OSGi configuration properties on demand
This feature request is contained in OAK-3626.
This feature could consist of the {{CredentialStore}} API proposed in the 
previous comment.
The API would specifically cover fetching unprotected credentials from any 
sort of data store (local or remote).

G1. Add support for a general crypto API
This feature could be used in Oak for implementing the local version of the 
{{CredentialStore}} API (G0).
This feature could also be used in Oak for supporting "automatic encryption 
of properties" as described by [~anchela] in OAK-3626.
This feature could incidentally be used by applications leveraging Apache 
Jackrabbit Oak (such as Apache Sling).

The API for this feature would be {{SymmetricCipher}} (as in the patch), 
which would initially only support symmetric authenticated 
encryption/decryption, but could be extended in the future with further 
symmetric crypto features (e.g. stream ciphers, password-based encryption) 
if there is a need for it.

I think the Apache Oak team should decide whether they see a need to support 
use cases that motivate G1, by answering the question:

Does Oak need automatic encryption, or does it have other use cases for 
encryption?

Assuming the answer is "yes", then we may keep the {{SymmetricCipher}} API; 
expose the {{encryption}} signature in the API; add the {{CredentialStore}} 
API as part of OAK-3626; implement the {{CredentialStore}} API using the 
{{SymmetricCipher}} API.

Assuming the answer is "no", then I suggest we ditch the current issue 
OAK-3761; propose the {{SymmetricCipher}} API to Apache Sling; add the 
{{CredentialStore}} API as part of OAK-3626; implement the {{CredentialStore}} 
API with a mechanism that does not necessarily involve encryption (AFAIU 
protected/unprotected is a rather loose definition); offer the ability to 
plug in a custom implementation of the {{CredentialStore}} API which may 
involve encryption.

wdyt?

> Oak crypto API and implementation
> -
>
> Key: OAK-3761
> URL: https://issues.apache.org/jira/browse/OAK-3761
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 1.3.12
>Reporter: Timothee Maret
>Assignee: angela
> Attachments: OAK-3761.patch, OAK-3761.patch
>
>
> As discussed in [0], this issue tracks adding a simple API and implementation 
> for encryption/decryption in Oak. 
> [0] 
> http://oak.markmail.org/search/?q=crypto#query:crypto+page:1+mid:iwsfd66lku2dzs2n+state:results



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3911) Integer overflow causing incorrect file handling in OakDirectory for file size more than 2 GB

2016-01-22 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3911:
-
Attachment: OAK-3911-v1.patch

Committed an ignored test with 1726189, which creates a 3GB file and then tries 
to validate the file length. It fails with:

{noformat}
Expected :3221226412
Actual   :369952
{noformat}

With the [patch|^OAK-3911-v1.patch] the test case passes.

[~tmueller] Can you have a look?

> Integer overflow causing incorrect file handling in OakDirectory for file 
> size more than 2 GB
> -
>
> Key: OAK-3911
> URL: https://issues.apache.org/jira/browse/OAK-3911
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3911-v1.patch
>
>
> In a couple of cases we have seen strange errors related to invalid seeks. In 
> such cases it was seen that the file sizes were greater than 2GB. A close 
> inspection of OakDirectory [1] shows that the following calls in loadBlob and 
> flushBlob are prone to integer overflow (thanks [~tmueller]):
> * {{int n = (int) Math.min(blobSize, length - index * blobSize);}}
> * {{int n = (int) Math.min(blobSize, length - i * blobSize);}}
> Above, {{blobSize}}, {{index}} and {{i}} are all {{int}}, and the 
> multiplication of two ints is evaluated as an int, which can overflow.
> {noformat}Caused by: java.io.IOException: Invalid seek request
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.seek(OakDirectory.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.seek(OakDirectory.java:418)
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader.seekDir(BlockTreeTermsReader.java:223)
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader.(BlockTreeTermsReader.java:142)
> {noformat}
> [1] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory.java#L361



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3911) Integer overflow causing incorrect file handling in OakDirectory for file size more than 2 GB

2016-01-22 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3911:


 Summary: Integer overflow causing incorrect file handling in 
OakDirectory for file size more than 2 GB
 Key: OAK-3911
 URL: https://issues.apache.org/jira/browse/OAK-3911
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.4


In a couple of cases we have seen strange errors related to invalid seeks. In 
such cases it was seen that the file sizes were greater than 2GB. A close 
inspection of OakDirectory [1] shows that the following calls in loadBlob and 
flushBlob are prone to integer overflow (thanks [~tmueller]):

* {{int n = (int) Math.min(blobSize, length - index * blobSize);}}
* {{int n = (int) Math.min(blobSize, length - i * blobSize);}}

Above, {{blobSize}}, {{index}} and {{i}} are all {{int}}, and the 
multiplication of two ints is evaluated as an int, which can overflow.
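
For illustration (not part of the attached patch), a minimal standalone sketch 
of the overflow, assuming a 32k blob size; widening the multiplication to 
{{long}} avoids it:

{noformat}
public class OverflowDemo {
    public static void main(String[] args) {
        int blobSize = 32 * 1024;               // assumed blob size
        long length = 3L * 1024 * 1024 * 1024;  // a 3 GB index file
        int index = 70000;                      // a blob index past the 2 GB boundary

        // int * int is evaluated as int and wraps around before the subtraction
        long broken = length - index * blobSize;
        // widening one operand to long keeps the multiplication in long range
        long fixed = length - (long) index * blobSize;

        System.out.println("broken remaining bytes = " + broken); // 5222432768 (nonsense)
        System.out.println("fixed remaining bytes  = " + fixed);  // 927465472
    }
}
{noformat}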

{noformat}Caused by: java.io.IOException: Invalid seek request
at 
org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.seek(OakDirectory.java:288)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.seek(OakDirectory.java:418)
at 
org.apache.lucene.codecs.BlockTreeTermsReader.seekDir(BlockTreeTermsReader.java:223)
at 
org.apache.lucene.codecs.BlockTreeTermsReader.(BlockTreeTermsReader.java:142)
{noformat}

[1] 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory.java#L361



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3844) Better support for versionable nodes without version histories

2016-01-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112338#comment-15112338
 ] 

Tomek Rękawek commented on OAK-3844:


I created OAK-3910 to cover the case in which the primary type or one of the 
mixins inherits from {{mix:versionable}}.

> Better support for versionable nodes without version histories
> --
>
> Key: OAK-3844
> URL: https://issues.apache.org/jira/browse/OAK-3844
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Tomek Rękawek
>Assignee: Julian Sedding
> Fix For: 1.4
>
> Attachments: OAK-3844-failing-test.patch, OAK-3844.patch
>
>
> One of the customers reported following exception that has been thrown during 
> the migration:
> {noformat}
> Caused by: java.lang.IllegalStateException: This builder does not exist: 
> 95a5253f-d37b-4e88-a4b4-0721530344fc
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
>   at 
> org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
> ...
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
> {noformat}
> It seems that the node with reported UUID has -primary type inheriting from- 
> mixin {{mix:versionable}}, but there is no appropriate version history in the 
> version storage.
> Obviously this means that there's something wrong with the repository. 
> However, I think that the migration process shouldn't fail, but proceed 
> silently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3910) Migrating node inheriting from mix:versionable without version history

2016-01-22 Thread JIRA
Tomek Rękawek created OAK-3910:
--

 Summary: Migrating node inheriting from mix:versionable without 
version history
 Key: OAK-3910
 URL: https://issues.apache.org/jira/browse/OAK-3910
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: upgrade
Reporter: Tomek Rękawek
Priority: Minor


OAK-3844 fixes the way in which oak-upgrade migrates {{mix:versionable}} nodes 
without a version history, by displaying a warning and removing the mixin.

There's a related case in which the node without the version history isn't 
explicitly declared as {{mix:versionable}}, but its {{jcr:primaryType}} or one 
of its mixins inherits from {{mix:versionable}}. In this case we can't simply 
remove the mixin; we should display a warning and create an empty version 
history instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3844) Better support for versionable nodes without version histories

2016-01-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3844:
---
Description: 
One of the customers reported the following exception, thrown during the 
migration:

{noformat}
Caused by: java.lang.IllegalStateException: This builder does not exist: 
95a5253f-d37b-4e88-a4b4-0721530344fc
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:150)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
at 
org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
...
at 
org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
{noformat}

It seems that the node with reported UUID has -primary type inheriting from- 
mixin {{mix:versionable}}, but there is no appropriate version history in the 
version storage.

Obviously this means that there's something wrong with the repository. However, 
I think that the migration process shouldn't fail, but proceed silently.

  was:
One of the customers reported following exception that has been thrown during 
the migration:

{noformat}
Caused by: java.lang.IllegalStateException: This builder does not exist: 
95a5253f-d37b-4e88-a4b4-0721530344fc
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:150)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
at 
org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
at 
org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
...
at 
org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
{noformat}

It seems that the node with reported UUID has primary type inheriting from 
{{mix:versionable}}, but there is no appropriate version history in the version 
storage.

Obviously this means that there's something wrong with the repository. However, 
I think that the migration process shouldn't fail, but proceed silently.


> Better support for versionable nodes without version histories
> --
>
> Key: OAK-3844
> URL: https://issues.apache.org/jira/browse/OAK-3844
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Tomek Rękawek
>Assignee: Julian Sedding
> Fix For: 1.4
>
> Attachments: OAK-3844-failing-test.patch, OAK-3844.patch
>
>
> One of the customers reported following exception that has been thrown during 
> the migration:
> {noformat}
> Caused by: java.lang.IllegalStateException: This builder does not exist: 
> 95a5253f-d37b-4e88-a4b4-0721530344fc
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
>   at 
> org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
> ...
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
> {noformat}
> It seems that the node with reported UUID has -primary type inheriting from- 
> mixin {{mix:versionable}}, but there is no appropriate version history in the 
> version storage.
> Obviously this means that there's something wrong with the repository. 
> However, I think that the migration process shouldn't fail, but proceed 
> silently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3888) Release Oak 1.3.14

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-3888.
---
Resolution: Fixed

> Release Oak 1.3.14
> --
>
> Key: OAK-3888
> URL: https://issues.apache.org/jira/browse/OAK-3888
> Project: Jackrabbit Oak
>  Issue Type: Task
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.4
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3909) Documentation site failure

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-3909.
---
   Resolution: Won't Fix
Fix Version/s: 1.3.15

Missing artifacts on maven. Run {{mvn clean install -DskipTests}} before 
generating the docs.

> Documentation site failure
> --
>
> Key: OAK-3909
> URL: https://issues.apache.org/jira/browse/OAK-3909
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Davide Giannella
>Assignee: Francesco Mari
> Fix For: 1.3.15
>
> Attachments: build-1453455064.log.gz
>
>
> After the creation of oak-it and the big refactoring, building the website 
> fails with 
> {noformat}
> [ERROR] Failed to execute goal on project oak-it: Could not resolve 
> dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
> following artifacts could not be resolved: 
> org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
> org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
> artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in 
> apache.snapshots (http://repository.apache.org/snapshots) -> [Help 1]
> {noformat}
> [full build logs|^build-1453455064.log.gz]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3909) Documentation site failure

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3909:
--
Description: 
After the creation of oak-it and the big refactoring, building the website 
fails with 

{noformat}
[ERROR] Failed to execute goal on project oak-it: Could not resolve 
dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
following artifacts could not be resolved: 
org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in apache.snapshots 
(http://repository.apache.org/snapshots) -> [Help 1]
{noformat}

[full build logs|^build-1453455064.log.gz]

  was:
After the creation of oak-it and the big refactoring, building the website 
fails with 

{noformat}
[ERROR] Failed to execute goal on project oak-it: Could not resolve 
dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
following artifacts could not be resolved: 
org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in apache.snapshots 
(http://repository.apache.org/snapshots) -> [Help 1]
{noformat}

[full build logs|^]


> Documentation site failure
> --
>
> Key: OAK-3909
> URL: https://issues.apache.org/jira/browse/OAK-3909
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Davide Giannella
>Assignee: Francesco Mari
> Attachments: build-1453455064.log.gz
>
>
> After the creation of oak-it and the big refactoring, building the website 
> fails with 
> {noformat}
> [ERROR] Failed to execute goal on project oak-it: Could not resolve 
> dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
> following artifacts could not be resolved: 
> org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
> org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
> artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in 
> apache.snapshots (http://repository.apache.org/snapshots) -> [Help 1]
> {noformat}
> [full build logs|^build-1453455064.log.gz]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3909) Documentation site failure

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3909:
--
Description: 
After the creation of oak-it and the big refactoring, building the website 
fails with 

{noformat}
[ERROR] Failed to execute goal on project oak-it: Could not resolve 
dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
following artifacts could not be resolved: 
org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in apache.snapshots 
(http://repository.apache.org/snapshots) -> [Help 1]
{noformat}

[full build logs|^]

  was:
After the creation of oak-it and the big refactoring, building the website 
fails with 

{noformat}
[ERROR] Failed to execute goal on project oak-it: Could not resolve 
dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
following artifacts could not be resolved: 
org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in apache.snapshots 
(http://repository.apache.org/snapshots) -> [Help 1]
{noformat}


> Documentation site failure
> --
>
> Key: OAK-3909
> URL: https://issues.apache.org/jira/browse/OAK-3909
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Davide Giannella
>Assignee: Francesco Mari
> Attachments: build-1453455064.log.gz
>
>
> After the creation of oak-it and the big refactoring, building the website 
> fails with 
> {noformat}
> [ERROR] Failed to execute goal on project oak-it: Could not resolve 
> dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
> following artifacts could not be resolved: 
> org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
> org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
> artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in 
> apache.snapshots (http://repository.apache.org/snapshots) -> [Help 1]
> {noformat}
> [full build logs|^]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3909) Documentation site failure

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3909:
--
Attachment: build-1453455064.log.gz

> Documentation site failure
> --
>
> Key: OAK-3909
> URL: https://issues.apache.org/jira/browse/OAK-3909
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: doc
>Reporter: Davide Giannella
>Assignee: Francesco Mari
> Attachments: build-1453455064.log.gz
>
>
> After the creation of oak-it and the big refactoring, building the website 
> fails with 
> {noformat}
> [ERROR] Failed to execute goal on project oak-it: Could not resolve 
> dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
> following artifacts could not be resolved: 
> org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
> org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
> artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in 
> apache.snapshots (http://repository.apache.org/snapshots) -> [Help 1]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3909) Documentation site failure

2016-01-22 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3909:
-

 Summary: Documentation site failure
 Key: OAK-3909
 URL: https://issues.apache.org/jira/browse/OAK-3909
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: doc
Reporter: Davide Giannella
Assignee: Francesco Mari


After the creation of oak-it and the big refactoring, building the website 
fails with 

{noformat}
[ERROR] Failed to execute goal on project oak-it: Could not resolve 
dependencies for project org.apache.jackrabbit:oak-it:jar:1.4-SNAPSHOT: The 
following artifacts could not be resolved: 
org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT, 
org.apache.jackrabbit:oak-segment:jar:tests:1.4-SNAPSHOT: Could not find 
artifact org.apache.jackrabbit:oak-segment:jar:1.4-SNAPSHOT in apache.snapshots 
(http://repository.apache.org/snapshots) -> [Help 1]
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3852) RDBDocumentStore: batched append logic may lose property changes

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3852.
-

Bulk close for 1.3.14

> RDBDocumentStore: batched append logic may lose property changes
> -
>
> Key: OAK-3852
> URL: https://issues.apache.org/jira/browse/OAK-3852
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
>
> When using the "append" logic, we only serialize those parts of the 
> {{UpdateOp}} referring to non-column properties. However, the update logic 
> currently does not handle all column properties, such as {{_deletedOnce}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3830) Provide size for properties for PropertyIterator returned in Node#getProperties(namePattern)

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3830.
-

Bulk close for 1.3.14

> Provide size for properties for PropertyIterator returned in 
> Node#getProperties(namePattern)
> -
>
> Key: OAK-3830
> URL: https://issues.apache.org/jira/browse/OAK-3830
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK-3830.patch
>
>
> Oak currently provides the size of the {{PropertyIterator}} for 
> {{Node#getProperties()}}; however, for {{Node#getProperties(namePattern)}} -1 
> is returned.
> We should also expose the size for the namePattern variant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3877) PerfLogger should use System.nanoTime instead of System.currentTimeMillis

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3877.
-

Bulk close for 1.3.14

> PerfLogger should use System.nanoTime instead of System.currentTimeMillis
> -
>
> Key: OAK-3877
> URL: https://issues.apache.org/jira/browse/OAK-3877
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.14
>
>
> PerfLogger currently makes use of System.currentTimeMillis for timing 
> performance. It would be better to make use of System.nanoTime.
> Per [~ianeboston] 
> [comment|https://issues.apache.org/jira/browse/OAK-3654?focusedCommentId=15022031&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15022031]
>  and [1]
> bq. You should always try to use nanoTime to do timing measurement or 
> calculation
> This would provide the following benefits:
> * Simpler integration with Metric stats support which makes use of nanoTime
> * No possibility of drift i.e. currentTimeMillis going back in time
> [1] https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3861) MapRecord reduce extra loop in MapEntry creation

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3861.
-

Bulk close for 1.3.14

> MapRecord reduce extra loop in MapEntry creation
> 
>
> Key: OAK-3861
> URL: https://issues.apache.org/jira/browse/OAK-3861
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Trivial
> Fix For: 1.3.14
>
> Attachments: MapRecord.java.patch
>
>
> removes unneeded extra loop and a bunch of temp arrays



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3769) QueryParse exception when fulltext search performed with term having '/'

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3769.
-

Bulk close for 1.3.14

> QueryParse exception when fulltext search performed with term having '/'
> 
>
> Key: OAK-3769
> URL: https://issues.apache.org/jira/browse/OAK-3769
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.7, 1.2.8
>Reporter: Rajeev Duggal
>Assignee: Chetan Mehrotra
> Fix For: 1.2.10, 1.3.14
>
> Attachments: OAK-3769-v1.patch
>
>
> Running the query below results in the exception shown in [1]:
> /jcr:root/content/dam//element(*,dam:Asset)[jcr:contains(jcr:content/metadata/@cq:tags,
>  'stockphotography:business/business_abstract')] order by @jcr:created 
> descending
> Also if you remove the node at 
> /oak:index/damAssetLucene/indexRules/dam:Asset/properties/cqTags  and 
> re-index the /oak:index/damAssetLucene index, the query works.
> It seems '/' is a special character and needs to be escaped by Oak.
> [1]
> {noformat}
> Caused by: 
> org.apache.lucene.queryparser.flexible.core.QueryNodeParseException: Syntax 
> Error, cannot parse stockphotography\:business/business_abstract: Lexical 
> error at line 1, column 45.  Encountered:  after : "/business_abstract" 
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.parse(StandardSyntaxParser.java:74)
> at 
> org.apache.lucene.queryparser.flexible.core.QueryParserHelper.parse(QueryParserHelper.java:250)
> at 
> org.apache.lucene.queryparser.flexible.standard.StandardQueryParser.parse(StandardQueryParser.java:168)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LucenePropertyIndex.tokenToQuery(LucenePropertyIndex.java:1260)
> ... 138 common frames omitted
> Caused by: 
> org.apache.lucene.queryparser.flexible.standard.parser.TokenMgrError: Lexical 
> error at line 1, column 45.  Encountered:  after : "/business_abstract"
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParserTokenManager.getNextToken(StandardSyntaxParserTokenManager.java:937)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.jj_scan_token(StandardSyntaxParser.java:945)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.jj_3R_4(StandardSyntaxParser.java:827)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.jj_3_2(StandardSyntaxParser.java:739)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.jj_2_2(StandardSyntaxParser.java:730)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.Clause(StandardSyntaxParser.java:318)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.ModClause(StandardSyntaxParser.java:303)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.ConjQuery(StandardSyntaxParser.java:234)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.DisjQuery(StandardSyntaxParser.java:204)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.Query(StandardSyntaxParser.java:166)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.TopLevelQuery(StandardSyntaxParser.java:147)
> at 
> org.apache.lucene.queryparser.flexible.standard.parser.StandardSyntaxParser.parse(StandardSyntaxParser.java:65)
> ... 141 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3803) Clean up the fixtures code in core and jcr modules

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3803.
-

Bulk close for 1.3.14

> Clean up the fixtures code in core and jcr modules
> --
>
> Key: OAK-3803
> URL: https://issues.apache.org/jira/browse/OAK-3803
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: core, jcr
>Reporter: Tomek Rękawek
>Assignee: Michael Dürig
>  Labels: tech-debt
> Fix For: 1.3.14
>
> Attachments: OAK-3803.patch
>
>
> The oak-core and oak-jcr modules use the fixture mechanism to provide NodeStore 
> implementations to the unit/integration tests. There are a few problems with 
> the fixture implementation:
> * the {{NodeStoreFixture}} class is duplicated between the two modules and the 
> copies support different sets of options (e.g. the oak-core version doesn't 
> support the RDB node store at all, while the oak-jcr one doesn't support 
> MemoryNodeStore)
> * it isn't possible to set the MongoDB URL manually from the Maven command 
> line (it can be done for the RDB, though), which makes running the tests on a 
> Mongo replica hard,
> * the Mongo fixture doesn't remove the test database after the test is done.
> There should be just one NodeStoreFixture implementation (the oak-jcr can 
> reuse the oak-core version), supporting all values of the {{Fixture}} enum. 
> The Mongo fixture should be more customisable and should also clean up the 
> database.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3812) Disable compaction gain estimation if compaction is paused

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3812.
-

Bulk close for 1.3.14

> Disable compaction gain estimation if compaction is paused
> --
>
> Key: OAK-3812
> URL: https://issues.apache.org/jira/browse/OAK-3812
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.3.14
>
>
> I think we should disable compaction estimation when compaction is paused. 
> Estimation interferes with the caches, wastes CPU and IO cycles and is not 
> essential for Oak's operation when compaction is disabled. The only reason it 
> was unconditionally enabled initially is to gather the respective information 
> in production. I think this has turned out to be not too useful so it is safe 
> to disable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3577) NameValidator diagnostics could be more helpful

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3577.
-

Bulk close for 1.3.14

> NameValidator diagnostics could be more helpful
> ---
>
> Key: OAK-3577
> URL: https://issues.apache.org/jira/browse/OAK-3577
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK-3577.patch
>
>
> When reporting an invalid name, the log isn't always helpful, as trailing 
> whitespace or non-ASCII whitespace characters are hard to spot. It would be 
> good to append a variant of the name that has all non-printing ASCII 
> characters plus whitespace escaped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3826) Lucene index augmentation doesn't work in Osgi environment

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3826.
-

Bulk close for 1.3.14

> Lucene index augmentation doesn't work in Osgi environment
> --
>
> Key: OAK-3826
> URL: https://issues.apache.org/jira/browse/OAK-3826
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.3.14
>
> Attachments: OAK-3826-v2.patch, OAK-3826.patch
>
>
> OAK-3576 introduced a way to hook in an SPI to provide extra fields and query 
> terms for a lucene index.
> In the OSGi world, due to OAK-3815, {{LuceneIndexProviderService}} registered 
> references to the SPI and pinged {{IndexAugmentFactory}} to update its map. But 
> it seems the bind/unbind methods get called before the information in the 
> Tracker is updated. This leads to a wrong set of services being captured by 
> {{IndexAugmentFactory}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3873) Don't pass the compaction map to FileStore.cleanup

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3873.
-

Bulk close for 1.3.14

> Don't pass the compaction map to FileStore.cleanup
> --
>
> Key: OAK-3873
> URL: https://issues.apache.org/jira/browse/OAK-3873
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.3.14
>
>
> That argument is unused, so I'll remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3856) Potential NPE in SegmentWriter

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3856.
-

Bulk close for 1.3.14

> Potential NPE in SegmentWriter
> --
>
> Key: OAK-3856
> URL: https://issues.apache.org/jira/browse/OAK-3856
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
> Fix For: 1.3.14
>
>
> {{SegmentWriter#SegmentWriter(SegmentStore, SegmentVersion)}} potentially 
> leads to a {{NPE}} as it passes on {{null}} for the {{wid}} argument. The 
> latter is expected to be non {{null}} in 
> {{SegmentWriter.SegmentBufferWriterPool#borrowWriter}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3857) Simplify SegmentGraphTest

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3857.
-

Bulk close for 1.3.14

> Simplify SegmentGraphTest
> -
>
> Key: OAK-3857
> URL: https://issues.apache.org/jira/browse/OAK-3857
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Minor
>  Labels: gc
> Fix For: 1.3.14
>
>
> {{SegmentGraphTest}} has a somewhat complicated setup phase to build a 
> segment store of a certain structure. This will probably prove unreliable 
> when the underlying implementation details of how segments are written change 
> (e.g. with OAK-3348). I would like to refactor the test such that it becomes 
> independent of such implementation details. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3847) Provide an easy way to parse/retrieve facets

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3847.
-

Bulk close for 1.3.14

> Provide an easy way to parse/retrieve facets
> 
>
> Key: OAK-3847
> URL: https://issues.apache.org/jira/browse/OAK-3847
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.3.14
>
>
> Currently, facet results are returned within the rep:facet($propertyname) 
> property of each resulting node. The resulting String [1] is however a bit 
> annoying to parse, as it separates label and value by a comma, so that if the 
> label contains a similar pattern parsing may even be buggy.
> An easier format for facets should be used, possibly together with a utility 
> class that returns proper objects that client code can consume.
> [1] : 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/jcr/query/FacetTest.java#L99



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3739) RDBDocumentStore: allow schema evolution part 1: check for required columns, log unexpected new columns

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3739.
-

Bulk close for 1.3.14

> RDBDocumentStore: allow schema evolution part 1: check for required columns, 
> log unexpected new columns
> ---
>
> Key: OAK-3739
> URL: https://issues.apache.org/jira/browse/OAK-3739
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
>
> In the future, we may have to add new table columns (for instance, see 
> OAK-3730).
> We can somewhat decouple database changes from code changes by only using new 
> columns when they are present.
> However, we need to make sure that once the columns have been added and the 
> system has started to use them, no older code writes to the DB (potentially 
> causing inconsistencies because of not updating these columns).
> To prevent this, code could check that it understands all columns it finds on 
> startup. If there are unknown columns, it would need to abort and log an 
> error that explains the compatibility issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3849) After partial migration versions are not restorable

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3849.
-

Bulk close for 1.3.14

> After partial migration versions are not restorable
> ---
>
> Key: OAK-3849
> URL: https://issues.apache.org/jira/browse/OAK-3849
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Reporter: Tomek Rękawek
>Assignee: Julian Sedding
> Fix For: 1.3.14
>
> Attachments: OAK-3849.patch
>
>
> After migrating a content subtree with referenced versions and starting the 
> destination repository, the versions are not available. The reason is that 
> the new version histories' UUIDs haven't been indexed. We should set 
> {{/oak:index/uuid/reindex}} to {{true}} every time we copy 
> a version history, so the new identifiers will be indexed during the merge.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-2472) Add support for atomic counters on cluster solutions

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-2472.
-

Bulk close for 1.3.14

> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.0
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>Priority: Blocker
>  Labels: scalability
> Fix For: 1.3.14
>
> Attachments: OAK-2472-failure-1452511772.log.gz, 
> OAK-2472-success-1452511511.log.gz, atomic-counter.md, oak-1452185608.log.gz, 
> oak-1452268140.log.gz
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3891) AsyncIndexUpdateLeaseTest doesn't use the provided NodeStore

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3891.
-

Bulk close for 1.3.14

> AsyncIndexUpdateLeaseTest doesn't use the provided NodeStore
> 
>
> Key: OAK-3891
> URL: https://issues.apache.org/jira/browse/OAK-3891
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Trivial
> Fix For: 1.2.10, 1.3.14
>
>
> AsyncIndexUpdateLeaseTest extends from _OakBaseTest_ but still sets up its 
> own NodeStore; this needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3864) Filestore cleanup removes referenced segments

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3864.
-

Bulk close for 1.3.14

> Filestore cleanup removes referenced segments
> -
>
> Key: OAK-3864
> URL: https://issues.apache.org/jira/browse/OAK-3864
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: regression
> Fix For: 1.3.14
>
>
> In some situations {{FileStore.cleanup()}} may remove segments that are still 
> referenced, subsequently causing a {{SNFE}}. 
> This is a regression introduced with OAK-1828. 
> {{FileStore.cleanup()}} relies on the ordering of the segments in the tar 
> files: later segments only reference earlier segments. As we have seen in 
> other places this assumption does not hold any more (e.g. OAK-3794, OAK-3793) 
> since OAK-1828.
>  {{cleanup}} traverses the segments backwards maintaining a list of 
> referenced ids. When a segment is not in that list, it is removed. However, 
> this approach does not work with forward references as those are only seen 
> later when the segment has been removed already. 
> cc [~alex.parvulescu], [~frm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3863) [oak-blob-cloud] Incorrect export package

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3863.
-

Bulk close for 1.3.14

> [oak-blob-cloud] Incorrect export package
> -
>
> Key: OAK-3863
> URL: https://issues.apache.org/jira/browse/OAK-3863
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.2.10, 1.3.14
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3848) ConcurrentAddNodesClusterIT.addNodesConcurrent() fails occasionally

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3848.
-

Bulk close for 1.3.14

> ConcurrentAddNodesClusterIT.addNodesConcurrent() fails occasionally
> ---
>
> Key: OAK-3848
> URL: https://issues.apache.org/jira/browse/OAK-3848
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.3.14
>
>
> Failed recently on travis: 
> https://travis-ci.org/apache/jackrabbit-oak/builds/100793506



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3838) IndexPlanner incorrectly lets all full text indices participate for suggest/spellcheck queries

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3838.
-

Bulk close for 1.3.14

> IndexPlanner incorrectly lets all full text indices participate for 
> suggest/spellcheck queries
> -
>
> Key: OAK-3838
> URL: https://issues.apache.org/jira/browse/OAK-3838
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
> Fix For: 1.3.14
>
> Attachments: OAK-3838-take2.patch, OAK-3838.patch
>
>
> Currently, the index planner builds a plan for suggest/spellcheck queries even 
> if the indices don't have that functionality enabled.
> Also, there's another issue: when suggestion/spell-check come into play, we 
> need to disallow the inheritance behavior of the indexingRule. To clarify, a 
> query like {{SELECT [rep:suggest()] from [nt:unstructured] where suggest('test')}}, 
> if using some index preparing suggestions on {{nt:base}}, would give incorrect 
> suggestions, as the index would have suggested values from other types of nodes 
> as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3819) Collect and expose statistics related to Segment FileStore operations

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3819.
-

Bulk close for 1.3.14

> Collect and expose statistics related to Segment FileStore operations
> -
>
> Key: OAK-3819
> URL: https://issues.apache.org/jira/browse/OAK-3819
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segmentmk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.14
>
> Attachments: OAK-3819-segment-read.patch, OAK-3819-v1.patch, 
> OAK-3819-v2.patch, segment-writes.png
>
>
> It would be useful to collect some more stats around FileStore operations in 
> SegmentMK
> * Writes per second - how many bytes are being written, measured per second
> * Repository size time series - there is a time series in {{GCMonitorMBean}}, 
> but that appears to only get updated upon GC and is hence a bit stale



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3868) Move createSegmentWriter() from FileStore to SegmentTracker

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3868.
-

Bulk close for 1.3.14

> Move createSegmentWriter() from FileStore to SegmentTracker
> ---
>
> Key: OAK-3868
> URL: https://issues.apache.org/jira/browse/OAK-3868
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: technical_debt
> Fix For: 1.3.14
>
>
> I think it makes sense to move said method. This simplifies the code in 
> various places as it somewhat decouples the concern "writing segments" from 
> an implementation ({{FileStore}}). 
> Also this is somewhat a prerequisite for my current work on OAK-3348.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3890) Robuster test expectations for FileStoreIT

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3890.
-

Bulk close for 1.3.14

> Robuster test expectations for FileStoreIT
> --
>
> Key: OAK-3890
> URL: https://issues.apache.org/jira/browse/OAK-3890
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: technical_debt
> Fix For: 1.3.14
>
>
> {{FileStoreIT.testRecovery}} currently hard-codes expected segment offsets. I 
> would like to refactor this to make it more robust against changes in how 
> exactly records are stored / de-duplicated. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3859) Suspended commit depends on non-conflicting change

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3859.
-

Bulk close for 1.3.14

> Suspended commit depends on non-conflicting change
> --
>
> Key: OAK-3859
> URL: https://issues.apache.org/jira/browse/OAK-3859
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.3.6
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.3.14
>
>
> When a conflict occurs, a commit is suspended until the conflicting revision 
> becomes visible. This feature was introduced with OAK-3042. The 
> implementation does not distinguish between revisions that are conflicting 
> and those that are reported as collisions. The latter just means that changes 
> happened after the base revision; they are not necessarily conflicting. E.g. 
> different properties can be changed concurrently.
> There are actually two problems:
> - When a commit detects a conflict, it will create collision markers even for 
> changes that are non-conflicting. The commit should only create collision 
> markers for conflicting changes.
> - A commit with a conflict will suspend until it also sees revisions that are 
> considered a collision but are not actually a conflict. The commit should 
> only suspend until conflicting revisions are visible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3872) [RDB] Updated blob still deleted even if deletion interval lower

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3872.
-

Bulk close for 1.3.14

> [RDB] Updated blob still deleted even if deletion interval lower
> 
>
> Key: OAK-3872
> URL: https://issues.apache.org/jira/browse/OAK-3872
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob, rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Amit Jain
>Assignee: Julian Reschke
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK_3872.patch
>
>
> If an existing blob is uploaded again, the timestamp of the existing entry is 
> updated in the meta table. Subsequently, if a call to delete 
> (RDBBlobStore#countDeleteChunks) is made with a {{maxLastModifiedTime}} 
> parameter less than the updated time above, the entry in the meta table is 
> not touched, but the data table entry is wiped out. 
> See 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/rdb/RDBBlobStore.java#L510
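
A minimal JDBC sketch of the guard this implies, deleting the data row only when the corresponding meta row is itself older than {{maxLastModifiedTime}}; the table and column names below are simplified placeholders, not the actual RDB schema:

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class GuardedChunkDelete {

    // Deletes the data row only if the meta row for the same id is old enough.
    public static int deleteChunk(Connection con, String id, long maxLastModifiedTime)
            throws SQLException {
        String sql = "DELETE FROM DATASTORE_DATA WHERE ID = ? AND EXISTS ("
                + "SELECT 1 FROM DATASTORE_META WHERE ID = ? AND LASTMOD < ?)";
        try (PreparedStatement stmt = con.prepareStatement(sql)) {
            stmt.setString(1, id);
            stmt.setString(2, id);
            stmt.setLong(3, maxLastModifiedTime);
            return stmt.executeUpdate();
        }
    }
}
{code}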



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3645.
-

Bulk close for 1.3.14

> RDBDocumentStore: server time detection for DB2 fails due to timezone/dst 
> differences
> -
>
> Key: OAK-3645
> URL: https://issues.apache.org/jira/browse/OAK-3645
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.10, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK-3645-jr.patch, OAK-3645.patch
>
>
> We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for its system time.
> Apparently, at least with DB2, this might return a value that is off by a 
> multiple of one hour (3600 * 1000ms) depending on whether the OAK instance 
> and the DB run in different timezones.
> Known to work: both on the same machine.
> Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a 
> timestamp one hour in the past.
> At this time it's not clear whether the same problem occurs for other 
> databases.
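
A minimal JDBC sketch of the kind of check involved, comparing the database's idea of "now" with the JVM clock; the exact statement is database specific, so it is passed in (for DB2 it would be along the lines of the {{CURRENT_TIMESTAMP(4)}} query mentioned above):

{code:java}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;

public final class ServerTimeCheck {

    // Returns (DB server time - local JVM time) in milliseconds.
    // Note: getTimestamp(int) interprets the value in the JVM's default time
    // zone, which is exactly where the offset described above can creep in;
    // getTimestamp(int, Calendar) can be used to compensate for a known zone.
    public static long clockDifferenceMillis(Connection con, String nowQuery)
            throws SQLException {
        try (Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(nowQuery)) {
            rs.next();
            Timestamp serverNow = rs.getTimestamp(1);
            return serverNow.getTime() - System.currentTimeMillis();
        }
    }
}
{code}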



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3853) Improve SegmentGraph resilience

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3853.
-

Bulk close for 1.3.14

> Improve SegmentGraph resilience 
> 
>
> Key: OAK-3853
> URL: https://issues.apache.org/jira/browse/OAK-3853
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: gc, resilience, tooling
> Fix For: 1.3.14
>
>
> Currently {{SegmentGraph}} just bails out upon hitting a {{SNFE}}. I would 
> like to improve this and include the error in the generated graph. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3727) Broadcasting cache: auto-configuration

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3727.
-

Bulk close for 1.3.14

> Broadcasting cache: auto-configuration
> --
>
> Key: OAK-3727
> URL: https://issues.apache.org/jira/browse/OAK-3727
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.14
>
>
> The plan is that each cluster node writes its IP address, its listening port, 
> and (if really needed) a UUID to the clusterInfo collection. That way, it's 
> possible to detect other cluster nodes and connect to them.
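
A minimal sketch of the data such a cluster node could publish; the field names and the way the listening socket is obtained are assumptions for illustration, not the actual implementation:

{code:java}
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public final class BroadcastConfigExample {

    // Builds the entry a cluster node could write to the clusterInfo
    // collection: its own address, the port it listens on and a UUID.
    // In a real implementation the listener would of course stay open.
    public static Map<String, String> buildBroadcastInfo() throws IOException {
        Map<String, String> info = new HashMap<>();
        try (ServerSocket listener = new ServerSocket(0)) { // pick any free port
            info.put("broadcastAddress", InetAddress.getLocalHost().getHostAddress());
            info.put("broadcastPort", Integer.toString(listener.getLocalPort()));
            info.put("broadcastId", UUID.randomUUID().toString());
        }
        return info;
    }
}
{code}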



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3845) AbstractRDBConnectionTest fails to close the DataSource

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3845.
-

Bulk close for 1.3.14

> AbstractRDBConnectionTest fails to close the DataSource
> ---
>
> Key: OAK-3845
> URL: https://issues.apache.org/jira/browse/OAK-3845
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.3.14
>
>
> This can cause integration tests to fail when the connection limit of the 
> database is exhausted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3791) Time measurements for DocumentStore methods

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3791.
-

Bulk close for 1.3.14

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.3.14
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-jr.patch, 
> OAK-3791-v1.patch, OAK-3791-v2-chetanm.patch, OAK-3791-v3-rdb.patch, 
> oak-document-stats.png
>
>
> For monitoring (in high-latency environments), it would be useful to measure 
> and report the time spent in the DocumentStore methods.
> These could be exposed (either or both):
> - as Timers obtained from the StatisticsProvider (statistics registry)
> - as TimeSeries 
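
A minimal sketch of the first option, wrapping a store call in a timer; it uses Dropwizard Metrics directly for illustration (Oak's {{StatisticsProvider}} offers a comparable abstraction), and the metric name is made up:

{code:java}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

import java.util.concurrent.Callable;

public final class DocumentStoreTiming {

    private final MetricRegistry registry = new MetricRegistry();
    private final Timer findTimer = registry.timer("DocumentStore.find");

    // Wraps an arbitrary store call and records how long it took.
    public <T> T timedFind(Callable<T> storeCall) throws Exception {
        try (Timer.Context ignored = findTimer.time()) {
            return storeCall.call();
        }
    }
}
{code}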



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-2592) Commit does not ensure w:majority

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-2592.
-

Bulk close for 1.3.14

> Commit does not ensure w:majority
> -
>
> Key: OAK-2592
> URL: https://issues.apache.org/jira/browse/OAK-2592
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, mongomk
>Affects Versions: 1.0, 1.2
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: candidate_oak_1_0, candidate_oak_1_2, resilience
> Fix For: 1.3.14
>
>
> The MongoDocumentStore uses {{findAndModify()}} to commit a transaction. This 
> operation does not allow an application-specified write concern and always 
> uses the MongoDB default write concern {{Acknowledged}}. This means a commit 
> may not make it to a majority of a replica set when the primary fails. From a 
> MongoDocumentStore perspective it may appear as if a write was successful and 
> later reverted. See also the test in OAK-1641.
> To fix this, we'd probably have to change the MongoDocumentStore to avoid 
> {{findAndModify()}} and use {{update()}} instead.
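
The suggested direction, an update with an explicit write concern, looks roughly like this with the MongoDB Java driver; the collection and field names are placeholders, not the actual MongoDocumentStore change:

{code:java}
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public final class MajorityCommitExample {

    // Performs the commit update with w:majority instead of the default
    // acknowledged write concern used by findAndModify().
    public static void commit(MongoCollection<Document> nodes, String id, String revision) {
        nodes.withWriteConcern(WriteConcern.MAJORITY)
             .updateOne(Filters.eq("_id", id),
                        Updates.set("_commitRoot." + revision, "0"));
    }
}
{code}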



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3646) Inconsistent read of hierarchy

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3646.
-

Bulk close for 1.3.14

> Inconsistent read of hierarchy 
> ---
>
> Key: OAK-3646
> URL: https://issues.apache.org/jira/browse/OAK-3646
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.0, 1.2
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.3.14
>
>
> This is similar to OAK-3388, but about hierarchy information like which child 
> nodes exist at a given revision of the parent node. This issue only occurs in 
> a cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3634) RDB/MongoDocumentStore may return stale documents

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3634.
-

Bulk close for 1.3.14

> RDB/MongoDocumentStore may return stale documents
> -
>
> Key: OAK-3634
> URL: https://issues.apache.org/jira/browse/OAK-3634
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk, rdbmk
>Affects Versions: 1.2.7, 1.3.10, 1.0.23
>Reporter: Julian Reschke
>Assignee: Marcel Reutegger
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3634.diff, OAK-3634.diff, OAK-3634.patch
>
>
> It appears that the implementations of the {{update}} method sometimes 
> populate the memory cache with documents that do not reflect any current or 
> previous state in the persistence (that is, miss changes made by another 
> node).
> (will attach test)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-1649.
-

Bulk close for 1.3.14

> NamespaceException: OakNamespace0005 on save, after replica crash
> -
>
> Key: OAK-1649
> URL: https://issues.apache.org/jira/browse/OAK-1649
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, mongomk
>Affects Versions: 0.19
> Environment: 0.20-SNAPSHOT as of March 31
>Reporter: Stefan Egli
>Assignee: Marcel Reutegger
> Fix For: 1.3.14
>
> Attachments: OverwritePropertyTest.java
>
>
> After running a test that produces a couple of thousand nodes and overwrites 
> the same properties a couple of thousand times, then crashing the replica 
> primary, the exception below occurs.
> The exception can be reproduced on the db and with the test case I'll attach 
> in a minute.
> {code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
> not allowed: rep:nsdata
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
>   at 
> org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: 
> OakNamespace0005: Namespace modification not allowed: rep:nsdata
>   at 
> org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
>   at 
> org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
>   at 
> org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
>   at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
>   at 
> org.apache.jackrabbit.oak.plugins.do

[jira] [Closed] (OAK-3841) Change return type of Document.getModCount() to Long

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3841.
-

Bulk close for 1.3.14

> Change return type of Document.getModCount() to Long
> 
>
> Key: OAK-3841
> URL: https://issues.apache.org/jira/browse/OAK-3841
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: candidate_oak_1_0
> Fix For: 1.3.14, 1.2.11
>
>
> The method currently returns {{Number}} and makes it difficult for client 
> code to compare two modCount values.
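
A small, hypothetical snippet of what the change means for callers:

{code:java}
import java.util.Objects;

public final class ModCountComparison {

    // With Number the caller has to normalise before comparing:
    static boolean sameModCountAsNumber(Number a, Number b) {
        return a != null && b != null && a.longValue() == b.longValue();
    }

    // With Long a null-safe equality check is enough:
    static boolean sameModCountAsLong(Long a, Long b) {
        return Objects.equals(a, b);
    }
}
{code}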



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3874) DocumentToExternalMigrationTest fails occasionally

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3874.
-

Bulk close for 1.3.14

> DocumentToExternalMigrationTest fails occasionally
> --
>
> Key: OAK-3874
> URL: https://issues.apache.org/jira/browse/OAK-3874
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: core, documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.3.14
>
>
> {noformat}
> blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.plugins.blob.migration.DocumentToExternalMigrationTest)
>   Time elapsed: 0.254 sec  <<< ERROR!
> java.io.IOException: java.lang.IllegalArgumentException: 
> 09f9ec09c71296a24335460be6bb713573a09630#16516
> {noformat}
> Most likely happens because the database is not clean when the test starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3799) Drop module oak-js

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3799.
-

Bulk close for 1.3.14

> Drop module oak-js
> --
>
> Key: OAK-3799
> URL: https://issues.apache.org/jira/browse/OAK-3799
> Project: Jackrabbit Oak
>  Issue Type: Task
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.3.14
>
>
> The Oak checkout contains a module {{oak-js}}, which is mostly empty apart 
> from a TODO statement. As we haven't worked on this and, AFAIK, do not intend 
> to work on this in the near future, I propose to drop the module for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3882) Collision may mark the wrong commit

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3882.
-

Bulk close for 1.3.14

> Collision may mark the wrong commit
> ---
>
> Key: OAK-3882
> URL: https://issues.apache.org/jira/browse/OAK-3882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.3.6
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: resilience
> Fix For: 1.3.14
>
>
> In some rare cases it may happen that a collision marks the wrong commit. 
> OAK-3344 introduced a conditional update of the commit root with a collision 
> marker. However, this may fail when the commit revision of the condition is 
> moved to a split document at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3653) Incorrect last revision of cached node state

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3653.
-

Bulk close for 1.3.14

> Incorrect last revision of cached node state
> 
>
> Key: OAK-3653
> URL: https://issues.apache.org/jira/browse/OAK-3653
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Vikas Saurabh
>Assignee: Marcel Reutegger
> Fix For: 1.3.14
>
> Attachments: Capture.JPG, Capture1.JPG, Capture2.JPG, diagram.png, 
> oplog.json
>
>
> While installing a package on one of the systems (Oak-1.0.16), we observed a 
> broken workflow model. Upon further investigation, it was found that the 
> {{baseVersion}} of the model was pointing to a node in version storage which 
> wasn't visible on the instance. This node was available in mongo and could 
> also be seen on a different instance of the cluster. 
> The node was created on the same instance where it wasn't visible (cluster id 
> - 3).
> Further investigation showed that, for some (yet unknown) reason, the 
> {{lastRevision}} of the cached node state of the node's parent, at the revision 
> where the node was created, was stale (an older revision), which must have led 
> to an invalid children list and hence an unavailable node.
> Attaching a few files which were captured during the investigation:
> * [^oplog.json] - oplog for +-10s for r151191c7601-0-3
> * [^Capture.JPG]- snapshot of groovy console output using script at \[0] to 
> list node children cache entry for parent
> * [^Capture1.JPG]- snapshot of groovy console output using script at \[1] to 
> traverse to invisible node
> * [^Capture2.JPG]- Almost same as Capture.jpg but this time for node cache 
> instead (using script at \[2])
> The mongo document of the node that wasn't visible:
> {noformat}
> db.nodes.findOne({_id: 
> "7:/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7"})
> {
> "_id" : 
> "7:/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7",
> "_deleted" : {
> "r151191c7601-0-3" : "false"
> },
> "_commitRoot" : {
> "r151191c7601-0-3" : "0"
> },
> "_lastRev" : {
> "r0-0-3" : "r151191c7601-0-3"
> },
> "jcr:primaryType" : {
> "r151191c7601-0-3" : "\"nam:nt:version\""
> },
> "jcr:uuid" : {
> "r151191c7601-0-3" : 
> "\"40234f1d-2435-4c6b-9962-20af22b1c948\""
> },
> "_modified" : NumberLong(1447825270),
> "jcr:created" : {
> "r151191c7601-0-3" : "\"dat:2015-11-17T21:41:14.058-08:00\""
> },
> "_children" : true,
> "jcr:predecessors" : {
> "r151191c7601-0-3" : 
> "[\"ref:6cecc77b-3020-4a47-b9cc-084a618aa957\"]"
> },
> "jcr:successors" : {
> "r151191c7601-0-3" : "\"[0]:Reference\""
> },
> "_modCount" : NumberLong(1)
> }
> {noformat}
> Last revision entry of cached node state of 
> {{/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e}}
>  (parent) at revision {{r151191c7601-0-3}} was {{r14f035ebdf8-0-3}} (the last 
> revs were propagated correctly up till root for r151191c7601-0-3).
> As a work-around, doing {{cache.invalidate(k)}} for node cache (inside the 
> loop of script at \[2]) revived the visibility of the node.
> *Disclaimer*: _The issue was reported and investigated on a cluster backed by 
> mongo running on Oak-1.0.16, but the problem itself doesn't seem like any 
> that has been fixed in later versions._
> \[0]: {noformat}
> def printNodeChildrenCache(def path) {
>   def session = 
> osgi.getService(org.apache.sling.jcr.api.SlingRepository.class).loginAdministrative(null)
>   try {
> def rootNode = session.getRootNode()
> def cache = rootNode.sessionDelegate.root.store.nodeChildrenCache
> def cacheMap = cache.asMap()
> cacheMap.each{k,v -> 
>   if (k.toString().startsWith(path + "@")) {
> println "${k}:${v}"
>   }
> }
>   } finally {
> session.logout()
>   }
> }
>  
> printNodeChildrenCache("/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e")
> {noformat}
> \[1]: {noformat}
> def traverse(def path) {
>   def session = 
> osgi.getService(org.apache.sling.jcr.api.SlingRepository.class).loginAdministrative(null)
>   try {
> def rootNode = session.getRootNode()
> def nb = rootNode.sessionDelegate.root.store.root
> def p = ''
> path.tokenize('/').each() {
>   p = p + '/' + it
>   nb = nb.getChildNode(it)
>   println "${p} ${nb}"
> }
>   } finally {
> session.logout()
>   }
> }
> traverse("/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7")
> {noformat}
> \[2]:

[jira] [Closed] (OAK-3843) MS SQL doesn't support more than 2100 parameters in one request

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3843.
-

Bulk close for 1.3.14

> MS SQL doesn't support more than 2100 parameters in one request
> ---
>
> Key: OAK-3843
> URL: https://issues.apache.org/jira/browse/OAK-3843
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.3.13
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>Priority: Blocker
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.14
>
>
> Following exception is thrown if the {{RDBDocumentStoreJDBC.read()}} method 
> is called with more than 2100 keys:
> {noformat}
> com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too 
> many parameters. The server supports a maximum of 2100 parameters. Reduce the 
> number of parameters and resend the request.
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
>  ~[sqljdbc4.jar:na]
>   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696) 
> ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
>  ~[sqljdbc4.jar:na]
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
>  ~[sqljdbc4.jar:na]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.read(RDBDocumentStoreJDBC.java:593)
>  ~[oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentsUncached(RDBDocumentStore.java:387)
>  [oak-run-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
> {noformat}
> The parameters are already split into multiple {{in}} clauses, so I think we 
> need to split them further, into multiple queries (if MS SQL is used). This 
> also applies to other places where we use 
> {{RDBJDBCTools.createInStatement()}}.
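
A minimal sketch of the kind of splitting described above, keeping each query comfortably below the 2100-parameter limit; the table, column and batch-size values are placeholders, not the actual RDBDocumentStoreJDBC code:

{code:java}
import com.google.common.collect.Lists;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public final class ChunkedInClauseRead {

    // Stay well below the MS SQL limit of 2100 parameters per request.
    private static final int MAX_IN_CLAUSE = 2000;

    public static List<String> readData(Connection con, List<String> keys) throws SQLException {
        List<String> result = new ArrayList<>();
        for (List<String> chunk : Lists.partition(keys, MAX_IN_CLAUSE)) {
            // Build "SELECT ... WHERE ID IN (?,?,...)" with one marker per key.
            StringBuilder sql = new StringBuilder("SELECT ID, DATA FROM NODES WHERE ID IN (");
            for (int i = 0; i < chunk.size(); i++) {
                sql.append(i == 0 ? "?" : ",?");
            }
            sql.append(")");
            try (PreparedStatement stmt = con.prepareStatement(sql.toString())) {
                for (int i = 0; i < chunk.size(); i++) {
                    stmt.setString(i + 1, chunk.get(i));
                }
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        result.add(rs.getString("DATA"));
                    }
                }
            }
        }
        return result;
    }
}
{code}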



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3851) RDB*Store: update PostgreSQL and MySQL JDBC driver dependencies

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3851.
-

Bulk close for 1.3.14

> RDB*Store: update PostgreSQL and MySQL JDBC driver dependencies
> ---
>
> Key: OAK-3851
> URL: https://issues.apache.org/jira/browse/OAK-3851
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3885) enhance stability of clusterNodeInfo's machineId

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3885.
-

Bulk close for 1.3.14

> enhance stability of clusterNodeInfo's machineId
> 
>
> Key: OAK-3885
> URL: https://issues.apache.org/jira/browse/OAK-3885
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK-3885.diff
>
>
> We currently use network interface information to derive a unique machine ID 
> (ClusterNodeInfo.getMachineId()). Among the 6-byte addresses, we use the 
> "smallest".
> At least on Windows machines, connecting through a VPN inserts a new, lower 
> address into the list, causing the machineID to vary depending on whether 
> the VPN is connected or not.
> I don't see a clean way to filter these addresses. We *could* inspect the 
> names of the interfaces and treat those containing "VPN" or "Virtual" as 
> less relevant. Of course that would be an ugly hack, but it would fix the 
> problem for now.
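
For context, a minimal sketch of deriving such an id from the lowest hardware address, including the name-based filtering floated above; this only illustrates the idea and is not the actual {{ClusterNodeInfo}} code:

{code:java}
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Enumeration;

public final class MachineIdExample {

    // Returns the hex form of the "smallest" MAC address, skipping loopback,
    // virtual and (heuristically) VPN-looking interfaces.
    public static String lowestMacAddress() throws SocketException {
        String lowest = null;
        for (Enumeration<NetworkInterface> e = NetworkInterface.getNetworkInterfaces();
                e.hasMoreElements();) {
            NetworkInterface nic = e.nextElement();
            String name = nic.getDisplayName() == null ? "" : nic.getDisplayName().toLowerCase();
            if (nic.isLoopback() || nic.isVirtual()
                    || name.contains("vpn") || name.contains("virtual")) {
                continue;
            }
            byte[] mac = nic.getHardwareAddress();
            if (mac == null || mac.length != 6) {
                continue;
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : mac) {
                hex.append(String.format("%02x", b));
            }
            String candidate = hex.toString();
            if (lowest == null || candidate.compareTo(lowest) < 0) {
                lowest = candidate;
            }
        }
        return lowest;
    }
}
{code}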



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3867) RDBDocumentStore: refactor JSON support

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3867.
-

Bulk close for 1.3.14

> RDBDocumentStore: refactor JSON support
> ---
>
> Key: OAK-3867
> URL: https://issues.apache.org/jira/browse/OAK-3867
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK-3867.diff
>
>
> Refactor JSON support, so that
> - it can be better re-used in RDBExport utility and
> - unit test coverage can be improved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3637.
-

Bulk close for 1.3.14

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.14
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3836) Convert simple versionable nodes during upgrade

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3836.
-

Bulk close for 1.3.14

> Convert simple versionable nodes during upgrade
> ---
>
> Key: OAK-3836
> URL: https://issues.apache.org/jira/browse/OAK-3836
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Tomek Rękawek
>Assignee: Julian Sedding
> Fix For: 1.3.14
>
> Attachments: OAK-3836.patch
>
>
> JCR 2 supports two modes of versioning: simple and full. Oak supports only 
> full versioning. We should convert the simple-versionable nodes during 
> repository upgrade, so the version history of such items is not lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (OAK-3470) Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being used

2016-01-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-3470.
-

Bulk close for 1.3.14

> Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being 
> used
> -
>
> Key: OAK-3470
> URL: https://issues.apache.org/jira/browse/OAK-3470
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.2.6, 1.3.7
>Reporter: Jegadisan Sankar Kumar
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.3.14
>
>
> When creating a repository without Mongo, using just an RDBMS-backed 
> DocumentNodeStore, a NoClassDefFoundError is encountered.
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/mongodb/BasicDBObject
>   at 
> org.apache.jackrabbit.oak.plugins.document.util.Utils.estimateMemoryUsage(Utils.java:160)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Document.getMemory(Document.java:167)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:33)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:27)
>   at 
> com.google.common.cache.LocalCache$Segment.setValue(LocalCache.java:2158)
>   at 
> com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3140)
>   at 
> com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2349)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentCached(RDBDocumentStore.java:762)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:222)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.(DocumentNodeStore.java:448)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getNodeStore(DocumentMK.java:671)
> {code}
> The dependencies in pom.xml are as follows
> {code:xml}
> <dependencies>
>   <dependency>
>     <groupId>org.apache.jackrabbit</groupId>
>     <artifactId>oak-jcr</artifactId>
>     <version>1.2.6</version>
>   </dependency>
>   <dependency>
>     <groupId>com.h2database</groupId>
>     <artifactId>h2</artifactId>
>     <version>1.4.189</version>
>   </dependency>
>   <dependency>
>     <groupId>ch.qos.logback</groupId>
>     <artifactId>logback-classic</artifactId>
>     <version>1.1.3</version>
>   </dependency>
> </dependencies>
> {code}
> And the code to recreate the issue
> {code:java}
> // Build the Data Source to be used.
> JdbcDataSource ds = new JdbcDataSource();
> ds.setURL("jdbc:h2:mem:oak;DB_CLOSE_DELAY=-1");
> ds.setUser("sa");
> ds.setPassword("sa");
> // Build the OAK Repository Instance
> DocumentNodeStore ns = null;
> try {
> ns = new DocumentMK.Builder()
> .setRDBConnection(ds)
> .getNodeStore();
> } finally {
> if (ns != null) {
> ns.dispose();
> }
> }
> {code}
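
One common way to avoid such a hard class dependency is to probe for the driver class once and branch on the result; a minimal sketch of that pattern (illustrative only, not necessarily the fix applied in Oak):

{code:java}
public final class MongoClassCheck {

    // True only if the MongoDB driver is on the class path.
    private static final boolean MONGO_AVAILABLE = isPresent("com.mongodb.BasicDBObject");

    private static boolean isPresent(String className) {
        try {
            Class.forName(className, false, MongoClassCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    // Callers can guard any Mongo-specific size estimation behind this check.
    public static boolean isMongoAvailable() {
        return MONGO_AVAILABLE;
    }
}
{code}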



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3908) Don't skip maven-bundle-plugin:baseline when skipTests is true

2016-01-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-3908.
-
   Resolution: Fixed
Fix Version/s: 1.3.15

Fixed in r1726158.

> Don't skip maven-bundle-plugin:baseline when skipTests is true
> --
>
> Key: OAK-3908
> URL: https://issues.apache.org/jira/browse/OAK-3908
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: parent
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: 1.3.15
>
>
> As reported in [this 
> thread|http://jackrabbit.markmail.org/thread/jqsxdsjrb6fmjact] by [~rombert], 
> it would be useful not to skip {{maven-bundle-plugin:baseline}} when 
> {{skipTests}} is set. If necessary, {{baseline}} can be skipped using the 
> {{baseline.skip}} property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3908) Don't skip maven-bundle-plugin:baseline when skipTests is true

2016-01-22 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-3908:
---

 Summary: Don't skip maven-bundle-plugin:baseline when skipTests is 
true
 Key: OAK-3908
 URL: https://issues.apache.org/jira/browse/OAK-3908
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: parent
Reporter: Francesco Mari
Assignee: Francesco Mari


As reported in [this 
thread|http://jackrabbit.markmail.org/thread/jqsxdsjrb6fmjact] by [~rombert], 
it would be useful not to skip {{maven-bundle-plugin:baseline}} when 
{{skipTests}} is set. If necessary, {{baseline}} can be skipped using the 
{{baseline.skip}} property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3907) Sync the files to directory upon copy from remote

2016-01-22 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112111#comment-15112111
 ] 

Thomas Mueller commented on OAK-3907:
-

I don't think using sync is needed, and using sync might slow down all other 
operations quite a lot (including the persistent cache and other file system 
operations).

How do we copy files (from remote to local)? Do we create the file and then 
just copy data over block by block? Then even sync would not help if the 
process is killed during the copy operation (before the potential sync call).

What would help is to create the file first under a different name 
(.inProgress) and rename it only once copying is done, as in the sketch below.

Do we verify that the file size of the local file matches the file size of the 
remote file before reading from the local file?
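
A minimal sketch of that copy-then-rename approach with NIO; the {{.inProgress}} suffix follows the comment above, everything else is illustrative:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public final class SafeIndexFileCopy {

    // Copies the remote stream to "<target>.inProgress", forces it to disk,
    // and only then renames it to its final name. The atomic move may not be
    // supported on every file system and would need a fallback there.
    public static void copy(InputStream remote, Path target) throws IOException {
        Path inProgress = target.resolveSibling(target.getFileName() + ".inProgress");
        Files.copy(remote, inProgress, StandardCopyOption.REPLACE_EXISTING);
        try (FileChannel ch = FileChannel.open(inProgress, StandardOpenOption.WRITE)) {
            ch.force(true); // flush data and metadata before the rename
        }
        Files.move(inProgress, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
{code}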

> Sync the files to directory upon copy from remote
> -
>
> Key: OAK-3907
> URL: https://issues.apache.org/jira/browse/OAK-3907
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4, 1.2.11, 1.0.27
>
>
> {{IndexCopier}} does not {{sync}} the file to the file system once files are 
> copied from remote. This might cause problems on setups where the local index 
> file directory is on a SAN.
> For safety we should also sync the files upon copy from remote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3871) ability to override ClusterNodeInfo#DEFAULT_LEASE_DURATION_MILLIS

2016-01-22 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112097#comment-15112097
 ] 

Michael Marth commented on OAK-3871:


+1

> ability to override ClusterNodeInfo#DEFAULT_LEASE_DURATION_MILLIS
> -
>
> Key: OAK-3871
> URL: https://issues.apache.org/jira/browse/OAK-3871
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Julian Reschke
> Fix For: 1.4
>
>
> This constant is currently set to 120s.
> In edge cases, it might be good to override the default (when essentially the 
> only other "fix" would be to turn off the lease check).
> Proposal: introduce a system property that allows tuning the lease time (but 
> restrict the allowable values to a "sane" range, such as up to 5min).
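
A minimal sketch of such an override with a clamp to a sane range; the property name and the exact bounds are made up for illustration:

{code:java}
import java.util.concurrent.TimeUnit;

public final class LeaseDurationConfig {

    private static final long DEFAULT_LEASE_MILLIS = TimeUnit.SECONDS.toMillis(120);
    private static final long MAX_LEASE_MILLIS = TimeUnit.MINUTES.toMillis(5);

    // Reads the override, falling back to the default and clamping the value
    // to [default, 5 min]; the bounds are only an example of a "sane" range.
    public static long leaseDurationMillis() {
        long value = Long.getLong("oak.documentstore.leaseDurationMillis", DEFAULT_LEASE_MILLIS);
        return Math.min(Math.max(value, DEFAULT_LEASE_MILLIS), MAX_LEASE_MILLIS);
    }
}
{code}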



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3907) Sync the files to directory upon copy from remote

2016-01-22 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3907:


 Summary: Sync the files to directory upon copy from remote
 Key: OAK-3907
 URL: https://issues.apache.org/jira/browse/OAK-3907
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.4, 1.2.11, 1.0.27


{{IndexCopier}} does not {{sync}} the file to the file system once files are copied 
from remote. This might cause problems on setups where the local index file 
directory is on a SAN.

For safety we should also sync the files upon copy from remote.
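
For reference, forcing a freshly copied file (and, where the platform allows it, its parent directory) to stable storage can be done with NIO; this is a sketch of the general technique only, not the {{IndexCopier}} change itself:

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class FsyncAfterCopy {

    // Flushes the file contents and metadata to disk.
    public static void syncFile(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.force(true);
        }
    }

    // On many POSIX systems the parent directory can be fsynced the same way,
    // so that the new directory entry itself is durable; this may throw on
    // platforms that do not allow opening directories as channels.
    public static void syncDirectory(Path dir) throws IOException {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        }
    }
}
{code}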



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)