[jira] [Resolved] (OAK-7209) Race condition can resurrect blobs during blob GC

2018-03-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-7209.

Resolution: Fixed

Merged into branches:
1.8 - http://svn.apache.org/viewvc?rev=1826604&view=rev
1.6 - http://svn.apache.org/viewvc?rev=1826605&view=rev

> Race condition can resurrect blobs during blob GC
> -
>
> Key: OAK-7209
> URL: https://issues.apache.org/jira/browse/OAK-7209
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.5, 1.10, 1.8.2
>Reporter: Csaba Varga
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.9.0, 1.10, 1.8.3, 1.6.11
>
>
> A race condition exists between the scheduled blob ID publishing process and 
> the GC process that can resurrect the blobs being deleted by the GC. This is 
> how it can happen:
>  # MarkSweepGarbageCollector.collectGarbage() starts running.
>  # As part of the preparation for sweeping, BlobIdTracker.globalMerge() is 
> called, which merges all blob ID records from the blob store into the local 
> tracker.
>  # Sweeping begins deleting files.
>  # BlobIdTracker.snapshot() gets called by the scheduler. It pushes all blob 
> ID records that were collected and merged in step 2 back into the blob store, 
> then deletes the local copies.
>  # Sweeping completes and tries to remove the successfully deleted blobs from 
> the tracker. Step 4 already deleted those records from the local files, so 
> nothing gets removed.
> The end result is that all blobs removed during the GC run will still be 
> considered alive, causing warnings when later GC runs try to remove them again. 
> The risk is higher the longer the sweep runs, but it can happen during a 
> short but badly timed GC run as well. (We've found it during a GC run that 
> took more than 11 hours to complete.)
> I can see two ways to approach this:
>  # Suspend the execution of BlobIdTracker.snapshot() while Blob GC is in 
> progress. This requires adding new methods to the BlobTracker interface to 
> allow suspending and resuming snapshotting of the tracker.
>  # Have the two overloads of BlobIdTracker.remove() do a globalMerge() before 
> trying to remove anything. This ensures that even if a snapshot() call 
> happened during the GC run, all IDs are "pulled back" into the local tracker 
> and can be removed successfully.
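
Below is a minimal, self-contained sketch of the second approach, with hypothetical class and method names (an illustration of merge-before-remove only, not the actual BlobIdTracker code): merging the remote records back into the local tracker before removal means that IDs pushed out by an interleaved snapshot() can still be removed.

{noformat}
// Hedged sketch of option #2 (merge-before-remove). The class below is a
// simplified, hypothetical model and not the actual BlobIdTracker API.
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

public class SimpleBlobIdTracker {

    private final Set<String> localRecords = new HashSet<>();
    private final Supplier<Set<String>> remoteRecords; // records kept in the blob store

    public SimpleBlobIdTracker(Supplier<Set<String>> remoteRecords) {
        this.remoteRecords = remoteRecords;
    }

    // Merges all blob ID records from the blob store into the local tracker.
    public synchronized void globalMerge() {
        localRecords.addAll(remoteRecords.get());
    }

    // Option #2: merge first, so IDs pushed back to the blob store by an
    // interleaved snapshot() are "pulled back" locally and can still be removed.
    public synchronized void remove(Set<String> sweptIds) {
        globalMerge();
        localRecords.removeAll(sweptIds);
    }

    public synchronized Set<String> getLocalRecords() {
        return new HashSet<>(localRecords);
    }
}
{noformat}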



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7321) Test failure: DocumentNodeStoreIT.modifiedResetWithDiff

2018-03-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395577#comment-16395577
 ] 

Hudson commented on OAK-7321:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#1305|https://builds.apache.org/job/Jackrabbit%20Oak/1305/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1305/console]

> Test failure: DocumentNodeStoreIT.modifiedResetWithDiff
> ---
>
> Key: OAK-7321
> URL: https://issues.apache.org/jira/browse/OAK-7321
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, documentmk
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10
>
>
> No description is provided
> The build Jackrabbit Oak #1295 has failed.
> First failed run: [Jackrabbit Oak 
> #1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]
> {noformat}
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured 
> cluster node id 1 already in use: machineId/instanceId do not match: 
> mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document != 
> mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7334) Transform CacheWeightEstimator into a unit test

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-7334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-7334:
---
Fix Version/s: 1.10

> Transform CacheWeightEstimator into a unit test
> ---
>
> Key: OAK-7334
> URL: https://issues.apache.org/jira/browse/OAK-7334
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Priority: Major
>  Labels: technical_debt
> Fix For: 1.10
>
>
> We should transform {{CacheWeightEstimator}} from a stand-alone utility into 
> a unit test such that we can regularly run it on CI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7334) Transform CacheWeightEstimator into a unit test

2018-03-12 Thread JIRA
Michael Dürig created OAK-7334:
--

 Summary: Transform CacheWeightEstimator into a unit test
 Key: OAK-7334
 URL: https://issues.apache.org/jira/browse/OAK-7334
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segment-tar
Reporter: Michael Dürig


We should transform {{CacheWeightEstimator}} from a stand-alone utility into a 
unit test such that we can regularly run it on CI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7330) RDBDocumentStore: make indices on SD* sparse where possible

2018-03-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395463#comment-16395463
 ] 

Julian Reschke commented on OAK-7330:
-

DB2: 
https://www.ibm.com/support/knowledgecenter/de/SSEPGG_10.5.0/com.ibm.db2.luw.wn.doc/doc/c0060910.html

> RDBDocumentStore: make indices on SD* sparse where possible
> ---
>
> Key: OAK-7330
> URL: https://issues.apache.org/jira/browse/OAK-7330
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
> Attachments: OAK-7330.diff
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7330) RDBDocumentStore: make indices on SD* sparse where possible

2018-03-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7330:

Attachment: OAK-7330.diff

> RDBDocumentStore: make indices on SD* sparse where possible
> ---
>
> Key: OAK-7330
> URL: https://issues.apache.org/jira/browse/OAK-7330
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Major
> Attachments: OAK-7330.diff
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7333) RDBDocumentStore: refactor index report

2018-03-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395453#comment-16395453
 ] 

Julian Reschke commented on OAK-7333:
-

trunk: [r1826560|http://svn.apache.org/r1826560]


> RDBDocumentStore: refactor index report
> ---
>
> Key: OAK-7333
> URL: https://issues.apache.org/jira/browse/OAK-7333
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>
> - use field names when accessing metadata
> - dump more fields
> - prepare for reporting missing indices
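
For the first item, a hedged sketch of what accessing index metadata by field name looks like with plain JDBC (the table name and the output format are examples only, not the actual RDBDocumentStore code):

{noformat}
// Hedged sketch: reading index metadata via JDBC DatabaseMetaData using the
// standard column names instead of positional indices.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class IndexReport {

    public static void dumpIndexes(Connection con, String table) throws SQLException {
        DatabaseMetaData md = con.getMetaData();
        try (ResultSet rs = md.getIndexInfo(null, null, table, false, true)) {
            while (rs.next()) {
                // named fields are self-documenting and robust against drivers
                // that return the result columns in a different order
                String indexName = rs.getString("INDEX_NAME");
                String columnName = rs.getString("COLUMN_NAME");
                boolean nonUnique = rs.getBoolean("NON_UNIQUE");
                long cardinality = rs.getLong("CARDINALITY");
                System.out.printf("%s (%s) unique=%b cardinality=%d%n",
                        indexName, columnName, !nonUnique, cardinality);
            }
        }
    }
}
{noformat}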



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7333) RDBDocumentStore: refactor index report

2018-03-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7333:

Labels: candidate_oak_1_8  (was: )

> RDBDocumentStore: refactor index report
> ---
>
> Key: OAK-7333
> URL: https://issues.apache.org/jira/browse/OAK-7333
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>
> - use field names when accessing metadata
> - dump more fields
> - prepare for reporting missing indices



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7333) RDBDocumentStore: refactor index report

2018-03-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-7333.
-
   Resolution: Fixed
Fix Version/s: 1.10
   1.9.0

> RDBDocumentStore: refactor index report
> ---
>
> Key: OAK-7333
> URL: https://issues.apache.org/jira/browse/OAK-7333
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>
> - use field names when accessing metadata
> - dump more fields
> - prepare for reporting missing indices



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7333) RDBDocumentStore: refactor index report

2018-03-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7333:

Description: 
- use field names when accessing metadata
- dump more fields
- prepare for reporting missing indices

  was:
- dump more fields
- prepare for reporting missing indices


> RDBDocumentStore: refactor index report
> ---
>
> Key: OAK-7333
> URL: https://issues.apache.org/jira/browse/OAK-7333
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> - use field names when accessing metadata
> - dump more fields
> - prepare for reporting missing indices



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7321) Test failure: DocumentNodeStoreIT.modifiedResetWithDiff

2018-03-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395437#comment-16395437
 ] 

Hudson commented on OAK-7321:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#1304|https://builds.apache.org/job/Jackrabbit%20Oak/1304/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1304/console]

> Test failure: DocumentNodeStoreIT.modifiedResetWithDiff
> ---
>
> Key: OAK-7321
> URL: https://issues.apache.org/jira/browse/OAK-7321
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, documentmk
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10
>
>
> No description is provided
> The build Jackrabbit Oak #1295 has failed.
> First failed run: [Jackrabbit Oak 
> #1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]
> {noformat}
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured 
> cluster node id 1 already in use: machineId/instanceId do not match: 
> mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document != 
> mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7333) RDBDocumentStore: refactor index report

2018-03-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-7333:
---

 Summary: RDBDocumentStore: refactor index report
 Key: OAK-7333
 URL: https://issues.apache.org/jira/browse/OAK-7333
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke


- dump more fields
- prepare for reporting missing indices



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-6922) Azure support for the segment-tar

2018-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395371#comment-16395371
 ] 

Michael Dürig commented on OAK-6922:


While I cannot comment on the Azure-specific bits, the patch looks clean and 
ready otherwise. However, I share [~frm]'s concerns re. the exports. Not 
exporting anything has saved us from a lot of extra compatibility issues so far. I 
would like to stick with this as much as we can and make introducing an SPI 
package for the affected interfaces a prerequisite for this patch.

As an afterthought, once the patch is applied, could you add Javadoc to the 
public methods and classes describing the general contracts and the relevant 
post-conditions, pre-conditions, dependencies and invariants?

Also as an afterthought, AFAICS currently no tests run against the 
{{SegmentAzureFixture}}. Is there an easy way to integrate with 
{{FixturesHelper}} and run tests locally? Can we also integrate these tests 
with our CI?

> Azure support for the segment-tar
> -
>
> Key: OAK-6922
> URL: https://issues.apache.org/jira/browse/OAK-6922
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
>Priority: Major
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-6922.patch
>
>
> An Azure Blob Storage implementation of the segment storage, based on the 
> OAK-6921 work.
> h3. Segment files layout
> The new implementation doesn't use tar files. They are replaced with 
> directories storing segments named after their UUIDs. This approach has the 
> following advantages:
> * no need to call seek(), which may be expensive on a remote file system; 
> instead, we can read the whole file (=segment) at once.
> * it's possible to send multiple segments at once, asynchronously, which 
> reduces the performance overhead (see below).
> The file structure is as follows:
> {noformat}
> [~]$ az storage blob list -c oak --output table
> Name  Blob Type
> Blob TierLengthContent Type  Last Modified
>   ---  
> ---      -
> oak/data0a.tar/.ca1326d1-edf4-4d53-aef0-0f14a6d05b63  BlockBlob   
>   192   application/octet-stream  2018-01-31T10:59:14+00:00
> oak/data0a.tar/0001.c6e03426-db9d-4315-a20a-12559e6aee54  BlockBlob   
>   262144application/octet-stream  2018-01-31T10:59:14+00:00
> oak/data0a.tar/0002.b3784e27-6d16-4f80-afc1-6f3703f6bdb9  BlockBlob   
>   262144application/octet-stream  2018-01-31T10:59:14+00:00
> oak/data0a.tar/0003.5d2f9588-0c92-4547-abf7-0263ee7c37bb  BlockBlob   
>   259216application/octet-stream  2018-01-31T10:59:14+00:00
> ...
> oak/data0a.tar/006e.7b8cf63d-849a-4120-aa7c-47c3dde25e48  BlockBlob   
>   4368  application/octet-stream  2018-01-31T12:01:09+00:00
> oak/data0a.tar/006f.93799ae9-288e-4b32-afc2-bbc676fad7e5  BlockBlob   
>   3792  application/octet-stream  2018-01-31T12:01:14+00:00
> oak/data0a.tar/0070.8b2d5ff2-6a74-4ac3-a3cc-cc439367c2aa  BlockBlob   
>   3680  application/octet-stream  2018-01-31T12:01:14+00:00
> oak/data0a.tar/0071.2a1c49f0-ce33-4777-a042-8aa8a704d202  BlockBlob   
>   7760  application/octet-stream  2018-01-31T12:10:54+00:00
> oak/journal.log.001   AppendBlob  
>   1010  application/octet-stream  2018-01-31T12:10:54+00:00
> oak/manifest  BlockBlob   
>   46application/octet-stream  2018-01-31T10:59:14+00:00
> oak/repo.lock BlockBlob   
> application/octet-stream  2018-01-31T10:59:14+00:00
> {noformat}
> For the segment files, each name is prefixed with the index number. This 
> allows maintaining an order, as in the tar archive. This order is normally 
> stored in the index files as well, but if it's missing, the recovery process 
> uses the prefixes to maintain it.
> Each file contains the raw segment data, with no padding/headers. Apart from 
> the segment files, there are 3 special files: binary references (.brf), 
> segment graph (.gph) and segment index (.idx).
> h3. Asynchronous writes
> Normally, all the TarWriter writes are synchronous, appending the segments to 
> the tar file. In the case of Azure Blob Storage, each write involves network 
> latency. That's why the SegmentWriteQueue was introduced. The segments are 
> added to a blocking deque, which is served by a 

[jira] [Resolved] (OAK-7259) Improve SegmentNodeStoreStats to include number of commits per thread and threads currently waiting on the semaphore

2018-03-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu resolved OAK-7259.
--
Resolution: Fixed

> Improve SegmentNodeStoreStats to include number of commits per thread and 
> threads currently waiting on the semaphore
> 
>
> Key: OAK-7259
> URL: https://issues.apache.org/jira/browse/OAK-7259
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: tooling
> Fix For: 1.9.0, 1.10
>
>
> When investigating the performance of  {{segment-tar}}, the source of the 
> writes (commits) is a very useful indicator of the cause.
> To better understand which threads are currently writing in the repository 
> and which are blocked on the semaphore, we need to improve 
> {{SegmentNodeStoreStats}} to:
>  * expose the number of commits executed per thread
>  * expose threads currently waiting on the semaphore
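
A minimal sketch of the bookkeeping this implies, using hypothetical names (the real implementation lives in CommitsTracker/SegmentNodeStoreStats and may differ): commits are counted per thread name, and threads queued on the commit semaphore are recorded separately.

{noformat}
// Hedged sketch; class and method names are hypothetical and do not mirror
// the actual CommitsTracker/SegmentNodeStoreStats API.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SimpleCommitsTracker {

    private final Map<String, Long> commitsPerThread = new ConcurrentHashMap<>();
    private final Set<String> queuedThreads = ConcurrentHashMap.newKeySet();

    public void trackQueuedCommit(String threadName) {
        queuedThreads.add(threadName);      // thread is now waiting on the semaphore
    }

    public void trackExecutedCommit(String threadName) {
        queuedThreads.remove(threadName);   // thread acquired the semaphore
        commitsPerThread.merge(threadName, 1L, Long::sum);
    }

    // values to be exposed through the stats MBean
    public Map<String, Long> getCommitsCountPerThread() {
        return new HashMap<>(commitsPerThread);
    }

    public Set<String> getThreadsWaitingOnSemaphore() {
        return new HashSet<>(queuedThreads);
    }
}
{noformat}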



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7332) Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap version conflict

2018-03-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395356#comment-16395356
 ] 

Francesco Mari commented on OAK-7332:
-

I don't see any issue with this approach.

> Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap 
> version conflict 
> -
>
> Key: OAK-7332
> URL: https://issues.apache.org/jira/browse/OAK-7332
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: benchmarks, segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-7332.patch
>
>
> The changes needed for OAK-7259 rely on 
> {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 1.4.2, 
> which is embedded in {{oak-segment-tar}}. This conflicts with the transitive 
> dependency coming from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}, 
> which depends on {{concurrentlinkedhashmap-lru}} 1.2. Therefore, running 
> benchmarks on {{Oak-Segment-*}} fixtures fails with the following exception:
> {noformat}
> 04:35:44 Exception in thread "main" java.lang.NoSuchMethodError: 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:57)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:51)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStoreStats.(SegmentNodeStoreStats.java:64)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler.(LockBasedScheduler.java:171)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$ObservableLockBasedScheduler.(LockBasedScheduler.java:352)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$LockBasedSchedulerBuilder.build(LockBasedScheduler.java:100)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:169)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:63)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$SegmentNodeStoreBuilder.build(SegmentNodeStore.java:121)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.SegmentTarFixture.setUpCluster(SegmentTarFixture.java:211)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:145)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:141)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.createRepository(AbstractTest.java:655)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.run(AbstractTest.java:202)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.BenchmarkRunner.main(BenchmarkRunner.java:516)
> 04:35:44 at 
> org.apache.jackrabbit.oak.run.BenchmarkCommand.execute(BenchmarkCommand.java:27)
> 04:35:44 at org.apache.jackrabbit.oak.run.Main.main(Main.java:54)
> {noformat}
> A possible solution to mitigate this is to exclude the transitive dependency 
> to {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 
> from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7332) Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap version conflict

2018-03-12 Thread Andrei Dulceanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395352#comment-16395352
 ] 

Andrei Dulceanu commented on OAK-7332:
--

As mentioned in the issue description, the patch changes {{oak-benchmarks}} to 
remove the transitive dependency to {{concurrentlinkedhashmap}} from 
{{solr-core}}. After applying it, I could successfully run benchmarks on 
various fixtures including {{Oak-Segment-Tar}}, {{Oak-Segment-Tar-Cold}}, 
{{Oak-Segment-Tar-DS}}, {{Oak-Composite-Memory-Store}}, etc.

[~mduerig], [~frm] do you see any issues with this approach? 

> Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap 
> version conflict 
> -
>
> Key: OAK-7332
> URL: https://issues.apache.org/jira/browse/OAK-7332
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: benchmarks, segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-7332.patch
>
>
> The changes needed for OAK-7259 rely on 
> {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 1.4.2, 
> which is embedded in {{oak-segment-tar}}. This conflicts with the transitive 
> dependency coming from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}, 
> which depends on {{concurrentlinkedhashmap-lru}} 1.2. Therefore, running 
> benchmarks on {{Oak-Segment-*}} fixtures fails with the following exception:
> {noformat}
> 04:35:44 Exception in thread "main" java.lang.NoSuchMethodError: 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:57)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:51)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStoreStats.(SegmentNodeStoreStats.java:64)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler.(LockBasedScheduler.java:171)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$ObservableLockBasedScheduler.(LockBasedScheduler.java:352)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$LockBasedSchedulerBuilder.build(LockBasedScheduler.java:100)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:169)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:63)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$SegmentNodeStoreBuilder.build(SegmentNodeStore.java:121)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.SegmentTarFixture.setUpCluster(SegmentTarFixture.java:211)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:145)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:141)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.createRepository(AbstractTest.java:655)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.run(AbstractTest.java:202)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.BenchmarkRunner.main(BenchmarkRunner.java:516)
> 04:35:44 at 
> org.apache.jackrabbit.oak.run.BenchmarkCommand.execute(BenchmarkCommand.java:27)
> 04:35:44 at org.apache.jackrabbit.oak.run.Main.main(Main.java:54)
> {noformat}
> A possible solution to mitigate this is to exclude the transitive dependency 
> to {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 
> from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7332) Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap version conflict

2018-03-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-7332:
-
Attachment: OAK-7332.patch

> Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap 
> version conflict 
> -
>
> Key: OAK-7332
> URL: https://issues.apache.org/jira/browse/OAK-7332
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: benchmarks, segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-7332.patch
>
>
> The changes needed for OAK-7259 rely on 
> {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 1.4.2, 
> which is embedded in {{oak-segment-tar}}. This conflicts with the transitive 
> dependency coming from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}, 
> which depends on {{concurrentlinkedhashmap-lru}} 1.2. Therefore, running 
> benchmarks on {{Oak-Segment-*}} fixtures fails with the following exception:
> {noformat}
> 04:35:44 Exception in thread "main" java.lang.NoSuchMethodError: 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:57)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:51)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStoreStats.(SegmentNodeStoreStats.java:64)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler.(LockBasedScheduler.java:171)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$ObservableLockBasedScheduler.(LockBasedScheduler.java:352)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$LockBasedSchedulerBuilder.build(LockBasedScheduler.java:100)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:169)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:63)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$SegmentNodeStoreBuilder.build(SegmentNodeStore.java:121)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.SegmentTarFixture.setUpCluster(SegmentTarFixture.java:211)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:145)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:141)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.createRepository(AbstractTest.java:655)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.run(AbstractTest.java:202)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.BenchmarkRunner.main(BenchmarkRunner.java:516)
> 04:35:44 at 
> org.apache.jackrabbit.oak.run.BenchmarkCommand.execute(BenchmarkCommand.java:27)
> 04:35:44 at org.apache.jackrabbit.oak.run.Main.main(Main.java:54)
> {noformat}
> A possible solution to mitigate this is to exclude the transitive dependency 
> to {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 
> from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7329) RDB*Store for SQLServer: name the PK index for better readability

2018-03-12 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395332#comment-16395332
 ] 

Julian Reschke commented on OAK-7329:
-

trunk: [r1826551|http://svn.apache.org/r1826551]


> RDB*Store for SQLServer: name the PK index for better readability
> -
>
> Key: OAK-7329
> URL: https://issues.apache.org/jira/browse/OAK-7329
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7329) RDB*Store for SQLServer: name the PK index for better readability

2018-03-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-7329.
-
   Resolution: Fixed
Fix Version/s: 1.10
   1.9.0

> RDB*Store for SQLServer: name the PK index for better readability
> -
>
> Key: OAK-7329
> URL: https://issues.apache.org/jira/browse/OAK-7329
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7332) Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap version conflict

2018-03-12 Thread Andrei Dulceanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Dulceanu updated OAK-7332:
-
Component/s: segment-tar

> Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap 
> version conflict 
> -
>
> Key: OAK-7332
> URL: https://issues.apache.org/jira/browse/OAK-7332
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: benchmarks, segment-tar
>Reporter: Andrei Dulceanu
>Assignee: Andrei Dulceanu
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> The changes needed for OAK-7259 rely on 
> {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 1.4.2, 
> which is embedded in {{oak-segment-tar}}. This conflicts with the transitive 
> dependency coming from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}, 
> which depends on {{concurrentlinkedhashmap-lru}} 1.2. Therefore, running 
> benchmarks on {{Oak-Segment-*}} fixtures fails with the following exception:
> {noformat}
> 04:35:44 Exception in thread "main" java.lang.NoSuchMethodError: 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:57)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:51)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStoreStats.(SegmentNodeStoreStats.java:64)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler.(LockBasedScheduler.java:171)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$ObservableLockBasedScheduler.(LockBasedScheduler.java:352)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$LockBasedSchedulerBuilder.build(LockBasedScheduler.java:100)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:169)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:63)
> 04:35:44 at 
> org.apache.jackrabbit.oak.segment.SegmentNodeStore$SegmentNodeStoreBuilder.build(SegmentNodeStore.java:121)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.SegmentTarFixture.setUpCluster(SegmentTarFixture.java:211)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:145)
> 04:35:44 at 
> org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:141)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.createRepository(AbstractTest.java:655)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.AbstractTest.run(AbstractTest.java:202)
> 04:35:44 at 
> org.apache.jackrabbit.oak.benchmark.BenchmarkRunner.main(BenchmarkRunner.java:516)
> 04:35:44 at 
> org.apache.jackrabbit.oak.run.BenchmarkCommand.execute(BenchmarkCommand.java:27)
> 04:35:44 at org.apache.jackrabbit.oak.run.Main.main(Main.java:54)
> {noformat}
> A possible solution to mitigate this is to exclude the transitive dependency 
> to {{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 
> from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7332) Benchmarks failure on Oak-Segment-* fixtures due to concurrentlinkedhashmap version conflict

2018-03-12 Thread Andrei Dulceanu (JIRA)
Andrei Dulceanu created OAK-7332:


 Summary: Benchmarks failure on Oak-Segment-* fixtures due to 
concurrentlinkedhashmap version conflict 
 Key: OAK-7332
 URL: https://issues.apache.org/jira/browse/OAK-7332
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: benchmarks
Reporter: Andrei Dulceanu
Assignee: Andrei Dulceanu
 Fix For: 1.9.0, 1.10


The changes needed for OAK-7259 rely on 
{{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} 1.4.2, 
which is embedded in {{oak-segment-tar}}. This conflicts with the transitive 
dependency coming from {{org.apache.solr/solr-core}} in {{oak-benchmarks}}, 
which depends on {{concurrentlinkedhashmap-lru}} 1.2. Therefore, running 
benchmarks on {{Oak-Segment-*}} fixtures fails with the following exception:

{noformat}
04:35:44 Exception in thread "main" java.lang.NoSuchMethodError: 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
04:35:44 at 
org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:57)
04:35:44 at 
org.apache.jackrabbit.oak.segment.CommitsTracker.(CommitsTracker.java:51)
04:35:44 at 
org.apache.jackrabbit.oak.segment.SegmentNodeStoreStats.(SegmentNodeStoreStats.java:64)
04:35:44 at 
org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler.(LockBasedScheduler.java:171)
04:35:44 at 
org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$ObservableLockBasedScheduler.(LockBasedScheduler.java:352)
04:35:44 at 
org.apache.jackrabbit.oak.segment.scheduler.LockBasedScheduler$LockBasedSchedulerBuilder.build(LockBasedScheduler.java:100)
04:35:44 at 
org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:169)
04:35:44 at 
org.apache.jackrabbit.oak.segment.SegmentNodeStore.(SegmentNodeStore.java:63)
04:35:44 at 
org.apache.jackrabbit.oak.segment.SegmentNodeStore$SegmentNodeStoreBuilder.build(SegmentNodeStore.java:121)
04:35:44 at 
org.apache.jackrabbit.oak.fixture.SegmentTarFixture.setUpCluster(SegmentTarFixture.java:211)
04:35:44 at 
org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:145)
04:35:44 at 
org.apache.jackrabbit.oak.fixture.OakRepositoryFixture.setUpCluster(OakRepositoryFixture.java:141)
04:35:44 at 
org.apache.jackrabbit.oak.benchmark.AbstractTest.createRepository(AbstractTest.java:655)
04:35:44 at 
org.apache.jackrabbit.oak.benchmark.AbstractTest.run(AbstractTest.java:202)
04:35:44 at 
org.apache.jackrabbit.oak.benchmark.BenchmarkRunner.main(BenchmarkRunner.java:516)
04:35:44 at 
org.apache.jackrabbit.oak.run.BenchmarkCommand.execute(BenchmarkCommand.java:27)
04:35:44 at org.apache.jackrabbit.oak.run.Main.main(Main.java:54)
{noformat}

A possible solution to mitigate this is to exclude the transitive dependency to 
{{com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru}} from 
{{org.apache.solr/solr-core}} in {{oak-benchmarks}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-6922) Azure support for the segment-tar

2018-03-12 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395295#comment-16395295
 ] 

Francesco Mari commented on OAK-6922:
-

The patch is well crafted, but I have some comments on how the code is laid 
out, with some emphasis on OSGi matters.

# oak-segment-tar exposes five packages, but those packages contain many public 
implementation classes that shouldn't be exposed. I think we should move 
{{SegmentNodeStorePersistence}} and related interfaces into a new package, e.g. 
{{org.apache.jackrabbit.oak.segment.spi.persistence}}, and expose only that 
package.
# oak-segment-azure exports {{org.apache.jackrabbit.oak.segment.azure}}. That 
package, in my opinion, should not be exported. The registration of 
{{AzureSegmentStoreService}} will work even if the package is not exported.
# {{AzureSegmentStoreService}} uses the legacy Felix SCR annotations. Since 
this is newly added code, we should use the new OSGi R6 annotations (see 
OAK-6741 and OAK-6770) unless we have a good reason not to. Following the same 
reasoning, we should stick to the standard way to read configuration 
properties, and remove custom logic as much as possible. This custom logic is 
currently implemented by {{Configuration}} in oak-segment-azure, and should be 
removed.

Regarding the last point, the standard OSGi annotations make it easier to use 
configuration property names that resemble method names. Maybe it's a good idea 
to rename properties like {{azure.accountName}} to simply {{accountName}}. 
Since we should avoid reading from the framework properties, prefixing the 
property names is unnecessary.
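
For illustration, a hedged sketch of the suggested style using the standard OSGi R6 DS and metatype annotations; the class and property names below are made up and only show how configuration properties map directly onto method names without custom parsing logic:

{noformat}
// Hedged sketch, not the oak-segment-azure code: standard OSGi R6 component
// and metatype annotations, with configuration injected by the framework.
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.Designate;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

@Component
@Designate(ocd = ExampleAzureSegmentStoreService.Config.class)
public class ExampleAzureSegmentStoreService {

    @ObjectClassDefinition(name = "Example Azure Segment Store")
    @interface Config {

        @AttributeDefinition(name = "Azure account name")
        String accountName();

        @AttributeDefinition(name = "Container name")
        String containerName() default "oak";
    }

    @Activate
    void activate(Config config) {
        // the framework injects the configuration; no custom lookup logic needed
        String account = config.accountName();
        String container = config.containerName();
        // ... create the segment persistence for the configured container
    }
}
{noformat}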

> Azure support for the segment-tar
> -
>
> Key: OAK-6922
> URL: https://issues.apache.org/jira/browse/OAK-6922
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segment-tar
>Reporter: Tomek Rękawek
>Assignee: Tomek Rękawek
>Priority: Major
> Fix For: 1.9.0, 1.10
>
> Attachments: OAK-6922.patch
>
>
> An Azure Blob Storage implementation of the segment storage, based on the 
> OAK-6921 work.
> h3. Segment files layout
> The new implementation doesn't use tar files. They are replaced with 
> directories storing segments named after their UUIDs. This approach has the 
> following advantages:
> * no need to call seek(), which may be expensive on a remote file system; 
> instead, we can read the whole file (=segment) at once.
> * it's possible to send multiple segments at once, asynchronously, which 
> reduces the performance overhead (see below).
> The file structure is as follows:
> {noformat}
> [~]$ az storage blob list -c oak --output table
> Name  Blob Type
> Blob TierLengthContent Type  Last Modified
>   ---  
> ---      -
> oak/data0a.tar/.ca1326d1-edf4-4d53-aef0-0f14a6d05b63  BlockBlob   
>   192   application/octet-stream  2018-01-31T10:59:14+00:00
> oak/data0a.tar/0001.c6e03426-db9d-4315-a20a-12559e6aee54  BlockBlob   
>   262144application/octet-stream  2018-01-31T10:59:14+00:00
> oak/data0a.tar/0002.b3784e27-6d16-4f80-afc1-6f3703f6bdb9  BlockBlob   
>   262144application/octet-stream  2018-01-31T10:59:14+00:00
> oak/data0a.tar/0003.5d2f9588-0c92-4547-abf7-0263ee7c37bb  BlockBlob   
>   259216application/octet-stream  2018-01-31T10:59:14+00:00
> ...
> oak/data0a.tar/006e.7b8cf63d-849a-4120-aa7c-47c3dde25e48  BlockBlob   
>   4368  application/octet-stream  2018-01-31T12:01:09+00:00
> oak/data0a.tar/006f.93799ae9-288e-4b32-afc2-bbc676fad7e5  BlockBlob   
>   3792  application/octet-stream  2018-01-31T12:01:14+00:00
> oak/data0a.tar/0070.8b2d5ff2-6a74-4ac3-a3cc-cc439367c2aa  BlockBlob   
>   3680  application/octet-stream  2018-01-31T12:01:14+00:00
> oak/data0a.tar/0071.2a1c49f0-ce33-4777-a042-8aa8a704d202  BlockBlob   
>   7760  application/octet-stream  2018-01-31T12:10:54+00:00
> oak/journal.log.001   AppendBlob  
>   1010  application/octet-stream  2018-01-31T12:10:54+00:00
> oak/manifest  BlockBlob   
>   46application/octet-stream  2018-01-31T10:59:14+00:00
> oak/repo.lock BlockBlob   
> application/octet-stream  2018-01-31T10:59:14+00:00
> {noformat}
> For the segment files, each name is prefixed with the index number. This 
> allows maintaining an order, as in the tar

[jira] [Updated] (OAK-7329) RDB*Store for SQLServer: name the PK index for better readability

2018-03-12 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7329:

Labels: candidate_oak_1_8  (was: )

> RDB*Store for SQLServer: name the PK index for better readability
> -
>
> Key: OAK-7329
> URL: https://issues.apache.org/jira/browse/OAK-7329
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7331) RDBDocumentStore: add index on _MODIFIED to improve VersionGC performance

2018-03-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-7331:
---

 Summary: RDBDocumentStore: add index on _MODIFIED to improve 
VersionGC performance
 Key: OAK-7331
 URL: https://issues.apache.org/jira/browse/OAK-7331
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7330) RDBDocumentStore: make indices on SD* sparse where possible

2018-03-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-7330:
---

 Summary: RDBDocumentStore: make indices on SD* sparse where 
possible
 Key: OAK-7330
 URL: https://issues.apache.org/jira/browse/OAK-7330
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7329) RDB*Store for SQLServer: name the PK index for better readability

2018-03-12 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-7329:
---

 Summary: RDB*Store for SQLServer: name the PK index for better 
readability
 Key: OAK-7329
 URL: https://issues.apache.org/jira/browse/OAK-7329
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7321) Test failure: DocumentNodeStoreIT.modifiedResetWithDiff

2018-03-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395091#comment-16395091
 ] 

Hudson commented on OAK-7321:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#1303|https://builds.apache.org/job/Jackrabbit%20Oak/1303/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1303/console]

> Test failure: DocumentNodeStoreIT.modifiedResetWithDiff
> ---
>
> Key: OAK-7321
> URL: https://issues.apache.org/jira/browse/OAK-7321
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, documentmk
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10
>
>
> No description is provided
> The build Jackrabbit Oak #1295 has failed.
> First failed run: [Jackrabbit Oak 
> #1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]
> {noformat}
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured 
> cluster node id 1 already in use: machineId/instanceId do not match: 
> mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document != 
> mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-6922) Azure support for the segment-tar

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-6922:
---
Description: 
An Azure Blob Storage implementation of the segment storage, based on the 
OAK-6921 work.

h3. Segment files layout

The new implementation doesn't use tar files. They are replaced with 
directories storing segments named after their UUIDs. This approach has the 
following advantages:

* no need to call seek(), which may be expensive on a remote file system; 
instead, we can read the whole file (=segment) at once.
* it's possible to send multiple segments at once, asynchronously, which 
reduces the performance overhead (see below).

The file structure is as follows:

{noformat}
[~]$ az storage blob list -c oak --output table
Name                                                       Blob Type   Blob Tier  Length  Content Type              Last Modified
---------------------------------------------------------  ----------  ---------  ------  ------------------------  -------------------------
oak/data0a.tar/.ca1326d1-edf4-4d53-aef0-0f14a6d05b63       BlockBlob              192     application/octet-stream  2018-01-31T10:59:14+00:00
oak/data0a.tar/0001.c6e03426-db9d-4315-a20a-12559e6aee54   BlockBlob              262144  application/octet-stream  2018-01-31T10:59:14+00:00
oak/data0a.tar/0002.b3784e27-6d16-4f80-afc1-6f3703f6bdb9   BlockBlob              262144  application/octet-stream  2018-01-31T10:59:14+00:00
oak/data0a.tar/0003.5d2f9588-0c92-4547-abf7-0263ee7c37bb   BlockBlob              259216  application/octet-stream  2018-01-31T10:59:14+00:00
...
oak/data0a.tar/006e.7b8cf63d-849a-4120-aa7c-47c3dde25e48   BlockBlob              4368    application/octet-stream  2018-01-31T12:01:09+00:00
oak/data0a.tar/006f.93799ae9-288e-4b32-afc2-bbc676fad7e5   BlockBlob              3792    application/octet-stream  2018-01-31T12:01:14+00:00
oak/data0a.tar/0070.8b2d5ff2-6a74-4ac3-a3cc-cc439367c2aa   BlockBlob              3680    application/octet-stream  2018-01-31T12:01:14+00:00
oak/data0a.tar/0071.2a1c49f0-ce33-4777-a042-8aa8a704d202   BlockBlob              7760    application/octet-stream  2018-01-31T12:10:54+00:00
oak/journal.log.001                                        AppendBlob             1010    application/octet-stream  2018-01-31T12:10:54+00:00
oak/manifest                                               BlockBlob              46      application/octet-stream  2018-01-31T10:59:14+00:00
oak/repo.lock                                              BlockBlob                      application/octet-stream  2018-01-31T10:59:14+00:00
{noformat}

For the segment files, each name is prefixed with the index number. This allows 
maintaining an order, as in the tar archive. This order is normally stored in 
the index files as well, but if it's missing, the recovery process uses the 
prefixes to maintain it.

Each file contains the raw segment data, with no padding/headers. Apart from 
the segment files, there are 3 special files: binary references (.brf), segment 
graph (.gph) and segment index (.idx).

h3. Asynchronous writes

Normally, all the TarWriter writes are synchronous, appending the segments to 
the tar file. In the case of Azure Blob Storage, each write involves network 
latency. That's why the SegmentWriteQueue was introduced. The segments are 
added to a blocking deque, which is served by a number of consumer threads 
writing the segments to the cloud. There's also a UUID->Segment map, which 
allows returning the segments in case they are requested by the readSegment() 
method before they are actually persisted. Segments are removed from the map 
only after a successful write operation.

The flush() method blocks accepting new segments and returns after all 
waiting segments are written. The close() method waits until the current 
operations are finished and stops all threads.

The asynchronous mode can be disabled by setting the number of threads to 0.
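
A minimal, self-contained model of this queue (a simplified illustration, not the actual SegmentWriteQueue class): consumer threads drain a blocking deque and write to a pluggable remote store, while an in-flight map keeps unpersisted segments readable.

{noformat}
// Hedged sketch modelling the behaviour described above; names are hypothetical.
import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingDeque;

public class SimpleSegmentWriteQueue {

    // Pluggable remote store, e.g. an Azure container (stand-in for the real backend).
    public interface RemoteWriter {
        void write(UUID id, byte[] data) throws IOException;
    }

    private final BlockingDeque<UUID> queue = new LinkedBlockingDeque<>();
    private final Map<UUID, byte[]> inFlight = new ConcurrentHashMap<>();
    private final RemoteWriter writer;
    private final int threads;

    public SimpleSegmentWriteQueue(RemoteWriter writer, int threads) {
        this.writer = writer;
        this.threads = threads;
        for (int i = 0; i < threads; i++) {
            Thread t = new Thread(this::consume, "segment-write-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    // Producer side: enqueue the segment and make it readable immediately.
    // With 0 threads the asynchronous mode is disabled and we write directly.
    public void addToQueue(UUID id, byte[] data) throws IOException {
        if (threads == 0) {
            writer.write(id, data);
            return;
        }
        inFlight.put(id, data);
        queue.addLast(id);
    }

    // Read path: segments that are queued but not yet persisted stay visible.
    public byte[] readSegment(UUID id) {
        return inFlight.get(id);
    }

    private void consume() {
        while (!Thread.currentThread().isInterrupted()) {
            UUID id = null;
            try {
                id = queue.takeFirst();
                writer.write(id, inFlight.get(id));
                inFlight.remove(id);   // removed only after a successful write
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (IOException e) {
                // the real queue re-adds the segment and enters the recovery
                // mode described below; here we simply retry it later
                queue.addFirst(id);
            }
        }
    }
}
{noformat}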

h5. Queue recovery mode

If the Azure Blob Storage write() operation fails, the segment will be re-added 
and the queue switches to a "recovery mode". In this mode, all the threads 
are suspended and new segments are not accepted (active waiting). There's a 
single thread which retries adding the segment with some delay. If the segment 
is successfully written, the queue goes back to normal operation.

This way the unavailable remote service is not flooded with requests, and 
we don't accept new segments when we can't persist them.

The close() method finishes the recovery mode - in this case, some of the 
awaiting segments won't be persisted.

h5. Consistency

The asynchronous mode isn't as reliable as the standard, synchronous case. 
The following cases are possible:

* TarWriter#writeEntry() returns successf

[jira] [Updated] (OAK-7209) Race condition can resurrect blobs during blob GC

2018-03-12 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-7209:
---
Fix Version/s: 1.6.11
   1.8.3
   1.10
   1.9.0

> Race condition can resurrect blobs during blob GC
> -
>
> Key: OAK-7209
> URL: https://issues.apache.org/jira/browse/OAK-7209
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.5, 1.10, 1.8.2
>Reporter: Csaba Varga
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.9.0, 1.10, 1.8.3, 1.6.11
>
>
> A race condition exists between the scheduled blob ID publishing process and 
> the GC process that can resurrect the blobs being deleted by the GC. This is 
> how it can happen:
>  # MarkSweepGarbageCollector.collectGarbage() starts running.
>  # As part of the preparation for sweeping, BlobIdTracker.globalMerge() is 
> called, which merges all blob ID records from the blob store into the local 
> tracker.
>  # Sweeping begins deleting files.
>  # BlobIdTracker.snapshot() gets called by the scheduler. It pushes all blob 
> ID records that were collected and merged in step 2 back into the blob store, 
> then deletes the local copies.
>  # Sweeping completes and tries to remove the successfully deleted blobs from 
> the tracker. Step 4 already deleted those records from the local files, so 
> nothing gets removed.
> The end result is that all blobs removed during the GC run will still be 
> considered alive, causing warnings when later GC runs try to remove them again. 
> The risk is higher the longer the sweep runs, but it can happen during a 
> short but badly timed GC run as well. (We've found it during a GC run that 
> took more than 11 hours to complete.)
> I can see two ways to approach this:
>  # Suspend the execution of BlobIdTracker.snapshot() while Blob GC is in 
> progress. This requires adding new methods to the BlobTracker interface to 
> allow suspending and resuming snapshotting of the tracker.
>  # Have the two overloads of BlobIdTracker.remove() do a globalMerge() before 
> trying to remove anything. This ensures that even if a snapshot() call 
> happened during the GC run, all IDs are "pulled back" into the local tracker 
> and can be removed successfully.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7209) Race condition can resurrect blobs during blob GC

2018-03-12 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395035#comment-16395035
 ] 

Amit Jain commented on OAK-7209:


Fixed in trunk with http://svn.apache.org/viewvc?rev=1826532&view=rev

> Race condition can resurrect blobs during blob GC
> -
>
> Key: OAK-7209
> URL: https://issues.apache.org/jira/browse/OAK-7209
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob-plugins
>Affects Versions: 1.6.5, 1.10, 1.8.2
>Reporter: Csaba Varga
>Assignee: Amit Jain
>Priority: Minor
> Fix For: 1.9.0, 1.10, 1.8.3, 1.6.11
>
>
> A race condition exists between the scheduled blob ID publishing process and 
> the GC process that can resurrect the blobs being deleted by the GC. This is 
> how it can happen:
>  # MarkSweepGarbageCollector.collectGarbage() starts running.
>  # As part of the preparation for sweeping, BlobIdTracker.globalMerge() is 
> called, which merges all blob ID records from the blob store into the local 
> tracker.
>  # Sweeping begins deleting files.
>  # BlobIdTracker.snapshot() gets called by the scheduler. It pushes all blob 
> ID records that were collected and merged in step 2 back into the blob store, 
> then deletes the local copies.
>  # Sweeping completes and tries to remove the successfully deleted blobs from 
> the tracker. Step 4 already deleted those records from the local files, so 
> nothing gets removed.
> The end result is that all blobs removed during the GC run will still be 
> considered alive, causing warnings when later GC runs try to remove them again. 
> The risk is higher the longer the sweep runs, but it can happen during a 
> short but badly timed GC run as well. (We've found it during a GC run that 
> took more than 11 hours to complete.)
> I can see two ways to approach this:
>  # Suspend the execution of BlobIdTracker.snapshot() while Blob GC is in 
> progress. This requires adding new methods to the BlobTracker interface to 
> allow suspending and resuming snapshotting of the tracker.
>  # Have the two overloads of BlobIdTracker.remove() do a globalMerge() before 
> trying to remove anything. This ensures that even if a snapshot() call 
> happened during the GC run, all IDs are "pulled back" into the local tracker 
> and can be removed successfully.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7321) Test failure: DocumentNodeStoreIT.modifiedResetWithDiff

2018-03-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394995#comment-16394995
 ] 

Hudson commented on OAK-7321:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#1302|https://builds.apache.org/job/Jackrabbit%20Oak/1302/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1302/console]

> Test failure: DocumentNodeStoreIT.modifiedResetWithDiff
> ---
>
> Key: OAK-7321
> URL: https://issues.apache.org/jira/browse/OAK-7321
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, documentmk
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10
>
>
> No description is provided
> The build Jackrabbit Oak #1295 has failed.
> First failed run: [Jackrabbit Oak 
> #1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]
> {noformat}
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured 
> cluster node id 1 already in use: machineId/instanceId do not match: 
> mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document != 
> mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7321) Test failure: DocumentNodeStoreIT.modifiedResetWithDiff

2018-03-12 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-7321:
--
Fix Version/s: 1.10
  Description: 
No description is provided

The build Jackrabbit Oak #1295 has failed.
First failed run: [Jackrabbit Oak 
#1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]

{noformat}
org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured 
cluster node id 1 already in use: machineId/instanceId do not match: 
mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit 
Oak/oak-store-document != 
mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit 
Oak/oak-store-document
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65)
{noformat}

  was:
No description is provided

The build Jackrabbit Oak #1295 has failed.
First failed run: [Jackrabbit Oak 
#1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]

  Component/s: documentmk
  Summary: Test failure: DocumentNodeStoreIT.modifiedResetWithDiff  
(was: Build Jackrabbit Oak #1295 failed)

> Test failure: DocumentNodeStoreIT.modifiedResetWithDiff
> ---
>
> Key: OAK-7321
> URL: https://issues.apache.org/jira/browse/OAK-7321
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration, documentmk
>Reporter: Hudson
>Priority: Major
> Fix For: 1.10
>
>
> No description is provided
> The build Jackrabbit Oak #1295 has failed.
> First failed run: [Jackrabbit Oak 
> #1295|https://builds.apache.org/job/Jackrabbit%20Oak/1295/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1295/console]
> {noformat}
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: Configured 
> cluster node id 1 already in use: machineId/instanceId do not match: 
> mac:0242454078e1//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document != 
> mac:0242d5bcc8f8//home/jenkins/jenkins-slave/workspace/Jackrabbit 
> Oak/oak-store-document
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreIT.modifiedResetWithDiff(DocumentNodeStoreIT.java:65)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7328) Update DocumentNodeStore based OakFixture

2018-03-12 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-7328:
-

 Summary: Update DocumentNodeStore based OakFixture
 Key: OAK-7328
 URL: https://issues.apache.org/jira/browse/OAK-7328
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: run
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.10


The current OakFixtures based on a DocumentNodeStore use a configuration / setup 
which differs from what a default DocumentNodeStoreService would use. It would 
be better if benchmarks ran with a configuration close to a default setup. The 
main differences identified are (see the sketch after this list):

- The fixture does not have a proper executor, which means some tasks are 
executed on the calling thread.
- The fixture does not use a separate persistent cache for the journal (diff 
cache).
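
A rough sketch of the direction (the builder methods shown are assumptions for 
illustration and may not match the exact DocumentNodeStore builder API):

{noformat}
// Sketch only: builder/method names below are assumptions for illustration,
// not necessarily the exact Oak DocumentNodeStore builder API.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

class FixtureSetupSketch {

    Object buildNodeStore(DocumentStoreBuilderSketch builder) {
        // A real executor instead of running background tasks on the caller's thread.
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);

        return builder
                .setExecutor(executor)                   // assumed setter
                .setPersistentCache("cache,size=1024")   // persistent cache for node data
                .setJournalCache("diff-cache,size=256")  // separate persistent cache for the journal (diff cache)
                .build();
    }
}

// Hypothetical stand-in for the real builder, kept abstract for the sketch.
interface DocumentStoreBuilderSketch {
    DocumentStoreBuilderSketch setExecutor(java.util.concurrent.Executor executor);
    DocumentStoreBuilderSketch setPersistentCache(String config);
    DocumentStoreBuilderSketch setJournalCache(String config);
    Object build();
}
{noformat}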



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (OAK-7326) Add a way to disable the SegmentCache

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reopened OAK-7326:


Reopening as I think there is a problem with eviction of segments in the empty 
cache: segments put into the empty cache should get evicted immediately AFAIU.
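
For reference, the eviction semantics expected from a disabled cache can be 
illustrated with a plain zero-capacity Guava cache (a sketch of the 
expectation, not the SegmentCache implementation itself):

{noformat}
// Sketch of the expected "empty cache" semantics; not the actual SegmentCache code.
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class EmptyCacheSketch {
    public static void main(String[] args) {
        Cache<String, String> empty = CacheBuilder.newBuilder()
                .maximumSize(0)   // capacity zero: every entry is evicted immediately after put
                .build();

        empty.put("segmentId", "segment bytes");
        // The entry never becomes resident, so a lookup should miss right away.
        System.out.println(empty.getIfPresent("segmentId"));   // expected: null
    }
}
{noformat}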

> Add a way to disable the SegmentCache 
> --
>
> Key: OAK-7326
> URL: https://issues.apache.org/jira/browse/OAK-7326
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
> Fix For: 1.9.0, 1.10
>
>
> We should implement a way to disable the segment cache. This enables us to 
> establish performance baselines and better understand locality of reference 
> without the distortion of the cache (e.g. see OAK-5655).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7304) Deploy oak-pojosr as part of standard deployment

2018-03-12 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-7304:
-
Fix Version/s: 1.9.0

> Deploy oak-pojosr as part of standard deployment
> 
>
> Key: OAK-7304
> URL: https://issues.apache.org/jira/browse/OAK-7304
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.9.0, 1.10
>
>
> oak-pojosr currently has {{skip.deployment}} set to true and hence is skipped 
> from deployment. This setting should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7266) Standalone example system console fails to render

2018-03-12 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-7266:
-
   Labels:   (was: candidate_oak_1_8)
Fix Version/s: 1.8.3

Merged to 1.8 with 1826523

> Standalone example system console fails to render
> -
>
> Key: OAK-7266
> URL: https://issues.apache.org/jira/browse/OAK-7266
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.8.0
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Major
> Fix For: 1.9.0, 1.10, 1.8.3
>
>
> Accessing http://localhost:8080/osgi/system/console/bundles with 
> oak-standalone fails to render with the following exception:
> {noformat}
> 2018-02-14 10:36:40.415  WARN 23249 --- [qtp349420578-14] 
> org.eclipse.jetty.server.HttpChannel : /osgi/system/console/bundles
> java.lang.NoSuchMethodError: 
> org.json.JSONObject.valueToString(Ljava/lang/Object;)Ljava/lang/String;
>   at org.json.JSONWriter.value(JSONWriter.java:316)
>   at 
> org.apache.felix.webconsole.internal.core.BundlesServlet.writeJSON(BundlesServlet.java:612)
>   at 
> org.apache.felix.webconsole.internal.core.BundlesServlet.writeJSON(BundlesServlet.java:558)
>   at 
> org.apache.felix.webconsole.internal.core.BundlesServlet.renderContent(BundlesServlet.java:532)
>   at 
> org.apache.felix.webconsole.AbstractWebConsolePlugin.doGet(AbstractWebConsolePlugin.java:194)
>   at 
> org.apache.felix.webconsole.internal.core.BundlesServlet.doGet(BundlesServlet.java:317)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7265) Standalone example application fails to start

2018-03-12 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-7265:
-
   Labels:   (was: candidate_oak_1_8)
Fix Version/s: 1.8.3

Merged to 1.8 with 1826522

> Standalone example application fails to start
> -
>
> Key: OAK-7265
> URL: https://issues.apache.org/jira/browse/OAK-7265
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: examples, pojosr
>Affects Versions: 1.8.0
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Major
> Fix For: 1.9.0, 1.10, 1.8.3
>
>
> On the users mailing list it was reported that the standalone example 
> application fails to start with the following exception:
> {noformat}
> java.lang.NoSuchMethodException: 
> org.springframework.boot.loader.jar.JarEntry.getUrl()
>   at java.lang.Class.getMethod(Class.java:1786)
>   at 
> org.apache.jackrabbit.oak.run.osgi.SpringBootSupport$SpringBootJarRevision.getUrlMethod(SpringBootSupport.java:135)
>   at 
> org.apache.jackrabbit.oak.run.osgi.SpringBootSupport$SpringBootJarRevision.getEntry(SpringBootSupport.java:109)
>   at org.apache.felix.connect.PojoSRBundle.getEntry(PojoSRBundle.java:471)
>   at 
> org.apache.felix.webconsole.plugins.scriptconsole.internal.ScriptEngineManager.bundleChanged(ScriptEngineManager.java:117)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.invokeBundleListenerCallback(EventDispatcher.java:821)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:771)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.run(EventDispatcher.java:993)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher.access$000(EventDispatcher.java:52)
>   at 
> org.apache.felix.connect.felix.framework.util.EventDispatcher$1.run(EventDispatcher.java:94)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7304) Deploy oak-pojosr as part of standard deployment

2018-03-12 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-7304.
--
Resolution: Fixed

Done with 1826516

> Deploy oak-pojosr as part of standard deployment
> 
>
> Key: OAK-7304
> URL: https://issues.apache.org/jira/browse/OAK-7304
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10
>
>
> oak-pojosr currently has {{skip.deployment}} set to true and hence is skipped 
> from deployment. This setting should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7304) Deploy oak-pojosr as part of standard deployment

2018-03-12 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-7304:
-
Labels: candidate_oak_1_8  (was: )

> Deploy oak-pojosr as part of standard deployment
> 
>
> Key: OAK-7304
> URL: https://issues.apache.org/jira/browse/OAK-7304
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10
>
>
> oak-pojosr currently has {{skip.deployment}} set to true and hence is skipped 
> from deployment. This setting should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)