[jira] [Commented] (OAK-7488) VersionablePathHook should be located with authorization code

2018-05-14 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475312#comment-16475312
 ] 

angela commented on OAK-7488:
-

[~stillalex], the {{VersionablePathHook}} is only wired into {{Root.commit}} 
through {{AuthorizationConfigurationImpl.getCommitHooks}}.
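
For illustration, this is roughly what that wiring looks like - a minimal 
sketch only, with made-up class names ({{ExampleAuthorizationConfiguration}}, 
{{ExampleHook}}); the only real API assumed here is 
{{SecurityConfiguration#getCommitHooks(String)}}:
{code:java}
// Minimal sketch (not the actual Oak implementation): a security
// configuration contributing a commit hook, which ends up in the composite
// hook that Root.commit() executes for the given workspace.
import java.util.Collections;
import java.util.List;

import org.apache.jackrabbit.oak.api.CommitFailedException;
import org.apache.jackrabbit.oak.spi.commit.CommitHook;
import org.apache.jackrabbit.oak.spi.commit.CommitInfo;
import org.apache.jackrabbit.oak.spi.security.SecurityConfiguration;
import org.apache.jackrabbit.oak.spi.state.NodeState;

public class ExampleAuthorizationConfiguration extends SecurityConfiguration.Default {

    @Override
    public List<? extends CommitHook> getCommitHooks(String workspaceName) {
        // Whatever is returned here is the only way the hook gets wired in.
        return Collections.singletonList(new ExampleHook());
    }

    // Placeholder standing in for VersionablePathHook.
    private static class ExampleHook implements CommitHook {
        @Override
        public NodeState processCommit(NodeState before, NodeState after, CommitInfo info)
                throws CommitFailedException {
            // A real hook would post-process 'after' here (e.g. maintain
            // versionable path information); this placeholder passes it through.
            return after;
        }
    }
}
{code}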

> VersionablePathHook should be located with authorization code
> -
>
> Key: OAK-7488
> URL: https://issues.apache.org/jira/browse/OAK-7488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
>  Labels: m12n
> Attachments: OAK-7488.patch
>
>
> In order to clean up troublesome dependencies within oak-core, the 
> {{VersionablePathHook}} associated with the default authorization model 
> should be co-located with the latter instead of being placed inside 
> _o.a.j.oak.plugins.version_.
> [~stillalex], fyi



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7495) async,sync index not synchronous

2018-05-14 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475283#comment-16475283
 ] 

Vikas Saurabh commented on OAK-7495:


[~egli], I've tried to write an Oak test in [^OAK-7495.demo.patch] - can you 
please check that it aligns with your test as well?

[~chetanm], if you have some spare cycles, can you please check out the demo 
test (not committed because it's still crude: it sets up nrt in 
{{LuceneIndexPropertyTest}} and adds some logs to assist in identifying the 
issue). Notice the output in [^unit-tests.log] - e.g. the snippet below shows 
that {{child-job4}} makes it into the index but the Lucene cursor doesn't 
return it:
{noformat}

09:45:41.674 INFO  [pool-1-thread-2] DocumentQueue.java:276 [Queued] Updated 
index with doc /oak:index/jobIndex(/child-job4)

09:45:41.677 INFO  [jobConsumer] LucenePropertyIndex.java:433 loading the first 
50 entries for query job-id:job4
09:45:41.679 INFO  [jobConsumer] LucenePropertyIndex.java:355 EndOfData for { 
costPerExecution : 1.0, costPerEntry : 1.0, estimatedEntryCount : 1, filter : 
Filter(query=select * from [nt:base] where [job-id] = 'job4', path=*, 
property=[job-id=[job4]]), isDelayed : true, isFulltextIndex : false, 
includesNodeData : false, sortOrder : null, definition : null, 
propertyRestriction : null, pathPrefix : , supportsPathRestriction : false }
{noformat}

> async,sync index not synchronous
> 
>
> Key: OAK-7495
> URL: https://issues.apache.org/jira/browse/OAK-7495
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: indexing
>Affects Versions: 1.6.1
>Reporter: Stefan Egli
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: GetJobVerifier.java, OAK-7495.demo.patch, 
> slingeventJob.-1.tidy.json, unit-tests.log
>
>
> On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, 
> an 
> [addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
>  followed by a 
> [getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
>  (in a different thread, though it would perhaps also fail in the same 
> thread) did not see the job that was just created.
> To give a bit more background: in Sling, getJobById results in a query. That 
> query uses an index which is built using {{"async, sync"}}, so the 
> assumption is that the index is actually synchronous. But a test reproducing 
> the scenario described above showed the opposite.
> Attached:
> *  [^GetJobVerifier.java]: a Sling job test case with two threads: one 
> thread does addJob and adds the resulting jobId to a (synchronized) list, 
> and a second thread reads the jobId off that list and does a getJobById. 
> That getJobById should find the job, as it was just created (how else could 
> you figure out the jobId) - but sometimes it FAILs (see the FAIL in system 
> out)
> *  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
> indeed "async, sync"
> PS: Example query that is executed: 
> {{/jcr:root/var/eventing/jobs//element(*,slingevent:Job)[@slingevent:eventId 
> = '2018/5/11/2/12/bca505d9-3044-4de9-9732-056ab1b6c513_5569']}}
> /cc [~catholicon]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7495) async,sync index not synchronous

2018-05-14 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-7495:
---
Attachment: unit-tests.log
OAK-7495.demo.patch

> async,sync index not synchronous
> 
>
> Key: OAK-7495
> URL: https://issues.apache.org/jira/browse/OAK-7495
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: indexing
>Affects Versions: 1.6.1
>Reporter: Stefan Egli
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: GetJobVerifier.java, OAK-7495.demo.patch, 
> slingeventJob.-1.tidy.json, unit-tests.log
>
>
> On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, 
> an 
> [addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
>  followed by a 
> [getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
>  (in a different thread, though it would perhaps also fail in the same 
> thread) did not see the job that was just created.
> To give a bit more background: in Sling, getJobById results in a query. That 
> query uses an index which is built using {{"async, sync"}}, so the 
> assumption is that the index is actually synchronous. But a test reproducing 
> the scenario described above showed the opposite.
> Attached:
> *  [^GetJobVerifier.java]: a Sling job test case with two threads: one 
> thread does addJob and adds the resulting jobId to a (synchronized) list, 
> and a second thread reads the jobId off that list and does a getJobById. 
> That getJobById should find the job, as it was just created (how else could 
> you figure out the jobId) - but sometimes it FAILs (see the FAIL in system 
> out)
> *  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
> indeed "async, sync"
> PS: Example query that is executed: 
> {{/jcr:root/var/eventing/jobs//element(*,slingevent:Job)[@slingevent:eventId 
> = '2018/5/11/2/12/bca505d9-3044-4de9-9732-056ab1b6c513_5569']}}
> /cc [~catholicon]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7495) async,sync index not synchronous

2018-05-14 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-7495:
-
Description: 
On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, an 
[addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
 followed by a 
[getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
 (in a different thread, though it would perhaps also fail in the same thread) 
did not see the job that was just created.

To give a bit more background: in Sling, getJobById results in a query. That 
query uses an index which is built using {{"async, sync"}}, so the assumption 
is that the index is actually synchronous. But a test reproducing the scenario 
described above showed the opposite.

Attached:
*  [^GetJobVerifier.java]: a Sling job test case with two threads: one thread 
does addJob and adds the resulting jobId to a (synchronized) list, and a 
second thread reads the jobId off that list and does a getJobById. That 
getJobById should find the job, as it was just created (how else could you 
figure out the jobId) - but sometimes it FAILs (see the FAIL in system out)
*  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
indeed "async, sync"

PS: Example query that is executed: 
{{/jcr:root/var/eventing/jobs//element(*,slingevent:Job)[@slingevent:eventId = 
'2018/5/11/2/12/bca505d9-3044-4de9-9732-056ab1b6c513_5569']}}

/cc [~catholicon]

  was:
On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, an 
[addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
 followed by a 
[getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
 (in a different thread, though it would perhaps also fail in the same thread) 
did not see the job that was just created.

To give a bit more background: in Sling, getJobById results in a query. That 
query uses an index which is built using {{"async, sync"}}, so the assumption 
is that the index is actually synchronous. But a test reproducing the scenario 
described above showed the opposite.

Attached:
*  [^GetJobVerifier.java]: a Sling job test case with two threads: one thread 
does addJob and adds the resulting jobId to a (synchronized) list, and a 
second thread reads the jobId off that list and does a getJobById. That 
getJobById should find the job, as it was just created (how else could you 
figure out the jobId) - but sometimes it FAILs (see the FAIL in system out)
*  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
indeed "async, sync"

/cc [~catholicon]


> async,sync index not synchronous
> 
>
> Key: OAK-7495
> URL: https://issues.apache.org/jira/browse/OAK-7495
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: indexing
>Affects Versions: 1.6.1
>Reporter: Stefan Egli
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: GetJobVerifier.java, slingeventJob.-1.tidy.json
>
>
> On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, 
> an 
> [addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
>  followed by a 
> [getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
>  (in a different thread, though it would perhaps also fail in the same 
> thread) did not see the job that was just created.
> To give a bit more background: in Sling, getJobById results in a query. That 
> query uses an index which is built using {{"async, sync"}}, so the 
> assumption is that the index is actually synchronous. But a test reproducing 
> the scenario described above showed the opposite.
> Attached:
> *  [^GetJobVerifier.java]: a Sling job test case with two threads: one 
> thread does addJob and adds the resulting jobId to a (synchronized) list, 
> and a second thread reads the jobId off that list and does a getJobById. 
> That getJobById should find the job, as it was just created (how else could 
> you figure out the jobId) - but sometimes it FAILs (see the FAIL in system 
> out)
> *  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
> indeed "async, sync"
> PS: Example query that is executed: 
> {{/jcr:root/var/eventing/jobs//element(*,slingevent:Job)[@slingevent:eventId 
> = 

[jira] [Updated] (OAK-7495) async,sync index not synchronous

2018-05-14 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-7495:
-
Description: 
On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, an 
[addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
 followed by a 
[getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
 (in a different thread, though it would perhaps also fail in the same thread) 
did not see the job that was just created.

To give a bit more background: in Sling, getJobById results in a query. That 
query uses an index which is built using {{"async, sync"}}, so the assumption 
is that the index is actually synchronous. But a test reproducing the scenario 
described above showed the opposite.

Attached:
*  [^GetJobVerifier.java]: a Sling job test case with two threads: one thread 
does addJob and adds the resulting jobId to a (synchronized) list, and a 
second thread reads the jobId off that list and does a getJobById. That 
getJobById should find the job, as it was just created (how else could you 
figure out the jobId) - but sometimes it FAILs (see the FAIL in system out)
*  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
indeed "async, sync"

/cc [~catholicon]

  was:
On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, an 
addJob followed by a getJobById (in a different thread, though it would 
perhaps also fail in the same thread) did not see the job that was just 
created.

To give a bit more background: in Sling, getJobById results in a query. That 
query uses an index which is built using {{"async, sync"}}, so the assumption 
is that the index is actually synchronous. But a test reproducing the scenario 
described above showed the opposite.

Attached:
*  [^GetJobVerifier.java]: a Sling job test case with two threads: one thread 
does addJob and adds the resulting jobId to a (synchronized) list, and a 
second thread reads the jobId off that list and does a getJobById. That 
getJobById should find the job, as it was just created (how else could you 
figure out the jobId) - but sometimes it FAILs (see the FAIL in system out)
*  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
indeed "async, sync"

/cc [~catholicon]


> async,sync index not synchronous
> 
>
> Key: OAK-7495
> URL: https://issues.apache.org/jira/browse/OAK-7495
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: indexing
>Affects Versions: 1.6.1
>Reporter: Stefan Egli
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: GetJobVerifier.java, slingeventJob.-1.tidy.json
>
>
> On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, 
> an 
> [addJob|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L286]
>  followed by a 
> [getJobById|https://github.com/apache/sling-old-svn-mirror/blob/org.apache.sling.event-4.2.0/src/main/java/org/apache/sling/event/impl/jobs/JobManagerImpl.java#L294]
>  (in a different thread, though it would perhaps also fail in the same 
> thread) did not see the job that was just created.
> To give a bit more background: in Sling, getJobById results in a query. That 
> query uses an index which is built using {{"async, sync"}}, so the 
> assumption is that the index is actually synchronous. But a test reproducing 
> the scenario described above showed the opposite.
> Attached:
> *  [^GetJobVerifier.java]: a Sling job test case with two threads: one 
> thread does addJob and adds the resulting jobId to a (synchronized) list, 
> and a second thread reads the jobId off that list and does a getJobById. 
> That getJobById should find the job, as it was just created (how else could 
> you figure out the jobId) - but sometimes it FAILs (see the FAIL in system 
> out)
> *  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
> indeed "async, sync"
> /cc [~catholicon]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (OAK-7495) async,sync index not synchronous

2018-05-14 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh reassigned OAK-7495:
--

Assignee: Vikas Saurabh

> async,sync index not synchronous
> 
>
> Key: OAK-7495
> URL: https://issues.apache.org/jira/browse/OAK-7495
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: indexing
>Affects Versions: 1.6.1
>Reporter: Stefan Egli
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: GetJobVerifier.java, slingeventJob.-1.tidy.json
>
>
> On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, 
> an addJob followed by a getJobById (in a different thread, though it would 
> perhaps also fail in the same thread) did not see the job that was just 
> created.
> To give a bit more background: in Sling, getJobById results in a query. That 
> query uses an index which is built using {{"async, sync"}}, so the 
> assumption is that the index is actually synchronous. But a test reproducing 
> the scenario described above showed the opposite.
> Attached:
> *  [^GetJobVerifier.java]: a Sling job test case with two threads: one 
> thread does addJob and adds the resulting jobId to a (synchronized) list, 
> and a second thread reads the jobId off that list and does a getJobById. 
> That getJobById should find the job, as it was just created (how else could 
> you figure out the jobId) - but sometimes it FAILs (see the FAIL in system 
> out)
> *  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
> indeed "async, sync"
> /cc [~catholicon]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7495) async,sync index not synchronous

2018-05-14 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-7495:


 Summary: async,sync index not synchronous
 Key: OAK-7495
 URL: https://issues.apache.org/jira/browse/OAK-7495
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: indexing
Affects Versions: 1.6.1
Reporter: Stefan Egli
 Attachments: GetJobVerifier.java, slingeventJob.-1.tidy.json

On Oak 1.6.1 (AEM 6.3) a suspicious behaviour was detected where, in Sling, an 
addJob followed by a getJobById (in a different thread, though it would 
perhaps also fail in the same thread) did not see the job that was just created.

To give a bit more background: in Sling, getJobById results in a query. That 
query uses an index which is built using {{"async, sync"}}, so the assumption 
is that the index is actually synchronous. But a test reproducing the scenario 
described above showed the opposite.
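
For illustration only, an index set up for both asynchronous and synchronous 
updates would be defined roughly as sketched below 
({{HybridIndexDefinitionSketch}} and the node/property layout are assumptions, 
not the attached [^slingeventJob.-1.tidy.json] definition):
{code:java}
// Rough sketch of a Lucene index definition whose "async" property contains
// both "async" and "sync", i.e. the variant that is expected to also be
// updated synchronously at commit time for the indexed properties.
import java.util.Arrays;

import org.apache.jackrabbit.oak.api.Type;
import org.apache.jackrabbit.oak.spi.state.NodeBuilder;

public final class HybridIndexDefinitionSketch {

    private HybridIndexDefinitionSketch() {
    }

    public static void define(NodeBuilder rootBuilder) {
        NodeBuilder index = rootBuilder.child("oak:index").child("slingeventJob");
        index.setProperty("jcr:primaryType", "oak:QueryIndexDefinition", Type.NAME);
        index.setProperty("type", "lucene");
        // Both values present: maintained by the async indexer *and*
        // expected to be kept in sync on commit.
        index.setProperty("async", Arrays.asList("async", "sync"), Type.STRINGS);
    }
}
{code}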

Attached:
*  [^GetJobVerifier.java]: a Sling job test case with two threads: one thread 
does addJob and adds the resulting jobId to a (synchronized) list, and a 
second thread reads the jobId off that list and does a getJobById. That 
getJobById should find the job, as it was just created (how else could you 
figure out the jobId) - but sometimes it FAILs (see the FAIL in system out); a 
stripped-down sketch of this handoff is shown below the list
*  [^slingeventJob.-1.tidy.json]: the index definition, showing that it is 
indeed "async, sync"

/cc [~catholicon]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7494) Build Jackrabbit Oak #1441 failed

2018-05-14 Thread Hudson (JIRA)
Hudson created OAK-7494:
---

 Summary: Build Jackrabbit Oak #1441 failed
 Key: OAK-7494
 URL: https://issues.apache.org/jira/browse/OAK-7494
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #1441 has failed.
First failed run: [Jackrabbit Oak 
#1441|https://builds.apache.org/job/Jackrabbit%20Oak/1441/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1441/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7493) RDB*Store: update Derby dependency to 10.14.2.0

2018-05-14 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16474111#comment-16474111
 ] 

Julian Reschke commented on OAK-7493:
-

trunk: [r1831560|http://svn.apache.org/r1831560]


> RDB*Store: update Derby dependency to 10.14.2.0
> ---
>
> Key: OAK-7493
> URL: https://issues.apache.org/jira/browse/OAK-7493
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10, 1.9.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-7493) RDB*Store: update Derby dependency to 10.14.2.0

2018-05-14 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-7493.
-
   Resolution: Fixed
Fix Version/s: 1.9.2
   1.10

> RDB*Store: update Derby dependency to 10.14.2.0
> ---
>
> Key: OAK-7493
> URL: https://issues.apache.org/jira/browse/OAK-7493
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10, 1.9.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-7493) RDB*Store: update Derby dependency to 10.14.2.0

2018-05-14 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7493:

Labels: candidate_oak_1_8  (was: )

> RDB*Store: update Derby dependency to 10.14.2.0
> ---
>
> Key: OAK-7493
> URL: https://issues.apache.org/jira/browse/OAK-7493
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_8
> Fix For: 1.10, 1.9.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-7493) RDB*Store: update Derby dependency to 10.14.2.0

2018-05-14 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-7493:
---

 Summary: RDB*Store: update Derby dependency to 10.14.2.0
 Key: OAK-7493
 URL: https://issues.apache.org/jira/browse/OAK-7493
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: parent
Reporter: Julian Reschke
Assignee: Julian Reschke






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7488) VersionablePathHook should be located with authorization code

2018-05-14 Thread Alex Deparvu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473970#comment-16473970
 ] 

Alex Deparvu commented on OAK-7488:
---

[~anchela], the patch looks ok.
One thing I don't completely see is the association of the version hook with 
the default model - could you explain a bit?

> VersionablePathHook should be located with authorization code
> -
>
> Key: OAK-7488
> URL: https://issues.apache.org/jira/browse/OAK-7488
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
>  Labels: m12n
> Attachments: OAK-7488.patch
>
>
> In order to clean up troublesome dependencies within oak-core, the 
> {{VersionablePathHook}} associated with the default authorization model 
> should be co-located with the latter instead of being placed inside 
> _o.a.j.oak.plugins.version_.
> [~stillalex], fyi



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7339) Fix all sidegrades breaking with UnsupportedOperationException on MissingBlobStore by introducing LoopbackBlobStore

2018-05-14 Thread Tomek Rękawek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473901#comment-16473901
 ] 

Tomek Rękawek commented on OAK-7339:


Backported to 1.6.12 in [r1831545|https://svn.apache.org/r1831545].

> Fix all sidegrades breaking with UnsupportedOperationException on 
> MissingBlobStore by introducing LoopbackBlobStore
> ---
>
> Key: OAK-7339
> URL: https://issues.apache.org/jira/browse/OAK-7339
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Affects Versions: 1.6.0, 1.8.0
>Reporter: Arek Kita
>Assignee: Tomek Rękawek
>Priority: Major
>  Labels: candidate_oak_1_2, candidate_oak_1_4, candidate_oak_1_6, 
> candidate_oak_1_8
> Fix For: 1.9.0, 1.10, 1.6.12, 1.8.4
>
> Attachments: OAK-7339-jenkins-xml-encoding-issue.patch, OAK-7339.patch
>
>
> h4. Problem
> In some edge cases, when the binary under the same path (/content/asset1) is 
> modified by 2 independent checkpoints A & B, the sidegrade without providing 
> a DataStore might fail with the following error:
> {noformat:title=An exception thrown by oak-upgrade tool}
> Caused by: java.lang.UnsupportedOperationException: null
>     at 
> org.apache.jackrabbit.oak.upgrade.cli.blob.MissingBlobStore.getInputStream(MissingBlobStore.java:62)
>     at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:47)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:276)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:86)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.openStream(AbstractBlob.java:44)
>     at com.google.common.io.ByteSource.contentEquals(ByteSource.java:344)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:67)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:227)
>     at com.google.common.base.Objects.equal(Objects.java:60)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:59)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentPropertyState.equals(SegmentPropertyState.java:242)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:617)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:511)
> (the same nested methods)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.childNodeChanged(PersistingDiff.java:191)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:440)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:483)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:432)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.applyDiffOnNodeState(PersistingDiff.java:106)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copyDiffToTarget(RepositorySidegrade.java:403)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.migrateWithCheckpoints(RepositorySidegrade.java:347)
> {noformat}
>  
> h4. Abstract of proposed solution
> The idea for migration is simple: instead of failing on:
> {code:java}
> public InputStream getInputStream(String blobId) throws IOException;
> {code}
> or
> {code:java}
> public int readBlob(String blobId, long pos, byte[] buff, int off, int 
> length) throws IOException;
> {code}
> let's introduce a BlobStore implementation that acts like a *localhost* 
> interface: whatever you send to it is sent back through the same interface.
> h4. How it works
> It works the same way as a *localhost* interface: when a *{{blobId}}* is 
> requested, the *{{blobId}}* itself is served as the binary content instead 
> *of throwing* an {{UnsupportedOperationException}}.
> This allows acting quickly on migrations that require comparing binaries in 
> order to satisfy the requirement that checkpoints be rewritten or copied 
> from scratch.
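
A minimal sketch of that loopback behaviour (illustrative only - this is not 
the actual {{LoopbackBlobStore}} from the patch, and it shows just the two 
methods quoted above rather than the full {{BlobStore}} interface):
{code:java}
// Sketch: the blobId itself is served back as the binary content instead of
// throwing UnsupportedOperationException.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class LoopbackBlobStoreSketch {

    public InputStream getInputStream(String blobId) throws IOException {
        // Echo the identifier back as the stream content.
        return new ByteArrayInputStream(blobId.getBytes(StandardCharsets.UTF_8));
    }

    public int readBlob(String blobId, long pos, byte[] buff, int off, int length)
            throws IOException {
        byte[] data = blobId.getBytes(StandardCharsets.UTF_8);
        if (pos >= data.length) {
            return -1; // nothing left to read at this offset
        }
        int toCopy = Math.min(length, data.length - (int) pos);
        System.arraycopy(data, (int) pos, buff, off, toCopy);
        return toCopy;
    }
}
{code}
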
> h4. Pros
>  * simplifies simple sidegrade migration use cases: you no longer need to 
> include your DataStore (which slows the migration down unnecessarily) on the 
> 

[jira] [Updated] (OAK-7339) Fix all sidegrades breaking with UnsupportedOperationException on MissingBlobStore by introducing LoopbackBlobStore

2018-05-14 Thread Tomek Rękawek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-7339:
---
Fix Version/s: 1.6.12

> Fix all sidegrades breaking with UnsupportedOperationException on 
> MissingBlobStore by introducing LoopbackBlobStore
> ---
>
> Key: OAK-7339
> URL: https://issues.apache.org/jira/browse/OAK-7339
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Affects Versions: 1.6.0, 1.8.0
>Reporter: Arek Kita
>Assignee: Tomek Rękawek
>Priority: Major
>  Labels: candidate_oak_1_2, candidate_oak_1_4, candidate_oak_1_6, 
> candidate_oak_1_8
> Fix For: 1.9.0, 1.10, 1.6.12, 1.8.4
>
> Attachments: OAK-7339-jenkins-xml-encoding-issue.patch, OAK-7339.patch
>
>
> h4. Problem
> In some edge cases, when the binary under the same path (/content/asset1) is 
> modified by 2 independent checkpoints A & B, the sidegrade without providing 
> a DataStore might fail with the following error:
> {noformat:title=An exception thrown by oak-upgrade tool}
> Caused by: java.lang.UnsupportedOperationException: null
>     at 
> org.apache.jackrabbit.oak.upgrade.cli.blob.MissingBlobStore.getInputStream(MissingBlobStore.java:62)
>     at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:47)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:276)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:86)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.openStream(AbstractBlob.java:44)
>     at com.google.common.io.ByteSource.contentEquals(ByteSource.java:344)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:67)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:227)
>     at com.google.common.base.Objects.equal(Objects.java:60)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:59)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentPropertyState.equals(SegmentPropertyState.java:242)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:617)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:511)
> (the same nested methods)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.childNodeChanged(PersistingDiff.java:191)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:440)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:483)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:432)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.applyDiffOnNodeState(PersistingDiff.java:106)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copyDiffToTarget(RepositorySidegrade.java:403)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.migrateWithCheckpoints(RepositorySidegrade.java:347)
> {noformat}
>  
> h4. Abstract of proposed solution
> The idea for migration is simple: instead of failing on:
> {code:java}
> public InputStream getInputStream(String blobId) throws IOException;
> {code}
> or
> {code:java}
> public int readBlob(String blobId, long pos, byte[] buff, int off, int 
> length) throws IOException;
> {code}
> let's introduce a BlobStore implementation that acts like a *localhost* 
> interface: whatever you send to it is sent back through the same interface.
> h4. How it works
> It works the same way as a *localhost* interface: when a *{{blobId}}* is 
> requested, the *{{blobId}}* itself is served as the binary content instead 
> *of throwing* an {{UnsupportedOperationException}}.
> This allows acting quickly on migrations that require comparing binaries in 
> order to satisfy the requirement that checkpoints be rewritten or copied 
> from scratch.
> h4. Pros
>  * simplifies simple sidegrade migration use cases: you no longer need to 
> include your DataStore (which slows the migration down unnecessarily) on the 
> command line when the migration is failing
>  * speeds up the migration as it doesn't 

[jira] [Updated] (OAK-7339) Fix all sidegrades breaking with UnsupportedOperationException on MissingBlobStore by introducing LoopbackBlobStore

2018-05-14 Thread Tomek Rękawek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-7339:
---
Fix Version/s: 1.8.4

> Fix all sidegrades breaking with UnsupportedOperationException on 
> MissingBlobStore by introducing LoopbackBlobStore
> ---
>
> Key: OAK-7339
> URL: https://issues.apache.org/jira/browse/OAK-7339
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Affects Versions: 1.6.0, 1.8.0
>Reporter: Arek Kita
>Assignee: Tomek Rękawek
>Priority: Major
>  Labels: candidate_oak_1_2, candidate_oak_1_4, candidate_oak_1_6, 
> candidate_oak_1_8
> Fix For: 1.9.0, 1.10, 1.8.4
>
> Attachments: OAK-7339-jenkins-xml-encoding-issue.patch, OAK-7339.patch
>
>
> h4. Problem
> In some edge cases, when the binary under the same path (/content/asset1) is 
> modified by 2 independent checkpoints A & B, the sidegrade without providing 
> a DataStore might fail with the following error:
> {noformat:title=An exception thrown by oak-upgrade tool}
> Caused by: java.lang.UnsupportedOperationException: null
>     at 
> org.apache.jackrabbit.oak.upgrade.cli.blob.MissingBlobStore.getInputStream(MissingBlobStore.java:62)
>     at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:47)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:276)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:86)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.openStream(AbstractBlob.java:44)
>     at com.google.common.io.ByteSource.contentEquals(ByteSource.java:344)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:67)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:227)
>     at com.google.common.base.Objects.equal(Objects.java:60)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:59)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentPropertyState.equals(SegmentPropertyState.java:242)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:617)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:511)
> (the same nested methods)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.childNodeChanged(PersistingDiff.java:191)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:440)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:483)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:432)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.applyDiffOnNodeState(PersistingDiff.java:106)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copyDiffToTarget(RepositorySidegrade.java:403)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.migrateWithCheckpoints(RepositorySidegrade.java:347)
> {noformat}
>  
> h4. Abstract of proposed solution
> The idea for migration is simple: instead of failing on:
> {code:java}
> public InputStream getInputStream(String blobId) throws IOException;
> {code}
> or
> {code:java}
> public int readBlob(String blobId, long pos, byte[] buff, int off, int 
> length) throws IOException;
> {code}
> let's introduce a BlobStore implementation that acts like a *localhost* 
> interface: whatever you send to it is sent back through the same interface.
> h4. How it works
> It works the same way as a *localhost* interface: when a *{{blobId}}* is 
> requested, the *{{blobId}}* itself is served as the binary content instead 
> *of throwing* an {{UnsupportedOperationException}}.
> This allows acting quickly on migrations that require comparing binaries in 
> order to satisfy the requirement that checkpoints be rewritten or copied 
> from scratch.
> h4. Pros
>  * simplifies simple sidegrade migration use cases: you no longer need to 
> include your DataStore (which slows the migration down unnecessarily) on the 
> command line when the migration is failing
>  * speeds up the migration as it doesn't require to 

[jira] [Commented] (OAK-7339) Fix all sidegrades breaking with UnsupportedOperationException on MissingBlobStore by introducing LoopbackBlobStore

2018-05-14 Thread Tomek Rękawek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473886#comment-16473886
 ] 

Tomek Rękawek commented on OAK-7339:


Backported to 1.8.4 in [r1831543|https://svn.apache.org/r1831543].

> Fix all sidegrades breaking with UnsupportedOperationException on 
> MissingBlobStore by introducing LoopbackBlobStore
> ---
>
> Key: OAK-7339
> URL: https://issues.apache.org/jira/browse/OAK-7339
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: upgrade
>Affects Versions: 1.6.0, 1.8.0
>Reporter: Arek Kita
>Assignee: Tomek Rękawek
>Priority: Major
>  Labels: candidate_oak_1_2, candidate_oak_1_4, candidate_oak_1_6, 
> candidate_oak_1_8
> Fix For: 1.9.0, 1.10, 1.8.4
>
> Attachments: OAK-7339-jenkins-xml-encoding-issue.patch, OAK-7339.patch
>
>
> h4. Problem
> In some edge cases, when the binary under the same path (/content/asset1) is 
> modified by 2 independent checkpoints A & B, the sidegrade without providing 
> a DataStore might fail with the following error:
> {noformat:title=An exception thrown by oak-upgrade tool}
> Caused by: java.lang.UnsupportedOperationException: null
>     at 
> org.apache.jackrabbit.oak.upgrade.cli.blob.MissingBlobStore.getInputStream(MissingBlobStore.java:62)
>     at 
> org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:47)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:276)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:86)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.openStream(AbstractBlob.java:44)
>     at com.google.common.io.ByteSource.contentEquals(ByteSource.java:344)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:67)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:227)
>     at com.google.common.base.Objects.equal(Objects.java:60)
>     at 
> org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:59)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentPropertyState.equals(SegmentPropertyState.java:242)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:617)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:511)
> (the same nested methods)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.childNodeChanged(PersistingDiff.java:191)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:440)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:483)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:432)
>     at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.diff(PersistingDiff.java:139)
>     at 
> org.apache.jackrabbit.oak.upgrade.PersistingDiff.applyDiffOnNodeState(PersistingDiff.java:106)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copyDiffToTarget(RepositorySidegrade.java:403)
>     at 
> org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.migrateWithCheckpoints(RepositorySidegrade.java:347)
> {noformat}
>  
> h4. Abstract of proposed solution
> The idea for migration is simple: instead of failing on:
> {code:java}
> public InputStream getInputStream(String blobId) throws IOException;
> {code}
> or
> {code:java}
> public int readBlob(String blobId, long pos, byte[] buff, int off, int 
> length) throws IOException;
> {code}
> let's introduce a BlobStore implementation that acts like a *localhost* 
> interface: whatever you send to it is sent back through the same interface.
> h4. How it works
> It works the same way as a *localhost* interface: when a *{{blobId}}* is 
> requested, the *{{blobId}}* itself is served as the binary content instead 
> *of throwing* an {{UnsupportedOperationException}}.
> This allows acting quickly on migrations that require comparing binaries in 
> order to satisfy the requirement that checkpoints be rewritten or copied 
> from scratch.
> h4. Pros
>  * simplifies simple sidegrade migration use cases: you no longer need to 
> include your DataStore (which slows the migration down unnecessarily) on the 
> command line 

[jira] [Commented] (OAK-7490) oak-run console lc rmdata command uses second parameter for index path (and defaults to /oak:index/lucene)

2018-05-14 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473878#comment-16473878
 ] 

Chetan Mehrotra commented on OAK-7490:
--

bq. Destructive functionality, imo, shouldn't have a default value for things 
to be deleted

+1. Thanks [~catholicon] for fixing this!

> oak-run console lc rmdata command uses second parameter for index path (and 
> defaults to /oak:index/lucene)
> --
>
> Key: OAK-7490
> URL: https://issues.apache.org/jira/browse/OAK-7490
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: oak-run
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.10, 1.8.3, 1.2.30, 1.4.22, 1.6.12
>
>
> The {{rmdata}} command currently takes 2 arguments, where the 1st one isn't 
> even used - the second one represents the index to be deleted.
> Worse yet, the command defaults the index to be deleted to 
> {{/oak:index/lucene}} - which can lead to unintentional deletion.
> Destructive functionality, imo, shouldn't have a default value for things to 
> be deleted. Also, of course, we should fix it so that only 1 parameter is 
> expected and parsed.
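
Illustrative only (this is not the actual oak-run console code), the fix 
amounts to requiring an explicit target and dropping the default:
{code:java}
// Sketch: a destructive command should take exactly one, explicit argument
// and must not fall back to a default such as /oak:index/lucene.
public final class RmDataArgsSketch {

    private RmDataArgsSketch() {
    }

    public static String indexPath(String[] args) {
        if (args.length != 1) {
            throw new IllegalArgumentException(
                    "rmdata expects exactly one argument: the path of the index to delete");
        }
        return args[0]; // no implicit default
    }
}
{code}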



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)