[jira] [Commented] (OAK-8048) VersionHistory not removed when removing node and all its versions

2019-02-18 Thread Marco Piovesana (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771670#comment-16771670
 ] 

Marco Piovesana commented on OAK-8048:
--

Hi Marcel,

here is the code snippet:
{code:java}
@Test
public void shouldDeleteNodeHistory() throws IOException,
        InvalidFileStoreVersionException, RepositoryException {
    File root = new File(System.getProperty("java.io.tmpdir"), "oakTest");
    File repo = new File(root, "content");

    FileDataStore fileDataStore = new FileDataStore();
    fileDataStore.init(repo.getAbsolutePath());
    DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(fileDataStore);
    FileStore fileStore = FileStoreBuilder.fileStoreBuilder(repo)
            .withBlobStore(dataStoreBlobStore).build();
    SegmentNodeStore nodeStore = SegmentNodeStoreBuilders.builder(fileStore).build();

    Oak oak = new Oak(nodeStore);
    Jcr jcrRepo = new Jcr(oak);
    Repository repository = jcrRepo.createRepository();
    try {
        Session adminSession = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        adminSession.save();

        Node myFolder = JcrUtils.getOrAddNode(adminSession.getRootNode(),
                "my node", JcrConstants.NT_UNSTRUCTURED);
        myFolder.addMixin(JcrConstants.MIX_VERSIONABLE);
        myFolder.addMixin(AccessControlConstants.MIX_REP_ACCESS_CONTROLLABLE);
        adminSession.save();

        adminSession.getWorkspace().getVersionManager().checkout(myFolder.getPath());
        adminSession.getWorkspace().getVersionManager().checkin(myFolder.getPath());
        adminSession.getWorkspace().getVersionManager().checkout(myFolder.getPath());
        adminSession.getWorkspace().getVersionManager().checkin(myFolder.getPath());

        VersionHistory versionHistory = adminSession.getWorkspace()
                .getVersionManager().getVersionHistory(myFolder.getPath());
        String historyNodePath = versionHistory.getPath();
        VersionIterator allVersions = versionHistory.getAllVersions();
        myFolder.remove();
        adminSession.save();
        while (allVersions.hasNext()) {
            Version version = allVersions.nextVersion();
            if (!version.getName().equals(JcrConstants.JCR_ROOTVERSION)) {
                versionHistory.removeVersion(version.getName());
            }
        }
        adminSession.save();
        boolean historyExists = adminSession.itemExists(historyNodePath);
        adminSession.logout();

        assertFalse(historyExists);
    } finally {
        fileStore.close();
        ((JackrabbitRepository) repository).shutdown();
    }
}
{code}
 

> VersionHistory not removed when removing node and all its versions
> --
>
> Key: OAK-8048
> URL: https://issues.apache.org/jira/browse/OAK-8048
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.8.9
>Reporter: Marco Piovesana
>Priority: Major
>
> Hi all,
> I'm trying to delete a node and all its versions, but the version history is 
> not removed. I'm doing the following steps (as described in OAK-4370 and 
> JCR-34):
>  # retrieve the version history
>  # delete the node and save the session
>  # delete all versions except for the base version
>  # save the session
> The versions are all gone, but the versionHistory node and the base version 
> node are still there. Am I doing something wrong? 
> The only test related to this that I found is 
> {{ReadOnlyVersionManagerTest.testRemoveEmptyHistoryAfterRemovingVersionable}}.
>  It does work, but uses Oak-related classes rather than the JCR interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OAK-8060) Incorrect read preference when parentId refers to NodeDocument.NULL

2019-02-18 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-8060:
-

 Summary: Incorrect read preference when parentId refers to 
NodeDocument.NULL
 Key: OAK-8060
 URL: https://issues.apache.org/jira/browse/OAK-8060
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 1.10.0, 1.8.0, 1.6.0, 1.4.0, 1.0.2, 1.2.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.12


The method {{MongoDocumentStore.getMongoReadPreference()}} recommends an 
incorrect read preference when the cache contains a {{NodeDocument.NULL}} entry 
for the {{parentId}} and the configured read preference (e.g. via the MongoDB 
URI) is not set to primary.





[jira] [Commented] (OAK-8048) VersionHistory not removed when removing node and all its versions

2019-02-18 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771662#comment-16771662
 ] 

Marcel Reutegger commented on OAK-8048:
---

Thanks for reporting this issue. Can you please attach a test that reproduces 
the problem?

> VersionHistory not removed when removing node and all its versions
> --
>
> Key: OAK-8048
> URL: https://issues.apache.org/jira/browse/OAK-8048
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.8.9
>Reporter: Marco Piovesana
>Priority: Major
>
> Hi all,
> I'm trying to delete a node and all its versions, but the version history is 
> not removed. I'm doing the following steps (as described in OAK-4370 and 
> JCR-34):
>  # retrieve the version history
>  # delete the node and save the session
>  # delete all versions except for the base version
>  # save the session
> The versions are all gone, but the versionHistory node and the base version 
> node are still there. Am I doing something wrong? 
> The only test related to this that I found is 
> {{ReadOnlyVersionManagerTest.testRemoveEmptyHistoryAfterRemovingVersionable}}.
>  It does work, but uses Oak-related classes rather than the JCR interface.





[jira] [Created] (OAK-8059) Update Jackson dependency to 2.9.8

2019-02-18 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-8059:
---

 Summary: Update Jackson dependency to 2.9.8
 Key: OAK-8059
 URL: https://issues.apache.org/jira/browse/OAK-8059
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: parent
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.12








[jira] [Updated] (OAK-8058) RDB*Store: update Tomcat JDBC pool dependency to 8.5.38

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8058:

Issue Type: Technical task  (was: Task)
Parent: OAK-1266

> RDB*Store: update Tomcat JDBC pool dependency to 8.5.38
> ---
>
> Key: OAK-8058
> URL: https://issues.apache.org/jira/browse/OAK-8058
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: documentmk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>






[jira] [Updated] (OAK-8058) RDB*Store: update Tomcat JDBC pool dependency to 8.5.38

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8058:

Fix Version/s: 1.12

> RDB*Store: update Tomcat JDBC pool dependency to 8.5.38
> ---
>
> Key: OAK-8058
> URL: https://issues.apache.org/jira/browse/OAK-8058
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: documentmk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.12
>
>






[jira] [Created] (OAK-8058) RDB*Store: update Tomcat JDBC pool dependency to 8.5.38

2019-02-18 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-8058:
---

 Summary: RDB*Store: update Tomcat JDBC pool dependency to 8.5.38
 Key: OAK-8058
 URL: https://issues.apache.org/jira/browse/OAK-8058
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: documentmk, rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke








[jira] [Commented] (OAK-8046) Result items are not always correctly counted against the configured read limit if a query uses a lucene index

2019-02-18 Thread Vikas Saurabh (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771436#comment-16771436
 ] 

Vikas Saurabh commented on OAK-8046:


[~tmueller] can you please review [^OAK-8046.patch]? In the meantime I'm 
working on test cases, which is turning out to be a bit harder: it is 
difficult to refresh the index without consuming the whole cursor.
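The double counting described in the quoted issue can be sketched generically (hypothetical code, not the Oak cursor implementation): when the index is re-opened mid-iteration, the reader starts over and skips the rows it already returned, but a naive counter still charges the skipped rows against the read limit a second time.

```java
import java.util.List;

public class ReadLimitSketch {
    // Simulates an index re-opened after 'restartAt' rows were returned:
    // the reader starts over and skips the already-returned rows, but a
    // naive counter charges those skipped rows against the read limit again.
    static int naiveReadCount(List<String> rows, int restartAt) {
        int reads = 0;
        // First pass: rows 0..restartAt-1 are returned and counted.
        for (int i = 0; i < restartAt; i++) {
            reads++;
        }
        // Index re-opened: every row is read again; the first 'restartAt'
        // rows are merely skipped, yet each read still increments the counter.
        for (int i = 0; i < rows.size(); i++) {
            reads++;
        }
        return reads;
    }

    // Counting each logical result exactly once, regardless of restarts.
    static int dedupedReadCount(List<String> rows, int restartAt) {
        return rows.size();
    }

    public static void main(String[] args) {
        List<String> rows = List.of("a", "b", "c", "d");
        System.out.println(naiveReadCount(rows, 2));   // the first two rows count twice
        System.out.println(dedupedReadCount(rows, 2));
    }
}
```

With four rows and a restart after two, the naive counter charges six reads against the limit instead of four.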

> Result items are not always correctly counted against the configured read 
> limit if a query uses a lucene index 
> ---
>
> Key: OAK-8046
> URL: https://issues.apache.org/jira/browse/OAK-8046
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.8.7
>Reporter: Georg Henzler
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: OAK-8046.patch
>
>
> There are cases where an index is re-opened during query execution. In that 
> case, already returned entries are read again and skipped, so they are 
> basically counted twice. This should be fixed so that entries are only 
> counted once (see also [1]).
> The issue most likely exists since the read limit was introduced with OAK-6875.
> [1] 
> https://lists.apache.org/thread.html/dddf9834fee0bccb6e48f61ba2a01430e34fc0b464b12809f7dfe2eb@%3Coak-dev.jackrabbit.apache.org%3E





[jira] [Assigned] (OAK-8046) Result items are not always correctly counted against the configured read limit if a query uses a lucene index

2019-02-18 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh reassigned OAK-8046:
--

Assignee: Vikas Saurabh

> Result items are not always correctly counted against the configured read 
> limit if a query uses a lucene index 
> ---
>
> Key: OAK-8046
> URL: https://issues.apache.org/jira/browse/OAK-8046
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.8.7
>Reporter: Georg Henzler
>Assignee: Vikas Saurabh
>Priority: Major
> Attachments: OAK-8046.patch
>
>
> There are cases where an index is re-opened during query execution. In that 
> case, already returned entries are read again and skipped, so they are 
> basically counted twice. This should be fixed so that entries are only 
> counted once (see also [1]).
> The issue most likely exists since the read limit was introduced with OAK-6875.
> [1] 
> https://lists.apache.org/thread.html/dddf9834fee0bccb6e48f61ba2a01430e34fc0b464b12809f7dfe2eb@%3Coak-dev.jackrabbit.apache.org%3E





[jira] [Updated] (OAK-8046) Result items are not always correctly counted against the configured read limit if a query uses a lucene index

2019-02-18 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-8046:
---
Attachment: OAK-8046.patch

> Result items are not always correctly counted against the configured read 
> limit if a query uses a lucene index 
> ---
>
> Key: OAK-8046
> URL: https://issues.apache.org/jira/browse/OAK-8046
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.8.7
>Reporter: Georg Henzler
>Priority: Major
> Attachments: OAK-8046.patch
>
>
> There are cases where an index is re-opened during query execution. In that 
> case, already returned entries are read again and skipped, so they are 
> basically counted twice. This should be fixed so that entries are only 
> counted once (see also [1]).
> The issue most likely exists since the read limit was introduced with OAK-6875.
> [1] 
> https://lists.apache.org/thread.html/dddf9834fee0bccb6e48f61ba2a01430e34fc0b464b12809f7dfe2eb@%3Coak-dev.jackrabbit.apache.org%3E





[jira] [Commented] (OAK-8040) Build Jackrabbit Oak #1941 failed

2019-02-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771252#comment-16771252
 ] 

Hudson commented on OAK-8040:
-

The previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#1954|https://builds.apache.org/job/Jackrabbit%20Oak/1954/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/1954/console]

> Build Jackrabbit Oak #1941 failed
> -
>
> Key: OAK-8040
> URL: https://issues.apache.org/jira/browse/OAK-8040
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #1941 has failed.
> First failed run: [Jackrabbit Oak 
> #1941|https://builds.apache.org/job/Jackrabbit%20Oak/1941/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/1941/console]





[jira] [Updated] (OAK-8033) Node states sometimes refer to more than a single generation of segments after a full compaction

2019-02-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/OAK-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-8033:
---
Fix Version/s: 1.8.12
   1.10.1
   1.11.0
   1.6.17

> Node states sometimes refer to more than a single generation of segments 
> after a full compaction
> 
>
> Key: OAK-8033
> URL: https://issues.apache.org/jira/browse/OAK-8033
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.10.0, 1.8.10, 1.6.16, 1.8.11, 1.10
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
>  Labels: TarMK, candidate_oak_1_10, candidate_oak_1_6, 
> candidate_oak_1_8
> Fix For: 1.6.17, 1.11.0, 1.10.1, 1.8.12
>
>
> Due to a regression introduced with OAK-7867 a full compaction can sometimes 
> cause nodes that are written concurrently to reference segments from more 
> than a single gc generation.
> This happens when the {{borrowWriter}} method needs to [create a new 
> writer|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentBufferWriterPool.java#L197-L201].
>  In this case the new writer will be of the generation of the current head 
> state instead of the generation associated with the current write operation 
> in progress.
>  
> cc [~frm], [~ahanikel]
>  





[jira] [Commented] (OAK-8033) Node states sometimes refer to more than a single generation of segments after a full compaction

2019-02-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/OAK-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771192#comment-16771192
 ] 

Michael Dürig commented on OAK-8033:


Merged into 1.6 at [http://svn.apache.org/viewvc?rev=1853814&view=rev]

Merged into 1.8 at [http://svn.apache.org/viewvc?rev=1853813&view=rev]

Merged into 1.10 at [http://svn.apache.org/viewvc?rev=1853812&view=rev]

> Node states sometimes refer to more than a single generation of segments 
> after a full compaction
> 
>
> Key: OAK-8033
> URL: https://issues.apache.org/jira/browse/OAK-8033
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.10.0, 1.8.10, 1.6.16, 1.8.11, 1.10
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
>  Labels: TarMK, candidate_oak_1_10, candidate_oak_1_6, 
> candidate_oak_1_8
>
> Due to a regression introduced with OAK-7867 a full compaction can sometimes 
> cause nodes that are written concurrently to reference segments from more 
> than a single gc generation.
> This happens when the {{borrowWriter}} method needs to [create a new 
> writer|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentBufferWriterPool.java#L197-L201].
>  In this case the new writer will be of the generation of the current head 
> state instead of the generation associated with the current write operation 
> in progress.
>  
> cc [~frm], [~ahanikel]
>  





[jira] [Resolved] (OAK-8033) Node states sometimes refer to more than a single generation of segments after a full compaction

2019-02-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/OAK-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-8033.

Resolution: Fixed

> Node states sometimes refer to more than a single generation of segments 
> after a full compaction
> 
>
> Key: OAK-8033
> URL: https://issues.apache.org/jira/browse/OAK-8033
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.10.0, 1.8.10, 1.6.16, 1.8.11, 1.10
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
>  Labels: TarMK, candidate_oak_1_10, candidate_oak_1_6, 
> candidate_oak_1_8
> Fix For: 1.6.17, 1.11.0, 1.10.1, 1.8.12
>
>
> Due to a regression introduced with OAK-7867 a full compaction can sometimes 
> cause nodes that are written concurrently to reference segments from more 
> than a single gc generation.
> This happens when the {{borrowWriter}} method needs to [create a new 
> writer|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentBufferWriterPool.java#L197-L201].
>  In this case the new writer will be of the generation of the current head 
> state instead of the generation associated with the current write operation 
> in progress.
>  
> cc [~frm], [~ahanikel]
>  





[jira] [Created] (OAK-8057) ItemExistsException in ImporterImpl despite COLLISION_REPLACE_EXISTING

2019-02-18 Thread Hans-Peter Stoerr (JIRA)
Hans-Peter Stoerr created OAK-8057:
--

 Summary: ItemExistsException in ImporterImpl despite 
COLLISION_REPLACE_EXISTING
 Key: OAK-8057
 URL: https://issues.apache.org/jira/browse/OAK-8057
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: jcr
Affects Versions: 1.8.8
 Environment: Sling Launchpad 11
Reporter: Hans-Peter Stoerr


I tried exporting a node as document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which IMHO should always work and not change that node, right? 
Unfortunately, in some cases that throws an ItemExistsException "Node with the 
same UUID exists..." which, to my understanding, shouldn't be thrown. I ran 
the following code:


{code:java}
ByteArrayOutputStream sout = new ByteArrayOutputStream();
session.exportDocumentView(node.getPath(), sout, false, false);
LOG.info("Document View:\n{}", sout.toString("UTF-8"));
XMLReader xmlreader = saxParserFactory.newSAXParser().getXMLReader();
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
        ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
xmlreader.parse(new InputSource(new ByteArrayInputStream(sout.toByteArray())));
{code}
 
The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file. The problem lies in 
the condition at lines 419/420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]: 
`IdentifierManager.getIdentifier(existing)` always returns something - if 
there is no `jcr:uuid` it returns a path. But `id` is null here (the node has 
no jcr:uuid in the imported XML), so the condition always fails if that branch 
is reached. Perhaps that comparison should use the actual `jcr:uuid` of 
`existing`?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be


    !existingIdentifier.equals(id)
            || (uuidBehavior != ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
            && uuidBehavior != ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING)))
 
- that is, if it is really correct.
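For what it's worth, the suggested rewrite is just De Morgan's law applied to a condition of the shape !(a && (b || c)). Abstracting the equals/uuidBehavior tests into plain booleans (a purely illustrative reduction, not the actual ImporterImpl code), a brute-force check over all assignments confirms the two forms agree:

```java
public class DeMorganCheck {
    // a stands for existingIdentifier.equals(id)
    // b stands for uuidBehavior == IMPORT_UUID_COLLISION_REMOVE_EXISTING
    // c stands for uuidBehavior == IMPORT_UUID_COLLISION_REPLACE_EXISTING
    static boolean original(boolean a, boolean b, boolean c) {
        return !(a && (b || c));
    }

    // De Morgan'd form: !a || (!b && !c); note that `uuidBehavior != X`
    // is exactly the negation of `uuidBehavior == X`.
    static boolean rewritten(boolean a, boolean b, boolean c) {
        return !a || (!b && !c);
    }

    public static void main(String[] args) {
        boolean[] values = {false, true};
        for (boolean a : values)
            for (boolean b : values)
                for (boolean c : values)
                    if (original(a, b, c) != rewritten(a, b, c))
                        throw new AssertionError("forms differ at " + a + "," + b + "," + c);
        System.out.println("equivalent for all 8 assignments");
    }
}
```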
 
I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!





[jira] [Commented] (OAK-8014) Commits carrying over from previous GC generation can block other threads from committing

2019-02-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/OAK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771186#comment-16771186
 ] 

Michael Dürig commented on OAK-8014:


Thanks for those suggestions, [~ahanikel]. Merged into 
https://github.com/mduerig/jackrabbit-oak/commits/OAK-8014

> Commits carrying over from previous GC generation can block other threads 
> from committing
> -
>
> Key: OAK-8014
> URL: https://issues.apache.org/jira/browse/OAK-8014
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: 1.10.0, 1.8.11
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: TarMK
> Fix For: 1.12, 1.11.0, 1.8.12
>
> Attachments: OAK-8014.patch
>
>
> A commit that is based on a previous (full) generation can block other 
> commits from progressing for a long time. This happens because such a commit 
> will do a deep copy of its state to avoid linking to old segments (see 
> OAK-3348). Most of the deep copying is usually avoided by the deduplication 
> caches. However, in cases where the cache hit rate is not good enough we have 
> seen deep copy operations up to several minutes. Sometimes this deep copy 
> operation happens inside the commit lock of 
> {{LockBasedScheduler.schedule()}}, which then causes all other commits to 
> become blocked.
> cc [~rma61...@adobe.com], [~edivad]





[jira] [Updated] (OAK-8057) ItemExistsException in ImporterImpl despite COLLISION_REPLACE_EXISTING

2019-02-18 Thread Hans-Peter Stoerr (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans-Peter Stoerr updated OAK-8057:
---
Description: 
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
of a node and re-importing it using javax.jcr.Session#getImportContentHandler 
with ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases, that shouldn't be thrown to my understanding. I ran 
the following code:
{code:java}
ByteArrayOutputStream sout = new ByteArrayOutputStream();
session.exportDocumentView(node.getPath(), sout, false, false);
LOG.info("Document View:\n{}", sout.toString("UTF-8"));
XMLReader xmlreader = saxParserFactory.newSAXParser().getXMLReader();
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
xmlreader.parse(new InputSource(new ByteArrayInputStream(sout.toByteArray())));{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file . The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]
 : IdentifierManager.getIdentifier(existing) always returns something - if 
there is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the condition always fails if that branch is 
reached. Perhaps that comparison should use the actual jcr:uuid of 'existing' ?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
if (!existingIdentifier.equals(id)
|| (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
&& uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!

PS: I noticed the process of exporting and re-importing DocumentView XML also 
changes the datatype of boolean attributes to String. But I'm not sure whether 
this is a bug or a feature.

  was:
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases, that shouldn't be thrown to my understanding. I ran 
the following code:
{code:java}
ByteArrayOutputStream sout = new ByteArrayOutputStream();
session.exportDocumentView(node.getPath(), sout, false, false);
LOG.info("Document View:\n{}", sout.toString("UTF-8"));
XMLReader xmlreader = saxParserFactory.newSAXParser().getXMLReader();
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
xmlreader.parse(new InputSource(new ByteArrayInputStream(sout.toByteArray())));{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file . The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]
 : IdentifierManager.getIdentifier(existing) always returns something - if 
there is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the condition always fails if that branch is 
reached. Perhaps that comparison should use the actual jcr:uuid of 'existing' ?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
if (!existingIdentifier.equals(id)
|| (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
&& uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!

PS: I noticed the process of exporting and re-importing DocumentView XML also 
changes the datatype of boolean attributes to String. But I'm not sure whether 
this is a bug or a feature.


> ItemExistsException

[jira] [Updated] (OAK-8057) ItemExistsException in ImporterImpl despite COLLISION_REPLACE_EXISTING

2019-02-18 Thread Hans-Peter Stoerr (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans-Peter Stoerr updated OAK-8057:
---
Description: 
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases, that shouldn't be thrown to my understanding. I ran 
the following code:
{code:java}
ByteArrayOutputStream sout = new ByteArrayOutputStream();
session.exportDocumentView(node.getPath(), sout, false, false);
LOG.info("Document View:\n{}", sout.toString("UTF-8"));
XMLReader xmlreader = saxParserFactory.newSAXParser().getXMLReader();
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
xmlreader.parse(new InputSource(new ByteArrayInputStream(sout.toByteArray())));{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file . The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]
 : IdentifierManager.getIdentifier(existing) always returns something - if 
there is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the condition always fails if that branch is 
reached. Perhaps that comparison should use the actual jcr:uuid of 'existing' ?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
if (!existingIdentifier.equals(id)
|| (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
&& uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!

PS: I noticed the process of exporting and re-importing DocumentView XML also 
changes the datatype of boolean attributes to String. But I'm not sure whether 
this is a bug or a feature.

  was:
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases, that shouldn't be thrown to my understanding. I ran 
the following code:
{code:java}
            ByteArrayOutputStream sout = new ByteArrayOutputStream();
             session.exportDocumentView(node.getPath(), sout, false, false);
             LOG.info("Document View:\n{}", sout.toString("UTF-8"));
             XMLReader xmlreader = 
saxParserFactory.newSAXParser().getXMLReader();
             
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
xmlreader.parse(new InputSource(new ByteArrayInputStream(sout.toByteArray())));{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file . The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]
 : IdentifierManager.getIdentifier(existing) always returns something - if 
there is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the condition always fails if that branch is 
reached. Perhaps that comparison should use the actual jcr:uuid of 'existing' ?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
    !existingIdentifier.equals(id)
                             || (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
                             && uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
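For what it's worth, the equivalence can be checked mechanically over all truth assignments. A minimal, self-contained sketch, with a standing in for existingIdentifier.equals(id) and b/c for the two uuidBehavior comparisons; the un-rewritten form !(a && (b || c)) is assumed here, since only the rewritten condition is quoted above:

```java
// Truth-table check that the suggested rewrite is De Morgan-equivalent to the
// negated original condition. 'a' stands for existingIdentifier.equals(id),
// 'b' and 'c' for the two uuidBehavior comparisons; the shape of the original
// condition is an assumption, not a quote from ImporterImpl.
public class DeMorganCheck {

    static boolean original(boolean a, boolean b, boolean c) {
        return !(a && (b || c));
    }

    static boolean rewritten(boolean a, boolean b, boolean c) {
        // !existingIdentifier.equals(id)
        //     || (uuidBehavior != REMOVE_EXISTING && uuidBehavior != REPLACE_EXISTING)
        return !a || (!b && !c);
    }

    static boolean equivalentForAllInputs() {
        // exhaustively compare both forms over all 8 combinations
        for (int i = 0; i < 8; i++) {
            boolean a = (i & 1) != 0, b = (i & 2) != 0, c = (i & 4) != 0;
            if (original(a, b, c) != rewritten(a, b, c)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        if (!equivalentForAllInputs()) {
            throw new AssertionError("rewrite is not equivalent");
        }
        System.out.println("equivalent for all 8 truth assignments");
    }
}
```

So the rewrite is safe as a pure readability change, provided the assumed original form is right.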
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!

PS: I noticed the process of exporting and re-importing DocumentView XML also 
changes the datatyp

[jira] [Updated] (OAK-8057) ItemExistsException in ImporterImpl despite COLLISION_REPLACE_EXISTING

2019-02-18 Thread Hans-Peter Stoerr (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans-Peter Stoerr updated OAK-8057:
---
Description: 
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases where, to my understanding, it shouldn't be thrown. I 
ran the following code:
{code:java}
            ByteArrayOutputStream sout = new ByteArrayOutputStream();
             session.exportDocumentView(node.getPath(), sout, false, false);
             LOG.info("Document View:\n{}", sout.toString("UTF-8"));
             XMLReader xmlreader = 
saxParserFactory.newSAXParser().getXMLReader();
             
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
             xmlreader.parse(new InputSource(new 
ByteArrayInputStream(sout.toByteArray())));{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file. The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]: 
IdentifierManager.getIdentifier(existing) always returns something - if there 
is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the equality check always fails and the 
exception is thrown whenever that branch is reached. Perhaps that comparison 
should use the actual jcr:uuid of 'existing'?
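A minimal sketch of what that suggested comparison could look like, reduced to plain strings; the helper name and signature are made up for illustration and are not the actual ImporterImpl code:

```java
// Illustrative sketch of the suggested fix: base the identity check on the
// actual jcr:uuid of the existing node, and treat a missing id in the imported
// XML as "no identity conflict" instead of always mismatching (which is what
// comparing against a path-based identifier effectively does when id == null).
// Method name and plain-String types are hypothetical.
public class UuidIdentityCheck {

    static boolean identityMismatch(String existingJcrUuid, String importedId) {
        if (importedId == null) {
            // the imported node carries no jcr:uuid -> nothing to conflict with
            return false;
        }
        return !importedId.equals(existingJcrUuid);
    }

    public static void main(String[] args) {
        // nt:file child without jcr:uuid: must not look like a collision
        if (identityMismatch(null, null)) throw new AssertionError();
        // same jcr:uuid on both sides: same node, no conflict
        if (identityMismatch("u-1", "u-1")) throw new AssertionError();
        // genuinely different identities: conflict
        if (!identityMismatch("u-1", "u-2")) throw new AssertionError();
        System.out.println("identity check behaves as suggested");
    }
}
```

With a check like this, the nt:file round-trip case would fall through instead of raising ItemExistsException.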

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
    !existingIdentifier.equals(id)
                             || (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
                             && uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!

PS: I noticed the process of exporting and re-importing DocumentView XML also 
changes the datatype of boolean attributes to String. But I'm not sure whether 
this is a bug or a feature.

  was:
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases where, to my understanding, it shouldn't be thrown. I 
ran the following code:
{code:java}
            ByteArrayOutputStream sout = new ByteArrayOutputStream();
             session.exportDocumentView(node.getPath(), sout, false, false);
             LOG.info("Document View:\n{}", sout.toString("UTF-8"));
             XMLReader xmlreader = 
saxParserFactory.newSAXParser().getXMLReader();
             
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
             xmlreader.parse(new InputSource(new 
ByteArrayInputStream(sout.toByteArray())));{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file. The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]: 
IdentifierManager.getIdentifier(existing) always returns something - if there 
is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the equality check always fails and the 
exception is thrown whenever that branch is reached. Perhaps that comparison 
should use the actual jcr:uuid of 'existing'?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
    !existingIdentifier.equals(id)
                             || (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
                             && uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

[jira] [Updated] (OAK-8057) ItemExistsException in ImporterImpl despite COLLISION_REPLACE_EXISTING

2019-02-18 Thread Hans-Peter Stoerr (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans-Peter Stoerr updated OAK-8057:
---
Description: 
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases where, to my understanding, it shouldn't be thrown. I 
ran the following code:
{code:java}
            ByteArrayOutputStream sout = new ByteArrayOutputStream();
             session.exportDocumentView(node.getPath(), sout, false, false);
             LOG.info("Document View:\n{}", sout.toString("UTF-8"));
             XMLReader xmlreader = 
saxParserFactory.newSAXParser().getXMLReader();
             
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
             xmlreader.parse(new InputSource(new 
ByteArrayInputStream(sout.toByteArray(;{code}
 
 The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file . The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]
 : IdentifierManager.getIdentifier(existing) always returns something - if 
there is no jcr:uuid it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the condition always fails if that branch is 
reached. Perhaps that comparison should use the actual jcr:uuid of 'existing' ?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be
{code:java}
    !existingIdentifier.equals(id)
                             || (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
                             && uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING))){code}
that is, if it is really correct.
  
 I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!

  was:
I tried exporting a document view XML with javax.jcr.Session#exportDocumentView 
and re-importing it using javax.jcr.Session#getImportContentHandler with 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING into the same node's 
parent, which should IMHO always work and not change that node, right? 
Unfortunately that throws an ItemExistsException "Node with the same UUID 
exists..." in some cases where, to my understanding, it shouldn't be thrown. I 
ran the following code:


            ByteArrayOutputStream sout = new ByteArrayOutputStream();
            session.exportDocumentView(node.getPath(), sout, false, false);
            LOG.info("Document View:\n{}", sout.toString("UTF-8"));
            XMLReader xmlreader = 
saxParserFactory.newSAXParser().getXMLReader();
            
xmlreader.setContentHandler(session.getImportContentHandler(node.getParent().getPath(),
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING));
            xmlreader.parse(new InputSource(new 
ByteArrayInputStream(sout.toByteArray())));
 
The simplest example where this throws an unwarranted ItemExistsException is 
when you run this on a node of jcr:primaryType nt:file. The problem lies in 
the condition at lines 419 / 420 of 
[ImporterImpl.java|https://github.com/apache/jackrabbit-oak/blob/jackrabbit-oak-1.9.13/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/xml/ImporterImpl.java]: 
`IdentifierManager.getIdentifier(existing)` always returns something - if there 
is no `jcr:uuid` it returns a path. But id is null here (the node has no 
jcr:uuid in the imported XML), so the equality check always fails and the 
exception is thrown whenever that branch is reached. Perhaps that comparison 
should use the actual `jcr:uuid` of `existing`?

By the way: I suggest applying De Morgan's law to the condition in line 420 to 
make it more readable. That'd be


    !existingIdentifier.equals(id)
                            || (uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REMOVE_EXISTING
                            && uuidBehavior != 
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING)))
 
- that is, if it is really correct.
 
I tried this with version 1.8.8 in the Sling Launchpad, but I suppose this 
problem persists with 1.9.13 since that code is unchanged.

Thanks so much!


> ItemExistsException in ImporterImpl despite COLLISION_REPLACE_EXISTING
> --
>
> Key: OAK-8057
> U

[jira] [Comment Edited] (OAK-8043) RDB: expose DDL generation functionality in oak-run

2019-02-18 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16766030#comment-16766030
 ] 

Julian Reschke edited comment on OAK-8043 at 2/18/19 3:38 PM:
--

trunk: [r1853808|http://svn.apache.org/r1853808] 
[r1853433|http://svn.apache.org/r1853433]
1.10: [r1853457|http://svn.apache.org/r1853457]
1.8: [r1853477|http://svn.apache.org/r1853477]



was (Author: reschke):
trunk: [r1853433|http://svn.apache.org/r1853433]
1.10: [r1853457|http://svn.apache.org/r1853457]
1.8: [r1853477|http://svn.apache.org/r1853477]


> RDB: expose DDL generation functionality in oak-run
> ---
>
> Key: OAK-8043
> URL: https://issues.apache.org/jira/browse/OAK-8043
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: oak-run, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_6
> Fix For: 1.12, 1.11.0, 1.10.1, 1.8.12
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8014) Commits carrying over from previous GC generation can block other threads from committing

2019-02-18 Thread Axel Hanikel (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771153#comment-16771153
 ] 

Axel Hanikel commented on OAK-8014:
---

[~mduerig] Don't we have to also check the underlying lock in 
LockFixture#assertUnlock? As in 
https://github.com/ahanikel/jackrabbit-oak/commit/bfe93a0d922c8942209da1ab1d25f700477e1deb

> Commits carrying over from previous GC generation can block other threads 
> from committing
> -
>
> Key: OAK-8014
> URL: https://issues.apache.org/jira/browse/OAK-8014
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-tar
>Affects Versions: 1.10.0, 1.8.11
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: TarMK
> Fix For: 1.12, 1.11.0, 1.8.12
>
> Attachments: OAK-8014.patch
>
>
> A commit that is based on a previous (full) generation can block other 
> commits from progressing for a long time. This happens because such a commit 
> will do a deep copy of its state to avoid linking to old segments (see 
> OAK-3348). Most of the deep copying is usually avoided by the deduplication 
> caches. However, in cases where the cache hit rate is not good enough we have 
> seen deep copy operations up to several minutes. Sometimes this deep copy 
> operation happens inside the commit lock of 
> {{LockBasedScheduler.schedule()}}, which then causes all other commits to 
> become blocked.
> cc [~rma61...@adobe.com], [~edivad]





[jira] [Commented] (OAK-7182) Make it possible to update Guava

2019-02-18 Thread Robert Munteanu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771136#comment-16771136
 ] 

Robert Munteanu commented on OAK-7182:
--

[~stillalex] - I eventually managed to take a look at your diff. The fact that 
you had to basically replicate the Guava API in a compat layer is indeed a bad 
smell.

I would suggest that - if needed - we simply do away with the Guava APIs and 
implement our own, e.g. instead of having 
{{org.apache.jackrabbit.oak.commons.guava.MoreExecutorsCompat}} we have a 
{{org.apache.jackrabbit.oak.commons.concurrent.OakExecutors}} where we supply 
whatever is needed by Oak under our own APIs. I have come to strongly believe 
that any leak of Guava APIs in the Oak codebase - whether in exported API 
or not - is a risk. Therefore I would favour only accessing it from a limited 
number of classes so that migration to a newer version (or even away from 
Guava) is possible.

I think that the only way this can eventually work is to no longer import Guava 
at runtime, either by replacing it completely or by importing it statically and 
shading it (optional rant below).



For more context, I believe it's important that we understand where Guava is 
coming from. Google famously stores all its code in a monorepo 
(https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext). 
Not only is all the code stored in a single repository - including shared 
libraries like Guava and external dependencies like JUnit, but also all 
dependencies are upgraded _at the same time_. This means that all projects in 
trunk depend on a single version of Guava at a single point in time.

For this kind of setup the way Guava does versioning and backwards 
compatibility is perfectly fine - you give everyone time to adapt to new 
releases and then stop supporting the old code. However, it is fundamentally at 
odds with being used in projects with a large number of dependencies, each one 
having its own Guava version. This is precisely why we must minimise our 
exposure to Guava and keep it behind adapter classes or even remove it outright.

The sooner we do this the better.
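A minimal sketch of such a facade, following the OakExecutors name proposed above; the method set is illustrative, and the bodies delegate to the JDK here only to keep the sketch self-contained - in Oak the implementation could delegate to Guava (or anything else) without that choice leaking into callers:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of an Oak-owned executor facade: callers only ever see this API, so
// the backing library (Guava, JDK, anything) stays an implementation detail.
// Class name follows the OakExecutors proposal; the chosen methods are
// illustrative, not a real Oak API.
public final class OakExecutors {

    private OakExecutors() {}

    /** Runs tasks on the calling thread (analogue of Guava's directExecutor). */
    public static Executor directExecutor() {
        return Runnable::run;
    }

    /** Fixed pool with named daemon threads, delegating to the JDK here. */
    public static ExecutorService newFixedThreadPool(int threads, String namePrefix) {
        AtomicInteger counter = new AtomicInteger();
        return Executors.newFixedThreadPool(threads, runnable -> {
            Thread t = new Thread(runnable, namePrefix + "-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        });
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        directExecutor().execute(() -> sb.append("ran"));
        if (!sb.toString().equals("ran")) {
            throw new AssertionError("direct executor must run inline");
        }
        newFixedThreadPool(1, "oak-sketch").shutdown();
        System.out.println("facade works without exposing library types");
    }
}
```

Migrating (or shading) the backing library then touches only this class, not every call site.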

> Make it possible to update Guava
> 
>
> Key: OAK-7182
> URL: https://issues.apache.org/jira/browse/OAK-7182
> Project: Jackrabbit Oak
>  Issue Type: Wish
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Attachments: GuavaTests.java, OAK-7182-guava-21-3.diff, 
> OAK-7182-guava-21-4.diff, OAK-7182-guava-21.diff, OAK-7182-guava-23.6.1.diff, 
> guava.diff
>
>
> We currently rely on Guava 15, and this affects all users of Oak because they 
> essentially need to use the same version.
> This is an overall issue to investigate what would need to be done in Oak in 
> order to make updates possible.





[jira] [Commented] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771133#comment-16771133
 ] 

Julian Reschke commented on OAK-8051:
-

Proposed patch: 
https://issues.apache.org/jira/secure/attachment/12959130/OAK-8051.diff

This enforces that {{map != null}} upon construction of {{CacheMap}}.

However, this causes {{CacheTest.recoverIfCorrupt()}} to fail. It appears that 
this test tries to verify that only a limited number of operations is attempted 
upon failures. This now fails because the {{CacheMap}} cannot be instantiated 
in the first place. Not sure what to do with this. [~tmueller], [~mreutegg] - 
feedback appreciated.

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Priority: Major
> Fix For: 1.12
>
> Attachments: OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> 

[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Attachment: OAK-8051.diff

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Priority: Major
> Fix For: 1.12
>
> Attachments: OAK-8051.diff
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
> {noformat}
> and then
> {noformat}
> 22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap 
> Re-opening map PREV_DOCUMENT
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.asyncReadIfPresent(NodeCache.java:147)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.readIfPresent(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCach

[jira] [Commented] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771014#comment-16771014
 ] 

Julian Reschke commented on OAK-8051:
-

(still looking at 1.6 source...)

So, in 

{noformat}
22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the map
java.lang.NullPointerException: null
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
{noformat}

a {{null}} return value is caught and logged as a warning, and then {{null}} 
is returned.

{{CacheMap.openMap()}}:

{noformat}
void openMap() {
openCount = factory.reopenStoreIfNeeded(openCount);
Map m2 = factory.openMap(name, builder);
if (m2 != null) {
map = m2;
}
}
{noformat}

handles the {{null}} return value, and then does not update {{this.map}}.

The constructor:

{noformat}
public CacheMap(MapFactory factory, String name, Builder builder) {
this.factory = factory;
this.name = name;
this.builder = builder;
openMap();
}
{noformat}

thus completes with {{this.map == null}}, leaving the {{CacheMap}} in an 
invalid state.
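The fail-fast idea from the proposed patch can be sketched in isolation - refuse to finish construction when the map cannot be opened, instead of leaving this.map == null and deferring NPEs to the first get(). The MapFactory interface and types below are simplified stand-ins, not the actual Oak API:

```java
import java.util.Map;

// Simplified model of CacheMap that enforces map != null on construction,
// mirroring the behaviour the proposed patch adds. Types are illustrative.
public class FailFastCacheMap {

    interface MapFactory {
        Map<String, String> openMap(String name);
    }

    private final Map<String, String> map;

    FailFastCacheMap(MapFactory factory, String name) {
        Map<String, String> m = factory.openMap(name);
        if (m == null) {
            // fail fast instead of constructing an object in an invalid state
            throw new IllegalStateException("Could not open map " + name);
        }
        this.map = m;
    }

    String get(String key) {
        return map.get(key); // can no longer NPE on this.map
    }

    public static void main(String[] args) {
        MapFactory broken = name -> null; // simulates the locked-store failure
        try {
            new FailFastCacheMap(broken, "PREV_DOCUMENT");
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) {
            System.out.println("construction rejected: " + expected.getMessage());
        }
    }
}
```

This moves the failure to the single open attempt, which is also why a test that counts retries (like recoverIfCorrupt) would need adjusting.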



> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Priority: Major
> Fix For: 1.12
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registe

[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Description: 
Seen in the wild (in 1.6.6):

{noformat}
22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the store _path_/cache-4.data
java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
[1.4.193/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
at org.h2.mvstore.FileStore.open(FileStore.java:168)
at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
{noformat}

Later on:

{noformat}
22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the map
java.lang.NullPointerException: null
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
{noformat}

and then

{noformat}
22.01.2019 08:45:16.808 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap Re-opening 
map PREV_DOCUMENT
java.lang.NullPointerException: null
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.readValue(MultiGenerationMap.java:71)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.asyncReadIfPresent(NodeCache.java:147)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.readIfPresent(NodeCache.java:130)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.getIfPresent(NodeCache.java:213)
at org.apache.jackrabbit.oak.plugins.document.cache.NodeDocumentCache.getIfPresent(NodeDocumentCache.java:155)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentCached(RDBDocumentStore.java:1130)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:234)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:229)
at org.apache.jackrabbit.oak.plugins.document.NodeDocument.getPreviousDocument(NodeDocument.java:1338)
at org.apache.jackrabbit.oak.plugins.docum

[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Description: 
Seen in the wild (in 1.6.6):

{noformat}
22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the store _path_/cache-4.data
java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
[1.4.193/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
at org.h2.mvstore.FileStore.open(FileStore.java:168)
at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
{noformat}

Later on:

{noformat}
22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the map
java.lang.NullPointerException: null
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1214)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildPrevDocumentsCache(DocumentMK.java:1182)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildNodeDocumentCache(DocumentMK.java:1189)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:798)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:212)
at org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:224)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.setRDBConnection(DocumentMK.java:757)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStore(DocumentNodeStoreService.java:508)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.registerNodeStoreIfPossible(DocumentNodeStoreService.java:430)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.activate(DocumentNodeStoreService.java:414)
{noformat}

  was:
Seen in the wild (in 1.6.6):

{noformat}
22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the store _path_/cache-4.data
java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
[1.4.193/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
at org.h2.mvstore.FileStore.open(FileStore.java:168)
at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
{noformat}

Later on:

{noformat}
22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the map
java.lang.NullPointerException: null
at 
org.apache.jackrabbit

[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Fix Version/s: 1.12

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Julian Reschke
>Priority: Major
> Fix For: 1.12
>
>
> Seen in the wild:
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Affects Version/s: 1.6.6

> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.6.6
>Reporter: Julian Reschke
>Priority: Major
> Fix For: 1.12
>
>
> Seen in the wild (in 1.6.6):
> {noformat}
> 22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the store _path_/cache-4.data
> java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
> [1.4.193/7]
>   at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
>   at org.h2.mvstore.FileStore.open(FileStore.java:168)
>   at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
>   at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
>   at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
>   at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
> {noformat}
> Later on:
> {noformat}
> 22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could 
> not open the map
> java.lang.NullPointerException: null
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
>   at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OAK-8051) PersistentCache: error during open can lead to incomplete initialization and subsequent NPEs

2019-02-18 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8051:

Description: 
Seen in the wild (in 1.6.6):

{noformat}
22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the store _path_/cache-4.data
java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
[1.4.193/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
at org.h2.mvstore.FileStore.open(FileStore.java:168)
at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
{noformat}

Later on:

{noformat}
22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the map
java.lang.NullPointerException: null
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
{noformat}

  was:
Seen in the wild:

{noformat}
22.01.2019 08:45:13.153 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the store _path_/cache-4.data
java.lang.IllegalStateException: The file is locked: nio:_path_/cache-4.data 
[1.4.193/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:765)
at org.h2.mvstore.FileStore.open(FileStore.java:168)
at org.h2.mvstore.MVStore.<init>(MVStore.java:348)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2923)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openStore(PersistentCache.java:288)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.createMapFactory(PersistentCache.java:361)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.<init>(PersistentCache.java:210)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getPersistentCache(DocumentMK.java:1232)
at org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.buildCache(DocumentMK.java:1211)
{noformat}

Later on:

{noformat}
22.01.2019 08:45:13.155 *WARN* [http-/0.0.0.0:80-3] 
org.apache.jackrabbit.oak.plugins.document.persistentCache.MapFactory Could not 
open the map
java.lang.NullPointerException: null
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.openMap(PersistentCache.java:335)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.openMap(CacheMap.java:135)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.<init>(CacheMap.java:48)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.openMap(PersistentCache.java:468)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.addGeneration(NodeCache.java:115)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.initGenerationCache(PersistentCache.java:452)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.wrap(PersistentCache.java:443)
{noformat}
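The failure mode described above, where a failed store open leaves the cache partially initialized so that every later lookup hits a null reference, can be sketched generically. The following is a minimal illustration only; the names ({{GuardedCache}}, {{storeOpener}}) are hypothetical and this is not Oak's actual PersistentCache API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch (hypothetical names, not Oak's PersistentCache):
// guard the store-open step so a failure such as "The file is locked"
// degrades to an in-memory map instead of leaving a null reference
// that throws NullPointerException on every later cache access.
class GuardedCache {
    private final Map<String, String> map;

    GuardedCache(Supplier<Map<String, String>> storeOpener) {
        Map<String, String> m;
        try {
            m = storeOpener.get();          // may throw, e.g. IllegalStateException
        } catch (RuntimeException e) {
            m = new ConcurrentHashMap<>();  // degraded but functional fallback
        }
        this.map = m;                       // invariant: never null
    }

    String get(String key) {
        return map.get(key);                // safe: map is always non-null
    }

    void put(String key, String value) {
        map.put(key, value);
    }
}
```

The point of the sketch is the invariant: initialization either succeeds or falls back to a working in-memory map, so no later code path can observe a null map and fail the way the traces above do.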


> PersistentCache: error during open can lead to incomplete initialization and 
> subsequent NPEs
> 
>
> Key: OAK-8051
> URL: https://issues.apache.org/jira/browse/OAK-8051
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk

[jira] [Commented] (OAK-8054) RepMembersConflictHandler creates property with wrong type

2019-02-18 Thread Alex Deparvu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770953#comment-16770953
 ] 

Alex Deparvu commented on OAK-8054:
---

This is how it looks so far [0]:
 - integrated the provided tests and added more for the threshold overflow scenario
 - added more conflict handling code for the threshold
 - added handling for the {{deleteDeletedProperty}} scenario
 - refactored the merge to keep the existing order of the values (in its current form the order gets shuffled because a HashSet is used)

I think it's in pretty good shape; running the IT tests now.

[0] https://github.com/apache/jackrabbit-oak/compare/trunk...stillalex:oak-8054
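The order-preserving merge mentioned in the last point can be sketched with a {{LinkedHashSet}}. The class and method names below are illustrative stand-ins, not the actual RepMembersConflictHandler code:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch (not the real RepMembersConflictHandler): merging
// two rep:members value lists with a LinkedHashSet de-duplicates while
// preserving encounter order, whereas a HashSet would shuffle the values.
class MemberMerge {
    static List<String> merge(List<String> ours, List<String> theirs) {
        Set<String> merged = new LinkedHashSet<>(ours); // keeps "our" order first
        merged.addAll(theirs);                          // appends only unseen ids
        return List.copyOf(merged);                     // insertion order preserved
    }
}
```

Swapping HashSet for LinkedHashSet costs a predecessor/successor link per entry but makes the merged value order deterministic, which matters when the property is rewritten on every conflict resolution.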

> RepMembersConflictHandler creates property with wrong type
> --
>
> Key: OAK-8054
> URL: https://issues.apache.org/jira/browse/OAK-8054
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Critical
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8054-impact.patch, OAK-8054.patch
>
>
> The {{RepMembersConflictHandler}} handler uses type {{STRING}} instead of 
> {{WEAKREFERENCE}} [0] as per the property's definition, which will trigger 
> the type validation to fail the commit.
> Running external login tests I see that the type fails as soon as the handler 
> comes into play:
> {noformat}
> WARN  o.a.j.o.s.s.a.e.i.ExternalLoginModule - User synchronization failed 
> during commit: org.apache.jackrabbit.oak.api.CommitFailedException: 
> OakConstraint0004: 
> /rep:security/rep:authorizables/rep:groups/pathPrefix/g8/rep:membersList/9[[rep:MemberReferences]]:
>  No matching property definition found for rep:members = 
> [8e490910-17b6-30c1-8e11-6abdfa8a4ebc, 1a8e79f5-428e-39e9-88bb-2b86bd9b402e, 
> ... ]. (attempt 10/50)
> {noformat}
> This seems to be a pretty big issue, and I'm not yet sure why it wasn't 
> caught by the existing tests.
> // fyi [~anchela]
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/security/user/RepMembersConflictHandler.java#L135



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8054) RepMembersConflictHandler creates property with wrong type

2019-02-18 Thread angela (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770862#comment-16770862
 ] 

angela commented on OAK-8054:
-

[~stillalex], you're welcome. Let me know if there is anything else I can help 
you with. Over the weekend I kept wondering whether we should have the 
{{UserValidator}} verify the type of the {{rep:members}} property once the 
issue is fixed.
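The validation being suggested could look roughly like the following. The class and method are hypothetical stand-ins (not the real Oak validator); the numeric constants mirror {{javax.jcr.PropertyType}} ({{STRING}} = 1, {{WEAKREFERENCE}} = 10):

```java
// Hypothetical sketch of the suggested check (illustrative names, not the
// real UserValidator): reject a rep:members property whose declared type
// is not WEAKREFERENCE, so a mistyped STRING property fails the commit
// with a clear message instead of a generic constraint violation.
class MembersTypeCheck {
    // JCR property type codes, matching javax.jcr.PropertyType
    static final int STRING = 1;
    static final int WEAKREFERENCE = 10;

    static void checkMembersType(String propertyName, int propertyType) {
        if ("rep:members".equals(propertyName) && propertyType != WEAKREFERENCE) {
            throw new IllegalStateException(
                "rep:members must be of type WEAKREFERENCE, found type code "
                    + propertyType);
        }
    }
}
```

Such a check would have surfaced the {{STRING}}-typed property written by the conflict handler immediately, rather than leaving it to the node-type validation ({{OakConstraint0004}}) seen in the log above.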

> RepMembersConflictHandler creates property with wrong type
> --
>
> Key: OAK-8054
> URL: https://issues.apache.org/jira/browse/OAK-8054
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Critical
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8054-impact.patch, OAK-8054.patch
>
>
> The {{RepMembersConflictHandler}} handler uses type {{STRING}} instead of 
> {{WEAKREFERENCE}} [0] as per the property's definition, which will trigger 
> the type validation to fail the commit.
> Running external login tests I see that the type fails as soon as the handler 
> comes into play:
> {noformat}
> WARN  o.a.j.o.s.s.a.e.i.ExternalLoginModule - User synchronization failed 
> during commit: org.apache.jackrabbit.oak.api.CommitFailedException: 
> OakConstraint0004: 
> /rep:security/rep:authorizables/rep:groups/pathPrefix/g8/rep:membersList/9[[rep:MemberReferences]]:
>  No matching property definition found for rep:members = 
> [8e490910-17b6-30c1-8e11-6abdfa8a4ebc, 1a8e79f5-428e-39e9-88bb-2b86bd9b402e, 
> ... ]. (attempt 10/50)
> {noformat}
> This seems to be a pretty big issue, and I'm not yet sure why it wasn't 
> caught by the existing tests.
> // fyi [~anchela]
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/security/user/RepMembersConflictHandler.java#L135



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-8054) RepMembersConflictHandler creates property with wrong type

2019-02-18 Thread Alex Deparvu (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770838#comment-16770838
 ] 

Alex Deparvu commented on OAK-8054:
---

Thanks a lot [~anchela] for the tests! I am taking a look now. It seems a big 
part is missing from the conflict handling: the scenario where the extra 
{{rep:membersList}} nodes are created.

> RepMembersConflictHandler creates property with wrong type
> --
>
> Key: OAK-8054
> URL: https://issues.apache.org/jira/browse/OAK-8054
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, security
>Reporter: Alex Deparvu
>Assignee: Alex Deparvu
>Priority: Critical
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8054-impact.patch, OAK-8054.patch
>
>
> The {{RepMembersConflictHandler}} handler uses type {{STRING}} instead of 
> {{WEAKREFERENCE}} [0] as per the property's definition, which will trigger 
> the type validation to fail the commit.
> Running external login tests I see that the type fails as soon as the handler 
> comes into play:
> {noformat}
> WARN  o.a.j.o.s.s.a.e.i.ExternalLoginModule - User synchronization failed 
> during commit: org.apache.jackrabbit.oak.api.CommitFailedException: 
> OakConstraint0004: 
> /rep:security/rep:authorizables/rep:groups/pathPrefix/g8/rep:membersList/9[[rep:MemberReferences]]:
>  No matching property definition found for rep:members = 
> [8e490910-17b6-30c1-8e11-6abdfa8a4ebc, 1a8e79f5-428e-39e9-88bb-2b86bd9b402e, 
> ... ]. (attempt 10/50)
> {noformat}
> This seems to be a pretty big issue, and I'm not yet sure why it wasn't 
> caught by the existing tests.
> // fyi [~anchela]
> [0] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/security/user/RepMembersConflictHandler.java#L135



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)