[jira] [Updated] (OAK-3886) Support custom Credentials in external identity providers

2016-01-14 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-3886:
---
Issue Type: Improvement  (was: Bug)

> Support custom Credentials in external identity providers
> -
>
> Key: OAK-3886
> URL: https://issues.apache.org/jira/browse/OAK-3886
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Alexander Klimetschek
>
> Currently, the ExternalLoginModule [only supports 
> SimpleCredentials|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L415-L419].
> As the TODO says, it would be good to allow the ExternalIdentityProvider to 
> specify the supported types, in case it has a custom authentication scheme 
> that doesn't fit the username + password pattern of SimpleCredentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3886) Support custom Credentials types in external identity providers

2016-01-14 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek updated OAK-3886:
---
Summary: Support custom Credentials types in external identity providers  
(was: Support custom Credentials in external identity providers)

> Support custom Credentials types in external identity providers
> ---
>
> Key: OAK-3886
> URL: https://issues.apache.org/jira/browse/OAK-3886
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Alexander Klimetschek
>
> Currently, the ExternalLoginModule [only supports 
> SimpleCredentials|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L415-L419].
> As the TODO says, it would be good to allow the ExternalIdentityProvider to 
> specify the supported types, in case it has a custom authentication scheme 
> that doesn't fit the username + password pattern of SimpleCredentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3886) Support custom Credentials types in external identity providers

2016-01-14 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101099#comment-15101099
 ] 

Alexander Klimetschek edited comment on OAK-3886 at 1/15/16 2:51 AM:
-

This could be added in a non-breaking, opt-in way by adding a new interface:
{code}
public interface CustomCredentialsIdentityProvider extends ExternalIdentityProvider {
    Set<Class> getSupportedCredentials();
}
{code}

and then changing ExternalLoginModule.getSupportedCredentials() to this:
{code}
protected Set<Class> getSupportedCredentials() {
    if (idp instanceof CustomCredentialsIdentityProvider) {
        return ((CustomCredentialsIdentityProvider) idp).getSupportedCredentials();
    } else {
        Class scClass = SimpleCredentials.class;
        return Collections.singleton(scClass);
    }
}
{code}

I quickly tested something like this successfully. The ExternalLoginModule 
otherwise has no requirement that the credentials are SimpleCredentials; only 
createAuthInfo() does an {{instanceof SimpleCredentials}} check, and that looks 
optional.
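
A minimal sketch of how a provider could opt in, assuming the interface above is 
available (the credentials class and provider names here are made up for illustration):
{code}
import java.util.Collections;
import java.util.Set;
import javax.jcr.Credentials;

// Hypothetical credentials carrying an opaque token instead of user name + password.
class OpaqueTokenCredentials implements Credentials {
    private final String token;
    OpaqueTokenCredentials(String token) { this.token = token; }
    String getToken() { return token; }
}

// Only the new method is shown; the remaining ExternalIdentityProvider methods are omitted.
abstract class OpaqueTokenIdentityProvider implements CustomCredentialsIdentityProvider {
    @Override
    public Set<Class> getSupportedCredentials() {
        // advertise the custom credentials class instead of SimpleCredentials
        return Collections.<Class>singleton(OpaqueTokenCredentials.class);
    }
}
{code}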


was (Author: alexander.klimetschek):
This could be added in a non-breaking, opt-in way by adding a new interface:
{code}
public interface CustomCredentialsIdentityProvider extends ExternalIdentityProvider {
    Set<Class> getSupportedCredentials();
}
{code}

and then changing ExternalLoginModule.getSupportedCredentials() to this:
{code}
protected Set<Class> getSupportedCredentials() {
    if (idp instanceof CustomCredentialsIdentityProvider) {
        return ((CustomCredentialsIdentityProvider) idp).getSupportedCredentials();
    } else {
        Class scClass = Credentials.class;
        return Collections.singleton(scClass);
    }
}
{code}

I quickly tested something like this successfully. The ExternalLoginModule 
otherwise has no requirement that the credentials are SimpleCredentials; only 
createAuthInfo() does an {{instanceof SimpleCredentials}} check, and that looks 
optional.

> Support custom Credentials types in external identity providers
> ---
>
> Key: OAK-3886
> URL: https://issues.apache.org/jira/browse/OAK-3886
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Alexander Klimetschek
>
> Currently, the ExternalLoginModule [only supports 
> SimpleCredentials|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L415-L419].
> As the TODO says, it would be good to allow the ExternalIdentityProvider to 
> specify the supported types, in case it has a custom authentication scheme 
> that doesn't fit the username + password pattern of SimpleCredentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3886) Support custom Credentials in external identity providers

2016-01-14 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101099#comment-15101099
 ] 

Alexander Klimetschek commented on OAK-3886:


This could be added in a non-breaking, opt-in way by adding a new interface:
{code}
public interface CustomCredentialsIdentityProvider extends ExternalIdentityProvider {
    Set<Class> getSupportedCredentials();
}
{code}

and then changing ExternalLoginModule.getSupportedCredentials() to this:
{code}
protected Set<Class> getSupportedCredentials() {
    if (idp instanceof CustomCredentialsIdentityProvider) {
        return ((CustomCredentialsIdentityProvider) idp).getSupportedCredentials();
    } else {
        Class scClass = Credentials.class;
        return Collections.singleton(scClass);
    }
}
{code}

I quickly tested something like this successfully. The ExternalLoginModule 
otherwise has no requirement that the credentials are SimpleCredentials; only 
createAuthInfo() does an {{instanceof SimpleCredentials}} check, and that looks 
optional.

> Support custom Credentials in external identity providers
> -
>
> Key: OAK-3886
> URL: https://issues.apache.org/jira/browse/OAK-3886
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Reporter: Alexander Klimetschek
>
> Currently, the ExternalLoginModule [only supports 
> SimpleCredentials|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L415-L419].
> As the TODO says, it would be good to allow the ExternalIdentityProvider to 
> specify the supported types, in case it has a custom authentication scheme 
> that doesn't fit the username + password pattern of SimpleCredentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3886) Support custom Credentials in external identity providers

2016-01-14 Thread Alexander Klimetschek (JIRA)
Alexander Klimetschek created OAK-3886:
--

 Summary: Support custom Credentials in external identity providers
 Key: OAK-3886
 URL: https://issues.apache.org/jira/browse/OAK-3886
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-external
Reporter: Alexander Klimetschek


Currently, the ExternalLoginModule [only supports 
SimpleCredentials|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L415-L419].

As the TODO says, it would be good to allow the ExternalIdentityProvider to 
specify the supported types, in case it has a custom authentication scheme 
that doesn't fit the username + password pattern of SimpleCredentials.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3876) ExternalLoginModule ignores authorizable ID returned from IDP

2016-01-14 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101079#comment-15101079
 ] 

Alexander Klimetschek edited comment on OAK-3876 at 1/15/16 2:34 AM:
-

Turns out there is *no problem in the ExternalLoginModule*. User id is 
correctly set on the final Subject, taken from the ExternalUser.getId().

Problem I had was in my Sling AuthenticationHandler: when the user id is not 
known, you must not set a userId in the AuthenticationInfo and instead pass the 
JCR Credentials object manually in the "user.jcr.credentials" attribute:
{code}
// ExternalLoginModule currently requires SimpleCredentials, but ideally, if the user id
// is unknown and the password is not used, a special Credentials class makes more sense
SimpleCredentials credentials = new SimpleCredentials(null, new char[0]);
credentials.setAttribute("my-attribute", "");

AuthenticationInfo authInfo = new AuthenticationInfo("my-auth-type");
authInfo.put("user.jcr.credentials", credentials);
{code}


was (Author: alexander.klimetschek):
Turns out there is *no problem in the ExternalLoginModule*. User id is 
correctly set on the final Subject, taken from the ExternalUser.getId().

Problem I had was in my Sling AuthenticationHandler: when the user id is not 
known, you must not set a userId in the AuthenticationInfo and instead pass the 
JCR Credentials object manually in the "user.jcr.credentials" attribute:
{code}
// ExternalLoginModule currently requires SimpleCredentials, but ideally, if the user id
// is unknown and the password is not used, a special Credentials class makes more sense
SimpleCredentials credentials = new SimpleCredentials(null, new char[0]);
credentials.setAttribute("my-attribute", "");

AuthenticationInfo authInfo = new AuthenticationInfo(AuthConstants.ACCESS_TOKEN);
authInfo.put("user.jcr.credentials", credentials);
{code}

> ExternalLoginModule ignores authorizable ID returned from IDP
> -
>
> Key: OAK-3876
> URL: https://issues.apache.org/jira/browse/OAK-3876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Affects Versions: 1.2.9, 1.3.13
>Reporter: Alexander Klimetschek
>
> In the ExternalLoginModule, the user = authorizable id for the subject after 
> successful authentication will be solely based on the userId of the passed in 
> SimpleCredentials, as the [original credentials are set as 
> SHARED_KEY_CREDENTIALS|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L230].
> However, with an external identity provider it can be the case that the 
> credentials do not contain the actual local user id and only the identity 
> provider would do the mapping in its authentication logic and return the 
> right local user id via ExternalUser.getId().
> An example might be an opaque token string used as a credential, which the 
> external IDP validates by calling the external entity, receiving user 
> data that allows it to map to the local user id.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3876) ExternalLoginModule ignores authorizable ID returned from IDP

2016-01-14 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101079#comment-15101079
 ] 

Alexander Klimetschek edited comment on OAK-3876 at 1/15/16 2:33 AM:
-

Turns out there is *no problem in the ExternalLoginModule*. User id is 
correctly set on the final Subject, taken from the ExternalUser.getId().

Problem I had was in my Sling AuthenticationHandler: when the user id is not 
known, you must not set a userId in the AuthenticationInfo and instead pass the 
JCR Credentials object manually in the "user.jcr.credentials" attribute:
{code}
// ExternalLoginModule currently requires SimpleCredentials, but ideally, if the user id
// is unknown and the password is not used, a special Credentials class makes more sense
SimpleCredentials credentials = new SimpleCredentials(null, new char[0]);
credentials.setAttribute("my-attribute", "");

AuthenticationInfo authInfo = new AuthenticationInfo(AuthConstants.ACCESS_TOKEN);
authInfo.put("user.jcr.credentials", credentials);
{code}


was (Author: alexander.klimetschek):
Turns out there is no problem in the ExternalLoginModule.

Problem I had was in my Sling AuthenticationHandler: when the user id is not 
known, you must not set a userId in the AuthenticationInfo and instead pass the 
JCR Credentials object manually in the "user.jcr.credentials" attribute:
{code}
// ExternalLoginModule currently requires SimpleCredentials, but ideally, if the user id
// is unknown and the password is not used, a special Credentials class makes more sense
SimpleCredentials credentials = new SimpleCredentials(null, new char[0]);
credentials.setAttribute("my-attribute", "");

AuthenticationInfo authInfo = new AuthenticationInfo(AuthConstants.ACCESS_TOKEN);
authInfo.put("user.jcr.credentials", credentials);
{code}

> ExternalLoginModule ignores authorizable ID returned from IDP
> -
>
> Key: OAK-3876
> URL: https://issues.apache.org/jira/browse/OAK-3876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Affects Versions: 1.2.9, 1.3.13
>Reporter: Alexander Klimetschek
>
> In the ExternalLoginModule, the user = authorizable id for the subject after 
> successful authentication will be solely based on the userId of the passed in 
> SimpleCredentials, as the [original credentials are set as 
> SHARED_KEY_CREDENTIALS|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L230].
> However, with an external identity provider it can be the case that the 
> credentials do not contain the actual local user id and only the identity 
> provider would do the mapping in its authentication logic and return the 
> right local user id via ExternalUser.getId().
> An example might be an opaque token string used as a credential, which the 
> external IDP validates by calling the external entity, receiving user 
> data that allows it to map to the local user id.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3876) ExternalLoginModule ignores authorizable ID returned from IDP

2016-01-14 Thread Alexander Klimetschek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Klimetschek resolved OAK-3876.

Resolution: Invalid

Turns out there is no problem in the ExternalLoginModule.

Problem I had was in my Sling AuthenticationHandler: when the user id is not 
known, you must not set a userId in the AuthenticationInfo and instead pass the 
JCR Credentials object manually in the "user.jcr.credentials" attribute:
{code}
// ExternalLoginModule currently requires SimpleCredentials, but ideally, if the user id
// is unknown and the password is not used, a special Credentials class makes more sense
SimpleCredentials credentials = new SimpleCredentials(null, new char[0]);
credentials.setAttribute("my-attribute", "");

AuthenticationInfo authInfo = new AuthenticationInfo(AuthConstants.ACCESS_TOKEN);
authInfo.put("user.jcr.credentials", credentials);
{code}

> ExternalLoginModule ignores authorizable ID returned from IDP
> -
>
> Key: OAK-3876
> URL: https://issues.apache.org/jira/browse/OAK-3876
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-external
>Affects Versions: 1.2.9, 1.3.13
>Reporter: Alexander Klimetschek
>
> In the ExternalLoginModule, the user = authorizable id for the subject after 
> successful authentication will be solely based on the userId of the passed in 
> SimpleCredentials, as the [original credentials are set as 
> SHARED_KEY_CREDENTIALS|https://github.com/apache/jackrabbit-oak/blob/cc78f6fdd122d1c9f200b43fc2b9536518ea996b/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/ExternalLoginModule.java#L230].
> However, with an external identity provider it can be the case that the 
> credentials do not contain the actual local user id and only the identity 
> provider would do the mapping in its authentication logic and return the 
> right local user id via ExternalUser.getId().
> An example might be an opaque token string used as a credential, which the 
> external IDP validates by calling the external entity, receiving user 
> data that allows it to map to the local user id.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3885) enhance stability of clusterNodeInfo's machineId

2016-01-14 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3885:
---

 Summary: enhance stability of clusterNodeInfo's machineId
 Key: OAK-3885
 URL: https://issues.apache.org/jira/browse/OAK-3885
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: documentmk
Reporter: Julian Reschke


We currently use network interface information to derive a unique machine ID 
(ClusterNodeInfo.getMachineId()). Among the 6-byte addresses, we use the 
"smallest".

At least on Windows machines, connecting through a VPN inserts a new low 
machineID into the list, causing the machineID to vary depending on whether the 
VPN is connected or not.

I don't see a clean way to filter these addresses. We *could* inspect the names 
of the interfaces and treat those containing "VPN" or "Virtual" as less 
relevant. Of course that would be an ugly hack, but it would fix the problem 
for now.
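
A rough sketch of what such a name-based filter could look like (illustrative only; 
the names and the exact ID format are assumptions, not the actual ClusterNodeInfo code):
{code}
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

public class MachineIdSketch {

    /** Picks the "smallest" 6-byte MAC address, preferring interfaces whose
     *  display name does not look like a VPN or virtual adapter. */
    static String pickMachineId() throws SocketException {
        List<String> preferred = new ArrayList<String>();
        List<String> virtual = new ArrayList<String>();
        Enumeration<NetworkInterface> nis = NetworkInterface.getNetworkInterfaces();
        while (nis.hasMoreElements()) {
            NetworkInterface ni = nis.nextElement();
            byte[] mac = ni.getHardwareAddress();
            if (mac == null || mac.length != 6) {
                continue; // ignore interfaces without a 6-byte hardware address
            }
            StringBuilder id = new StringBuilder("mac:");
            for (byte b : mac) {
                id.append(String.format("%02x", b & 0xff));
            }
            String name = ni.getDisplayName();
            boolean looksVirtual = name != null
                    && (name.toLowerCase().contains("vpn") || name.toLowerCase().contains("virtual"));
            (looksVirtual ? virtual : preferred).add(id.toString());
        }
        Collections.sort(preferred);
        Collections.sort(virtual);
        // fall back to the VPN/virtual interfaces only if nothing else is available
        return !preferred.isEmpty() ? preferred.get(0)
                : (!virtual.isEmpty() ? virtual.get(0) : null);
    }
}
{code}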






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3872) [RDB] Updated blob still deleted even if deletion interval lower

2016-01-14 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-3872.
-
Resolution: Fixed

trunk: http://svn.apache.org/r1724423
1.2: http://svn.apache.org/r1724632
1.0: http://svn.apache.org/r1724659

(the backport to 1.0 was a bit painful because of other changes not being 
backported; sigh)

> [RDB] Updated blob still deleted even if deletion interval lower
> 
>
> Key: OAK-3872
> URL: https://issues.apache.org/jira/browse/OAK-3872
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob, rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Amit Jain
>Assignee: Julian Reschke
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK_3872.patch
>
>
> If an existing blob is uploaded again, the timestamp of the existing entry is 
> updated in the meta table. Subsequently, if a call to delete 
> (RDBBlobStore#countDeleteChunks) is made with a {{maxLastModifiedTime}} 
> parameter less than the updated time above, the entry in the meta table is 
> not touched but the data table entry is wiped out. 
> Refer 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/rdb/RDBBlobStore.java#L510



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3872) [RDB] Updated blob still deleted even if deletion interval lower

2016-01-14 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3872:

Fix Version/s: 1.2.10
   1.0.26

> [RDB] Updated blob still deleted even if deletion interval lower
> 
>
> Key: OAK-3872
> URL: https://issues.apache.org/jira/browse/OAK-3872
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob, rdbmk
>Affects Versions: 1.2.9, 1.0.25, 1.3.13
>Reporter: Amit Jain
>Assignee: Julian Reschke
> Fix For: 1.0.26, 1.2.10, 1.3.14
>
> Attachments: OAK_3872.patch
>
>
> If an existing blob is uploaded again, the timestamp of the existing entry is 
> updated in the meta table. Subsequently, if a call to delete 
> (RDBBlobStore#countDeleteChunks) is made with a {{maxLastModifiedTime}} 
> parameter less than the updated time above, the entry in the meta table is 
> not touched but the data table entry is wiped out. 
> Refer 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/rdb/RDBBlobStore.java#L510



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3884) Enable BROADCAST for UDPBroadcaster

2016-01-14 Thread Philipp Suter (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philipp Suter updated OAK-3884:
---
Attachment: broadcasttest.patch

> Enable BROADCAST for UDPBroadcaster
> ---
>
> Key: OAK-3884
> URL: https://issues.apache.org/jira/browse/OAK-3884
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 1.3.14
> Environment: OS X
>Reporter: Philipp Suter
> Attachments: broadcasttest.patch
>
>
> It seems that the broadcastEncryptedUDP and broadcastUDP tests are not working on 
> OS X and most likely other *IX based systems because the 
> loopback network interface (e.g. lo0) does not have a BROADCAST option [1].
> A workaround patch for the unit tests is listed below.
> [1] See answer to: 
> http://serverfault.com/questions/554503/how-do-i-add-a-broadcast-ip-to-the-loopback-interface-under-os-x-using-ifconfig
> ===BEGIN PATCH===
> @@ -22,9 +22,14 @@
>  
>  import java.io.File;
>  import java.io.IOException;
> +import java.net.InetAddress;
> +import java.net.InterfaceAddress;
> +import java.net.NetworkInterface;
> +import java.net.SocketException;
>  import java.nio.ByteBuffer;
>  import java.sql.Timestamp;
>  import java.util.ArrayList;
> +import java.util.Enumeration;
>  import java.util.Random;
>  import java.util.concurrent.Callable;
>  
> @@ -37,7 +42,6 @@
>  import 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.broadcast.TCPBroadcaster;
>  import org.apache.jackrabbit.oak.plugins.document.util.StringValue;
>  import org.junit.Assert;
> -import org.junit.Ignore;
>  import org.junit.Test;
>  import org.slf4j.LoggerFactory;
>  
> @@ -150,16 +154,17 @@
>  public void broadcastInMemory() throws Exception {
>  broadcast("inMemory", 100);
>  }
> -
> +
>  @Test
> -@Ignore("OAK-2843")
>  public void broadcastUDP() throws Exception {
>  try {
> -broadcast("udp:sendTo localhost", 50);
> +String localBroadcastIp = getLocalBroadcastAddress();
> +System.out.println("BROADCASTING TO: " + localBroadcastIp);
> +broadcast("udp:sendTo "+localBroadcastIp, 50);
>  } catch (AssertionError e) {
>  // IPv6 didn't work, so try with IPv4
> -try {
> -broadcast("udp:group 228.6.7.9", 50);
> +broadcast("udp:group 228.6.7.9", 50);
> +try {
>  } catch (AssertionError e2) {
>  throwBoth(e, e2);
>  }
> @@ -167,10 +172,11 @@
>  }
>  
>  @Test
> -@Ignore("OAK-2843")
>  public void broadcastEncryptedUDP() throws Exception {
>  try {
> -broadcast("udp:group FF78:230::1234;key test;port 9876;sendTo 
> localhost;aes", 50);
> +String localBroadcastIp = getLocalBroadcastAddress();
> +System.out.println("BROADCASTING TO: " + localBroadcastIp);
> +broadcast("udp:group FF78:230::1234;key test;port 9876;sendTo 
> "+localBroadcastIp+";aes", 50);
>  } catch (AssertionError e) {
>  try {
>  broadcast("udp:group 228.6.7.9;key test;port 9876;aes", 50); 
>
> @@ -178,6 +184,26 @@
>  throwBoth(e, e2);
>  }
>  }
> +}
> +
> +private static String getLocalBroadcastAddress(){
> +try {
> +Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
> +while (interfaces.hasMoreElements()) {
> +NetworkInterface networkInterface = interfaces.nextElement();
> +if (networkInterface.isLoopback())
> +continue;
> +for (InterfaceAddress interfaceAddress :
> +networkInterface.getInterfaceAddresses()) {
> +InetAddress broadcast = interfaceAddress.getBroadcast();
> +if (broadcast != null)
> +return broadcast.getHostAddress();
> +}
> +}
> +} catch (SocketException e) {
> +e.printStackTrace();
> +}
> +return null;
>  }
>  
>  private static void throwBoth(AssertionError e, AssertionError e2) 
> throws AssertionError {
> ===END PATCH===



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3884) Enable BROADCAST for UDPBroadcaster unit tests

2016-01-14 Thread Philipp Suter (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philipp Suter updated OAK-3884:
---
Summary: Enable BROADCAST for UDPBroadcaster unit tests  (was: Enable 
BROADCAST for UDPBroadcaster)

> Enable BROADCAST for UDPBroadcaster unit tests
> --
>
> Key: OAK-3884
> URL: https://issues.apache.org/jira/browse/OAK-3884
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 1.3.14
> Environment: OS X
>Reporter: Philipp Suter
> Attachments: broadcasttest.patch
>
>
> It seems that the broadcastEncryptedUDP and broadcastUDP tests are not working on 
> OS X and most likely other *IX based systems because the 
> loopback network interface (e.g. lo0) does not have a BROADCAST option [1].
> A possible workaround patch for the unit tests is attached.
> [1] See answer to: 
> http://serverfault.com/questions/554503/how-do-i-add-a-broadcast-ip-to-the-loopback-interface-under-os-x-using-ifconfig



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3884) Enable BROADCAST for UDPBroadcaster

2016-01-14 Thread Philipp Suter (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philipp Suter updated OAK-3884:
---
Description: 
It seems that the broadcastEncryptedUDP and broadcastUDP tests are not working on OS 
X and most likely other *IX based systems because the loopback 
network interface (e.g. lo0) does not have a BROADCAST option [1].

A possible workaround patch for the unit tests is attached.

[1] See answer to: 
http://serverfault.com/questions/554503/how-do-i-add-a-broadcast-ip-to-the-loopback-interface-under-os-x-using-ifconfig

  was:
It seems that the broadcastEncryptedUDP and broadcastUDP tests are not working on OS 
X and most likely other *IX based systems because the loopback 
network interface (e.g. lo0) does not have a BROADCAST option [1].

A workaround patch for the unit tests is listed below.

[1] See answer to: 
http://serverfault.com/questions/554503/how-do-i-add-a-broadcast-ip-to-the-loopback-interface-under-os-x-using-ifconfig

===BEGIN PATCH===
@@ -22,9 +22,14 @@
 
 import java.io.File;
 import java.io.IOException;
+import java.net.InetAddress;
+import java.net.InterfaceAddress;
+import java.net.NetworkInterface;
+import java.net.SocketException;
 import java.nio.ByteBuffer;
 import java.sql.Timestamp;
 import java.util.ArrayList;
+import java.util.Enumeration;
 import java.util.Random;
 import java.util.concurrent.Callable;
 
@@ -37,7 +42,6 @@
 import 
org.apache.jackrabbit.oak.plugins.document.persistentCache.broadcast.TCPBroadcaster;
 import org.apache.jackrabbit.oak.plugins.document.util.StringValue;
 import org.junit.Assert;
-import org.junit.Ignore;
 import org.junit.Test;
 import org.slf4j.LoggerFactory;
 
@@ -150,16 +154,17 @@
 public void broadcastInMemory() throws Exception {
 broadcast("inMemory", 100);
 }
-
+
 @Test
-@Ignore("OAK-2843")
 public void broadcastUDP() throws Exception {
 try {
-broadcast("udp:sendTo localhost", 50);
+String localBroadcastIp = getLocalBroadcastAddress();
+System.out.println("BROADCASTING TO: " + localBroadcastIp);
+broadcast("udp:sendTo "+localBroadcastIp, 50);
 } catch (AssertionError e) {
 // IPv6 didn't work, so try with IPv4
-try {
-broadcast("udp:group 228.6.7.9", 50);
+broadcast("udp:group 228.6.7.9", 50);
+try {
 } catch (AssertionError e2) {
 throwBoth(e, e2);
 }
@@ -167,10 +172,11 @@
 }
 
 @Test
-@Ignore("OAK-2843")
 public void broadcastEncryptedUDP() throws Exception {
 try {
-broadcast("udp:group FF78:230::1234;key test;port 9876;sendTo 
localhost;aes", 50);
+String localBroadcastIp = getLocalBroadcastAddress();
+System.out.println("BROADCASTING TO: " + localBroadcastIp);
+broadcast("udp:group FF78:230::1234;key test;port 9876;sendTo 
"+localBroadcastIp+";aes", 50);
 } catch (AssertionError e) {
 try {
 broadcast("udp:group 228.6.7.9;key test;port 9876;aes", 50);   
 
@@ -178,6 +184,26 @@
 throwBoth(e, e2);
 }
 }
+}
+
+private static String getLocalBroadcastAddress(){
+try {
+Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
+while (interfaces.hasMoreElements()) {
+NetworkInterface networkInterface = interfaces.nextElement();
+if (networkInterface.isLoopback())
+continue;
+for (InterfaceAddress interfaceAddress :
+networkInterface.getInterfaceAddresses()) {
+InetAddress broadcast = interfaceAddress.getBroadcast();
+if (broadcast != null)
+return broadcast.getHostAddress();
+}
+}
+} catch (SocketException e) {
+e.printStackTrace();
+}
+return null;
 }
 
 private static void throwBoth(AssertionError e, AssertionError e2) throws 
AssertionError {
===END PATCH===


> Enable BROADCAST for UDPBroadcaster
> ---
>
> Key: OAK-3884
> URL: https://issues.apache.org/jira/browse/OAK-3884
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 1.3.14
> Environment: OS X
>Reporter: Philipp Suter
> Attachments: broadcasttest.patch
>
>
> It seems that the broadcastEncryptedUDP and broadcastUDP tests are not working on 
> OS X and most likely other *IX based systems because the 
> loopback network interface (e.g. lo0) does not have a BROADCAST option [1].
> A possible workaround patch for the unit tests is attached.

[jira] [Created] (OAK-3884) Enable BROADCAST for UDPBroadcaster

2016-01-14 Thread Philipp Suter (JIRA)
Philipp Suter created OAK-3884:
--

 Summary: Enable BROADCAST for UDPBroadcaster
 Key: OAK-3884
 URL: https://issues.apache.org/jira/browse/OAK-3884
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: cache
Affects Versions: 1.3.14
 Environment: OS X
Reporter: Philipp Suter
 Attachments: broadcasttest.patch

It seems that the broadcastEncryptedUDP and broadcastUDP tests are not working on OS 
X and most likely other *IX based systems because the loopback 
network interface (e.g. lo0) does not have a BROADCAST option [1].

A workaround patch for the unit tests is listed below.

[1] See answer to: 
http://serverfault.com/questions/554503/how-do-i-add-a-broadcast-ip-to-the-loopback-interface-under-os-x-using-ifconfig

===BEGIN PATCH===
@@ -22,9 +22,14 @@
 
 import java.io.File;
 import java.io.IOException;
+import java.net.InetAddress;
+import java.net.InterfaceAddress;
+import java.net.NetworkInterface;
+import java.net.SocketException;
 import java.nio.ByteBuffer;
 import java.sql.Timestamp;
 import java.util.ArrayList;
+import java.util.Enumeration;
 import java.util.Random;
 import java.util.concurrent.Callable;
 
@@ -37,7 +42,6 @@
 import 
org.apache.jackrabbit.oak.plugins.document.persistentCache.broadcast.TCPBroadcaster;
 import org.apache.jackrabbit.oak.plugins.document.util.StringValue;
 import org.junit.Assert;
-import org.junit.Ignore;
 import org.junit.Test;
 import org.slf4j.LoggerFactory;
 
@@ -150,16 +154,17 @@
 public void broadcastInMemory() throws Exception {
 broadcast("inMemory", 100);
 }
-
+
 @Test
-@Ignore("OAK-2843")
 public void broadcastUDP() throws Exception {
 try {
-broadcast("udp:sendTo localhost", 50);
+String localBroadcastIp = getLocalBroadcastAddress();
+System.out.println("BROADCASTING TO: " + localBroadcastIp);
+broadcast("udp:sendTo "+localBroadcastIp, 50);
 } catch (AssertionError e) {
 // IPv6 didn't work, so try with IPv4
-try {
-broadcast("udp:group 228.6.7.9", 50);
+broadcast("udp:group 228.6.7.9", 50);
+try {
 } catch (AssertionError e2) {
 throwBoth(e, e2);
 }
@@ -167,10 +172,11 @@
 }
 
 @Test
-@Ignore("OAK-2843")
 public void broadcastEncryptedUDP() throws Exception {
 try {
-broadcast("udp:group FF78:230::1234;key test;port 9876;sendTo 
localhost;aes", 50);
+String localBroadcastIp = getLocalBroadcastAddress();
+System.out.println("BROADCASTING TO: " + localBroadcastIp);
+broadcast("udp:group FF78:230::1234;key test;port 9876;sendTo 
"+localBroadcastIp+";aes", 50);
 } catch (AssertionError e) {
 try {
 broadcast("udp:group 228.6.7.9;key test;port 9876;aes", 50);   
 
@@ -178,6 +184,26 @@
 throwBoth(e, e2);
 }
 }
+}
+
+private static String getLocalBroadcastAddress(){
+try {
+Enumeration interfaces = 
NetworkInterface.getNetworkInterfaces();
+while (interfaces.hasMoreElements()) {
+NetworkInterface networkInterface = interfaces.nextElement();
+if (networkInterface.isLoopback())
+continue;
+for (InterfaceAddress interfaceAddress :
+networkInterface.getInterfaceAddresses()) {
+InetAddress broadcast = interfaceAddress.getBroadcast();
+if (broadcast != null)
+return broadcast.getHostAddress();
+}
+}
+} catch (SocketException e) {
+e.printStackTrace();
+}
+return null;
 }
 
 private static void throwBoth(AssertionError e, AssertionError e2) throws 
AssertionError {
===END PATCH===



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3862) Move integration tests in a different Maven module

2016-01-14 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098429#comment-15098429
 ] 

Francesco Mari commented on OAK-3862:
-

At the moment the proposal is about #1, but it might also be seen as a first 
step to reach #2.

> Move integration tests in a different Maven module
> --
>
> Key: OAK-3862
> URL: https://issues.apache.org/jira/browse/OAK-3862
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: 1.4
>
>
> While moving the Segment Store and related packages into its own bundle, I 
> figured out that integration tests contained in {{oak-core}} contribute to a 
> cyclic dependency between the (new) {{oak-segment}} bundle and {{oak-core}}.
> The dependency is due to the usage of {{NodeStoreFixture}} to instantiate 
> different implementations of {{NodeStore}} in a semi-transparent way.
> Tests depending on {{NodeStoreFixture}} are most likely integration tests. A 
> clean solution to this problem would be to move those integration tests into 
> a new Maven module, referencing the API and implementation modules as needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3847) Provide an easy way to parse/retrieve facets

2016-01-14 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098365#comment-15098365
 ] 

Tommaso Teofili commented on OAK-3847:
--

here's the proposal for the {{FacetResult}} API:

{code}
/**
 * A facet result is a wrapper for {@link javax.jcr.query.QueryResult} capable of returning
 * information about facets stored in the query result {@link javax.jcr.query.Row}s.
 */
public class FacetResult {

    private final Map<String, List<Facet>> facets = new HashMap<String, List<Facet>>();

    public FacetResult(QueryResult queryResult) {
        try {
            RowIterator rows = queryResult.getRows();
            if (rows.hasNext()) {
                Row row = rows.nextRow();
                for (String column : queryResult.getColumnNames()) {
                    if (column.startsWith(QueryImpl.REP_FACET)) {
                        String dimension = column.substring(QueryImpl.REP_FACET.length() + 1, column.length() - 1);
                        String jsonFacetString = row.getValue(column).getString();
                        // parse ...
                        facets.put(dimension, new Facet(...));
                    }
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Nonnull
    public Set<String> getDimensions() {
        return facets.keySet();
    }

    @CheckForNull
    public List<Facet> getFacets(@Nonnull String dimension) {
        return facets.get(dimension);
    }

    public static class Facet {

        private final String label;
        private final Integer count;

        private Facet(String label, Integer count) {
            this.label = label;
            this.count = count;
        }

        @Nonnull
        public String getLabel() {
            return label;
        }

        @Nonnull
        public Integer getCount() {
            return count;
        }
    }
}
{code}
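
For completeness, a short usage sketch of the proposed API (assuming the class above; 
{{queryResult}} stands for an already executed {{javax.jcr.query.QueryResult}}):
{code}
// Hypothetical usage of the proposed FacetResult wrapper:
FacetResult facetResult = new FacetResult(queryResult);
for (String dimension : facetResult.getDimensions()) {
    List<FacetResult.Facet> facets = facetResult.getFacets(dimension);
    if (facets != null) {
        for (FacetResult.Facet facet : facets) {
            System.out.println(dimension + " / " + facet.getLabel() + " : " + facet.getCount());
        }
    }
}
{code}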

> Provide an easy way to parse/retrieve facets
> 
>
> Key: OAK-3847
> URL: https://issues.apache.org/jira/browse/OAK-3847
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.3.14
>
>
> Current facet results are returned within the rep:facet($propertyname) 
> property of each resulting node. The resulting String [1] is however a bit 
> annoying to parse, as it separates label / value by a comma, so that if a label 
> contains a similar pattern, parsing may even be buggy.
> An easier format for facets should be used, possibly together with a 
> utility class that returns proper objects that client code can consume.
> [1] : 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/jcr/query/FacetTest.java#L99



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3826) Lucene index augmentation doesn't work in Osgi environment

2016-01-14 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-3826.

   Resolution: Fixed
Fix Version/s: (was: 1.4)
   1.3.14

Took care of comments above and committed in trunk at 
[r1724653|http://svn.apache.org/r1724653].

> Lucene index augmentation doesn't work in Osgi environment
> --
>
> Key: OAK-3826
> URL: https://issues.apache.org/jira/browse/OAK-3826
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.3.14
>
> Attachments: OAK-3826-v2.patch, OAK-3826.patch
>
>
> OAK-3576 introduced a way to hook SPI to provide extra fields and query terms 
> for a lucene index.
> In the OSGi world, due to OAK-3815, {{LuceneIndexProviderService}} registered 
> references to the SPI and pinged {{IndexAugmentFactory}} to update its map. But 
> it seems the bind/unbind methods get called ahead of time as compared to the 
> information the Tracker contains. This leads to a wrong set of services captured 
> by {{IndexAugmentFactory}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3883) Avoid commit from too far in the future (due to clock skews) to go through

2016-01-14 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-3883:
--

 Summary: Avoid commit from too far in the future (due to clock 
skews) to go through
 Key: OAK-3883
 URL: https://issues.apache.org/jira/browse/OAK-3883
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, documentmk
Reporter: Vikas Saurabh
Assignee: Vikas Saurabh
Priority: Minor


Following up [discussion|http://markmail.org/message/m5jk5nbby77nlqs5] \[0] to 
avoid bad commits due to misbehaving clocks. Points from the discussion:
* We can start self-destruct mode while updating the lease
* Revision creation should check that a newly created revision isn't beyond the 
leaseEnd time (a rough sketch of this check follows below)
* The implementation done for OAK-2682 might be useful

[0]: http://markmail.org/message/m5jk5nbby77nlqs5
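
A minimal sketch of the revision-creation check mentioned above (illustrative only; 
the {{leaseEndTime}} name and the exception type are assumptions, not the actual Oak 
implementation):
{code}
// Refuse to hand out a revision timestamp that lies beyond the current lease end.
static long newRevisionTimestamp(long leaseEndTime) {
    long candidate = System.currentTimeMillis();
    if (candidate > leaseEndTime) {
        // either the lease expired or the local clock jumped ahead of it
        throw new IllegalStateException("Refusing to create a revision at " + candidate
                + ", which is beyond the lease end " + leaseEndTime + " (clock skew?)");
    }
    return candidate;
}
{code}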



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3862) Move integration tests in a different Maven module

2016-01-14 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098331#comment-15098331
 ] 

Julian Sedding commented on OAK-3862:
-

Not sure what people mean when they say "move to dedicated IT module".

# Create a new IT module and run tests with potentially all fixtures in this 
module
# Create a new IT module that defines implementation-agnostic tests, and run the 
tests in the various (future) persistence modules (e.g. 
oak-persistence-segment, oak-persistence-document, etc.)

If it's #1, I am -0 as I don't think that's an elegant approach. However, it's 
pragmatic and solves the immediate problem.

I think #2 is the nicer approach, as it would essentially provide a test kit 
that could be easily re-used even for implementations outside the Jackrabbit 
project (similar to the JCR TCK). However, this is presumably slightly more 
work to achieve.



> Move integration tests in a different Maven module
> --
>
> Key: OAK-3862
> URL: https://issues.apache.org/jira/browse/OAK-3862
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: 1.4
>
>
> While moving the Segment Store and related packages into its own bundle, I 
> figured out that integration tests contained in {{oak-core}} contribute to a 
> cyclic dependency between the (new) {{oak-segment}} bundle and {{oak-core}}.
> The dependency is due to the usage of {{NodeStoreFixture}} to instantiate 
> different implementations of {{NodeStore}} in a semi-transparent way.
> Tests depending on {{NodeStoreFixture}} are most likely integration tests. A 
> clean solution to this problem would be to move those integration tests into 
> a new Maven module, referencing the API and implementation modules as needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3882) Collision may mark the wrong commit

2016-01-14 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098265#comment-15098265
 ] 

Marcel Reutegger commented on OAK-3882:
---

The 1.2 and 1.0 branches are not affected by this issue.

> Collision may mark the wrong commit
> ---
>
> Key: OAK-3882
> URL: https://issues.apache.org/jira/browse/OAK-3882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.3.6
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: resilience
> Fix For: 1.4
>
>
> In some rare cases it may happen that a collision marks the wrong commit. 
> OAK-3344 introduced a conditional update of the commit root with a collision 
> marker. However, this may fail when the commit revision of the condition is 
> moved to a split document at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3882) Collision may mark the wrong commit

2016-01-14 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098257#comment-15098257
 ] 

Marcel Reutegger commented on OAK-3882:
---

Added ignored test to trunk: http://svn.apache.org/r1724628 and 
http://svn.apache.org/r1724631

> Collision may mark the wrong commit
> ---
>
> Key: OAK-3882
> URL: https://issues.apache.org/jira/browse/OAK-3882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.3.6
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: resilience
> Fix For: 1.4
>
>
> In some rare cases it may happen that a collision marks the wrong commit. 
> OAK-3344 introduced a conditional update of the commit root with a collision 
> marker. However, this may fail when the commit revision of the condition is 
> moved to a split document at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3862) Move integration tests in a different Maven module

2016-01-14 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098240#comment-15098240
 ] 

Davide Giannella commented on OAK-3862:
---

+1 for moving towards a dedicated IT module.

> Move integration tests in a different Maven module
> --
>
> Key: OAK-3862
> URL: https://issues.apache.org/jira/browse/OAK-3862
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Francesco Mari
>Assignee: Francesco Mari
> Fix For: 1.4
>
>
> While moving the Segment Store and related packages into its own bundle, I 
> figured out that integration tests contained in {{oak-core}} contribute to a 
> cyclic dependency between the (new) {{oak-segment}} bundle and {{oak-core}}.
> The dependency is due to the usage of {{NodeStoreFixture}} to instantiate 
> different implementations of {{NodeStore}} in a semi-transparent way.
> Tests depending on {{NodeStoreFixture}} are most likely integration tests. A 
> clean solution to this problem would be to move those integration tests into 
> a new Maven module, referencing the API and implementation modules as needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3882) Collision may mark the wrong commit

2016-01-14 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-3882:
--
Labels: resilience  (was: )

> Collision may mark the wrong commit
> ---
>
> Key: OAK-3882
> URL: https://issues.apache.org/jira/browse/OAK-3882
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.3.6
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: resilience
> Fix For: 1.4
>
>
> In some rare cases it may happen that a collision marks the wrong commit. 
> OAK-3344 introduced a conditional update of the commit root with a collision 
> marker. However, this may fail when the commit revision of the condition is 
> moved to a split document at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3882) Collision may mark the wrong commit

2016-01-14 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3882:
-

 Summary: Collision may mark the wrong commit
 Key: OAK-3882
 URL: https://issues.apache.org/jira/browse/OAK-3882
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, documentmk
Affects Versions: 1.3.6
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.4


In some rare cases it may happen that a collision marks the wrong commit. 
OAK-3344 introduced a conditional update of the commit root with a collision 
marker. However, this may fail when the commit revision of the condition is 
moved to a split document at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3791) Time measurements for DocumentStore methods

2016-01-14 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098099#comment-15098099
 ] 

Marcel Reutegger commented on OAK-3791:
---

Looks very good to me. One question about the screenshot. Why do all the types 
have an oak prefix?

It would also be nice to have stats about how long it took to acquire one of 
the nodeLocks. The current patch includes the time to acquire the lock for a 
given uncached call. I would probably keep it that way, to get an accurate 
picture of the overall time to perform the operation, but additional lock stats 
would also be nice.

Stats for other methods like remove are missing. Wouldn't it be better to add 
those as well?

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-v1.patch, 
> OAK-3791-v2-chetanm.patch, oak-document-stats.png
>
>
> For monitoring (in high-latency environments), it would be useful to measure 
> and report the time taken by the DocumentStore methods.
> These could be exposed as (and/or):
> - Timers obtained from the StatisticsProvider (statistics registry)
> - TimeSeries 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098088#comment-15098088
 ] 

Julian Reschke commented on OAK-3637:
-

Current changes in trunk: http://svn.apache.org/r1724598 
http://svn.apache.org/r1723008

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3637:

Fix Version/s: (was: 1.2.11)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3470) Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being used

2016-01-14 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-3470.
---
   Resolution: Fixed
Fix Version/s: 1.3.14

Removed the check for BasicDBObject. The method is only used by the Document 
class, which does not have BasicDBObjects.

As a workaround, add a dependency on the MongoDB Java Driver.

Fixed in trunk: http://svn.apache.org/r1724597

> Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being 
> used
> -
>
> Key: OAK-3470
> URL: https://issues.apache.org/jira/browse/OAK-3470
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.2.6, 1.3.7
>Reporter: Jegadisan Sankar Kumar
>Assignee: Marcel Reutegger
>Priority: Minor
> Fix For: 1.3.14
>
>
> When creating a repository without Mongo and just an RDBMS-backed DocumentNodeStore, a 
> NoClassDefFoundError is encountered.
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/mongodb/BasicDBObject
>   at 
> org.apache.jackrabbit.oak.plugins.document.util.Utils.estimateMemoryUsage(Utils.java:160)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Document.getMemory(Document.java:167)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:33)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:27)
>   at 
> com.google.common.cache.LocalCache$Segment.setValue(LocalCache.java:2158)
>   at 
> com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3140)
>   at 
> com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2349)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentCached(RDBDocumentStore.java:762)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:222)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.(DocumentNodeStore.java:448)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getNodeStore(DocumentMK.java:671)
> {code}
> The dependencies in pom.xml are as follows
> {code:xml}
> <dependencies>
>     <dependency>
>         <groupId>org.apache.jackrabbit</groupId>
>         <artifactId>oak-jcr</artifactId>
>         <version>1.2.6</version>
>     </dependency>
>     <dependency>
>         <groupId>com.h2database</groupId>
>         <artifactId>h2</artifactId>
>         <version>1.4.189</version>
>     </dependency>
>     <dependency>
>         <groupId>ch.qos.logback</groupId>
>         <artifactId>logback-classic</artifactId>
>         <version>1.1.3</version>
>     </dependency>
> </dependencies>
> {code}
> And the code to recreate the issue
> {code:java}
> // Build the Data Source to be used.
> JdbcDataSource ds = new JdbcDataSource();
> ds.setURL("jdbc:h2:mem:oak;DB_CLOSE_DELAY=-1");
> ds.setUser("sa");
> ds.setPassword("sa");
> // Build the OAK Repository Instance
> DocumentNodeStore ns = null;
> try {
> ns = new DocumentMK.Builder()
> .setRDBConnection(ds)
> .getNodeStore();
> } finally {
> if (ns != null) {
> ns.dispose();
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3881) TCPBroadcaster causes shutdown delay

2016-01-14 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3881:
-

 Summary: TCPBroadcaster causes shutdown delay
 Key: OAK-3881
 URL: https://issues.apache.org/jira/browse/OAK-3881
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, documentmk
Reporter: Marcel Reutegger
Priority: Minor


OAK-3727 enabled the broadcasting cache by default. With this change the 
shutdown of a DocumentNodeStore gets delayed by roughly one second. This is 
also visible in the time it takes to run the tests on travis. Previously the 
build took 35 min. The build with OAK-3727 took 45 minutes.

A thread dump shows the following threads:

{noformat}
"Oak TCPBroadcaster: discover #50" daemon prio=5 tid=0x7f8957837800 
nid=0x5e13 waiting on condition [0x000116b5b000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.broadcast.TCPBroadcaster.discover(TCPBroadcaster.java:268)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.broadcast.TCPBroadcaster$2.run(TCPBroadcaster.java:155)
at java.lang.Thread.run(Thread.java:745)
{noformat}

And the shutdown waiting for above thread:

{noformat}
"main" prio=5 tid=0x7f8954000800 nid=0x1303 in Object.wait() 
[0x00010cd5e000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x0007f6c80e10> (a java.lang.Thread)
at java.lang.Thread.join(Thread.java:1281)
- locked <0x0007f6c80e10> (a java.lang.Thread)
at java.lang.Thread.join(Thread.java:1355)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.broadcast.TCPBroadcaster.close(TCPBroadcaster.java:379)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.close(PersistentCache.java:346)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.dispose(DocumentNodeStore.java:581)

{noformat}
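
The delay comes from the discover loop sleeping in fixed one-second intervals while 
close() joins the thread. A sketch of one common way to avoid such a delay 
(illustrative only, not necessarily how TCPBroadcaster will be fixed) is to wait on a 
latch that close() releases:
{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class StoppableDiscoverer {

    private final CountDownLatch stop = new CountDownLatch(1);

    void discoverLoop() throws InterruptedException {
        while (stop.getCount() > 0) {
            // ... one discovery round ...
            // returns immediately once close() has been called,
            // instead of sleeping out the full interval
            stop.await(1, TimeUnit.SECONDS);
        }
    }

    void close() {
        stop.countDown();
    }
}
{code}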



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3791) Time measurements for DocumentStore methods

2016-01-14 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098019#comment-15098019
 ] 

Chetan Mehrotra edited comment on OAK-3791 at 1/14/16 11:57 AM:


[updated patch|OAK-3791-v2-chetanm.patch] which takes a bit different approach

# Introduced a new {{DocumentStoreStatsCollector}} callback which would be 
invoked by the various DocumentStore implementations. This captures the kind of data 
we want to collect (a rough sketch of the callback shape is shown below). With this, 
only a very small change was needed in the current DocumentStore implementation, and 
all stats-related logic now lives separately, so it can be evolved easily
# {{DocumentStoreStats}} implements {{DocumentStoreStatsCollector}} - based on the 
data provided in the callback, the various types of stats are computed
# {{DocumentStoreStatsMBean}} exposes the various time series data
# Existing usage of {{PerfLogger}} in {{MongoDocumentStore}} has been replaced 
and similar logs are now done from {{DocumentStoreStats}}, as PerfLogger does 
not allow passing a duration and only accepts a start time

*Stats Types*
# Finding uncached Nodes - Separately for calls made to primary and secondary. 
This would allow us to test out various strategies of optimizing reads from 
secondaries and see how effective they are
# How many Nodes are being read via Query calls
# Calls made by Journal
# Number of uncached find call for split documents - If they are more we should 
look into caching them better

See [screenshot|^oak-document-stats.png] for various types of stats in a 
startup of Oak based application

[~mreutegg] Can you review the approach taken. If fine I would commit it and 
then add similar data collection to RDB side

[~rosu] Approach taken here differs from earlier approach. So would be helpful 
if you can also take a look
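
As a rough sketch of the callback split described in points 1-3 above (the 
interface and method signatures are illustrative and may not match the actual 
OAK-3791-v2-chetanm.patch):

{code:java}
// Rough sketch only; names and signatures in the actual patch may differ.
public interface DocumentStoreStatsCollector {

    // called by a DocumentStore implementation after an uncached find
    void doneFindUncached(long timeTakenNanos, String collection, String key,
                          boolean docFound, boolean isSlaveOk);
}

class LoggingStatsCollector implements DocumentStoreStatsCollector {

    @Override
    public void doneFindUncached(long timeTakenNanos, String collection, String key,
                                 boolean docFound, boolean isSlaveOk) {
        // a real implementation would update Timers/Meters and TimeSeries here
        System.out.printf("find %s/%s took %d us (secondary=%b, found=%b)%n",
                collection, key, timeTakenNanos / 1000, isSlaveOk, docFound);
    }
}

// Inside a DocumentStore implementation the instrumentation stays minimal, e.g.:
//
//   long start = System.nanoTime();
//   NodeDocument doc = readDocumentUncached(collection, key);
//   statsCollector.doneFindUncached(System.nanoTime() - start,
//           collection.toString(), key, doc != null, readFromSecondary);
{code}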


was (Author: chetanm):
[updated patch|OAK-3791-v2-chetanm.patch] which takes a bit different approach

# Introduced a new {{DocumentStoreStatsCollector}} callback which would be 
invoked by various DocumentStore impl. This captures the kind of data we want 
to collect. With this very small change was done in current implementation of 
DocumentStore and all stats related logic now lives separately. This can be 
evolved easily
# {{DocumentStoreStats}} implements  {{DocumentStoreStatsCollector}} - Based on 
data provided in callback various types of stats are computed
# {{DocumentStoreStatsMBean}} exposes the various time series data
# Existing usage of {{PerfLogger}} in {{MongoDocumentStore}} have been replaced 
and similar logs are now done from {{DocumentStoreStats}} as PerfLogger does 
not allow passing duration and only accept start time

See [screenshot|^oak-document-stats.png] for various types of stats in a 
startup of Oak based application

[~mreutegg] Can you review the approach taken. If fine I would commit it and 
then add similar data collection to RDB side

[~rosu] Approach taken here differs from earlier approach. So would be helpful 
if you can also take a look

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-v1.patch, 
> OAK-3791-v2-chetanm.patch, oak-document-stats.png
>
>
> For monitoring (in high-latency environments), it would be useful to measure 
> and report the time taken by the DocumentStore methods.
> These could be exposed (and/or):
> - as Timers obtained from the StatisticsProvider (statistics registry)
> - as TimeSeries 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3791) Time measurements for DocumentStore methods

2016-01-14 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098019#comment-15098019
 ] 

Chetan Mehrotra edited comment on OAK-3791 at 1/14/16 11:58 AM:


[updated patch|^OAK-3791-v2-chetanm.patch] which takes a somewhat different 
approach

# Introduced a new {{DocumentStoreStatsCollector}} callback which is invoked by 
the various DocumentStore implementations. It captures the kind of data we want 
to collect. With this, only a very small change was needed in the current 
DocumentStore implementations, and all stats-related logic now lives 
separately, so it can be evolved easily
# {{DocumentStoreStats}} implements {{DocumentStoreStatsCollector}} - based on 
the data provided in the callback, the various types of stats are computed
# {{DocumentStoreStatsMBean}} exposes the various time series data
# The existing usage of {{PerfLogger}} in {{MongoDocumentStore}} has been 
replaced, and similar logs are now emitted from {{DocumentStoreStats}}, as 
PerfLogger does not allow passing a duration and only accepts a start time

*Stats Types*
# Finding uncached nodes - separately for calls made to the primary and the 
secondary. This would allow us to test various strategies for optimizing reads 
from secondaries and see how effective they are
# How many nodes are being read via query calls
# Calls made by the journal
# Number of uncached find calls for split documents - if these are high we 
should look into caching them better

See the [screenshot|^oak-document-stats.png] for the various types of stats 
during the startup of an Oak based application

[~mreutegg] Can you review the approach taken? If it looks fine I would commit 
it and then add similar data collection on the RDB side

[~rosu] The approach taken here differs from the earlier one, so it would be 
helpful if you could also take a look


was (Author: chetanm):
[updated patch|OAK-3791-v2-chetanm.patch] which takes a bit different approach

# Introduced a new {{DocumentStoreStatsCollector}} callback which would be 
invoked by various DocumentStore impl. This captures the kind of data we want 
to collect. With this very small change was done in current implementation of 
DocumentStore and all stats related logic now lives separately. This can be 
evolved easily
# {{DocumentStoreStats}} implements  {{DocumentStoreStatsCollector}} - Based on 
data provided in callback various types of stats are computed
# {{DocumentStoreStatsMBean}} exposes the various time series data
# Existing usage of {{PerfLogger}} in {{MongoDocumentStore}} have been replaced 
and similar logs are now done from {{DocumentStoreStats}} as PerfLogger does 
not allow passing duration and only accept start time

*Stats Types*
# Finding uncached Nodes - Separately for calls made to primary and secondary. 
This would allow us to test out various strategies of optimizing reads from 
secondaries and see how effective they are
# How many Nodes are being read via Query calls
# Calls made by Journal
# Number of uncached find call for split documents - If they are more we should 
look into caching them better

See [screenshot|^oak-document-stats.png] for various types of stats in a 
startup of Oak based application

[~mreutegg] Can you review the approach taken. If fine I would commit it and 
then add similar data collection to RDB side

[~rosu] Approach taken here differs from earlier approach. So would be helpful 
if you can also take a look

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-v1.patch, 
> OAK-3791-v2-chetanm.patch, oak-document-stats.png
>
>
> For monitoring (in high-latency environments), it would be useful to measure 
> and report the time taken by the DocumentStore methods.
> These could be exposed (and/or):
> - as Timers obtained from the StatisticsProvider (statistics registry)
> - as TimeSeries 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3791) Time measurements for DocumentStore methods

2016-01-14 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098025#comment-15098025
 ] 

Chetan Mehrotra commented on OAK-3791:
--

bq. IMO timing the database related methods would be very useful. Is a timer 
costly? I expect the DocumentStore methods that hit the database to be quite 
expensive (order >= milliseconds). If needed, I could spend some time 
investigating the impact.

Agreed. The overhead of a timer was not the driving concern here. It was rather 
that it requires a few more changes, and we already had some similar existing 
support in PerfLogger, so all of that would need to be consolidated (see the 
patch proposed below), hence those thoughts. But based on the feedback I agree 
that collecting time would be helpful.

bq. On the long run having a lot of stats will add overhead.

Per various users of Metrics the overhead is not very high, and it is designed 
for runtime usage where it is enabled by default. So let's have that in and see 
how it performs. If required we can add fine-grained control over enabling 
metrics. The key point is that we have the required metrics in place.
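
For illustration, a minimal standalone sketch of timing a call with the 
Dropwizard Metrics library used directly (the proposal routes this through 
Oak's StatisticsProvider instead; the metric name below is made up):

{code:java}
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

import java.util.concurrent.TimeUnit;

public class TimerOverheadSketch {

    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        Timer findTimer = registry.timer("DOCUMENT_NODES_FIND");

        // time a (simulated) DocumentStore.find() call
        Timer.Context ctx = findTimer.time();
        try {
            simulatedFind();
        } finally {
            ctx.stop();
        }

        // the registry can be exposed via JMX; here we simply dump it
        ConsoleReporter.forRegistry(registry)
                .convertDurationsTo(TimeUnit.MICROSECONDS)
                .build()
                .report();
    }

    private static void simulatedFind() {
        // stand-in for a database round trip
        try {
            Thread.sleep(5);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}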

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-v1.patch, 
> OAK-3791-v2-chetanm.patch, oak-document-stats.png
>
>
> For monitoring (in high-latency environments), it would be useful to measure 
> and report the time taken by the DocumentStore methods.
> These could be exposed (and/or):
> - as Timers obtained from the StatisticsProvider (statistics registry)
> - as TimeSeries 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3791) Time measurements for DocumentStore methods

2016-01-14 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3791:
-
Attachment: oak-document-stats.png
OAK-3791-v2-chetanm.patch

[updated patch|OAK-3791-v2-chetanm.patch] which takes a somewhat different 
approach

# Introduced a new {{DocumentStoreStatsCollector}} callback which is invoked by 
the various DocumentStore implementations. It captures the kind of data we want 
to collect. With this, only a very small change was needed in the current 
DocumentStore implementations, and all stats-related logic now lives 
separately, so it can be evolved easily
# {{DocumentStoreStats}} implements {{DocumentStoreStatsCollector}} - based on 
the data provided in the callback, the various types of stats are computed
# {{DocumentStoreStatsMBean}} exposes the various time series data
# The existing usage of {{PerfLogger}} in {{MongoDocumentStore}} has been 
replaced, and similar logs are now emitted from {{DocumentStoreStats}}, as 
PerfLogger does not allow passing a duration and only accepts a start time

See the [screenshot|^oak-document-stats.png] for the various types of stats 
during the startup of an Oak based application

[~mreutegg] Can you review the approach taken? If it looks fine I would commit 
it and then add similar data collection on the RDB side

[~rosu] The approach taken here differs from the earlier one, so it would be 
helpful if you could also take a look

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-v1.patch, 
> OAK-3791-v2-chetanm.patch, oak-document-stats.png
>
>
> For monitoring (in high-latency environments), it would be useful to measure 
> and report the time taken by the DocumentStore methods.
> These could be exposed (and/or):
> - as Timers obtained from the StatisticsProvider (statistics registry)
> - as TimeSeries 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3470) Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being used

2016-01-14 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098007#comment-15098007
 ] 

Marcel Reutegger commented on OAK-3470:
---

The dependency of Utils on the MongoDB Java driver is indeed a bit troublesome. 
The intention is that Oak should also work without the driver, e.g. when the 
RDBDocumentStore is used. This is why the driver dependency is marked as 
optional.
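
One common pattern for keeping such an optional dependency from failing at 
runtime (a sketch under that assumption, not necessarily the fix applied to 
Utils) is to probe for the driver class once and fall back to a driver-free 
estimate when it is absent:

{code:java}
// Sketch of guarding an optional dependency; not the actual fix in Utils.
public final class MemoryEstimator {

    // detect once whether the MongoDB driver is on the class path
    private static final boolean MONGO_AVAILABLE;

    static {
        boolean available;
        try {
            Class.forName("com.mongodb.BasicDBObject");
            available = true;
        } catch (ClassNotFoundException e) {
            available = false;
        }
        MONGO_AVAILABLE = available;
    }

    public static int estimateMemoryUsage(Object value) {
        if (MONGO_AVAILABLE && isBasicDBObject(value)) {
            // driver-specific estimate, only taken when the driver is present
            return estimateBasicDBObject(value);
        }
        // fall back to a rough driver-free estimate
        return value == null ? 0 : value.toString().length() * 2;
    }

    private static boolean isBasicDBObject(Object value) {
        // compare by class name to avoid a hard reference to the driver class
        return value != null
                && "com.mongodb.BasicDBObject".equals(value.getClass().getName());
    }

    private static int estimateBasicDBObject(Object value) {
        // a real implementation would walk the BasicDBObject entries here
        return 64;
    }
}
{code}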

> Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being 
> used
> -
>
> Key: OAK-3470
> URL: https://issues.apache.org/jira/browse/OAK-3470
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.2.6, 1.3.7
>Reporter: Jegadisan Sankar Kumar
>Assignee: Marcel Reutegger
>Priority: Minor
>
> When creating a repository without Mongo and just an RDBMS-backed 
> DocumentNodeStore, a NoClassDefFoundError is encountered.
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/mongodb/BasicDBObject
>   at 
> org.apache.jackrabbit.oak.plugins.document.util.Utils.estimateMemoryUsage(Utils.java:160)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Document.getMemory(Document.java:167)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:33)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:27)
>   at 
> com.google.common.cache.LocalCache$Segment.setValue(LocalCache.java:2158)
>   at 
> com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3140)
>   at 
> com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2349)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentCached(RDBDocumentStore.java:762)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:222)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.<init>(DocumentNodeStore.java:448)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getNodeStore(DocumentMK.java:671)
> {code}
> The dependencies in pom.xml are as follows
> {code:xml}
> <dependencies>
>     <dependency>
>         <groupId>org.apache.jackrabbit</groupId>
>         <artifactId>oak-jcr</artifactId>
>         <version>1.2.6</version>
>     </dependency>
>     <dependency>
>         <groupId>com.h2database</groupId>
>         <artifactId>h2</artifactId>
>         <version>1.4.189</version>
>     </dependency>
>     <dependency>
>         <groupId>ch.qos.logback</groupId>
>         <artifactId>logback-classic</artifactId>
>         <version>1.1.3</version>
>     </dependency>
> </dependencies>
> {code}
> And the code to recreate the issue
> {code:java}
> // Build the Data Source to be used.
> JdbcDataSource ds = new JdbcDataSource();
> ds.setURL("jdbc:h2:mem:oak;DB_CLOSE_DELAY=-1");
> ds.setUser("sa");
> ds.setPassword("sa");
> // Build the OAK Repository Instance
> DocumentNodeStore ns = null;
> try {
> ns = new DocumentMK.Builder()
> .setRDBConnection(ds)
> .getNodeStore();
> } finally {
> if (ns != null) {
> ns.dispose();
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3470) Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being used

2016-01-14 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger reassigned OAK-3470:
-

Assignee: Marcel Reutegger

> Utils.estimateMemoryUsage has a NoClassDefFoundError when Mongo is not being 
> used
> -
>
> Key: OAK-3470
> URL: https://issues.apache.org/jira/browse/OAK-3470
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, documentmk
>Affects Versions: 1.2.6, 1.3.7
>Reporter: Jegadisan Sankar Kumar
>Assignee: Marcel Reutegger
>Priority: Minor
>
> When creating a repository without Mongo and just an RDBMS-backed 
> DocumentNodeStore, a NoClassDefFoundError is encountered.
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/mongodb/BasicDBObject
>   at 
> org.apache.jackrabbit.oak.plugins.document.util.Utils.estimateMemoryUsage(Utils.java:160)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Document.getMemory(Document.java:167)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:33)
>   at 
> org.apache.jackrabbit.oak.cache.EmpiricalWeigher.weigh(EmpiricalWeigher.java:27)
>   at 
> com.google.common.cache.LocalCache$Segment.setValue(LocalCache.java:2158)
>   at 
> com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3140)
>   at 
> com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2349)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.readDocumentCached(RDBDocumentStore.java:762)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:222)
>   at 
> org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.find(RDBDocumentStore.java:217)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.<init>(DocumentNodeStore.java:448)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentMK$Builder.getNodeStore(DocumentMK.java:671)
> {code}
> The dependencies in pom.xml are as follows
> {code:xml}
> <dependencies>
>     <dependency>
>         <groupId>org.apache.jackrabbit</groupId>
>         <artifactId>oak-jcr</artifactId>
>         <version>1.2.6</version>
>     </dependency>
>     <dependency>
>         <groupId>com.h2database</groupId>
>         <artifactId>h2</artifactId>
>         <version>1.4.189</version>
>     </dependency>
>     <dependency>
>         <groupId>ch.qos.logback</groupId>
>         <artifactId>logback-classic</artifactId>
>         <version>1.1.3</version>
>     </dependency>
> </dependencies>
> {code}
> And the code to recreate the issue
> {code:java}
> // Build the Data Source to be used.
> JdbcDataSource ds = new JdbcDataSource();
> ds.setURL("jdbc:h2:mem:oak;DB_CLOSE_DELAY=-1");
> ds.setUser("sa");
> ds.setPassword("sa");
> // Build the OAK Repository Instance
> DocumentNodeStore ns = null;
> try {
> ns = new DocumentMK.Builder()
> .setRDBConnection(ds)
> .getNodeStore();
> } finally {
> if (ns != null) {
> ns.dispose();
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3844) Better support for versionable nodes without version histories

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098005#comment-15098005
 ] 

Tomek Rękawek edited comment on OAK-3844 at 1/14/16 11:16 AM:
--

[~jsedding], thanks for noticing this. So, maybe we should create an empty 
version history for nodes:

a) whose type inherits from {{mix:versionable}} and
b) which don't yet have a version history

I agree it should be extracted to a separate issue.
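
A small JCR-level sketch of the condition described in a) and b) (illustrative 
only; the actual oak-upgrade code operates on node states rather than JCR 
nodes):

{code:java}
import javax.jcr.Node;
import javax.jcr.RepositoryException;

public final class VersionHistoryCheck {

    /**
     * True when the node claims to be versionable (a) but has no
     * version history reference yet (b).
     */
    public static boolean needsEmptyVersionHistory(Node node) throws RepositoryException {
        return node.isNodeType("mix:versionable")
                && !node.hasProperty("jcr:versionHistory");
    }
}
{code}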


was (Author: tomek.rekawek):
[~jsedding], thanks for noticing this. So, maybe we should create an empty 
version history for nodes:

a) with type inheriting from {{mix:versionable}} and
b) doesn't have the version history

I agree it should be extracted to a separate issue.

> Better support for versionable nodes without version histories
> --
>
> Key: OAK-3844
> URL: https://issues.apache.org/jira/browse/OAK-3844
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Tomek Rękawek
>Assignee: Julian Sedding
> Fix For: 1.4
>
> Attachments: OAK-3844-failing-test.patch, OAK-3844.patch
>
>
> One of the customers reported the following exception that was thrown during 
> the migration:
> {noformat}
> Caused by: java.lang.IllegalStateException: This builder does not exist: 
> 95a5253f-d37b-4e88-a4b4-0721530344fc
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
>   at 
> org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
> ...
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
> {noformat}
> It seems that the node with the reported UUID has a primary type inheriting 
> from {{mix:versionable}}, but there is no appropriate version history in the 
> version storage.
> Obviously this means that there's something wrong with the repository. 
> However, I think that the migration process shouldn't fail, but proceed 
> silently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3844) Better support for versionable nodes without version histories

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15098005#comment-15098005
 ] 

Tomek Rękawek commented on OAK-3844:


[~jsedding], thanks for noticing this. So, maybe we should create an empty 
version history for nodes:

a) with type inheriting from {{mix:versionable}} and
b) doesn't have the version history

I agree it should be extracted to a separate issue.

> Better support for versionable nodes without version histories
> --
>
> Key: OAK-3844
> URL: https://issues.apache.org/jira/browse/OAK-3844
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Tomek Rękawek
>Assignee: Julian Sedding
> Fix For: 1.4
>
> Attachments: OAK-3844-failing-test.patch, OAK-3844.patch
>
>
> One of the customers reported the following exception that was thrown during 
> the migration:
> {noformat}
> Caused by: java.lang.IllegalStateException: This builder does not exist: 
> 95a5253f-d37b-4e88-a4b4-0721530344fc
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
>   at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
>   at 
> org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
> ...
>   at 
> org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
> {noformat}
> It seems that the node with the reported UUID has a primary type inheriting 
> from {{mix:versionable}}, but there is no appropriate version history in the 
> version storage.
> Obviously this means that there's something wrong with the repository. 
> However, I think that the migration process shouldn't fail, but proceed 
> silently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2472) Add support for atomic counters on cluster solutions

2016-01-14 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2472:
--
Priority: Blocker  (was: Major)

> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.0
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>Priority: Blocker
>  Labels: scalability
> Fix For: 1.4, 1.3.14
>
> Attachments: OAK-2472-failure-1452511772.log.gz, 
> OAK-2472-success-1452511511.log.gz, atomic-counter.md, oak-1452185608.log.gz, 
> oak-1452268140.log.gz
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2472) Add support for atomic counters on cluster solutions

2016-01-14 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2472:
--
Fix Version/s: 1.3.14

> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.0
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>Priority: Blocker
>  Labels: scalability
> Fix For: 1.4, 1.3.14
>
> Attachments: OAK-2472-failure-1452511772.log.gz, 
> OAK-2472-success-1452511511.log.gz, atomic-counter.md, oak-1452185608.log.gz, 
> oak-1452268140.log.gz
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097966#comment-15097966
 ] 

Thomas März commented on OAK-3846:
--

[~jsedding] /var/audit is required and explicitly included instead of being 
excluded.

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.<init>(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   at 
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:838)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:545)
>   at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:4547)
>   at org.apache.felix.framework.Felix.registerService(Felix.java:

[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097961#comment-15097961
 ] 

Konrad Windszus commented on OAK-3846:
--

That sounds reasonable. That CommitHook should IMHO be switched on by default, 
because
* it is hard to determine that there is actually an SNS problem from the 
exception being thrown by Oak
* it hopefully should not have a big impact on performance

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.<init>(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   at 
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:838)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(Eve

[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097960#comment-15097960
 ] 

Julian Sedding commented on OAK-3846:
-

[~primedo], I don't know your use-case. However, if you don't require 
{{/var/audit}} in your migrated/copied instance, it would be more efficient to 
exclude this path. This feature is available today.

This should of course not prevent us from fixing the underlying issue.

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.<init>(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   at 
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:838)
>   at 
> org.apache.felix.framework.util.EventDispatche

[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097953#comment-15097953
 ] 

Thomas März commented on OAK-3846:
--

[~tuerkben] Personally I'd prefer skipping the nodes, like in OAK-3111, since 
for the actual problem with /var/audit renaming the nodes doesn't provide any 
benefit. But renaming the nodes and logging the new path is also a good 
solution for our use case.

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.<init>(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   at 
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:838)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(Event

[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097943#comment-15097943
 ] 

Julian Reschke commented on OAK-3637:
-

Thanks. Will take it from here.

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3842) Adjust package export declarations

2016-01-14 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097940#comment-15097940
 ] 

Amit Jain commented on OAK-3842:


[~mduerig] We can remove the blob related package exports
{code}
org.apache.jackrabbit.oak.spi.blob
org.apache.jackrabbit.oak.spi.blob.stats
{code}

> Adjust package export declarations 
> ---
>
> Key: OAK-3842
> URL: https://issues.apache.org/jira/browse/OAK-3842
> Project: Jackrabbit Oak
>  Issue Type: Task
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: api, modularization, technical_debt
> Fix For: 1.4
>
>
> We need to adjust the package export declarations such that they become 
> manageable with our branch / release model. 
> See http://markmail.org/thread/5g3viq5pwtdryapr for discussion.
> I propose to remove package export declarations from all packages that we 
> don't consider public API / SPI beyond Oak itself. This would allow us to 
> evolve Oak internal stuff (e.g. things used across Oak modules) freely 
> without having to worry about merges to branches messing up semantic 
> versioning. OTOH it would force us to keep externally facing public API / SPI 
> reasonably stable also across the branches. Furthermore such an approach 
> would send the right signal to Oak API / SPI consumers regarding the 
> stability assumptions they can make. 
> An external API / SPI having a (transitive) dependency on internals might be 
> troublesome. In doubt I would remove the export version here until we can 
> make reasonable guarantees (either through decoupling the code or stabilising 
> the dependencies). 
> I would start digging through the export version and prepare an initial 
> proposal for further discussion. 
> /cc [~frm], [~chetanm], [~mmarth]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097938#comment-15097938
 ] 

Tomek Rękawek commented on OAK-3846:


[~tuerkben], I was thinking about the {{old_name[2]}}, {{old_name[3]}}, etc. 
naming scheme and a warning message for each renamed node.
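
A tiny sketch of what such a rename plus warning could look like (purely 
illustrative; the suffix scheme and class name below are hypothetical and not 
what oak-upgrade actually does):

{code:java}
// Purely illustrative; not the renaming scheme used by oak-upgrade.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class SnsRenamer {

    private static final Logger LOG = LoggerFactory.getLogger(SnsRenamer.class);

    /**
     * Derives a unique name for the n-th same-name sibling and logs a
     * warning so the rename can be found again after the upgrade.
     */
    public static String renameSibling(String parentPath, String baseName, int index) {
        if (index <= 1) {
            return baseName; // first sibling keeps its original name
        }
        String newName = baseName + "_" + index; // hypothetical scheme
        LOG.warn("Renamed same-name sibling {}/{}[{}] to {}/{}",
                parentPath, baseName, index, parentPath, newName);
        return newName;
    }
}
{code}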

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.<init>(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   at 
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:838)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:545)
>   at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:4547)
>   at org.apach

[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097934#comment-15097934
 ] 

Tomek Rękawek commented on OAK-3637:


(y)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: OAK-3637-same-document-bug.patch

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: (was: OAK-3637-same-document-bug.patch)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3727) Broadcasting cache: auto-configuration

2016-01-14 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-3727.
-
Resolution: Fixed

> Broadcasting cache: auto-configuration
> --
>
> Key: OAK-3727
> URL: https://issues.apache.org/jira/browse/OAK-3727
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.14
>
>
> The plan is, each cluster node writes its IP address and listening port to 
> the clusterInfo collection, and (if really needed) a UUID. That way, it's 
> possible to detect other cluster nodes and connect to them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097911#comment-15097911
 ] 

Julian Reschke commented on OAK-3637:
-

We don't need a separate fixture. We can just require that if _modCount is 
present, it behaves the way it's supposed to...

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097908#comment-15097908
 ] 

Ilyas Türkben commented on OAK-3846:


[~tomek.rekawek] It sounds reasonable, thank you.
I think there should be a property on the node or a pattern in the name that 
allows identifying automatically renamed nodes post-upgrade. What do you think? 
/cc [~primedo].

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   at 
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:838)
>   at 
> org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:545)
> 

[jira] [Updated] (OAK-3727) Broadcasting cache: auto-configuration

2016-01-14 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3727:

Fix Version/s: 1.3.14

> Broadcasting cache: auto-configuration
> --
>
> Key: OAK-3727
> URL: https://issues.apache.org/jira/browse/OAK-3727
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.14
>
>
> The plan is, each cluster node writes its IP address and listening port to 
> the clusterInfo collection, and (if really needed) a UUID. That way, it's 
> possible to detect other cluster nodes and connect to them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097904#comment-15097904
 ] 

Tomek Rękawek commented on OAK-3637:


[~mreutegg], thanks for the clarification. I updated the patch once more: the 
MemoryDocumentStore remains unchanged, and there's an extra fixture condition in 
the test case.

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: OAK-3637-same-document-bug.patch

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3727) Broadcasting cache: auto-configuration

2016-01-14 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097905#comment-15097905
 ] 

Thomas Mueller commented on OAK-3727:
-

http://svn.apache.org/r1724565 (trunk)

The broadcasting cache (the TCP flavor) is enabled by default. To disable it, 
use "broadcast=disabled", as in:

{noformat}
launchpad configuration (escaped):
persistentCache="crx-quickstart/repository/cache,size\=1024,binary\=0,broadcast\=disabled"
{noformat}

The configuration is stored in the "clusterNodes" collection. In case of 
MongoDB, you may see:

{noformat}
> db.clusterNodes.find().pretty()
{
"_id" : "1",...,
"broadcastListener" : "10.132.4.224:9800",
"broadcastId" : "3dcc2c42-7aad-4473-95f1-35d832208d76"
}
{
"_id" : "2", ...,
"broadcastListener" : "10.132.4.224:9801",
"broadcastId" : "eab6e092-25de-4351-9aca-b047ac62040d"
}
{noformat}

This means the listener is at IP address 10.132.4.224 and the given port. A 
new, randomly generated broadcastId is used whenever a cluster node starts.
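For anyone who wants to inspect this from outside Oak, a minimal sketch of reading the advertised listeners from the clusterNodes collection with the MongoDB 3.x Java driver (the database name "oak" and the connection details are assumptions):

{code:java}
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class BroadcastListenerDump {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost");   // same MongoDB the DocumentNodeStore uses
        try {
            MongoCollection<Document> clusterNodes =
                    client.getDatabase("oak").getCollection("clusterNodes");
            for (Document doc : clusterNodes.find()) {
                // each running cluster node advertises its TCP listener and a per-start UUID
                System.out.println(doc.get("_id") + " -> "
                        + doc.getString("broadcastListener")
                        + " (" + doc.getString("broadcastId") + ")");
            }
        } finally {
            client.close();
        }
    }
}
{code}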

> Broadcasting cache: auto-configuration
> --
>
> Key: OAK-3727
> URL: https://issues.apache.org/jira/browse/OAK-3727
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.14
>
>
> The plan is, each cluster node writes its IP address and listening port to 
> the clusterInfo collection, and (if really needed) a UUID. That way, it's 
> possible to detect other cluster nodes and connect to them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: (was: OAK-3637-same-document-bug.patch)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097897#comment-15097897
 ] 

Marcel Reutegger commented on OAK-3637:
---

The _modCount is optional and a DocumentStore implementation is not required to 
maintain it. The MongoDocumentStore and RDBDocumentStore implementations use it 
to invalidate their caches. The MemoryDocumentStore obviously does not have to 
do this.

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3846) Add parameter to skip SNS nodes

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097893#comment-15097893
 ] 

Tomek Rękawek commented on OAK-3846:


[~tuerkben], SNS nodes are generally supported by Oak, so I'm not sure whether we 
should allow skipping them during migration, especially since they are sometimes 
required for the repository to work correctly (e.g. in 
{{/jcr:system/jcr:nodeTypes}}). In this particular case the content doesn't 
match the JCR node type schema, and that's the reason for the failure, as [~kwin] 
noticed.

What I can suggest is an extra CommitHook in oak-upgrade that checks whether 
there's an SNS node under a parent that doesn't allow it - if so, the node would 
be renamed. WDYT?
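A very rough sketch of what such a hook could look like (heavily simplified: the SNS name pattern, the rename scheme and the node type check are placeholders, not a proposal for the actual patch):

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.jackrabbit.oak.api.CommitFailedException;
import org.apache.jackrabbit.oak.spi.commit.CommitHook;
import org.apache.jackrabbit.oak.spi.commit.CommitInfo;
import org.apache.jackrabbit.oak.spi.state.NodeBuilder;
import org.apache.jackrabbit.oak.spi.state.NodeState;

public class RenameSnsHook implements CommitHook {

    @Override
    public NodeState processCommit(NodeState before, NodeState after, CommitInfo info)
            throws CommitFailedException {
        NodeBuilder builder = after.builder();
        renameRecursively(builder);
        return builder.getNodeState();
    }

    private void renameRecursively(NodeBuilder parent) {
        // copy the names first, since children are modified while iterating
        List<String> names = new ArrayList<String>();
        for (String name : parent.getChildNodeNames()) {
            names.add(name);
        }
        for (String name : names) {
            String target = name;
            if (looksLikeSns(name) && !allowsSns(parent)) {
                target = name.replace('[', '_').replace("]", "");   // example rename scheme
                parent.setChildNode(target, parent.getChildNode(name).getNodeState());
                parent.getChildNode(name).remove();
            }
            renameRecursively(parent.getChildNode(target));
        }
    }

    private boolean looksLikeSns(String name) {
        return name.matches(".*\\[\\d+\\]");   // placeholder: "name[2]" style names
    }

    private boolean allowsSns(NodeBuilder parent) {
        // placeholder: a real implementation would consult the effective node type
        // (jcr:primaryType / jcr:mixinTypes) of the parent
        return false;
    }
}
{code}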

> Add parameter to skip SNS nodes
> ---
>
> Key: OAK-3846
> URL: https://issues.apache.org/jira/browse/OAK-3846
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Affects Versions: 1.3.13
>Reporter: Ilyas Türkben
>
> Add a parameter to skip migration of SNS nodes.
> Currently SNS nodes break the upgrade process with an error similar to the 
> following:
> {code:java}
> 31.08.2015 13:18:56.121 *ERROR* [FelixFrameworkWiring] 
> org.apache.sling.jcr.base.internal.loader.Loader Error loading node types 
> SLING-INF/nodetypes/folder.cnd from bundle org.apache.sling.jcr.resource: {}
> javax.jcr.nodetype.ConstraintViolationException: Failed to register node 
> types.
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:225)
>   at 
> org.apache.jackrabbit.oak.plugins.nodetype.write.ReadWriteNodeTypeManager.registerNodeTypes(ReadWriteNodeTypeManager.java:156)
>   at 
> org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:162)
>   at 
> org.apache.sling.jcr.base.NodeTypeLoader.registerNodeType(NodeTypeLoader.java:124)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerNodeTypes(Loader.java:296)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundleInternal(Loader.java:237)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.registerBundle(Loader.java:166)
>   at 
> org.apache.sling.jcr.base.internal.loader.Loader.(Loader.java:78)
>   at 
> org.apache.sling.jcr.base.NamespaceMappingSupport.setup(NamespaceMappingSupport.java:81)
>   at 
> org.apache.sling.jcr.base.AbstractSlingRepositoryManager.start(AbstractSlingRepositoryManager.java:313)
>   at 
> com.adobe.granite.repository.impl.SlingRepositoryManager.activate(SlingRepositoryManager.java:267)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)
>   at 
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)
>   at 
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:832)
>   at 
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:799)
>   at 
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:927)
>   at 
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:891)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1492)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1413)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:1222)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:1158)
>   at 
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1444)
>   

[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: (was: OAK-3637-same-document-bug.patch)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: OAK-3637-same-document-bug.patch

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3637) Bulk document updates in RDBDocumentStore

2016-01-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097885#comment-15097885
 ] 

Tomek Rękawek commented on OAK-3637:


Agreed on the complexity. However, I think the performance gain is worth it.

I updated the patch. BTW, it seems that the MemoryDocumentStore doesn't update 
_modCount at all - I fixed it in the same patch, but it can be extracted into a 
separate issue as well.

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.13, 1.2.10
>Reporter: Tomek Rękawek
>Assignee: Julian Reschke
> Fix For: 1.3.14, 1.2.11
>
> Attachments: OAK-3637-same-document-bug.patch, OAK-3637.comments.txt, 
> OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3880) Provide an option to create 2 time series for TimerStats

2016-01-14 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3880:


 Summary: Provide an option to create 2 time series for TimerStats
 Key: OAK-3880
 URL: https://issues.apache.org/jira/browse/OAK-3880
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.4


Currently only one time series is maintained for a TimerStats, which is for the 
time taken. It would be good to also maintain a time series for the count. 
This also aligns with the Timer implementation, which has both a Meter and a 
Histogram.
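For context, assuming the Timer meant here is the Dropwizard/Codahale one, a single update already feeds both views (event count/rate via the Meter, duration distribution via the Histogram); a small illustrative snippet:

{code:java}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class TimerExample {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();
        Timer timer = registry.timer("query");

        for (int i = 0; i < 5; i++) {
            Timer.Context ctx = timer.time();
            Thread.sleep(10);            // simulated work
            ctx.stop();                  // records the duration
        }

        // one update feeds two views of the data:
        System.out.println("count (Meter): " + timer.getCount());
        System.out.println("median duration, ns (Histogram): "
                + timer.getSnapshot().getMedian());
    }
}
{code}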



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2758) Performance: Consider caches for MutableTree#getTree and MemoryNodeBuilder#getChildNode

2016-01-14 Thread Joel Richard (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Richard resolved OAK-2758.
---
Resolution: Invalid

> Performance: Consider caches for MutableTree#getTree and 
> MemoryNodeBuilder#getChildNode
> ---
>
> Key: OAK-2758
> URL: https://issues.apache.org/jira/browse/OAK-2758
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.2
>Reporter: Joel Richard
>Priority: Critical
>  Labels: performance
> Attachments: OAK-2758-SecureNodeBuilder-lastChildNode-cache.patch, 
> OAK-2758-SegmentNodeState-lastChildNode-cache.patch, experimental_caches.patch
>
>
> While I was analysing Sling's rendering performance, I noticed that it would 
> help a lot to implement a cache for the ResourceResolver. For some pages it 
> almost doubled the rendering performance. This made me wonder whether 
> Oak's read performance could still be improved.
> I noticed that a lot of time is spent in MutableTree#getTree and 
> MemoryNodeBuilder#getChildNode and have implemented a specific cache for 
> them. These two caches improve the read performance up to 4 times and do not 
> break any oak-core tests.
> Here the benchmark results for ReadDeepTreeTest:
> {code}
> Fixtures: Oak-Tar
> Admin User: false
> Runtime: 5
> Num Items: 1000
> Concurrency: 1,2,4
> Random User: true
> Profiling: false
> --
> Executing benchmarks as admin: false on Oak-Tar
> ---
> # ReadDeepTreeTest  ,  C,  min,  10%,  50%,  90%,  max,    N
> Oak-Tar             ,  1,   16,   16,   17,   19,   22,  290
> Oak-Tar             ,  2,   23,   29,   44,   68,  115,  216
> Oak-Tar             ,  4,   24,   43,   97,  154,  232,  207
> {code}
> The same results with my changes:
> {code}
> # ReadDeepTreeTest  ,  C,min,10%,50%,90%,max, 
>  N
> Oak-Tar ,  1,  4,  4,  5,  5, 15, 
>   1038
> Oak-Tar ,  2, 10, 14, 16, 20, 60, 
>577
> Oak-Tar ,  4, 13, 27, 32, 40, 69, 
>605
> {code}
> I have also implemented another cache for properties, but it didn't really 
> help and broke some tests.
> The experimental patch is attached. It's not meant to be applied, but just to 
> point out areas with potential for improvement.
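The kind of cache described above can be as small as a single-entry memo around repeated child lookups. A minimal illustration under made-up names (not the attached patch):

{code:java}
import org.apache.jackrabbit.oak.spi.state.NodeState;

/**
 * Illustrative single-entry cache for repeated lookups of the same child.
 * Not thread-safe and not the attached patch - it only shows the idea.
 */
public class LastChildCache {
    private final NodeState parent;
    private String lastName;
    private NodeState lastChild;

    public LastChildCache(NodeState parent) {
        this.parent = parent;
    }

    public NodeState getChildNode(String name) {
        if (name.equals(lastName)) {
            return lastChild;            // hit: avoid another lookup in the parent
        }
        lastChild = parent.getChildNode(name);
        lastName = name;
        return lastChild;
    }
}
{code}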



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3559) Bulk document updates in MongoDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3559:
---
Attachment: OAK-3559.patch

> Bulk document updates in MongoDocumentStore
> ---
>
> Key: OAK-3559
> URL: https://issues.apache.org/jira/browse/OAK-3559
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: mongomk
>Reporter: Tomek Rękawek
>Assignee: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3559.patch
>
>
> Using the MongoDB [Bulk 
> API|https://docs.mongodb.org/manual/reference/method/Bulk/#Bulk] implement 
> the [batch version of createOrUpdate method|OAK-3662].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3559) Bulk document updates in MongoDocumentStore

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3559:
---
Attachment: (was: OAK-3559.patch)

> Bulk document updates in MongoDocumentStore
> ---
>
> Key: OAK-3559
> URL: https://issues.apache.org/jira/browse/OAK-3559
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: mongomk
>Reporter: Tomek Rękawek
>Assignee: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: OAK-3559.patch
>
>
> Using the MongoDB [Bulk 
> API|https://docs.mongodb.org/manual/reference/method/Bulk/#Bulk] implement 
> the [batch version of createOrUpdate method|OAK-3662].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3791) Time measurements for DocumentStore methods

2016-01-14 Thread Teodor Rosu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097834#comment-15097834
 ] 

Teodor Rosu commented on OAK-3791:
--

IMO timing the database-related methods would be very useful. Is a timer 
costly? I expect the DocumentStore methods that hit the database to be quite 
expensive (on the order of milliseconds or more). If needed, I could spend some 
time investigating the impact.

One idea that might apply to other cases: if a timer adds overhead, a better 
option would be to track "rates" and use random sampling with timers (e.g. time 
only 20% of the calls to a method). This can still give a good idea of what 
happens in a system. 

In the long run, having a lot of stats will add overhead. Another thing that 
might help would be to add a state to a metric (active/inactive). To be more 
exact, Oak would have (many) stats added in all essential places, and an "inactive" 
Stat would basically translate to a noop. By default all Oak stats would be 
"inactive"; unless activated (statically or dynamically: configuration, 
OSGi, system property, etc.), a metric should not add overhead. 

/cc [~chetanm] [~reschke] [~mreutegg]
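A rough sketch of the sampling idea with Dropwizard Metrics (the 20% rate, the Callable wrapper and the separate Meter for exact call rates are illustrative assumptions, not an existing Oak API):

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.Meter;
import com.codahale.metrics.Timer;

public class SampledTiming {
    private static final double SAMPLE_RATE = 0.2;   // time roughly 20% of the calls

    private final Meter rate;    // updated on every call, so rates stay exact
    private final Timer timer;   // updated only for the sampled calls

    public SampledTiming(Meter rate, Timer timer) {
        this.rate = rate;
        this.timer = timer;
    }

    public <T> T call(Callable<T> op) throws Exception {
        rate.mark();
        if (ThreadLocalRandom.current().nextDouble() >= SAMPLE_RATE) {
            return op.call();                          // untimed fast path
        }
        long start = System.nanoTime();
        try {
            return op.call();
        } finally {
            timer.update(System.nanoTime() - start, TimeUnit.NANOSECONDS);
        }
    }
}
{code}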

> Time measurements for DocumentStore methods
> ---
>
> Key: OAK-3791
> URL: https://issues.apache.org/jira/browse/OAK-3791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, documentmk
>Reporter: Teodor Rosu
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3791-RDB-0.patch, OAK-3791-v1.patch
>
>
> For monitoring ( in big latency environments ), it would be useful to measure 
> and report time for the DocumentStore methods.
> These could be exposed as (and/or):
> - as Timers generated obtained from StatisticsProvider ( statistics registry )
> - as TimeSeries 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2714) Test failures on Jenkins

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181, 399 | SEGMENT_MK, DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157, 396 | DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110, 382 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151, 490, 656 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163, 656 | SEGMENT_MK, DOCUMENT_RDB, DOCUMENT_NS | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243, 400 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths | 361 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 361, 608 | DOCUMENT_NS, SEGMENT_MK | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodesSameParent | 427, 428 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| Build crashes: malloc(): memory corruption | 477 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.oak.upgrade.cli.SegmentToJdbcTest.validateMigration | 486 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.j2ee.TomcatIT.testTomcat | 489, 493, 597, 648 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest | 490, 623, 624, 656 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.index.solr.server.EmbeddedSolrServerProviderTest.testEmbeddedSolrServerInitialization | 490, 656 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.PropertyIndexReindexingTest.propertyIndexState | 492 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.j2ee.TomcatIT | 589 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStoreRestart | 621 | DOCUMENT_NS | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest.testSynonymsFileCreation | 627 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.spi.security.authorization.cug.impl.* | 648 | SEGMENT_MK, DOCUMENT_NS | 1.8 |
| org.apache.jackrabbit.oak.remote.http.handler.RemoteServerIT | 643 | DOCUMENT_NS | 1.7, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest | 663 | SEGMENT_MK | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.blob.RDBBlobStoreTest | 673, 674 | SEGMENT_MK | 1.8 |



  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test  

[jira] [Updated] (OAK-3879) Lucene index / compatVersion 2: search for 'abc!' does not work

2016-01-14 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3879:

Fix Version/s: 1.4

> Lucene index / compatVersion 2: search for 'abc!' does not work
> ---
>
> Key: OAK-3879
> URL: https://issues.apache.org/jira/browse/OAK-3879
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Thomas Mueller
> Fix For: 1.4
>
>
> When using a Lucene fulltext index with compatVersion 2, the following 
> query does not return any results. With compatVersion 1, the correct 
> result is returned.
> {noformat}
> SELECT * FROM [nt:unstructured] AS c 
> WHERE CONTAINS(c.[jcr:description], 'abc!') 
> AND ISDESCENDANTNODE(c, '/content')
> {noformat}
> With compatVersion 1 and 2, searching for just 'abc' works. Also, searching 
> with '=' instead of 'contains' works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3879) Lucene index / compatVersion 2: search for 'abc!' does not work

2016-01-14 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-3879:
---

 Summary: Lucene index / compatVersion 2: search for 'abc!' does 
not work
 Key: OAK-3879
 URL: https://issues.apache.org/jira/browse/OAK-3879
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: Thomas Mueller


When using a Lucene fulltext index with compatVersion 2, the following 
query does not return any results. With compatVersion 1, the correct 
result is returned.

{noformat}
SELECT * FROM [nt:unstructured] AS c 
WHERE CONTAINS(c.[jcr:description], 'abc!') 
AND ISDESCENDANTNODE(c, '/content')
{noformat}

With compatVersion 1 and 2, searching for just 'abc' works. Also, searching 
with '=' instead of 'contains' works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2714) Test failures on Jenkins

2016-01-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181, 399 | SEGMENT_MK, DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157, 396 | DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110, 382 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151, 490, 656 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163, 656 | SEGMENT_MK, DOCUMENT_RDB, DOCUMENT_NS | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243, 400 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths | 361 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 361, 608 | DOCUMENT_NS, SEGMENT_MK | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodesSameParent | 427, 428 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| Build crashes: malloc(): memory corruption | 477 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.oak.upgrade.cli.SegmentToJdbcTest.validateMigration | 486 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.j2ee.TomcatIT.testTomcat | 489, 493, 597, 648 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest | 490, 623, 624, 656 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.index.solr.server.EmbeddedSolrServerProviderTest.testEmbeddedSolrServerInitialization | 490, 656 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.PropertyIndexReindexingTest.propertyIndexState | 492 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.j2ee.TomcatIT | 589 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStoreRestart | 621 | DOCUMENT_NS | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest.testSynonymsFileCreation | 627 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.spi.security.authorization.cug.impl.* | 648 | SEGMENT_MK, DOCUMENT_NS | 1.8 |
| org.apache.jackrabbit.oak.remote.http.handler.RemoteServerIT | 643 | DOCUMENT_NS | 1.7, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest | 663 | SEGMENT_MK | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.blob.RDBBlobStoreTest | 673 | SEGMENT_MK | 1.8 |



  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test