Re: [VOTE] Release Apache Jackrabbit Filevault 3.2.0

2018-08-09 Thread Stefan Guggisberg
successfully verified hashes and signature
successfully built from source
successfully ran tests
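the hash check above can be sketched as follows — a hypothetical, stdlib-only illustration (class name, input bytes and expected digest are placeholders), not the project's actual check-release.sh script:

```java
import java.security.MessageDigest;

// Hypothetical sketch of the "verified hashes" step: recompute the
// SHA-1 digest of the downloaded archive bytes and compare it to the
// digest published in the vote mail. The input bytes and expected
// value below are placeholders, not a real release artifact.
public class ChecksumCheck {

    static String sha1Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] archive = "placeholder archive contents".getBytes("UTF-8");
        String expected = sha1Hex(archive);  // in reality: taken from the vote mail
        System.out.println(expected.equals(sha1Hex(archive)) ? "checksum OK" : "checksum MISMATCH");
    }
}
```

signature verification (gpg --verify against the project KEYS file) and the build/test run are separate steps, not shown here.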

[X] +1 Release this package as Apache Jackrabbit Filevault 3.2.0

cheers
stefan
On Mon, Aug 6, 2018 at 9:29 AM Tobias Bocanegra  wrote:
>
> A candidate for the Jackrabbit Filevault 3.2.0 release is available at:
>
> https://dist.apache.org/repos/dist/dev/jackrabbit/filevault/3.2.0/
>
> The release candidate is a zip archive of the sources in:
>
> https://svn.apache.org/repos/asf/jackrabbit/commons/filevault/tags/jackrabbit-filevault-3.2.0/
>
> The SHA1 checksum of the archive is 354eb978563fa42db03cd15326a62135a1f28790.
>
> The command for running automated checks against this release candidate is:
> $ sh check-release.sh filevault 3.2.0 354eb978563fa42db03cd15326a62135a1f28790
>
> A staged Maven repository is available for review at:
>
> https://repository.apache.org/content/repositories/orgapachejackrabbit-1362/
>
> Please vote on releasing this package as Apache Jackrabbit Filevault 3.2.0.
> The vote is open for a minimum of 72 hours during business days and passes
> if a majority of at least three +1 Jackrabbit PMC votes are cast.
> The vote fails if not enough votes are cast after 1 week (5 business days).
>
> [ ] +1 Release this package as Apache Jackrabbit Filevault 3.2.0
> [ ] -1 Do not release this package because...
>
>
> Regards, Toby


Re: [VOTE] Release Apache Jackrabbit Filevault Package Maven Plugin 1.0.1

2017-12-20 Thread Stefan Guggisberg
all checks ok

+1 Release this package as Apache Jackrabbit Filevault Package Maven
Plugin 1.0.1

cheers
stefan

On Thu, Dec 14, 2017 at 5:07 PM, Tobias Bocanegra  wrote:
> Hi alter ego,
>
> [INFO] Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
> 2017-04-03T21:39:06+02:00)
> [INFO] OS name: "mac os x", version: "10.13.1", arch: "x86_64", family: "mac"
> [INFO] Java version: 1.8.0_144, vendor: Oracle Corporation
> [INFO] ------------------------------------------------------------------------
> [INFO] ALL CHECKS OK
>
> here's my +1
>
>> [X] +1 Release this package as Apache Jackrabbit Filevault Package Maven 
>> Plugin 1.0.1
>> [ ] -1 Do not release this package because...
>
> Regards, Toby
>


Re: [VOTE] Release Apache Jackrabbit Filevault 3.1.34

2017-01-19 Thread Stefan Guggisberg
On Thu, Jan 19, 2017 at 10:07 AM, Marcel Reutegger  wrote:
> On 16/01/17 08:36, Tobias Bocanegra wrote:
>>
>> A candidate for the Jackrabbit Filevault 3.1.34 release is available at:
>>
>> https://dist.apache.org/repos/dist/dev/jackrabbit/filevault/3.1.34/
>
>
> I have reproducible test failures in TestACLAndMerge [0] when running 'mvn
> clean install'. The pasted output was generated with 'mvn
> -Dtest=TestACLAndMerge clean test' within vault-core.
>
> My environment is Mac OS X, Oracle Java 1.8, Maven 3.3.9.

i get the same test failures (a newly set up machine with os x sierra,
java 1.8, maven 3.3.9).

cheers
stefan

>
> Regards
>  Marcel
>
> [0] https://paste.apache.org/9hor


Re: [VOTE] Release Apache Jackrabbit Filevault 3.1.28

2016-08-22 Thread Stefan Guggisberg
> Please vote on releasing this package as Apache Jackrabbit Filevault 3.1.28.
> The vote is open for the next 72 hours and passes if a majority of at
> least three +1 Jackrabbit PMC votes are cast.
>
> [ ] +1 Release this package as Apache Jackrabbit Filevault 3.1.28
> [ ] -1 Do not release this package because...

verified signature,
checked sha1 digest,
built from source and successfully ran all tests

+1 Release this package as Apache Jackrabbit Filevault 3.1.28

cheers
stefan


Re: [VOTE] Release Apache Jackrabbit Filevault 3.1.20

2015-05-27 Thread Stefan Guggisberg
- verified PGP signature and sha1 checksum
- ran 'mvn install' without errors

[x] +1 Release this package as Apache Jackrabbit Filevault 3.1.20

cheers
stefan

On Wed, May 27, 2015 at 7:49 AM, Tobias Bocanegra tri...@apache.org wrote:
 A candidate for the Jackrabbit Filevault 3.1.20 release is available at:

 https://dist.apache.org/repos/dist/dev/jackrabbit/filevault/3.1.20/

 The release candidate is a zip archive of the sources in:

 https://svn.apache.org/repos/asf/jackrabbit/commons/filevault/tags/jackrabbit-filevault-3.1.20/

 The SHA1 checksum of the archive is 3766b90112f533517aea2956e13a4ddffe4eb50b.

 A staged Maven repository is available for review at:

 https://repository.apache.org/content/repositories/orgapachejackrabbit-1067/

 Please vote on releasing this package as Apache Jackrabbit Filevault 3.1.20.
 The vote is open for the next 72 hours and passes if a majority of at
 least three +1 Jackrabbit PMC votes are cast.

 [ ] +1 Release this package as Apache Jackrabbit Filevault 3.1.20
 [ ] -1 Do not release this package because...


Re: Initial work for the specification of a remote API

2015-01-27 Thread Stefan Guggisberg
hi francesco,

On Fri, Jan 23, 2015 at 6:09 PM, Francesco Mari
mari.france...@gmail.com wrote:
 Hi all,

 since I don't have access to the wiki, I started to write down a draft
 for the remote API in a public GitHub repository [1].

 I didn't write much so far, but I invite every interested party to
 take a look at it for suggestions and improvements.

 Thanks,

 Francesco

 [1]: https://github.com/francescomari/oak-remote

To access the Oak Remote Interface a client needs to have the
credentials of the master user.

are you saying that a client needs master user credentials to access a
remote oak instance? that would IMO be wrong.

cheers
stefan


Re: Initial work for the specification of a remote API

2015-01-27 Thread Stefan Guggisberg
On Tue, Jan 27, 2015 at 2:23 PM, Francesco Mari
mari.france...@gmail.com wrote:
 My personal point of view is that the remote API should make Oak
 interoperable between different software stacks. Some of these
 software stacks may want to use their own authentication strategies
 which may (or may not) be the authentication strategies provided by
 Oak.

 In a typical web application, for example, every time a user U wants to
 do something, the following happens:

 - The user U sends a request to the web application and provides his
 personal credentials.

 - The web application connects to a database. This connection usually
 uses a set of credentials D that are necessary to establish a
 conversation with the database.

 - The web application extracts the credentials for U from the request
 and authenticates them. Information about U are usually saved in the
 database.

 - The web application reads and writes data on behalf of U, on top of
 a database connection that was previously authenticated with the
 credentials D.

 One of the goals of the remote interface, in my opinion, should be to
 enable this scenario. It should be possible to replace the word
 database with Oak in every point in the list above. If an
 application uses Oak as a storage system, that application should not
 be limited to the authentication strategies supported by Oak.

IIUC we're talking here about the remoting layer of the repository
(oak), not about hypothetical applications.

an oak client should IMO be able to use the same credentials
regardless of the deployment (local vs remote), i.e. it should be
transparent. requiring a remote client to provide oak 'admin'
credentials just feels totally wrong.

cheers
stefan


 On the other end, we still want to leverage the authorization
 mechanism built into Oak. The way authorization works in Oak is
 strongly decoupled from authentication. As long as a user exists,
 regardless of how it was authenticated, authorization will still work.
 This is one of the strongest point of authorization in Oak, in my
 opinion.

 2015-01-27 12:11 GMT+01:00 Stefan Guggisberg stefan.guggisb...@gmail.com:
 On Tue, Jan 27, 2015 at 11:31 AM, Francesco Mari
 mari.france...@gmail.com wrote:
 Should the API expose every authentication strategy implemented in
 Oak?

 which strategy should be supported by the remoting layer is something
 that needs to be decided.
 i'd say we start with the simple case (password credentials).

 cheers
 stefan

 How should this API look?

 2015-01-27 10:39 GMT+01:00 Stefan Guggisberg stefan.guggisb...@gmail.com:
 On Tue, Jan 27, 2015 at 10:16 AM, Francesco Mari
 mari.france...@gmail.com wrote:
 Which other alternatives do we have regarding authentication?

 remote and local users should IMO use the same credentials. it should
 be transparent to the user whether he's accessing a local or a remote
 repo.

 cheers
 stefan


 2015-01-27 9:16 GMT+01:00 Stefan Guggisberg stefan.guggisb...@gmail.com:
 hi francesco,

 On Fri, Jan 23, 2015 at 6:09 PM, Francesco Mari
 mari.france...@gmail.com wrote:
 Hi all,

 since I don't have access to the wiki, I started to write down a draft
 for the remote API in a public GitHub repository [1].

 I didn't write much so far, but I invite every interested party to
 take a look at it for suggestions and improvements.

 Thanks,

 Francesco

 [1]: https://github.com/francescomari/oak-remote

 To access the Oak Remote Interface a client needs to have the
 credentials of the master user.

 are you saying that a client needs master user credentials to access a
 remote oak instance? that would IMO be wrong.

 cheers
 stefan


Re: [VOTE] Release Apache Jackrabbit Filevault 3.1.14

2014-12-19 Thread Stefan Guggisberg
+1

btw: there seems to be a bug in AdminPermissionCheckerTest. this test
fails when running the tests twice, e.g.

 mvn clean install
 mvn test

see attached stack trace.

cheers
stefan

On Tue, Dec 16, 2014 at 6:58 AM, Tobias Bocanegra tri...@apache.org wrote:
 A candidate for the Jackrabbit Filevault 3.1.14 release is available at:

 https://dist.apache.org/repos/dist/dev/jackrabbit/filevault/3.1.14/

 The release candidate is a zip archive of the sources in:

 https://svn.apache.org/repos/asf/jackrabbit/commons/filevault/tags/jackrabbit-filevault-3.1.14/

 The SHA1 checksum of the archive is 83afe4fa1c131c1c16f7286b5aa8896c82695989.

 A staged Maven repository is available for review at:

 https://repository.apache.org/content/repositories/orgapachejackrabbit-1043/

 Please vote on releasing this package as Apache Jackrabbit Filevault 3.1.14.
 The vote is open for the next 72 hours and passes if a majority of at
 least three +1 Jackrabbit PMC votes are cast.

 [ ] +1 Release this package as Apache Jackrabbit Filevault 3.1.14
 [ ] -1 Do not release this package because...

 Thanks.
 Regards, Toby
-------------------------------------------------------------------------------
Test set: org.apache.jackrabbit.vault.packaging.impl.AdminPermissionCheckerTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.204 sec <<< FAILURE!
testNotAdminUser(org.apache.jackrabbit.vault.packaging.impl.AdminPermissionCheckerTest)  Time elapsed: 0.052 sec <<< FAILURE!
java.lang.AssertionError: testUser is not admin/system and doesn't belong to 
administrators thus shouldn't have admin permissions
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.jackrabbit.vault.packaging.impl.AdminPermissionCheckerTest.testNotAdminUser(AdminPermissionCheckerTest.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:338)
at 
org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:997)



Re: blobs are not being retained when using MicroKernel interface, adds str: prefix to blobId property value

2014-09-22 Thread Stefan Guggisberg
hi adrien,

On Mon, Sep 22, 2014 at 5:35 PM, Adrien Lamoureux
lamoureux.adr...@gmail.com wrote:
 Hi,

 No one has responded to the issues I'm having with the MicroKernel.

sorry, missed that one.

the problem you're having ('str:' being prepended to ':blobid:...')
seems to be caused by a bug
in o.a.j.oak.kernel.NodeStoreKernel.


 Is the correct location to ask these questions? I tried finding a solution
 to this issue in your documentation and found none.

you could file a jira issue. however, i am not sure NodeStoreKernel is
being actively maintained.

cheers
stefan


 Thanks,

 Adrien

 On Tue, Sep 16, 2014 at 1:51 PM, Adrien Lamoureux 
 lamoureux.adr...@gmail.com wrote:

 Hello,

 I've been testing Oak 1.0.5, and changed Main.java under oak-run to enable
 a MicroKernel to run at startup with the standalone service at the bottom
 of the addServlets() method:

 private void addServlets(Oak oak, String path) {

 Jcr jcr = new Jcr(oak);

 // 1 - OakServer

 ContentRepository repository = oak.createContentRepository();

 .

 org.apache.jackrabbit.oak.core.ContentRepositoryImpl repoImpl =
 (org.apache.jackrabbit.oak.core.ContentRepositoryImpl)repository;

 org.apache.jackrabbit.oak.kernel.NodeStoreKernel nodeStoreK = new
 org.apache.jackrabbit.oak.kernel.NodeStoreKernel(repoImpl.getNodeStore());

 org.apache.jackrabbit.mk.server.Server mkserver = new
 org.apache.jackrabbit.mk.server.Server(nodeStoreK);

 mkserver.setPort(28080);

 mkserver.setBindAddress(java.net.InetAddress.getByName("localhost"));

 mkserver.start();
 }

 I then used an org.apache.jackrabbit.mk.client.Client to connect to it,
 and everything seemed to work fine, including writing / reading blobs,
 however, the blobs are not being retained, and it appears to be impossible
 to set a :blobId: prefix for a property value without it forcing an
 additional 'str:' prefix.

 Here are a couple of examples using curl to create a node with a single
 property to hold the blobId. The first uses the proper :blobId: prefix,
 the other doesn't:

 curl -X POST --data 'path=/&message=' --data-urlencode
 'json_diff=+"testFile1.jpg" :
 {"testFileRef":":blobId:93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33"}'
 http://localhost:28080/commit.html

 RETURNED:

 curl -X POST --data
 'path=/testFile1.jpg&depth=2&offset=0&count=-1&filter={"nodes":["*"],"properties":["*"]}'
 http://localhost:28080/getNodes.html

 {
   "testFileRef": "str::blobId:93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33",
   ":childNodeCount": 0
 }

 I then tried without the blobId prefix, and it did not add a prefix:

 curl -X POST --data 'path=/&message=' --data-urlencode
 'json_diff=+"testFile2.jpg" :
 {"testFileRef":"93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33"}'
 http://localhost:28080/commit.html

 RETURNED:

 curl -X POST --data
 'path=/testFile2.jpg&depth=2&offset=0&count=-1&filter={"nodes":["*"],"properties":["*"]}'
 http://localhost:28080/getNodes.html

 {
   "testFileRef": "93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33",
   ":childNodeCount": 0
 }

 The blob itself was later removed/deleted, presumably by some sort of
 cleanup mechanism. I'm assuming that it couldn't find the reference to the
 blob.
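 for reference, the json_diff payload in the curl calls above can be built and
 percent-encoded like this — a stdlib-only sketch using the example node name
 and blob id from this mail (curl's --data-urlencode performs the same
 encoding step):

```java
import java.net.URLEncoder;

// Sketch of building the commit body POSTed to the MicroKernel
// server's /commit.html endpoint in the curl examples above.
// The node name and blob id are the example values from this mail.
public class JsonDiffEncode {
    public static void main(String[] args) throws Exception {
        String blobId = ":blobId:93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33";
        String jsonDiff = "+\"testFile1.jpg\" : {\"testFileRef\":\"" + blobId + "\"}";
        // curl's --data-urlencode does this percent-encoding for you.
        String body = "path=/&message=&json_diff=" + URLEncoder.encode(jsonDiff, "UTF-8");
        System.out.println(body);
    }
}
```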

 For sanity check, I tried saving a different one line text file at the
 Java Content Repository level of abstraction, and this is the result:

 curl -X POST --data
 'path=/testFile&depth=2&offset=0&count=-2&filter={"nodes":["*"],"properties":["*"]}'
 http://localhost:28080/getNodes.html

 {
   "jcr:created": "dat:2014-09-16T13:41:38.084-07:00",
   "jcr:createdBy": "admin",
   "jcr:primaryType": "nam:nt:file",
   ":childNodeCount": 1,
   "jcr:content": {
     ":childOrder": "[0]:Name",
     "jcr:encoding": "UTF-8",
     "jcr:lastModified": "dat:2014-09-16T13:41:38.094-07:00",
     "jcr:mimeType": "text/plain",
     "jcr:data": ":blobId:428ed7545cd993bf6add8cd74cd6ad70f517341bbc1b31615f9286c652cd214a",
     "jcr:primaryType": "nam:nt:unstructured",
     ":childNodeCount": 0
   }
 }

 The :blobId: prefix appears intact in this case.

 Any help would be greatly appreciated, as I would like to start using the
 MicroKernel for remote access, and file retention is critical.

 Thanks,

 Adrien






Re: Some more benchmarks

2014-07-02 Thread Stefan Guggisberg
On Tue, Jul 1, 2014 at 8:37 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Tue, Jul 1, 2014 at 9:38 AM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 I also tried including MongoMK results, but the benchmark got stuck in
 ConcurrentReadTest. I'll re-try today and will file a bug if I can
 reproduce the problem.

 I guess it was a transient problem. Here are the results with
 Oak-Mongo included:

 Summary (90%, lower is better)

 Benchmark                Jackrabbit  Oak-Mongo  Oak-Tar
 -------------------------------------------------------
 ReadPropertyTest                 45                  44
 SetPropertyTest                1179       2398      119
 SmallFileReadTest                47                  97
 SmallFileWriteTest              182        530       43
 ConcurrentReadTest             1201       1247      710
 ConcurrentReadWriteTest        1900       2321      775
 ConcurrentWriteReadTest        1009        354      108
 ConcurrentWriteTest             532        553      101

wow, very impressive, congrats!

cheers
stefan


 I updated the gist at
 https://gist.github.com/jukka/078bd524aa0ba36b184b with full details.

 The general message here is to use TarMK for maximum single-node
 performance and MongoMK for scalability and throughput across multiple
 cluster nodes.

 BR,

 Jukka Zitting


[jira] [Updated] (JCR-3239) Removal of a top level node doesn't update the hierarchy manager's cache.

2014-05-21 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3239:
---

Priority: Major  (was: Minor)

 Removal of a top level node doesn't update the hierarchy manager's cache.
 -

 Key: JCR-3239
 URL: https://issues.apache.org/jira/browse/JCR-3239
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.11, 2.4
Reporter: Philipp Bärfuss

 *Problem*
 Scenario in which I encounter the problem:
 - given a node 'test' under root (/test)
 - re-imported the node after a deletion (all in-session operations) 
 {code}
 session.removeItem(/test)
 session.getImportXML(/, stream, 
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW)
 {code}
 Result: throws an ItemExistsException
 If the same operations are executed deeper in the hierarchy (for instance 
 /foo/bar) then the code works perfectly fine.
 *Findings*
 - session.removeItem informs the hierarchy manager (via listener)
 -- CachingHierarchyManager.nodeRemoved(NodeState, Name, int, NodeId)
 - but the root node (passed as state) is not in the cache and hence the entry 
 of the top level node is not removed
 -- CachingHierarchyManager: 458 
 - while trying to import the method SessionImporter.startNode(NodeInfo, 
 ListPropInfo) calls session.getHierarchyManager().getName(id) (line 400)
 - the stale information causes a uuid collision (the code expects an 
 exception if the node doesn't exist but in this case it returns the name of 
 the formerly removed node)
 Note: session.itemExists() and session.getNode() work as expected (the former 
 returns false, the latter throws an ItemNotFoundException)
 Note: I know that a different import behavior (replace existing) would solve 
 the issue but I can't be 100% sure that the UUID match so I favor collision 
 throw in my case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3770:
---

Fix Version/s: 2.7.6

 refine validateHierarchy check in order to avoid false-positives
 

 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 2.7.6


 if a node is deleted and re-added with the same nodeId (within the same 
 changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
 {code}
 Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of 
 node with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created JCR-3770:
--

 Summary: refine validateHierarchy check in order to avoid 
false-positives
 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg


if a node is deleted and re-added with the same nodeId (within the same 
changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
{code}
Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of node 
with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
at 
org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
at 
org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
...
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13969585#comment-13969585
 ] 

Stefan Guggisberg commented on JCR-3770:


JCR-2598 introduced the optional validateHierarchy check

 refine validateHierarchy check in order to avoid false-positives
 

 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 if a node is deleted and re-added with the same nodeId (within the same 
 changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
 {code}
 Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of 
 node with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3770.


Resolution: Fixed

fixed in svn revision 1587619

 refine validateHierarchy check in order to avoid false-positives
 

 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 2.7.6


 if a node is deleted and re-added with the same nodeId (within the same 
 changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
 {code}
 Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of 
 node with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [VOTE] Release Apache Jackrabbit Filevault 3.1.0

2014-03-31 Thread Stefan Guggisberg
+1 Release this package as Apache Jackrabbit Filevault 3.1.0

cheers
stefan

On Thu, Mar 27, 2014 at 11:52 PM, Tobias Bocanegra tri...@apache.org wrote:
 A candidate for the Jackrabbit Filevault 3.1.0 release is available at:

 https://dist.apache.org/repos/dist/dev/jackrabbit/filevault/3.1.0/

 The release candidate is a zip archive of the sources in:

 https://svn.apache.org/repos/asf/jackrabbit/commons/filevault/tags/3.1.0/

 The SHA1 checksum of the archive is 764061b6edd33a0239a416a1c4992700f9cf92a0.

 A staged Maven repository is available for review at:

 https://repository.apache.org/content/repositories/orgapachejackrabbit-1008/

 Please vote on releasing this package as Apache Jackrabbit Filevault 3.1.0.
 The vote is open for the next 72 hours and passes if a majority of at
 least three +1 Jackrabbit PMC votes are cast.

 [ ] +1 Release this package as Apache Jackrabbit Filevault 3.1.0
 [ ] -1 Do not release this package because...


Re: oak-mk-perf?

2014-03-31 Thread Stefan Guggisberg
On Fri, Mar 28, 2014 at 8:39 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 Do we still need the oak-mk-perf component for something? We added it
 in OAK-335, but have since implemented a much more comprehensive set
 of benchmarks in oak-run.

 It doesn't seem as if the code is still being used, as the component
 was still referencing the ancient Oak 0.9-SNAPSHOT. Unless someone
 says otherwise, I'm inclined to drop the component.

0

cheers
stefan


 BR,

 Jukka Zitting


Re: svn commit: r1544078 - in /jackrabbit/oak/trunk/oak-mk/src: main/java/org/apache/jackrabbit/mk/model/StagedNodeTree.java test/java/org/apache/jackrabbit/mk/MicroKernelImplTest.java

2013-11-21 Thread Stefan Guggisberg
On Thu, Nov 21, 2013 at 3:57 PM, julian.resc...@gmx.de
julian.resc...@gmx.de wrote:
 On 2013-11-21 10:55, ste...@apache.org wrote:
 ...
 Modified: 
 jackrabbit/oak/trunk/oak-mk/src/test/java/org/apache/jackrabbit/mk/MicroKernelImplTest.java
 URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-mk/src/test/java/org/apache/jackrabbit/mk/MicroKernelImplTest.java?rev=1544078&r1=1544077&r2=1544078&view=diff
 ==============================================================================
 --- 
 jackrabbit/oak/trunk/oak-mk/src/test/java/org/apache/jackrabbit/mk/MicroKernelImplTest.java
  (original)
 +++ 
 jackrabbit/oak/trunk/oak-mk/src/test/java/org/apache/jackrabbit/mk/MicroKernelImplTest.java
  Thu Nov 21 09:55:24 2013
 @@ -415,7 +415,6 @@ public class MicroKernelImplTest {
  rev, mk.commit("/", "", rev, null));
   }

 -    @Ignore("OAK-552")  // FIXME OAK-552
   @Test(expected = MicroKernelException.class)
   public void foo() {
    mk.commit("", "+\"/x\":{}", null, null);

 ...this breaks the build for me.

any details?



Re: writing a new MK, guidelines

2013-11-06 Thread Stefan Guggisberg
hi tommaso

the javadoc of org.apache.jackrabbit.mk.api.MicroKernel specifies the
API contract.

org.apache.jackrabbit.mk.test.MicroKernelIT is the integration test
which verifies
whether an implementation obeys the contract.

org.apache.jackrabbit.mk.core.MicroKernelImpl serves as a reference
implementation
of the MicroKernel API. it uses a GIT-inspired data/versioning model.

more pointers:
- wiki page [1]
- versioning model as implemented by the reference implementation [2]

cheers
stefan

[1] http://wiki.apache.org/jackrabbit/RepositoryMicroKernel
[2] http://wiki.apache.org/jackrabbit/RepositoryMicroKernel?action=AttachFile&do=view&target=MicroKernel+Revision+Model.pdf
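the GIT-inspired model mentioned above stores immutable node records
addressed by a hash of their content; a toy, stdlib-only illustration
(not the actual oak-mk code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Toy illustration of content addressing as used by a GIT-inspired
// versioning model: a node record's id is the SHA-1 hash of its
// serialized content, so equal content always yields the same id and
// unchanged subtrees can be shared between revisions.
public class ContentAddressedNode {

    static String id(String serializedNode) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest(serializedNode.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String node = "{\"jcr:primaryType\":\"nt:unstructured\"}";
        // same content, same id: storing it a second time is a no-op
        System.out.println(id(node).equals(id(node)));  // prints "true"
    }
}
```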



On Wed, Nov 6, 2013 at 11:31 AM, Tommaso Teofili
tommaso.teof...@gmail.com wrote:
 Hi all,

 out of curiosity, I was wondering if there's any guideline other than
 following the API on writing a new MK implementation.
 What one should start with, design concepts, experience on known dos,
 don'ts, caveats, etc.

 In the past I wanted to play with it a bit but never had time to do
 anything there, however that may be useful information (if anything like
 that exists) to share.

 Regards,
 Tommaso


Re: Clarifying: Support for properties and child nodes with the same name

2013-10-31 Thread Stefan Guggisberg
On Wed, Oct 30, 2013 at 7:25 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Wed, Oct 30, 2013 at 6:43 AM, Stefan Guggisberg
 stefan.guggisb...@gmail.com wrote:
 SNNP was introduced with JSR 283, mainly due to pressure from vendors with
 legacy systems. we (day) were opposed since SNNP renders JCR paths
 ambiguous (IMO a serious problem). BTW SNNP are an optional repository
 feature [1].

 we shouldn't make the same mistake twice. and as for the aforementioned
 import xml use case: remember that the import xml feature brought us
 JCR namespaces and SNS, both widely considered major flaws in the JCR API?

 Right, but it doesn't necessarily follow that SNNP is also a major flaw.

SNNP renders JCR paths ambiguous, which IMO *is* a major flaw in the JCR spec.


 In fact most content structures and access patterns I've seen make a
 big distinction between properties and child nodes, so it makes sense
 to treat them separately on the storage layer (as discussed, all our
 backends do that). From that perspective having a shared namespace for
 properties and child nodes goes against the grain, as it forces a
 backend to either use a data structure that's not optimal for common
 access patterns or to do extra work to prevent separate data
 structures from overlapping.

i OTOH have seen a lot of JSON recently ;). SNNP breaks
the intuitive 1:1 JSON representation of repository content.
that's what bothers me most.
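the ambiguity can be seen with a plain map, which is how a JSON object
behaves — a toy illustration (hypothetical names, not JCR API code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of why SNNP breaks the 1:1 JSON mapping: in a JSON
// object (modelled here as a Map) properties and child nodes share a
// single key space, so a same-named property and child node cannot
// both be represented -- the second put replaces the first.
public class SnnpCollision {
    public static void main(String[] args) {
        Map<String, Object> node = new LinkedHashMap<>();
        node.put("foo", "i am a property");                    // property "foo"
        node.put("foo", new LinkedHashMap<String, Object>());  // child node "foo"
        System.out.println(node.size());  // prints "1": one of the two is lost
    }
}
```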

cheers
stefan


 BR,

 Jukka Zitting


Re: Clarifying: Support for properties and child nodes with the same name

2013-10-30 Thread Stefan Guggisberg
On Tue, Oct 29, 2013 at 1:46 AM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Mon, Oct 28, 2013 at 6:54 PM, Tobias Bocanegra tri...@apache.org wrote:
 I would rather keep the support for same name properties and nodes in
 order to ensure backward compatibility for existing repositories.

 I tend to agree. There don't seem to be too many benefits to keeping
 the current design, and the backwards compatibility aspect might well
 be a real problem for existing deployments.

 Are there any other areas where the item name ambiguity is not
 properly handled, because we assume no same name property and node
 names? e.g. in permission evaluation, search, etc?

 I think the most prominent case is the MicroKernel JSON model. But it
 should be quite straightforward to adjust the JSON format, for example
 by putting all child nodes within an extra :children object.

the JSON representation of the repository as exposed by the MicroKernel API
naturally mirrors the content structure and is IMO intuitive. i don't
think we should
introduce artificial intermediary objects like :children since it
breaks the 1:1
mapping of repository content and leads to awkward client code.

WRT same named node and property (SNNP):

SNNP was introduced with JSR 283, mainly due to pressure from vendors with
legacy systems. we (day) were opposed since SNNP renders JCR paths
ambiguous (IMO a serious problem). BTW SNNP are an optional repository
feature [1].

we shouldn't make the same mistake twice. and as for the aforementioned
import xml use case: remember that the import xml feature brought us
JCR namespaces and SNS, both widely considered major flaws in the JCR API?

cheers
stefan

[1] 
http://www.day.com/specs/jcr/2.0/5_Reading.html#5.1.8%20Node%20and%20Property%20with%20Same%20Name


 There are some other cases, like the simple URL and JSON mappings in
 the oak-http draft, that would need to be adjusted, but I don't think
 any of these would require too much effort.

 AFAICT the core functionality in oak-core and oak-jcr is mostly
 unaffected by this, given the structure of the Tree and NodeState APIs
 that (reflecting the JCR API) make a clear distinction between
 properties and child nodes.

 BR,

 Jukka Zitting


Re: Clarifying: Support for properties and child nodes with the same name

2013-10-29 Thread Stefan Guggisberg
On Mon, Oct 28, 2013 at 10:48 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Mon, Oct 28, 2013 at 3:12 PM, Tobias Bocanegra tri...@apache.org wrote:
 I don't know anymore where I heard/read this, but I had the
 assumption, that oak will no longer support properties and child nodes
 with the same name.

 Yes, you're correct. However, we don't really actively enforce this
 limitation and the data structures used by our all our backends
 actually do make it possible for a node and a property to have the
 same name, so it might well be that it's in practice possible to
 create such content structures. The original idea as implemented by
 the older H2 MK was to use just a single JSON-like map for both nodes
 and properties,

FWIW: the so called H2 MK uses separate structures for properties
and child node entries.

the mk exposes a JSON-like data model, hence the shared
'namespace' for properties and child nodes. see [1].

cheers
stefan

[1] http://wiki.apache.org/jackrabbit/RepositoryMicroKernel#Data%20Model

 but that approach is troublesome given the need to
 support flat hierarchies.

 I think we should fix that in one way or another: either explicitly
 check for such name collisions at the storage level or alternatively
 revert the earlier decision and allow a property and a child node to
 have the same name.

 If this is no longer supported, we should add it to the list of
 changes [0] and add it as issue to [1].

 Right.

 If the support varies by persistence implementation, we need to
 propagate this information to the repository descriptor accordingly.

 It's probably cleaner if the behavior is the same regardless of the
 storage backend.

 BR,

 Jukka Zitting


Re: JCR-3581: Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg
hi ate,

thanks, i'll have a look.

cheers
stefan


On Fri, May 3, 2013 at 10:25 AM, Ate Douma a...@douma.nu wrote:

 I didn't see any feedback yet on this [1].

 It's probably not a very critical bug, but still serious enough IMO
 to consider fixing.

 Anyone interested to at least take a look at it?

 Regards, Ate

 [1] https://issues.apache.org/jira/browse/JCR-3581



[jira] [Assigned] (JCR-3581) Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned JCR-3581:
--

Assignee: Stefan Guggisberg

 Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo 
 implementation - wrong bit mask value used  
 ---

 Key: JCR-3581
 URL: https://issues.apache.org/jira/browse/JCR-3581
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jcr2spi
Affects Versions: 2.6, 2.7
Reporter: Ate Douma
Assignee: Stefan Guggisberg
 Attachments: JCR-3581.patch


 The BitsetKey class encodes Names bitwise in one or more long values.
 For its Comparable.compareTo implementation, the long value(s) are compared 
 first by >>> 32 shifting to compare the higher bits, and if that equals out 
 by comparing the lower 32 bits.
 The bug is in the latter part: instead of masking off the higher 32 bits 
 using & 0x0ffffffffL, the current code is masking the higher 48 bits using 
 & 0x0ffffL, as shown in the snippet below from the current compareTo 
 implementation:
 long w1 = adr < bits.length ? bits[adr] : 0;
 long w2 = adr < o.bits.length ? o.bits[adr] : 0;
 if (w1 != w2) {
 // some signed arithmetic here
 long h1 = w1 >>> 32;
 long h2 = w2 >>> 32;
 if (h1 == h2) {
 h1 = w1 & 0x0ffffL;
 h2 = w2 & 0x0ffffL;
 }
 return Long.signum(h2 - h1);
 }
 As a result of this error many possible keys cannot and will not be stored in 
 the BitsetENTCacheImpl private TreeSet<Key> sortedKeys: keys whose long 
 value(s) differ only in the erroneously masked-off bits compare as equal, so 
 only one of them (for *each* long field) will be stored.
 Note that such 'duplicate' keys however will be used and stored in the other 
 BitsetENTCacheImpl private HashMap<Key, EffectiveNodeType> aggregates.
 As far as I can tell this doesn't really 'break' the functionality, but can 
 lead to many redundant and unnecessary (re)creation of keys and entries in 
 the aggregates map.
 The fix of course is easy but I will also provide a patch file with fixes for 
 the two (largely duplicate?) BitsetENTCacheImpl implementations 
 (jackrabbit-core and jackrabbit-jcr2spi).
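
 [editorial note] The effect of the wrong mask can be demonstrated in isolation.
 This is a minimal sketch of the comparison logic only, not the actual BitsetKey
 code; the 0x0ffffL vs 0x0ffffffffL mask values are assumed from the
 higher-48-bit vs higher-32-bit masking described in the issue.

```java
public class MaskBugDemo {

    // buggy variant: & 0x0ffffL keeps only the low 16 bits
    static int compareBuggy(long w1, long w2) {
        long h1 = w1 >>> 32, h2 = w2 >>> 32;
        if (h1 == h2) {
            h1 = w1 & 0x0ffffL;
            h2 = w2 & 0x0ffffL;
        }
        return Long.signum(h2 - h1);
    }

    // fixed variant: & 0x0ffffffffL keeps the full low 32 bits
    static int compareFixed(long w1, long w2) {
        long h1 = w1 >>> 32, h2 = w2 >>> 32;
        if (h1 == h2) {
            h1 = w1 & 0x0ffffffffL;
            h2 = w2 & 0x0ffffffffL;
        }
        return Long.signum(h2 - h1);
    }

    public static void main(String[] args) {
        // two distinct key words that differ only in bits 16..31
        long a = 0x00010000L, b = 0x00020000L;
        System.out.println(compareBuggy(a, b)); // prints 0: keys collapse in a sorted set
        System.out.println(compareFixed(a, b)); // prints 1: keys stay distinct
    }
}
```

 With the buggy mask, any two keys that differ only in bits 16..31 of a word
 compare as equal, which is exactly why only one of them survives in the
 TreeSet while both still end up in the HashMap.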

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (JCR-3581) Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3581.


   Resolution: Fixed
Fix Version/s: 2.7
   2.6.1

committed patch in svn r1478684

thanks, good catch!

 Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo 
 implementation - wrong bit mask value used  
 ---

 Key: JCR-3581
 URL: https://issues.apache.org/jira/browse/JCR-3581
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jcr2spi
Affects Versions: 2.6, 2.7
Reporter: Ate Douma
Assignee: Stefan Guggisberg
 Fix For: 2.6.1, 2.7

 Attachments: JCR-3581.patch




Re: JCR-3581: Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg
i committed your patch, thanks!

cheers
stefan


On Fri, May 3, 2013 at 10:34 AM, Stefan Guggisberg 
stefan.guggisb...@gmail.com wrote:

 hi ate,

 thanks, i'll have a look.

 cheers
 stefan


 On Fri, May 3, 2013 at 10:25 AM, Ate Douma a...@douma.nu wrote:

 I didn't see any feedback yet on this [1].

 It's probably not a very critical bug, but still serious enough IMO
 to consider fixing.

 Anyone interested to at least take a look at it?

 Regards, Ate

 [1] https://issues.apache.org/jira/browse/JCR-3581





[jira] [Updated] (JCR-3581) Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3581:
---

Fix Version/s: 2.4.4

 Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo 
 implementation - wrong bit mask value used  
 ---

 Key: JCR-3581
 URL: https://issues.apache.org/jira/browse/JCR-3581
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jcr2spi
Affects Versions: 2.6, 2.7
Reporter: Ate Douma
Assignee: Stefan Guggisberg
 Fix For: 2.4.4, 2.6.1, 2.7

 Attachments: JCR-3581.patch




[jira] [Updated] (JCR-3568) Property.getBinary().getStream() files in tempDir not removed by InputStream.close() nor by Binary.dispose()

2013-04-17 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3568:
---

Component/s: (was: jackrabbit-api)
 jackrabbit-webdav
   Priority: Major  (was: Blocker)

this is not a jackrabbit-core issue. 

i ran your test code against a local repository with the default configuration 
(created with new TransientRepository()).

i used test files with a total size of about 1gb.

there were no temp files created.

this might be a sling issue or a webdav-client or -server issue. 

to further narrow down the problem please run your test with
the following configurations:

- standalone repository accessed in-proc (not deployed in sling)
- standalone repository server accessed via webdav (not deployed in sling)

 




 Property.getBinary().getStream() files in tempDir not removed by 
 InputStream.close() nor by Binary.dispose() 
 -

 Key: JCR-3568
 URL: https://issues.apache.org/jira/browse/JCR-3568
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-webdav
Affects Versions: 2.6
 Environment: Windows 7 Pro, Java 6.0.39, WebDAV, JCR 2.0
Reporter: Ulrich Schmidt

 I need to inspect the files stored in the jcr:data property of the 
 jcr:content node, which is a subnode of an nt:file node. Access mode is 
 WebDAV using the JCR 2.0 API.
 Jackrabbit does not remove the temp files created by 
 Property.getBinary().getStream(), neither on InputStream.close() nor on 
 Binary.dispose(). I get a RepositoryException "No space left on device" 
 when the temp space becomes full.
 The executed code is:
 public class DownloadLoopMain {
     private final static Logger LOGGER =
             LoggerFactory.getLogger("Test.DownloadLoopMain");
     String repository = "http://localhost:8080/server";
     String user = "admin";
     String password = "admin";
     Session session;
     File temp = new File(System.getProperty("java.io.tmpdir"));
     List<String> nodeList = new ArrayList<String>();

     public DownloadLoopMain() throws Exception {
         LOGGER.info("TempDir=" + temp.getPath());
         long totalsize = 0;

         connectRepository();
         buildNodeList();
         List<String[]> tempfiles = getTempFiles(temp.listFiles());
         LOGGER.info("Start with number of files in Tempdir: " + tempfiles.size());
         for (String node : nodeList) {
             LOGGER.info("Retrieve node " + node);
             Node currentNode = session.getNode(node);
             Node fileNode = currentNode.getNode("jcr:content");
             Property jcrdata = fileNode.getProperty("jcr:data");
             Binary fileBin = jcrdata.getBinary();
             long filesize = fileBin.getSize();
             totalsize += filesize;
             InputStream file = fileBin.getStream();

             LOGGER.info("Now we have number of files in Tempdir: " + tempfiles.size());

             List<String[]> newTempfiles = getTempFiles(temp.listFiles());
             // Display new files in temp-directory
             compareTempfiles("new", newTempfiles, tempfiles);
             // Display files gone from temp-directory
             compareTempfiles("gone", tempfiles, newTempfiles);

             tempfiles = newTempfiles;

             file.close();
             fileBin.dispose();
         }
     }

     /**
      * Compare lists of tempfiles.
      * @param intend
      * @param list1
      * @param list2
      */
     public void compareTempfiles(String intend, List<String[]> list1,
             List<String[]> list2) {
         for (String[] list1file : list1) {
             boolean known = false;
             for (int i = 0; i < list2.size(); i++) {
                 String[] list2file = list2.get(i);
                 if (list1file[0].equals(list2file[0])) {
                     known = true;
                     break;
                 }
             }
             if (!known) {
                 LOGGER.info(intend + " tempfile=" + list1file[0] + " " + list1file[1]);
             }
         }
     }

     public List<String[]> getTempFiles(File[] files) {
         List<String[]> filesList = new ArrayList<String[]>();
         for (File file

[jira] [Resolved] (JCR-3568) Property.getBinary().getStream() files in tempDir not removed by InputStream.close() nor by Binary.dispose()

2013-04-17 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3568.


Resolution: Invalid

 Property.getBinary().getStream() files in tempDir not removed by 
 InputStream.close() nor by Binary.dispose() 
 -

 Key: JCR-3568
 URL: https://issues.apache.org/jira/browse/JCR-3568
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-webdav
Affects Versions: 2.6
 Environment: Windows 7 Pro, Java 6.0.39, WebDAV, JCR 2.0
Reporter: Ulrich Schmidt

 I need to inspect the files stored in the jcr:data property of the 
 jcr:content node, which is a subnode of an nt:file node. Access mode is 
 WebDAV using the JCR 2.0 API.
 Jackrabbit does not remove the temp files created by 
 Property.getBinary().getStream(), neither on InputStream.close() nor on 
 Binary.dispose(). I get a RepositoryException "No space left on device" 
 when the temp space becomes full.
 The executed code is:
 public class DownloadLoopMain {
     private final static Logger LOGGER =
             LoggerFactory.getLogger("Test.DownloadLoopMain");
     String repository = "http://localhost:8080/server";
     String user = "admin";
     String password = "admin";
     Session session;
     File temp = new File(System.getProperty("java.io.tmpdir"));
     List<String> nodeList = new ArrayList<String>();

     public DownloadLoopMain() throws Exception {
         LOGGER.info("TempDir=" + temp.getPath());
         long totalsize = 0;

         connectRepository();
         buildNodeList();
         List<String[]> tempfiles = getTempFiles(temp.listFiles());
         LOGGER.info("Start with number of files in Tempdir: " + tempfiles.size());
         for (String node : nodeList) {
             LOGGER.info("Retrieve node " + node);
             Node currentNode = session.getNode(node);
             Node fileNode = currentNode.getNode("jcr:content");
             Property jcrdata = fileNode.getProperty("jcr:data");
             Binary fileBin = jcrdata.getBinary();
             long filesize = fileBin.getSize();
             totalsize += filesize;
             InputStream file = fileBin.getStream();

             LOGGER.info("Now we have number of files in Tempdir: " + tempfiles.size());

             List<String[]> newTempfiles = getTempFiles(temp.listFiles());
             // Display new files in temp-directory
             compareTempfiles("new", newTempfiles, tempfiles);
             // Display files gone from temp-directory
             compareTempfiles("gone", tempfiles, newTempfiles);

             tempfiles = newTempfiles;

             file.close();
             fileBin.dispose();
         }
     }

     /**
      * Compare lists of tempfiles.
      * @param intend
      * @param list1
      * @param list2
      */
     public void compareTempfiles(String intend, List<String[]> list1,
             List<String[]> list2) {
         for (String[] list1file : list1) {
             boolean known = false;
             for (int i = 0; i < list2.size(); i++) {
                 String[] list2file = list2.get(i);
                 if (list1file[0].equals(list2file[0])) {
                     known = true;
                     break;
                 }
             }
             if (!known) {
                 LOGGER.info(intend + " tempfile=" + list1file[0] + " " + list1file[1]);
             }
         }
     }

     public List<String[]> getTempFiles(File[] files) {
         List<String[]> filesList = new ArrayList<String[]>();
         for (File file : files) {
             String[] filedesc = new String[2];
             filedesc[0] = file.getName();
             filedesc[1] = file.length() + "";
             filesList.add(filedesc);
         }
         return filesList;
     }

     public void buildNodeList() throws IOException {
         String path = "E:/Jackrabbit/logs/Populate-Files.log";
         File file = new File(path);
         BufferedReader br = new BufferedReader(new FileReader(file));
         String line;
         while ((line = br.readLine()) != null) {
             nodeList.add(line

Re: MongoMK^2 design proposal

2013-02-18 Thread Stefan Guggisberg
On Mon, Feb 18, 2013 at 1:01 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Mon, Feb 11, 2013 at 3:22 PM, Jukka Zitting jukka.zitt...@gmail.com 
 wrote:
 I've now spent a few days drafting some initial implementation code to
 see how such a design would work in practice.

 Fast forward one week ahead and we have also a MongoDB binding and
 performance that's already roughly on par with the H2 MK (see for
 example OAK-624). I'm increasingly confident that this SegmentMK
 concept will work really well also in practice.

excellent, good news!

cheers
stefan


 There's still a number of missing features and optimizations that I
 started to list as subtasks of OAK-593. I'll start working on OAK-630
 (compareAgainstBaseState) and will follow up on the others after that.
 If anyone else feels like joining the effort, feel free to pick one of
 the subtasks or come up with new ones.

 I'll also be looking at improving our performance and scalability
 benchmarks to provide better insight on where we are in the big
 picture and what are the most pressing issues to focus on.

 BR,

 Jukka Zitting


Re: script to compare maven test run times

2013-02-15 Thread Stefan Guggisberg
On Fri, Feb 15, 2013 at 12:04 PM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
 hi,

 For OAK-624 I needed to compare the test run times between 2 different mk
 implementations so I came up with a script that computes the percentage
 difference between the 2 'mvn test' runs.

 This allows for output like the one in this comment [0].
 The interesting part is it is generic enough to be also used when comparing
 unit tests that run on oak and jackrabbit (starting with, but not limited
  to the tck ones or the query tests that now run on jr & oak).

 Please take a look and let me know if you think the output is useful, or if
 there is anything more that we could add to it.

nice!

cheers
stefan

 The script is here [1].

 thanks,
 alex



 [0]
 https://issues.apache.org/jira/browse/OAK-624?focusedCommentId=13579093page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13579093
 [1] https://gist.github.com/alexparvulescu/4959583


[jira] [Assigned] (JCR-3514) Error in RepositoryImpl class

2013-02-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned JCR-3514:
--

Assignee: Stefan Guggisberg

 Error in RepositoryImpl class
 -

 Key: JCR-3514
 URL: https://issues.apache.org/jira/browse/JCR-3514
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4.2
Reporter: Sarfaraaz ASLAM
Assignee: Stefan Guggisberg

 Can you please verify line 2123 of RepositoryImpl class.
 The condition should rather be  if (!initialized || !active) {
 instead of  if (!initialized || active) {



[jira] [Commented] (JCR-3514) Error in RepositoryImpl class

2013-02-12 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576704#comment-13576704
 ] 

Stefan Guggisberg commented on JCR-3514:


bq. The condition should rather be if (!initialized || !active) { 
instead of if (!initialized || active) { 

the condition is correct as is.

a workspace is considered active if there are sessions connected to it or if 
there's a current GC task accessing it.

a workspace is considered idle if it's not active. 

disposeIfIdle should never dispose an active workspace, hence the if-statement.

see also JCR-2749
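
[editorial note] the guard described above can be sketched as follows; the
method and field names are taken from the discussion, the surrounding class
is simplified and hypothetical, not the actual RepositoryImpl code:

```java
public class WorkspaceGuardDemo {

    // returns true if disposeIfIdle may dispose the workspace
    static boolean mayDispose(boolean initialized, boolean active) {
        // matches the condition discussed for RepositoryImpl: bail out when
        // the workspace is not initialized OR still active (open sessions
        // or a running GC task)
        if (!initialized || active) {
            return false;
        }
        return true; // initialized and idle -> safe to dispose
    }

    public static void main(String[] args) {
        System.out.println(mayDispose(true, false));  // true: idle workspace
        System.out.println(mayDispose(true, true));   // false: still in use
        System.out.println(mayDispose(false, false)); // false: never initialized
    }
}
```

so reversing the condition to `!active`, as the report suggests, would make
disposeIfIdle skip exactly the idle workspaces it is supposed to dispose.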

 Error in RepositoryImpl class
 -

 Key: JCR-3514
 URL: https://issues.apache.org/jira/browse/JCR-3514
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4.2
Reporter: Sarfaraaz ASLAM
Assignee: Stefan Guggisberg

 Can you please verify line 2123 of RepositoryImpl class.
 The condition should rather be  if (!initialized || !active) {
 instead of  if (!initialized || active) {



[jira] [Resolved] (JCR-3514) Error in RepositoryImpl class

2013-02-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3514.


Resolution: Not A Problem

 Error in RepositoryImpl class
 -

 Key: JCR-3514
 URL: https://issues.apache.org/jira/browse/JCR-3514
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4.2
Reporter: Sarfaraaz ASLAM
Assignee: Stefan Guggisberg

 Can you please verify line 2123 of RepositoryImpl class.
 The condition should rather be  if (!initialized || !active) {
 instead of  if (!initialized || active) {



[jira] [Updated] (JCR-3509) Workspace maxIdleTime parameter not working

2013-02-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3509:
---

Priority: Minor  (was: Major)

 Workspace maxIdleTime parameter not working
 ---

 Key: JCR-3509
 URL: https://issues.apache.org/jira/browse/JCR-3509
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: config
Affects Versions: 2.4.3
 Environment: JSF, SPRING
Reporter: Sarfaraaz ASLAM
Priority: Minor
 Attachments: derby.jackrabbit.repository.xml, JcrConfigurer.java


 I would like to set the maximum number of seconds that a workspace can remain 
 unused before the workspace is automatically closed through the maxIdleTime 
 parameter, but this does not seem to work. 



[jira] [Commented] (JCR-3509) Workspace maxIdleTime parameter not working

2013-02-12 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576708#comment-13576708
 ] 

Stefan Guggisberg commented on JCR-3509:


bq. It should rather be if (!initialized || !active) {

no, the if-statement is correct, see JCR-3514.

please note that a workspace won't be automatically disposed if there's at 
least one session connected to it. 
make sure you call Session#logout if you're done.

 Workspace maxIdleTime parameter not working
 ---

 Key: JCR-3509
 URL: https://issues.apache.org/jira/browse/JCR-3509
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: config
Affects Versions: 2.4.3
 Environment: JSF, SPRING
Reporter: Sarfaraaz ASLAM
 Attachments: derby.jackrabbit.repository.xml, JcrConfigurer.java


 I would like to set the maximum number of seconds that a workspace can remain 
 unused before the workspace is automatically closed through the maxIdleTime 
 parameter, but this does not seem to work. 



[jira] [Updated] (JCR-3502) Deleted states are not merged correctly

2013-01-28 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3502:
---

Component/s: clustering

 Deleted states are not merged correctly
 ---

 Key: JCR-3502
 URL: https://issues.apache.org/jira/browse/JCR-3502
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Reporter: Unico Hommes
Assignee: Unico Hommes

 When a node is simultaneously deleted on two cluster nodes, the save on the 
 cluster node that lost the race fails unnecessarily.



Re: MongoMK.getHeadRevision()

2013-01-08 Thread Stefan Guggisberg
On Mon, Jan 7, 2013 at 5:21 PM, Mete Atamel mata...@adobe.com wrote:
 yes, getHeadRevision must return the most recent public 'trunk' revision.
 returning a private branch revision is a bug.

 Maybe this could be specified in the JavaDoc for getHeadRevision to avoid
 confusion?

done in r1430236

cheers
stefan


 -Mete

 On 1/7/13 5:11 PM, Stefan Guggisberg stefan.guggisb...@gmail.com wrote:

On Mon, Jan 7, 2013 at 4:46 PM, Marcel Reutegger mreut...@adobe.com
wrote:
 Hi,

 while working on OAK-535 I noticed that MongoMK.getHeadRevision()
 may also return the revision of a branch commit. Is this intentional?
 I was rather expecting the method would return the highest commit
 revision without a branchId.

yes, getHeadRevision must return the most recent public 'trunk' revision.
returning a private branch revision is a bug.

cheers
stefan


 Regards
  Marcel



Re: Conflict handling in Oak

2012-12-18 Thread Stefan Guggisberg
On Wed, Dec 12, 2012 at 4:46 PM, Michael Dürig mdue...@apache.org wrote:
 Hi,

 Currently the Microkernel contract does not specify a merge policy but is
 free to try to merge conflicting changes or throw an exception. I think this
 is problematic in various ways:

 1) Automatic merging may violate the principle of least surprise. It can be
 arbitrarily complex and still be incorrect wrt. different use cases which need
 different merge strategies for the same conflict.

 2) Furthermore merges should be correctly mirrored in the journal. According
 to the Microkernel API: deleting a node is allowed if the node existed in
 the given revision, even if it was deleted in the meantime. So the
 following should currently not fail (it does though, see OAK-507):

 String base = mk.getHeadRevision();
 String r1 = mk.commit(-a, base)
 String r2 = mk.commit(-a, base)

 At this point retrieving the journal up to revision r2 should only contain a
 single -a operation. I'm quite sure this is currently not the case and the
 journal will contain two -a operations. One for revision r1 and another for
 revision r2.

if that's the case then it's a bug. the journal must IMO contain the exact diff
from a revision to its predecessor.

cheers
stefan
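
[editorial note] the delete/delete scenario above can be modeled with a toy
journal; this is purely illustrative of the expected contract (the journal
records only the effective diff), not MicroKernel API code:

```java
import java.util.ArrayList;
import java.util.List;

public class JournalDemo {

    // record an operation only if it actually changes the tree,
    // i.e. a second "-/a" from the same base revision is a no-op diff
    static void applyCommit(List<String> journal, String op) {
        if (!journal.contains(op)) {
            journal.add(op);
        }
    }

    public static void main(String[] args) {
        List<String> journal = new ArrayList<>();
        applyCommit(journal, "-/a"); // r1: /a deleted
        applyCommit(journal, "-/a"); // r2: /a already gone, nothing to record
        // the journal up to r2 should contain a single delete operation
        System.out.println(journal); // prints [-/a]
    }
}
```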


 3) Throwing an unspecific MicrokernelException leaves the API consumer with
 no clue on what caused a commit to fail. Retrying a commit after some client
 side conflict resolution becomes a hit and miss. See OAK-442.


 To address 1) I suggest we define a set of clear cut cases where any
 Microkernel implementations MUST merge. For the other cases I'm not sure
 whether we should make them MUST NOT, SHOULD NOT or MAY merge.

 To address 2) My preferred solution would be to drop getJournal entirely
 from the Microkernel API. However, this means rebasing a branch would need
 to go into the Microkernel (OAK-464). Otherwise every merge defined for 1)
 would need to take care the journal is adjusted accordingly.
 Another possibility here is to leave the journal unadjusted. However then we
 need to specify MUST NOT for other merges in 1). Because only then can
 clients of the journal know how to interpret the journal (respectively the
 conflicts contained therein).

 To address 3) I'd simply derive a more specific exception from
 MicroKernelException and throw that in the case of a conflict. See OAK-496.

 Michael


Re: Conflict handling in Oak

2012-12-18 Thread Stefan Guggisberg
On Tue, Dec 18, 2012 at 12:49 PM, Michael Dürig mdue...@apache.org wrote:

 This is a bit more complicated. In fact it is the other way around: if two
 journal entries commute, the corresponding differences on the nodes do not
 conflict regarding the definition I gave.

 OTOH non-conflicting changes could still lead to non-commuting journal
 entries and thus merging such changes would require journals to be adjusted.

why should a journal need to be adjusted? MicroKernel#getJournal returns the
exact diffs of successive revisions.

cheers
stefan

 I'll rephrase below.


 On 18.12.12 11:09, Michael Dürig wrote:



 On 18.12.12 9:38, Thomas Mueller wrote:

 What I suggest should be merged within the MicroKernel:

 * Two sessions concurrently add different child nodes to a node
 (/test/a
 and /test/b): this is merged as it's not really a conflict

 * Two sessions concurrently delete different child nodes (/test/a and
 /test/b): this is merged

 * Two sessions concurrently move different child nodes to another
 location


 I think this can be summed up as:

  Only merge non-conflicting changes wrt. the children of a node. The
  children of any node are its child nodes and its properties. Two
 changes to the children of a node conflict if these children have the
 same name.


 If there are no other conflicts (*), merge changes wrt. the children of a
  node. The children of any node are its child nodes and its properties. Two
 changes to the children of a node conflict if these children have the same
 name.

 (*) see Tom's initial post for what constitutes other conflicts.
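The name-based conflict rule above can be sketched as a small helper (illustrative only, not Oak API; each concurrent change is assumed to be summarized by the set of child names it touches):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative helper (not Oak API): per the rule above, two concurrent
// changes to a node conflict iff they touch a child -- child node or
// property -- of the same name. Each change is summarized here by the
// set of child names it touches.
public class ChildConflicts {

    public static boolean conflict(Set<String> touchedByA, Set<String> touchedByB) {
        Set<String> overlap = new HashSet<>(touchedByA);
        overlap.retainAll(touchedByB);  // names touched by both sessions
        return !overlap.isEmpty();
    }

    public static void main(String[] args) {
        // session 1 adds /test/a, session 2 adds /test/b: mergeable
        System.out.println(conflict(Set.of("a"), Set.of("b")));       // false
        // both sessions touch the child named "a": conflict
        System.out.println(conflict(Set.of("a"), Set.of("a", "c")));  // true
    }
}
```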

 This additional complication somewhat unnecessarily restricts the set of
 mergeable changes. That's why I came up with the proposal to drop support
 for the getJournal() API.

 Michael




 This has the beauty of simplicity and as Tom notes below also does not
 require the journal to be corrected.


 The reason for this is to allow concurrently manipulating child nodes if
 there are many child nodes (concurrent repository loading).

  With these rules, I believe that 2) Furthermore merges should be
 correctly
 mirrored in the journal wouldn't be required, as there are no merges
 that
 would cause the journal to change.


 Right. The reason for this is - and that's again a very nice property of
 this approach - that for these conflicts the corresponding journal
 entries commute.


 In addition it would be nice to annotate conflicts in some way. This is
 quite easy to do and would allow upper layers to resolve conflicts based
 on specific business logic. Currently we do something along these lines
 with the AnnotatingConflictHandler [1] in oak-core.

 Michael


 [1]

 https://github.com/jukka/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/commit/AnnotatingConflictHandler.java






Re: Conflict handling in Oak

2012-12-18 Thread Stefan Guggisberg
On Tue, Dec 18, 2012 at 2:30 PM, Michael Dürig mic...@gmail.com wrote:


 On 18.12.12 11:30, Stefan Guggisberg wrote:

 On Wed, Dec 12, 2012 at 4:46 PM, Michael Dürig mdue...@apache.org wrote:

 Hi,

 Currently the Microkernel contract does not specify a merge policy but is
 free to try to merge conflicting changes or throw an exception. I think
 this
 is problematic in various ways:

  1) Automatic merging may violate the principle of least surprise. It can be
  arbitrarily complex and still be incorrect wrt. different use cases which
 need
 different merge strategies for the same conflict.

 2) Furthermore merges should be correctly mirrored in the journal.
 According
 to the Microkernel API: deleting a node is allowed if the node existed
 in
 the given revision, even if it was deleted in the meantime. So the
 following should currently not fail (it does though, see OAK-507):

  String base = mk.getHeadRevision();
   String r1 = mk.commit("-a", base);
   String r2 = mk.commit("-a", base);

 At this point retrieving the journal up to revision r2 should only
 contain a
 single -a operation. I'm quite sure this is currently not the case and
 the
 journal will contain two -a operations. One for revision r1 and another
 for
 revision r2.


 if that's the case then it's a bug. the journal must IMO contain the exact
 diff
 from a revision to its predecessor.


 See OAK-532.

thanks
stefan


 Michael



 cheers
 stefan


 3) Throwing an unspecific MicrokernelException leaves the API consumer
 with
 no clue on what caused a commit to fail. Retrying a commit after some
 client
 side conflict resolution becomes a hit and miss. See OAK-442.


 To address 1) I suggest we define a set of clear cut cases where any
 Microkernel implementations MUST merge. For the other cases I'm not sure
 whether we should make them MUST NOT, SHOULD NOT or MAY merge.

 To address 2) My preferred solution would be to drop getJournal entirely
 from the Microkernel API. However, this means rebasing a branch would
 need
 to go into the Microkernel (OAK-464). Otherwise every merge defined for
 1)
 would need to take care the journal is adjusted accordingly.
 Another possibility here is to leave the journal unadjusted. However then
 we
 need to specify MUST NOT for other merges in 1). Because only then can
  clients of the journal know how to interpret the journal (respectively the
 conflicts contained therein).

 To address 3) I'd simply derive a more specific exception from
 MicroKernelException and throw that in the case of a conflict. See
 OAK-496.

 Michael


Re: Conflict handling in Oak

2012-12-18 Thread Stefan Guggisberg
On Tue, Dec 18, 2012 at 5:12 PM, Michael Dürig mdue...@apache.org wrote:


 On 18.12.12 16:05, Mete Atamel wrote:

 In MongoMK, getJournal basically returns the jsonDiff from the commit, at
 least in the simple case when there is no path to filter.


 And AFAIK this is the same for the H2 MK.

currently, yes.

cheers
stefan


 Michael



 -Mete

 On 12/18/12 4:57 PM, Thomas Mueller muel...@adobe.com wrote:

 Hi,

 But the question is how close the journal has to match the original
 commit, specially move and copy operations.


 Yes. There are various degrees of how close the journal is to the commit.
 One option is: the commit is preserved 1:1. The other extreme is: moves
 are fully converted to add+remove. But there are options in the middle,
 for example if the original operation included move /a /b, and the
 journal wouldn't return it 1:1, but instead add /b, then move /a/x to
 /b/x, and remove /a. I thought this is what the MicroKernelImpl does in
 some cases (if there are multiple operations), and I don't think it's a
 problem.

 Regards,
 Thomas
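In JSOP-style notation, the two journal representations Thomas describes would look roughly like this (a sketch; exact escaping per the MicroKernel javadoc):

```
// original commit, preserved 1:1 in the journal:
>"/a" : "/b"

// equivalent decomposition an implementation might return instead:
+"/b" : {}
>"/a/x" : "/b/x"
-"/a"
```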






[jira] [Commented] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507249#comment-13507249
 ] 

Stefan Guggisberg commented on JCR-3452:


bq. Yes, that is correct. BUT, adding additional child node types is making the 
restriction LESS restrictive and should be allowed. 

wrong. the required primary node types of a child node definition are logically 
ANDed during validation, i.e. adding req. types makes the constraint stronger, 
removing OTOH weaker. see [0].

bq. And setting the child node to nt:base makes the restriction LEAST 
restrictive.

agreed. that's an edge case that's currently not handled. since the abstract 
nt:base node type is the root of the node type hierarchy,
it is implicitly included in every req. types constraint. explicitly adding or 
removing nt:base has no effect on the constraint.  

[0] 
http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.7.4.1%20Required%20Primary%20Node%20Types
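In CND-style notation (a sketch; the node type `my:parent` and child name are invented for illustration), ANDed required primary types look like this:

```
// every child node named "child" must satisfy ALL listed required types
[my:parent]
  + child (nt:folder, mix:referenceable)
```

Adding a type to the parenthesized list therefore tightens the constraint (fewer node types satisfy all of them), removing one loosens it, and nt:base is implied and adds nothing.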


 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think, 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3452:
---

Issue Type: Improvement  (was: Bug)

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt





[jira] [Commented] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507346#comment-13507346
 ] 

Stefan Guggisberg commented on JCR-3452:


bq. What about the change of property definitions from single to multiple (I 
mentioned this problem in the description)? Do I miss there something, too?

you're right. changing isMultiple from false to true should be allowed but 
currently isn't. same with changing the requiredType to UNDEFINED. 

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt





[jira] [Resolved] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3452.


   Resolution: Fixed
Fix Version/s: 2.6
 Assignee: Stefan Guggisberg

fixed in svn r1415685.

trivial modifications
- adding/removing nt:base as requiredPrimaryType constraint 
- making a single-valued property multi-valued 
- changing a property's requiredType constraint to UNDEFINED
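As a CND-style sketch (type and item names invented), each of these modifications is a loosening and leaves existing content valid:

```
// before:
[my:type]
  - title (string) mandatory
  + entry (nt:unstructured)

// after -- all three changes are trivial:
[my:type]
  - title (undefined) mandatory multiple   // requiredType -> UNDEFINED, single -> multi-valued
  + entry (nt:unstructured, nt:base)       // nt:base added: no effect on the constraint
```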

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 2.6

 Attachments: patch.txt





[jira] [Updated] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3452:
---

Issue Type: Bug  (was: Improvement)

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 2.6

 Attachments: patch.txt





Re: issues fixed by nobody?

2012-11-29 Thread Stefan Guggisberg
On Thu, Nov 29, 2012 at 9:44 AM, Jan Haderka
jan.hade...@magnolia-cms.com wrote:
 Hiya,

 there appears to be fair number of issues in JR that were fixed by 
 unassigned. Is this on purpose?

 https://issues.apache.org/jira/sr/jira.issueviews:searchrequest-printable/temp/SearchRequest.html?jqlQuery=project+%3D+JCR+AND+status+%3D+closed+and+assignee+is+nulltempMax=1000

FWIW: they were not 'assigned' to a particular user; that doesn't mean
they were fixed by 'unassigned'.
the user who resolved the issue can be found in the issue's history.

cheers
stefan


 Cheers,
 Jan


[jira] [Commented] (JCR-3452) Modified property and child node definition are rejected

2012-11-29 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13506599#comment-13506599
 ] 

Stefan Guggisberg commented on JCR-3452:


bq. AFAIU, making something more restrictive than before is considered a 
major change because it could make existing content invalid.

correct. changes that might render existing content illegal according to the 
new definition are considered major. 
only changes that have no effect on existing content (e.g. making a mandatory 
item non-mandatory) are allowed.

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt





[jira] [Assigned] (JCR-3468) ConcurrentModificationException in BitSetENTCacheImpl

2012-11-28 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned JCR-3468:
--

Assignee: Stefan Guggisberg

 ConcurrentModificationException in BitSetENTCacheImpl
 -

 Key: JCR-3468
 URL: https://issues.apache.org/jira/browse/JCR-3468
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.10
Reporter: Jeroen van Erp
Assignee: Stefan Guggisberg
Priority: Critical

 Irregularly, the following ConcurrentModificationException occurs in the logs 
 of our application; it seems like either a sync is missing, or a copy to a 
 new collection is needed before iterating.
 {noformat}
 java.util.ConcurrentModificationException: null
 at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100) 
 ~[na:1.6.0_32]
 at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154) ~[na:1.6.0_32]
 at 
 org.apache.jackrabbit.core.nodetype.BitSetENTCacheImpl.findBest(BitSetENTCacheImpl.java:114)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.getEffectiveNodeType(NodeTypeRegistry.java:1082)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.getEffectiveNodeType(NodeTypeRegistry.java:508)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.NodeImpl.getEffectiveNodeType(NodeImpl.java:776) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.NodeImpl.getApplicablePropertyDefinition(NodeImpl.java:826)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.ItemManager.getDefinition(ItemManager.java:255) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.ItemData.getDefinition(ItemData.java:101) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.PropertyData.getPropertyDefinition(PropertyData.java:55)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.PropertyImpl.internalGetValues(PropertyImpl.java:461)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.PropertyImpl.getValues(PropertyImpl.java:498) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 {noformat}



[jira] [Resolved] (JCR-3468) ConcurrentModificationException in BitSetENTCacheImpl

2012-11-28 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3468.


   Resolution: Fixed
Fix Version/s: 2.6

fixed in svn r1414733
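The race and the classic snapshot-iteration remedy can be sketched as follows. This is an illustration of the failure mode only, not the actual change made in r1414733:

```java
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.TreeSet;

// Illustration of the failure mode behind the stack trace above, and the
// snapshot-iteration remedy. A sketch, NOT the actual fix from r1414733.
public class CacheIterationDemo {

    // Iterates 'keys' (or a snapshot of it) while adding entries to 'keys';
    // returns true if a ConcurrentModificationException was thrown.
    public static boolean modifyWhileIterating(TreeSet<String> keys, boolean snapshot) {
        Iterable<String> view = snapshot ? new TreeSet<>(keys) : keys;
        try {
            for (String key : view) {
                keys.add(key + "+");  // structural modification of the underlying set
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;  // TreeMap's fail-fast iterator detected the modification
        }
    }

    public static void main(String[] args) {
        System.out.println(modifyWhileIterating(new TreeSet<>(List.of("ent-a", "ent-b")), false)); // true
        System.out.println(modifyWhileIterating(new TreeSet<>(List.of("ent-a", "ent-b")), true));  // false
    }
}
```

Synchronizing the iteration against concurrent writers is the other standard remedy; iterating a copy trades a small allocation for lock-free reads.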

 ConcurrentModificationException in BitSetENTCacheImpl
 -

 Key: JCR-3468
 URL: https://issues.apache.org/jira/browse/JCR-3468
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.10
Reporter: Jeroen van Erp
Assignee: Stefan Guggisberg
Priority: Critical
 Fix For: 2.6





Re: Identifier- or hash-based access in the MicroKernel

2012-11-23 Thread Stefan Guggisberg
On Thu, Nov 22, 2012 at 4:56 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Wed, Nov 21, 2012 at 11:00 AM, Jukka Zitting jukka.zitt...@gmail.com 
 wrote:
 On Tue, Nov 20, 2012 at 8:01 PM, Stefan Guggisberg
 stefan.guggisb...@gmail.com wrote:
 - do you have a proposal for the suggested MicroKernel API (java doc)
   changes?

 I'll have one to share shortly...

 See the attachment in https://issues.apache.org/jira/browse/OAK-468

i committed the API changes, MicroKernelImpl support and adapted
integration tests
in svn r1412898.

cheers
stefan


 BR,

 Jukka Zitting


Re: Identifier- or hash-based access in the MicroKernel

2012-11-21 Thread Stefan Guggisberg
On Wed, Nov 21, 2012 at 10:00 AM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Tue, Nov 20, 2012 at 8:01 PM, Stefan Guggisberg
 stefan.guggisb...@gmail.com wrote:
 - returning an :id and/or :hash should be optional, i.e. we shouldn't
   require an implementation to return an :id or :hash for every path

 Exactly.

 - i suggest we prefix the id/path getNodes parameter value with ':id:'
 and ':hash:'

 I'd leave the format up to the MK implementation to decide, with
 oak-core just passing whatever the MK gave as the :id or :hash
 attribute of a child object. For example, an MK that selects to use
 :id: and :hash: prefixes for such values, would work something
 like this:

  getNodes("/") = { "a": { ":id": ":id:x" } }
  getNodes(":id:x") = { "b": { ":id": ":id:y" } }
  getNodes(":id:y") = { "c": { ":id": ":id:z" } }
  getNodes(":id:z") = {}

ok, agreed.

cheers
stefan
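The agreed-upon disambiguation can be sketched as a small classifier (hypothetical helper; the names are illustrative, not MicroKernel API):

```java
// Sketch of the disambiguation discussed above: a getNodes() "path or id"
// argument is classified by its prefix. Illustrative only, not MK API.
public class NodeRef {
    public enum Kind { PATH, ID, HASH }

    public static Kind classify(String ref) {
        if (ref.startsWith("/")) return Kind.PATH;       // absolute path
        if (ref.startsWith(":id:")) return Kind.ID;      // implementation-specific id
        if (ref.startsWith(":hash:")) return Kind.HASH;  // content hash
        throw new IllegalArgumentException("unrecognized reference: " + ref);
    }

    public static void main(String[] args) {
        System.out.println(classify("/a/b"));     // PATH
        System.out.println(classify(":id:x"));    // ID
        System.out.println(classify(":hash:9f")); // HASH
    }
}
```

Since an absolute path always starts with a slash, the three cases cannot collide even when an implementation supports both ids and hashes.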


 - do you have a proposal for the suggested MicroKernel API (java doc)
   changes?

 I'll have one to share shortly...

 BR,

 Jukka Zitting


Re: Identifier- or hash-based access in the MicroKernel

2012-11-20 Thread Stefan Guggisberg
hi jukka

On Tue, Nov 20, 2012 at 5:24 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 A lot of functionality in Oak (node states, the diff and hook
 mechanisms, etc.) are based on walking down the tree hierarchy one
 level at a time. To do this, for example to access changes below
 /a/b/c, oak-core will currently request paths /a, /a/b, /a/b/c and so
 on from the underlying MK implementation.

 This would work reasonably well with MK implementations that are
  essentially big hash tables that map the full path (and revision) to
 the content at that location. Even then there's some space overhead as
 even tiny nodes (think of an ACL entry) get paired with the full path
 (and revision) of the node. The current MongoMK with its path keys
 works like this, though even there a secondary index is needed for the
 path lookups.

 The approach is less ideal for MK implementations (like the default
 H2-based one) that have to traverse the path when some content is
 accessed. For example, with the above oak-core access pattern, the
 sequence of accessed nodes would be [ a, a, b, a, b, c ], where
 ideally just [ a, b, c ] would suffice. The KernelNodeStore cache in
 oak-core prevents this from being too big an issue, but ideally we'd
 be able to avoid such extra levels of caching.

 To solve that mismatch without impacting the overall architecture too
 much I'd like to propose the following:

 * When requested using the filter argument, the getNodes() call may
 (but is not required to) return special :hash or :id properties as
 parts of the (possibly otherwise empty) child node objects included in
 the JSON response.

 * When returned by getNodes(), those values can be used by the client
 instead of the normal path argument when requesting the content of
 such child nodes using other getNodes() calls. The MK implementation
 is expected to automatically detect whether a given string argument is
 a path, a hash or an identifier, possibly as simply as looking at
 whether it starts with a slash.

 * Both :hash and :id values are expected to uniquely identify a
 specific immutable state of a node. The only difference is that the
 inequality of two hashes implies the inequality of the referenced
 nodes (which can be used by oak-core to optimize some operations),
 whereas it's possible for two different ids to refer to nodes with the
 exact same content.

 Such a solution would allow the following sequence

getNodes("/") = { "a": {} }
getNodes("/a") = { "b": {} }
getNodes("/a/b") = { "c": {} }
getNodes("/a/b/c") = {}

 to become something like

getNodes("/") = { "a": { ":id": "x" } }
getNodes("x") = { "b": { ":id": "y" } }
getNodes("y") = { "c": { ":id": "z" } }
getNodes("z") = {}

 with x, y and z being some implementation-specific identifiers, like
 ObjectIDs in MongoDB.

 In any case the MK implementation would still be required to support
 access by full path.

makes sense, +1 in general.

some comments:

- returning an :id and/or :hash should be optional, i.e. we shouldn't
  require an implementation to return an :id or :hash for every path
  (an implementation might e.g. want to persist an entire subtree as
  one single persistence entity)
- i suggest we prefix the id/path getNodes parameter value with ':id:'
and ':hash:'
  (or some other scheme) when requesting nodes by hash or identifier
  to avoid a potential ambiguity (an implementation might support
  both access by hash and id).
- do you have a proposal for the suggested MicroKernel API (java doc)
  changes?

cheers
stefan


 BR,

 Jukka Zitting


Re: Support for long multivalued properties

2012-11-15 Thread Stefan Guggisberg
On Thu, Nov 15, 2012 at 12:00 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 This came up earlier in the Berlin Oakathon and we're now facing it
 also with the property index. Basically, do we want to add explicit
  support for long (> 1k values) multivalued properties?

 Currently such properties are possible in theory since there is no
 hard limit to the size of the value array, but in practice accessing
 such properties requires reading the whole array to memory and even a
 simple update (like appending a single value) requires the whole array
 to be rewritten.

 There are two ways of supporting use cases that require such long
 multivalued properties: a) we can support it directly in the
 repository, or b) we require the client to use more complex data
 structures like in the BTreeManager in Jackrabbit trunk or the BTree
 implementation in the wrapper-based index implementation Thomas wrote
 for Oak.

 Supporting such use cases directly in the repository would be nice as
 that would notably simplify affected clients. However, doing so would
 require us to adapt some of our APIs. For example we'd need a way to
 iterate over the list of values of a single property and to add, set
 and remove individual values at specific locations. The
 PropertyBuilder interface already has some of this, but neither the
 Oak API nor the MicroKernel currently support such operations.

 WDYT, should we implement something along these lines or leave it up
 to clients? I'm cautiously positive in that we should do this since
 we'd in any case need similar code for the property index, but would
 love to hear other opinions.

personally i am not aware of real life use cases requiring 'large' mv
properties.

since the ultimate goal of oak is to provide a JCR implementation and the
JCR API doesn't provide any methods to manipulate/access single members
of a mv property i don't think we need to support it under the hood.

cheers
stefan
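The update cost Jukka describes can be illustrated with a minimal sketch (not Oak code; the class and method names are made up for the example): when a multivalued property is stored as one immutable value array, every append copies and rewrites the whole array.

```java
import java.util.Arrays;

// Sketch: appending to a property stored as a single immutable value
// array is O(n) per append, because the entire array must be rewritten.
public class MvAppendSketch {

    // Simulated stored property value array (immutable by convention).
    static String[] append(String[] stored, String value) {
        // The whole array is copied on every append.
        String[] copy = Arrays.copyOf(stored, stored.length + 1);
        copy[stored.length] = value;
        return copy;
    }

    public static void main(String[] args) {
        String[] values = new String[0];
        for (int i = 0; i < 1000; i++) {
            values = append(values, "v" + i);  // full copy each iteration
        }
        System.out.println(values.length);  // 1000
    }
}
```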


 BR,

 Jukka Zitting


Re: [MongoMK] BlobStore garbage collection

2012-11-06 Thread Stefan Guggisberg
On Tue, Nov 6, 2012 at 9:45 AM, Mete Atamel mata...@adobe.com wrote:
 Hi,

 On 11/6/12 9:24 AM, Thomas Mueller muel...@adobe.com wrote:

Yes. Somebody has to decide which revisions are no longer needed. Luckily
it doesn't need to be us :-) We might set a default value (10 minutes or
so), and then give the user the ability to change that, depending on
whether he cares more about disk space or the ability to read old data /
roll back to an old state.

 If we go down this path for node GC, doesn't MicroKernel interface have to
 change to account for this? Where would you change this default 10 minutes
 value as far as MicroKernel is concerned?

there's a jira issue [0]. so far we've not been able to resolve this issue.

there's no single 'right' retention policy as different use cases
imply different
strategies.

personally i tend to not specify a retention policy on the API level
but rather leave it implementation specific (configurable).

cheers
stefan

[0] https://issues.apache.org/jira/browse/OAK-114


 -MEte



Re: Jackrabbit-explorer

2012-11-01 Thread Stefan Guggisberg
hi,

sorry, wrong list.

i guess you're using this project:
http://code.google.com/p/jackrabbitexplorer/

please send your questions to their mailing list.

cheers
stefan

On Thu, Nov 1, 2012 at 7:56 AM, nadhiya
nadhiya.shanm...@icissolutions.com wrote:
 hi,
 I use glassfish server and have deployed the jackrabbitexplorer in it
 through admin console. When i try to browse
 http://localhost:8080/jackrabbitexplorer/ this URL,

 it opens up a pop to enter the credentials to find my repository. I'm not
 able to login to my local repository. I get this error,

  There was an error logging in:
 com.priocept.jcr.client.SerializedException: Remote repository not found:
 The resource at http://localhost:8080/jackrabbit.repository could not be
 retrieved 

 pls help me to solve this.



 --
 View this message in context: 
 http://jackrabbit.510166.n4.nabble.com/Jackrabbit-explorer-tp4656908.html
 Sent from the Jackrabbit - Dev mailing list archive at Nabble.com.


[jira] [Commented] (JCR-3453) Jackrabbit might deplate the temporary tablespace on Oracle

2012-10-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13486897#comment-13486897
 ] 

Stefan Guggisberg commented on JCR-3453:


FWIW:

bq. * Actually why do you need to use NVL(...) in the column list? Other DB 
filesystem implementations do not have this workaround. 

because oracle is AFAIK the only rdbms which doesn't distinguish empty strings 
or empty LOBs from NULL...

for more detailed information have a look at the javadoc ([0]).

[0] 
http://jackrabbit.apache.org/api/2.1/org/apache/jackrabbit/core/fs/db/OracleFileSystem.html#buildSQLStatements()

 Jackrabbit might deplate the temporary tablespace on Oracle
 ---

 Key: JCR-3453
 URL: https://issues.apache.org/jira/browse/JCR-3453
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.1.2, 2.5.2
 Environment: Operating system: Linux
 Application server: Websphere v7
 RDBMS: Oracle 11g
 Jackrabbit: V2.1.2 (built into Liferay 6.0 EE SP2)
Reporter: Laszlo Csontos
 Attachments: repository.xml


 *** Experienced phenomenon ***
 Our customer reported an issue regarding Liferay’s document library: while 
 documents are being retrieved, the following exception occurs accompanied by 
 temporary tablespace shortage.
 [9/24/12 8:00:55:973 CEST] 0023 SystemErr R ERROR 
 [org.apache.jackrabbit.core.util.db.ConnectionHelper:454] Failed to execute 
 SQL (stacktrace on DEBUG log level)
 java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in 
 tablespace TEMP
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
 …
 at 
 oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1374)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecute(WSJdbcPreparedStatement.java:928)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.execute(WSJdbcPreparedStatement.java:614)
 …
 at 
 org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:328)
 at 
 org.apache.jackrabbit.core.fs.db.DatabaseFileSystem.getInputStream(DatabaseFileSystem.java:663)
 at 
 org.apache.jackrabbit.core.fs.BasedFileSystem.getInputStream(BasedFileSystem.java:121)
 at 
 org.apache.jackrabbit.core.fs.FileSystemResource.getInputStream(FileSystemResource.java:149)
 at 
 org.apache.jackrabbit.core.RepositoryImpl.loadRootNodeId(RepositoryImpl.java:556)
 at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:325)
 at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:673)
 at 
 org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:231)
 at 
 org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:279)
 at 
 org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:375)
 at 
 com.liferay.portal.jcr.jackrabbit.JCRFactoryImpl.createSession(JCRFactoryImpl.java:67)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:43)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:47)
 at com.liferay.documentlibrary.util.JCRHook.getFileAsStream(JCRHook.java:472)
 at 
 com.liferay.documentlibrary.util.HookProxyImpl.getFileAsStream(HookProxyImpl.java:149)
 at 
 com.liferay.documentlibrary.util.SafeFileNameHookWrapper.getFileAsStream(SafeFileNameHookWrapper.java:236)
 at 
 com.liferay.documentlibrary.service.impl.DLLocalServiceImpl.getFileAsStream(DLLocalServiceImpl.java:192)
 The original size of tablespace TEMP used to be 8Gb when the error has 
 occurred for the first time. Later on it was extended by as much as 
 additional 7Gb to 15Gb, yet the available space was still not sufficient to 
 fulfill subsequent requests and ORA-01652 emerged again.
 *** Reproduction steps ***
 1) Create a dummy 10MB file
 $ dd if=/dev/urandom of=/path/to/dummy_blob bs=8192 count=1280
 1280+0 records in
 1280+0 records out
 10485760 bytes (10 MB) copied, 0.722818 s, 14.5 MB/s
 2) Create a temp tablespace
 The tablespace is created with 5Mb and automatic expansion is intentionally 
 disabled.
 SQL> CREATE TEMPORARY TABLESPACE jcr_temp
   TEMPFILE '/path/to/jcr_temp_01.dbf'
   SIZE 5M AUTOEXTEND OFF;
 Tablespace created.
 SQL> ALTER USER jcr TEMPORARY TABLESPACE jcr_temp;
 User altered.
 3) Prepare the test case
 For the sake of simplicity a dummy table is created (similar to Jackrabbit's 
 FSENTRY).
 SQL> create table FSENTRY(data blob);
 Table created.
 SQL>
 CREATE OR REPLACE PROCEDURE load_blob
 AS
 dest_loc  BLOB;
 src_loc   BFILE := BFILENAME('DATA_PUMP_DIR', 'dummy_blob');
 BEGIN
 INSERT INTO FSENTRY (data)
 VALUES (empty_blob

[jira] [Commented] (JCR-3453) Jackrabbit might deplate the temporary tablespace on Oracle

2012-10-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13486993#comment-13486993
 ] 

Stefan Guggisberg commented on JCR-3453:


bq. Actually I'm ready to contribute this enhancement to Jackrabbit.

excellent!

bq. If you could modify my attached repository.xml file so that it use 
Oracle9FileSystem & Oracle9PersistenceManager and certify that that 
configuration is going to work on Oracle 11gR2, I'd like to change this ticket 
to improvement.

sorry, i have neither the time nor an oracle install at hand.

 Jackrabbit might deplate the temporary tablespace on Oracle
 ---

 Key: JCR-3453
 URL: https://issues.apache.org/jira/browse/JCR-3453
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.1.2, 2.5.2
 Environment: Operating system: Linux
 Application server: Websphere v7
 RDBMS: Oracle 11g
 Jackrabbit: V2.1.2 (built into Liferay 6.0 EE SP2)
Reporter: Laszlo Csontos
 Attachments: repository.xml


 *** Experienced phenomenon ***
 Our customer reported an issue regarding Liferay’s document library: while 
 documents are being retrieved, the following exception occurs accompanied by 
 temporary tablespace shortage.
 [9/24/12 8:00:55:973 CEST] 0023 SystemErr R ERROR 
 [org.apache.jackrabbit.core.util.db.ConnectionHelper:454] Failed to execute 
 SQL (stacktrace on DEBUG log level)
 java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in 
 tablespace TEMP
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
 …
 at 
 oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1374)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecute(WSJdbcPreparedStatement.java:928)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.execute(WSJdbcPreparedStatement.java:614)
 …
 at 
 org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:328)
 at 
 org.apache.jackrabbit.core.fs.db.DatabaseFileSystem.getInputStream(DatabaseFileSystem.java:663)
 at 
 org.apache.jackrabbit.core.fs.BasedFileSystem.getInputStream(BasedFileSystem.java:121)
 at 
 org.apache.jackrabbit.core.fs.FileSystemResource.getInputStream(FileSystemResource.java:149)
 at 
 org.apache.jackrabbit.core.RepositoryImpl.loadRootNodeId(RepositoryImpl.java:556)
 at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:325)
 at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:673)
 at 
 org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:231)
 at 
 org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:279)
 at 
 org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:375)
 at 
 com.liferay.portal.jcr.jackrabbit.JCRFactoryImpl.createSession(JCRFactoryImpl.java:67)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:43)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:47)
 at com.liferay.documentlibrary.util.JCRHook.getFileAsStream(JCRHook.java:472)
 at 
 com.liferay.documentlibrary.util.HookProxyImpl.getFileAsStream(HookProxyImpl.java:149)
 at 
 com.liferay.documentlibrary.util.SafeFileNameHookWrapper.getFileAsStream(SafeFileNameHookWrapper.java:236)
 at 
 com.liferay.documentlibrary.service.impl.DLLocalServiceImpl.getFileAsStream(DLLocalServiceImpl.java:192)
 The original size of tablespace TEMP used to be 8Gb when the error has 
 occurred for the first time. Later on it was extended by as much as 
 additional 7Gb to 15Gb, yet the available space was still not sufficient to 
 fulfill subsequent requests and ORA-01652 emerged again.
 *** Reproduction steps ***
 1) Create a dummy 10MB file
 $ dd if=/dev/urandom of=/path/to/dummy_blob bs=8192 count=1280
 1280+0 records in
 1280+0 records out
 10485760 bytes (10 MB) copied, 0.722818 s, 14.5 MB/s
 2) Create a temp tablespace
 The tablespace is created with 5Mb and automatic expansion is intentionally 
 disabled.
 SQL> CREATE TEMPORARY TABLESPACE jcr_temp
   TEMPFILE '/path/to/jcr_temp_01.dbf'
   SIZE 5M AUTOEXTEND OFF;
 Tablespace created.
 SQL> ALTER USER jcr TEMPORARY TABLESPACE jcr_temp;
 User altered.
 3) Prepare the test case
 For the sake of simplicity a dummy table is created (similar to Jackrabbit's 
 FSENTRY).
 SQL> create table FSENTRY(data blob);
 Table created.
 SQL>
 CREATE OR REPLACE PROCEDURE load_blob
 AS
 dest_loc  BLOB;
 src_loc   BFILE := BFILENAME('DATA_PUMP_DIR', 'dummy_blob');
 BEGIN
 INSERT INTO FSENTRY (data)
 VALUES (empty_blob())
 RETURNING data INTO dest_loc;
 DBMS_LOB.OPEN(src_loc

Re: branch/merge bug in MicroKernelImpl?

2012-10-26 Thread Stefan Guggisberg
On Fri, Oct 26, 2012 at 10:54 AM, Mete Atamel mata...@adobe.com wrote:
 Hi Stefan, I have a bunch of branch/merge tests in my fork [0] that you
 might want to run through with MicroKernelImpl. A few test cases fail with
 MicroKernelImpl. They might be the same issue I mentioned yesterday or
 separate issues, not sure.

excellent, i'll check.

thanks
stefan


 -Mete

 [0]
 https://github.com/meteatamel/jackrabbit-oak/blob/6b4635edc5908f346f5bc0e35
 4cf2563d6aa6da7/oak-mongomk/src/test/java/org/apache/jackrabbit/mongomk/imp
 l/MongoMKBranchMergeTest.java


 On 10/25/12 3:55 PM, Stefan Guggisberg stefan.guggisb...@gmail.com
 wrote:

On Thu, Oct 25, 2012 at 3:38 PM, Mete Atamel mata...@adobe.com wrote:
 Hi,

 I think I found a bug with branch/merge in MicroKernelImpl but wanted to
 make sure. The last assert in the test fails. Could someone verify that
 this is indeed a bug?

thanks, i'll have a look.

cheers
stefan


 @Test
 public void test() {
 mk.commit("", "+\"/trunk\":{}", null, "");
 mk.commit("", "+\"/trunk/child1\":{}", null, "");

 String branchRev = mk.branch(null);
 branchRev = mk.commit("", "+\"/trunk/child1/child2\":{}",
 branchRev, "");

 mk.commit("", "+\"/trunk/child3\":{}", null, "");

 mk.merge(branchRev, "");

 assertTrue(mk.nodeExists("/trunk", null));
 assertTrue(mk.nodeExists("/trunk/child1", null));
 assertTrue(mk.nodeExists("/trunk/child1/child2", null));
 assertTrue(mk.nodeExists("/trunk/child3", null));
 }


 -Mete




Re: branch/merge bug in MicroKernelImpl?

2012-10-26 Thread Stefan Guggisberg
hi mete

On Fri, Oct 26, 2012 at 10:54 AM, Mete Atamel mata...@adobe.com wrote:
 Hi Stefan, I have a bunch of branch/merge tests in my fork [0] that you
 might want to run through with MicroKernelImpl. A few test cases fail with
 MicroKernelImpl. They might be the same issue I mentioned yesterday or
 separate issues, not sure.

i found and fixed the problem which caused the issue that you initially reported
(svn rev 1402529).

some of your tests in your fork still failed. however, this time legitimately
due to a bug in your tests ;)

with the following fix the tests run fine against MicroKernelImpl:

private String addNodes(String rev, String...nodes) {
String newRev = rev;
for (String node : nodes) {
-newRev = mk.commit("", "+\"" + node + "\":{}", rev, "");
+newRev = mk.commit("", "+\"" + node + "\":{}", newRev, "");
}
return newRev;
}

private String removeNodes(String rev, String...nodes) {
String newRev = rev;
for (String node : nodes) {
-newRev = mk.commit("", "-\"" + node + "\"", rev, "");
+newRev = mk.commit("", "-\"" + node + "\"", newRev, "");
}
return newRev;
}
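Why the stale base revision matters can be shown with a small sketch (a hypothetical map-backed revision store, not the real MicroKernel): committing every change against the original base produces sibling revisions instead of a chain, so the last returned head misses the earlier changes.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: each commit creates a new revision from the given base. Passing
// a stale base makes commits siblings rather than a chain, losing changes.
public class RevChainSketch {
    private final Map<String, Map<String, String>> revs = new HashMap<>();
    private int counter = 0;

    RevChainSketch() {
        revs.put("r0", new HashMap<String, String>());  // empty root revision
    }

    // New revision = base revision's nodes plus the added node.
    String commit(String node, String baseRev) {
        Map<String, String> content = new HashMap<>(revs.get(baseRev));
        content.put(node, "{}");
        String newRev = "r" + (++counter);
        revs.put(newRev, content);
        return newRev;
    }

    boolean nodeExists(String node, String rev) {
        return revs.get(rev).containsKey(node);
    }

    public static void main(String[] args) {
        RevChainSketch mk = new RevChainSketch();

        // Buggy loop: always commits against the stale base, losing "/a".
        String last = "r0";
        for (String n : new String[] {"/a", "/b"}) {
            last = mk.commit(n, "r0");
        }
        System.out.println(mk.nodeExists("/a", last));  // false

        // Fixed loop: threads the returned revision through each commit.
        String rev = "r0";
        for (String n : new String[] {"/a", "/b"}) {
            rev = mk.commit(n, rev);
        }
        System.out.println(mk.nodeExists("/a", rev));   // true
    }
}
```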

i added your tests to the MicroKernel integration tests.

cheers
stefan


 -Mete

 [0]
 https://github.com/meteatamel/jackrabbit-oak/blob/6b4635edc5908f346f5bc0e35
 4cf2563d6aa6da7/oak-mongomk/src/test/java/org/apache/jackrabbit/mongomk/imp
 l/MongoMKBranchMergeTest.java


 On 10/25/12 3:55 PM, Stefan Guggisberg stefan.guggisb...@gmail.com
 wrote:

On Thu, Oct 25, 2012 at 3:38 PM, Mete Atamel mata...@adobe.com wrote:
 Hi,

 I think I found a bug with branch/merge in MicroKernelImpl but wanted to
 make sure. The last assert in the test fails. Could someone verify that
 this is indeed a bug?

thanks, i'll have a look.

cheers
stefan


 @Test
 public void test() {
 mk.commit("", "+\"/trunk\":{}", null, "");
 mk.commit("", "+\"/trunk/child1\":{}", null, "");

 String branchRev = mk.branch(null);
 branchRev = mk.commit("", "+\"/trunk/child1/child2\":{}",
 branchRev, "");

 mk.commit("", "+\"/trunk/child3\":{}", null, "");

 mk.merge(branchRev, "");

 assertTrue(mk.nodeExists("/trunk", null));
 assertTrue(mk.nodeExists("/trunk/child1", null));
 assertTrue(mk.nodeExists("/trunk/child1/child2", null));
 assertTrue(mk.nodeExists("/trunk/child3", null));
 }


 -Mete




Re: branch/merge bug in MicroKernelImpl?

2012-10-25 Thread Stefan Guggisberg
On Thu, Oct 25, 2012 at 3:38 PM, Mete Atamel mata...@adobe.com wrote:
 Hi,

 I think I found a bug with branch/merge in MicroKernelImpl but wanted to
 make sure. The last assert in the test fails. Could someone verify that
 this is indeed a bug?

thanks, i'll have a look.

cheers
stefan


 @Test
 public void test() {
 mk.commit("", "+\"/trunk\":{}", null, "");
 mk.commit("", "+\"/trunk/child1\":{}", null, "");

 String branchRev = mk.branch(null);
 branchRev = mk.commit("", "+\"/trunk/child1/child2\":{}",
 branchRev, "");

 mk.commit("", "+\"/trunk/child3\":{}", null, "");

 mk.merge(branchRev, "");

 assertTrue(mk.nodeExists("/trunk", null));
 assertTrue(mk.nodeExists("/trunk/child1", null));
 assertTrue(mk.nodeExists("/trunk/child1/child2", null));
 assertTrue(mk.nodeExists("/trunk/child3", null));
 }


 -Mete



Re: Repository construction

2012-10-19 Thread Stefan Guggisberg
On Fri, Oct 19, 2012 at 5:25 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 As you may have noticed from my work in OAK-352, we now have a pretty
 clean and simple mechanism for constructing repositories with
 different combinations of pluggable components. Here's a quick guide
 on how to use this mechanism.

 The core class to use is called Oak and can be found in the
 org.apache.jackrabbit.oak package inside oak-core. It takes a
 MicroKernel instance and wraps it into a ContentRepository:

 MicroKernel kernel = ...;
 ContentRepository repository = new Oak(kernel).createContentRepository();

 For test purposes you can use the default constructor that
 automatically instantiates an in-memory MicroKernel for use with the
 repository. And if you're only using the test repository for a single
 ContentSession or just a single Root, then you can shortcut the login
 steps by using either of the last two statements below:

 ContentRepository repository = new Oak().createContentRepository();
 ContentSession session = new Oak().createContentSession();
 Root root = new Oak().createRoot();

 By default no pluggable components are associated with the created
 repository, so all login attempts will work and result in full write
 access. There's also no need to close the sessions or otherwise
 release acquired resources, as normal garbage collection will take
 care of everything.

 To add extra functionality like type validation or indexing support,
 use the with() method. The method takes all kinds of Oak plugins and
 adds them to the repository to be created. The method returns the Oak
 instance being used, so you can chain method calls like this:

 ContentRepository repository = new Oak(kernel)
 .with(new InitialContent())// add initial content
 .with(new DefaultTypeEditor()) // automatically set default types
 .with(new NameValidatorProvider()) // allow only valid JCR names
 .with(new SecurityProviderImpl())  // use the default security
 .with(new PropertyIndexHook()) // simple indexing support
 .with(new PropertyIndexProvider()) // search support for the indexes
 .createContentRepository();

 As you can see, constructing a fully featured JCR repository like this
 will require quite a few plugins. To avoid having to specify them all
 whenever constructing a new repository, we also have a class called
 Jcr in the org.apache.jackrabbit.oak.jcr package in oak-jcr. That
 class works much like the Oak class, but it constructs
 javax.jcr.Repository instances instead of ContentRepositories and
 automatically includes all the plugin components needed for proper JCR
 functionality.

 MicroKernel kernel = ...;
 Repository repository = new Jcr(kernel).createRepository();

 The Jcr class supports all the same with() methods as the Oak class
 does, so you can easily extend the constructed JCR repository with
 custom functionality if you like. For test purposes the Jcr class also
 has an empty default constructor that works like the one in the Oak
 class.

nice!

cheers
stefan


 Note that this mechanism is mostly intended for embedded use.
 Deployments in OSGi and other managed environments should use the
 native construction/configuration mechanism of the environment.

 BR,

 Jukka Zitting


Re: MicroKernel add/set property

2012-10-18 Thread Stefan Guggisberg
On Thu, Oct 18, 2012 at 12:09 PM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
 MicroKernel.getJournal() is currently only used in an old indexing
 implementation in oak-core

 Old and deprecated, see OAK-298.
 AFAIK all the current indexing code (except the 'old' package which is not
 in use) now uses CommitHook(s) to update, so as long as that mechanism
 works properly life is peachy.


thanks, i've created an issue [0].

cheers
stefan

[0] https://issues.apache.org/jira/browse/OAK-384

 alex

 On Wed, Oct 17, 2012 at 5:03 PM, Michael Dürig mdue...@apache.org wrote:



 On 17.10.12 15:49, Stefan Guggisberg wrote:

 i agree that the ambiguity of '^' vs '+' is confusing. personally i'd
 prefer
 to get rid of the '+' syntax for property creation altogether. as a
  consequence we'd lose the ability to generate PROPERTY_ADDED events
 from the mk journal. i don't know whether that's a problem for the
 current oak-core implementation.


 MicroKernel.getJournal() is currently only used in an old indexing
 implementation in oak-core. Tom or Alex should be able to provide more
 information on how crucial it is to be able to differentiate  between ^ and
 + there or whether that implementation will go away completely eventually.
 Otherwise getJournal() is not used at all. So from my POV it makes sense to
 remove the '+' syntax since it tends to confuse people.

 From another angle, since the ^ and + were introduced to be able to
 differentiate between a setProperty and an addProperty event for JCR
 observation, couldn't we make the same information also available by
 providing additional context in the journal (i.e. the previous value of the
 property or null if none)?

 Michael



Re: [MongoMK] Reading blobs incrementally

2012-10-17 Thread Stefan Guggisberg
On Wed, Oct 17, 2012 at 10:42 AM, Michael Dürig mdue...@apache.org wrote:

 I wonder why the Microkernel API has an asymmetry here: for writing a binary
 you can pass a stream where as for reading you need to pass a byte array.

the write method implies a content-addressable storage for blobs,
i.e. identical binary content is identified by identical identifiers.
the identifier
needs to be computed from the entire blob content. that's why the
signature takes
a stream rather than supporting chunked writes.

cheers
stefan
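The point about content-addressable identifiers can be sketched as follows (illustrative only; the hash algorithm and helper name are assumptions, not the MicroKernel implementation): because the id is derived from the entire content, the full stream must be consumed before the id exists, which is why the write signature takes a stream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: a content-addressable blob id — identical content always yields
// the identical identifier, computed over the whole stream.
public class BlobIdSketch {

    static String blobId(InputStream in) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);  // the whole stream feeds the digest
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (IOException | NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = "hello".getBytes();
        String id1 = blobId(new ByteArrayInputStream(data));
        String id2 = blobId(new ByteArrayInputStream(data));
        System.out.println(id1.equals(id2));  // true: same content, same id
    }
}
```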


 Michael


 On 26.9.12 8:38, Mete Atamel wrote:

 Hi,

 I realized that MicroKernelIT#testBlobs takes a while to complete on
 MongoMK. This is partly due to how the test was written and partly due to
 how the blob read offset is implemented in MongoMK. I'm looking for
 feedback on where to fix this.

 To give you an idea on testBlobs, it first writes a blob using MK. Then,
 it verifies that the blob bytes were written correctly by reading the blob
 from MK. However, blob read from MK is not done in one shot. Instead, it's
 done via this input stream:

 InputStream in2 = new BufferedInputStream(new MicroKernelInputStream(mk,
 id));


 MicroKernelInputStream reads from the MK and BufferedInputStream buffers
 the reads in 8K chunks. Then, there's a while loop with in2.read() to read
 the blob fully. This makes a call to MicroKernel#read method with the
 right offset for every 8K chunk until the blob bytes are fully read.

 This is not a problem for small blob sizes but for bigger blob sizes,
 reading 8K chunks can be slow because in MongoMK, every read with offset
 triggers the following:
 -Find the blob from GridFS
 -Retrieve its input stream
 -Skip to the right offset
 -Read 8K
 -Close the input stream

 I could fix this by changing the test to read the blob bytes in one shot
 and then do the comparison. However, I was wondering if we should also
 work on an optimization for successive reads from the blob with
 incremental offsets? Maybe we could keep the input stream of recently read
 blobs around for some time before closing them?

 Best,
 Mete
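The 8K-chunk read loop described above can be sketched against an offset-based read() similar in shape to the MicroKernel's (the byte-array store and names here are illustrative stand-ins; each loop iteration is a separate call into the store, which is what makes per-chunk seeking expensive):

```java
import java.io.ByteArrayOutputStream;

// Sketch: reading a blob fully through an offset-based read() in 8K chunks.
public class ChunkedReadSketch {
    private final byte[] store;

    ChunkedReadSketch(byte[] store) { this.store = store; }

    // Simplified read(pos, buff, off, length): copies up to length bytes
    // starting at pos, returns the count read, or -1 at end of blob.
    int read(long pos, byte[] buff, int off, int length) {
        if (pos >= store.length) return -1;
        int n = Math.min(length, store.length - (int) pos);
        System.arraycopy(store, (int) pos, buff, off, n);
        return n;
    }

    public static void main(String[] args) {
        byte[] blob = new byte[20000];
        for (int i = 0; i < blob.length; i++) blob[i] = (byte) i;
        ChunkedReadSketch mk = new ChunkedReadSketch(blob);

        // Read the blob fully, advancing the offset by each chunk read.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        long pos = 0;
        int n;
        while ((n = mk.read(pos, buf, 0, buf.length)) != -1) {
            out.write(buf, 0, n);
            pos += n;
        }
        System.out.println(out.size());  // 20000
    }
}
```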





Re: Staged vs. Stored node

2012-10-17 Thread Stefan Guggisberg
hi mete

i'm copying the oak-dev list since this might be of interest to others.
comments follow inline...

On Tue, Oct 16, 2012 at 8:30 PM, Mete Atamel mata...@adobe.com wrote:
 Hi Stefan,

 One thing that I've been wanting to ask you is the distinction between
 StagedNode vs. StoredNode. I see in StagedNodeTree, you use both but I
 don't understand when one is used vs. other.

the naming (staged nodes) is git-inspired. in git changes need to be 'staged'
('git add') before they can be committed.

StagedNode represents a mutable node instance whereas a StoredNode is
immutable. StagedNodeTree is used to build and collect changes to the
node tree.

e.g.

+"/foo" : { "baz" : "blah" }

results in 2 StagedNode instances:

1. new root node / with a new child node entry foo pointing to 2.
2. new child node /foo

those changes can then be committed by persisting all StagedNode
instances recursively (bottom up), resulting in a new revision pointing
to the new root node.

please note that is specific to the default mk's versioning model
(git-like contenthash-based identifiers).

cheers
stefan

 In MongoMK, we have one
 notion of Node and that seems to work fine but I'm curious why you made
 staged vs. stored node distinction in your implementation and see if it's
 something we need to think about as well.

 Thanks!
 Mete


Re: MicroKernel add/set property

2012-10-17 Thread Stefan Guggisberg
On Tue, Oct 16, 2012 at 4:39 PM, Mete Atamel mata...@adobe.com wrote:
 Hi,

 I have 2 questions on MicroKernel add/set property that MicroKernel
 JavaDoc does not seem to answer.

 1- What's the difference between adding a property vs. setting a property?
 Are the two following commits basically the same?

 mk.commit("/", "+\"a/key1\" : \"value1\"", null, null);
 mk.commit("/", "^\"a/key1\" : \"value1\"", null, null);

initially we used to only support the 'set property' (create-or-modify) syntax,
i.e. ^"/some/property" : "some value"

however, in JCR there are 2 distinct event types:
PROPERTY_ADDED and PROPERTY_CHANGED

in the mk journal/diff we therefore need to make the same distinction,
i.e. added properties are reported with the '+' syntax, whereas the '^'
syntax is used to represent property modifications.



 Or are there scenarios where adding a property acts differently than
 setting a property?

 2- Is adding a property twice supposed to work or is it supposed to throw
 a MicroKernelException? For example, this seems to work with
 MicroKernelImpl but is it supposed to?

 mk.commit("/", "+\"a/key1\" : \"value1\"", null, null);
 mk.commit("/", "+\"a/key1\" : \"value1\"", null, null);

you're right, that seems to be a bug. could you please file a jira issue?


 What about setting a property twice?

that shouldn't be an issue. you can set a property more than once in a
single commit. only the last modification should be persisted.

cheers
stefan
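The semantics discussed here can be sketched with a small map-backed store (illustrative only, not the MicroKernel API): '+' is create-only and should fail for an existing property, '^' is create-or-modify, and the operation determines which JCR event type the journal would report.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: '+' (add, create-only) vs. '^' (set, create-or-modify) semantics.
public class PropertyOpSketch {
    private final Map<String, String> props = new HashMap<>();

    // '+' semantics: create-only; adding an existing property is an error.
    String addProperty(String name, String value) {
        if (props.containsKey(name)) {
            throw new IllegalStateException("property already exists: " + name);
        }
        props.put(name, value);
        return "PROPERTY_ADDED";
    }

    // '^' semantics: create-or-modify; reports the matching event type.
    String setProperty(String name, String value) {
        boolean existed = props.containsKey(name);
        props.put(name, value);
        return existed ? "PROPERTY_CHANGED" : "PROPERTY_ADDED";
    }

    public static void main(String[] args) {
        PropertyOpSketch store = new PropertyOpSketch();
        System.out.println(store.setProperty("a/key1", "value1"));  // PROPERTY_ADDED
        System.out.println(store.setProperty("a/key1", "value2"));  // PROPERTY_CHANGED
    }
}
```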



 Thanks,
 Mete




Re: MicroKernel add/set property

2012-10-17 Thread Stefan Guggisberg
On Wed, Oct 17, 2012 at 3:30 PM, Mete Atamel mata...@adobe.com wrote:
 Ok, I opened OAK-381 for adding property twice. However, one thing is
 still not quite clear to me. It looks like set-property is really
 create-or-modify-property whereas add-property is only create-property,

correct

 right? In that case, how do we really generate appropriate PROPERTY_ADDED
 and PROPERTY_CHANGED events? What happens when set-property is being used
 as create-property? For clear separation between PROPERTY_ADDED and
 PROPERTY_CHANGED, I feel like add-property should be prerequisite to
 set-property and set-property should really be modify-property only (i.e.
 If a property does not exist, set-property should throw an exception). If
 you think this is not necessary because appropriate PROPERTY_ADDED and
 PROPERTY_CHANGED events can be derived from set-property as it is today,
 then I'm not sure why add-property is necessary.

the '+' syntax was introduced for representing added properties in the
diff/journal.

without the '+' syntax there would be no way to reflect property creation
(required to generate PROPERTY_ADDED events from the journal).

OTOH restricting '^' to set-property semantics (rather than create-or-set)
would IMO seriously impact the usability. mk clients would first need to check
for the existence of a property, while they usually don't care and just want
to set it to a specific value.

i agree that the ambiguity of '^' vs '+' is confusing. personally i'd prefer
to get rid of the '+' syntax for property creation altogether. as a
consequence we'd lose the ability to generate PROPERTY_ADDED events
from the mk journal. i don't know whether that's a problem for the
current oak-core implementation.

opinions?

cheers
stefan


 -Mete


 On 10/17/12 2:58 PM, Stefan Guggisberg stefan.guggisb...@gmail.com
 wrote:

On Tue, Oct 16, 2012 at 4:39 PM, Mete Atamel mata...@adobe.com wrote:
 Hi,

 I have 2 questions on MicroKernel add/set property that MicroKernel
 JavaDoc does not seem to answer.

 1- What's the difference between adding a property vs. setting a
property?
 Are the two following commits basically the same?

 mk.commit("/", "+\"a/key1\" : \"value1\"", null, null);
 mk.commit("/", "^\"a/key1\" : \"value1\"", null, null);

initially we used to only support the 'set property' (create-or-modify)
syntax,
i.e. ^"/some/property" : "some value"

however, in JCR there are 2 distinct event types:
PROPERTY_ADDED and PROPERTY_CHANGED

in the mk journal/diff we therefore need to make the same distinction,
i.e. added properties are reported with the '+' syntax, whereas the '^'
syntax is used to represent property modifications.



 Or are there scenarios where adding a property acts differently than
 setting a property?

 2- Is adding a property twice supposed to work or is it supposed to
throw
 a MicroKernelException? For example, this seems to work with
 MicroKernelImpl but is it supposed to?

 mk.commit("/", "+\"a/key1\" : \"value1\"", null, null);
 mk.commit("/", "+\"a/key1\" : \"value1\"", null, null);

you're right, that seems to be a bug. could you please file a jira issue?


 What about setting a property twice?

that shouldn't be an issue. you can set a property more than once in a
single commit. only the last modification should be persisted.

cheers
stefan



 Thanks,
 Mete





Re: MicroKernel add/set property

2012-10-17 Thread Stefan Guggisberg
On Wed, Oct 17, 2012 at 5:03 PM, Michael Dürig mdue...@apache.org wrote:


 On 17.10.12 15:49, Stefan Guggisberg wrote:

 i agree that the ambiguity of '^' vs '+' is confusing. personally i'd
 prefer
 to get rid of the '+' syntax for property creation altogether. as a
  consequence we'd lose the ability to generate PROPERTY_ADDED events
 from the mk journal. i don't know whether that's a problem for the
 current oak-core implementation.


 MicroKernel.getJournal() is currently only used in an old indexing
 implementation in oak-core. Tom or Alex should be able to provide more
 information on how crucial it is to be able to differentiate  between ^ and
 + there or whether that implementation will go away completely eventually.
 Otherwise getJournal() is not used at all. So from my POV it makes sense to
 remove the '+' syntax since it tends to confuse people.

excellent, thanks.


 From another angle, since the ^ and + were introduced to be able to
 differentiate between a setProperty and an addProperty event for JCR
 observation, couldn't we make the same information also available by
 providing additional context in the journal (i.e. the previous value of the
 property or null if none)?

IIUC that would require an alternative json diff syntax which can't be used
for the commit method. i don't think that's worth it.

cheers
stefan


 Michael


Re: The destiny of Oak (Was: [RESULT] [VOTE] Codename for the jr3 implementation effort)

2012-10-05 Thread Stefan Guggisberg
+1 to Bertrand's suggestion: same project name, different software name

cheers
stefan


Re: JSOP diff question

2012-09-28 Thread Stefan Guggisberg


On 28.09.2012, at 11:05, Mete Atamel mata...@adobe.com wrote:

 Hi, 
 
 I have a question with JSOP diff syntax. Say I want to add a node a with
 a property and value. This is the diff I use in the commit:
 
 diff.append("+\"a\" : {\"prop\" : \"value\"}");
 
 Is it legal to add a and then add its properties in 2 separate add
 operations but in the same commit like this?
 
 diff.append("+\"a\" : {}");
diff.append("+\"a\" : {\"prop\" : \"value\"}");
 
 Maybe it's not legal because it would try to add node a twice?
 

correct, it should fail. 

 Also, is it legal to do the previous operation but with property in the
 path like this?
 
 
diff.append("+\"a\" : {}");
diff.append("+\"a/prop\" : \"value\"");
 

this should work.
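i.e. as a single JSOP diff (sketch, using the hypothetical names from above):

```text
+"a" : {}
+"a/prop" : "value"   // adding the property via its path within the same commit
```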

cheers
stefan

 Thanks,
 Mete
 


[jira] [Commented] (JCR-3424) hit ORA-12899 when add/save node

2012-09-11 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452802#comment-13452802
 ] 

Stefan Guggisberg commented on JCR-3424:


FWIW, here are the relevant log entries:

org.apache.jackrabbit.core.state.ItemStateException: failed to write bundle: 
21a24c48-670f-4676-8e06-94b628f833b4
[...]
Caused by: java.sql.SQLException: ORA-12899: value too large for column 
LYIN1.VTJP_DEF_BUNDLE.NODE_ID (actual: 81, maximum: 16) 

just wild guesses: an arithmetic overflow perhaps? a sql driver bug?



 hit ORA-12899 when add/save node
 

 Key: JCR-3424
 URL: https://issues.apache.org/jira/browse/JCR-3424
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2
 Environment: JBoss4.2.3 + Oracle 10.2.0.4 + Winxp
Reporter: licheng

 When running a longevity test, we hit ORA-12899 once while saving a node. 
 It's hard to reproduce in our tests, but someone else has also hit this issue 
 before.
 2012-09-06 13:11:30,485 ERROR 
 [org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager] 
 Failed to persist ChangeLog (stacktrace on DEBUG log level), 
 blockOnConnectionLoss = false
 org.apache.jackrabbit.core.state.ItemStateException: failed to write bundle: 
 21a24c48-670f-4676-8e06-94b628f833b4
   at 
 org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.storeBundle(BundleDbPersistenceManager.java:1086)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.putBundle(AbstractBundlePersistenceManager.java:684)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.storeInternal(AbstractBundlePersistenceManager.java:626)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.store(AbstractBundlePersistenceManager.java:503)
   at 
 org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.store(BundleDbPersistenceManager.java:479)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:757)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1487)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
   at 
 org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:326)
   at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:289)
   at 
 org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:258)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
   at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.access.JcrAccessUtil.createRepositoryNodeWithJcrName(JcrAccessUtil.java:125)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.access.JcrAccessUtil.createRepositoryNode(JcrAccessUtil.java:85)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.JcrPersistentObjectFactory.createChildNonVersionableLeaveNode(JcrPersistentObjectFactory.java:169)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.JcrPersistentObjectFactory.createChildLeaveNode(JcrPersistentObjectFactory.java:65)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.JcrInternalNodeImpl.createLeaveChild(JcrInternalNodeImpl.java:66)
   at 
 com.vitria.modeling.repository.sapi.service.core.CoreModelContainer.createModel(CoreModelContainer.java:80)
   at 
 com.vitria.modeling.repository.sapi.service.proxy.local.LocalModelContainer.createModel(LocalModelContainer.java:167)
 .
 Caused by: java.sql.SQLException: ORA-12899: value too large for column 
 LYIN1.VTJP_DEF_BUNDLE.NODE_ID (actual: 81, maximum: 16)
   at 
 oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
   at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
   at 
 oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:219)
   at 
 oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:970)
   at 
 oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1190)
   at 
 oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370

Re: process boundaries

2012-09-11 Thread Stefan Guggisberg
hi marcel

On Mon, Sep 10, 2012 at 12:33 PM, Marcel Reutegger mreut...@adobe.com wrote:
 hi all,

 while looking through the oak code and also in light of the
 recent MongoDB based MicroKernel, I was wondering where
 the process boundaries are. right now with the maven build
 everything runs in the same process, which is one possible
 deployment option. I guess as soon as we have some
 alternative MicroKernel implementation that scales well, we
 will probably also think about scaling the application on top
 of oak. the most obvious solution is to just run the application
 multiple times in separate processes and they all connect
 to the oak repository.

 there's existing information on that topic, like the stack
 with the different layers [0], however it doesn't talk about
 where the process boundaries are and where we'd need
 a protocol (except for the top level protocols to access
 the content through JCR or the Oak API directly).

 so, the question I have is basically about oak-core. is it
 intended and designed to run in multiple processes and
 access a single MicroKernel instance? that way an application
 running in multiple processes would embed an oak-core,
 which talks to the MicroKernel. Or is it rather intended
 that oak-core runs in a single process and all clients
 connect to that single instance?

IIRC the latter should be the standard client/server deployment.

applications use the JCR API to access the repository.

the JCR API is exposed by oak-jcr. oak-jcr implements
the JCR transient space and talks to oak-core for read/write
and workspace operations (ns reg, node type reg, access control,
versioning, query etc).

the obvious process boundaries are

1) between oak-jcr and oak-core (remoting of the oak-core API)
2) between oak-core and oak-mk (remoting of the MicroKernel API)

there's a related jira issue, see [1].

cheers
stefan

[1] https://issues.apache.org/jira/browse/OAK-162


 regards
  marcel

 [0] http://wiki.apache.org/jackrabbit/OakComponentStructure


[jira] [Commented] (OAK-267) Repository fails to start with - cannot branch off a private branch

2012-08-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440422#comment-13440422
 ] 

Stefan Guggisberg commented on OAK-267:
---

while i still can't explain how a private branch commit could possibly become a 
HEAD 
i've added code in svn r1376578 that throws an exception should the same error 
occur again.
the callstack of that exception should hopefully help investigating the root 
cause.


 Repository fails to start with - cannot branch off a private branch
 -

 Key: OAK-267
 URL: https://issues.apache.org/jira/browse/OAK-267
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.5
Reporter: Chetan Mehrotra
Priority: Minor
 Attachments: stacktrace.txt


 On starting a Sling instance I am get following exception (complete 
 stacktrace would be attached)
 {noformat}
 org.apache.jackrabbit.mk.api.MicroKernelException: java.lang.Exception: 
 cannot branch off a private branch
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.branch(MicroKernelImpl.java:508)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.init(KernelNodeStoreBranch.java:56)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.branch(KernelNodeStore.java:101)
   at org.apache.jackrabbit.oak.core.RootImpl.refresh(RootImpl.java:160)
   at org.apache.jackrabbit.oak.core.RootImpl.init(RootImpl.java:111)
   at 
 org.apache.jackrabbit.oak.core.ContentSessionImpl.getCurrentRoot(ContentSessionImpl.java:78)
   at 
 org.apache.jackrabbit.oak.jcr.SessionDelegate.init(SessionDelegate.java:94)
   at 
 org.apache.jackrabbit.oak.jcr.RepositoryImpl.login(RepositoryImpl.java:137)
 {noformat}
 This error does not go away after restart either. On debugging, 
 StoredCommit#branchRootId was not null.
 I think the last shutdown was clean as per the logs. I was debugging some 
 query issue which resulted in an exception, so that might have something to do 
 with this. Logging a bug for now to keep track. If the complete repo (~70 MB 
 .mk folder) is required then let me know and I would share it somewhere 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-272) every session login causes a mk.branch operation

2012-08-22 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-272:
-

 Summary: every session login causes a mk.branch operation
 Key: OAK-272
 URL: https://issues.apache.org/jira/browse/OAK-272
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Stefan Guggisberg


here's the relevant stack trace (copied from OAK-267):
{code}
at 
org.apache.jackrabbit.mk.core.MicroKernelImpl.branch(MicroKernelImpl.java:508)
at 
org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.init(KernelNodeStoreBranch.java:56)
at 
org.apache.jackrabbit.oak.kernel.KernelNodeStore.branch(KernelNodeStore.java:101)
at org.apache.jackrabbit.oak.core.RootImpl.refresh(RootImpl.java:160)
at org.apache.jackrabbit.oak.core.RootImpl.init(RootImpl.java:111)
at 
org.apache.jackrabbit.oak.core.ContentSessionImpl.getCurrentRoot(ContentSessionImpl.java:78)
at org.apache.jackrabbit.oak.jcr.SessionDelegate.init(SessionDelegate.java:94)
at org.apache.jackrabbit.oak.jcr.RepositoryImpl.login(RepositoryImpl.java:137)
[...]
{code}

while investigating OAK-267 i've noticed 40k empty branch commits all based on 
the same head revision.
those branch commits seem to be unnecessary since they're all empty (i.e. not 
followed by write operations).





[jira] [Commented] (OAK-11) Document and tighten contract of Microkernel API

2012-08-21 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438508#comment-13438508
 ] 

Stefan Guggisberg commented on OAK-11:
--

+1 for julian's proposal

 Document and tighten contract of Microkernel API
 

 Key: OAK-11
 URL: https://issues.apache.org/jira/browse/OAK-11
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: documentation

 We should do a review of the Microkernel API with the goal to clarify, 
 disambiguate and document its contract.





[jira] [Commented] (OAK-11) Document and tighten contract of Microkernel API

2012-08-21 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438550#comment-13438550
 ] 

Stefan Guggisberg commented on OAK-11:
--

bq. getLength method is kind of vague too. It doesn't specify what should 
happen when blob does not exist. Does it return 0, -1, or throw 
MicroKernelException?
 
good point, clarified 'getLength' and 'read' methods in svn r1375435

 Document and tighten contract of Microkernel API
 

 Key: OAK-11
 URL: https://issues.apache.org/jira/browse/OAK-11
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: documentation

 We should do a review of the Microkernel API with the goal to clarify, 
 disambiguate and document its contract.





[jira] [Resolved] (OAK-264) MicroKernel.diff for depth limited, unspecified changes

2012-08-21 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-264.
---

   Resolution: Fixed
Fix Version/s: 0.5

good point! 

fixed in svn r1375476

 MicroKernel.diff for depth limited, unspecified changes
 ---

 Key: OAK-264
 URL: https://issues.apache.org/jira/browse/OAK-264
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Thomas Mueller
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.5


 Currently the MicroKernel API specifies for the method diff, if the depth 
 parameter is used, that unspecified changes below a certain path can be 
 returned as:
   ^ /some/path
 I would prefer the slightly more verbose:
   ^ /some/path: {}
 Reason: It is similar to how getNode() returns node names if the depth 
 limited: some:{path:{}}, and it makes parsing unambiguous: there is 
 always a ':' after the path, whether a property was changed or a node was 
 changed. Without the colon, the parser needs to look ahead to decide whether 
 a node was changed or a property was changed (the token after the path could 
 be the start of the next operation). And we could never ever support ':' as 
 an operation because that would make parsing ambiguous.





[jira] [Assigned] (OAK-265) waitForCommit gets triggered on private branch commits

2012-08-21 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned OAK-265:
-

Assignee: Stefan Guggisberg

 waitForCommit gets triggered on private branch commits
 --

 Key: OAK-265
 URL: https://issues.apache.org/jira/browse/OAK-265
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.5


 waitForCommit should only be triggered on new (public) head revisions 





[jira] [Resolved] (OAK-254) waitForCommit returns null in certain situations

2012-08-17 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-254.
---

Resolution: Fixed

fixed in svn r1374228

 waitForCommit returns null in certain situations
 

 Key: OAK-254
 URL: https://issues.apache.org/jira/browse/OAK-254
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.5


 waitForCommit() returns null if there were no commits since startup.





[jira] [Resolved] (OAK-239) MicroKernel.getRevisionHistory: maxEntries behavior should be documented

2012-08-14 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-239.
---

   Resolution: Fixed
Fix Version/s: 0.5

fixed in svn r1372850.

 MicroKernel.getRevisionHistory: maxEntries behavior should be documented
 

 Key: OAK-239
 URL: https://issues.apache.org/jira/browse/OAK-239
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Thomas Mueller
Priority: Minor
 Fix For: 0.5


 The method MicroKernel.getRevisionHistory uses a parameter maxEntries to 
 limit the number of returned entries. If the implementation has to limit the 
 entries, it is not clear from the documentation which entries to return (the 
 oldest entries, the newest entries, or any x entries).





[jira] [Updated] (OAK-227) MicroKernel API: add depth parameter to diff method

2012-08-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-227:
--

Description: 
a depth parameter allows to specify how deep changes should be included in the 
returned JSON diff. 

an example:

{code}
// initial content (/test/foo)
String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");

// add /test/foo/bar
String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");

// modify property /test/foo/bar/p1
String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");

// diff with depth -1
String diff0 = mk.diff(rev0, rev2, "/", -1);
// returned +"/test/foo/bar":{"p1":456} 

// diff with depth 5
String diff1 = mk.diff(rev0, rev2, "/", 5);
// returned +"/test/foo/bar":{"p1":456} 

// diff with depth 1
String diff2 = mk.diff(rev0, rev2, "/", 1);
// returned ^"/test/foo", indicating that there are changes below /test/foo 

// diff with depth 0
String diff3 = mk.diff(rev0, rev2, "/", 0);
// returned ^"/test", indicating that there are changes below /test 
{code}

  was:
a depth parameter allows to specify how deep changes should be included in the 
returned JSON diff. 

an example:

{code}
// initial content (/test/foo)
String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");

// add /test/foo/bar
String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");

// modify property /test/foo/bar/p1
String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");

// diff with depth 5
String diff1 = mk.diff(rev0, rev2, "/", 5);
// returned +"/test/foo/bar":{"p1":456} 

// diff with depth 1
String diff1 = mk.diff(rev0, rev2, "/", 1);
// returned ^"/test", indicating that there are changes below /test 
{code}





 MicroKernel API: add depth parameter to diff method
 ---

 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 a depth parameter allows to specify how deep changes should be included in 
 the returned JSON diff. 
 an example:
 {code}
 // initial content (/test/foo)
 String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");
 // add /test/foo/bar
 String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");
 // modify property /test/foo/bar/p1
 String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");
 // diff with depth -1
 String diff0 = mk.diff(rev0, rev2, "/", -1);
 // returned +"/test/foo/bar":{"p1":456} 
 // diff with depth 5
 String diff1 = mk.diff(rev0, rev2, "/", 5);
 // returned +"/test/foo/bar":{"p1":456} 
 // diff with depth 1
 String diff2 = mk.diff(rev0, rev2, "/", 1);
 // returned ^"/test/foo", indicating that there are changes below /test/foo 
 // diff with depth 0
 String diff3 = mk.diff(rev0, rev2, "/", 0);
 // returned ^"/test", indicating that there are changes below /test 
 {code}





[jira] [Resolved] (OAK-227) MicroKernel API: add depth parameter to diff method

2012-08-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-227.
---

Resolution: Fixed

fixed in svn r1369019

 MicroKernel API: add depth parameter to diff method
 ---

 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 a depth parameter allows to specify how deep changes should be included in 
 the returned JSON diff. 
 an example:
 {code}
 // initial content (/test/foo)
 String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");
 // add /test/foo/bar
 String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");
 // modify property /test/foo/bar/p1
 String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");
 // diff with depth -1
 String diff0 = mk.diff(rev0, rev2, "/", -1);
 // returned +"/test/foo/bar":{"p1":456} 
 // diff with depth 5
 String diff1 = mk.diff(rev0, rev2, "/", 5);
 // returned +"/test/foo/bar":{"p1":456} 
 // diff with depth 1
 String diff2 = mk.diff(rev0, rev2, "/", 1);
 // returned ^"/test/foo", indicating that there are changes below /test/foo 
 // diff with depth 0
 String diff3 = mk.diff(rev0, rev2, "/", 0);
 // returned ^"/test", indicating that there are changes below /test 
 {code}





Re: Oak website design update

2012-08-03 Thread Stefan Guggisberg
nice!

+1 for #2

cheers
stefan

On Fri, Aug 3, 2012 at 5:42 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 We had a chance to ask a real designer (instead of just me and my two
 left eyes) to take a look at improving the design of the Oak website. He
 came up with screenshots of two alternative design ideas:

 1) http://people.apache.org/~jukka/2012/oak/oak-design-1.png
 2) http://people.apache.org/~jukka/2012/oak/oak-design-2.png

 To move forward, please give feedback on which approach you'd like
 better. Also ideas for improvement are welcome, but to avoid designing
 the site by committee it's probably best to stick only to general
 guidelines and leave the details up to the designer.

 PS. My favorite is design 2.

 BR,

 Jukka Zitting


[jira] [Created] (OAK-227) add depth parameter to MicroKernel#diff method

2012-08-02 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-227:
-

 Summary: add depth parameter to MicroKernel#diff method
 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg


a depth parameter allows to specify how deep changes should be included in the 
returned JSON diff. 

an example:

{code}
// initial content (/test/foo)
String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");

// add /test/foo/bar
String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");

// modify property /test/foo/bar/p1
String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");

// diff with depth 5
String diff1 = mk.diff(rev0, rev2, "/", 5);
// returned +"/test/foo/bar":{"p1":456} 

// diff with depth 1
String diff1 = mk.diff(rev0, rev2, "/", 1);
// returned ^"/test", indicating that there are changes below /test 
{code}








[jira] [Updated] (OAK-227) MicroKernel API: add depth parameter to diff method

2012-08-02 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-227:
--

Summary: MicroKernel API: add depth parameter to diff method  (was: add 
depth parameter to MicroKernel#diff method)

 MicroKernel API: add depth parameter to diff method
 ---

 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 a depth parameter allows to specify how deep changes should be included in 
 the returned JSON diff. 
 an example:
 {code}
 // initial content (/test/foo)
 String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");
 // add /test/foo/bar
 String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");
 // modify property /test/foo/bar/p1
 String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");
 // diff with depth 5
 String diff1 = mk.diff(rev0, rev2, "/", 5);
 // returned +"/test/foo/bar":{"p1":456} 
 // diff with depth 1
 String diff1 = mk.diff(rev0, rev2, "/", 1);
 // returned ^"/test", indicating that there are changes below /test 
 {code}





[jira] [Updated] (OAK-77) Consolidate Utilities

2012-07-27 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-77?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-77:
-

Component/s: (was: mk)

removing mk component, there are IMO no redundant utility classes anymore in mk 
worth refactoring 

 Consolidate Utilities
 -

 Key: OAK-77
 URL: https://issues.apache.org/jira/browse/OAK-77
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core, jcr
Reporter: angela
Priority: Minor

 as discussed on the dev list i would like to consolidate the various
 utilities. getting rid of redundancies etc





[jira] [Created] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-210:
-

 Summary: granularity of persisted data
 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg


the current persistence granularity is _single nodes_ (a node consists of 
properties and child node information). 

instead of storing/retrieving single nodes it would IMO make sense to store 
subtree aggregates of specific nodes. the choice of granularity could be based 
on simple filter criteria (e.g. property value).

dynamic persistence granularity would help reducing the number of records and 
r/w operations on the underlying store, thus improving performance.  





[jira] [Assigned] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned OAK-210:
-

Assignee: Stefan Guggisberg

 granularity of persisted data
 -

 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the current persistence granularity is _single nodes_ (a node consists of 
 properties and child node information). 
 instead of storing/retrieving single nodes it would IMO make sense to store 
 subtree aggregates of specific nodes. the choice of granularity could be 
 based on simple filter criteria (e.g. property value).
 dynamic persistence granularity would help reducing the number of records and 
 r/w operations on the underlying store, thus improving performance.  





[jira] [Commented] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423864#comment-13423864
 ] 

Stefan Guggisberg commented on OAK-210:
---

bq. Do you see this as something to be implemented (or not) by each MK 
independently (i.e. something like an MK implementation detail)?

no, that's an implementation detail that doesn't affect the semantics of the 
MicroKernel API.

the most notable impact will be that the current implementation won't be able 
to provide {{:hash}} values for _every_ node. but that's already explicitly 
allowed for, see {{Microkernel#getNodes}} java doc.
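for illustration, a getNodes response could look roughly like this (a sketch only; property names and the hash value are made up, and the exact set of ":"-prefixed entries depends on the implementation) — the {{:hash}} entry is the part that may be omitted:

```text
{
  ":childNodeCount" : 1,
  ":hash" : "1a2b3c...",   // optional; implementations may omit it for some nodes
  "prop" : "value",
  "child" : {}
}
```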

 granularity of persisted data
 -

 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the current persistence granularity is _single nodes_ (a node consists of 
 properties and child node information). 
 instead of storing/retrieving single nodes it would IMO make sense to store 
 subtree aggregates of specific nodes. the choice of granularity could be 
 based on simple filter criteria (e.g. property value).
 dynamic persistence granularity would help reduce the number of records and 
 r/w operations on the underlying store, thus improving performance.  





[jira] [Updated] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-210:
--

Issue Type: Improvement  (was: Bug)

 granularity of persisted data
 -

 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the current persistence granularity is _single nodes_ (a node consists of 
 properties and child node information). 
 instead of storing/retrieving single nodes it would IMO make sense to store 
 subtree aggregates of specific nodes. the choice of granularity could be 
 based on simple filter criteria (e.g. property value).
 dynamic persistence granularity would help reduce the number of records and 
 r/w operations on the underlying store, thus improving performance.  





[jira] [Reopened] (JCR-3246) RepositoryImpl attempts to close active session twice on shutdown

2012-07-20 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reopened JCR-3246:



reopening based on nick's comment/analysis

(thanks, nick!)

 RepositoryImpl attempts to close active session twice on shutdown
 -

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Critical

 On shutdown sessions are being closed twice, which leads to the exception being 
 logged as shown below. As far as I can tell {{RepositoryImpl}} has system 
 sessions in the list of active sessions which is why it tries to close them 
 twice - first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and second time when disposing workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}
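The duplicate-close symptom in the log above can be avoided with an idempotent close guard. The actual JCR-3246 fix lives elsewhere (see the later UserManagerImpl patch); this is only a sketch of the general pattern, with hypothetical names.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of an idempotent logout guard: only the first
// caller performs the close, later calls become no-ops instead of
// triggering the "Attempt to close session ... already been closed"
// warning shown in the stack traces above.
class GuardedSession {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    void logout() {
        // compareAndSet ensures exactly one caller wins, even if
        // logout() is reached twice during repository shutdown.
        if (!closed.compareAndSet(false, true)) {
            return;
        }
        doClose();
    }

    boolean isClosed() {
        return closed.get();
    }

    private void doClose() {
        // release resources held by the session
    }
}
```

With this guard, both shutdown paths (closing active sessions and disposing the workspace) may call `logout()` safely.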





[jira] [Updated] (JCR-3246) UserManagerImpl attempts to close session twice on shutdown

2012-07-20 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3246:
---

Component/s: (was: jackrabbit-core)
 security
   Priority: Minor  (was: Critical)
Summary: UserManagerImpl attempts to close session twice on shutdown  
(was: RepositoryImpl attempts to close active session twice on shutdown)

 UserManagerImpl attempts to close session twice on shutdown
 ---

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: security
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Minor

 On shutdown sessions are being closed twice, which leads to the exception being 
 logged as shown below. As far as I can tell {{RepositoryImpl}} has system 
 sessions in the list of active sessions which is why it tries to close them 
 twice - first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and second time when disposing workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}





[jira] [Updated] (JCR-3246) UserManagerImpl attempts to close session twice on shutdown

2012-07-20 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3246:
---

Attachment: JCR-3246.patch

proposed patch

 UserManagerImpl attempts to close session twice on shutdown
 ---

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: security
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Minor
 Attachments: JCR-3246.patch


 On shutdown sessions are being closed twice, which leads to the exception being 
 logged as shown below. As far as I can tell {{RepositoryImpl}} has system 
 sessions in the list of active sessions which is why it tries to close them 
 twice - first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and second time when disposing workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}





Re: Internal content in Oak

2012-07-19 Thread Stefan Guggisberg
On Thu, Jul 19, 2012 at 11:09 AM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Thu, Jul 19, 2012 at 10:18 AM, Stefan Guggisberg
 stefan.guggisb...@gmail.com wrote:
 implementing the transient space in Oak-Core is
 legitimate, although premature optimization for
 the specific use case where the entire stack
 (jcr-...-mk) runs in the same vm.

 To clarify, the decision to write changes from oak-jcr directly to
 oak-core was not driven by performance (premature optimization) but
 rather by the need a) to in any case have those changes in oak-core
 for validation, etc. and c) to support large content imports that
 wouldn't necessarily fit into memory. We also wanted to avoid having
 to write essentially the same code twice for oak-core and oak-jcr. The
 performance benefit of reduced amount of internal copying and memory
 overhead is just a nice side-effect of the design.

a) in scenarios where the Oak-API is remoted we'll have to buffer transient
changes on the client and batch-write them to the 'server' (Oak-API impl).
handling the transient space exclusively below the Oak-API is IMO not
an option for this use case.

c) (or rather b)?) large content imports should IMO be done through
javax.jcr.Workspace#import. there's equivalent functionality in
the old SPI and it should IMO be exposed in the Oak-API as well.
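Point a) — buffering transient changes on the client and batch-writing them to the server-side Oak-API implementation — could look roughly like this. The `Change` representation (plain strings here), the `RemoteStore` interface, and the flush threshold are all illustrative assumptions, not Oak APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of client-side buffering for a remoted Oak-API:
// transient changes accumulate locally and are sent to the server in
// batches, instead of one round-trip per change.
final class TransientBuffer {
    interface RemoteStore {
        void write(List<String> batch); // server-side Oak-API impl (assumed)
    }

    private final RemoteStore store;
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();

    TransientBuffer(RemoteStore store, int batchSize) {
        this.store = store;
        this.batchSize = batchSize;
    }

    void record(String change) {
        pending.add(change);
        if (pending.size() >= batchSize) {
            flush(); // batch-write once the buffer fills up
        }
    }

    void flush() {
        if (!pending.isEmpty()) {
            store.write(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```

On save, the client would call `flush()` explicitly so the server sees the remaining buffered changes as one batch.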

cheers
stefan


 See the mentioned list archives of March/April for more background,
 with [1], [2] and [3] being good starting points.

 [1] http://markmail.org/message/uficvjx35cxy5h4i
 [2] http://markmail.org/message/panl3wxfekvmcfyw
 [3] http://markmail.org/message/m7linbldpirjz2bn

 BR,

 Jukka Zitting


[jira] [Commented] (JCR-3368) CachingHierarchyManager: inconsistent state after transient changes on root node

2012-07-18 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13417069#comment-13417069
 ] 

Stefan Guggisberg commented on JCR-3368:


 It seems that the special handling of the root node has nothing to do with 
 this issue. 

i cannot confirm this. AFAIU this issue only occurs when removing a direct 
child of the root node. CachingHierarchyManager (CHM) keeps a reference on 
the root node's item state which seems to become stale under certain 
circumstances. 

i tried your altered test case. it failed on 

    session.getNode("/foo/bar/qux"); 

but that was because your test case doesn't clean up the test data. there were 
/foo.../foo[n] nodes from previous test runs. when cleaning the test data 
before/after each run the test doesn't fail anymore. 

 CachingHierarchyManager: inconsistent state after transient changes on root 
 node 
 -

 Key: JCR-3368
 URL: https://issues.apache.org/jira/browse/JCR-3368
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.2.12, 2.4.2, 2.5
Reporter: Unico Hommes
 Attachments: HasNodeAfterRemoveTest.java


 See attached test case.
 You will see the following exception:
 javax.jcr.RepositoryException: failed to retrieve state of intermediary node
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:156)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolveNodePath(HierarchyManagerImpl.java:372)
 at org.apache.jackrabbit.core.NodeImpl.getNodeId(NodeImpl.java:276)
 at 
 org.apache.jackrabbit.core.NodeImpl.resolveRelativeNodePath(NodeImpl.java:223)
 at org.apache.jackrabbit.core.NodeImpl.hasNode(NodeImpl.java:2250)
 at 
 org.apache.jackrabbit.core.HasNodeAfterRemoveTest.testRemove(HasNodeAfterRemoveTest.java:14)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at 
 org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:456)
 at junit.framework.TestSuite.runTest(TestSuite.java:243)
 at junit.framework.TestSuite.run(TestSuite.java:238)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
 at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
 at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009)
 Caused by: org.apache.jackrabbit.core.state.NoSuchItemStateException: 
 c7ccbcd3-0524-4d4d-a109-eae84627f94e
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getTransientItemState(SessionItemStateManager.java:304)
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:153)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.getItemState(HierarchyManagerImpl.java:152)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolvePath(HierarchyManagerImpl.java:115)
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:152)
 ... 29 more
 I tried several things to fix this but didn't find a better solution than to 
 just wrap the statement
 NodeId id = resolveRelativeNodePath(relPath);
 in a try/catch for RepositoryException and return false when that exception 
 occurs.
 In particular I tried

[jira] [Updated] (JCR-3173) InvalidItemStateException if accessing VersionHistory before checkin()

2012-07-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3173:
---

Component/s: versioning
 transactions

 InvalidItemStateException if accessing VersionHistory before checkin()
 --

 Key: JCR-3173
 URL: https://issues.apache.org/jira/browse/JCR-3173
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, transactions, versioning
Affects Versions: 2.2.10
Reporter: Matthias Reischenbacher
 Attachments: UserTransactionCheckinTest.java


 A checkin operation fails during a transaction if the VersionHistory of a 
 node is accessed previously. See the attached test case for further details.
 ---
 Test set: org.apache.jackrabbit.core.version.UserTransactionCheckinTest
 ---
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.072 sec  
 FAILURE!
 testRestoreWithXA(org.apache.jackrabbit.core.version.UserTransactionCheckinTest)
   Time elapsed: 3.858 sec   ERROR!
 javax.jcr.InvalidItemStateException: Could not find child 
 e77834ee-244c-441f-ab94-19847c769fa4 of node 
 03629609-8049-46ee-9e80-279c70b3a34d
   at 
 org.apache.jackrabbit.core.ItemManager.getDefinition(ItemManager.java:207)
   at org.apache.jackrabbit.core.ItemData.getDefinition(ItemData.java:99)
   at org.apache.jackrabbit.core.ItemManager.canRead(ItemManager.java:421)
   at 
 org.apache.jackrabbit.core.ItemManager.createItemData(ItemManager.java:843)
   at 
 org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:391)
   at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:328)
   at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:622)
   at 
 org.apache.jackrabbit.core.SessionImpl.getNodeById(SessionImpl.java:493)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl$1.perform(VersionManagerImpl.java:123)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl$1.perform(VersionManagerImpl.java:1)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl.perform(VersionManagerImpl.java:96)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl.checkin(VersionManagerImpl.java:115)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl.checkin(VersionManagerImpl.java:101)
   at org.apache.jackrabbit.core.NodeImpl.checkin(NodeImpl.java:2830)
   at 
 org.apache.jackrabbit.core.version.UserTransactionCheckinTest.testRestoreWithXA(UserTransactionCheckinTest.java:35)





[jira] [Updated] (JCR-3379) XA concurrent transactions - NullPointerException

2012-07-10 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3379:
---

Component/s: versioning
 transactions

 XA concurrent transactions - NullPointerException
 -

 Key: JCR-3379
 URL: https://issues.apache.org/jira/browse/JCR-3379
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, transactions, versioning
Affects Versions: 2.4.2, 2.5
 Environment: java version 1.6.0_26
 Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
 Linux dev 2.6.32-5-amd64 #1 SMP Thu Mar 22 17:26:33 UTC 2012 x86_64 GNU/Linux
Reporter: Stanislav Dvorscak
Assignee: Claus Köll
 Attachments: JCR-3379.patch


 If several threads are working with XA transactions, a NullPointerException 
 randomly occurs. After that every other transaction will deadlock on the 
 Jackrabbit side, and a restart of the server is necessary.
 The exception is:
 Exception in thread executor-13 java.lang.NullPointerException
   at 
 org.apache.jackrabbit.core.version.VersioningLock$XidRWLock.isSameGlobalTx(VersioningLock.java:116)
   at 
 org.apache.jackrabbit.core.version.VersioningLock$XidRWLock.allowReader(VersioningLock.java:126)
   at 
 org.apache.jackrabbit.core.version.VersioningLock$XidRWLock.endWrite(VersioningLock.java:161)
   at 
 EDU.oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$WriterLock.release(Unknown
  Source)
   at 
 org.apache.jackrabbit.core.version.VersioningLock$WriteLock.release(VersioningLock.java:76)
   at 
 org.apache.jackrabbit.core.version.InternalXAVersionManager$2.internalReleaseWriteLock(InternalXAVersionManager.java:703)
   at 
 org.apache.jackrabbit.core.version.InternalXAVersionManager$2.commit(InternalXAVersionManager.java:691)
   at 
 org.apache.jackrabbit.core.TransactionContext.commit(TransactionContext.java:195)
   at 
 org.apache.jackrabbit.core.XASessionImpl.commit(XASessionImpl.java:326)
   at 
 org.apache.jackrabbit.rmi.server.ServerXASession.commit(ServerXASession.java:58)
   at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
   at sun.rmi.transport.Transport$1.run(Transport.java:159)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
   at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
   at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
   at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
   at org.apache.jackrabbit.rmi.server.ServerXASession_Stub.commit(Unknown 
 Source)
   at 
 org.apache.jackrabbit.rmi.client.ClientXASession.commit(ClientXASession.java:74)
   at org.objectweb.jotm.SubCoordinator.doCommit(SubCoordinator.java:1123)
   at 
 org.objectweb.jotm.SubCoordinator.commit_one_phase(SubCoordinator.java:483)
   at org.objectweb.jotm.TransactionImpl.commit(TransactionImpl.java:318)
   at org.objectweb.jotm.Current.commit(Current.java:452)
   at 
 org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1010)
   at 
 org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
   at 
 org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
   at 
 org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
   at 
 org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
   at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java

[jira] [Commented] (OAK-169) Support orderable nodes

2012-07-10 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410324#comment-13410324
 ] 

Stefan Guggisberg commented on OAK-169:
---

FWIW: here's the relevant discussion on the oak-dev list: 
http://thread.gmane.org/gmane.comp.apache.jackrabbit.devel/34124/focus=34277

 Support orderable nodes
 ---

 Key: OAK-169
 URL: https://issues.apache.org/jira/browse/OAK-169
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: jcr
Reporter: Jukka Zitting

 There are JCR clients that depend on the ability to explicitly specify the 
 order of child nodes. That functionality is not included in the MicroKernel 
 tree model, so we need to implement it either in oak-core or oak-jcr using 
 something like an extra (hidden) {{oak:childOrder}} property that records the 
 specified ordering of child nodes. A multi-valued string property is probably 
 good enough for this.
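The multi-valued {{oak:childOrder}} property proposed above could be maintained roughly as follows. The property name comes from the issue text; everything else (the class shape, the in-memory list) is an illustrative sketch, not the eventual Oak implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of OAK-169's proposal: record explicit child-node order in a
// hidden multi-valued string property (oak:childOrder). orderBefore
// mirrors JCR's Node.orderBefore(srcName, destName) semantics.
final class ChildOrder {
    private final List<String> order = new ArrayList<>();

    void childAdded(String name) {
        order.add(name); // new children go last by default
    }

    /** Move srcName before destName; destName == null moves it to the end. */
    void orderBefore(String srcName, String destName) {
        order.remove(srcName);
        int idx = destName == null ? order.size() : order.indexOf(destName);
        order.add(idx, srcName);
    }

    /** Values to persist as the hidden multi-valued oak:childOrder property. */
    String[] propertyValues() {
        return order.toArray(new String[0]);
    }
}
```

Keeping the ordering in a hidden property means the MicroKernel tree model itself stays order-agnostic, while oak-core (or oak-jcr) can reorder the child list on read.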





[jira] [Commented] (OAK-169) Support orderable nodes

2012-07-10 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410364#comment-13410364
 ] 

Stefan Guggisberg commented on OAK-169:
---

bq. There is currently no statement about iteration order stability in the 
Microkernel API contract.

good point, fixed in svn rev. 1359679

 Support orderable nodes
 ---

 Key: OAK-169
 URL: https://issues.apache.org/jira/browse/OAK-169
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: jcr
Reporter: Jukka Zitting

 There are JCR clients that depend on the ability to explicitly specify the 
 order of child nodes. That functionality is not included in the MicroKernel 
 tree model, so we need to implement it either in oak-core or oak-jcr using 
 something like an extra (hidden) {{oak:childOrder}} property that records the 
 specified ordering of child nodes. A multi-valued string property is probably 
 good enough for this.




