Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
+1

Chetan Mehrotra

On Mon, Sep 22, 2014 at 8:55 PM, Michael Dürig wrote:
> On 22.9.14 6:13, Amit Jain wrote:
>> [X] +1 Release this package as Apache Jackrabbit Oak 1.0.6
>
> Michael
Re: blobs are not being retained when using MicroKernel interface, adds "str:" prefix to blobId property value
Thank you for replying. I was hoping to use the MicroKernel with simple
JSON for remote access. If NodeStoreKernel is not being maintained, can I
assume that SegmentMK is? If possible, how do I obtain a SegmentMK-backed
Oak instance with MongoDB as the blob store, exposed through the
MicroKernel API?

Thanks,
Adrien

On Mon, Sep 22, 2014 at 9:32 AM, Stefan Guggisberg
<stefan.guggisb...@gmail.com> wrote:
> hi adrien,
>
> sorry, missed that one.
>
> the problem you're having ('str:' being prepended to ':blobid:...')
> seems to be caused by a bug in o.a.j.oak.kernel.NodeStoreKernel.
>
> you could file a jira issue. however, i am not sure NodeStoreKernel is
> being actively maintained.
>
> cheers
> stefan
>
> [earlier quoted messages trimmed; the original report appears in full
> later in this thread]
AggregateIndex and AdvanceQueryIndex
Hi,

For the Lucene-based property index (OAK-2005) I need to make LuceneIndex
implement AdvancedQueryIndex. As AggregateIndex (AI) wraps LuceneIndex
(for fulltext search), it would also need to be adapted to support the
same (OAK-2119). However, making it do that seems a bit tricky:

A - Cost aggregation
--
AggregateIndex aggregates the cost as well. How such a thing should be
implemented in terms of IndexPlan is not clear. Also, I am not sure if the
cost needs to be redefined, as the wrapped index is not registered.
Probably AggregateIndex should just return the base index's cost.

B - FulltextQueryIndex
--
As FulltextQueryIndex does not extend AdvancedQueryIndex, it causes an
issue in wrapping. Should I create a new AdvanceFulltextQueryIndex like:

    public interface AdvanceFulltextQueryIndex
            extends FulltextQueryIndex, AdvancedQueryIndex {
    }

Further, I do not understand the AggregateIndex logic very well, and I am
not sure how a fulltext index which also handles property restrictions can
be wrapped. Any guidance here would be helpful!

Given that the initial implementation would not support both fulltext
queries and property-based queries simultaneously, we can take an
alternative approach for now (it's a fallback, Plan B, and only considered
as a last option):

1. Have two impls: LuceneIndex and LucenePropertyIndex.
2. LuceneIndex would be wrapped by AggregateIndex and would serve
   fulltext queries.
3. LucenePropertyIndex would not be wrapped and would only serve queries
   which involve property restrictions.

With this, existing logic would not be modified and we can move ahead with
the Lucene-based property index. Later, once we unify them, we can tackle
this issue.

Chetan Mehrotra
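[Editor's note] To make point B concrete, here is a compilable sketch of
the proposed combined interface. FulltextQueryIndex and AdvancedQueryIndex
are reduced here to minimal hypothetical stand-ins (the real Oak SPI
interfaces carry more methods and different signatures), and the wrapper
class only illustrates the "just return the base index cost" idea from
point A; it is not the actual AggregateIndex code.

```java
// Minimal stand-ins for the real Oak SPI types; the actual interfaces
// have more methods and take Filter/NodeState arguments.
interface FulltextQueryIndex {
    String getIndexName();
}

interface AdvancedQueryIndex {
    double getCost(String filter);  // simplified signature
}

// The combined interface proposed in the mail (point B).
interface AdvanceFulltextQueryIndex
        extends FulltextQueryIndex, AdvancedQueryIndex {
}

// Sketch of point A: an aggregating wrapper that simply delegates the
// cost to the wrapped base index instead of redefining it.
class AggregatingIndexSketch implements AdvanceFulltextQueryIndex {
    private final AdvanceFulltextQueryIndex base;

    AggregatingIndexSketch(AdvanceFulltextQueryIndex base) {
        this.base = base;
    }

    public String getIndexName() {
        return "aggregate(" + base.getIndexName() + ")";
    }

    public double getCost(String filter) {
        return base.getCost(filter);  // just return the base index cost
    }
}
```

Any aggregating index built this way reports exactly the wrapped index's
cost, which is the simplest resolution of the cost-aggregation question
above.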
Re: blobs are not being retained when using MicroKernel interface, adds "str:" prefix to blobId property value
hi adrien,

On Mon, Sep 22, 2014 at 5:35 PM, Adrien Lamoureux wrote:
> Hi,
>
> No one has responded to the issues I'm having with the MicroKernel.

sorry, missed that one.

the problem you're having ('str:' being prepended to ':blobid:...')
seems to be caused by a bug in o.a.j.oak.kernel.NodeStoreKernel.

> Is this the correct location to ask these questions? I tried finding a
> solution to this issue in your documentation and found none.

you could file a jira issue. however, i am not sure NodeStoreKernel is
being actively maintained.

cheers
stefan

> Thanks,
>
> Adrien
>
> [original report of Sep 16 with curl examples trimmed; it appears in
> full later in this thread]
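[Editor's note] The extra 'str:' discussed in this thread comes from
Oak's JSON type-prefix convention: property values in getNodes output
carry prefixes such as "dat:" (date), "nam:" (name), and ":blobId:"
(binary), so a plain string that happens to begin with a reserved prefix
must itself be escaped with "str:". The sketch below is a hypothetical
illustration of that escaping, not the actual NodeStoreKernel code, but
it reproduces the round-trip behaviour observed when ':blobId:...' is
written as an ordinary string property rather than as a real binary.

```java
// Hypothetical illustration (NOT the actual NodeStoreKernel code) of
// type-prefix escaping in JSON serialization of property values.
class TypePrefixSketch {

    /** Escape a plain string value for serialization. */
    static String escape(String value) {
        // If the raw string could be confused with a typed value,
        // prepend "str:" so it deserializes back to the same string.
        if (value.startsWith(":blobId:")
                || value.startsWith("str:")
                || value.startsWith("dat:")
                || value.startsWith("nam:")) {
            return "str:" + value;
        }
        return value;
    }
}
```

Under this scheme, escape(":blobId:93e6...") yields a value beginning
with "str::blobId:", matching the getNodes output reported in this
thread, while a string without a reserved prefix passes through
unchanged.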
Re: blobs are not being retained when using MicroKernel interface, adds "str:" prefix to blobId property value
Hi,

No one has responded to the issues I'm having with the MicroKernel. Is
this the correct location to ask these questions? I tried finding a
solution to this issue in your documentation and found none.

Thanks,
Adrien

On Tue, Sep 16, 2014 at 1:51 PM, Adrien Lamoureux
<lamoureux.adr...@gmail.com> wrote:
> Hello,
>
> I've been testing Oak 1.0.5, and changed Main.java under oak-run to
> enable a MicroKernel to run at startup with the standalone service at
> the bottom of the addServlets() method:
>
>     private void addServlets(Oak oak, String path) {
>         Jcr jcr = new Jcr(oak);
>
>         // 1 - OakServer
>         ContentRepository repository = oak.createContentRepository();
>         .
>         org.apache.jackrabbit.oak.core.ContentRepositoryImpl repoImpl =
>             (org.apache.jackrabbit.oak.core.ContentRepositoryImpl) repository;
>         org.apache.jackrabbit.oak.kernel.NodeStoreKernel nodeStoreK =
>             new org.apache.jackrabbit.oak.kernel.NodeStoreKernel(repoImpl.getNodeStore());
>         org.apache.jackrabbit.mk.server.Server mkserver =
>             new org.apache.jackrabbit.mk.server.Server(nodeStoreK);
>         mkserver.setPort(28080);
>         mkserver.setBindAddress(java.net.InetAddress.getByName("localhost"));
>         mkserver.start();
>     }
>
> I then used an org.apache.jackrabbit.mk.client.Client to connect to it,
> and everything seemed to work fine, including writing/reading blobs.
> However, the blobs are not being retained, and it appears to be
> impossible to set a ":blobId:" prefix for a property value without it
> forcing an additional 'str:' prefix.
>
> Here are a couple of examples using curl to create a node with a single
> property to hold the blobId. The first uses the proper ":blobId:"
> prefix, the other doesn't:
>
>     curl -X POST --data 'path=/&message=' --data-urlencode \
>       'json_diff=+"testFile1.jpg" : {"testFileRef":":blobId:93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33"}' \
>       http://localhost:28080/commit.html
>
> RETURNED:
>
>     curl -X POST --data \
>       'path=/testFile1.jpg&depth=2&offset=0&count=-1&filter={"nodes":["*"],"properties":["*"]}' \
>       http://localhost:28080/getNodes.html
>
>     {
>       "testFileRef": "str::blobId:93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33",
>       ":childNodeCount": 0
>     }
>
> I then tried without the blobId prefix, and it did not add a prefix:
>
>     curl -X POST --data 'path=/&message=' --data-urlencode \
>       'json_diff=+"testFile2.jpg" : {"testFileRef":"93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33"}' \
>       http://localhost:28080/commit.html
>
> RETURNED:
>
>     curl -X POST --data \
>       'path=/testFile2.jpg&depth=2&offset=0&count=-1&filter={"nodes":["*"],"properties":["*"]}' \
>       http://localhost:28080/getNodes.html
>
>     {
>       "testFileRef": "93e6002eb8f3c4128b2ce18351e16b0d72b870f6e1ee507b5221579f0dd31a33",
>       ":childNodeCount": 0
>     }
>
> The blob itself was later removed/deleted, presumably by some sort of
> cleanup mechanism. I'm assuming that it couldn't find the reference to
> the blob.
>
> As a sanity check, I tried saving a different one-line text file at the
> Java Content Repository level of abstraction, and this is the result:
>
>     curl -X POST --data \
>       'path=/testFile&depth=2&offset=0&count=-2&filter={"nodes":["*"],"properties":["*"]}' \
>       http://localhost:28080/getNodes.html
>
>     {
>       "jcr:created": "dat:2014-09-16T13:41:38.084-07:00",
>       "jcr:createdBy": "admin",
>       "jcr:primaryType": "nam:nt:file",
>       ":childNodeCount": 1,
>       "jcr:content": {
>         ":childOrder": "[0]:Name",
>         "jcr:encoding": "UTF-8",
>         "jcr:lastModified": "dat:2014-09-16T13:41:38.094-07:00",
>         "jcr:mimeType": "text/plain",
>         "jcr:data": ":blobId:428ed7545cd993bf6add8cd74cd6ad70f517341bbc1b31615f9286c652cd214a",
>         "jcr:primaryType": "nam:nt:unstructured",
>         ":childNodeCount": 0
>       }
>     }
>
> The ":blobId:" prefix appears intact in this case.
>
> Any help would be greatly appreciated, as I would like to start using
> the MicroKernel for remote access, and file retention is critical.
>
> Thanks,
> Adrien
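[Editor's note] For readers scripting against the MicroKernel servlet,
the json_diff form parameter used with /commit.html above can be
assembled and URL-encoded programmatically. A minimal sketch follows;
JsopDiffExample is a hypothetical helper, not part of Oak, and the
encoding step only mirrors what `curl --data-urlencode` does.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Hypothetical helper for building the 'json_diff' payload sent to the
// MicroKernel's /commit.html servlet, as shown in the curl examples.
class JsopDiffExample {

    /** Builds a JSOP "add node" diff: +"name" : {"prop":"value"} */
    static String addNodeDiff(String name, String prop, String value) {
        return "+\"" + name + "\" : {\"" + prop + "\":\"" + value + "\"}";
    }

    /** URL-encodes the diff for use as the json_diff form parameter. */
    static String encode(String diff) {
        try {
            return "json_diff=" + URLEncoder.encode(diff, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }
}
```

For example, encode(addNodeDiff("testFile1.jpg", "testFileRef",
":blobId:...")) produces the percent-encoded body that curl would send
with --data-urlencode.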
Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
On 22.9.14 6:13, Amit Jain wrote:
> [X] +1 Release this package as Apache Jackrabbit Oak 1.0.6

Michael
Re: Travis build failing
> # Pedantic failing

oups, my bad, should be fixed now.

alex

On Mon, Sep 22, 2014 at 3:22 PM, Davide Giannella wrote:
> Regular update from our build system :)
>
> build #4772 https://travis-ci.org/apache/jackrabbit-oak/builds/35746487
>
> # Pedantic failing
>
> https://travis-ci.org/apache/jackrabbit-oak/jobs/35746488
>
> mvn org.apache.rat:apache-rat-plugin:0.10:check -rf :oak-run tells me
>
> oak-run/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentNodeStateHelper.java
>
> # unittesting
>
> https://travis-ci.org/apache/jackrabbit-oak/jobs/35746489
>
> Running org.apache.jackrabbit.oak.jcr.ItemSaveTest
>
> No output has been received in the last 10 minutes, this potentially
> indicates a stalled build or something wrong with the build itself.
>
> # Document fixtures
>
> https://travis-ci.org/apache/jackrabbit-oak/jobs/35746491
>
> several OOM but the first one seems to be related to
> org.apache.jackrabbit.oak.jcr.ConcurrentAddNodesClusterIT
>
> Cheers
> Davide
Re: Travis build failing
Regular update from our build system :)

build #4772 https://travis-ci.org/apache/jackrabbit-oak/builds/35746487

# Pedantic failing

https://travis-ci.org/apache/jackrabbit-oak/jobs/35746488

mvn org.apache.rat:apache-rat-plugin:0.10:check -rf :oak-run tells me

oak-run/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentNodeStateHelper.java

# unittesting

https://travis-ci.org/apache/jackrabbit-oak/jobs/35746489

Running org.apache.jackrabbit.oak.jcr.ItemSaveTest

No output has been received in the last 10 minutes, this potentially
indicates a stalled build or something wrong with the build itself.

# Document fixtures

https://travis-ci.org/apache/jackrabbit-oak/jobs/35746491

several OOM but the first one seems to be related to
org.apache.jackrabbit.oak.jcr.ConcurrentAddNodesClusterIT

Cheers
Davide
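[Editor's note] The "Pedantic failing" above is Apache Rat flagging a
source file that lacks the ASF license header. As a rough local
pre-check before pushing, the same test can be sketched in a few lines;
LicenseHeaderCheck is a hypothetical helper, not part of the Oak build,
and the real Rat heuristics are considerably more elaborate.

```java
// Hypothetical pre-check (not part of the Oak build): Apache Rat flags
// files missing the ASF license header; this sketch performs the same
// rough test on a source string by matching the header's first line.
class LicenseHeaderCheck {

    /** Returns true if the source appears to carry the ASF header. */
    static boolean hasApacheHeader(String source) {
        return source.contains("Licensed to the Apache Software Foundation");
    }
}
```

Running this over the files Rat lists (here,
SegmentNodeStateHelper.java) would confirm the missing header before the
CI build does.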
unstable oak releases
Hello Team,

there have been discussions around regularly releasing oak unstable
(trunk) snapshots for any client willing to use/test the very latest
stuff. My idea around it would be:

* release every 2 weeks.
* tag it officially.
* no branches. Trunk is the only one.
* work with odd/even versions: 1.1.x is unstable, 1.2.x is stable.

Questions:

* should we go through the voting system for this as well?
* anything against the above ideas?

I'm happy to take ownership of the release process for the unstable
stuff if anyone could point me to any documentation :)

Cheers
Davide
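[Editor's note] The odd/even convention proposed above can be stated
precisely: an odd minor version marks an unstable trunk release, an even
minor version a stable one. A tiny sketch (VersionParity is a
hypothetical helper, not part of any Oak tooling):

```java
// Hypothetical helper illustrating the proposed odd/even convention:
// an odd minor version (1.1.x) marks an unstable trunk release, an
// even minor version (1.0.x, 1.2.x) marks a stable release.
class VersionParity {

    static boolean isUnstable(String version) {
        String[] parts = version.split("\\.");
        int minor = Integer.parseInt(parts[1]);
        return minor % 2 != 0;  // odd minor => unstable
    }
}
```

So under this scheme 1.1.0 would be an unstable snapshot while 1.0.6
(the release being voted on elsewhere in this digest) is stable.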
Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
On 2014-09-22 09:33, Davide Giannella wrote:
> On 22/09/2014 08:00, Julian Reschke wrote:
>> ...
>> gpg: public key of ultimately trusted key 9B5582C8 not found
>> gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
>> gpg: depth: 0  valid: 3  signed: 0  trust: 0-, 0q, 0n, 0m, 0f, 3u
>> [ERROR] NOT OK: gpg import
>
> I'm not seeing it. It could be your side or a one-off issue. Could you
> retry?
>
> D.

I did; and now it works. So +1 for releasing.

Best regards,
Julian
Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
+1

all checks ok

On Mon, Sep 22, 2014 at 9:35 AM, Davide Giannella wrote:
> [X] +1 Release this package as Apache Jackrabbit Oak 1.0.6
>
> Cheers
> Davide
Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
> [X] +1 Release this package as Apache Jackrabbit Oak 1.0.6

Cheers
Davide
Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
On 22/09/2014 08:00, Julian Reschke wrote:
> ...
>> gpg: public key of ultimately trusted key 9B5582C8 not found
>> gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
>> gpg: depth: 0  valid: 3  signed: 0  trust: 0-, 0q, 0n, 0m, 0f, 3u
>> [ERROR] NOT OK: gpg import

I'm not seeing it. It could be your side or a one-off issue. Could you
retry?

D.
Re: [VOTE] Release Apache Jackrabbit Oak 1.0.6
On 2014-09-22 06:13, Amit Jain wrote:
> sh check-release.sh oak 1.0.6 54626eba04bf3a297f3fece3deb1a8304f06071b

I'm seeing:

gpg: key AEA9D105: public key "Michael Duerig (CODE SIGNING KEY) " imported
gpg: key 74628A7F: public key "Amit Jain (CODE SIGNING KEY) " imported
gpg: Total number processed: 15
gpg:               imported: 2  (RSA: 2)
gpg:              unchanged: 13
gpg: public key of ultimately trusted key 2A77B29C not found
gpg: public key of ultimately trusted key 9B5582C8 not found
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid: 3  signed: 0  trust: 0-, 0q, 0n, 0m, 0f, 3u
[ERROR] NOT OK: gpg import

Best regards,
Julian