Hi,
I noticed that the BlobStore API recently acquired a similar interface
through GarbageCollectableBlobStore.
The current impl in RDBBlobStore just returns an iterator that wraps a
result set, which works right now as the RDBBlobStore keeps holding the
Connection.
I was planning to
On 2014-03-20 08:44, Marcel Reutegger wrote:
Hi,
I noticed that the BlobStore API recently acquired a similar interface
through GarbageCollectableBlobStore.
The current impl in RDBBlobStore just returns an iterator that wraps a
result set, which works right now as the RDBBlobStore keeps
Hi,
Just seen this when running LargeOperationIT#manySiblings against
revision 1579234.
Michael
java.nio.BufferUnderflowException
at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:235)
at java.nio.ByteBuffer.get(ByteBuffer.java:675)
at
Hi,
For the database case, what we could do is return an iterator that
internally chunks, that is:
1) run the query select * from datastore where id > ? order by id
limit N (for some chunk size N)
2) if the results set is empty, then iterator.hasNext is false
3) else read this in memory and close the connection
4)
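The chunking idea above could be sketched roughly like this (a hypothetical sketch only: `ChunkedIdIterator` and the `fetchChunk` callback are made-up names, not the actual RDBBlobStore code; the callback would run the keyset query and close the connection before returning):

```java
import java.util.*;
import java.util.function.BiFunction;

// Sketch of a chunked iterator: each backend call fetches at most `limit`
// ids greater than the last one seen, so no connection or result set has
// to stay open between chunks.
class ChunkedIdIterator implements Iterator<String> {
    private final BiFunction<String, Integer, List<String>> fetchChunk;
    private final int limit;
    private final Deque<String> buffer = new ArrayDeque<>();
    private String lastId = "";   // ids are assumed to sort after ""
    private boolean exhausted = false;

    ChunkedIdIterator(BiFunction<String, Integer, List<String>> fetchChunk, int limit) {
        this.fetchChunk = fetchChunk;
        this.limit = limit;
    }

    @Override public boolean hasNext() {
        if (buffer.isEmpty() && !exhausted) {
            // conceptually: "select id from datastore where id > ? order by id limit ?"
            List<String> chunk = fetchChunk.apply(lastId, limit);
            if (chunk.isEmpty()) {
                exhausted = true;             // empty result set: iteration ends
            } else {
                buffer.addAll(chunk);
                lastId = chunk.get(chunk.size() - 1);
            }
        }
        return !buffer.isEmpty();
    }

    @Override public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return buffer.poll();
    }
}
```

Because the position is carried by the last id rather than an open cursor, each chunk can be read fully into memory and the underlying resources released, at the cost of one extra query at the end to detect exhaustion.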
Hi,
I think the best you can do with implementations that require an
explicit release of resources is to perform some kind of batch loading
every N items with increasing offset. This is actually what the MongoDB
Java driver does under the hood.
hmm, I had another look at how mongo does it.
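The batch-loading-with-increasing-offset approach could look something like the following (a generic sketch; `BatchingIterator` and the `loadBatch` callback are invented for illustration and are not what the MongoDB driver actually ships):

```java
import java.util.*;
import java.util.function.BiFunction;

// Sketch of offset-based batching: re-run the query with an increasing
// offset every `batchSize` items, so resources can be released between
// batches instead of holding one cursor open for the whole iteration.
class BatchingIterator<T> implements Iterator<T> {
    private final BiFunction<Integer, Integer, List<T>> loadBatch; // (offset, size)
    private final int batchSize;
    private Iterator<T> current = Collections.emptyIterator();
    private int offset = 0;
    private boolean done = false;

    BatchingIterator(BiFunction<Integer, Integer, List<T>> loadBatch, int batchSize) {
        this.loadBatch = loadBatch;
        this.batchSize = batchSize;
    }

    @Override public boolean hasNext() {
        if (!current.hasNext() && !done) {
            List<T> batch = loadBatch.apply(offset, batchSize);
            offset += batch.size();
            done = batch.size() < batchSize;  // a short batch means we hit the end
            current = batch.iterator();
        }
        return current.hasNext();
    }

    @Override public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        return current.next();
    }
}
```

Note the usual caveat with offset paging: items inserted or removed between batches can be skipped or seen twice, which is typically acceptable for garbage-collection scans.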
Yes boys and girls, files need licence headers!
Please check new files before committing them, last 2 days I found 3
occurrences, probably more than the entire last month put together.
When in doubt, run your builds with the pedantic profile activated.
(mvn clean install
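Assuming the Maven profile id matches its name (the exact flags in the mail above are cut off), activating it would follow the standard `-P` syntax:

```shell
# Run the build with the "pedantic" profile active, which includes the
# rat license-header check (profile id assumed from the name mentioned above).
mvn clean install -Ppedantic
```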
Roger that!
For Java files the IDE takes care of them. Probably we can just exclude
test/resources from the rat plugin? Most of the missing headers are
probably reported there.
Chetan Mehrotra
On Thu, Mar 20, 2014 at 2:50 PM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
Yes boys and girls, files
On 2014-03-20 10:18, Chetan Mehrotra wrote:
So I would refactor the logic to use Iterables and the required
changes in MongoDocumentStore. For the RDB one I would just wrap the
current impl. Should be able to push the change once 0.19 is cut.
...
What do you mean by wrap the current impl?
Best
A candidate for the Jackrabbit Oak 0.19 release is available at:
https://dist.apache.org/repos/dist/dev/jackrabbit/oak/0.19/
The release candidate is a zip archive of the sources in:
https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-0.19/
The SHA1 checksum of the archive
On 20/03/2014 11:06, Alex Parvulescu wrote:
$ sh check-release.sh oak 0.19 eb3e0c6aca065485b35e5ccad8f2cb5b95911209
Please vote on releasing this package as Apache Jackrabbit Oak 0.19.
The vote is open for the next 72 hours and passes if a majority of at
least three +1 Jackrabbit PMC
On 2014-03-20 12:17, Davide Giannella wrote:
On 20/03/2014 11:06, Alex Parvulescu wrote:
$ sh check-release.sh oak 0.19 eb3e0c6aca065485b35e5ccad8f2cb5b95911209
Please vote on releasing this package as Apache Jackrabbit Oak 0.19.
The vote is open for the next 72 hours and passes if a
Hi,
On Thu, Mar 20, 2014 at 4:20 AM, Michael Dürig mdue...@apache.org wrote:
java.nio.BufferUnderflowException
at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:235)
at java.nio.ByteBuffer.get(ByteBuffer.java:675)
at
Hi,
On Thu, Mar 20, 2014 at 5:50 AM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
The problem I experienced comes in when there are enough content writes that
a segment flush is triggered, so basically the same node, even unchanged,
ends up in a different segment, so with a different segment
ok it looks like a .gitignore file on the oak-run project fails the source
comparison check.
I'm not able to say why that is the case yet.
There are 2 .gitignore files in the project, one on the root [0] and this
new one on the oak-run project.
The one on the root doesn't exist in the zip file
ok found it, the check-release script removes the hidden files from the
*root* only
rm -f $SVNDIR/.??* # Remove hidden files not included in release
I'm guessing this should be tweaked to include sub-folders too.
On Thu, Mar 20, 2014 at 2:08 PM, Alex Parvulescu
right, it should be good now, would you guys mind giving it another shot?
thanks a lot,
alex
On Thu, Mar 20, 2014 at 2:17 PM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
ok found it, the check-release script removes the hidden files from the
*root* only
rm -f $SVNDIR/.??* # Remove
On 20/03/2014 13:17, Alex Parvulescu wrote:
ok found it, the check-release script removes the hidden files from the
*root* only
rm -f $SVNDIR/.??* # Remove hidden files not included in release
I'm guessing this should be tweaked to include sub-folders too.
Something like
find $SVNDIR -iname
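One possible shape for that tweak (a sketch only; the exact command in the mail above is cut off, and the function name here is invented):

```shell
# Sketch: remove hidden files anywhere under the checkout, not only its root.
# ".??*" matches the same dot-file pattern as the original `rm -f $SVNDIR/.??*`;
# -prune stops find from descending into a directory it is about to delete.
remove_hidden() {
    find "$1" -name ".??*" -prune -exec rm -rf {} +
}
```

This would have caught the oak-run `.gitignore` case, since `find` recurses into sub-folders where the original `rm` glob only matched at the root.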
Thanks Davide, the script is already fixed :)
Could you test the release again?
thanks,
alex
On Thu, Mar 20, 2014 at 2:36 PM, Davide Giannella
giannella.dav...@gmail.com wrote:
On 20/03/2014 13:17, Alex Parvulescu wrote:
ok found it, the check-release script removes the hidden files from
On 20.3.14 12:06, Alex Parvulescu wrote:
Please vote on releasing this package as Apache Jackrabbit Oak 0.19.
The vote is open for the next 72 hours and passes if a majority of at
least three +1 Jackrabbit PMC votes are cast.
[X] +1 Release this package as Apache Jackrabbit Oak 0.19
Hi,
On Thu, Mar 20, 2014 at 9:17 AM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
ok found it, the check-release script removes the hidden files from the
*root* only
That was by design, as we should have no need for hidden files deeper
in the source tree. I merged the .gitignore files in
On 20/03/2014 11:06, Alex Parvulescu wrote:
...
[ ] +1 Release this package as Apache Jackrabbit Oak 0.19
[ ] -1 Do not release this package because...
+1
D.
On 2014-03-20 14:33, Alex Parvulescu wrote:
right, it should be good now, would you guys mind giving it another shot?
thanks a lot,
alex
...
[X] +1 Release this package as Apache Jackrabbit Oak 0.19
Best regards, Julian
On Thu, Mar 20, 2014 at 1:33 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
Hi,
On Thu, Mar 20, 2014 at 5:50 AM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
The problem I experienced comes in when there are enough content writes
that
a segment flush is triggered, so basically the
Good afternoon,
on my local machine ConcurrentBlobTest randomly fails (see below for details).
It's not blocking as normally I simply re-run the build and it won't
reappear, but it's there.
I gave a quick search around and didn't find any references to it.
Do you want me to file an issue to keep track
Hi,
This came up with OAK-1541 where nodes are being added from multiple
sessions concurrently:
Session 1: root.addNode(a).addNode(b);
Session 2: root.addNode(a).addNode(c);
This currently fails for whichever session saves last because node a is
different from the already existing node a.
Hi,
On Thu, Mar 20, 2014 at 11:05 AM, Alex Parvulescu
alex.parvule...@gmail.com wrote:
On Thu, Mar 20, 2014 at 1:33 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
Perhaps the comparison is between content in the source repository and
that in the backup repository? In that case the segment
IMO the benefits (less avoidable conflicts for concurrent writes or unexpected
creation of SNSs) outweigh the downside (reproduce JR2 behaviour).
my 2c
Michael
On 20 Mar 2014, at 16:38, Michael Dürig mdue...@apache.org wrote:
Hi,
This came up with OAK-1541 where nodes are being added
i agree... in particular since we don't support same name siblings any
more.
On 20/03/14 16:47, Michael Marth mma...@adobe.com wrote:
IMO the benefits (less avoidable conflicts for concurrent writes or
unexpected creation of SNSs) outweigh the downside (reproduce JR2
behaviour).
my 2c
Michael
Build Update for apache/jackrabbit-oak
-
Build: #3790
Status: Passed
Duration: 2873 seconds
Commit: c5d23b03d3004b960f78e5a599acc433a4d6a42c (trunk)
Author: Jukka Zitting
Message: OAK-1584: Performance regression of adding and removing child nodes
after
Build Update for apache/jackrabbit-oak
-
Build: #3792
Status: Fixed
Duration: 2824 seconds
Commit: fa47b32be5f9f503dfac3bd439893e63a111af49 (trunk)
Author: Davide Giannella
Message: OAK-1561 refactored and optimised the code preparing the ground for
OAK-1570
Hi,
Indeed you are right, local backup is pretty efficient and it will perform
properly when it has the checkpoint available.
I was a bit off in my observations initially when I tried to back up using
an HttpStore based setup for testing the failover and I assumed that this
perceived slowness
On 2014-03-20 16:20, Davide Giannella wrote:
Good afternoon,
on my local machine ConcurrentBlobTest randomly fails (see below for details).
It's not blocking as normally I simply re-run the build and it won't
reappear, but it's there.
I gave a quick search around and didn't find any references to it.