Hi,
> Last week's F2F resulted in an initial draft of goals for jr3 [1]. A
> general direction this is taking is trading some of the consistency
> guarantees for better availability (especially in a clustered setup).
> As it stands - and as Jukka already noted - the specifics are currently
> too vague.
On Thu, Feb 23, 2012 at 9:09 AM, Marcel Reutegger wrote:
>
>> - Lock enforcement?
>
> that's definitely a tough one because it depends on repository
> wide state.
>
>> - Query index consistency?
>
> I think consistency is a prerequisite here, otherwise it's quite
> difficult to implement the query index.
Hi,
>But before discussing the details, what is to be understood by 'query
>index consistency'?
I don't know if I mean the same thing, but...
>Does this mean that the indexes should be consistent with the latest
>persisted data? Thus within a single cluster node, after a persist,
>the index must reflect the persisted changes.
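One reading of per-node query index consistency - after a persist, a query on the same cluster node immediately sees the change - can be sketched as a store that updates its index synchronously inside persist. This is a toy illustration, not the actual implementation; all names are hypothetical:

```python
# Toy sketch: the index is updated as part of persist, so a query issued
# right after a save on the same node reflects the change (hypothetical API).

class NodeStore:
    def __init__(self):
        self.nodes = {}     # path -> properties
        self.index = {}     # property value -> set of paths

    def persist(self, path, properties):
        # Remove stale index entries for the previous state of this node.
        old = self.nodes.get(path, {})
        for value in old.values():
            self.index.get(value, set()).discard(path)
        # Store the node and index it before persist returns.
        self.nodes[path] = properties
        for value in properties.values():
            self.index.setdefault(value, set()).add(path)

    def query(self, value):
        """Paths whose properties contain the given value."""
        return sorted(self.index.get(value, set()))
```

Under this model, index consistency is a per-cluster-node guarantee; what other nodes see before the next sync is exactly the open question of the thread.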
Hello,
On Thu, Feb 23, 2012 at 12:07 PM, Thomas Mueller wrote:
>>then we still don't have
>>transactional searches
>
> My plan was to support searches for data that is persisted (no search in
> the transient space).
Yes, this seems very natural to me, so I very much agree. However, what I
meant was
- Atomicity of save operations?
what does a temporary violation of atomic saves look like?
Are you thinking of partially visible changes?
I actually had clustering on my mind where the repository is partitioned
across various cluster nodes. If we require atomicity for save operations
across partitions, we pay for it in availability.
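For context on why cross-partition atomicity is costly: it typically requires a coordination protocol such as two-phase commit, in which every partition must vote before anything becomes visible, so one slow or unreachable partition blocks the whole save. A toy sketch of that coordination (hypothetical API, not a jr3 proposal):

```python
# Toy two-phase commit sketch: the coordination a cross-partition atomic
# save would need. A real implementation persists the prepare log; this
# only illustrates the all-or-nothing control flow.

class Partition:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.staged = None

    def prepare(self, changes):
        """Phase 1: vote. Staging the changes means voting yes."""
        self.staged = changes
        return True

    def commit(self):
        """Phase 2: make the staged changes visible."""
        self.data.update(self.staged)
        self.staged = None

    def abort(self):
        self.staged = None

def save_atomically(change_sets):
    """change_sets: {partition: {path: value}}. All partitions commit,
    or none do."""
    prepared = []
    for part, changes in change_sets.items():
        if not part.prepare(changes):
            for p in prepared:
                p.abort()
            return False
        prepared.append(part)
    for p in prepared:
        p.commit()
    return True
```

Marcel's suggestion further down the thread - route each save to a single partition - avoids this protocol entirely for the common case.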
Hi,
>after a search is executed, you get back a jcr
>NodeIterator from the search result. In the mean time, while
>iterating, a node from the result can be deleted by a different
>session.
We plan to use an MVCC model, so you will still see the old data.
>So, the search result is not transactional
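Thomas's MVCC point - an open result set keeps reading the revision it was opened against, even if another session deletes a node in the meantime - can be sketched like this. This is an illustrative toy with hypothetical names, not the actual jr3/Oak design:

```python
# Illustrative MVCC sketch: each save creates a new immutable revision;
# a query runs against a fixed revision, so concurrent deletes by other
# sessions do not affect an iterator over its result.

class MvccStore:
    def __init__(self):
        self.revisions = [{}]          # list of immutable snapshots

    @property
    def head(self):
        return len(self.revisions) - 1

    def commit(self, changes):
        """Apply a change set as a new revision; value None deletes a node."""
        snapshot = dict(self.revisions[-1])
        for path, value in changes.items():
            if value is None:
                snapshot.pop(path, None)
            else:
                snapshot[path] = value
        self.revisions.append(snapshot)
        return self.head

    def query(self, predicate, revision=None):
        """Matching paths from a fixed snapshot (default: current head)."""
        rev = self.head if revision is None else revision
        return [p for p, v in self.revisions[rev].items() if predicate(v)]
```

A session that captured the head revision before searching still sees a deleted node in its result; only queries against the new head reflect the delete.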
On 23.02.2012 at 11:26, Ard Schrijvers wrote:
> I've come to believe over the years, that a generic
> hierarchical jcr full text index and queries is a bad idea: In the
> end, it just doesn't scale, is extremely complex to build (Lucene is
> flat), and even worse, it doesn't seem to satisfy customers.
What are the consistency assumptions a JCR client should be allowed to
make?
An approach where temporary inconsistencies are tolerated (i.e. eventual
consistency) increases availability and throughput. In such a case
do/can/should we tolerate temporary violations of:
- Node type constraints?
Hi,
On Feb 28, 2012, at 3:54 PM, Marcel Reutegger wrote:
> I'd solve this differently. Saves are always performed on one partition,
> even if some of the change set actually goes beyond a given partition.
> This is however assuming that our implementation supports dynamic
> partitioning and redistribution (e.g. when a new cluster node is added).
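Marcel's scheme presupposes two things: a change set can be routed to one partition, and partitions can be redistributed when a cluster node joins. Consistent hashing is one standard way to get both with minimal data movement; the sketch below is illustrative only, with hypothetical names, and is not something the thread itself proposes:

```python
# Sketch of routing saves to a single partition with a consistent hash
# ring; adding a cluster node moves only a fraction of the paths.

import bisect
import hashlib

class HashRing:
    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes
        self.ring = []                       # sorted list of (hash, node)
        for n in nodes:
            self.add_node(n)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Virtual nodes smooth out the distribution across the ring.
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def node_for(self, path):
        """The partition responsible for a given content path."""
        h = self._hash(path)
        idx = bisect.bisect(self.ring, (h, ""))
        return self.ring[idx % len(self.ring)][1]
```

Routing every save through `node_for` keeps each save on one partition, which is what lets Marcel sidestep cross-partition atomicity in the common case.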
> Vector clocks. See the presentation [1] which I prepared for the last F2F.
I understand this would require tagging each node with a timestamp, right?
If that's the case, then it's not just about complexity, but also additional
storage requirements.
regards
marcel
On Feb 29, 2012, at 5:45 PM, Michael Dürig wrote:
That's an idea I mentioned earlier already [1]: make cluster sync
transparent to JCR sessions. That is, any modification required by the
sync should look like just another session operation to JCR clients
(i.e. there should also be observation events).
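Michael's idea - changes applied by cluster sync are dispatched through the same code path as local session saves, so observation listeners see them as ordinary events - could be sketched like this (hypothetical API, illustration only):

```python
# Sketch: one dispatch path for both local saves and cluster sync, so
# to a listener a synced change looks like just another session change.

class Repository:
    def __init__(self):
        self.content = {}
        self.listeners = []

    def add_listener(self, fn):
        """fn(changes, source) is called for every applied change set."""
        self.listeners.append(fn)

    def _apply(self, changes, source):
        self.content.update(changes)
        for fn in self.listeners:
            fn(changes, source)

    def session_save(self, changes):
        self._apply(changes, source="session")

    def cluster_sync(self, changes):
        # Same code path as a save: listeners get ordinary events.
        self._apply(changes, source="cluster-sync")
```

The `source` tag here is only for illustration; the point is that sync does not bypass the observation mechanism.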
> So, this could result in a save on P that initially succeeds but
> ultimately fails, because the concurrent one on Q wins? I'm wondering
> how this could be reflected to an MK client: if a save corresponds to
> an MK commit call that immediately returns a new revision ID, would you
> suggest that
Hi,
> I understand this would require tagging each node with a timestamp, right?
> If that's the case, then it's not just about complexity, but also
> additional storage requirements.
If the node id is a counter, then there is no additional storage
requirement.
Regards,
Thomas
Hi,
>but in a distributed setup we cannot just use a simple counter.
I believe with vector clocks you can, see
http://en.wikipedia.org/wiki/Vector_clock
Regards,
Thomas
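A minimal sketch of what vector clocks buy you with only a simple counter per participant: each cluster node increments its own counter, the "clock" of a revision is the map of node-id to counter, and comparing two clocks either orders them causally or detects a conflict. Names and API below are hypothetical, for illustration only:

```python
# Minimal vector clock sketch. Each node stores one integer counter for
# itself; a revision's clock is a map node-id -> counter, so no wall-clock
# timestamps are needed.

class VectorClock:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counters = {}

    def tick(self):
        """Local event (e.g. a commit on this cluster node);
        returns a snapshot of the resulting clock."""
        self.counters[self.node_id] = self.counters.get(self.node_id, 0) + 1
        return dict(self.counters)

    def merge(self, other_counters):
        """Sync with another node: element-wise maximum, then tick."""
        for node, count in other_counters.items():
            self.counters[node] = max(self.counters.get(node, 0), count)
        return self.tick()

    @staticmethod
    def happened_before(a, b):
        """True if clock a causally precedes clock b; if neither
        precedes the other, the events were concurrent (a conflict)."""
        keys = set(a) | set(b)
        return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
                and any(a.get(k, 0) < b.get(k, 0) for k in keys))
```

Two nodes committing without syncing produce clocks that are incomparable in both directions, which is exactly how a conflicting concurrent save (Dominique's P-vs-Q example) would be detected.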
Vector clocks seem to not work well in systems with a dynamic number of
participants, a problem that is addressed by Interval Tree Clocks [1] [2].
[1] https://github.com/ricardobcl/Interval-Tree-Clocks
[2] http://gsd.di.uminho.pt/members/cbm/ps/itc2008.pdf