Re: [lock-free] Re: 3-thread consensus.

2020-01-19 Thread bittnkr
Hi Manuel,

I really thought this list was done... but it appears you're still here.
Thanks for your patience.

This is the first time someone has called me a troll. I don't know whether
this is a detriment or an achievement for an introspective and somewhat
arrogant nerd like me.

I think I found the misunderstanding...

> Take for example a sorted linked list. Searching/inserting/deleting
entries with a specific key in the list is obviously O(n) where n is the
number of items in that list (I hope we can at least agree on that).

When I talk about O(1), I'm referring to the number of threads (the X axis
of the chart) and specifically to the cost of the insert and remove
operations (the Y axis), not to everything you can do with the structure.

In my point of view, that's the fundamental behavior of a lock-free
structure.

If a queue/stack/whatever has a constant insert/remove cost that doesn't
change with the number of participants, that's proof it's lock-free.

Almost all the lock-free papers I have read end with a chart like that as
proof of lock-freedom.

In the linked list example, the search is O(n), but inserts and deletes are
O(1). Note how the Y values don't change with the number of threads
(an almost horizontal line).

About my code, I think it should speak for itself. I know there is no lock
in it. No thread waits for another to finish its job at any point.

The exceptions are when the queue is full or empty, but those are not locks.
As I've said before, if nothing has been produced, there is nothing to be
consumed. The loop is not a spin lock, just the normal flow of push()/pop().

Anyway, maybe I'm being too simplistic and still missing some obscure
nuance reserved for minds higher than that of a brute troll like me
(smiles, no offense). ;-)

I was just euphoric, eager to share my joy with someone... but I think I
must look elsewhere.

Thanks for all.

On Sat, Jan 18, 2020 at 13:10, Manuel Pöter wrote:

> Hi bittnkr,
>
> I am still not sure whether you are interested in a meaningful discussion
> or not (but most of your responses convey the feeling that you just want to
> troll around). I will make one last attempt to try to explain your
> misconception, but if you continue to ignore all arguments and references
> to other sources I'm done.
>
> As I said before - progress guarantees (wait-freedom, lock-freedom, ...)
> have nothing to do with algorithmic complexity. And they have nothing to do
> with performance either - in many cases (fine-grained) lock-based solutions
> perform better than lock-free versions. The key difference is that in a
> lock-free data structure at least one thread will be able to make progress
> on its operation, even if other threads are blocked, preempted or otherwise
> delayed.
>
> Take for example a sorted linked list. Searching/inserting/deleting
> entries with a specific key in the list is obviously O(n) where n is the
> number of items in that list (I hope we can at least agree on that).
> According to your definition, this operation cannot be lock-free. Harris'
> proposed a lock-free solution that is *unbounded*, i.e., O(∞), yet it is
> still lock-free. In fact, most lock-free algorithms are unbounded - if they
> are bounded, they are not just lock-free, but wait-free.
>
> Regarding the chart in your screenshot: it shows that Harris' list scales
> much better than the mutex based one, even though the mutex based solution
> is O(n). But you cannot derive any more information from that chart other
> than how well each implementation scales with the number of threads. In
> particular you cannot tell if any of the implementations are lock-free (nor
> whether their implementation is O(n) for that matter).
>
> Take a closer look at your own code - it is definitely not O(1)!
> do {
>   o = out;
>   while (o == in)
>     sched_yield(); // if empty, wait for item
> } while (isFree[o & mask] || !out.compare_exchange_weak(o, o + 1));
> This piece from the pop operation has two unbounded loops in it. Why would
> you assume that this is O(1)?  There is no guarantee that either loop
> terminates after a bounded number of iterations, so it is O(∞). Not only
> that, but it depends on a concurrent push operation to store the value into
> the buffer and setting isFree. But you have no guarantee *when* (or even
> *if* for that matter) this is going to happen. And this is what makes
> your code *not lock-free*: the fact that one thread (pop) depends on some
> other thread having to finish another operation (push).
>
> There exists a universally accepted meaning of lock-freedom (as well as
> all the other progress guarantees) in the scientific community.  The
> canonical reference for most of these definitions usually is the paper "A
> methodology for implementing highly concurrent data obj
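Manuel's distinction between progress guarantees and complexity can be made concrete with a small sketch (my own, not code from this thread): both counters below cost O(1) per operation, yet only one of them is lock-free.

```cpp
#include <atomic>
#include <mutex>

std::mutex m;
long counter_locked = 0;

// O(1) per call, but NOT lock-free: if the thread holding the mutex is
// suspended inside the critical section, every other caller stalls too.
void inc_locked() {
  std::lock_guard<std::mutex> g(m);
  ++counter_locked;
}

std::atomic<long> counter_atomic{0};

// O(1) per call AND wait-free: a single atomic read-modify-write that
// no stalled thread can prevent from completing.
void inc_atomic() {
  counter_atomic.fetch_add(1, std::memory_order_relaxed);
}
```

A thread suspended inside inc_locked() blocks every other caller indefinitely; a thread suspended around inc_atomic() delays no one. The big-O cost is identical in both cases, which is why the complexity chart alone cannot tell you which one is lock-free.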

Re: [lock-free] Re: 3-thread consensus.

2020-01-17 Thread bittnkr
> So I guess your opinion is just as good? :-)

No, I know my facts are better.

I'm sure my solution is lock-free, because it's O(1) and has no leaks.

If you can't see the same, what can I do?

On Fri, Jan 17, 2020 at 12:58, Nitsan Wakart wrote:

> "everyone has it own ideas of what means to be lock free. " - So I guess
> your opinion is just as good? :-)
> If we are willing to redefine lock-free, we can call anything lock-free.
> Manuel provided you with the accepted view, you seem to think those are
> just opinions.
> If there's no agreement on what the term means the rest of the discussion
> is pointless, which is why Dmitry asked you: "what is your definition of a
> lock-free algorithm?" when he realized you are using your own definition
> (and then probably lost interest, given the pointlessness).
>
>
> On Fri, Jan 17, 2020 at 4:02 PM bittnkr  wrote:
>
>> Dear Nitsan,
>>
>> Thanks for your message. You bring me a smile this morning.
>>
>> I really don't think so...
>>
>> I'm not manipulating the words to put an alien meaning on them.
>>
>> But instead, I'm trying to simplify a problem considered impossible,
>> with a rather new but simpler and more concrete definition of a complex
>> theme.
>>
>> Unfortunately, I've seen so many different positions about lock-free
>> structures that it seems everyone has their own idea of what it means to
>> be lock-free.
>>
>> This thread is proof of that.
>>
>> We can debate for hours about the difference between lock-less and
>> lock-free and go nowhere, just chasing our tails. ;)
>>
>> Maybe it is because of the lack of a simpler definition that the
>> solution is still unknown.
>>
>> So, let's lay down some basics and premises to establish a common background:
>>
>> Suppose you have two threads, A and B, updating an atomic integer with
>> positive and negative random numbers, producing a Brownian motion:
>>
>> atomic<int> X = 0;
>>
>> A() { X += random(); }
>> B() { X -= random(); }
>>
>> C() { print(X); }
>>
>> I ask you: Is this program lock-free?
>>
>> It may be obvious to you that this operation is lock-free, because there
>> are no locks and we are using atomic increments.
>>
>> But why? What's the primal fact that makes this operation lock-free?
>>
>> If you add another 50 instances of A, what will happen to the performance
>> of the program: will it increase, decrease, or neither?
>>
>> What is the O() of this program/operation?
>>
>> And if you change the code to update X using a mutex?
>>
>> What will happen to the performance when you add those additional
>> threads? And the O()?
>>
>> On Fri, Jan 17, 2020 at 03:55, Nitsan Wakart wrote:
>>
>>> Dear Bittnkr,
>>> You are entitled to your own opinion, not your own facts.
>>> The fact is that lock-free is a concrete term that has a current
>>> meaning, one that Manuel has taken the time to collect from different
>>> authoritative sources. You seem to go for the Humpty Dumpty approach:
>>>"When I use a *word*," Humpty Dumpty *said*, in rather a scornful
>>> tone, "it *means* just what I choose it to *mean*—neither more nor
>>> less."
>>> This is good fun in a story book, not so productive in this context
>>> though. If you want lock-free to mean whatever you want it to mean, then I
>>> agree your queue is lock-free.
>>> ;-)
>>>
>>> On Thu, Jan 16, 2020 at 8:43 PM bittnkr  wrote:
>>>
>>>> Dear Manuel,
>>>>
>>>> When pointless people lose their arguments, they say the discussion
>>>> is pointless. ;D
>>>>
>>>> I made the most right-to-the-point affirmation I have ever made in my
>>>> entire life:
>>>>
>>>> *The only requirement for an algorithm to be lock-free is to be O(1).*
>>>>
>>>> To me, this affirmation is obvious and self-contained to the point of
>>>> being axiomatic.
>>>>
>>>> I don't know about Harris' linked list, but this is my general point
>>>> about lock freedom.
>>>>
>>>> Can you refute it?
>>>>
>>>> On Thu, Jan 16, 2020 at 05:16, Manuel Pöter wrote:
>>>>
>>>>> Progress guarantees have nothing to do with algorithmic complexity!
>>>>> Take Harris' lock-free linked list
>>>>> <https://timharris.uk/papers/2001-disc.pdf> - the list operations are
>>>>> definitely no

Re: [lock-free] Re: 3-thread consensus.

2020-01-16 Thread bittnkr
Dear Manuel,

When pointless people lose their arguments, they say the discussion is
pointless. ;D

I made the most right-to-the-point affirmation I have ever made in my entire
life:

*The only requirement for an algorithm to be lock-free is to be O(1).*

To me, this affirmation is obvious and self-contained to the point of being
axiomatic.

I don't know about Harris' linked list, but this is my general point about
lock freedom.

Can you refute it?

On Thu, Jan 16, 2020 at 05:16, Manuel Pöter wrote:

> Progress guarantees have nothing to do with algorithmic complexity! Take 
> Harris'
> lock-free linked list <https://timharris.uk/papers/2001-disc.pdf> - the
> list operations are definitely not O(1), but it is still lock-free.
> I am not sure if you really don't understand this, or if you are
> deliberately ignoring all comments trying to explain what lock-freedom
> usually means. Either way, this discussion seems rather pointless by now...
>
> On Thursday, 16 January 2020 01:47:49 UTC+1, bittnkr wrote:
>>
>>
>> > Your algorithm is lockless (i.e. without lock) but not lock-free
>> according to the definition of lock-freedom.
>>
>> You make me wonder whether my cellphone is wireless or wire-free. :)
>>
>> Thinking carefully about it, I arrived at an axiom:
>>
>> An algorithm can be called lock-free if it's O(1).
>>
>> A necessary and sufficient condition.
>>
>>
>> On Wed, Jan 15, 2020 at 07:36, Mamy Ratsimbazafy wrote:
>>
>>> The definition of obstruction-free, lock-free or wait-free is not a
>>> guarantee of performance; it's a guarantee of progress, in particular when
>>> threads can die (non-blocking algorithm
>>> <https://en.wikipedia.org/wiki/Non-blocking_algorithm>)
>>>
>>> In computer science <https://en.wikipedia.org/wiki/Computer_science>,
>>>> an algorithm <https://en.wikipedia.org/wiki/Algorithm> is called
>>>> *non-blocking* if failure or suspension
>>>> <https://en.wikipedia.org/wiki/Scheduling_(computing)> of any thread
>>>> <https://en.wikipedia.org/wiki/Thread_(computing)> cannot cause
>>>> failure or suspension of another thread;[1]
>>>> <https://en.wikipedia.org/wiki/Non-blocking_algorithm#cite_note-1> for
>>>> some operations, these algorithms provide a useful alternative to
>>>> traditional blocking implementations
>>>> <https://en.wikipedia.org/wiki/Lock_(computer_science)>. A
>>>> non-blocking algorithm is *lock-free* if there is guaranteed
>>>> system-wide progress
>>>> <https://en.wikipedia.org/wiki/Resource_starvation>, and *wait-free*
>>>> if there is also guaranteed per-thread progress.
>>>>
>>>
>>> Your algorithm is lockless (i.e. without lock) but not lock-free
>>> according to the definition of lock-freedom.
>>>
>>> As you aptly said, in the practical case wait-free or lock-free is not a
>>> necessary property to be able to scale a data-structure to multiple
>>> contending threads.
>>> For example, Dmitry's famous MPSC node-based queue
>>> <http://www.1024cores.net/home/lock-free-algorithms/queues/non-intrusive-mpsc-node-based-queue>
>>> is also lockless, as if a producer dies in the wrong place, the consumer
>>> cannot get to the end of the queue.
>>> There are lockless (or linearizable) variants of it but AFAIK, it
>>> requires giving up some performance.
>>> In practice, using this queue means assuming that threads don't die,
>>> which is a reasonable assumption most of the time.
>>>
>>> Lock-freedom or wait-freedom is valuable when you need strong
>>> worst-case guarantees (real-time scheduling for example) or cannot rely on
>>> the OS (because you might be writing one or you have specific
>>> fault-tolerance requirements).
>>>
>>> --
>>> Mamy
>>>
>>> On Wed, Jan 15, 2020 at 10:58 AM bittnkr  wrote:
>>>
>>>> > Let me ask first: what is your definition of a lock-free algorithm?
>>>>
>>>> When there are no locks/waits between threads/actors.
>>>>
>>>> And a lock happens when some thread A cannot proceed while another
>>>> thread B has not finished its job.
>>>>
>>>> But for practical reasons, I believe that the most important
>>>> feature/characteristic of a lock-free algorithm is to be O(1): the number
>>>> of elements must not impact the performance. It doesn't matter if you are
>>>> handling 2 or 10,000 threads, the c

Re: [lock-free] Re: 3-thread consensus.

2020-01-15 Thread bittnkr
> Your algorithm is lockless (i.e. without lock) but not lock-free
according to the definition of lock-freedom.

You make me wonder whether my cellphone is wireless or wire-free. :)

Thinking carefully about it, I arrived at an axiom:

An algorithm can be called lock-free if it's O(1).

A necessary and sufficient condition.


On Wed, Jan 15, 2020 at 07:36, Mamy Ratsimbazafy wrote:

> The definition of obstruction-free, lock-free or wait-free is not a
> guarantee of performance; it's a guarantee of progress, in particular when
> threads can die (non-blocking algorithm
> <https://en.wikipedia.org/wiki/Non-blocking_algorithm>)
>
> In computer science <https://en.wikipedia.org/wiki/Computer_science>, an
>> algorithm <https://en.wikipedia.org/wiki/Algorithm> is called
>> *non-blocking* if failure or suspension
>> <https://en.wikipedia.org/wiki/Scheduling_(computing)> of any thread
>> <https://en.wikipedia.org/wiki/Thread_(computing)> cannot cause failure
>> or suspension of another thread;[1]
>> <https://en.wikipedia.org/wiki/Non-blocking_algorithm#cite_note-1> for
>> some operations, these algorithms provide a useful alternative to
>> traditional blocking implementations
>> <https://en.wikipedia.org/wiki/Lock_(computer_science)>. A non-blocking
>> algorithm is *lock-free* if there is guaranteed system-wide progress
>> <https://en.wikipedia.org/wiki/Resource_starvation>, and *wait-free* if
>> there is also guaranteed per-thread progress.
>>
>
> Your algorithm is lockless (i.e. without lock) but not lock-free according
> to the definition of lock-freedom.
>
> As you aptly said, in the practical case wait-free or lock-free is not a
> necessary property to be able to scale a data-structure to multiple
> contending threads.
> For example, Dmitry's famous MPSC node-based queue
> <http://www.1024cores.net/home/lock-free-algorithms/queues/non-intrusive-mpsc-node-based-queue>
> is also lockless, as if a producer dies in the wrong place, the consumer
> cannot get to the end of the queue.
> There are lockless (or linearizable) variants of it but AFAIK, it requires
> giving up some performance.
> In practice, using this queue means assuming that threads don't die,
> which is a reasonable assumption most of the time.
>
> Lock-freedom or wait-freedom is valuable when you need strong
> worst-case guarantees (real-time scheduling for example) or cannot rely on
> the OS (because you might be writing one or you have specific
> fault-tolerance requirements).
>
> --
> Mamy
>
> On Wed, Jan 15, 2020 at 10:58 AM bittnkr  wrote:
>
>> > Let me ask first: what is your definition of a lock-free algorithm?
>>
>> When there are no locks/waits between threads/actors.
>>
>> And a lock happens when some thread A cannot proceed while another thread
>> B has not finished its job.
>>
>> But for practical reasons, I believe that the most important
>> feature/characteristic of a lock-free algorithm is to be O(1): the number
>> of elements must not impact the performance. It doesn't matter if you are
>> handling 2 or 10,000 threads, the cost per operation must be the same.
>>
>>
>>
>> On Wed, Jan 15, 2020 at 04:17, Dmitry Vyukov wrote:
>>
>>> On Tue, Jan 14, 2020 at 4:03 PM  wrote:
>>> >
>>> > Hi Manuel,
>>> >
>>> > How would a processing line be broken between the CAS and the release
>>> of the seat?
>>> >
>>> > This will happen only if the OS preempts the thread at that point and
>>> never comes back.
>>> >
>>> > This is what I mean by "my entire system is broken"
>>> >
>>> > Only an OS scheduling failure can produce that.
>>> >
>>> > Otherwise it is impossible for a thread not to finish that processing
>>> line.
>>> >
>>> > Can you think of another possibility?
>>>
>>> Let me ask first: what is your definition of a lock-free algorithm?
>>>
>>> > On Tuesday, 14 January 2020 06:55:14 UTC-2, Manuel Pöter wrote:
>>> >>
>>> >> The lock-free property guarantees that at any time at least one
>>> thread is making progress in a finite number of steps. Or to put this more
>>> generally: a stalled thread must not cause all other threads to stall
>>> indefinitely.
>>> >> The arguments about lock-freedom (or the lack thereof) are usually
>>> based on the (somewhat artificial) assumption that any thread can fail at
>>> any time - i.e., simply stop executing. If suc

Re: [lock-free] Re: 3-thread consensus.

2020-01-15 Thread bittnkr
> Let me ask first: what is your definition of a lock-free algorithm?

When there are no locks/waits between threads/actors.

And a lock happens when some thread A cannot proceed while another thread B
has not finished its job.

But for practical reasons, I believe that the most important
feature/characteristic of a lock-free algorithm is to be O(1): the number
of elements must not impact the performance. It doesn't matter if you are
handling 2 or 10,000 threads, the cost per operation must be the same.
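For reference, constant cost per operation is not actually guaranteed by lock-freedom. In a lock-free CAS loop the operation always completes, but each failed CAS means some other thread changed the value in the meantime, so the retry count under contention is unbounded. A deterministic single-threaded sketch (my own; the `interfere` hook is a hypothetical stand-in for concurrent threads):

```cpp
#include <atomic>
#include <functional>

std::atomic<int> x{0};

// Increment x with a CAS loop. `interfere` models work done by other
// threads between our read of x and our CAS attempt. Returns the number
// of failed CAS attempts before the increment finally lands.
int cas_increment(const std::function<void()>& interfere) {
  int failures = 0;
  int v = x.load();
  for (;;) {
    interfere(); // another thread may update x right here
    if (x.compare_exchange_strong(v, v + 1)) return failures;
    ++failures;  // on failure, v was reloaded with the fresh value of x
  }
}
```

Each unit of interference costs exactly one extra retry: the loop is lock-free (someone always makes progress), yet its per-operation cost grows with contention rather than staying constant.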



On Wed, Jan 15, 2020 at 04:17, Dmitry Vyukov wrote:

> On Tue, Jan 14, 2020 at 4:03 PM  wrote:
> >
> > Hi Manuel,
> >
> > How would a processing line be broken between the CAS and the release of
> the seat?
> >
> > This will happen only if the OS preempts the thread at that point and
> never comes back.
> >
> > This is what I mean by "my entire system is broken"
> >
> > Only an OS scheduling failure can produce that.
> >
> > Otherwise it is impossible for a thread not to finish that processing
> line.
> >
> > Can you think of another possibility?
>
> Let me ask first: what is your definition of a lock-free algorithm?
>
> > On Tuesday, 14 January 2020 06:55:14 UTC-2, Manuel Pöter wrote:
> >>
> >> The lock-free property guarantees that at any time at least one thread
> is making progress in a finite number of steps. Or to put this more
> generally: a stalled thread must not cause all other threads to stall
> indefinitely.
> >> The arguments about lock-freedom (or the lack thereof) are usually
> based on the (somewhat artificial) assumption that any thread can fail at
> any time - i.e., simply stop executing. If such a failed thread causes the
> whole system to grind to a halt, then it is not lock-free.
> >>
> >> You wrote yourself that "If a thread dies at that point, my entire
> system is broken...", so it is definitely not lock-free.
> >> That being said, lock-freedom is a nice property, but by no means
> indispensable for scalability. Several of Dmitry's algorithms are not
> lock-free (like the bounded MPMC queue), which does not mean that they do
> not scale.
> >>
> >> Alistarh et al. showed that lock-free algorithms are practically
> wait-free. I suppose the same could be shown for several concurrent
> algorithms that are not strictly lock-free. So it is not so much important
> whether an algorithm is lock-free or not, but whether it works well in
> practice for the use case it is designed for.
> >>
> >>
> >> On Tuesday, 14 January 2020 03:16:23 UTC+1, bittnkr wrote:
> >>>
> >>> >
> >>> Well, if producer is preempted, consumers are blocked from progressing.
> >>>
> >>> Sorry, but this isn't true.
> >>>
> >>> The consumers are preempted only in the case of an empty queue, which
> isn't a lock.
> >>>
> >>> Meaning that there is nothing to do. If you don't have active
> producers, there is nothing to consume.
> >>>
> >>> How can you see a lock there?
> >>>
> >>> Please
> >>>
> >>> On Mon, Jan 13, 2020 at 04:35, Dmitry Vyukov wrote:
> >>>>
> >>>> +lock-free group again, please don't drop it
> >>>>
> >>>> On Mon, Jan 13, 2020 at 8:19 AM bittnkr  wrote:
> >>>> >
> >>>> > If a thread dies at that point, my entire system is broken...
> >>>>
> >>>> Which means it's not lock-free.
> >>>>
> >>>> > But It can preempts without any problem at near zero cost.
> >>>>
> >>>> Well, if producer is preempted, consumers are blocked from
> >>>> progressing. This is 100% equivalent to a mutex. If a thread is
> >>>> preempted while holding a mutex, it also does not result in any
> >>>> correctness problems.
> >>>>
> >>>>
> >>>> > On Mon, Jan 13, 2020 at 03:55, Dmitry Vyukov wrote:
> >>>> >>
> >>>> >> On Sun, Jan 12, 2020 at 7:33 PM bittnkr  wrote:
> >>>> >> >
> >>>> >> >
> >>>> >> > > This blocks consumes from progressing (consuming next produced
> items)
> >>>> >> > and is effectively a mutex.
> >>>> >> >
> >>>> >> > Suppose the thread A got a local copy of tail in t then is
> preempted,
> >>>> >> >
> >>>> >>

[lock-free] Re: 3-thread consensus.

2020-01-14 Thread bittnkr
Thinking about my own question, I find that abnormal termination is another
possibility, yet in that case the entire program is finished/dead/broken.

On Tuesday, 14 January 2020 13:03:42 UTC-2, bit...@gmail.com wrote:
>
> Hi Manuel, 
>
> How would a processing line be broken between the CAS and the release of
> the seat?
>
> This will happen *only if* the OS preempts the thread at that point and
> never comes back.
>
> This is what I mean by "my entire system is broken"
>
> *Only an OS scheduling failure can produce that.*
>
> Otherwise it is impossible for a thread not to finish that processing
> line.
>
> Can you think of another possibility?
>
>
> On Tuesday, 14 January 2020 06:55:14 UTC-2, Manuel Pöter wrote:
>>
>> The lock-free property guarantees that at any time at least one thread is 
>> making progress in a finite number of steps. Or to put this more generally: 
>> a stalled thread must not cause all other threads to stall indefinitely.
>> The arguments about lock-freedom (or the lack thereof) are usually based 
>> on the (somewhat artificial) assumption that any thread can fail at any 
>> time - i.e., simply stop executing. If such a failed thread causes the 
>> whole system to grind to a halt, then it is not lock-free.
>>
>> You wrote yourself that "If a thread dies at that point, my entire system 
>> is broken...", so it is definitely not lock-free.
>> That being said, lock-freedom is a nice property, but by no means 
>> indispensable for scalability. Several of Dmitry's algorithms are not 
>> lock-free (like the bounded MPMC queue), which does not mean that they do 
>> not scale.
>>
>> Alistarh et al. showed that lock-free algorithms are practically 
>> wait-free. I suppose the same could be shown for several concurrent 
>> algorithms that are not strictly lock-free. So it is not so much important 
>> whether an algorithm is lock-free or not, but whether it works well in 
>> practice for the use case it is designed for.
>>
>>
>> On Tuesday, 14 January 2020 03:16:23 UTC+1, bittnkr wrote:
>>>
>>> > 
>>> Well, if producer is preempted, consumers are blocked from progressing.
>>>
>>> Sorry, but this isn't true. 
>>>
>>> The consumers are preempted only in the case of an empty queue, which
>>> isn't a lock.
>>>
>>> Meaning that there is nothing to do. If you don't have active producers,
>>> there is nothing to consume.
>>>
>>> How can you see a lock there?
>>>
>>> Please
>>>
>>> On Mon, Jan 13, 2020 at 04:35, Dmitry Vyukov wrote:
>>>
>>>> +lock-free group again, please don't drop it
>>>>
>>>> On Mon, Jan 13, 2020 at 8:19 AM bittnkr  wrote:
>>>> >
>>>> > If a thread dies at that point, my entire system is broken...
>>>>
>>>> Which means it's not lock-free.
>>>>
>>>> > But It can preempts without any problem at near zero cost.
>>>>
>>>> Well, if producer is preempted, consumers are blocked from
>>>> progressing. This is 100% equivalent to a mutex. If a thread is
>>>> preempted while holding a mutex, it also does not result in any
>>>> correctness problems.
>>>>
>>>>
>>>> > On Mon, Jan 13, 2020 at 03:55, Dmitry Vyukov wrote:
>>>> >>
>>>> >> On Sun, Jan 12, 2020 at 7:33 PM bittnkr  wrote:
>>>> >> >
>>>> >> >
>>>> >> > > This blocks consumes from progressing (consuming next produced 
>>>> items)
>>>> >> > and is effectively a mutex.
>>>> >> >
>>>> >> > Suppose the thread A got a local copy of tail in t then is 
>>>> preempted,
>>>> >> >
>>>> >> > another thread will get the same tail and get the seat normally.
>>>> >> >
>>>> >> > When the previous thread retakes the line, the CAS will fail, 
>>>> because the seat was taken.
>>>> >> >
>>>> >> > Restarting the while without any kind of blocking.
>>>> >> >
>>>> >> > Where do you see a mutex here?
>>>> >>
>>>> >> I mean preemption between succeeding CAS and writing element /NULL 
>>>> to the array.
>>>> >>
>>>> >> If a thread

[lock-free] Re: 3-thread consensus.

2020-01-14 Thread bittnkr
Hi Manuel, 

How would a processing line be broken between the CAS and the release of
the seat?

This will happen *only if* the OS preempts the thread at that point and
never comes back.

This is what I mean by "my entire system is broken"

*Only an OS scheduling failure can produce that.*

Otherwise it is impossible for a thread not to finish that processing
line.

Can you think of another possibility?


On Tuesday, 14 January 2020 06:55:14 UTC-2, Manuel Pöter wrote:
>
> The lock-free property guarantees that at any time at least one thread is 
> making progress in a finite number of steps. Or to put this more generally: 
> a stalled thread must not cause all other threads to stall indefinitely.
> The arguments about lock-freedom (or the lack thereof) are usually based 
> on the (somewhat artificial) assumption that any thread can fail at any 
> time - i.e., simply stop executing. If such a failed thread causes the 
> whole system to grind to a halt, then it is not lock-free.
>
> You wrote yourself that "If a thread dies at that point, my entire system 
> is broken...", so it is definitely not lock-free.
> That being said, lock-freedom is a nice property, but by no means 
> indispensable for scalability. Several of Dmitry's algorithms are not 
> lock-free (like the bounded MPMC queue), which does not mean that they do 
> not scale.
>
> Alistarh et al. showed that lock-free algorithms are practically 
> wait-free. I suppose the same could be shown for several concurrent 
> algorithms that are not strictly lock-free. So it is not so much important 
> whether an algorithm is lock-free or not, but whether it works well in 
> practice for the use case it is designed for.
>
>
> On Tuesday, 14 January 2020 03:16:23 UTC+1, bittnkr wrote:
>>
>> > 
>> Well, if producer is preempted, consumers are blocked from progressing.
>>
>> Sorry, but this isn't true. 
>>
>> The consumers are preempted only in the case of an empty queue, which
>> isn't a lock.
>>
>> Meaning that there is nothing to do. If you don't have active producers,
>> there is nothing to consume.
>>
>> How can you see a lock there?
>>
>> Please
>>
>> On Mon, Jan 13, 2020 at 04:35, Dmitry Vyukov wrote:
>>
>>> +lock-free group again, please don't drop it
>>>
>>> On Mon, Jan 13, 2020 at 8:19 AM bittnkr  wrote:
>>> >
>>> > If a thread dies at that point, my entire system is broken...
>>>
>>> Which means it's not lock-free.
>>>
>>> > But It can preempts without any problem at near zero cost.
>>>
>>> Well, if producer is preempted, consumers are blocked from
>>> progressing. This is 100% equivalent to a mutex. If a thread is
>>> preempted while holding a mutex, it also does not result in any
>>> correctness problems.
>>>
>>>
>>> > On Mon, Jan 13, 2020 at 03:55, Dmitry Vyukov wrote:
>>> >>
>>> >> On Sun, Jan 12, 2020 at 7:33 PM bittnkr  wrote:
>>> >> >
>>> >> >
>>> >> > > This blocks consumes from progressing (consuming next produced 
>>> items)
>>> >> > and is effectively a mutex.
>>> >> >
>>> >> > Suppose the thread A got a local copy of tail in t then is 
>>> preempted,
>>> >> >
>>> >> > another thread will get the same tail and get the seat normally.
>>> >> >
>>> >> > When the previous thread retakes the line, the CAS will fail, 
>>> because the seat was taken.
>>> >> >
>>> >> > Restarting the while without any kind of blocking.
>>> >> >
>>> >> > Where do you see a mutex here?
>>> >>
>>> >> I mean preemption between succeeding CAS and writing element /NULL to 
>>> the array.
>>> >>
>>> >> If a thread is terminated at that point, the whole queue is broken (no
>>> >> termination-safety).
>>> >>
>>> >>
>>> >> > On Sun, Jan 12, 2020 at 04:49, Dmitry Vyukov wrote:
>>> >> >>
>>> >> >> On Sat, Jan 11, 2020 at 9:26 AM bittnkr  wrote:
>>> >> >> >
>>> >> >> > Good observations. Thank you.
>>> >> >> >
>>> >> >> > If the thread preempts on those points, the seat position will 
>>> be held on local variables h and t.
>>> >

Re: [lock-free] Re: 3-thread consensus.

2020-01-14 Thread bittnkr
Hi Brendon,

Thanks for the observations.

At that point, between the CAS and the effective release of the seat, *all
other threads are free to take another seat* without restriction.

The same holds at any point in the code. There is not a single point where
any thread is blocked from getting a seat.

That's why I'm sure it is lock-free.

The queue will only stall if it gets full (all seats *after that point are
occupied* before the next cycle).

And only if the consumers get nothing.

If you have a queue of 64k positions, the other threads will have the same
number of chances to continue without locking.

With our current memory availability, a queue with a million positions is
quite feasible. But past some point, the benchmarks show that there is no
speed improvement from increasing the buffer size. The optimum size must
fit inside the processor cache; the optimum I found is only 64 positions.
Take a look at the benchmarks
<https://github.com/bittnkr/uniq#benchmarks---measuring-the-flow>

I've updated the docs <https://github.com/bittnkr/uniq> and I think they
are clearer now.

Please run the code, stress it, increase the number of threads, reduce and
increase the buffer size, and change the nature of the data pushed.

If you do that, I believe you will agree with me: it's as lock-free as it
is possible to be.
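Since the dispute keeps circling the window between the CAS and the release of the seat, that exact interleaving can be simulated deterministically in a single thread. The sketch below is my own simplification, not the uniq code itself (names follow the snippets quoted in this thread): it claims a slot the way push() does, "preempts" the producer before the item is published, and checks whether any amount of retrying lets a consumer complete in the meantime.

```cpp
#include <atomic>
#include <vector>

// A simplified model of the queue's state, using the names from the thread.
constexpr int SIZE = 4, MASK = SIZE - 1;
std::atomic<int> in{0}, out{0};
std::vector<int> buffer(SIZE);
std::vector<int> isFree(SIZE, 1); // 1 = slot not yet published

// First half of push(): claim a slot, then stop *before* writing the item.
int push_claim() {
  int i = in.load();
  in.compare_exchange_strong(i, i + 1); // uncontended here, so it succeeds
  return i; // the producer is now "preempted" before buffer/isFree are set
}

// Second half of push(): publish the item and release the seat.
void push_publish(int i, int item) {
  buffer[i & MASK] = item;
  isFree[i & MASK] = 0;
}

// One bounded attempt at pop(): true iff an item was actually taken.
bool try_pop(int& item) {
  int o = out.load();
  if (o == in.load()) return false;   // queue looks empty
  if (isFree[o & MASK]) return false; // slot claimed but not yet published
  if (!out.compare_exchange_strong(o, o + 1)) return false;
  item = buffer[o & MASK];
  isFree[o & MASK] = 1;
  return true;
}

// Returns true iff consumers are stuck until the producer resumes.
bool stalled_producer_blocks_consumer() {
  int i = push_claim(); // producer stalls mid-push
  int item = 0;
  for (int k = 0; k < 1000; ++k)
    if (try_pop(item)) return false; // no retry succeeds while stalled
  push_publish(i, 42); // producer resumes and finishes its push
  return try_pop(item) && item == 42;
}
```

Whether that window counts as a "lock" is exactly the disagreement in this thread; the sketch only shows that a consumer's progress depends on the suspended producer resuming to complete its push.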


On Tuesday, 14 January 2020 04:38:32 UTC-2, Brendon Costa wrote:
>
> I was interested in your queue so had a look as well. Thanks for sharing 
> it. I find it interesting looking at other implementations.
>
> I think Dmitry is correct. See the comment inline below. From my 
> understanding, the code locks the consumers waiting on a producer at the 
> point in time where I placed the comment in the code. It has a slight 
> advantage over a standard lock impl, though, as other producers are not 
> blocked, only other consumers. 
>
>   int push(T item) {
>     int i;
>     do {
>       i = in;
>       while (i - out > mask)
>         sched_yield(); // if full, wait for space
>     } while (!isFree[i & mask] || !in.compare_exchange_weak(i, i + 1));
>
>     // At this point in the code, the producer has incremented "in". Until
>     // this thread sets isFree[i & mask] = 0, all consumers are blocked
>     // waiting on isFree[o & mask] to be false, even if another producer
>     // thread adds more data. So if the OS pre-empts this thread at this
>     // point in time, no consumers will make any progress until the OS
>     // re-schedules this thread to finish its work.
>
>     buffer[i & mask] = item;
>     isFree[i & mask] = 0;
>     return i;
>   }
>
>
>
> On Tue, 14 Jan 2020 at 13:16, bittnkr > 
> wrote:
>
>> > Well, if producer is preempted, consumers are blocked from progressing.
>>
>> Sorry, but this isn't true. 
>>
>> The consumers are preempted only in the case of an empty queue, which isn't a 
>> lock. 
>>
>> Meaning that there is nothing to do. If you don't have active producers, 
>> there is nothing to consume.
>>
>> How can you see a lock there?
>>
>> Please
>>
>> On Mon, Jan 13, 2020, 04:35, Dmitry Vyukov > > wrote:
>>
>>> +lock-free group again, please don't drop it
>>>
>>> On Mon, Jan 13, 2020 at 8:19 AM bittnkr > 
>>> wrote:
>>> >
>>> > If a thread dies at that point, my entire system is broken...
>>>
>>> Which means it's not lock-free.
>>>
>>> > But it can preempt without any problem at near zero cost.
>>>
>>> Well, if producer is preempted, consumers are blocked from
>>> progressing. This is 100% equivalent to a mutex. If a thread is
>>> preempted while holding a mutex, it also does not result in any
>>> correctness problems.
>>>
>>>
>>> > On Mon, Jan 13, 2020, 03:55, Dmitry Vyukov >> > wrote:
>>> >>
>>> >> On Sun, Jan 12, 2020 at 7:33 PM bittnkr >> > wrote:
>>> >> >
>>> >> >
>>> >> > > This blocks consumers from progressing (consuming next produced 
>>> items)
>>> >> > and is effectively a mutex.
>>> >> >
>>> >> > Suppose thread A gets a local copy of tail in t and then is 
>>> preempted,
>>> >> >
>>> >> > another thread will get the same tail and get the seat normally.
>>> >> >
>>> >> > When the previous thread resumes, the CAS will fail, 
>>> because the seat was taken.
>>> >> >
>>> >> > Restart

[lock-free] Re: 3-thread consensus.

2020-01-13 Thread bittnkr
> Well, if producer is preempted, consumers are blocked from progressing.

Sorry, but this isn't true.

The consumers are preempted only in the case of an empty queue, which isn't a
lock.

Meaning that there is nothing to do. If you don't have active producers,
there is nothing to consume.

How can you see a lock there?

Please

On Mon, Jan 13, 2020, 04:35, Dmitry Vyukov  wrote:

> +lock-free group again, please don't drop it
>
> On Mon, Jan 13, 2020 at 8:19 AM bittnkr  wrote:
> >
> > If a thread dies at that point, my entire system is broken...
>
> Which means it's not lock-free.
>
> > But it can preempt without any problem at near zero cost.
>
> Well, if producer is preempted, consumers are blocked from
> progressing. This is 100% equivalent to a mutex. If a thread is
> preempted while holding a mutex, it also does not result in any
> correctness problems.
>
>
> > On Mon, Jan 13, 2020, 03:55, Dmitry Vyukov 
> wrote:
> >>
> >> On Sun, Jan 12, 2020 at 7:33 PM bittnkr  wrote:
> >> >
> >> >
> >> > > This blocks consumers from progressing (consuming next produced
> items)
> >> > and is effectively a mutex.
> >> >
> >> > Suppose thread A gets a local copy of tail in t and then is preempted,
> >> >
> >> > another thread will get the same tail and get the seat normally.
> >> >
> >> > When the previous thread resumes, the CAS will fail, because
> the seat was taken.
> >> >
> >> > Restarting the while loop without any kind of blocking.
> >> >
> >> > Where do you see a mutex here?
> >>
> >> I mean preemption between a successful CAS and writing the element/NULL to
> the array.
> >>
> >> If a thread is terminated at that point, the whole queue is broken (no
> >> termination-safety).
> >>
> >>
> >> > On Sun, Jan 12, 2020, 04:49, Dmitry Vyukov 
> wrote:
> >> >>
> >> >> On Sat, Jan 11, 2020 at 9:26 AM bittnkr  wrote:
> >> >> >
> >> >> > Good observations. Thank you.
> >> >> >
> >> >> > If the thread is preempted at those points, the seat position will be
> held in local variables h and t.
> >> >>
> >> >> This blocks consumers from progressing (consuming next produced items)
> >> >> and is effectively a mutex. This makes the algorithm non-termination-safe
> >> >> and non-lock-free.
> >> >>
> >> >>
> >> >> > After the thread is resumed, the CAS can fail, and the loop
> will just restart in the normal flow, without any locking.
> >> >> >
> >> >> > I updated the docs, I think is clearer now.
> >> >> >
> >> >> >> On Sat, Jan 11, 2020 at 05:38, Dmitry Vyukov <
> dvyu...@gmail.com> wrote:
> >> >> >>
> >> >> >> On Sat, Jan 11, 2020 at 4:09 AM bittnkr 
> wrote:
> >> >> >> >
> >> >> >> > Hello Dmitry and fellows from the group.
> >> >> >> >
> >> >> >> > If you look carefully, you will see that there is no blocking,
> just a simple CAS, even though the buffer is a common buffer.
> >> >> >>
> >> >> >> But suppose you look inside a spin lock, or you take a
> >> >> >> spin-lock-based algorithm and inline all the spin lock code into the
> >> >> >> algorithm. What you will see is exactly the same -- no blocking,
> just
> >> >> >> a CAS. However, it does not really change anything; it's still a
> >> >> >> blocking, mutex-based algorithm.
> >> >> >> One can also combine spin lock state with some data, e.g. a
> particular
> >> >> >> value of a data member means "locked" and blocks progress of other
> >> >> >> threads. It makes spin lock even less visible, but again does not
> >> >> >> change anything on conceptual level.
> >> >> >>
> >> >> >> If I am reading the algorithm correctly, if a thread is preempted
> >> >> >> between these 2 operations:
> >> >> >>
> >> >> >>  } while ( (data[t & mask]) || (CompareExchange(tail, t+1, t) !=
> t) )
> >> >> >>   data[t & mask] = item // now is safe to update the buffer
> >> >> >>
> >> >> >> It effectively locks a mutex
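[The archived message is truncated here. To make Dmitry's argument concrete, here is a small sketch of my own: a CAS-based spin lock inlined into an increment function. The names (locked, critical_increment, demo) are hypothetical. It shows how "no blocking, just a CAS" can still be a mutex: if the winning thread is paused between the CAS and the releasing store, every other thread spins, exactly like consumers spinning on a seat that a producer claimed with CAS(tail) but has not yet published with data[t & mask] = item.]

```cpp
#include <atomic>
#include <thread>

// A CAS-based spin lock, inlined: "no blocking, just a CAS" -- yet it is
// still conceptually a mutex. Pause the winner between the CAS and the
// releasing store and everyone else spins.
std::atomic<int> locked{0};
long shared_counter = 0;

void critical_increment() {
  int expected = 0;
  while (!locked.compare_exchange_weak(expected, 1))  // "just a CAS"
    expected = 0;        // lost the race: reset and spin
  ++shared_counter;      // corresponds to the buffer write in push()
  locked.store(0);       // corresponds to isFree[i & mask] = 0
}

// Two threads increment 1000 times each; the lock keeps the count exact.
long demo() {
  shared_counter = 0;
  auto work = [] { for (int i = 0; i < 1000; ++i) critical_increment(); };
  std::thread a(work), b(work);
  a.join();
  b.join();
  return shared_counter;
}
```

The structural parallel is the point: the window between a successful CAS and the store that makes the result visible to other threads is a critical section, whatever the data layout looks like.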