The definition of obstruction-free, lock-free or wait-free is not a
guarantee of performance; it's a guarantee of progress, in particular when
a thread can die (see non-blocking algorithm
<https://en.wikipedia.org/wiki/Non-blocking_algorithm>):

> In computer science <https://en.wikipedia.org/wiki/Computer_science>, an
> algorithm <https://en.wikipedia.org/wiki/Algorithm> is called
> *non-blocking* if failure or suspension
> <https://en.wikipedia.org/wiki/Scheduling_(computing)> of any thread
> <https://en.wikipedia.org/wiki/Thread_(computing)> cannot cause failure
> or suspension of another thread;[1]
> <https://en.wikipedia.org/wiki/Non-blocking_algorithm#cite_note-1> for
> some operations, these algorithms provide a useful alternative to
> traditional blocking implementations
> <https://en.wikipedia.org/wiki/Lock_(computer_science)>. A non-blocking
> algorithm is *lock-free* if there is guaranteed system-wide progress
> <https://en.wikipedia.org/wiki/Resource_starvation>, and *wait-free* if
> there is also guaranteed per-thread progress.
>

Your algorithm is lockless (i.e., without locks) but not lock-free according
to the definition of lock-freedom.
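To make the distinction concrete, here is a minimal sketch of the
reserve-then-write slot protocol under discussion (my own reconstruction,
not your actual code; the names, sizes, and relaxed-by-default atomics are
illustrative). No lock object exists anywhere, yet a producer that dies
between the CAS and the store leaves a claimed-but-empty slot that every
consumer stalls on, which is exactly what the lock-free definition rules
out:

```cpp
#include <atomic>
#include <cstddef>

// "Lockless but not lock-free": a producer claims a slot with CAS, then
// writes the item in a second, separate step.
constexpr std::size_t kSize = 4;
constexpr std::size_t kMask = kSize - 1;

std::atomic<int> data[kSize];        // 0 means "slot free"; items are non-zero
std::atomic<std::size_t> tail{0};
std::atomic<std::size_t> head{0};

bool try_push(int item) {            // item must be non-zero
    std::size_t t = tail.load();
    if (data[t & kMask].load() != 0) return false;               // slot occupied
    if (!tail.compare_exchange_strong(t, t + 1)) return false;   // lost the race
    // A producer killed exactly here owns slot t but never fills it:
    // consumers keep seeing 0 in that slot and can make no progress.
    data[t & kMask].store(item);
    return true;
}

bool try_pop(int& out) {
    std::size_t h = head.load();
    int v = data[h & kMask].load();
    if (v == 0) return false;        // empty, or a producer stalled mid-push
    if (!head.compare_exchange_strong(h, h + 1)) return false;   // lost the race
    out = v;
    data[h & kMask].store(0);        // release the seat for producers
    return true;
}
```

A single-threaded round trip works fine; the progress problem only shows
up when a producer stops between the two marked steps.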

As you aptly said, in practice wait-freedom or lock-freedom is not a
necessary property for scaling a data structure to multiple contending
threads.
For example, Dmitry's famous MPSC node-based queue
<http://www.1024cores.net/home/lock-free-algorithms/queues/non-intrusive-mpsc-node-based-queue>
is also lockless: if a producer dies in the wrong place, the consumer
cannot get to the end of the queue.
There are lock-free (or linearizable) variants of it but AFAIK, they require
giving up some performance.
In practice, using this queue means assuming that threads don't die, which
is a reasonable assumption most of the time.
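For reference, the push/pop skeleton of that MPSC queue looks roughly like
this (a simplified, intrusive-style sketch from memory with my own names;
see the 1024cores page for the real, non-intrusive version). The comment
marks the exact spot where a dying producer cuts the list:

```cpp
#include <atomic>

// MPSC node-based queue sketch: producers atomically swap the shared tail,
// then link the previous node to the new one. A producer that dies between
// those two steps leaves the chain cut: the consumer sees a null `next`
// and cannot reach nodes pushed after the gap, although no lock is taken.
struct Node {
    std::atomic<Node*> next{nullptr};
    int value{0};
};

struct MpscQueue {
    Node stub;                          // dummy node so the list is never empty
    std::atomic<Node*> tail{&stub};     // shared among producers
    Node* head{&stub};                  // touched by the single consumer only

    void push(Node* n) {
        n->next.store(nullptr);
        Node* prev = tail.exchange(n);  // step 1: claim position (atomic)
        // A producer killed exactly here has claimed its spot but not
        // published it: the consumer can never traverse past `prev`.
        prev->next.store(n);            // step 2: publish the link
    }

    Node* pop() {                       // single consumer only
        Node* next = head->next.load();
        if (next == nullptr) return nullptr;  // empty, or a producer stalled mid-push
        head = next;
        return next;                    // node carrying the value
    }
};
```

Note that `pop` returning null is ambiguous between "empty" and "a producer
stalled between the two steps", which is why the queue is lockless but not
termination-safe.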

Lock-freedom or wait-freedom is valuable when you need strong worst-case
guarantees (real-time scheduling, for example) or cannot rely on the OS
(because you might be writing one, or you have specific fault-tolerance
requirements).

--
Mamy

On Wed, Jan 15, 2020 at 10:58 AM bittnkr <bitt...@gmail.com> wrote:

> > Let me ask first: what is your definition of a lock-free algorithm?
>
> When there are no locks/waits between threads/actors.
>
> And a lock happens when some thread A cannot proceed while another thread
> B does not finish its job.
>
> But for practical reasons, I believe that the most important
> feature/characteristic of a lock-free algorithm is to be O(1): the number
> of elements must not impact the performance. It doesn't matter if you are
> handling 2 or 10,000 threads; the cost per operation must be the same.
>
>
>
> On Wed, Jan 15, 2020 at 04:17, Dmitry Vyukov <dvyu...@gmail.com>
> wrote:
>
>> On Tue, Jan 14, 2020 at 4:03 PM <bitt...@gmail.com> wrote:
>> >
>> > Hi Manuel,
>> >
>> > How would a processing line be broken between the CAS and the release
>> of the seat?
>> >
>> > This will happen only if the OS preempts the thread at that point and
>> never gets back.
>> >
>> > This is what I mean by "my entire system is broken".
>> >
>> > Only an OS scheduling failure can produce that.
>> >
>> > Otherwise it is impossible for a thread not to finish that processing
>> line.
>> >
>> > Can you think of another possibility?
>>
>> Let me ask first: what is your definition of a lock-free algorithm?
>>
>> > On Tuesday, 14 January 2020 at 06:55:14 UTC-2, Manuel Pöter
>> wrote:
>> >>
>> >> The lock-free property guarantees that at any time at least one thread
>> is making progress in a finite number of steps. Or to put this more
>> generally: a stalled thread must not cause all other threads to stall
>> indefinitely.
>> >> The arguments about lock-freedom (or the lack thereof) are usually
>> based on the (somewhat artificial) assumption that any thread can fail at
>> any time - i.e., simply stop executing. If such a failed thread causes the
>> whole system to grind to a halt, then it is not lock-free.
>> >>
>> >> You wrote yourself that "If a thread dies at that point, my entire
>> system is broken...", so it is definitely not lock-free.
>> >> That being said, lock-freedom is a nice property, but by no means
>> indispensable for scalability. Several of Dmitry's algorithms are not
>> lock-free (like the bounded MPMC queue), which does not mean that they do
>> not scale.
>> >>
>> >> Alistarh et al. showed that lock-free algorithms are practically
>> wait-free. I suppose the same could be shown for several concurrent
>> algorithms that are not strictly lock-free. So it is not so much important
>> whether an algorithm is lock-free or not, but whether it works well in
>> practice for the use case it is designed for.
>> >>
>> >>
>> >> On Tuesday, 14 January 2020 03:16:23 UTC+1, bittnkr wrote:
>> >>>
>> >>> >
>> >>> Well, if a producer is preempted, consumers are blocked from
>> progressing.
>> >>>
>> >>> Sorry, but this isn't true.
>> >>>
>> >>> The consumers are preempted only in case of an empty queue, which
>> isn't a lock.
>> >>>
>> >>> Meaning that there is nothing to do. If you don't have active
>> producers, there is nothing to consume.
>> >>>
>> >>> How can you see a lock there?
>> >>>
>> >>> Please
>> >>>
>> >>> On Mon, Jan 13, 2020 at 04:35, Dmitry Vyukov <dvy...@gmail.com>
>> wrote:
>> >>>>
>> >>>> +lock-free group again, please don't drop it
>> >>>>
>> >>>> On Mon, Jan 13, 2020 at 8:19 AM bittnkr <bit...@gmail.com> wrote:
>> >>>> >
>> >>>> > If a thread dies at that point, my entire system is broken...
>> >>>>
>> >>>> Which means it's not lock-free.
>> >>>>
>> >>>> > But it can be preempted without any problem at near zero cost.
>> >>>>
>> >>>> Well, if a producer is preempted, consumers are blocked from
>> >>>> progressing. This is 100% equivalent to a mutex. If a thread is
>> >>>> preempted while holding a mutex, it also does not result in any
>> >>>> correctness problems.
>> >>>>
>> >>>>
>> >>>> > On Mon, Jan 13, 2020 at 03:55, Dmitry Vyukov <dvy...@gmail.com>
>> wrote:
>> >>>> >>
>> >>>> >> On Sun, Jan 12, 2020 at 7:33 PM bittnkr <bit...@gmail.com> wrote:
>> >>>> >> >
>> >>>> >> >
>> >>>> >> > > This blocks consumers from progressing (consuming next
>> produced items)
>> >>>> >> > and is effectively a mutex.
>> >>>> >> >
>> >>>> >> > Suppose thread A gets a local copy of tail in t and then is
>> preempted;
>> >>>> >> >
>> >>>> >> > another thread will get the same tail and take the seat normally.
>> >>>> >> >
>> >>>> >> > When the previous thread resumes, the CAS will fail because the
>> seat was taken,
>> >>>> >> >
>> >>>> >> > restarting the while loop without any kind of blocking.
>> >>>> >> >
>> >>>> >> > Where do you see a mutex here?
>> >>>> >>
>> >>>> >> I mean preemption between succeeding CAS and writing element
>> /NULL to the array.
>> >>>> >>
>> >>>> >> If a thread is terminated at that point, the whole queue is
>> broken (no
>> >>>> >> termination-safety).
>> >>>> >>
>> >>>> >>
>> >>>> >> > On Sun, Jan 12, 2020 at 04:49, Dmitry Vyukov <
>> dvy...@gmail.com> wrote:
>> >>>> >> >>
>> >>>> >> >> On Sat, Jan 11, 2020 at 9:26 AM bittnkr <bit...@gmail.com>
>> wrote:
>> >>>> >> >> >
>> >>>> >> >> > Good observations. Thank you.
>> >>>> >> >> >
>> >>>> >> >> > If the thread is preempted at those points, the seat position
>> will be held in the local variables h and t.
>> >>>> >> >>
>> >>>> >> >> This blocks consumers from progressing (consuming next produced
>> items)
>> >>>> >> >> and is effectively a mutex. This makes the algorithm
>> non-termination-safe
>> >>>> >> >> and non-lock-free.
>> >>>> >> >>
>> >>>> >> >>
>> >>>> >> >> > After the thread is resumed, the CAS can fail, and the
>> loop will just restart in its normal flow, without any locking.
>> >>>> >> >> >
>> >>>> >> >> > I updated the docs; I think it is clearer now.
>> >>>> >> >> >
>> >>>> >> >> > On Sat, Jan 11, 2020 at 05:38, Dmitry Vyukov <
>> dvy...@gmail.com> wrote:
>> >>>> >> >> >>
>> >>>> >> >> >> On Sat, Jan 11, 2020 at 4:09 AM bittnkr <bit...@gmail.com>
>> wrote:
>> >>>> >> >> >> >
>> >>>> >> >> >> > Hello Dmitry and fellows from the group.
>> >>>> >> >> >> >
>> >>>> >> >> >> > If you look carefully, you will see that there is no
>> blocking, just a simple CAS, even though the buffer is a shared buffer.
>> >>>> >> >> >>
>> >>>> >> >> >> But consider that you look inside a spin lock, or you take a
>> >>>> >> >> >> spin-lock-based algorithm and inline all the spin lock code
>> into the
>> >>>> >> >> >> algorithm. What you will see is exactly the same -- no
>> blocking, just
>> >>>> >> >> >> a CAS. However, it does not really change anything; it's
>> still a
>> >>>> >> >> >> blocking, mutex-based algorithm.
>> >>>> >> >> >> One can also combine spin lock state with some data, e.g. a
>> particular
>> >>>> >> >> >> value of a data member means "locked" and blocks progress
>> of other
>> >>>> >> >> >> threads. It makes the spin lock even less visible, but again
>> does not
>> >>>> >> >> >> change anything at the conceptual level.
>> >>>> >> >> >>
>> >>>> >> >> >> If I am reading the algorithm correctly, if a thread is
>> preempted
>> >>>> >> >> >> between these 2 operations:
>> >>>> >> >> >>
>> >>>> >> >> >>  } while ( (data[t & mask]) || (CompareExchange(tail, t+1,
>> t) != t) )
>> >>>> >> >> >>   data[t & mask] = item // now is safe to update the buffer
>> >>>> >> >> >>
>> >>>> >> >> >> It effectively locks a mutex on the queue and blocks
>> progress of all consumers.
>> >>>> >> >> >>
>> >>>> >> >> >> There is also a similar block on the consumer side here:
>> >>>> >> >> >>
>> >>>> >> >> >>  } while ( !(data[h & mask]) || CompareExchange(head, h+1,
>> h) != h )
>> >>>> >> >> >>   data[h] = 0 // release the seat
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >> >> > The queue only sleeps if it is empty or if it is full.
>> But this is not locking.
>> >>>> >> >> >> >
>> >>>> >> >> >> > All protection is done by the two atomic variables head
>> and tail.
>> >>>> >> >> >> >
>> >>>> >> >> >> > The cost is nearly constant at a single CAS, for any number of
>> threads.
>> >>>> >> >> >> >
>> >>>> >> >> >> > Take a look at these benchmarks. They speak for
>> themselves.
>> >>>> >> >> >> >
>> >>>> >> >> >> >
>> https://github.com/bittnkr/uniq/blob/master/README.md#benchmarks
>> >>>> >> >> >> >
>> >>>> >> >> >> > Although I don't have a formal proof, we have a test with a
>> zero checksum.
>> >>>> >> >> >> >
>> >>>> >> >> >> > These are the results I have for a producer/consumer test
>> pushing random numbers through the queue.
>> >>>> >> >> >> >
>> >>>> >> >> >> > Creating 4 producers & 4 consumers
>> >>>> >> >> >> > to flow 10.000.000 items through the queue.
>> >>>> >> >> >> >
>> >>>> >> >> >> > Produced: 10.743.668.245.000.000
>> >>>> >> >> >> > Consumed: 5.554.289.678.184.004
>> >>>> >> >> >> > Produced: 10.743.668.245.000.000
>> >>>> >> >> >> > Consumed: 15.217.833.969.059.643
>> >>>> >> >> >> > Produced: 10.743.668.245.000.000
>> >>>> >> >> >> > Consumed: 7.380.542.769.600.801
>> >>>> >> >> >> > Produced: 10.743.668.245.000.000
>> >>>> >> >> >> > Consumed: 14.822.006.563.155.552
>> >>>> >> >> >> >
>> >>>> >> >> >> > Checksum: 0 (it must be zero)
>> >>>> >> >> >> >
>> >>>> >> >> >> > The producers increase the total and the consumers decrease
>> it. The result for 10M random numbers is zero.
>> >>>> >> >> >> >
>> >>>> >> >> >> > Thanks for the advice, I'll look into these tools.
>> >>>> >> >> >> >
>> >>>> >> >> >> > On Thu, Jan 9, 2020 at 04:58, Dmitry Vyukov <
>> dvy...@gmail.com> wrote:
>> >>>> >> >> >> >>
>> >>>> >> >> >> >> On Wed, Jan 8, 2020 at 9:29 PM bittnkr <bit...@gmail.com>
>> wrote:
>> >>>> >> >> >> >> >
>> >>>> >> >> >> >> > Dear Dmitry,
>> >>>> >> >> >> >> >
>> >>>> >> >> >> >> > I found a nice solution for the problem called 3-thread
>> consensus, considered impossible in the book The Art of Multiprocessor
>> Programming. I think that it is a breakthrough.
>> >>>> >> >> >> >> >
>> >>>> >> >> >> >> > Debating on S.O. with someone about whether the solution is
>> solid or not, and whether data races can occur, he referred me to Relacy.
>> >>>> >> >> >> >> >
>> >>>> >> >> >> >> > So I'm writing you to ask your opinion about the
>> solution.
>> >>>> >> >> >> >> >
>> >>>> >> >> >> >> > Can you take a quick look at it?
>> >>>> >> >> >> >>
>> >>>> >> >> >> >> +lock-free mailing list
>> >>>> >> >> >> >>
>> >>>> >> >> >> >> Hi bittnkr,
>> >>>> >> >> >> >>
>> >>>> >> >> >> >> At first glance the algorithm at
>> https://github.com/bittnkr/uniq looks
>> >>>> >> >> >> >> blocking and non-linearizable to me.
>> >>>> >> >> >> >> Very similar in nature to:
>> >>>> >> >> >> >>
>> http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue
>> >>>> >> >> >> >>
>> >>>> >> >> >> >> --
>> >>>> >> >> >> >> Dmitry Vyukov
>> >>>> >> >> >> >>
>> >>>> >> >> >> >> All about lockfree/waitfree algorithms, multicore,
>> scalability,
>> >>>> >> >> >> >> parallel computing and related topics:
>> >>>> >> >> >> >> http://www.1024cores.net
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >>
>> >>>> >> >>
>> >>>> >> >>
>> >>>> >>
>> >>>> >>
>> >>>> >>
>> >>>>
>> >>>>
>> >>>>
>> >
>> > --
>> >
>> > ---
>> > You received this message because you are subscribed to the Google
>> Groups "Scalable Synchronization Algorithms" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an email to lock-free+unsubscr...@googlegroups.com.
>> > To view this discussion on the web visit
>> https://groups.google.com/d/msgid/lock-free/c1a1fd50-1635-4404-95da-00042df22230%40googlegroups.com
>> .
>>
>>
>>
>>

