[lock-free] Eventcount with timeout

2018-12-23 Thread Artur Brugeman
Hi Dmitry,

I want to use your eventcount (I took the source from the Intel forum). 

Until now I have been using semaphores, which let me set a wait timeout. 

Questions:
1. Is the source from the Intel forum 'the latest and stable'? You had a 
pretty long discussion there, and I'm not sure the posted sources 
incorporated all the fixes.
2. Can eventcount support waiting timeouts? Can I just add a 'timeout' param 
to prepare_wait and commit_wait and call 'sema.timedwait' instead of 
'wait'? In fact I did just that, and now I get segfaults here and there, so 
I'm not sure it's the way to go (a sketch of what I mean follows below).
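
To illustrate question 2, here is a minimal sketch of the shape I mean. The
prepare_wait/commit_wait names follow your API, but the internals (an epoch
counter plus a condition variable) are a simplified stand-in, not your actual
per-waiter-semaphore implementation:

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>

class eventcount {
    std::mutex mtx_;
    std::condition_variable cv_;
    std::atomic<unsigned> epoch_{0};

public:
    // Take a ticket to compare against the epoch in commit_wait.
    unsigned prepare_wait() {
        return epoch_.load(std::memory_order_acquire);
    }

    // Returns false on timeout. In a per-waiter-semaphore design, a
    // timed-out waiter must additionally unregister itself from the
    // waitset (cancel_wait) and consume any signal already posted to
    // its semaphore; otherwise a concurrent signaler can touch a dead
    // waiter record -- one plausible source of segfaults.
    template <class Rep, class Period>
    bool commit_wait(unsigned ticket,
                     std::chrono::duration<Rep, Period> timeout) {
        std::unique_lock<std::mutex> lk(mtx_);
        return cv_.wait_for(lk, timeout, [&] {
            return epoch_.load(std::memory_order_relaxed) != ticket;
        });
    }

    // Wake all waiters whose ticket predates this signal.
    void notify_all() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            epoch_.fetch_add(1, std::memory_order_relaxed);
        }
        cv_.notify_all();
    }
};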

Can you please share your thoughts on this?

Thanks a lot!
--
Artur   



Re: [lock-free] Re: Simple Example, Very Basic Per-Thread Proxy GC...

2018-12-23 Thread Dmitry Vyukov
On Sun, Dec 23, 2018 at 7:31 AM Chris M. Thomasson  wrote:
 > On Monday, December 17, 2018 at 11:23:20 PM UTC-8, Chris M. Thomasson 
 > wrote:
 >>
 >> If interested, I can give more details. For now, here is the code in 
 >> the form of a Relacy Test Unit:

>>>> Can you give a short executive summary? What are the
>>>> advantages/disadvantages/novelty?
>>>
>>>
>>> Very quick, sorry: Should have some more time tonight.
>>>
>>> A simple proxy with per-thread mutexes. Threads enter a protected region 
>>> by taking their own lock, and leave it by releasing said lock. Very basic. 
>>> When a thread wants to defer a node for deletion, it pushes the node onto 
>>> a global lock-free stack via CAS. A reaper thread flushes all of the nodes 
>>> with XCHG and keeps them on an internal list. The reaper loop then tries 
>>> to acquire and release all of the per-thread locks. Once it has done this, 
>>> it declares a quiescent state. It holds nodes in a way that ensures at 
>>> least two such periods have occurred before it actually calls delete and 
>>> destroys memory. Since the reaper uses a try_lock to detect a quiescent 
>>> state, it can livelock in the sense that it never gains a lock. However, 
>>> Relacy should never detect the condition because of the limited iteration 
>>> loop for workers in the test code itself. There is a workaround: we can 
>>> let a reaper fail for a little while before it ditches try_lock and just 
>>> locks the per-thread quiescence lock. It would act just like an adaptive 
>>> mutex, in a sense...
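
For concreteness, here is a minimal sketch of that scheme. Every name in it
(unode, g_defer_stack, the fixed THREADS count) is hypothetical, and it
illustrates the description above rather than the actual Relacy test unit:

#include <atomic>
#include <mutex>

struct unode {
    unode* next;
    // ... payload ...
};

constexpr unsigned THREADS = 4;
std::mutex g_thread_lock[THREADS];          // one lock per worker thread
std::atomic<unode*> g_defer_stack{nullptr}; // global lock-free defer stack

// Worker side: the protected region is simply the thread's own lock.
void enter(unsigned tid) { g_thread_lock[tid].lock(); }
void leave(unsigned tid) { g_thread_lock[tid].unlock(); }

// Defer a node for deletion: push onto the global stack via CAS.
void defer(unode* n) {
    unode* head = g_defer_stack.load(std::memory_order_relaxed);
    do {
        n->next = head;
    } while (!g_defer_stack.compare_exchange_weak(
                 head, n,
                 std::memory_order_release, std::memory_order_relaxed));
}

// Reaper side: one quiescent period = acquire and release every
// per-thread lock. The adaptive workaround from above: spin on try_lock
// for a while, then fall back to a blocking lock to avoid livelock.
void quiescent_period() {
    for (unsigned i = 0; i != THREADS; ++i) {
        int spins = 100;
        while (!g_thread_lock[i].try_lock()) {
            if (--spins == 0) { g_thread_lock[i].lock(); break; }
        }
        g_thread_lock[i].unlock();
    }
}

// One reaper pass: flush with XCHG, wait out a period, and delete only
// the batch that has now survived two full periods.
void reap() {
    static unode* prev = nullptr; // batch that has survived one period
    unode* cur = g_defer_stack.exchange(nullptr, std::memory_order_acquire);
    quiescent_period();
    while (prev) { // second period for this batch: safe to destroy
        unode* n = prev->next;
        delete prev;
        prev = n;
    }
    prev = cur;
}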
>>>
>>>
>> [...]
>>>
>>> So far, so good. It passes 1000 iterations. I am letting 
>>> rl::sched_bound run for around 65 minutes now, no problems at iteration:
>>>
>>>
>>
>>
>> Fwiw, it's been running overnight using rl::sched_bound; I am at iteration:
>>
>> 99% (3506831360/3506831361)
>> 99% (3506896896/3506896897)
>> 99% (3506962432/3506962433)
>> 99% (3507027968/3507027969)
>> 99% (3507093504/3507093505)
>> 99% (3507159040/3507159041)
>> 99% (3507224576/3507224577)
>> 99% (3507290112/3507290113)
>>
>>
>> No problems found so far at 3.5 billion iterations.
>
>
>
> No problems so far; however, the program has still not completed, and the 
> damn battery went dead on the testing laptop! This means that I have to run 
> it again.
>
> Isn't there a way to start from a given iteration count?
>
>
>>
>> I need to code up a scenario where a thread actually iterates through all of 
>> the unodes in g_worker_stack. I think it should fail in this case. Humm...
>>
>>
>>>
>>>
>>>
>>> Also, I need to get the latest version of Relacy. I am using version 2.3.
>>>
>>>
>>> Does it bite the dust in the latest version? ;^o


Nothing major has changed in the past years, so whatever version you
have should be effectively the latest.

Yes, the exhaustive mode is not too useful for anything non-trivial,
and as you can see, the iteration count estimation is totally broken.
The context-switch-bound mode may be more useful.
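
Selecting it looks roughly like this (the enum value matches the
rl::sched_bound mentioned above; the field names follow Relacy 2.x's
test_params as I remember them, and my_test stands in for your test struct):

rl::test_params params;
params.search_type = rl::sched_bound; // bounded number of preemptions
params.context_bound = 3;             // max context switches per execution
rl::simulate<my_test>(params);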
There is no persistent checkpointing and no parallel mode (shame!).
Every time I start thinking about this, I can't choose between
continuing to improve Relacy or supporting its main features in
ThreadSanitizer, which has compiler instrumentation for memory accesses
and atomic operations, plus a set of interceptors, so existing programs
work out-of-the-box. But in the end I can't find time for either.
Some other people have expressed interest in adding Relacy-like
features to ThreadSanitizer, e.g.:
https://groups.google.com/forum/#!topic/thread-sanitizer/c32fnH0QQe4
but nothing concrete has come out of that yet.
