[lock-free] Atomic XCHG based Stack, simple for learning...

2018-12-28 Thread Chris M. Thomasson
This experimental algorithm uses only XCHG at the cost of having a consumer 
wait for the next pointer to be set in a node. However, it allows for real 
work to be done before any waits are performed. So, the "real" work should 
provide a "natural backoff" that might minimize the waiting. The algorithm 
can be improved upon quite a bit. I have some "interesting" ideas for it. 
Well, here is the code in the form of a Relacy unit test:

Notice the "hand off" comments and the 0xDEADBEEF sentinel; this can be made
to work much better...
__
// Simple Atomic Stack
// For Academic and Experimental things... 
// Beginner, Moderate?
// In Good Ol' Relacy! Nice as always.
//___


//#define RL_DEBUGBREAK_ON_ASSERT
//#define RL_MSVC_OUTPUT
//#define RL_FORCE_SEQ_CST
//#define RL_GC


#include <relacy/relacy_std.hpp>


// Simple macro based redirection of the verbose  std membars.
#define CT_MB_ACQ std::memory_order_acquire
#define CT_MB_REL std::memory_order_release
#define CT_MB_RLX std::memory_order_relaxed
#define CT_MB_ACQ_REL std::memory_order_acq_rel
#define CT_MB_SEQ_CST std::memory_order_seq_cst
#define CT_MB(mp_0) std::atomic_thread_fence(mp_0)


// Some global vars directing the show...
#define THREADS 3
#define ITERS 2


// Experimental Stack
struct ct_ecstack
{
#define WAITNEXT ((node*)0xDEADBEEF)

    struct node
    {
        std::atomic<node*> m_next;
        VAR_T(unsigned int) m_data;

        node(unsigned int data) : m_next(nullptr), m_data(data) {}

        void process()
        {
            // simulate some work?
            rl::yield(1 + rl::rand(2), $); // ;^)
        }

        node* next_wait()
        {
            node* next = nullptr;

            while ((next = m_next.load(CT_MB_RLX)) == WAITNEXT)
            {
                // Humm, we can actually do other work right here...
                // Hand off...
                rl::yield(1, $);
            }

            return next;
        }
    };


    std::atomic<node*> m_head;

    ct_ecstack() : m_head(nullptr) {}


    void push(node* n)
    {
        n->m_next.store(WAITNEXT, CT_MB_RLX);
        node* head = m_head.exchange(n, CT_MB_REL); // release
        n->m_next.store(head, CT_MB_RLX);
    }


    node* flush_try()
    {
        return m_head.exchange(nullptr, CT_MB_ACQ); // acquire
    }
};




// Relacy Stack Test...
struct ct_ecstack_test
    : rl::test_suite<ct_ecstack_test, THREADS>
{
    ct_ecstack g_ecstack;

    void process()
    {
        ct_ecstack::node* n = g_ecstack.flush_try(); // flush all

        while (n)
        {
            // Process n first, acts like a backoff for the next wait
            // Hand off some other nodes? Future note...
            n->process();

            // Wait for the next pointer, or hand off?
            ct_ecstack::node* next = n->next_wait();

            // Destroy
            delete n;

            // Loop on...
            n = next;
        }
    }

    void thread(unsigned int tidx)
    {
        for (unsigned int i = 0; i < ITERS; ++i)
        {
            g_ecstack.push(new ct_ecstack::node(tidx));
            g_ecstack.push(new ct_ecstack::node(tidx));
            g_ecstack.push(new ct_ecstack::node(tidx));

            process();

            g_ecstack.push(new ct_ecstack::node(tidx));
        }

        process();
    }
};



// Test away... Or fly? Humm...
int main()
{
    {
        rl::test_params p;

        p.iteration_count = 1000;
        //p.execution_depth_limit = 3;
        //p.search_type = rl::sched_bound;
        //p.search_type = rl::fair_full_search_scheduler_type;
        //p.search_type = rl::fair_context_bound_scheduler_type;

        rl::simulate<ct_ecstack_test>(p);
    }

    return 0;
}
__
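
For anyone who wants to poke at this outside of Relacy, here is a minimal
plain C++11 sketch of the same push/flush/next-wait idea. The names here
(xchg_stack, wait_sentinel) are mine, and the memory orders simply mirror
the test code above:
__
#include <atomic>

// Plain C++11 sketch of the XCHG-only stack; not the Relacy test above.
struct xchg_stack
{
    struct node
    {
        std::atomic<node*> m_next;
        unsigned int m_data;

        node(unsigned int data) : m_next(nullptr), m_data(data) {}
    };

    // "next pointer not published yet" sentinel, same role as 0xDEADBEEF above.
    static node* wait_sentinel() { return reinterpret_cast<node*>(0xDEADBEEF); }

    std::atomic<node*> m_head{ nullptr };

    void push(node* n)
    {
        n->m_next.store(wait_sentinel(), std::memory_order_relaxed);
        node* head = m_head.exchange(n, std::memory_order_release); // release
        n->m_next.store(head, std::memory_order_relaxed);
    }

    node* flush_try()
    {
        return m_head.exchange(nullptr, std::memory_order_acquire); // acquire
    }

    // Consumer side: do real work, then spin until the producer publishes next.
    static node* next_wait(node* n)
    {
        node* next;
        while ((next = n->m_next.load(std::memory_order_relaxed)) == wait_sentinel())
        {
            // other work, or a yield/backoff, can go right here...
        }
        return next;
    }
};
__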


Any thoughts? This should be fairly basic in nature.



Re: [lock-free] Re: Scalable hash map

2018-12-28 Thread Manuel Pöter
Hi Dmitry,

in one of your last posts you wrote:

I've uploaded modified hash map which supports strings (and any other 
> satellite data) as keys/values: 
> http://groups.google.com/group/lock-free/files 
> There is example included which uses strings as both keys and values.
>

That made me think there was a version that supported non-trivial types.
But reading on it says:

The main idea is that satellite data (strings) must be passed through 
> PDR (Partial-copy-on-write Deferred Reclamation) system [...]
>

I missed that part when I read that post the first time. So basically you 
would
have to work with an additional indirection - allocate a pcx-node that 
holds the
string (or whatever type you are using) and use a pointer to that node as 
key/value.
And of course define a specialization of the concurrent_hash_map_traits for 
that
node type. Correct?
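
Just to make sure I understand the shape of it, something along these lines,
where string_node is only a placeholder and would also need whatever PDR
bookkeeping the pcx system actually requires:

#include <string>

// Placeholder sketch, not the actual pcx/PDR node type:
struct string_node
{
    std::string value; // the satellite data lives behind the pointer
};

// The hash map itself would then only ever see trivially-copyable pointers
// (string_node* as both key and value), together with a
// concurrent_hash_map_traits specialization for string_node* that hashes and
// compares the pointed-to string.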

Thanks,
Manuel



[lock-free] Re: Actor scheduler source code availability

2018-12-28 Thread Dmitry Vyukov
On Wed, Dec 12, 2018 at 1:04 PM Per  wrote:
>
> Hi Dmitry,
>
> First, I want to thank you for the fantastic resource you have put together 
> on 1024cores.net. I have found it very helpful in my studies on lockfree 
> algorithms.
>
> I have a question that I was hoping you could help me with. The "Case Study: 
> Actor Scheduler" references a scalable, fast and starvation-free actor 
> scheduler written in ~300 lines of C. I was wondering if this scheduler is 
> open sourced? If so, where can I find it?

Hi Per,

Yes, it's referenced right from the bottom of that page (acsc.zip):
http://www.1024cores.net/home/scalable-architecture/case-study-actor-scheduler
http://www.1024cores.net/home/scalable-architecture/case-study-actor-scheduler/acsc.zip



Re: [lock-free] Re: What is the most simple SPSC queue?

2018-12-28 Thread Dmitry Vyukov
On Fri, Dec 28, 2018 at 10:57 AM segn  wrote:
>
> On Thursday, December 13, 2018 at 07:54:14 UTC+1, segn wrote:
>>
>> Basically I'm looking for an implementation of the same queue. In the project,
>> the same principle was used for both thread-to-thread single-direction
>> communication (typical master thread + a number of worker threads; each channel
>> is an SPSC FIFO queue), and for kernel-to-thread communication.
>>
>> The second one was quite specific: it was using a reserved shared memory
>> block, used as a C array in the way a circular buffer is used (so: FIFO). I
>> wrote the queue, but surprisingly I've lost my memory of it :) I cannot tell
>> if there was some index `int first_free_idx'; it looks like there should be.
>
>
> I've found the queue that I used. I think that many programmers (like me and
> others from my team) came up with it on their own, but here it is called the
> Fast-Forward queue:
> https://scholar.colorado.edu/cgi/viewcontent.cgi?article=1960=csci_techreports

Hi segn,

There is also something called FastFlow, and my own version, which was faster
than FastFlow in my experiments:
http://www.1024cores.net/home/lock-free-algorithms/queues/fastflow
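
For reference, here is a minimal C++11 sketch of the FastForward-style idea
(the slot contents double as the full/empty flag, so the producer and the
consumer never share an index). The names are mine and this is not the
FastFlow code itself:

#include <atomic>
#include <cstddef>

// Minimal FastForward-style SPSC queue of pointers. A null slot means
// "empty"; producer and consumer each keep a private index, so there is no
// shared head/tail counter bouncing between their caches.
template <typename T, size_t N>
struct ff_spsc_queue
{
    std::atomic<T*> m_slots[N];
    size_t m_push_idx; // touched by the producer only
    size_t m_pop_idx;  // touched by the consumer only

    ff_spsc_queue() : m_push_idx(0), m_pop_idx(0)
    {
        for (size_t i = 0; i < N; ++i)
            m_slots[i].store(nullptr, std::memory_order_relaxed);
    }

    bool push(T* item) // producer only; item must be non-null
    {
        std::atomic<T*>& slot = m_slots[m_push_idx];
        if (slot.load(std::memory_order_acquire) != nullptr)
            return false; // full
        slot.store(item, std::memory_order_release);
        m_push_idx = (m_push_idx + 1) % N;
        return true;
    }

    T* pop() // consumer only
    {
        std::atomic<T*>& slot = m_slots[m_pop_idx];
        T* item = slot.load(std::memory_order_acquire);
        if (item == nullptr)
            return nullptr; // empty
        slot.store(nullptr, std::memory_order_release);
        m_pop_idx = (m_pop_idx + 1) % N;
        return item;
    }
};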



Re: [lock-free] Re: Scalable hash map

2018-12-28 Thread Dmitry Vyukov
On Fri, Dec 28, 2018 at 1:22 PM  wrote:
>
> Hi,
>
> sorry for resurrecting this old thread!
>
> The link to the files (http://groups.google.com/group/lock-free/files) no 
> longer works.
> I've downloaded the hash_map.zip file from your homepage, but this version of 
> the code only supports pod-types of 4 or 8 bytes. I am particularly 
> interested in your solution to support non-trivial key/value types. Any 
> chance this code is still available somewhere?

Hi Manuel,

Maybe non-trivial key/value types were never supported? I don't think there
were two different versions.



[lock-free] Re: Scalable hash map

2018-12-28 Thread manuel
Hi,

sorry for resurrecting this old thread!

The link to the files (http://groups.google.com/group/lock-free/files) no 
longer works.
I've downloaded the hash_map.zip file from your homepage, but this version 
of the code only supports pod-types of 4 or 8 bytes. I am particularly 
interested in your solution to support non-trivial key/value types. Any 
chance this code is still available somewhere?

Thanks,
Manuel



[lock-free] Re: What is the most simple SPSC queue?

2018-12-28 Thread segn


On Thursday, December 13, 2018 at 07:54:14 UTC+1, segn wrote:

> Basically I'm looking for an implementation of the same queue. In the
> project, the same principle was used for both thread-to-thread
> single-direction communication (typical master thread + a number of worker
> threads; each channel is an SPSC FIFO queue), and for kernel-to-thread
> communication.
>
> The second one was quite specific: it was using a reserved shared memory
> block, used as a C array in the way a circular buffer is used (so: FIFO). I
> wrote the queue, but surprisingly I've lost my memory of it :) I cannot tell
> if there was some index `int first_free_idx'; it looks like there should be.
>
 
I've found the queue that I used. I think that many programmers (like me and
others from my team) came up with it on their own, but here it is called the
Fast-Forward queue:
https://scholar.colorado.edu/cgi/viewcontent.cgi?article=1960=csci_techreports

segn



[lock-free] Re: Eventcount with timeout

2018-12-28 Thread Chris M. Thomasson


On Thursday, December 27, 2018 at 9:55:57 PM UTC-8, Artur Brugeman wrote:
>
> Hi Chris,
>
> Thank you for your thoughts on this; my thinking was in line with yours,
> and that's why I tried to add timed waiting to Dmitry's eventcount. I guess
> I need to spend some more time there and figure it out.
>

Iirc, Dmitry's algorithm used some per-thread waitsets for an eventcount. I
cannot remember right now if it is the same one you are referring to.

Fwiw, one can even spin on the lock for wait_begin, with something like:

bool wait_begin_try()
{
    if (! m_mutex.try_lock($)) return false;
    m_waitbit.store(1, CT_MB_RLX);
    CT_MB(CT_MB_SEQ_CST);
    return true;
}


The predicate loop can look like:

unsigned int signal = 0;

while ((signal = m_signal.load(CT_MB_RLX)) != 2)
{
    if (! g_ecount.wait_begin_try())
    {
        rl::yield(1, $);
        continue;
    }

    signal = m_signal.load(CT_MB_RLX);

    if (signal == 2)
    {
        g_ecount.wait_cancel();
        break;
    }

    g_ecount.wait_commit();
}


It can even be adaptive in nature where one can fail the try_lock a couple 
of times, then just lock the sucker!

;^)
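
Just to make the adaptive idea concrete, here is a rough sketch; the blocking
wait_begin() is assumed to be the same as wait_begin_try() minus the
try_lock:

unsigned int signal = 0;

while ((signal = m_signal.load(CT_MB_RLX)) != 2)
{
    bool waiting = false;

    // fail the try_lock a couple of times...
    for (unsigned int spins = 0; spins < 4; ++spins)
    {
        if (g_ecount.wait_begin_try()) { waiting = true; break; }
        rl::yield(1, $);
    }

    // ...then just lock the sucker! (assumed blocking counterpart)
    if (! waiting) g_ecount.wait_begin();

    signal = m_signal.load(CT_MB_RLX);

    if (signal == 2)
    {
        g_ecount.wait_cancel();
        break;
    }

    g_ecount.wait_commit();
}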
 

>
>
> On Friday, December 28, 2018 at 4:32:04 AM UTC+5, Chris M. Thomasson wrote:
>>
>> On Sunday, December 23, 2018 at 9:30:03 PM UTC-8, Artur Brugeman wrote:
>>>
>>> Hi Dmitry,
>>>
>>> I want to use your eventcount (took the source from intel forum). 
>>>
>>> Currently I was using semaphores, which allowed me to set waiting 
>>> timeout. 
>>>
>>> Questions:
>>> 1. Is the source from intel forum 'the latest and stable'? You had a 
>>> pretty long discussion there and I'm not sure the posted sources 
>>> incorporated all the fixes.
>>> 2. Can eventcount support waiting timeouts? Can I just add 'timeout' 
>>> param to prepare_wait and commit_wait and call 'sema.timedwait' instead of 
>>> 'wait'? In fact I did just that and now I get segfaults here and there, so 
>>> not sure it's the way to go.
>>>
>>> Can you please share your thoughts on this?
>>>
>>> Thanks a lot!
>>>
>>>
>> Fwiw, a timed wait on an eventcount works as easily as spinning on it wrt 
>> not waiting on a kernel waitset at all. The predicate is the user algorithm 
>> itself. So, imagine if the waits never actually block, but spin around. 
>> Yet, everything still works. Fwiw, the following code, that should compile 
>> with Relacy, is the most simplistic eventcount I can imagine:
>> _
>> ...
>>
>> think if g_ecount.wait_commit(); was timed... It would still work as is. 
>> So, yes an eventcount can easily handle timed waits.
>>
>
