Re: uniform initialization in D (as in C++11): i{...}

2016-04-05 Thread Jin via Digitalmars-d

On Tuesday, 5 April 2016 at 05:39:25 UTC, Timothee Cour wrote:
what's D's answer for C++11's uniform initialization [1] which 
allows DRY code?


Could we have this:

struct A{
  int a;
  int b;
}

A fun(A a, int b) {
  if(b==1) return i{0,1};
  else if(b==2) return i{2,3};
  else return fun(i{3,4}, 1);
}

Or, with array-literal syntax:

A fun(A a, int b) {
  if(b==1) return [0,1];
  else if(b==2) return [a:2,b:3];
  else return fun([3,4], 1);
}
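For comparison, the proposed syntax is close to Go's composite literals, though Go still requires the type name, so it is not fully DRY either. A rough sketch (the helper names mirror the hypothetical D example above):

```go
package main

import "fmt"

// A mirrors the struct from the D example above.
type A struct {
	a int
	b int
}

// fun mirrors the hypothetical D fun: Go composite literals support
// both positional and field-keyed forms, but the type name A cannot
// be elided the way the proposed i{...} would allow.
func fun(x A, b int) A {
	switch b {
	case 1:
		return A{0, 1} // positional, like the proposed i{0,1}
	case 2:
		return A{a: 2, b: 3} // field-keyed, like [a:2, b:3]
	default:
		return fun(A{3, 4}, 1)
	}
}

func main() {
	fmt.Println(fun(A{}, 2)) // {2 3}
}
```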



Re: string to uppercase

2016-04-03 Thread Jin via Digitalmars-d-learn

On Sunday, 3 April 2016 at 03:05:08 UTC, stunaep wrote:
Is there any easy way to convert a string to uppercase? I tried 
s.asUpperCase, but it returns a ToCaserImpl, not a string, and 
it can't be cast to string. I also tried toUpper but it wasn't 
working with strings.


http://dpaste.dzfl.pl/b14c35f747cc

import std.uni, std.stdio;

void main()
{
    writeln( "abcабв".toUpper );
}


Re: Oh, my GoD! Goroutines on D

2016-03-30 Thread Jin via Digitalmars-d

On Wednesday, 30 March 2016 at 15:22:26 UTC, Casey Sybrandy wrote:
Have you considered using a Disruptor 
(http://lmax-exchange.github.io/disruptor/) for the channels?  
Not sure how it compares to what you're using from Vibe.d, but 
it's not a hard data structure to implement and, IIRC, it 
allows for multiple producers and consumers.


Oh, and yes, I know that it would have to be rewritten in D 
unless there's a C version somewhere.  I actually did it once 
and it wasn't too bad.  I don't think I have a copy anymore, 
but if I do find it, I can put it up somewhere.


This is Java bloatware. :-(


Re: Oh, my GoD! Goroutines on D

2016-03-29 Thread Jin via Digitalmars-d

On Tuesday, 29 March 2016 at 12:30:24 UTC, Dejan Lekic wrote:

+1
Wiki is absolutely the best solution to this, I agree. Plus, we 
already have a wiki, so he should just go there and start 
writing. The community will correct grammar/syntax and typos.


http://wiki.dlang.org/Go_to_D


Re: Oh, my GoD! Goroutines on D

2016-03-28 Thread Jin via Digitalmars-d

On Monday, 28 March 2016 at 19:29:55 UTC, Walter Bright wrote:

On 3/28/2016 6:10 AM, Jin wrote:

My English is too bad to write articles, sorry :-(


Baloney, your English is very good. Besides, I'm sure there 
will be many volunteers here to help you touch it up.


I have just written the article in Russian: 
https://habrahabr.ru/post/280378/
Translation to English: 
https://translate.google.com/translate?hl=ru&sl=ru&tl=en&u=https%3A%2F%2Fhabrahabr.ru%2Fpost%2F280378%2F


Re: Wait-free thread communication

2016-03-28 Thread Jin via Digitalmars-d

On Monday, 28 March 2016 at 17:16:14 UTC, Dmitry Olshansky wrote:
If nothing changed implementation-wise this is just data-racy 
queues :)


Why?


All I see is a ring buffer with hand-wavy atomicFence on one of 
mutating operations. popFront is not protected at all.


popFront does not need protection. It is atomic from the provider's side.

Also force yielding a thread is not a sane synchronization 
technique.


Fibers are used here.

Over all - I suggest to not label this as "wait free" code as 
it's waaay far from what it takes to get that.


Each operation (clear, front, popFront, full, put) takes a fixed 
number of steps, provided you check clear before accessing front 
and check full before put. If you do not check, you will of course 
be blocked. What would you expect instead? An exception? Ignoring 
the operation?


Re: Wait-free thread communication

2016-03-28 Thread Jin via Digitalmars-d

On Monday, 28 March 2016 at 16:39:45 UTC, Dmitry Olshansky wrote:

On 27-Mar-2016 21:23, Jin wrote:

I just use these queues to implement a Go-like API for concurrency
(coroutines + channels):
http://forum.dlang.org/thread/lcfnfnhjzonkdkeau...@forum.dlang.org




If nothing changed implementation-wise this is just data-racy 
queues :)


Why?


Re: Oh, my GoD! Goroutines on D

2016-03-28 Thread Jin via Digitalmars-d

On Sunday, 27 March 2016 at 20:39:57 UTC, Walter Bright wrote:

On 3/27/2016 11:17 AM, Jin wrote:

[...]


Nice! Please write an article about this!


My English is too bad to write articles, sorry :-(


Re: Wait-free thread communication

2016-03-27 Thread Jin via Digitalmars-d
I just use these queues to implement a Go-like API for concurrency 
(coroutines + channels): 
http://forum.dlang.org/thread/lcfnfnhjzonkdkeau...@forum.dlang.org





Oh, my GoD! Goroutines on D

2016-03-27 Thread Jin via Digitalmars-d

DUB module: http://code.dlang.org/packages/jin-go
GIT repo: https://github.com/nin-jin/go.d

The function "go" starts coroutines (vibe.d tasks) in a thread pool, 
like in the Go language:


unittest
{
    import core.time;
    import jin.go;

    __gshared static string[] log;

    static void saying( string message )
    {
        for( int i = 0 ; i < 3 ; ++i ) {
            sleep( 100.msecs );
            log ~= message;
        }
    }

    go!saying( "hello" );
    sleep( 50.msecs );
    saying( "world" );

    log.assertEq([ "hello" , "world" , "hello" , "world" , "hello" , "world" ]);
}

The class "Channel" is a wait-free one-consumer/one-provider channel. 
By default, a Channel blocks the thread on receiving from an empty 
channel or sending to a full channel:


unittest
{
    static void summing( Channel!int output ) {
        foreach( i ; ( output.size * 2 ).iota ) {
            output.next = 1;
        }
        output.close();
    }

    auto input = go!summing;
    while( !input.full ) yield;

    input.sum.assertEq( input.size * 2 );
}

You can avoid waiting if you do not want to block:

unittest
{
    import core.time;
    import jin.go;

    static auto after( Channel!bool channel , Duration dur )
    {
        sleep( dur );
        if( !channel.closed ) channel.next = true;
    }

    static auto tick( Channel!bool channel , Duration dur )
    {
        while( !channel.closed ) after( channel , dur );
    }

    auto ticks = go!tick( 101.msecs );
    auto booms = go!after( 501.msecs );

    string log;

    while( booms.clear )
    {
        while( !ticks.clear ) {
            log ~= "tick";
            ticks.popFront;
        }
        log ~= ".";
        sleep( 51.msecs );
    }
    log ~= "BOOM!";

    log.assertEq( "..tick..tick..tick..tick..BOOM!" );
}
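The tick/boom test above mirrors the classic select example from the Go tour; for reference, a Go sketch of the same loop (restructured here to collect a log so the result can be inspected) might look like:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// run reproduces the tick/boom loop: poll two timer channels without
// blocking, logging a dot while neither is ready, and stop on boom.
func run() []string {
	var log []string
	tick := time.Tick(101 * time.Millisecond)
	boom := time.After(501 * time.Millisecond)
	for {
		select {
		case <-tick:
			log = append(log, "tick")
		case <-boom:
			log = append(log, "BOOM!")
			return log
		default:
			// neither channel is ready: skip and sleep a bit
			log = append(log, ".")
			time.Sleep(51 * time.Millisecond)
		}
	}
}

func main() {
	fmt.Println(strings.Join(run(), " "))
}
```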

Channels are InputRange- and OutputRange-compatible. The structs 
"Inputs" and "Outputs" are round-robin facades over an array of 
channels.


More examples are in the unit tests: 
https://github.com/nin-jin/go.d/blob/master/source/jin/go.d#L293


Current problems:

1. You can give a channel to more than two threads. I'm going to 
play with unique pointers to solve this problem. Any hints?


2. Sometimes you must close a channel to notify the partner to break 
a range loop. Solving problem (1) could solve this as well.


3. The API may be better. Advice?


Re: Wait-free thread communication

2016-01-13 Thread Jin via Digitalmars-d

On Sunday, 10 January 2016 at 21:25:34 UTC, Martin Nowak wrote:
For blocking a thread I use a loop with Thread.sleep - this is a 
bad decision IMHO.

Have a look at this exponential backoff implementation for my GC
spinlock PR.
https://github.com/D-Programming-Language/druntime/pull/1447/files#diff-fb5cbe06e1aaf83814ccf5ff08f05519R34
In general you need some sort of configurable or adaptive 
backoff or you'll waste too much time context switching.


I am using a Waiter for this: 
https://github.com/nin-jin/go.d/blob/master/source/jin/go.d#L171




Re: Wait-free thread communication

2016-01-09 Thread Jin via Digitalmars-d

On Saturday, 9 January 2016 at 08:28:33 UTC, thedeemon wrote:
Your benchmarks show time difference in your favor just because 
you compare very different things: your queue is benchmarked in 
single thread with fibers while std.concurrency is measured 
with multiple threads communicating with each other. Doing many 
switches between threads is much slower than switching between 
fibers in one thread, hence the time difference. It's not that 
your queue is any good, it's just you measure it wrong.


No, jin.go creates a new native thread on every call, and this is a 
problem :-) We cannot create a thousand threads without errors.


std.concurrency uses a mutex to synchronize access to the message 
queue. The cost of synchronization is proportional to the number of 
threads.


On Saturday, 9 January 2016 at 08:28:33 UTC, thedeemon wrote:
You call it wait-free when in fact it's just the opposite: if a 
queue buffer is full on push  it just waits in Thread.sleep 
which is

1) not wait-free at all
2) very heavy (call to kernel, context switch)
And when buffer is not full/empty it works as a simple 
single-threaded queue, which means it's fine for using with 
fibers inside one thread but will not work correctly in 
multithreaded setting.


Taking data from an empty queue or pushing data to a full queue is 
logically impossible. There are three strategies for these cases:

1. Throw a runtime exception, as std.concurrency.send does on a full 
message box by default.

2. Block the thread, as std.concurrency.receive does on an empty 
message box.

3. Skip the action.

jin-go uses the blocking strategy by default, and you can check 
queue.empty and queue.full if you want to implement another 
strategy.
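For comparison, Go expresses the same choice between strategies with channel operations: a bare send/receive blocks, while a select with a default branch skips. A sketch with hypothetical trySend/tryRecv helpers:

```go
package main

import "fmt"

// trySend attempts a non-blocking send: when the buffer is full it
// skips instead of blocking or throwing.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false // full: skip the action
	}
}

// tryRecv attempts a non-blocking receive: skip when empty.
func tryRecv(ch chan int) (int, bool) {
	select {
	case v := <-ch:
		return v, true
	default:
		return 0, false // empty: skip the action
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(trySend(ch, 42)) // true: buffer had room
	fmt.Println(trySend(ch, 7))  // false: buffer full, skipped
	v := <-ch                    // blocking strategy: plain receive waits
	fmt.Println(v)               // 42
	_, ok := tryRecv(ch)
	fmt.Println(ok) // false: empty, skipped
}
```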


Re: Wait-free thread communication

2016-01-09 Thread Jin via Digitalmars-d

On Saturday, 9 January 2016 at 14:20:18 UTC, Andy Smith wrote:
I'm a little worried you have no volatile writes or fences 
around your code when you 'publish' an event using head/tail 
etc. It looks like it's working but how are you ensuring no 
compiler/CPU reordering is occurring. Does x86_64 actually allow 
you to get away with this? I know its memory model is stricter 
than others...


This works fine in my tests, but how do I enforce a memory barrier 
in D?




Re: Wait-free thread communication

2016-01-09 Thread Jin via Digitalmars-d

On Saturday, 9 January 2016 at 13:55:16 UTC, thedeemon wrote:

On Saturday, 9 January 2016 at 11:00:01 UTC, Jin wrote:

No, jin.go creates a new native thread on every call. And this 
is a problem :-) We cannot create a thousand threads without 
errors.


Ah, sorry, I misread the source. FiberScheduler got me 
distracted. Why is it there?


This is an artifact of my experiments with fibers :-)


Re: Wait-free thread communication

2016-01-09 Thread Jin via Digitalmars-d
On Saturday, 9 January 2016 at 16:29:07 UTC, Ola Fosheim Grøstad 
wrote:

They have to be atomic if you want your code to be portable.


I do not want that yet :-)


Re: Wait-free thread communication

2016-01-09 Thread Jin via Digitalmars-d

On Saturday, 9 January 2016 at 14:20:18 UTC, Andy Smith wrote:
I'm a little worried you have no volatile writes or fences 
around your code when you 'publish' an event using head/tail 
etc. It looks like it's working but how are you ensuring no 
compiler/CPU reordering is occurring. Does x86_64 actually allow 
you to get away with this? I know its memory model is stricter 
than others...


I just added an atomic fence to push and take:

this.messages[ this.tail ] = value;
atomicFence;
this.tail = ( this.tail + 1 ) % this.size;



Re: Wait-free thread communication

2016-01-09 Thread Jin via Digitalmars-d
On Saturday, 9 January 2016 at 16:05:34 UTC, Ola Fosheim Grøstad 
wrote:

You need it in the tests.


If memory writes were reordered, the current tests would fail. 
Memory reads can happen in any order.


On Saturday, 9 January 2016 at 16:05:34 UTC, Ola Fosheim Grøstad 
wrote:

I haven't used atomics in D, but you have

  atomicLoad(MemoryOrder ms = MemoryOrder.seq, T)(ref const 
shared T val)
  atomicStore(MemoryOrder ms = MemoryOrder.seq, T, V1)(ref 
shared T val, V1 newval)


I do not understand how to use them correctly, so I simply use 
atomicFence. Performance does not degrade.


Re: Wait-free thread communication

2016-01-08 Thread Jin via Digitalmars-d

On Friday, 8 January 2016 at 18:07:49 UTC, Nordlöw wrote:

On Friday, 8 January 2016 at 16:58:59 UTC, Jin wrote:
Idea: no mutex, no CAS, only one direction queues without any 
locks.


My prototype (https://github.com/nin-jin/go.d) is up to 10x 
faster than std.concurrency.send/receive


Very interesting. D has builtin unittests. You should add 
stress unittests to assure that your logic is correct. You can 
start by searching for keyword `unittest` in the 
std.concurrency module for advice how to do this.


I have just added unit tests. But what is the idiomatic way to 
implement benchmarks (compilation must be in release mode)? 
Currently I implement a main function in app.d and run it with 
"dub --build=release", but then nobody can import my module.
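One way to keep a benchmark main out of library builds (a sketch using dub's configurations feature; the file and configuration names are illustrative) is:

```json
{
    "name": "jin-go",
    "targetType": "library",
    "configurations": [
        {
            "name": "library",
            "excludedSourceFiles": ["source/app.d"]
        },
        {
            "name": "benchmark",
            "targetType": "executable",
            "mainSourceFile": "source/app.d"
        }
    ]
}
```

Then `dub run --build=release --config=benchmark` builds and runs the benchmark, while packages that depend on the module pick up the library configuration without app.d.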


Wait-free thread communication

2016-01-08 Thread Jin via Digitalmars-d
Idea: no mutex, no CAS, only one direction queues without any 
locks.


My prototype (https://github.com/nin-jin/go.d) is up to 10x 
faster than std.concurrency.send/receive


---
writers =512
readers =1
std.concurency milliseconds=1313
jin.go milliseconds=113
---

Implementation:

Queue! - a buffered, statically typed channel. One and only one 
thread can push data to a queue, and one and only one can take data 
from it. A thread blocks when it takes from an empty queue or pushes 
to a full queue.


Queues! - a list of queues with round-robin balancing.

go! - starts a new thread; in the future I want to implement 
something like goroutines, but I do not understand how yet.


I need some good advice; I am a newbie in D :-)

Problems:

For blocking a thread I use a loop with Thread.sleep - this is a 
bad decision IMHO.


On my system I can create only up to 1000 threads without errors. 
Fibers from core.thread or Tasks from std.parallelism could 
potentially resolve this problem, but how do I integrate them 
gracefully?


Could the jin-go API be better?

Which DUB modules could help me?


Re: I hate new DUB config format

2015-12-06 Thread Jin via Digitalmars-d

Example:

name =dedcpu
author =Luis Panadero Guardeño
author =Jin
targetType =none
license =BSD 3-clause
description
	=DCPU-16 tools
	=and other stuff
subPackage
	name =lem1802
	description =Visual LEM1802 font editor
	targetType =executable
	targetName =lem1802
	excludedSourceFile =src/bconv.d
	excludedSourceFile =src/ddis.d
	lib
		name =gtkd
		platform =windows
	config
		name =nogtk
		platform =windows
	config
		name =gtk
		platform =posix
	dependency
		name =gtk-d:gtkd
		version ~> 3.2.0


Re: I hate new DUB config format

2015-12-06 Thread Jin via Digitalmars-d

How about this format? https://github.com/nin-jin/tree.d