Re: shared - i need it to be useful

2018-10-22 Thread Stanislav Blinov via Digitalmars-d

On Monday, 22 October 2018 at 00:22:19 UTC, Manu wrote:

No no, they're repeated, not scattered, because I seem to have 
to keep repeating it over and over, because nobody is reading 
the text, or perhaps imagining there is a lot more text than 
there is.

...
You mean like every post in opposition which disregards the 
rules and baselessly asserts it's a terrible idea? :/

...
I responded to your faulty program directly with the correct 
program, and you haven't acknowledged it. Did you see it?


Right back at you.

Quote:

I think this is a typical sort of construction:

struct ThreadsafeQueue(T)
{
  void QueueItem(T*) shared;
  T* UnqueueItem() shared;
}

struct SpecialWorkList
{
  struct Job { ... }

  void MakeJob(int x, float y, string z) shared // <- any thread may produce a job
  {
    Job* job = new Job; // <- this is thread-local
    PopulateJob(job, x, y, z); // <- preparation of a job might be complex,
                               // and worthy of the SpecialWorkList implementation

    jobList.QueueItem(job); // <- QueueItem encapsulates thread-safety,
                            // no need for blunt casts
  }

  void Flush() // <- not shared, thread-local consumer
  {
    Job* job;
    while (job = jobList.UnqueueItem()) // <- it's obviously safe for a
                                        // thread-local to call UnqueueItem even
                                        // though the implementation is threadsafe
    {
      // thread-local dispatch of work...
      // perhaps rendering, perhaps deferred destruction, perhaps
      // deferred resource creation... whatever!
    }
  }

  void GetSpecialSystemState() // <- this has NOTHING to do with the
                               // threadsafe part of SpecialWorkList
  {
    return os.functionThatChecksSystemState();
  }

  // there may be any number of utility functions that don't interact
  // with jobList.

private:
  void PopulateJob(ref Job job, ...)
  {
    // expensive function; not thread-safe, and doesn't have any
    // interaction with threading.
  }

  ThreadsafeQueue!Job jobList;
}


This isn't an amazing example, but it's typical of a thing 
that's mostly thread-local, where only a small, controlled part 
of its functionality is thread-safe.
The thread-local method Flush() also deals with thread-safety
internally... because it flushes a thread-safe queue.

All thread-safety concerns are composed by a utility object, so 
there's no need for locks, magic, or casts here.


EndQuote;

The above:
1) Will not compile, not currently, not under your proposal 
(presumably you forgot in frustration to cast before calling 
PopulateJob?..)
2) Does not in any way demonstrate a practical @safe application 
of an implicit conversion. As I wrote in the original response to 
that code, with that particular code it seems more like you just 
need forwarding methods that call `shared` methods under the hood 
(i.e. MakeJob), and it'd be "nice" if you didn't have to write 
those and could just call `shared` MakeJob on an un-`shared` 
reference directly. But these are all assumptions without seeing 
the actual usage.
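To be concrete about what I mean by "forwarding methods" (a sketch of mine, not code from Manu's post; `WorkList` and the atomic counter are hypothetical stand-ins for a real threadsafe queue): the author writes the one cast, once, and thread-local callers never see `shared` at all:

```d
import core.atomic : atomicLoad, atomicOp;

struct WorkList
{
    private shared size_t pending; // stand-in for a real threadsafe queue

    // Threadsafe entry point: any thread may queue a job.
    void MakeJob(int x) shared
    {
        atomicOp!"+="(pending, 1); // stand-in for a threadsafe enqueue
    }

    // Thread-local forwarder: the single explicit cast lives here,
    // written once by the type's author, instead of an implicit
    // conversion at every call site.
    void MakeJob(int x)
    {
        (cast(shared(WorkList)*) &this).MakeJob(x);
    }
}
```

Whether writing such forwarders by hand is an unacceptable burden is precisely the question; it's the only concrete benefit of the implicit conversion I can discern.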


Please just stop acting like everyone here is opposing *you*. All 
you're doing is dismissing everyone with a "nuh-huh, you no 
understand, you bad". If it was just me, fine, it would mean I'm 
dumb and not worthy of this discussion. But this isn't the case, 
which means *you are not getting your point across*. And yet 
instead of trying to fix that, you're getting all snarky.


Re: Manu's `shared` vs the @trusted promise

2018-10-22 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
It took me a while to understand Manu's idea for `shared`, and 
I suspect that it was/is the same for others...


Three threads one...
Three threads two...
Three threads three! Sold! Thank you very much, ladies and 
gentlemen!




Re: shared - i need it to be useful

2018-10-21 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 21 October 2018 at 19:22:45 UTC, Manu wrote:
On Sun, Oct 21, 2018 at 5:50 AM Stanislav Blinov via 
Digitalmars-d  wrote:


Because the whole reason to have `shared` is to avoid the 
extraneous checks that you mentioned above,


No, it is to assure that you write correct not-broken code.


You can do that without `shared`.

and only write actual useful code (i.e. lock-write-unlock, or 
read-put-to-queue-repeat, or whatever), not busy-work (testing 
if the file is open on every call).


`shared` is no comment on performance. You have written a slow 
locking API.
If you care about perf, you would write a completely different 
API that favours perf.

This is not impossible, nor even particularly hard.


You're conflating your assumptions about the code with the topic 
of this discussion. I can write a million examples and you'll 
still find a million reasons to talk about how they're 
incorrectly implemented, instead of focusing on merits or 
disadvantages of your proposal with the code given as is.


If you have a `shared` reference, it better be to existing 
data.


I mean, if I dereference a pointer, it had better not be null!


Why would you share a null pointer?


That's why having `shared` and un-`shared` references to the 
same data simultaneously is not safe: you can't guarantee in any 
way that the owning thread doesn't invalidate the data through 
its non-`shared` reference while you're doing your threadsafe 
`shared` work; you can only "promise" that by convention 
(documentation).


The owning thread is not a special actor. Your reasoning is 
wonky here.


Why have it then at all? If it's not a "special" actor, just make 
all shared data `shared`. But your proposal specifically targets 
the conversion, suggesting you *do* need a special actor.



And I have partially-read or partially-written data.


I expect you flushed before killing the file.


So? The threads still weren't done yet.

Or maybe I call closeFile(), the main thread continues and opens 
another file, which gives the same file descriptor; `shared` 
references to FileHandle which the user forgot to wait on 
continue to work, oblivious to the fact that it's a different 
file now.


It's wild to suggest that ANY design for `shared` should 
somehow deal with the OS recycling a file handle...


I'm not suggesting that at all; you're completely misrepresenting 
what I'm saying by splitting a quote.


And it's still not an un-@safe crash! It's just a program with 
a bug.


Ok, if you say that sticking @safe on code that can partially 
piece together data from unrelated sources is fine, then sure.



> I'm going to assume that `shareWithThreads()` was implemented
> by an 'expert' who checked the function results for errors...


But you can only find out about these errors in 
`waitForThreads`, the very call that the user "forgot" to make!


...what?


Please suggest another way of handling errors reported by other 
threads. More `shared` state?



[ ... snip ... ]


You have to concede defeat at this point.


I agree. No matter how hard I try or how many times I ask you to 
demonstrate, I still fail to see the value in assuming @safe 
implicit conversion of mutable data to shared. Instead of 
defending your proposal, you chose to attack the opposition. 
You've defeated me, flawless victory.



Destroy my proposal with another legitimately faulty program.


Is there a point? I post code, you: "nah, that's wrong". Steven 
posts code, you: "nah, that's wrong". Timon posts code, you: 
"nah, that's wrong". Walter posts code, you: "nah, that's 
wrong"... What's right then?


Re: shared - i need it to be useful

2018-10-21 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 21 October 2018 at 11:25:16 UTC, aliak wrote:
On Saturday, 20 October 2018 at 16:41:41 UTC, Stanislav Blinov 
wrote:
Those are not "ok". They're only "ok" under Manu's proposal so 
long as the author of C promises (via documentation) that 
that's indeed "ok". There can be no statically-enforced 
guarantees that those calls are "ok", or that issuing them in 
that order is "ok". Yet Manu keeps insisting that somehow 
there is.


No, he is not insisting you can statically enforce thread safety.


I stand corrected, it would seem so.

When I say ok, I mean assuming the implementer actually wrote 
correct code. This applies to any shared method today as well.


This ("ok") can only be achieved if the "implementor" (the 
"expert") writes every function self-contained, at which point 
sharing something from user code becomes a non-issue (i.e. it 
becomes unnecessary). But that's not a very useful API. As soon 
as you have more than one function operating on the same data, 
the onus is on the user (the caller) to call those functions in 
correct order, or, more generally, without invalidating the state 
of shared data.
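A minimal sketch of that point (the `Session` type and its methods are mine, hypothetical, not from anyone's proposal): each method below is internally synchronized and individually "threadsafe", yet only caller discipline keeps the call *sequence* valid:

```d
import core.atomic : atomicLoad, atomicStore;

struct Session
{
    private shared bool active;

    // Each method is individually "threadsafe"...
    void start() shared  { atomicStore(active, true); }
    void finish() shared { atomicStore(active, false); }
    bool isActive() shared { return atomicLoad(active); }
}

// ...but nothing statically prevents calling them backwards:
void misuse(ref shared Session s)
{
    s.finish(); // compiles fine, "threadsafe" in isolation
    s.start();  // the session is now left active: a protocol bug
                // no qualifier can catch
}
```

No amount of `shared` qualification turns that ordering contract into something the compiler checks.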


Re: shared - i need it to be useful

2018-10-21 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 21 October 2018 at 05:47:14 UTC, Manu wrote:
On Sat, Oct 20, 2018 at 10:10 AM Stanislav Blinov via 
Digitalmars-d  wrote:


Synchronized with what? You still have `a`, which isn't 
`shared` and doesn't require any atomic access or 
synchronization. At this point it doesn't matter if it's an 
int or a struct. As soon as you share `a`, you can't just 
pretend that reading or writing `a` is safe.



`b` can't read or write `a`... accessing `a` is absolutely safe.


It's not, with or without your proposal. The purpose of sharing 
`a` into `b` is to allow someone to access `*a` in a threadsafe 
way (but un-@safe, as it *will* require casting away `shared` 
from `b`). That is what's making keeping an unshared reference 
`a` un-@safe: whoever accesses `*a` in their @trusted 
implementations via `*b` can't know that `*a` is being 
(@safe-ly!) accessed in a non-threadsafe way at the same time.


Someone must do something unsafe to undermine your 
threadsafety... and if you write unsafe code and don't know what 
you're doing, there's nothing that can help you.


Ergo, it follows that anyone making an implicit cast from 
mutable to shared had better know what they're doing, which mere 
mortal users (not "experts") might not. I.e. it's like giving a 
loaded gun to someone who has never held a weapon before.



Today, every interaction with shared is unsafe.


Nod.

Creating a safe interaction with shared will lead to people not 
doing unsafe things at every step.


Triple nod.


Encapsulate it all you want, safety only remains a
contract of convention, the language can't enforce it.


You're talking about @trusted code again. You're fixated on 
unsafe interactions... my proposal is about SAFE interactions. 
I'm trying to obliterate unsafe interactions with shared.


I know... Manu, I *know* what you're trying to do. We (me, Atila, 
Timon, Walter...) are not opposing your goals, we're pointing out 
the weakest spot of your proposal, which, it would seem, would 
require more changes to the language than just disallowing 
reading/writing `shared` members.



module expertcode;

@safe:

struct FileHandle {
 @safe:

 void[] read(void[] storage) shared;
 void[] write(const(void)[] buffer) shared;
}

FileHandle openFile(string path);
// only the owner can close
void closeFile(ref FileHandle);

void shareWithThreads(shared FileHandle*); // i.e. generate a number of jobs in some queue
void waitForThreads(); // waits until all processing is done

module usercode;

import expertcode;

void processHugeFile(string path) {
 FileHandle file = openFile(path);
 shareWithThreads(&file);// implicit cast
 waitForThreads();
 file.closeFile();
}


This is a very strange program...


Why? That's literally the purpose of being able to `share`: you 
create/acquire a resource, share it, but keep a non-`shared` 
reference to yourself. If that's not required, you'd just create 
the data `shared` to begin with.


I'm dubious it is in fact "expertcode"... but let's look into 
it.


You're fixating on it being a file now. I give an abstract 
example, you dismiss it as contrived; I give a concrete one, you 
want to dismiss it as "strange".


Heh, replace 'FileHandle' with 'BackBuffer', 'openFile' with 
'acquireBackBuffer', 'shareWithThreads' with 
'generateDrawCommands', 'waitForThreads' with 
'gatherCommandsAndDraw', 'closeFile' with 'postProcessAndPresent' 
;)


File handle seems to have just 2 methods... and they are both 
threadsafe. Open and Close are free-functions.


It doesn't matter if they're free functions or not. What matters 
is signature: they're taking non-`shared` (i.e. 'owned') 
reference. Methods are free functions in disguise.


Close does not promise threadsafety itself (but of course, it 
doesn't violate read/write's promise, or the program is 
invalid).


Yep, and that's the issue. It SHALL NOT violate threadsafety, but 
it can't promise such in any way :(


I expect the only possible way to achieve this is by an 
internal mutex to make sure read/write/close calls are 
serialised.


With that particular interface, yes.

read and write will appropriately check their file-open state 
each time they perform their actions.


Why? The only purpose of giving someone a `shared` reference is 
to give a reference to an open file. `shared` references can't do 
anything with the file but read and write, they would expect to 
be able to do so.


What read/write do in the case of being called on a closed 
file... anyone's guess? I'm gonna say they no-op... they return 
a null pointer to indicate the error state.


Looking at the meat of the program; you open

Re: shared - i need it to be useful

2018-10-21 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 21 October 2018 at 09:58:18 UTC, Walter Bright wrote:

On 10/20/2018 11:08 AM, Nicholas Wilson wrote:
You can if no-one else writes to it, which is the whole point 
of Manu's proposal. Perhaps it should be const shared instead 
of shared but still.


There is no purpose whatsoever to data that can be neither read 
nor written. Shared data is only useful if, at some point, it 
is read/written, presumably by casting it to unshared in 
@trusted code. As soon as that is done, you've got a data race 
with the other existing unshared aliases.


Just a thought: if a hard requirement is made on `shared` data to 
be non-copyable, a @safe conversion could be guaranteed. But it 
can't be implicit either:


import std.algorithm.mutation : move;
import std.traits : isCopyable;

shared(T) share(T)(T value) if (!is(T == shared) && !isCopyable!T) {
    shared(T) result = move(value);
    return result;
}

struct ShareableData {
    @disable this(this); // Generated by compiler in presence of
                         // `shared` members and/or `shared` methods

    /* ... */
}

void sendToThread(T)(shared T* ptr) @safe;

void usage() @safe {
    int x;
    sendToThread(&x); // Error: 'x' is not shared
    shared y = x;     // Ok
    sendToThread(&y); // Ok

    ShareableData data;
    sendToThread(&data); // Error: 'data' is not shared
    auto p = &data;
    sendToThread(p);     // Error: *p is not shared

    auto sharedData = share(move(data));
    sendToThread(&sharedData); // Ok

    auto yCopy = y;             // Error: cannot copy 'shared' y
    auto dataCopy = sharedData; // Error: cannot copy 'shared' sharedData

    ShareableData otherData;
    sendToThread(cast(shared(ShareableData)*) &otherData); // Error: non-@safe cast in @safe code
}

And again, we're back to 'once it's shared, it can't be @safe-ly 
unshared', which ruins the distinction between owned and shared 
references, which is one of the nicer properties that Manu's 
proposal seems to want to achieve :(


Re: shared - i need it to be useful

2018-10-20 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 20 October 2018 at 18:30:59 UTC, Manu wrote:
On Sat, Oct 20, 2018 at 9:45 AM Stanislav Blinov via 
Digitalmars-d  wrote:


On Saturday, 20 October 2018 at 16:18:53 UTC, aliak wrote:

> class C {
>   void f();
>   void g() shared;
> }

Those are not "ok". They're only "ok" under Manu's proposal so 
long as the author of C promises (via documentation) that 
that's indeed "ok". There can be no statically-enforced 
guarantees that those calls are "ok", or that issuing them in 
that order is "ok". Yet Manu keeps insisting that somehow 
there is.


I only insist that if you write a shared method, you promise 
that it is threadsafe.
If f() undermines g() threadsafety, then **g() is NOT 
threadsafe**, and you just write an invalid program.


You can write an invalid program in any imaginable number of 
ways; that's just not an interesting discussion. An interesting 
discussion is what we might do to help prevent writing such an 
invalid program... I don't suggest here what we can do to 
statically enforce this, but I suspect there do exist *some* 
options which may help, which can be further developments.


What I also assert is that *this unsafe code is rare*... it 
exists only at the bottom of the tooling stack, and anyone else 
using a shared object will not do unsafe, and therefore will 
not be able to create the problem. If you do unsafety anywhere 
near `shared`, you should feel nervous. I'm trying to make a 
world where you aren't *required* to do unsafety at every 
single interaction.


Understand: f() can only undermine g() promise of threadsafety 
**if f() is not @safe**. Users won't create this situation 
accidentally, they can only do it deliberately.


---

module expertcode;

@safe:

struct FileHandle {
@safe:

void[] read(void[] storage) shared;
void[] write(const(void)[] buffer) shared;
}

FileHandle openFile(string path);
// only the owner can close
void closeFile(ref FileHandle);

void shareWithThreads(shared FileHandle*); // i.e. generate a number of jobs in some queue
void waitForThreads(); // waits until all processing is done


module usercode;

import expertcode;

void processHugeFile(string path) {
FileHandle file = openFile(path);
shareWithThreads(&file);// implicit cast
waitForThreads();
file.closeFile();
}

---

Per your proposal, everything in 'expertcode' can be written 
@safe, i.e. not violating any of the points that @safe forbids, 
or doing so only in a @trusted manner. As far as the language is 
concerned, this would mean that processHugeFile can be @safe as 
well.


Remove the call to `waitForThreads()` (assume user just forgot 
that, i.e. the "accident"). Nothing would change for the 
compiler: all calls remain @safe. And yet, if we're lucky, we get 
a consistent instacrash. If we're unlucky, we get memory 
corruption, or an unsolicited write to another currently open 
file, either of which can go unnoticed for some time.


Of course the program becomes invalid if you do that, there's no 
question about it, this goes for all buggy code. The problem is, 
definition of "valid" lies beyond the type system: it's an 
agreement between different parts of code, i.e. between expert 
programmers who wrote FileHandle et al., and users who write 
processHugeFile(). The main issue is that certain *runtime* 
conditions can still violate @safe-ty.


Your proposal makes the language more strict wrt. writing @safe 
'expertcode', thanks to disallowing reads and writes through 
`shared`, which is great.
However the implicit conversion to `shared` doesn't in any way 
improve the situation as far as user code is concerned, unless 
I'm still missing something.


Re: shared - i need it to be useful

2018-10-20 Thread Stanislav Blinov via Digitalmars-d
On Saturday, 20 October 2018 at 16:48:05 UTC, Nicholas Wilson 
wrote:
On Saturday, 20 October 2018 at 09:04:17 UTC, Walter Bright 
wrote:

On 10/19/2018 11:18 PM, Manu wrote:

The reason I ask is because, by my definition, if you have:
int* a;
shared(int)* b = a;

While you have 2 numbers that address the same data, it is 
not actually aliased because only `a` can access it.


They are aliased,


Quoting Wikipedia:

two pointers A and B which have the same value, then the name 
A[0] aliases the name B[0]. In this case we say the pointers A 
and B alias each other. Note that the concept of pointer 
aliasing is not very well-defined – two pointers A and B may or 
may not alias each other, depending on what operations are 
performed in the function using A and B.


In this case given the above: `a[0]` does not alias `b[0]` 
because `b[0]` is ill defined under Manu's proposal, because 
the memory referenced by `a` is not reachable through `b` 
because you can't read or write through `b`.



by code that believes it is unshared


you cannot `@safe`ly modify the memory  through `b`, `a`'s view 
of the memory is unchanged in @safe code.


And that's already a bug, because the language can't enforce 
threadsafe access through `a`, regardless of presence of `b`. 
Only the programmer can.



and, code that believes it is shared.


you cannot have non-atomic access though `b`, `b` has no @safe 
view of the memory, unless it is atomic (which by definition is 
synchronised).


Synchronized with what? You still have `a`, which isn't `shared` 
and doesn't require any atomic access or synchronization. At this 
point it doesn't matter if it's an int or a struct. As soon as 
you share `a`, you can't just pretend that reading or writing `a` 
is safe. Encapsulate it all you want, safety only remains a 
contract of convention, the language can't enforce it.
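The shape of the problem, in a few lines (a sketch of mine, written with an explicit cast here; under the proposal the conversion would be implicit):

```d
import core.atomic : atomicOp;

int sketch()
{
    int* a = new int;
    shared(int)* b = cast(shared(int)*) a; // the conversion in question

    *a = 1;               // plain, unsynchronized write through `a`
    atomicOp!"+="(*b, 1); // atomic read-modify-write through `b`

    // On one thread this happens to work; put the two accesses on
    // different threads and the plain write races with the atomic one -
    // `b`'s atomicity cannot synchronize `a`'s plain access.
    return *a;
}
```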


Re: shared - i need it to be useful

2018-10-20 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 20 October 2018 at 16:18:53 UTC, aliak wrote:


class C {
  void f();
  void g() shared;
}

void t1(shared C c) {
  c.g; // ok
  c.f; // error
}

void t2(shared C c) {
  c.g; // ok
  c.f; // error
}

auto c = new C();
spawn(&t1, c);
spawn(&t2, c);
c.f; // ok
c.g; // ok


Those are not "ok". They're only "ok" under Manu's proposal so 
long as the author of C promises (via documentation) that that's 
indeed "ok". There can be no statically-enforced guarantees that 
those calls are "ok", or that issuing them in that order is "ok". 
Yet Manu keeps insisting that somehow there is.


Re: More zero-initialization optimizations pending in std.experimental.allocator?

2018-10-20 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 20 October 2018 at 15:10:38 UTC, Nathan S. wrote:

Other opportunities  would rely on being able to identify if 
it's ever more efficient to write `memset(&x, 0, 
typeof(x).sizeof)` instead of `x = typeof(x).init` which seems 
like the kind of optimization that belongs in the compiler 
instead.


Not unless `mem{set, cpy, move}` etc. are made into compiler 
intrinsics. Carrying libc dependencies into low-level sensitive 
code just stinks. If anything, the `memcpy` calls should be 
removed from `moveEmplace`, not `memset` calls added to it.

They're also not CTFE-able.


Re: Shared - Another Thread

2018-10-20 Thread Stanislav Blinov via Digitalmars-d
On Saturday, 20 October 2018 at 02:09:56 UTC, Dominikus Dittes 
Scherkl wrote:
On Saturday, 20 October 2018 at 00:46:36 UTC, Nicholas Wilson 
wrote:

Mutable = value may change
const = I will not change the value
immutable = the value will not change

unshared = I (well the current thread) owns the reference
shared = reference not owned, no unordered access, no 
(unordered) writes

threadsafe = ???

unshared = the current thread owns the reference
threadsafe = I guarantee no race conditions or deadlocks will 
occur

shared = every thread may have references


Exactly: "threadsafe" is nothing more than a contract between 
programmers.
When you have "const" data, it is trivial for the compiler to 
enforce that: it just doesn't allow you to mutate it. But the 
compiler cannot reason about whether your logic is "threadsafe" 
or not; there can be no static enforcement of "thread-safety". It 
only holds as an implicit contract between programmers: the 
authors of data and functions, and the users of that data and 
functions, i.e. the API and the callers.
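To illustrate the difference (a trivial sketch of mine): `const` is a property the compiler can check mechanically, while a "threadsafe" signature is only a claim:

```d
// `const` is mechanically enforceable: uncommenting the write is a
// compile error, no programmer agreement needed.
int readOnly(const(int)* p)
{
    // *p = 1; // Error: cannot modify `const` expression
    return *p;
}

// "threadsafe" is only a convention: this signature *claims* safety,
// but the compiler can merely restrict which operations the body may
// use; whether the protocol is actually race-free is on the author.
void claimedThreadsafe(shared(int)* p)
{
}
```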


Re: Shared - Another Thread

2018-10-19 Thread Stanislav Blinov via Digitalmars-d
On Friday, 19 October 2018 at 13:40:54 UTC, Dominikus Dittes 
Scherkl wrote:

On Thursday, 18 October 2018 at 16:24:39 UTC, Manu wrote:
On Wednesday, 17 October 2018 at 22:56:26 UTC, H. S. Teoh 
wrote:
What cracks me up with Manu's proposal is that it is its 
simplicity and lack of ambition that is criticized the most. 
shared is a clusterfuck, according to what I gathered from 
the forum, I never had yet to use it in my code. Manu's idea 
makes it a little less of a clusterfuck, and people attack 
the idea because it doesn't solve all and everything that's 
wrong with shared. Funny.




Elaborate on this... It's clearly over-ambitious if anything.
What issues am I failing to address?


First of all, you called it "shared", but what your concept 
describes is "threadsafe".
If you had called it the latter, it would have been clear to 
everybody that thread-local data is indeed automatically 
threadsafe, because only one thread has access to it (that 
"implicit conversion"). But if something is "shared" (in the 
common-world sense), it is of course no longer "threadsafe" - you 
have to implement special methods to treat it.


Conflating "shared" and "threadsafe" in that manner was, I 
think, the biggest mistake of your proposal.


He talked about it in a previous thread, and generally I would 
agree with him that such conflation is indeed beneficial provided 
that some concessions are made for `shared`. Moreover, yet 
another attribute? Please no...


struct X {
    void foo(threadsafe const shared Bar* bar) @nogc @trusted nothrow pure const shared threadsafe;
}

Attribute explosion is bad enough already.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Friday, 19 October 2018 at 02:20:22 UTC, Manu wrote:

I've given use cases constantly, about taking object ownership, 
promotions, and distribution for periods (think parallel for),


Manu, you haven't shown *any* code in which a conversion from 
mutable to shared, an *implicit* one at that, was even present, 
let alone useful. While at the same time blatantly dismissing 
*all* examples to the contrary as "bad code" or for any other reason, 
especially the "you just don't understand". How do you expect us 
to understand if all you do is talk about how it's useful, but 
don't explain why?
You can talk and brag all you want, but until you show at 
least one example, I'm sorry, but you have no case. And what's 
this all about? A *small*, localized portion of your proposal, 
one that isn't even the source of the problems you're talking about 
in the current state of affairs.
In all examples flying about in this thread, all of *required* 
casts were *the opposite*, from shared to mutable, and you 
yourself acknowledge that those are indeed required on the lowest 
level.
As I've said numerous times, from all that's been said and 
implied in this thread, the *only* use of implicit casting that 
comes to mind is avoiding writing some forwarding methods, that's 
about it. I honestly can't see how this is valuable to warrant 
such a drastic change in the language. Now maybe it's that I'm 
dumb, then just say so already and we'll all move on. Just don't 
act like you're the only one who knows it all and everyone else 
is an unwashed pleb clinging to worshiping the faces in the trees.


I can achieve all my goals with full @safety, absolutely no 
casts in user code, and I have infrastructure in production 
that applies these patterns successfully. It's worth pursuing.


Okay... Then please do show *one* example of useful implicit 
conversion from mutable to shared.


I've spent years thinking on this, I'm trying to move the 
needle on this issue for the first time in over a decade at 
least,


And you're behaving in this thread as if everyone else, myself 
included, were sitting on our thumbs counting flies for this same 
decade.


and you have violently opposed, in principle, from the very 
first post, and make no effort to actually understand the 
proposition.


I haven't done such a thing. I have asked you, numerous times, 
one, and only one question, and you never so much as replied to 
*that question*. What is the big value of this implicit 
conversion that it would warrant changing current language rules 
regarding type conversions?


It's clear from your insistence, in every single post you've 
made, that you consider my proposal worthless. You want me to 
admit defeat and desist.


And here you are, continuing to presume. Now you suddenly know 
what I want. Marvelous.


I *don't* want you to admit any defeat, *or* desist. I *want* you 
to succeed. I *want* to help make `shared` useful so that I and 
everyone else can actually start writing code with it, not in 
spite of it. I've agreed with you on pretty much everything in 
your proposal, *except one thing*. I want you to demonstrate the 
practical value of *that thing*, and its benefits over the 
current state of affairs, and I asked you several times to 
explain to us mere unwashed mortals exactly how it's useful. What 
have we been doing for 17 pages? Discussing the benefits of 
disabling reads/writes on shared? No. Estimating how much 
existing code could go to trash, what parts of DRuntime/Phobos 
would need a rewrite? No. We were in constant back-and-forth of 
"But... nope... but... nope" about this implicit conversion, 
which you value so much yet for some reason fail to defend. 
Saying "I had good time with it" is not a very practical defense, 
not for a language construct anyway.


Fuck you. You win. I don't have the time or energy to argue 
against a wall.


If you ask to destroy, be prepared for a fight. Or don't ask. 
Just stop appealing to your own authority.


You are obscene, you're completely unproductive, and destructive 
for no apparent reason.


Give it all you've got, please. Let it all out all at once.

I hope you continue to love shared, just the way it is... 
useless.


Yet another presumption. Good on you.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Friday, 19 October 2018 at 01:53:00 UTC, Manu wrote:


This is a red-herring.
In short, he made up this issue, it doesn't exist.
This is just hot air, and only strengthens my conviction.



Produce, or drop this presumptuous crap.



You are an obscene person. I'm out.


Oooh, I'm srry, come baack!
Really though, what is it that you wanted to achieve here? You 
ask for counter-arguments, are given them on *17 pages already*, 
are asked numerous times to actually demonstrate the value of a 
small contained portion of your proposal, and all you do is shrug 
this all off just because you presume to "know better", and on 
top of that have the audacity to call someone else *obscene*? 
Wow... just... wow!



You win.


I didn't know it was a contest.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Friday, 19 October 2018 at 01:22:53 UTC, Manu wrote:
On Thu, Oct 18, 2018 at 3:10 PM Simen Kjærås via Digitalmars-d 
 wrote:



Now, Two very good points came up in this post, and I think 
it's worth stating them again, because they do present 
possible issues with MP:


It is easy to respond to these.


1) How does MP deal with reorderings in non-shared methods?

I don't know. I'd hide behind 'that's for the type implementor 
to handle', but it's a subtle enough problem that I'm not 
happy with that answer.


This is a red-herring. Opaque function calls are never 
reordered.
If they are inlined, the compiler has full visibility to the 
internal machinery present.


You don't say?.. And what, exactly, stops the optimizer from 
removing "unnecessary" reads or rearranging them with stores, 
given the original code, which, if you freaking read it, you'd 
see there's no indication that it's not allowed to do so.


If you call one function that performs an atomic op, then 
another that performs an atomic op, it is impossible for the 
CPU to reorder atomic op's around eachother, that would defeat 
the entire point of hardware atomic operations.


I'm not talking about CPU reordering at all. I'm talking about 
the optimizer.
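Concretely, what I mean (a sketch of mine; whether a given backend performs these transforms varies, but nothing in the language forbids them): in a non-`shared` method, plain field accesses carry no compiler barriers, so the optimizer is free to fold or move them:

```d
struct Box
{
    private int value; // plain data: no `shared`, no atomics

    // In a non-shared method the compiler may assume no other thread
    // observes `value`: it can keep it in a register, merge the two
    // stores below into a single `value += 2`, or move them across
    // surrounding non-atomic code.
    int bump()
    {
        value = value + 1;
        value = value + 1;
        return value;
    }
}
```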



In short, he made up this issue, it doesn't exist.


Yes, of course I have. What else have I made up, can you tell? 
You know what doesn't exist though? Even one example of a useful 
implicit conversion from mutable to shared from you. Not even one.



2) What about default members like opAssign and postblit?

The obvious solution is for the compiler to not generate these 
when a type has a shared method or is taken as shared by a 
free function in the same module. I don't like the latter part 
of that, but it should work.


These aren't issues either. There's nothing wrong with atomic
assignment; you just have to implement an atomic assignment.


You just haven't read the code. Those members aren't even 
`shared`. The *least* you can do is disable them *if* you're 
going to cast your variable to `shared`. Otherwise your 
"interface" remains non-threadsafe.


Postblit is being replaced with copy-ctor's and `shared` is one 
of the explicit reasons why! Copy-ctor's are also fine, it 
would express an atomic assignment.


And this strengthens *my* belief that you haven't at all thought 
about this. There is literally *no* purpose for any `shared` 
types to have any copy-ctors. The only feasible copy primitives 
are from shared to local and from local to shared. Not to mention 
that again, to even talk about your "implicit" conversions, you 
must first think about what can happen to the *owned* 
(non-`shared`) reference after the conversion. Hint: you can't 
copy it. You can't assign *to it*. Not via default-generated 
postblits and opAssigns, which are not, and can not, be "atomic".
I'm fully aware about postblits being "replaced" by copy-ctors, 
I'm also fully aware how "much" thought was put into that wrt. 
`shared`.



This is just hot air, and it only strengthens my conviction.


You know what, I'm fed up with you too. Just show me one, *one* 
non-contrived example of useful implicit conversion from mutable 
to shared. So far you haven't produced *any at all*. Then we can 
talk about what is hot air here. Produce, or drop this 
presumptuous crap.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Friday, 19 October 2018 at 00:29:01 UTC, Timon Gehr wrote:

On 18.10.18 23:34, Erik van Velzen wrote:
If you have an object which can be used in both a thread-safe 
and a thread-unsafe way that's a bug or code smell.


Then why do you not just make all members shared? Because with 
Manu's proposal, as soon as you have a shared method, all 
members effectively become shared. It just seems pointless to 
type them as unshared anyway and then rely on convention within 
@safe code to prevent unsafe accesses. Because, why? It just 
makes no sense.


Let's assume you have something like this:

module foo;

private shared int sharedState;

struct Accessor {
    int flags;

    void addUser(this T)() {
        static if (is(T == shared))
            sharedState.atomicInc(); // unconditionally increment when it's a shared reference
        else {
            // owner may optimize shared access based on its own state
            if (!(flags & SKIP_LOCKS)) sharedState.atomicInc();
        }
    }

    void removeUser(this T)() {
        static if (is(T == shared))
            sharedState.atomicDec();
        else {
            if (!(flags & SKIP_LOCKS)) sharedState.atomicDec();
        }
    }

    void setFlags(int f) { flags = f; }
}

The 'Accessor' doesn't really hold any shared state, but it 
accesses a shared module global. Now, the *owner* (e.g. code that 
instantiated a local Accessor) may use the non-`shared` interface 
to track additional state and make decisions whether or not to 
access the global. A concrete example would be e.g. an I/O lock 
where, if you know you don't have any threads other than main, 
you can skip the syscalls locking/unlocking the handle.
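To make the ownership split concrete, here is a hedged sketch of how such an Accessor might be used. Note that SKIP_LOCKS, atomicInc/addUser and the module layout are the hypothetical names from the example above, not a real API:

```d
// Illustrative only; relies on the hypothetical Accessor sketch.
void ownerCode() {
    Accessor a;
    // The owner knows main is the only thread so far, so it may
    // opt out of the atomic ops entirely via its local state:
    a.setFlags(SKIP_LOCKS);
    a.addUser(); // non-shared overload: skips the atomic increment
}

void otherThread(shared Accessor* a) {
    // A shared reference carries no such knowledge, so the shared
    // overload unconditionally performs the atomic op:
    a.addUser();
}
```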


With the proposal I posted in the beginning, you would then not 
only get implicit conversion of class references to shared, but 
also back to unshared.


Is it in your first posts in this thread? I must've skipped that.

I think the conflation of shared member functions and thread 
safe member functions is confusing. shared on a member function 
just means that the `this` reference is shared.


Nod.

The only use case for this is overloading on shared. The D 
approach to multithreading is that /all/ functions should be 
thread safe, but it is easier for some of them because they 
don't even need to access any shared state. It is therefore 
helpful if the type system cleanly separates shared from 
unshared state.


Nod nod.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Friday, 19 October 2018 at 00:36:11 UTC, Timon Gehr wrote:

On 19.10.18 02:29, Stanislav Blinov wrote:

On Thursday, 18 October 2018 at 23:47:56 UTC, Timon Gehr wrote:

I'm pretty sure you will have to allow operations on shared 
local variables. Otherwise, how are you ever going to use a 
shared(C)? You can't even call a shared method on it because 
it involves reading the reference.


Because you can't really "share" C (e.g. by value). You share 
a C*, or, rather a shared(C)*.


(Here, I intended C to be a class, if that was unclear.)


In that case, it's already a pointer, and the only real issue is 
the interaction with GC, which I mentioned before *needs* to be 
addressed. And that is only when C was allocated by GC.


The pointer itself, which you own, isn't shared at all, and 
shouldn't be: it's your own reference to shared data. You can 
read and write that pointer all you want. What you must not be 
able to do is read and write *c.

...


Presumably you could have a local variable shared(C) c, then 
take its address &c and send it to a thread which will be 
terminated before the scope of the local variable ends.


I assume you mean *after*, because if that thread terminates 
before there's no problem.



So, basically, the lack of tail-shared is an issue.


Well, not exactly. Irrespective of Manu's proposal, it's just 
inherent in D: sharing implies escaping, there's really no way 
around it. Provisions must be made in DIP1000 and in the language 
in general. However, that is a good point *against* implicit 
conversions, let alone @safe ones.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 23:47:56 UTC, Timon Gehr wrote:

I'm pretty sure you will have to allow operations on shared 
local variables. Otherwise, how are you ever going to use a 
shared(C)? You can't even call a shared method on it because it 
involves reading the reference.


Because you can't really "share" C (e.g. by value). You share a 
C*, or, rather a shared(C)*. The pointer itself, which you own, 
isn't shared at all, and shouldn't be: it's your own reference to 
shared data. You can read and write that pointer all you want. 
What you must not be able to do is read and write *c.


Although, when it's a global?.. I'm not sure. We can have the 
compiler always generate a by-reference access, i.e. make that 
part of the language spec. Because a full read of C.sizeof bytes 
can't be statically proven thread-safe generically anyway (that's 
why generated copying and assignment don't make any sense for 
`shared`).


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 22:09:02 UTC, Manu wrote:

The 2 different strategies are 2 different worlds, one is my 
proposal,

the other is more like what we have now. They are 2 different
rule-sets.
You are super-attached to some presumptions, and appear to 
refuse to

analyse the proposal from the grounds it defines.


Please see my exchange with Simen in case you're skipping my 
posts.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 22:08:14 UTC, Simen Kjærås wrote:

On Thursday, 18 October 2018 at 16:31:02 UTC, Stanislav Blinov


Now, if the compiler generated above in the presence of any 
`shared` members or methods, then we could begin talking about 
it being threadsafe...


Again, this is good stuff. This is an actual example of what 
can go wrong. Thanks!


You're welcome.

No, void atomicInc(shared int*) is perfectly safe, as long as 
it doesn't cast away shared.


Eh? If you can't read or write to your shared members, how *do* 
you implement your "safe" shared methods without casting away 
shared? Magic?!


Again, the problem is int already has a non-thread-safe 
interface, which Atomic!int doesn't.


*All* structs and classes have a non-thread-safe interface, as I 
have demonstrated, and thankfully you agree with that. But *there 
is no mention of it* in the OP, nor was there *any recognition* 
of it up to this point. When I asked about copying, assignment 
and destructors in the previous thread (thread, not post), Manu 
quite happily proclaimed those were fine given some arbitrary 
conditions. Yet those are quite obviously *not fine*, especially 
*if* you want to have a "safe" implicit conversion. *That* 
prompted me to assume that Manu didn't actually think long and 
hard about his proposal and what it implies. Without recognizing 
those issues:


struct S {
private int x;
void foo() shared;
}

void shareWithThread(shared S* s);

auto s = make!S; // or new, whatever
shareWithThread(&s); // Manu's implicit conversion

// 10 kLOC below, written by some other guy 2 years later:
*s = S.init;

^ that is a *terrible*, and un-greppable BUG. How fast would you 
spot that in a review? Pretty fast if you saw or wrote the other 
code yesterday. A week later? A month? A year?..
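For what it's worth, the bug above is exactly what disabling the default copy primitives would catch at compile time. A hedged sketch, assuming `shared`-aware types were required to do this (current D does not enforce it):

```d
struct S {
    private int x;
    @disable this(this);                  // no implicit copies
    @disable void opAssign(typeof(this)); // no whole-value assignment
    void foo() shared;
}

void tenKLocBelow(S* s) {
    // *s = S.init; // Error: opAssign is disabled - the bug is
    //              // caught by the compiler, not by a reviewer
}
```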


Again, that is assuming *only* what I'm *certain about* in Manu's 
proposal, not something he or you assumed but didn't mention.


And once more, for clarity, this interface includes any 
function that has access to its private members, free function, 
method, delegate return value from a function/method, what have 
you. Since D's unit of encapsulation is the module, this has to 
be the case. For int, the members of that interface include all 
operators. For pointers, it includes dereferencing and pointer 
arithmetic. For arrays: indexing, slicing, access to .ptr, etc. 
None of these lists is necessarily complete.


Aight, now, *now* I can perhaps try to reason about this from 
your point of view. Still, I would need some experimentation to 
see if such approach could actually work. And that would mean 
digging out old non-`shared`-aware code and performing some... 
dubious... activities.


I have no idea where I or Manu have said you can't make 
functions that take shared(T)*.


Because that was the only way to reason about your 
interpretations of various examples until you said this:


I think we have been remiss in the explanation of what we 
consider the interface.
For clarity: the interface of a type is any method, function, 
delegate or otherwise that may affect its internals. That means 
any free function in the same module, and any non-private 
members.


Now compare that to what is stated in the OP and correlate with 
what I'm saying, you might understand where my opposition comes 
from.


Now, two very good points came up in this post, and I think 
it's worth stating them again, because they do present possible 
issues with MP:


1) How does MP deal with reorderings in non-shared methods?

I don't know. I'd hide behind 'that's for the type implementor 
to handle', but it's a subtle enough problem that I'm not happy 
with that answer.



2) What about default members like opAssign and postblit?

The obvious solution is for the compiler to not generate these 
when a type has a shared method or is taken as shared by a free 
function in the same module. I don't like the latter part of 
that, but it should work.


Something I didn't yet stress about (I think only mentioned 
briefly somewhere) is, sigh, destructors. Right now, `shared` 
allows you to either have a `~this()` or a `~this() shared`, but 
not both. In my mind, `~this() shared` is an abomination. One 
should either:


1) have data that starts life shared (a global, or e.g. new 
shared(T)), and simply MUST NOT have a destructor. Such data is 
ownerless, or you can say that everybody owns it. Therefore 
there's no deterministic way of knowing whether or not or when to 
call the destructor. You can think of it as an analogy with 
current stance on finalizers with GC.


2) have data that starts life locally (e.g. it's not declared 
`shared`, but converted later). Such types MAY have a destructor, 
because they always have a cleanly defined owner: whoever holds 
the non-`shared` reference (recall that copying MUST be 
*disabled* for any shared-aware type). But that destructor MUST 
NOT be `shared`.
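A hedged sketch of rule 2 (the type and member names are illustrative, and it relies on the convention that only the owner, holding the non-`shared` reference, ever runs the destructor):

```d
// Illustrative only: 'Resource' and its members are hypothetical.
struct Resource {
    @disable this(this);   // copying disabled for a shared-aware type

    void use() shared;     // thread-safe interface, callable by any thread

    // Non-shared destructor: only the single owner may destroy the
    // object, and only after all other threads are done with it.
    ~this() { /* release the resource */ }
}
```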


Consequently, types such as these:

shared struct S { /* ... */

Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 21:51:52 UTC, aliak wrote:
On Thursday, 18 October 2018 at 18:12:03 UTC, Stanislav Blinov 
wrote:

On Thursday, 18 October 2018 at 18:05:51 UTC, aliak wrote:

Right, but the argument is a shared int*, so from what I've 
understood... you can't do anything with it since it has no 
shared members. i.e. you can't read or write to it. No?


Obviously the implementation would cast `shared` away, just 
like it would if it were Atomic!int. But for some reason, Manu 
thinks that latter is OK doing that, but former is voodoo. Go 
figure.


Sounds like one is encapsulated within a box that carefully


Unit of "encapsulation" in D is either a module or a package, not 
a struct. Free functions are a very valid way of accessing 
"encapsulated" data.


handles thread safety and makes promises with the API and the 
other is not.


Nope.

void foo(const T* x);

makes a promise to not write through x. It assumes '*x' itself 
may not be const.


void foo(shared T* x);

makes a promise to treat '*x' in a thread-safe manner. But per 
MP, it *assumes* that '*x' is shared. And if it isn't, good luck 
finding that spot in your code.



I don't think you can apply shared on a free function, i.e.:

void increment(shared int*) shared;

in which case increment would not, and cannot be a threadsafe 
api in Manu's world.


Wrong. In Manu's "world", this is somehow considered "safe":

void T_method_increment(ref shared T);

...because that is what a method is, while this:

void increment(shared T*);
void increment(ref shared T);

...is considered "unsafe" because reasons. Do you see the 
difference in signatures? I sure don't.


So once you throw an Object in to shared land all you could do 
is call shared methods on it, and since they'd have been 
carefully written with sharing in mind... it does seem a lot 
more usable.


Same goes with free functions.


On these two cases:

increment(shared int* p1) {
    // I have no guarantees that protecting and accessing p1 will
    // not cause problems
    //
    // but you don't have this guarantee in any world (current nor
    // MP) because you can never be sure that p1 was not cast from
    // a mutable.
}


Except that you *have to* *explicitly* cast it, which is:
a) documentation
b) greppable
c) easily fails review for people not authorized to do so
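In code, the difference between the two worlds might look like this (increment is the hypothetical API from the example above):

```d
int x;

// MP: compiles silently, waiving your claim on x:
// increment(&x);

// Explicit world: the cast is documentation, greppable, and an
// obvious flag in review:
increment(cast(shared int*) &x);
```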


int* p2;
increment(p2);
// I have no guarantee that accessing p2 is safe anymore.
// But that would apply only if the author of increment was 
being unsafe.

// and "increment" cannot be marked as shared.


No. *You*, the caller of an API (the "increment"), do not 
necessarily control that API. By allowing implicit conversion you 
waive all claims on your own data. In Manu's world, "increment" 
*assumes* you're doing the right thing. Yet at the same time, 
Manu happily talks about how only "experts" can do the right 
thing. How these two things co-exist in his world, I have no idea.


The "have no guarantee" holds in both cases. Except case (1) 
would require actually checking what the hell you're doing before 
making a cast, while in case (2) you just blindly write unsafe 
code.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
Manu, Erik, Simen... In what world can a person consciously say 
"casting is unsafe", and yet at the same time claim that 
*implicit casting* is safe? What the actual F, guys?




Re: Shared - Another Thread

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 18 October 2018 at 20:59:59 UTC, Erik van Velzen 
wrote:


Let me start by saying I'm willing to admit that I was 
factually wrong.


Also keep in mind that "me having an impression" is something 
that is can't be independently verified and you'll have to take 
my at my word. Just that the exact reason for that impression 
was lost to the sands of time.


Quite a simple reason: it was years ago; however old you are now, 
you were younger and less experienced, and probably didn't 
understand something back then.


Your impression was wrong. Open e.g. TDPL and read up on 
`shared` how it was envisioned back then.


I don't think the book really supports your argument. The first 
paragraph about shared sounds to me like "the compiler will 
automagically fix it".


Then I don't know what to tell you. It literally talks about 
compiler forbidding unsafe operations and *requiring* you to go 
the extra mile, by just rejecting invalid code (something that 
Manu is proposing to forego!). But that's *code*, not logic.


Only tangentially it is mentioned that you're actually supposed 
to write special code yourself. You would have to be a compiler 
expert to draw the correct conclusion.


Tangentially?! There's a whole section on writing `shared`-aware 
code (none of which would even compile today; I don't know if 
it's addressed in his errata).


Also the last paragraph the quote below is interesting in light 
of our other discussion about casting to shared.


From  the book:

[snip]


Yeah, some of that never happened and never will. But that aside, 
none of it says "threading will be safe by default". It says 
"threading will be a lot less unsafe by default". And *that* is 
what we must achieve.


Re: Shared - Another Thread

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 18 October 2018 at 20:10:18 UTC, Erik van Velzen 
wrote:


When shared stood up in its current form, the expectation was 
set: "this will be threadsafe automatically - we'll figure out 
how in the future".


It never was like that. At all. I don't think either Walter or 
Andrei are idiots, do you?


Because it works for global variables. But it doesn't seem like 
an expectation we can deliver on.


(I have no direct reference to this but that was certainly my 
impression)


Your impression was wrong. Open e.g. TDPL and read up on `shared` 
how it was envisioned back then.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 18 October 2018 at 19:51:17 UTC, Erik van Velzen 
wrote:

On Thursday, 18 October 2018 at 19:26:39 UTC, Stanislav Blinov


Manu said clearly that the receiving thread won't be able to 
read or write the pointer.


Yes it will, by casting `shared` away. *Just like* his 
proposed "wrap everything into" struct will. There's exactly 
no difference.




Casting is inherently unsafe. Or at least, there's no 
threadsafe guarantee.


So? That's the only way to implement required low-level access, 
especially if we imagine that the part of Manu's proposal about 
disabling reads and writes on `shared` values is a given. It's 
the only way to implement Manu's Atomic!int, or at least the 
operations it requires, for example.


You can still disagree on the merits, but so far it has been 
demonstrated as a sound idea.


No, it hasn't been.


I think you are missing the wider point. I can write 
thread-unsafe code *right now*, no casts required. Just put 
shared at the declaration. The proposal would actually give 
some guarantees.


No, I think you are missing the wider point. You can write 
thread-unsafe code regardless of using `shared`, and regardless 
of its implementation details. Allowing *implicit automatic 
promotion* of *mutable thread-local* data to shared will allow 
you to write even more thread-unsafe code, not less.


The solid part of the proposal is about disabling reads and 
writes. The rest is pure convention: write structs instead of 
functions, and somehow (???) benefit from a totally unsafe 
implicit cast.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 18 October 2018 at 19:04:58 UTC, Erik van Velzen 
wrote:
On Thursday, 18 October 2018 at 17:47:29 UTC, Stanislav Blinov 
wrote:
On Thursday, 18 October 2018 at 17:17:37 UTC, Atila Neves 
wrote:

On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote:
Assuming the rules above: "can't read or write to members", 
and the understanding that `shared` methods are expected to 
have threadsafe implementations (because that's the whole 
point), what are the risks from allowing T* -> shared(T)* 
conversion?


int i;
tid.send(&i);
++i;  // oops, data race


Doesn't work. No matter what you show Manu or Simen here they 
think it's just a bad contrived example. You can't sway them 
by the fact that the compiler currently *prevents* this from 
happening.


Manu said clearly that the receiving thread won't be able to 
read or write the pointer.


Yes it will, by casting `shared` away. *Just like* his proposed 
"wrap everything into" struct will. There's exactly no difference.



Because int or int* does not have threadsafe member functions.


int doesn't have any member functions. Or it can have as many as 
you like per UFCS. Same goes for structs. Because "methods" are 
just free functions in disguise, so that whole distinction in 
Manu's proposal is a weaksauce convention at best.


You can still disagree on the merits, but so far it has been 
demonstrated as a sound idea.


No, it hasn't been.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 18:24:47 UTC, Manu wrote:

I have demonstrated these usability considerations in 
production. I am

confident it's the right balance.


Then convince us. So far you haven't.


I propose:
 1. Normal people don't write thread-safety, a very small 
number of
unusual people do this. I feel very good about biasing 100% of 
the
cognitive load INSIDE the shared method. This means the expert, 
and
ONLY the expert, must make decisions about thread-safety 
implementation.


No argument.

 2. Implicit conversion allows users to safely interact with 
safe
things without doing unsafe casts. I think it's a complete 
design fail
if you expect any user anywhere to perform an unsafe cast to 
call a
perfectly thread-safe function. The user might not properly 
understand

their obligations.


Disagreed. "Normal" people wouldn't be doing any unsafe casts and 
*must not be able to* do something as unsafe as silent promotion 
of thread-local mutable data to shared. But it's perfectly fine 
for the "expert" user to do such a promotion, explicitly.


 3. The practical result of the above is, any complexity 
relating to
safety is completely owned by the threadsafe author, and not 
cascaded
to the user. You can't expect users to understand, and make 
correct

decisions about threadsafety. Safety should be default position.


Exactly. And an implicit conversion from mutable to shared isn't 
safe at all.


I recognise the potential loss of an unsafe optimised 
thread-local path.
1. This truly isn't a big deal. If this is really hurting you, 
you

will notice on the profiler, and deploy a thread-exclusive path
assuming the context supports it.


2. I will trade that for confidence in safe interaction every 
day of

the week. Safety is the right default position here.


Does not compute. Either you're an "expert" from above and live 
with that burden, or you're not.


2. You just need to make the unsafe thread-exclusive variant 
explicit, eg:



struct ThreadSafe
{
    private int x;
    void unsafeIncrement() // <- make it explicit
    {
        ++x; // User has asserted that no sharing is possible, no reason to use atomics
    }
    void increment() shared
    {
        atomicIncrement(&x); // object may be shared
    }
}


No. The above code is not thread-safe at all. The private int 
*must* be declared shared. Then it becomes:


struct ThreadSafe {
    // These must be *required* if you want to assert any thread
    // safety. Be nice if the compiler did that for us.
    @disable this(this);
    @disable void opAssign(typeof(this));

    private shared int x; // <- *must* be shared

    void unsafeIncrement() @system {
        x.assumeUnshared += 1;
    }

    // or deduced: assumeUnshared must be @system, thus this
    // unsafeIncrement will also be deduced @system
    void unsafeIncrement()() {
        x.assumeUnshared += 1;
    }

    void increment() shared {
        x.atomicOp!"+="(1);
    }
}
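`assumeUnshared` above is not an actual druntime symbol; a minimal sketch of what such a helper might look like (it must be @system, since it asserts a guarantee the compiler cannot check):

```d
// Hypothetical helper: strips `shared` under the caller's promise
// that no other thread can currently access the value.
ref T assumeUnshared(T)(ref shared T value) @system {
    return *cast(T*) &value;
}
```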

I think this is quite a reasonable and clearly documented 
compromise.


With the fixes above, it is. Without them, it will only be 
apparent from documentation, and who writes, or reads, that?..


I think absolutely-reliably-threadsafe-by-default is the right 
default position.


But it is exactly the opposite of automatic promotions from 
mutable to shared!


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 18 October 2018 at 18:26:27 UTC, Steven 
Schveighoffer wrote:

On 10/18/18 1:47 PM, Stanislav Blinov wrote:
On Thursday, 18 October 2018 at 17:17:37 UTC, Atila Neves 
wrote:

On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote:
1. shared should behave exactly like const, except in 
addition to inhibiting write access, it also inhibits read 
access.


How is this significantly different from now?

-
shared int i;
++i;

Error: read-modify-write operations are not allowed for 
shared variables. Use core.atomic.atomicOp!"+="(i, 1) instead.

-

There's not much one can do to modify a shared value as it is.


i = 1;
int x = i;
shared int y = i;


This should be fine, y is not shared when being created.


'y' isn't, but 'i' is. It's fine on amd64, but that's incidental.
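For reference, the explicit, well-defined version of that snippet would route every access to 'i' through core.atomic:

```d
import core.atomic : atomicLoad, atomicStore;

shared int i;

void explicitVersion() {
    atomicStore(i, 1);            // well-defined write to shared
    int x = atomicLoad(i);        // well-defined read of shared
    shared int y = atomicLoad(i); // y itself is still being created locally
}
```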


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 18:05:51 UTC, aliak wrote:

Right, but the argument is a shared int*, so from what I've 
understood... you can't do anything with it since it has no 
shared members. i.e. you can't read or write to it. No?


Obviously the implementation would cast `shared` away, just like 
it would if it were Atomic!int. But for some reason, Manu thinks 
that latter is OK doing that, but former is voodoo. Go figure.


Re: Shared - Another Thread

2018-10-18 Thread Stanislav Blinov via Digitalmars-d
Pardon the snarkiness, I probably need to get some air from that 
other shared thread.




Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 17:17:37 UTC, Atila Neves wrote:

On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote:
1. shared should behave exactly like const, except in addition 
to inhibiting write access, it also inhibits read access.


How is this significantly different from now?

-
shared int i;
++i;

Error: read-modify-write operations are not allowed for shared 
variables. Use core.atomic.atomicOp!"+="(i, 1) instead.

-

There's not much one can do to modify a shared value as it is.


i = 1;
int x = i;
shared int y = i;

And so on. The compiler needs to forbid this.

Unless I'm missing something, I can't arbitrarily do anything 
with shared members right now.


Except arbitrarily read and write them :)

From there, it opens up another critical opportunity; T* -> 
shared(T)*

promotion.



I don't think that works. See below.


Welcome to the club.

Assuming the rules above: "can't read or write to members", 
and the understanding that `shared` methods are expected to 
have threadsafe implementations (because that's the whole 
point), what are the risks from allowing T* -> shared(T)* 
conversion?


int i;
tid.send(&i);
++i;  // oops, data race


Doesn't work. No matter what you show Manu or Simen here they 
think it's just a bad contrived example. You can't sway them by 
the fact that the compiler currently *prevents* this from 
happening.


All the risks that I think have been identified previously 
assume that you can arbitrarily modify the data.


Do you have any examples of arbitrarily modifying shared data? 
I can't think of any.


See near the beginning of this post ;)

That's  insanity... assume we fix that... I think the 
promotion actually becomes safe now...?


I don't think so, no.


+100500.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 17:10:03 UTC, aliak wrote:

Out of curiosity, when it comes to primitives, what could you 
do under MP in void "atomicInc(shared int*)" that would be 
problematic?


void atomicInc(shared int*) {
  // i.e. what goes here?
}


1. Anything if int* implicitly converts to shared int* (per MP), 
because then that function is indeed unsafe.
2. Only actual platform-specific implementation bugs otherwise, 
and these are beyond what `shared` can provide.
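Point 2 is worth spelling out: without implicit conversion, atomicInc can be written entirely in terms of core.atomic, with no casts at all. A sketch:

```d
import core.atomic : atomicOp;

void atomicInc(shared int* p) {
    atomicOp!"+="(*p, 1); // well-defined read-modify-write on shared data
}
```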


Re: Shared - Another Thread

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 16:31:33 UTC, Vijay Nayar wrote:

Imagine a simple algorithm that does logic on very long 
numbers, split into bytes.  One multi-threaded implementation 
may use 4 threads.  The first operating on bytes 0, 4, 8, etc.  
The second operating on bytes 1, 5, 9, etc.


In this case, a mutex or lock isn't actually needed, because 
the algorithm itself assures that threads don't collide.


Yes, they do collide. You just turned your cache into a giant 
clusterf**k. Keyword: MESIF.


Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 13:09:10 UTC, Simen Kjærås wrote:

Well, sorta. But that's not a problem, because you can't do 
anything that's not threadsafe to something that's shared.


Yes you can. You silently agree to another function's 
assumption that you pass shared data, while actually passing 
thread-local data and keeping treating it as thread-local. 
I.e. you silently agree to a race.


No, you don't. If I give you a locked box with no obvious way 
to open it, I can expect you not to open it.


You contradict yourself and don't even notice it. Per your rules, 
the way to open that locked box is to have shared methods that 
access data via casting. Also per your rules, there is absolutely 
no way for the programmer to control whether they're actually 
sharing the data. Therefore, some API can steal a shared 
reference without your approval and use it with your "safe" 
shared methods, while you continue to treat your data as not 
shared.


It's the same thing. If you have a shared(T), and it doesn't 
define a thread-safe interface, you can do nothing with it. If 
you are somehow able to cause a race with something with which 
you can do nothing, please tell me how, because I'm pretty sure 
that implies the very laws of logic are invalid.


You and Manu both seem to think that methods allow you to "define 
a thread-safe interface".


struct S {
void foo() shared;
}

Per your rules, S.foo is thread-safe. It is here that I remind 
you, *again*, what S.foo actually looks like, given made-up 
easy-to-read mangling:


void struct_S_foo(ref shared S);

And yet, for some reason, you think that these are not 
thread-safe:


void foo(shared int*);
void bar(ref shared int);

I mean, how does that logic even work with you?

Per your rules, there would be *nothing* in the language to 
prevent calling S.foo with an unshared Other.


That's true. And you can't do anything to it, so that's fine.


Yes you can do "anything" to it.


No, you can't. You can do thread-safe things to it. That's 
nothing, *unless* Other defines a shared (thread-safe) 
interface, in which case it's safe, and everything is fine.


Example:

struct Other {
private Data payload;

// shared function. Thread-safe, can be called from a
// shared object, or from an unshared object.
void twiddle() shared { payload.doSharedStuff(); }

// unshared function. Cannot be called from a shared object.
// Promises not to interfere with shared data, or to do so only
// in thread-safe ways (by calling thread-safe methods, or
// by taking a mutex or equivalent).
void twaddle() { payload.doSharedThings(); }

// Bad function. Promises not to interfere with shared data,
// but does so anyway.
// Give the programmer a stern talking-to.
void twank() {
payload.fuckWith();
}
}

struct S {
   void foo(shared Other* o) shared {
   // No can do - can't call non-shared functions on a shared object.

   // o.twaddle();

   // Can do - twiddle is always safe to call.
   o.twiddle();
   }
}


That's already wrong starting at line 2. It should be:

struct Other {
private shared Data payload; // shared, there's no question about it


// shared function. Thread-safe, can be called from a
// shared object, or from an unshared object.
void twiddle() shared { payload.doSharedStuff(); }

// unshared function. Cannot be called from a shared object.
// Promises not to interfere with shared data, or to do so only
// in thread-safe ways (by calling thread-safe methods, or
// by taking a mutex or equivalent).
void twaddle() {
// fine so long as there's a
// 'auto doSharedThings(ref shared Data)'
// or an equivalent method for Data.
// Otherwise it just wouldn't compile, as it should.
payload.doSharedThings();
}

// No longer a bad function, because it doesn't compile, and
// the programmer can do their own auto-spanking.
void twank() {
payload.fuckWith(); // Error: cannot fuckWith() 'shared Data'
}
}

struct S {
   void foo(shared Other* o) shared {
   // No can do - can't call non-shared functions on a shared object.

   // o.twaddle();
   // ^Yep, agreed

   // Can do - twiddle is always safe to call.
   o.twiddle();
   }
}

Well, that was easy, wasn't it?

Your implementation of 'twaddle' is *unsafe*, because the 
compiler doesn't know that 'payload' is shared. For example, when 
inlining, it may reorder the calls in it and cause races or other 
UB. At least one of the reasons behind `shared` *was* to serve 
as a compiler barrier.
What I don't see in your example is where it would be necessary 
to cast mutable to shared, let alone auto-cast it. And that's the 
heart of this discussion.


If you just do this:

auto other = new /*shared*/ Other;

...then at the moment, per current rules, you can either twiddle 
or twaddle (depending on whether you remove the comment or not), 

Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 18 October 2018 at 11:35:21 UTC, Simen Kjærås wrote:
On Thursday, 18 October 2018 at 10:08:48 UTC, Stanislav Blinov 
wrote:

Manu,

how is it that you can't see what *your own* proposal means??? 
Implicit casting from mutable to shared means that everything 
is shared by default! Precisely the opposite of what D 
proclaims.


Well, sorta. But that's not a problem, because you can't do 
anything that's not threadsafe to something that's shared.


Yes you can. You silently agree to another function's assumption 
that you pass shared data, while actually passing thread-local 
data and continuing to treat it as thread-local. I.e. you 
silently agree to a race.


You also essentially forbid defining *any* functions that take 
`shared T*` argument(s). You keep asking for concrete "holes". 
Don't you see what the previous "atomicInc" example implies???


I certainly don't. Please do elucidate.


What the hell? I do in the very next paragraph. Do people read 
sentence by sentence and assume context does not exist or what?


If *any* free function `foo(shared T* bar)`, per your 
definition, is not threadsafe, then no other function with 
shared argument(s) can be threadsafe at all. So how do you 
call functions on shared data then? You keep saying "methods, 
methods..."


struct Other { /* ... */ }

struct S {
void foo(shared Other*) shared;
}

Per your rules, there would be *nothing* in the language to 
prevent calling S.foo with an unshared Other.


That's true. And you can't do anything to it, so that's fine.


Yes you can do "anything" to it. If you couldn't, you wouldn't be 
able to implement `shared` at all. Forbidding reads and writes 
isn't enough to guarantee that you "can't do anything with it". 
*Unless* you forbid implicit conversion from mutable to shared. 
Then, and only then, your statement can hold.
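A sketch of the race being described, with an illustrative publish function (not a real API): under current rules the conversion must be written out, which is precisely the caller's assertion; with the proposed implicit conversion, the commented-out call would compile silently:

```d
import core.atomic;

// Illustrative API that hands the pointer to another thread.
void publish(shared(int)* p) { atomicStore(*p, 1); }

void main()
{
    int local;
    // publish(&local);  // would compile under implicit conversion:
    //                   // another thread now races with the plain
    //                   // write below, and nothing marks `local` shared
    publish(cast(shared(int)*) &local); // today: intent stated explicitly
    local = 2; // unsynchronized write to data we just leaked as shared
}
```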


So the only way to make your proposal work would be to forbid 
all functions from taking `shared T*` or `ref shared T` 
argument.


No. Please read this thread again. From the beginning, every 
word.


Are you kidding me? Maybe it's *you* who should do that?..

Actually, don't do that, because Manu's proposal is simple and 
elegant:


1. the rule must be applied that shared object can not be read 
or written


No objection there, I fully support that. I even stated multiple 
times how it can be extended and why.


2. attributing a method shared is a statement and a promise 
that the method is threadsafe


No objection here either.


The rest just follows naturally.


Nothing follows naturally. The proposal doesn't talk at all about 
the fact that you can't have "methods" on primitives, that you 
can't distinguish between shared and unshared data if that 
proposal is realized, that you absolutely destroy D's 
TLS-by-default treatment...


There's actually one more thing: The one and only thing you can 
do (without unsafe casting) with a shared object, is call 
shared methods and free functions on it.


Functions that you must not be allowed to write per this same 
proposal. How quaint.


To sum up, things you implied but never specified in your 
proposal:


1. Primitive types can't be explicitly `shared`.


Sure they can, they just can't present a thread-safe interface, 
so you can't do anything with a shared(int).


Ergo... you can't have functions taking pointers to shared 
primitives. Ergo, `shared` becomes a useless language construct.



2. Free functions taking `shared` arguments are not allowed.


Yes, they are. They would be using other shared methods or free 
functions on the shared argument, and would thus be 
thread-safe. If defined in the same module as the type on which 
they operate, they would have access to the internal state of 
the object, and would have to be written in such a way as to 
not violate the thread-safety of other methods and free 
functions that operate on it.


This contradicts (1). Either you can have functions taking shared 
T* arguments, thus
creating threadsafe interface for them, or you can't. If, per (1) 
as you say, you can't




3. Only `shared` methods can implement threadsafe operations 
on `shared` data (which contradicts (2) already) <- this one 
you did specify.


Non-shared methods are perfectly free to be thread-safe (and 
they should be, in the sense that they shouldn't interfere with 
shared methods). A better way to state this is that only shared 
methods may be called on a shared object. A shared object may 
also be passed to a function taking a shared parameter.



4. Every variable is implicitly shared, whether intended so or 
not.


Well, yes, in the same sense that every variable is also 
implicitly const, whether intended so or not.


I sort of expected that answer. No, nothing is implicitly const. 
When you pass a reference to a function taking const, *you keep 
mutable reference*, the function agrees to that, and its only 
"promise" is to not modify data through the reference you gave 
it. But *you still keep mutable reference*

Re: shared - i need it to be useful

2018-10-18 Thread Stanislav Blinov via Digitalmars-d

Manu,

how is it that you can't see what *your own* proposal means??? 
Implicit casting from mutable to shared means that everything is 
shared by default! Precisely the opposite of what D proclaims.


You also essentially forbid defining *any* functions that take 
`shared T*` argument(s). You keep asking for concrete "holes". 
Don't you see what the previous "atomicInc" example implies???


If *any* free function `foo(shared T* bar)`, per your definition, 
is not threadsafe, then no other function with shared argument(s) 
can be threadsafe at all. So how do you call functions on shared 
data then? You keep saying "methods, methods..."


struct Other { /* ... */ }

struct S {
void foo(shared Other*) shared;
}

Per your rules, there would be *nothing* in the language to 
prevent calling S.foo with an unshared Other.


So the only way to make your proposal work would be to forbid all 
functions from taking `shared T*` or `ref shared T` argument. 
Except we can't do that, because a method is just a function with 
an implicit first argument. The code above is the same as this:


void foo(ref shared S, shared Other*);

It's literally *the same signature*. So there's nothing in the 
language to prevent calling that on an unshared S either.


To sum up, things you implied but never specified in your 
proposal:


1. Primitive types can't be explicitly `shared`.
2. Free functions taking `shared` arguments are not allowed.
3. Only `shared` methods can implement threadsafe operations on 
`shared` data (which contradicts (2) already) <- this one you did 
specify.
4. Every variable is implicitly shared, whether intended so or 
not.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 17 October 2018 at 23:12:48 UTC, Manu wrote:
On Wed, Oct 17, 2018 at 2:15 PM Stanislav Blinov via 
Digitalmars-d  wrote:


On Wednesday, 17 October 2018 at 19:25:33 UTC, Manu wrote:
> On Wed, Oct 17, 2018 at 12:05 PM Stanislav Blinov via 
> Digitalmars-d  wrote:

>>
>> On Wednesday, 17 October 2018 at 18:46:18 UTC, Manu wrote:
>>
>> > I've said this a bunch of times, there are 2 rules:
>> > 1. shared inhibits read and write access to members
>> > 2. `shared` methods must be threadsafe
>> >
>> > From there, shared becomes interesting and useful.
>>
>> Oh God...
>>
>> void atomicInc(shared int* i) { /* ... */ }
>>
>> Now what? There are no "methods" for ints, only UFCS. Those 
>> functions can be as safe as you like, but if you allow 
>> implicit promotion of int* to shared int*, you *allow 
>> implicit races*.

>
> This function is effectively an intrinsic. It's unsafe by 
> definition.


Only if implicit conversion is allowed. If it isn't, that's 
likely @trusted, and this:


void atomicInc(ref shared int);

can even be @safe.


In this case, with respect to the context (a single int), 
atomicInc() is ALWAYS safe, even with implicit conversion. You 
can atomicInc() a thread-local int perfectly safely.


Yes, *you* can. *Another* function can't unless *you* allow for 
it to be safe. You can't do that if that function silently 
assumes you gave it shared data, when in fact you did not.


The signatures of those two functions are exactly the same. 
How is that different from a function taking a shared int 
pointer or reference?


It's not; atomicInc() of an int is always safe with respect to 
the int itself. You can call atomicInc() on an unshared int and 
it's perfectly fine, but now you need to consider context, and 
that's a problem for the design of the higher-level scope.

To maintain thread-safety, the int in question must be 
appropriately contained.


Exactly. And that means it can't convert to shared without my say 
so :)


The problem is that the same as the example I presented before, 
which I'll repeat:


struct InvalidProgram
{
  int x;
  void fun() { ++x; }
  void gun() shared { atomicInc(&x); }
}

The method gun() (and therefore the whole object) is NOT 
threadsafe by

my definition, because fun() violates the threadsafety of gun().
The situation applies equally here that:
int x;
atomicInc(&x);
++x; // <- by my definition, this 'API' (increment an int) 
violates the threadsafety of atomicInc(), and atomicInc() is 
therefore not threadsafe.


No. The 'API' is just the atomicInc function. You, the user of 
that API, own the int. If the API wants a shared int from you, 
you have to be in agreement. You can't have any agreement if the 
API is only making promises and assumptions.


`int` doesn't present a threadsafe API, so int is by 
definition, NOT threadsafe. atomicInc() should be @system, and 
not @trusted.


Exactly. `int` isn't threadsafe and therefore cannot 
automatically convert to `shared int`.


If you intend to share an int, use Atomic!int, because it has a 
threadsafe API.


No. With current implementation of `shared`, which disallows your 
automatic promotion,
your intent is enforced. You cannot share a local `int` unless 
*you know* it's safe to do so and therefore can cast that int to 
shared.


atomicInc(shared int*) is effectively just an unsafe intrinsic, 
and


It is only unsafe if you allow int* to silently convert to shared 
int*. If you can't do that, you can't call `atomicInc` on an int*.


One could argue that it should be void free(ref void* p) { /* 
... */ p = null; }



void *p2 = p;
free(p);
p2.crash();


That's exactly analogous to what you're proposing: leaking 
`shared` references while keeping unshared data.


As a matter of fact, in my own allocators memory blocks 
allocated by them are passed by value and are non-copyable, 
they're not just void[] as in std.experimental.allocator. One 
must 'move' them to pass ownership, and that includes 
deallocation. But that's another story altogether.


Right, now you're talking about move semantics to implement 
transfer of ownership... you might recall I was arguing this 
exact case to express transferring ownership of objects between 
threads earlier.


You went off on an irrelevant tangent there, and I feel like you 
didn't even see my reply. You don't pass any ownership when you 
share. You just share. As an owner, you get to access the 
un-`shared` interface freely. Others do not.


This talk of blunt casts and "making sure everything is good" 
is all just great, but it doesn't mean  anything interesting 
with respect to `shared`. It should be interesting even without 
unsafe casts.

Re: Shared - Another Thread

2018-10-17 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 17 October 2018 at 21:55:48 UTC, H. S. Teoh wrote:

But nobody will be building a fusion engine out of race 
conditions anytime in the foreseeable future. :-D


We should be so blessed...



Re: Shared - Another Thread

2018-10-17 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 17 October 2018 at 21:29:07 UTC, Stefan Koch wrote:


in any case it would certainly mess up
the state of everyone involved; which is exactly what happens 
in multi-threaded situations.


^ that is very true. And that is why:

- one must not keep shared and local data close together (e.g. 
within same cache line)

- one must not implicitly convert local data to shared data

Now, I perfectly understand what Manu wants: for `shared` to stop 
being a stupid keyword that nobody uses, and start bringing in 
value to the language. At the moment, the compiler happily allows 
you to write and read `shared` unhindered, which isn't useful at 
all. It also allows you to have weird things like shared 
destructors and postblits (which got extended to whole shared 
copy ctors in a DIP!). The latter is especially painful when 
attempting to define a whole type as `shared`.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 17 October 2018 at 19:25:33 UTC, Manu wrote:
On Wed, Oct 17, 2018 at 12:05 PM Stanislav Blinov via 
Digitalmars-d  wrote:


On Wednesday, 17 October 2018 at 18:46:18 UTC, Manu wrote:

> I've said this a bunch of times, there are 2 rules:
> 1. shared inhibits read and write access to members
> 2. `shared` methods must be threadsafe
>
> From there, shared becomes interesting and useful.

Oh God...

void atomicInc(shared int* i) { /* ... */ }

Now what? There are no "methods" for ints, only UFCS. Those 
functions can be as safe as you like, but if you allow 
implicit promotion of int* to shared int*, you *allow implicit 
races*.


This function is effectively an intrinsic. It's unsafe by 
definition.


Only if implicit conversion is allowed. If it isn't, that's 
likely @trusted, and this:


void atomicInc(ref shared int);

can even be @safe.
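For instance, a hedged sketch of that @safe wrapper over core.atomic, assuming implicit mutable -> shared conversion is disallowed, so the only arguments that can reach it really are shared:

```d
import core.atomic;

// With no implicit mutable -> shared conversion, any `ref shared int`
// that arrives here was either declared shared or explicitly cast by
// the caller, so exposing the atomic RMW as @trusted is defensible.
@trusted void atomicInc(ref shared int i)
{
    atomicOp!"+="(i, 1);
}

void main() @safe
{
    shared int counter;
    atomicInc(counter);
    assert(atomicLoad(counter) == 1);
}
```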


It's a tool for implementing threadsafe machinery.
No user can just start doing atomic operations on random ints 
and say "it's threadsafe"; you must encapsulate the threadsafe 
functionality into some sort of object that aggregates all 
concerns and presents an intellectually sound API.


Threadsafety starts and ends with the programmer. By your logic 
*all* functions operating on `shared` are unsafe then. As far as 
compiler is concerned, there would be no difference between these 
two:


struct S {}
void atomicInc(ref shared S);

and

struct S { void atomicInc() shared { /* ... */ } }

The signatures of those two functions are exactly the same. How 
is that different from a function taking a shared int pointer or 
reference?




Let me try one:

void free(void*) { ... }

Now what? I might have dangling pointers... it's a catastrophe!


One could argue that it should be void free(ref void* p) { /* ... 
*/ p = null; }
As a matter of fact, in my own allocators memory blocks allocated 
by them are passed by value and are non-copyable, they're not 
just void[] as in std.experimental.allocator. One must 'move' 
them to pass ownership, and that includes deallocation. But 
that's another story altogether.



It's essentially the same argument.
This isn't a function that professes to do something that people 
might misunderstand and try to use in an unsafe way; it's a 
low-level implementation device, which is used to build larger 
*useful* constructs.


You're missing the point, again. You have an int. You pass a 
pointer to it to some API that takes an int*. You continue to use 
your int as just an int. The API changes, and now the function 
you called previously takes a shared int*. Implicit conversion 
works, everything compiles, you have a race. Now, that's of 
course an extremely stupid scenario. The point is: the caller of 
some API *must* assert that they indeed pass shared data. It's 
insufficient for the API alone to "promise" taking shared data. 
That's the difference with promotion to `const`.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 17 October 2018 at 18:46:18 UTC, Manu wrote:


I've said this a bunch of times, there are 2 rules:
1. shared inhibits read and write access to members
2. `shared` methods must be threadsafe


From there, shared becomes interesting and useful.


Oh God...

void atomicInc(shared int* i) { /* ... */ }

Now what? There are no "methods" for ints, only UFCS. Those 
functions can be as safe as you like, but if you allow implicit 
promotion of int* to shared int*, you *allow implicit races*.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 17 October 2018 at 14:14:56 UTC, Nicholas Wilson 
wrote:
On Wednesday, 17 October 2018 at 07:24:13 UTC, Stanislav Blinov 
wrote:
On Wednesday, 17 October 2018 at 05:40:41 UTC, Walter Bright 
wrote:



When Andrei and I came up with the rules for:

   mutable
   const
   shared
   const shared
   immutable

and which can be implicitly converted to what, so far nobody 
has found a fault in those rules...


Here's one: shared -> const shared shouldn't be allowed. 
Mutable aliasing in single-threaded code is bad enough as it 
is.


Could you explain that a bit more? I don't understand it, 
mutable -> const is half the reason const exists.


Yes, but `shared` is not `const`. This 'might change' rule is 
poisonous for shared data: you essentially create extra work for 
the CPU for very little gain. There's absolutely no reason to 
share a read-only half-constant anyway. Either give immutable, or 
just flat out copy.


I was thinking that mutable -> shared const, as opposed to 
mutable -> shared, would get around the issues that Timon posted.


It wouldn't, as Timon already explained. Writes to not-`shared` 
mutables might just not be propagated beyond the core. Wouldn't 
happen on amd64, of course, but still. It's not about atomicity 
of reads, it just depends on architecture. On some systems you 
have to explicitly synchronize memory.
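For example, a sketch of what explicit synchronization looks like with core.atomic (producer/consumer are illustrative names; __gshared models deliberately unprotected global data):

```d
import core.atomic;

shared int flag;
__gshared int payload; // deliberately unprotected, for illustration

void producer()
{
    payload = 42;                           // plain write, no ordering on its own
    atomicStore!(MemoryOrder.rel)(flag, 1); // release: publishes the write above
}

void consumer()
{
    if (atomicLoad!(MemoryOrder.acq)(flag) == 1) // acquire: pairs with the release
        assert(payload == 42); // guaranteed only by the acq/rel pair
}

void main()
{
    producer();
    consumer(); // trivially ordered here; the acq/rel pair is what
                // makes this hold across threads on weakly-ordered CPUs
}
```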


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 17 October 2018 at 14:44:19 UTC, Guillaume Piolat 
wrote:


The fact that this _type constructor_ finds its way into 
_identifiers_ create some concern: 
https://github.com/dlang/phobos/blob/656798f2b385437c239246b59e0433148190938c/std/experimental/allocator/package.d#L642


Well, ISharedAllocator is indeed overboard, but `allocateShared` 
would've been a very useful identifier.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 17 October 2018 at 13:36:53 UTC, Stanislav Blinov 
wrote:



Explicit cast from mutable to unsafe, on the other hand:


Blargh, to shared of course.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d

Jesus Manu, it's soon 8 pages of dancing around a trivial issue.

Implicit casting from mutable to shared is unsafe, case closed.
Explicit cast from mutable to unsafe, on the other hand:

- is an assertion (on programmer's behalf) that this instance is 
indeed unique

- is self-documenting
- is greppable (especially if implemented as assumeShared or 
other descriptive name).
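A minimal sketch of such descriptive casts, using the assumeShared name suggested above (these are illustrative helpers, not library functions):

```d
// Greppable, self-documenting casts. `assumeShared` is the
// programmer's assertion that this instance is unique and may now
// be shared; `assumeUnshared` is the inverse, for use where
// exclusivity is guaranteed (e.g. under a lock).
shared(T)* assumeShared(T)(T* p) @system { return cast(shared(T)*) p; }
T* assumeUnshared(T)(shared(T)* p) @system { return cast(T*) p; }

void main()
{
    int x;
    shared(int)* sp = assumeShared(&x); // explicit, searchable intent
    int* p = assumeUnshared(sp);        // e.g. inside a locked region
    assert(p == &x);
}
```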


I don't understand why you are so fixated on this. The other 
parts of your proposal are much more important.


Re: shared - i need it to be useful

2018-10-17 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 17 October 2018 at 05:40:41 UTC, Walter Bright 
wrote:



When Andrei and I came up with the rules for:

   mutable
   const
   shared
   const shared
   immutable

and which can be implicitly converted to what, so far nobody 
has found a fault in those rules...


Here's one: shared -> const shared shouldn't be allowed. Mutable 
aliasing in single-threaded code is bad enough as it is.


Re: You don't like GC? Do you?

2018-10-16 Thread Stanislav Blinov via Digitalmars-d

On Tuesday, 16 October 2018 at 11:42:55 UTC, Tony wrote:

On Monday, 15 October 2018 at 08:21:11 UTC, Eugene Wissner wrote:



He doesn't argue against garbage collection.


Thanks, Eugene, I was starting to lose hope in humanity.


Well, can you state what he does argue against?


I did state what I was arguing against, if you actually read the 
thread and not only pick select statements I'm sure you'll find 
it.


Wouldn't C++ or Rust, with their smart pointers, be a better 
choice for someone who wants to use a compiles-to-object-code 
language, but can't suffer any garbage collector delays?


What is up with people and this thread? Who is talking about 
garbage collector delays? If you do use the GC, they're a given, 
and you work with them. *Just like you should with everything 
else*.


I'm talking about code that doesn't give a  about utilizing 
machine resources correctly. Crap out "new" everywhere, it lets 
you write code fast. Is it actually a good idea to collect here? 
Hell if you know, you don't care, carry on! Crap out classes 
everywhere, it lets you write code fast. Pull in a zillion of 
external dependencies 90% of which you have no idea what they're 
for, what they do and how much cruft they bring with them, they 
let you write code fast. Oh look, you have no threads! Maybe you 
should write a, a... a task system! Yes, full of classes and, and 
futures and... stuff. But no, nononono, writing is too long, 
let's take a ready one. Implemented by another awesome programmer 
just like you! And then spawn... h, a second thread! Yes! Two 
threads are better than one! What for? It doesn't matter what 
for, don't think about it. Better yet! Spawn four! Six! Twelve! 
And then serialize them all with one mutex, because to hell with 
learning that task system you downloaded, you have code to write. 
What did you say? Pointers? Nah, you have twelve threads and a 
mutex. Surely you need reference counted objects. Pointers are 
bad for you, they will have you think...
Then, after this jellyfish wobbly pile of crud is starting to 
swell and smell, then start "optimizing" it. Profile first 
though. Profile, measure! Only first write more cruft in order to 
measure what needs to be measured, otherwise you might 
accidentally measure all those libXXX you used and all those 
cache misses you wrote. And then fix it. By doing more of the 
above, as luck would have it.


"Don't be a computer..." What a joke.


Re: shared - i need it to be useful

2018-10-16 Thread Stanislav Blinov via Digitalmars-d

On Tuesday, 16 October 2018 at 03:00:21 UTC, Manu wrote:

I don't see how an *implicit* cast can be a restriction. At 
all.



Because a shared pointer can't access anything.
You can't do anything with a shared instance, so there can be no 
harm done.


That just doesn't compute. You obviously *can* do "anything" with 
a shared instance; per your own requirements, all you need is 
methods. But at this point, the caller has an unshared reference, 
and your API possibly stole away a shared one without the caller 
ever knowing it happened.


It's like we're talking about wholly different things here. 
Casting should be done by the caller, i.e. a programmer that 
uses some API. If that API expects shared arguments, the 
caller better make sure they pass shared values. Implicit 
conversion destroys any obligations between the caller and the 
API.


Why? What could a function do with shared arguments?


How, exactly, do you propose getting shared state from one thread 
to another? Exclusively through globals?


The problem that I'm suggesting is exactly that: an `int*` is 
not, and can not, be a `shared int*` at the same time. 
Substitute int for any type. But D is not Rust and it can't 
statically prevent that, except for disallowing trivial 
programming mistakes, which, with implicit conversion 
introduced, would also go away.


Why not? The guy who receives the argument receives an argument 
that
*may be shared*, and as such, he's restricted access to it 
appropriately.


But the guy that *provides* that argument may have no idea of 
this happening. This is unshared aliasing.


module yourapi;

// My code does not know about this function at all
void giveToThread(shared int* ptr) { /* ... */ }

void yourAPI(int* ptr) {
giveToThread(ptr);
}

module mycode;

int x;

void main() {
yourAPI(&x);
}

Just like if you receive a const thing, you can't write to it, 
even if the caller's thing isn't const.


You do know why those Rust guys disallowed mutable aliasing? That 
is, having both mutable and immutable references at the same 
time. That's a long known problem that in C++ and D is only 
"solved" by programmer discipline and not much else.



If you receive a shared thing, you can't read or write to it.


You can, per your model, through methods. All the while the 
original is being read and written freely.


> a good chance they don't need it though, they might not  
> interact with a thread-unsafe portion of the class.


Or they might.


Then you will implement synchronisation, or have violated your 
thread-safety promise.


Such a good promise it is when it's simply ignored by an implicit 
cast.


Because that is exactly the code that a good amount of 
"developers" will write. Especially those of the "don't think 
about it" variety. Don't be mistaken for a second: if the 
language allows it, they'll write it.


This is not even an argument.
Atomic!int must be used with care. Any threading of ANY KIND 
must be

handled with care.


Implicit casts are in another galaxy as far as handling with care 
is concerned.


Saying we shouldn't make shared useful because someone can do 
something wrong is like saying we shouldn't have atomic int's 
and we shouldn't have spawn(). They're simply too dangerous to 
give to users...


We should make `shared` useful. We shouldn't allow unshared 
aliasing.


Can you actually provide an example of a mixed shared/unshared 
class that even makes sense then? As I said, at this point I'd 
rather see such definitions prohibited entirely.


I think this is a typical sort of construction:

struct ThreadsafeQueue(T)
{
  void QueueItem(T*) shared;
  T* UnqueueItem() shared;
}


That's an interface for a multi-producer multi-consumer, yet 
you're using it as a multi-producer single-consumer. Those would 
have very different implementations.



struct SpecialWorkList
{
  struct Job { ... }

  void MakeJob(int x, float y, string z) shared  // <- any 
thread may produce a job

  {
Job* job = new Job; // <- this is thread-local
PopulateJob(job, x, y, z); // <- preparation of a job might 
be complex, and worthy of the SpecialWorkList implementation


...except you can't call PopulateJob from MakeJob.

The two methods after compiler rewrite are (pseudocode):

void struct_method_SpecialWorkList_MakeJob(shared 
SpecialWorkList* this, int x, float y, string z);
void struct_method_SpecialWorkList_PopulateJob(SpecialWorkList* 
this, Job* job, ...);


Or are you also suggesting to allow implicitly demoting 
shared(T)* to T* ?!? Then you can just throw away `shared`.


jobList.QueueItem(job);  // <- QueueItem encapsulates 
thread-safety, no need for blunt casts


None were needed here regardless.


  void Flush() // <- not shared, thread-local consumer
  {


I.e. this can only be called by the thread that owns this 
instance, because only that thread can have an unshared reference 
to it.


  void GetSpecialSystemState() // <- this has NOTHING to do 
with the threadsafe part of Sp

Re: shared - i need it to be useful

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Tuesday, 16 October 2018 at 00:15:54 UTC, Manu wrote:
On Mon, Oct 15, 2018 at 4:35 PM Stanislav Blinov via 
Digitalmars-d  wrote:


What?!? So... my unshared methods should also perform all 
that's necessary for `shared` methods?


Of course! You're writing a threadsafe object... how could you 
expect otherwise?


See below.

Just to be clear, what I'm suggesting is a significant 
*restriction*
to what shared already does... there will be a whole lot more 
safety under my proposal.


I don't see how an *implicit* cast can be a restriction. At all.

The cast gives exactly nothing that attributing a method as 
shared
doesn't give you, except that attributing a method shared is so 
much

more sanitary and clearly communicates intent at the API level.


It's like we're talking about wholly different things here. 
Casting should be done by the caller, i.e. a programmer that uses 
some API. If that API expects shared arguments, the caller better 
make sure they pass shared values. Implicit conversion destroys 
any obligations between the caller and the API.


> You can write bad code with any feature in any number of 
> ways.


Yup. For example, passing an int* to a function expecting 
shared int*.


I don't understand your example. What's the problem you're 
suggesting?


The problem that I'm suggesting is exactly that: an `int*` is 
not, and can not, be a `shared int*` at the same time. Substitute 
int for any type. But D is not Rust and it can't statically 
prevent that, except for disallowing trivial programming 
mistakes, which, with implicit conversion introduced, would also 
go away.


...And therefore they lack any synchronization. So I don't see 
how they *can* be "compatible" with `shared` methods.


I don't understand this statement either. Who said they lack 
synchronisation? If they need it, they will have it. There's a 
good chance they don't need it though, they might not  interact 
with a thread-unsafe portion of the class.


Or they might.

> If your shared method is incompatible with other methods, 
> your class is broken, and you violate your promise.


Nope.


So certain...


class BigCounter {
    this() { /* don't even need the mutex if I'm not sharing this */ }

    this(Mutex m = null) shared {
        this.m = m ? m : new Mutex;
    }

    void increment() { value += 1; }
    void increment() shared { synchronized(m) *value.assumeUnshared += 1; }

private:
    Mutex m;
    BigInt value;
}


You've just conflated 2 classes into one. One is a threadlocal
counter, the other is a threadsafe counter. Which is it?
Like I said before: "you can contrive a bad program with 
literally any language feature!"


Because that is exactly the code that a good amount of 
"developers" will write. Especially those of the "don't think 
about it" variety. Don't be mistaken for a second: if the 
language allows it, they'll write it.



They're not "compatible" in any shape or form.


Correct, you wrote 2 different things and mashed them together.


Can you actually provide an example of a mixed shared/unshared 
class that even makes sense then? As I said, at this point I'd 
rather see such definitions prohibited entirely.
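For what it's worth, here's a minimal sketch of how the two roles could be kept apart. It's an assumption-laden example: `SharedCounter` uses atomics instead of the hypothetical `assumeUnshared` helper, and neither type pretends to be the other:

```d
import core.atomic : atomicLoad, atomicOp;

// Thread-local counter: no mutex, no shared methods, no promises.
struct LocalCounter
{
    long value;
    void increment() { value += 1; }
}

// Thread-safe counter: every operation is shared and atomic, so there
// is no unshared API left to be "incompatible" with.
struct SharedCounter
{
    private shared long value;
    void increment() shared { value.atomicOp!"+="(1); }
    long get() shared { return value.atomicLoad; }
}
```

Whether such a split should be enforced by the language is exactly the question under debate.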



Or would you have
the unshared ctor also create the mutex and unshared increment 
also take the lock? What's the point of having them then? 
Better disallow mixed implementations altogether (which is 
actually not that bad of an idea).


Right. This is key to my whole suggestion. If you write a 
shared thing, you accept that it's shared! You don't just 
accept it, you jam the stake in the ground.


Then, once more, `shared` should then just be a type qualifier 
exclusively, and mixing shared/unshared methods should just not 
be allowed.



There's a relatively small number of things that need to be
threadsafe, you won't see `shared` methods appearing at random. 
If you use shared, you promise threadsafety OR the members of 
the thing are inaccessible without some sort of 
lock-&-cast-away treatment.


As above.


import std.concurrency;
import core.atomic;

void thread(shared int* x) {
    (*x).atomicOp!"+="(1);
}

shared int c;

void main() {
    int x;
    auto tid = spawn(&thread, &x); // "just" a typo
}

You're saying that's ok, it should "just" compile. It 
shouldn't. It should produce an error and a mild electric 
discharge into the developer's chair.


Yup. It's a typo. You passed a stack pointer to a scope that 
outlives the caller.
That class of issue is not on trial here. There's DIP1000, and 
all sorts of things to try and improve safety in terms of 
lifetimes.
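For reference, the corrected program is a one-character change (a sketch; it assumes the unused global `c` was the intended target, with a join added so the result is observable):

```d
import std.concurrency : spawn;
import core.atomic : atomicLoad, atomicOp;
import core.thread : thread_joinAll;

void thread(shared int* x) {
    (*x).atomicOp!"+="(1);
}

shared int c;

void main() {
    auto tid = spawn(&thread, &c); // &c, not &x: a shared global, not a stack local
    thread_joinAll();              // wait so the increment is visible
    assert(c.atomicLoad == 1);
}
```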


I'm sorry, I'm not very good at writing "real" examples for 
things that don't exist or don'

Re: shared - i need it to be useful

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 21:51:43 UTC, Manu wrote:

If a shared method is incompatible with an unshared method, 
your class is broken.


What?!? So... my unshared methods should also perform all that's 
necessary for `shared` methods?



Explicit casting doesn't magically implement thread-safety, it
basically just guarantees failure.


It doesn't indeed. It does, however, at least help prevent silent 
bugs. Via that same guaranteed failure. That failure is about all 
the help we can get from the compiler anyway.


What I suggest are rules that lead to proper behaviour with 
respect to writing a thread-safe API.

You can write bad code with any feature in any number of ways.


Yup. For example, passing an int* to a function expecting shared 
int*.



I see it this way:
If your object has shared methods, then it is distinctly and
*deliberately* involved in thread-safety. You have deliberately
opted-in to writing a thread-safe object, and you must deliver 
on your promise.


The un-shared API of an object that supports `shared` is not 
exempt from the thread-safety commitment; it is simply the 
subset of the API that may not be called from a shared context.


And therefore they lack any synchronization. So I don't see how 
they *can* be "compatible" with `shared` methods.


If your shared method is incompatible with other methods, your 
class is broken, and you violate your promise.


Nope.

class BigCounter {
    this() { /* don't even need the mutex if I'm not sharing this */ }

    this(Mutex m = null) shared {
        this.m = m ? m : new Mutex;
    }

    void increment() { value += 1; }
    void increment() shared { synchronized(m) *value.assumeUnshared += 1; }

private:
    Mutex m;
    BigInt value;
}

They're not "compatible" in any shape or form. Or would you have 
the unshared ctor also create the mutex and unshared increment 
also take the lock? What's the point of having them then? Better 
disallow mixed implementations altogether (which is actually not 
that bad of an idea).


Nobody writes methods of an object such that they don't work 
with each other... methods are part of a deliberately crafted 
and packaged entity. If you write a shared object, you do so 
deliberately, and you buy the responsibility of making sure your 
object's API is thread-safe.

If your object is not thread-safe, don't write shared methods.


Ahem... Okay...

import std.concurrency;
import core.atomic;

void thread(shared int* x) {
    (*x).atomicOp!"+="(1);
}

shared int c;

void main() {
    int x;
    auto tid = spawn(&thread, &x); // "just" a typo
}

You're saying that's ok, it should "just" compile. It shouldn't. 
It should produce an error and a mild electric discharge into the 
developer's chair.


Re: shared - i need it to be useful

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 20:57:46 UTC, Manu wrote:
On Mon, Oct 15, 2018 at 1:15 PM Stanislav Blinov via 
Digitalmars-d  wrote:


On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote:

> Assuming the rules above: "can't read or write to members", 
> and the understanding that `shared` methods are expected to 
> have threadsafe implementations (because that's the whole 
> point), what are the risks from allowing T* -> shared(T)* 
> conversion?

>
> All the risks that I think have been identified previously 
> assume that you can arbitrarily modify the data. That's 
> insanity... assume we fix that... I think the promotion 
> actually becomes safe now...?


You're still talking about implicit promotion?


Absolutely. This is critical to make shared useful, and I think 
there's a path to make it work.



No, it does not become safe no matter what restrictions you put 
on `shared` instances, because the caller of any function that 
takes `shared` arguments remains blissfully unaware of this 
promotion.


It doesn't matter... what danger is there in distributing a 
shared pointer? Are you concerned that someone with a shared 
reference might call a threadsafe method? If that's an invalid 
operation, then the method has broken its basic agreed 
contract.


No, on the contrary. Someone with an unshared pointer may call 
unshared method or read/write data while someone else accesses it 
via `shared` interface precisely because you allow T to escape to 
shared(T). You *need* an explicit cast for this.
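The escape can be shown without spawning anything; `takesShared` here is a hypothetical threadsafe API function. Today the last line needs the cast, which is the visible hand-off being argued for:

```d
void takesShared(shared(int)* p)
{
    // imagine a threadsafe implementation using atomics on *p
}

void main()
{
    int x; // thread-local: the compiler assumes no other thread sees it
    static assert(!__traits(compiles, takesShared(&x))); // rejected today
    takesShared(cast(shared(int)*)&x); // explicit: the caller owns the risk
}
```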


And I agree that it's conceivable that you could contrive a bad 
program, but you can contrive a bad program with literally any 
language feature!





Re: You don't like GC? Do you?

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 20:12:47 UTC, 12345swordy wrote:
On Monday, 15 October 2018 at 19:57:59 UTC, Stanislav Blinov 
wrote:
If you want to have an argument, I suggest you stop quote 
mining and start paying attention.


If you wanted an argument from me, then you need to stop with 
the "LOL YOU MAD BRO" rhetoric.


...and again he grabs a single quote.

Look, so far all you've contributed to this thread is one poor 
excuse and a ton of victim act. Neither are of any particular use.


Re: shared - i need it to be useful

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote:

Assuming the rules above: "can't read or write to members", and 
the understanding that `shared` methods are expected to have 
threadsafe implementations (because that's the whole point), 
what are the risks from allowing T* -> shared(T)* conversion?


All the risks that I think have been identified previously 
assume that you can arbitrarily modify the data. That's 
insanity... assume we fix that... I think the promotion 
actually becomes safe now...?


You're still talking about implicit promotion? No, it does not 
become safe no matter what restrictions you put on `shared` 
instances, because the caller of any function that takes `shared` 
arguments remains blissfully unaware of this promotion.


Re: You don't like GC? Do you?

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 18:00:24 UTC, 12345swordy wrote:
On Monday, 15 October 2018 at 17:30:28 UTC, Stanislav Blinov 
wrote:

On Monday, 15 October 2018 at 16:46:45 UTC, 12345swordy wrote:
On Monday, 15 October 2018 at 00:02:31 UTC, Stanislav Blinov 
wrote:

I'm arrogant, huh?

When you say statements like this.

you don't give a flying duck about your impact on the 
industry.

It comes across as condescending and arrogant.


Yep, and everything else that's inconvenient you'd just cut 
out.
You mean the part that you straw man me, and resort to personal 
attacks? No need for me to address it.


Pfff... *I* am "straw man"ing you? That's just hilarious.

"Not everything needs to be fast" - I never said everything needs 
to be fast. I'm saying everything *doesn't need to be slow* due 
to lazy people doing lazy things because they "don't want to 
think about it". So who's straw man-ing who, exactly? Do you even 
understand the difference?


By saying that you're more interested in saving your development 
time as opposed to processing time *for web apps, no less*, 
you're admitting that you don't care about the consequences of 
your actions. You finding it "personal" only supports that 
assessment, so be my guest, be offended. Or be smart, and stop 
and think about what you're doing.



Did I hit a nerve?..


Case in point.


If you want to have an argument, I suggest you stop quote mining 
and start paying attention.


Re: You don't like GC? Do you?

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 16:46:45 UTC, 12345swordy wrote:
On Monday, 15 October 2018 at 00:02:31 UTC, Stanislav Blinov 
wrote:

I'm arrogant, huh?

When you say statements like this.


you don't give a flying duck about your impact on the industry.

It comes across as condescending and arrogant.


Yep, and everything else that's inconvenient you'd just cut out. 
Did I hit a nerve?..


Re: You don't like GC? Do you?

2018-10-15 Thread Stanislav Blinov via Digitalmars-d

On Monday, 15 October 2018 at 10:11:15 UTC, rjframe wrote:

On Sat, 13 Oct 2018 12:22:29 +, Stanislav Blinov wrote:


And?.. Would you now go around preaching how awesome the GC is 
and that everyone should use it?


For something like I did, yes.

The article the OP links to may want GC for everything; the 
excerpt the OP actually quoted is talking about applications 
where memory management isn't the most important thing. I 
completely agree with that excerpt.


Yeah, well, what's the title of this thread, and what's the 
conclusion of that post?


Automation is what literally all of us do. But you should not 
automate something you don't understand.


Re: You don't like GC? Do you?

2018-10-14 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 14 October 2018 at 20:26:10 UTC, 12345swordy wrote:

It's not an excuse, it's reality. The D language has multiple 
issues; having built-in support for a GC is NOT one of 
them.


Read this thread again then, carefully. You *have to* understand 
D's GC in order to use it correctly, efficiently, and safely. And 
to do that, you *have to* understand your data and what you're 
doing with it. And to do that you *have to* work with the 
machine, not in spite of it. At which point you may well 
reconsider using the GC in the first place. Or you may not. But 
at least that will be an informed decision based on actual value, 
not this "save time" fallacy.


We develop our software using C# and the GC is a huge time 
saver for us as we are developing web apps.


So you're in this for a quick buck, and to hell with everything 
else. Got it. And C#, so likely also everything is an "object", 
and screw the heap wholesale, right?.. Save time writing code, 
waste time processing data. Cool choice.



I find your side remarks to be very arrogant and condescending.


I'm arrogant, huh? It's people like you who think that "the" way 
to program is produce crappy code fast.


It's so funny how all of you guys seem to think that I'm against 
the GC. I'm not. I'm against stupid "advice" like the one given 
in the OP. Almost all of you seem like you're in the same boat: 
you don't give a flying duck about your impact on the industry.


Re: You don't like GC? Do you?

2018-10-14 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 13 October 2018 at 21:44:45 UTC, 12345swordy wrote:

Not everyone have the time nor skills of doing manual memory 
management. Even more so when correctness is way more important 
than speed.


Not everything needs to be fast.


That's the lamest excuse I've ever seen. If you can't be 
bothered to acquire one of the most relevant skills for writing 
code for modern systems, then:


a) Ideally, you shouldn't be writing code
b) At the very least, you're not qualified to give any advice 
pertaining to writing code


PS. "Correctness" also includes correct use of the machine and 
its resources.


Re: You don't like GC? Do you?

2018-10-13 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 13 October 2018 at 13:17:41 UTC, Atila Neves wrote:

Then five years later, try and hunt down that mysterious heap 
corruption. Caused by some destructor calling into buggy 
third-party code. Didn't want to think about that one either?


That hasn't happened to me.


It rarely does indeed. Usually it's someone else that has to sift 
through your code and fix your bugs years later. Because by that 
time you're long gone on another job, happily writing more code 
without thinking about it.


There is no "sometimes" here. You're writing programs for 
specific machines. All. The. Time.


I am not. The last time I wrote code for a specific machine it 
was for my 386, probably around 1995.


Yes you are. Or what, you're running your executables on a 1990 
issue calculator? :P Somehow I doubt that.


Precisely where in memory your data is, how it got there and 
how it's laid out should be bread and butter of any D 
programmer.


Of any D programmer writing code that's performance sensitive.


All code is performance sensitive.


If that were true, nobody would write code in Python. And yet...


Nobody would write code in Python if Python didn't exist. That it 
exists means there's a demand. Because there are an awful lot of 
folks who just "don't want to think about it".
Remember 2000s? Everybody and their momma was a developer. Web 
developer, Python, Java, take your pick. Not that they knew what 
they were doing, but it was a good time to peddle crap.
Now, Python in and of itself is not a terrible language. But 
people write *system* tools and scripts with it. WTF? I mean, if 
you couldn't care less how the machine works, you have *no* 
business developing *anything* for an OS.



If it's not speed, it's power consumption. Or memory. Or I/O.


Not if it's good enough as it is. Which, in my experience, 
is frequently the case. YMMV.


That is not a reason to intentionally write *less* efficient code.

"Not thinking" about any of that means you're treating your 
power champion horse as if it was a one-legged pony.


Yes. I'd rather the computer spend its time than I mine. I 
value the latter far more than the former.


And what if your code wastes someone else's time at some later 
point? Hell with it, not your problem, right?


Advocating the "not thinking" approach makes you an outright 
evil person.


Is there meetup for evil people now that I qualify? :P


Any gathering of like-minded programmers will do.


Re: You don't like GC? Do you?

2018-10-13 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 13 October 2018 at 13:08:30 UTC, Atila Neves wrote:

On Friday, 12 October 2018 at 23:24:56 UTC, Stanislav Blinov


Funny. Now for real, in a throwaway script, what is there to 
gain from a GC? Allocate away and forget about it.


In case you run out of memory, the GC scans. That's the gain.


Correction, in case the GC runs out of memory, it scans.


In fact, the GC runtime will only detract from performance.

Demonstrably untrue. It puzzles me why this myth persists.

Myth, is it now?

Yes.


Please demonstrate.

Unless all you do is allocate memory, which isn't any kind of 
useful application, pretty much on each sweep run the GC's 
metadata is *cold*.


*If* the GC scans.


"If"? So... ahem... what, exactly, is the point of a GC that 
doesn't scan? What are you even arguing here? That you can 
allocate and never free? You can do that without GC just as well.


There are trade-offs, and one should pick whatever is best 
for the situation at hand.


Exactly. Which is *not at all* what the OP is encouraging to 
do.


I disagree. What I got from the OP was that for most code, the 
GC helps. I agree with that sentiment.


Helps write code faster? Yes, I'm sure it does. It also helps 
write slower unsafe code faster, unless you're paying attention, 
which, judging by your comments, you're not and aren't inclined 
to.


Alright, from one non-native English speaker to another, well 
done, I salute you.


The only way I'd qualify as a non-native English speaker would 
be to pedantically assert that I can't be due to not having 
learned it first. In any case, I'd never make fun of somebody's 
English if they're non-native, and that's most definitely not 
what I was trying to do here - I assume the words "simple" and 
"easy" exist in most languages. I was arguing about semantics.


Just FYI, they're the same word in my native language :P

To the point: *that* is a myth. The bugs you're referring to 
are not *solved* by the GC, they're swept under a rug.


Not in my experience. They've literally disappeared from the 
code I write.


Right. At the expense of introducing unpredictable behavior in 
your code. Unless you thought about that.


Because the bugs themselves are in the heads, stemming from 
that proverbial programmer laziness. It's like everyone is 
Scarlett O'Hara with a keyboard.



IMHO, lazy programmers are good programmers.


Yes, but not at the expense of users and other programmers who'd 
use their code.


For most applications, you *do* know how much memory you'll 
need, either exactly or an estimation.


I don't, maybe you do. I don't even care unless I have to. See 
my comment above about being lazy.


Too bad. You really, really should.

Well, I guess either of those do take more arguments than a 
"new", so yup, you do indeed write "less" code. Only that you 
have no clue how much more code is hiding behind that "new",


I have a clue. I could even look at the druntime code if I 
really cared. But I don't.


You should.

how many indirections, DLL calls, syscalls with libc's 
wonderful poison that is errno... You don't want to think 
about that.


That's right, I don't.


You should. Everybody should.

Then two people start using your script. Then ten, a hundred, 
a thousand. Then it becomes a part of an OS distribution. And 
no one wants to "think about that".


Meh. There are so many executables that are part of 
distributions that are written in Python, Ruby or JavaScript.


Exactly my point. That's why we *must not* pile more crap on top 
of that. That's why we *must* think about the code we write. Just 
because your neighbour sh*ts in a public square, doesn't mean 
that you must do that too.


For me, the power of tracing GC is that I don't need to think 
about ownership, lifetimes, or manual memory management.


Yes you do, don't delude yourself.


No, I don't. I used to in C++, and now I don't.


Yes you do, you say as much below.

Pretty much the only way you don't is if you're writing purely 
functional code.


I write pure functional code by default. I only use 
side-effects when I have to and I isolate the code that does.



But we're talking about D here.
Reassigned a reference? You thought about that. If you didn't, 
you just wrote a nasty bug. How much more hypocrisy can we 
reach here?


I probably didn't write a nasty bug if the pointer that was 
reassigned was to GC allocated memory. It lives as long as it 
has to, I don't think about it.


In other words, you knew what you were doing, at which point I'd 
ask, what's the problem with freeing the no-longer-used memory 
there and then? There's nothing to "think" about.
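Concretely, "freeing there and then" is one call into druntime (a sketch; note that `GC.free` is only safe when no other live reference aliases the block):

```d
import core.memory : GC;

void main()
{
    int[] data = new int[1024]; // GC-allocated buffer
    data[0] = 42;               // ... use it ...
    GC.free(data.ptr);          // the "one additional line": release it now
    data = null;                // drop the now-dangling slice
}
```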


"Fun" fact: it's not @safe to "new" anything in D if your 
program uses any classes. Thing is, it does unconditionally 
thanks to DRuntime.


I hardly ever use classes in D, but I'd like to know more about 
why it's not @safe.


rikki's example isn't exactly the one I was talking about, so 
here goes:


module mycode;

import std.stdio;

import third

Re: You don't like GC? Do you?

2018-10-13 Thread Stanislav Blinov via Digitalmars-d

On Saturday, 13 October 2018 at 12:15:07 UTC, rjframe wrote:

...I didn't even keep the script; I'll never need it again. 
There are times when the easy or simple solution really is the 
best one for the task at hand.


And?.. Would you now go around preaching how awesome the GC is 
and that everyone should use it?


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 23:32:34 UTC, Nicholas Wilson wrote:

On Friday, 12 October 2018 at 20:12:26 UTC, Stanislav Blinov


That's done first and foremost by stripping out unnecessary 
allocations, not by writing "new" every other line and closing 
your eyes.


If you need perf in your _scripts_, a) use LDC and b) pass -O3, 
which among many other improvements over baseline will promote 
unnecessary garbage collection to the stack.


If you *need* perf, you write performant code. If you don't need 
perf, the least you can do is *not write* lazy-ass pessimized 
crap.


I mean come on, it's 2018. We're writing code for multi-core 
and multi-processor systems with complex memory interaction.


We might be sometimes. I suspect that is less likely for a 
script to fall in that category.


Jesus guys. *All* code falls in that category. Because it is 
being executed by those machines. Yet we all oh so like to 
pretend that doesn't happen, for some bizarre reason.


Precisely where in memory your data is, how it got there and 
how it's laid out should be bread and butter of any D 
programmer. It's true that it isn't critical for one-off 
scripts, but so is deallocation.


Saying stuff like "do more with GC" is just outright harmful.


That is certainly not an unqualified truth. Yes one shouldn't 
`new` stuff just for fun, but speed of executable is often not 
what one is trying to optimise when writing code, e.g. when 
writing a script one is probably trying to minimise 
development/debugging time.


That's fine so long as it doesn't unnecessarily *pessimize* 
execution. Unfortunately, when you advertise GC for it's 
awesomeness in your experience with "throwaway" scripts, you're 
sending a very, *very* wrong message.



Kids are reading, for crying out loud.
Oi, you think thats bad? Try reading what some of the other 
Aussies post, *cough* e.g. a frustrated Manu *cough*


:)


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 21:39:13 UTC, Atila Neves wrote:

D isn't Java. If you can, put your data on the stack. If you 
can't, `new` away and don't think about it.


Then five years later, try and hunt down that mysterious heap 
corruption. Caused by some destructor calling into buggy 
third-party code. Didn't want to think about that one either?


The chances you'll have to optimise the code are not high. If 
you do, the chances that the GC allocations are the problem are 
also not high. If the profiler shows they are... then remove 
those allocations.


I mean come on, it's 2018. We're writing code for multi-core 
and multi-processor systems with complex memory interaction.


Sometimes we are. Other times it's a 50 line script.


There is no "sometimes" here. You're writing programs for 
specific machines. All. The. Time.


Precisely where in memory your data is, how it got there and 
how it's laid out should be bread and butter of any D 
programmer.


Of any D programmer writing code that's performance sensitive.


All code is performance sensitive. Whoever invented that 
distinction should be publicly humiliated. If it's not speed, 
it's power consumption. Or memory. Or I/O. "Not thinking" about 
any of that means you're treating your power champion horse as if 
it was a one-legged pony.
Advocating the "not thinking" approach makes you an outright evil 
person.


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 21:34:35 UTC, Atila Neves wrote:


---
When writing a throwaway script...


...there's absolutely no need for a GC.


True. There's also absolutely no need for computer languages 
either, machine code is sufficient.


Funny. Now for real, in a throwaway script, what is there to gain 
from a GC? Allocate away and forget about it.



In fact, the GC runtime will only detract from performance.



Demonstrably untrue. It puzzles me why this myth persists.


Myth, is it now? Unless all you do is allocate memory, which 
isn't any kind of useful application, pretty much on each sweep 
run the GC's metadata is *cold*. What's worse, you don't control 
how much data there is and where it is. Need I say more? If you 
disagree, please do the demonstration then.


There are trade-offs, and one should pick whatever is best for 
the situation at hand.


Exactly. Which is *not at all* what the OP is encouraging to do.

What this means is that whenever I have disregarded a block 
of information, say removed an index from an array, then that 
memory is automatically cleared and freed back up on the next 
sweep. While the process of collection and actually checking


Which is just as easily achieved with just one additional line 
of code: free the memory.


*Simply* achieved, not *easily*. Decades of bugs have shown 
emphatically that it's not easy.


Alright, from one non-native English speaker to another, well 
done, I salute you. I also used the term "dangling pointer" 
previously, where I should've used "non-null". Strange you didn't 
catch that.
To the point: *that* is a myth. The bugs you're referring to are 
not *solved* by the GC, they're swept under a rug. Because the 
bugs themselves are in the heads, stemming from that proverbial 
programmer laziness. It's like everyone is Scarlett O'Hara with a 
keyboard.


For most applications, you *do* know how much memory you'll need, 
either exactly or an estimation. Garbage collection is useful for 
cases when you don't, or can't estimate, and even then a limited 
subset of that.



Don't be a computer. Do more with GC.


Writing a throwaway script there's nothing stopping you from 
using mmap or VirtualAlloc.


There is: writing less code to achieve the same result.


Well, I guess either of those do take more arguments than a 
"new", so yup, you do indeed write "less" code. Only that you 
have no clue how much more code is hiding behind that "new", how 
many indirections, DLL calls, syscalls with libc's wonderful 
poison that is errno... You don't want to think about that. Then 
two people start using your script. Then ten, a hundred, a 
thousand. Then it becomes a part of an OS distribution. And no 
one wants to "think about that".


The "power" of GC is in the language support for non-trivial 
types, such as strings and associative arrays. Plain old 
arrays don't benefit from it in the slightest.


For me, the power of tracing GC is that I don't need to think 
about ownership, lifetimes, or manual memory management.


Yes you do, don't delude yourself. Pretty much the only way you 
don't is if you're writing purely functional code. But we're 
talking about D here.
Reassigned a reference? You thought about that. If you didn't, 
you just wrote a nasty bug. How much more hypocrisy can we reach 
here?


"Fun" fact: it's not @safe to "new" anything in D if your program 
uses any classes. Thing is, it does unconditionally thanks to 
DRuntime.



I also don't have to please the borrow checker gods.


Yeah, that's another extremum. I guess "rustacians" or whatever 
the hell they call themselves are pushing that one, don't they? 
"Let's not go for a GC, let's straight up cut out whole paradigms 
for safety's sake..."


Yes, there are other resources to manage. RAII nearly always 
manages that, I don't need to think about that either.


Yes you do. You do need to write those destructors or scoped 
finalizers, don't you? Or so help me use a third-party library 
that implements those? There's fundamentally *no* difference from 
memory management here. None, zero, zip.
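D makes that symmetry concrete: the destructor that closes a socket is the same mechanism that frees a buffer. A minimal sketch, using a malloc'd block as the stand-in resource:

```d
import core.stdc.stdlib : malloc, free;

struct Buffer
{
    void* p;
    this(size_t n) { p = malloc(n); }
    ~this() { free(p); }   // deterministic release at scope exit (free(null) is a no-op)
    @disable this(this);   // single owner: no accidental copies
}

void main()
{
    auto b = Buffer(64);
    assert(b.p !is null);
}   // b's destructor runs here, exactly as it would for any other resource
```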


Sad thing is, you're not alone. Look at all the major OSs today. 
How long does it take to, I don't know, open a project in the 
Visual Studio on Windows? Or do a search in a huge file opened in 
'less' on Unix? On an octacore 4GHz machine with 32Gb 3GHz 
memory? Should just instantly pop up on the screen, shouldn't it? 
Why doesn't it then? Because most programmers think the way you 
do: "oh it doesn't matter here, I don't need to think about 
that". And then proceed to advocate those "awesome" laid-back 
solutions that oh so help them save so much time coding. Of 
course they do, at everyone else's expense. Decades later, we're 
now trying to solve problems that shouldn't have existed in the 
first place. You'd think that joke was just that, a joke...


But let's get back to D. Look at Phobos. Why does stdout.writefln 
need to allocate? How many times does 

Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 21:15:04 UTC, welkam wrote:

People in this thread mostly said that for some things GC is 
just awesome. When you need to get shit done fast and dirty GC 
saves time and mental capacity. Not all code deals with 
sockets, DB, bank transactions, multithreading, etc.


Read the OP again then. What message does it send? What broad 
conclusion does it draw from a niche use case?


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 19:55:02 UTC, Nicholas Wilson wrote:

Freeing your mind and the codebase of having to deal with 
memory leaves it in an easier place to deal with the less 
common higher impact leaks: file descriptors, sockets, database 
handles ect. (this is like chopping down the forest so you can 
see the trees you care about ;) ).


That's done first and foremost by stripping out unnecessary 
allocations, not by writing "new" every other line and closing 
your eyes.


I mean come on, it's 2018. We're writing code for multi-core and 
multi-processor systems with complex memory interaction. 
Precisely where in memory your data is, how it got there and how 
it's laid out should be bread and butter of any D programmer. 
It's true that it isn't critical for one-off scripts, but so is 
deallocation.


Saying stuff like "do more with GC" is just outright harmful. 
Kids are reading, for crying out loud.


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 19:06:36 UTC, Dejan Lekic wrote:

What a bunch of nonsense! I used to talk like this some 20 
years ago when all I saw in the computing world was C and C++...


Sure garbage collection is not for every project, depends what 
industry you are in I guess... In my case (business 
applications/services) I have never had the need to turn off 
garbage collection!


However, someone in the gaming industry, embedded or realtime 
systems would indeed need to turn off the GC...


Who said anything about turning it off? I'm pointing out that 
using the GC for the sake of simplicity is precisely the wrong 
reason to do so, that's it. Bunch of nonsense, right. Have fun 
writing sloppy code then.


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 18:50:26 UTC, Neia Neutuladh wrote:

Over the lifetime of the script, it processed more memory than 
my computer had. That means I needed a memory management 
strategy other than "allocate everything". The GC made that 
quite easy.


Now *that* is a good point. Then again, until you run out of 
address space you're still fine with just plain old 
allocate-and-forget. Not that it's a good thing for production 
code, but for one-off scripts? Sure.


People demonstrably have trouble doing that. We can do it 
most of the time, but everyone occasionally forgets.


The GC isn't a cure for forgetfulness. One can also forget to 
close a file or a socket, or I dunno, cancel a financial 
transaction.


By lines of code, programs allocate memory much more often than 
they deal with files or sockets or financial transactions. So 
anything that requires less discipline when dealing with memory 
will reduce bugs a lot, compared with a similar system dealing 
with sockets or files.


My point is it's irrelevant whether it's memory allocation or 
something else. If you allow yourself to slack on important 
problems, that habit *will* bite you in the butt in the future.
But the other end of the spectrum is also harmful. That's how we 
get those "good" APIs such as XCB that fragment the hell out of 
your heap, force libc on you and make you collect their garbage.


It's good enough for a lot of people most of the time without 
thinking about things much.


That's precisely the line of thinking that gave us Java, C#, 
Python and other bastard languages that didn't want to concern 
themselves with the hardware all that much. 30 years of 
"progress" down the drain.


It reduces the frequency of problems and it eliminates 
use-after-free


Not in D it doesn't. Unless you only ever write @safe code, in 
which case you're not in the "without thinking about things much" 
camp.


and double-free, which are sources of data corruption, which is 
hard to track down.


Agreed.

And in the context of a one-off script, I'm probably not going 
to worry about using the GC efficiently as long as I'm not 
running out of memory.


Sure, *that's* the appropriate message. Not the "use the GC, it's 
not as bad as you think".


If you "forget" who owns the data, you may as well "forget" 
who writes it and when. Would GC help then as well? You need 
to expend pretty much the same effort to track that.


That's why we have the const system.


Oh please, really? Const in D? And you're still talking about 
people that don't like to think about things much?


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Friday, 12 October 2018 at 17:31:30 UTC, Neia Neutuladh wrote:

Throwaway scripts can allocate a lot of memory and have 
nontrivial running times. It's less common for scripts than for 
long-running processes, granted, but I've written scripts to go 
through gigabytes of data.


Your point being?.. It's not like you need a GC to allocate 
gigabytes of storage. With D it's super easy to just allocate a 
huge hunk and simply (literally) slice through it.
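To make that concrete, here's a minimal sketch of the "allocate one hunk, slice through it" approach — a trivial bump allocator. The `grab` helper and the sizes are mine, not from the thread:

```d
import core.stdc.stdlib : malloc, free;

void main()
{
    // One big upfront allocation, no GC involved...
    enum size = 64 * 1024 * 1024; // 64 MiB hunk
    auto hunk = (cast(ubyte*) malloc(size))[0 .. size];
    scope(exit) free(hunk.ptr);

    // ...then literally slice through it.
    size_t used = 0;
    ubyte[] grab(size_t n)
    {
        auto chunk = hunk[used .. used + n];
        used += n;
        return chunk;
    }

    auto a = grab(4096);
    auto b = grab(65536);
    assert(b.ptr == a.ptr + 4096); // contiguous, predictable layout
}
```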


People demonstrably have trouble doing that. We can do it most 
of the time, but everyone occasionally forgets.


The GC isn't a cure for forgetfulness. One can also forget to 
close a file or a socket, or I dunno, cancel a financial 
transaction.
GC isn't magic. In fact, to use it correctly you need to pay 
*more* attention than when managing memory manually. Don't leave 
dangling pointers. Nurse uninitialized data. Massage it to not 
sweep in hot paths... People seem to forget that and advertise it 
as some sort of magic wand that does all you want without you 
having to think.


Beyond that, the concept you're failing to mention here is 
ownership. You need to use your own mental effort to figure out 
what memory is owned by what part of the code. The GC lets you 
ignore that.


Nope, it doesn't. If you "forget" who owns the data, you may as 
well "forget" who writes it and when. Would GC help then as well? 
You need to expend pretty much the same effort to track that.


Writing a throwaway script there's nothing stopping you from 
using mmap or VirtualAlloc. The "power" of GC is in the 
language support for non-trivial types, such as strings and 
associative arrays. Plain old arrays don't benefit from it in 
the slightest.


A string is a plain old array.


An ASCII string, perhaps. Not a Unicode one. Count 
statically-typed compiled languages with native strings, please.


 and languages with manual memory management also support 
associative arrays.


Of course they do. But again, are those built-in types?
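For context, this is what "built-in" means here: D's associative arrays are a language-level type whose storage the GC runtime manages, with no import or library type required. A rough illustration, not a claim about any other language:

```d
void main()
{
    // Built into the language: no import, no library hash-table type.
    int[string] counts;
    foreach (word; ["the", "quick", "the"])
        counts[word]++; // each new key allocates from the GC heap

    assert(counts["the"] == 2);
    assert(counts.length == 2);
}
```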


Re: You don't like GC? Do you?

2018-10-12 Thread Stanislav Blinov via Digitalmars-d

On Thursday, 11 October 2018 at 21:22:19 UTC, aberba wrote:

"It takes care of itself
---
When writing a throwaway script...


...there's absolutely no need for a GC. In fact, the GC runtime 
will only detract from performance.


What this means is that whenever I have disregarded a block of 
information, say removed an index from an array, then that 
memory is automatically cleared and freed back up on the next 
sweep. While the process of collection and actually checking


Which is just as easily achieved with just one additional line of 
code: free the memory.
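In D that "one additional line" can even be made automatic with a scope guard. A sketch assuming manual allocation; not from the original post:

```d
import core.stdc.stdlib : malloc, free;

void main()
{
    auto buf = (cast(ubyte*) malloc(1024))[0 .. 1024];
    scope(exit) free(buf.ptr); // the "one additional line": freed on scope exit

    buf[0] = 42; // use the buffer like any other slice
}
```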



Don't be a computer. Do more with GC.


Writing a throwaway script there's nothing stopping you from 
using mmap or VirtualAlloc. The "power" of GC is in the language 
support for non-trivial types, such as strings and associative 
arrays. Plain old arrays don't benefit from it in the slightest.




Re: Deep nesting vs early returns

2018-10-08 Thread Stanislav Blinov via Digitalmars-d

On Friday, 5 October 2018 at 21:34:38 UTC, Jonathan M Davis wrote:

It's one of those things that I would have thought would just 
be obvious with experience, but if nothing else, some folks 
still try to stick to the whole "single return" idea even 
though I think that most folks agree at this point that it 
causes more problems than it solves.


Sadly, one of these "folks" is DMD's inliner. Then again, returns 
aren't its only bane.


Re: Thread-safe attribution

2018-10-07 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 7 October 2018 at 04:16:43 UTC, Manu wrote:

We're not trying to 'stuff the nuances' into a keyword... what 
I'm trying to achieve is a mechanism for attributing that a 
function has

implemented thread-safety *in some way*, and how that works is a
detail for the function.
What the attribute needs to do, is control the access rights to 
the
object appropriately, so that if you're in custody of a shared 
object,
you should exclusively be limited to performing guaranteed 
thread-safe
operations on the object, OR, you must perform synchronisation 
and

cast shared away (as is current use of shared).


Then I maintain that `T*` should *not* implicitly cast to 
`threadsafe T*`. It should at least be an explicit cast that you 
could grep for. Consider:


struct Bob {
    int x;
    int y;

    void readAndMutate() /* thread-local */ {
        if (x) y += 1;
    }

    void readAndMutate() threadsafe {
        auto lock = getSomeLockThatPresumablyLocksThisInstance();
        auto unshared = cast(Bob*) &this;
        unshared.readAndMutate();
    }
}

void sendToAnotherThread(threadsafe Bob* bob) {
    // pass the pointer to some thread...
}

Bob bob; // not marked `threadsafe`

void main() {
    bob.x = 1;

    sendToAnotherThread(&bob);

    bob.x = 0; // <-- that's a bug
    auto copyOfBob = bob; // <-- that's another bug

    // do the rest of main...
}

Basically, *any* non-`threadsafe` access to `bob` should ideally 
be a compiler error after the cast, but I don't think the 
language could enforce that.


These contrived examples aren't that convincing, I'm sure, but 
imagine this implicit cast hiding somewhere in a 10Kloc module.
In your own words: "I don't think that you should...": then we 
would need at least some way to succinctly find such problems.


There's actually an even more subtle bug in `main` above. Since 
the `bob` instance is not marked in any way, the compiler 
(optimizer) would have no idea that e.g. moving reads and writes 
to bob's fields across that implicit cast must be illegal. I 
guess a cast should then act as a compiler barrier.


You need to read the rest of the thread... I've moved on from 
this initial post.


Yeah, sorry about that. The thread also got split :\



Re: Thread-safe attribution

2018-10-06 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 7 October 2018 at 02:01:17 UTC, Manu wrote:

The thing I'm trying to model is an attribute along the lines 
of

`shared`, but actually useful ;)
I'll use the attribute `threadsafe` in place of `shared`, and 
see

where that goes.

Consider:
struct Bob
{
  int x;
  threadsafe Atomic!int y;


Storing shared and local data together? Ew, cache poison. Be that 
as it may, given that definition:


1. Can you copy Bob or assign to it? If so, how?
2. Can Bob have a destructor? Who calls it if it can?

[snip]


  x.m1(); // ERROR, method not threadsafe
  x.m2(); // fine
  x.overloaded(); // fine, use the threadsafe overload


I guess these three should be y., not x.?


  threadsafe Bob* p = &x; // can take threadsafe reference to
thread-local object


I'm not sure what's that supposed to accomplish. It looks like a 
silent cast, which is already a red flag. What is the purpose of 
this? Give that pointer to another thread while the original 
continues to treat 'x' as thread-local? I.e. if the arguments 
were indeed `ref`, the caller would be blissfully unaware of such 
a "transaction" taking place.


This is loosely what `shared` models, but there's a few 
differences:

1. thread-local can NOT promote to shared
2. shared `this` applies to members

For `shared` to be useful, it should be that a shared 
reference to something inhibits access to it's thread-local 
stuff. And in that world, then I believe that thread-local 
promotion to shared would work like const does.


I guess I'm wondering; should `shared` be transitive? Perhaps 
that's what's wrong with it...?


IMHO, `shared` should be transitive, but... most uses of `shared` 
should just boil down to primitive types (i.e. atomics) and 
pointers to shared primitives and structs, with dereferencing the 
latter requiring synchronization, casting away `shared` in the 
process. I don't see how the language as it is can help with 
that, as accessing different kinds of data requires different 
code. We can't just stuff all the nuances of thread-safety and 
multi-core communication into one keyword, whatever that keyword 
may be.
What could be the static (compile-time) guarantees of 
`threadsafe` methods?


Re: DIP 1014

2018-10-04 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 4 October 2018 at 12:08:38 UTC, Shachar Shemesh 
wrote:


Two distinct things. Kinke was talking about how to pass a 
struct through the ABI. You are talking about special-casing a 
specific name.


Not just name, but argument passing as well.

Not to mention, your special case is to transform it to 
something you can *already* specify in the language. Why?


Because that syntax pertains specifically to construction, which 
is what a compiler move is; is not currently used by the language 
(the fact that the compiler doesn't error on it is an oversight); 
enforces calling convention.


Which is, however, not a reason to formalize it and make it a 
requirement for an isolated specific case, such as this one, 
utilizing a syntax that is currently not used by the language.


There is positively nothing in DIP 1014 that is "syntax not 
used by the language". Quite the contrary.


Which is what I said in the very next sentence, so I'm not sure 
what your point is here. It's like we're having a discussion but 
we aren't at the same time.


As opposed to trying to fit existing language semantics to 
something that the language didn't seem to want to allow in 
the first place.


Formalize it as a suggestion, and we can discuss the "as 
opposed to".


Alright, let's get back to it after the weekend then.

Like I said, I think there's a lot you're glossing over here 
(such as backwards compatibility).


Backwards compatibility? With what, exactly? Non-existing 
explicit moves?


Re: DIP 1014

2018-10-04 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 4 October 2018 at 08:32:44 UTC, Shachar Shemesh 
wrote:

On 04/10/18 11:16, Paolo Invernizzi wrote:
While I want to thank you both, about the quality of this 
thread, what kind of "consequences that go beyond what I think 
you understand" are you thinking of? Can you give an example?


Assuming I understand Stanislav's proposal correctly (an


Yes it seems you do.

assumption I'm reluctant to make, hence my request for 
something more formal), it boils down to two points:


* move the data as part of the call hook rather than before
* Use a different name and signature on the hook function


Yes, exactly.

The first one we can argue for or against. My original proposal 
was phrased the way it was precisely because that's the way 
copying works in D (copy first, patch the data later). About a 
week after I submitted it, Andrei came forward with requesting 
to move to copy constructors.


The second, to me, is a non-starter. The only way you'd get a 
function whose signature is:

void someName(Type rhs);
But which actually maintains rhs's address from before the call 
is if the compiler treats "someName" as a special case, and 
emits code which would normally be emitted for the function:

void someName(ref Type rhs);


It would have to be special if you don't want to leave room for 
the compiler implementors. The calling convention for particular 
types (i.e. those that do have a move hook defined) would have to 
be enforced in some way. See the neighbor thread wrt move 
semantics by kinke.


That's why it was important for me to clear up whether there is 
*ever* a case in the current language where that happens 
(answer: only by accident, which is the same as saying "no").


Which is, however, not a reason to formalize it and make it a 
requirement for an isolated specific case, such as this one, 
utilizing a syntax that is currently not used by the language. As 
opposed to trying to fit existing language semantics to something 
that the language didn't seem to want to allow in the first place.


So to get that to work, you'd need to insert a special case 
into the ABI of the language: if the function's name is 
someName, treat it differently.


Yes, that's what I'm talking about.

At this point, you might as well call a spade a spade, and just 
give the function that signature explicitly. 
s/someName/opPostMove/, and you get DIP 1014 (as far as point 
#2 is concerned).


Re: DIP 1014

2018-10-04 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 4 October 2018 at 03:06:35 UTC, Shachar Shemesh 
wrote:


If you do *anything* to that program, and that includes even 
changing its compilation flags (try enabling inlining), it will 
stop working.


You should have known that when you found out it doesn't work 
on ldc: ldc and dmd use the same front-end. If you think 
something works fundamentally different between the two, you 
are probably wrong.


For the love of Pete, that program was an example of how a move 
hook should work, *not* a demonstration of achieving the DIP 
behavior without changing the language. I know the example is 
brittle and have said as much.


Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 3 October 2018 at 18:58:37 UTC, Shachar Shemesh 
wrote:



On 03/10/18 20:43, Stanislav Blinov wrote:
On Wednesday, 3 October 2018 at 15:33:00 UTC, Shachar Shemesh 
wrote:
I.e. - I am asserting if a move was not caught. The program 
fails to run on either ldc or dmd. To me, this makes perfect 
sense as for the way D is built. In essence, opAssign isn't 
guaranteed to run. Feel free to build a struct where that 
assert passes to convince me.


That's a slightly different issue here.


Well, I view this issue as a deal breaker. If you need to move 
the object *in order* to pass it to your move hook, then 
anything that requires knowing the address of the old instance 
will, by definition, not work.


I feel like we're still not on the same page here. 
this(typeof(this)) doesn't work like a move hook in D right now. 
I'm *suggesting* making that be a move ctor, instead of 
opPostMove from your DIP (see below).


Look at the output. The operator is being run, it can't *not* 
run,


Sure it can. Just look at the example I posted on the other 
thread 
(https://forum.dlang.org/post/pp2v16$1014$1...@digitalmars.com). 
The hook you mention is downright @disabled there.


In fact, had that not been the case, this DIP would never have 
happened.


Yup, we're definitely not on the same page :) That's not what I'm 
talking about at all.



Here is the flaw in your logic:

    void opAssign(Tracker rhs)

rhs is passed by value. This means that already at the point 
opAssign is called, rhs *already* has a different address 
than the one it was passed in with.


Currently that is only true if you define a destructor.


No, that's not true. Try printing the instance's address in the 
constructor and again in your operator.


It *is* true when the type doesn't have a destructor. Extending 
that to a move hook, it will also be true because destruction 
will be elided.
I know what you're talking about, that happens for types that 
have destructors.


In the presence of a move hook, 'rhs' would first have to pass 
through that hook, which will not take destructors into 
account at all.


I'm sorry, I'm not following. What is the difference between 
what you're proposing and opPostMove as defined?


1. with this(typeof(this)) the type of argument would never 
change. With opPostMove, it may. Remember that 'is(Tracker == 
const(Tracker))' is false.

2. you won't have to always move all the bits unconditionally.
3. symmetry with this(this)


This is the run I got:
$ ./movetest2
Address of temporary is 'b382e390', counter points to 'b382e390'
... which is '0' bytes from the address of temporary.


...I'm not sure what I should have seen, or what I should have 
concluded from it. This is your original program, unmodified.


This illustrates the intended behavior of a move hook if it 
existed in the language. The 'rhs' that was passed to the call 
was constructed at that same address (&rhs.counter == 
&rhs.localCounter). I.e. this is how I'm suggesting a 
this(typeof(this)) *could* work.


this(typeof(this)), of course, would need to be special in the 
ABI, but again, that's one special function instead of two.


No. My proposal requires one amendment to argument passing in 
the ABI, but no special cases at all. Changes to the ABI are 
not the same as changes to the run time library.


How so? Or, more to the point, what do argument passing OR the 
runtime have to do with this?


Let's take a step back for a moment and look at what should 
actually be happening for this hook to work (which you briefly 
mention in the DIP):


1. The compiler constructs the value. In your case, it 
constructs two:


It does not. It copies the bits from one to the other.


Poor choice of words on my part. It *creates* two. Whereas with 
this(typeof(this)) no implicit copying of bits is required, a-la 
a C++ move constructor. The programmer is free to choose the bits 
they need, or do a blit-and-patch if so desired. Except that 
unlike a C++ move constructor, no state bookkeeping would be 
necessary (same is true with your DIP as is).


the original and the new one. In my case, it constructs the 
original and then passes it over to the move ctor (one blit 
potentially avoided).

2. It calls the hook (move ctor).


I'm not sure I follow you on that one. What did you mean?


In your case, that would be when it calls __move_post_blt.


3. In your case, it calls the opPostMove.


In all cases, you need to call the hook, whatever it is, for 
those structs that have it, and do some default handling for 
those that don't.


Correct, which would just be whatever the compilers already do.

4. In any case, it *doesn't* destruct the original. Ever. The 
alternative would be to force the programmer to put the 
original back into valid state, and suddenly we're back to C++ 
with all it's pleasantries.


That last part is quite different from the current model, in 
which the compiler always destructs function arguments. That's 
why my example fails when a destructor is present.

Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 3 October 2018 at 18:38:50 UTC, Manu wrote:
On Wed, Oct 3, 2018 at 7:00 AM Stanislav Blinov via 
Digitalmars-d  wrote:



Any function in D that has a signature of the form

ReturnType foo(Type x);

in C++ would have an equivalent signature of

ReturnType foo(Type&& x); // NOT ReturnType foo(Type x);


What are you talking about? Equivalent C++ is:

ReturnType foo(Type x);


C++ has rvalue references; move semantics there are explicit. D 
doesn't have any of that. Perhaps I wasn't quite clear in that 
statement above, though.


Given some type Bar, compare these two calls in C++ in D, and 
tell me, which signature in C++ should correspond to D? I'm not 
talking about ABI, I'm talking about semantics.


foo(std::move(bar)); // C++
foo(move(bar)); // D

remembering that D's move() doesn't call postblit.

void foo(Bar bar) wouldn't satisfy that last bit, would it?

Yes, the semantics are different, as in D the move occurs before 
the call. But in D right now you *must* assume that the argument 
may have been moved, with all the consequences the language 
currently entails (and some of which the DIP attempts to 
resolve), whereas in C++ you can be explicit about it via 
overloading for rvalue references.
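The point about `move` skipping the postblit can be checked directly. A sketch using `std.algorithm.mutation.move`; the `Bar`/`foo` names follow the discussion above:

```d
import std.algorithm.mutation : move;

struct Bar
{
    static int blits;
    this(this) { ++blits; } // postblit: runs on copies, not on moves
}

void foo(Bar bar) {}

void main()
{
    Bar bar;
    foo(move(bar)); // rvalue: moved into the parameter, postblit not run
    assert(Bar.blits == 0);

    foo(bar);       // lvalue: copied into the parameter, postblit runs
    assert(Bar.blits == 1);
}
```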



It's impossible to perform copy elision when passing an lvalue
by-value *to* a function, but that's a case where you depend on 
move semantics.


Of course it's impossible. I'm not sure I understand your point 
here.


Also, within the function that receives an argument by value, 
you

depend on move to construct something with that argument.

Type&& passes a reference, which means you can perform the move 
direct from source to destination, without a stop at the middle 
man.


Yup, no argument there.


Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 3 October 2018 at 08:21:38 UTC, Manu wrote:

Okay, so copy elision is working... but moves otherwise are 
not? That's still not what we've been peddling all these years. 
A whole lot of design surface area is dedicated to implicit 
move semantics... and they don't work? What does it do? 
postblit unnecessarily?


No. The problem is that the language is under-specified. It is 
built on the *assumption* that no one ever should create 
self-referencing data. But it does not enforce that. Which 
eventually leads to someone trying to make such data and then run 
into a wall, or worse, a heisenbug.


Thing is, there isn't anything wrong with self-referencing data 
per se. It's that the language plumbing should either disallow it 
wholesale (i.e. Rust) or allow a graceful way of handling it. 
Neither is present in D. The latter could be added though, that's 
what the DIP is about.


Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 3 October 2018 at 17:43:08 UTC, Stanislav Blinov 
wrote:


But IMHO, it's something that should be fixed by not making 
these facilities built into the language.


s/not//



Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 3 October 2018 at 15:33:00 UTC, Shachar Shemesh 
wrote:

On 03/10/18 17:29, Stanislav Blinov wrote:

OMG, that's so simple!!! Why didn't I think of it?

Oh wait, I did.


Now I see why sometimes your posts are greeted with hostility.


Yes. I am actually sorry about that. I was responding to your 
assumption that I'm wrong. Had your post been phrased as "why 
didn't you", instead of "you're wrong wrong wrong" I wouldn't 
have responded that way.


Like I said, I am sorry.


I am sorry as well since I wasn't clear in my initial post.


> Allow me to further illustrate with something that can be
written in D > today:

I am not sure what you were trying to demonstrate, so instead I 
wanted to see if you succeeded. I added the following to your 
Tracker struct:


~this() {
    writefln("%s destructed", &this);
    assert(counter is null || counter is &localCounter);
}

I.e. - I am asserting if a move was not caught. The program 
fails to run on either ldc or dmd. To me, this makes perfect 
sense as for the way D is built. In essence, opAssign isn't 
guaranteed to run. Feel free to build a struct where that 
assert passes to convince me.


That's a slightly different issue here.
Look at the output. The operator is being run, it can't *not* 
run, unlike postblit (ironically, right now it doesn't run on 
fixed-size arrays though). In fact, as soon as you define a 
destructor, the compiler will generate a by-value opAssign if you 
haven't defined one.
That's a separate problem. Currently, presence of a destructor 
makes the compilers generate different code, because it cannot 
elide destruction of arguments, because explicit move semantics 
do not exist in the language. That's why I haven't included a 
destructor in the example to begin with.



Here is the flaw in your logic:

void opAssign(Tracker rhs)

rhs is passed by value. This means that already at the point 
opAssign is called, rhs *already* has a different address than 
the one it was passed in with.


Currently that is only true if you define a destructor. That 
would not be true, however, if a move hook in any form existed in 
the language. That was my point.
I only used opAssign as something resembling the supposed new 
behavior, not as a "look, it already works". In the presence of a 
move hook, 'rhs' would first have to pass through that hook, 
which will not take destructors into account at all.
Consider your own DIP: what you're suggesting is the ability to 
take the address of the original when a move is taking place. My 
example shows that in the simplest case even today, address of 
the original is already the address of the argument. Except it 
cannot be enforced in any way right now. A move hook will have to 
enforce that, as it will have to be called for every move.


I did not follow your logic on why this isn't so, but I don't 
see how you can make it not so without changing the ABI quite 
drastically.


The changes are literally the same as the ones you're proposing:

"When moving a struct's instance, the compiler MUST call 
__move_post_blt giving it both new and old instances' addresses."


That is the same that would have to happen with this(typeof(this) 
rhs), where &this is the address of new instance, and &rhs is the 
address of old instance, but there's no need for opPostMove then. 
I guess what I should've said from the start is that the 
semantics you're proposing fit nicely within one special 
function, instead of two.


this(typeof(this)), of course, would need to be special in the 
ABI, but again, that's one special function instead of two.


Let's take a step back for a moment and look at what should 
actually be happening for this hook to work (which you briefly 
mention in the DIP):


1. The compiler constructs the value. In your case, it constructs 
two: the original and the new one. In my case, it constructs the 
original and then passes it over to the move ctor (one blit 
potentially avoided).

2. It calls the hook (move ctor).
3. In your case, it calls the opPostMove.
4. In any case, it *doesn't* destruct the original. Ever. The 
alternative would be to force the programmer to put the original 
back into valid state, and suddenly we're back to C++ with all 
it's pleasantries.


That last part is quite different from the current model, in 
which the compiler always destructs function arguments. That's 
why my example fails when a destructor is present.


The other thing to note (again something that you mention but 
don't expand on), and that's nodding back to my comment about 
making move() and emplace() intrinsics, is that creating such a 
hook *will* invalidate current behavior of move(). Which is 
perhaps more easily fixed with your implementation, actually, 
*except* for the part about eliding destruction. Unions are 
unreliable for that unless we also change the spec that talks 
about them.
But IMHO, it's something that should be fixed by not making these 
facilities built into the language.

Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 3 October 2018 at 14:07:58 UTC, Shachar Shemesh 
wrote:


If you read the DIP, you will notice that the *address* in 
which the old instance resides is quite important...


Allow me to further illustrate with something that can be written 
in D today:


import std.stdio;

struct Tracker {
    static int globalCounter;
    int localCounter;
    int* counter;

    this(bool local) {
        if (local) counter = &localCounter;
        else counter = &globalCounter;
    }

    // this should be this(Tracker rhs)
    void opAssign(Tracker rhs) {
        // note: taking address of local parameter
        // note: LDC will already 'optimize' the move, which in the absence
        // of any move hooks will mess up the address; try with DMD
        printf("Address of temporary is '%x', counter points to '%x'\n",
               &rhs, rhs.counter);

        auto d = cast(void*) rhs.counter - cast(void*) &rhs;
        printf("... which is '%ld' bytes from the address of temporary.\n", d);

        localCounter = rhs.localCounter;
        counter = rhs.counter;
        if (counter is &rhs.localCounter)
            counter = &localCounter;
    }
}

auto createCounter(bool local = true) {
    Tracker result = Tracker(local);
    return result;
}

auto createCounterNoNRV(bool local = true) {
    return Tracker(local);
}

void main() {
    Tracker stale1, stale2;

    stale1 = createCounter();
    stale2 = createCounter(false);

    Tracker stale3, stale4;

    stale3 = createCounterNoNRV();
    stale4 = createCounterNoNRV(false);
}


If you run the above with DMD, you'll see what I mean about 
obviating the address. If we get this(typeof(this)) (that is 
*always* called on move) into the language, the behavior would be 
set in stone regardless of compiler.


Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 3 October 2018 at 14:07:58 UTC, Shachar Shemesh 
wrote:

On 03/10/18 16:56, Stanislav Blinov wrote:

struct S {
    this(S rhs);
}


OMG, that's so simple!!! Why didn't I think of it?

Oh wait, I did.


Now I see why sometimes your posts are greeted with hostility.


And this simply and utterly doesn't work.

If you read the DIP, you will notice that the *address* in 
which the old instance resides is quite important for 
performing the actual move. This is not available with the 
interface you're suggesting, mainly because by the time you 
have rhs, it has already moved.


In other words, for the interface above to work, the type must 
already be movable, which kinda contradict what we're trying to 
achieve here.


In the presence of such a constructor, the compiler will have to 
call it every time it moves the value, same as what you're 
proposing for __move_post_blt. This obviates the need of an 
address: address of the argument will always already be 
sufficient, even though it's not ref, as the chain of calls for 
this(S) will inevitably start with the address of something 
constructed in place.



in C++ would have an equivalent signature of

ReturnType foo(Type&& x); // NOT ReturnType foo(Type x);


No, it is not. You see, in C++, x is an "rvalue *reference*". x 
has not moved by this point in the run, it has simply had its 
address passed to foo.


You've misunderstood me. Yes, in C++ there's an obvious 
difference between pass-by-value and pass-by-rvalue-reference, 
and it is always user's responsibility to write a move ctor. Not 
so in D. In D, you can always assume that anything passed by 
value *is* an rvalue reference, precisely because of D's take on 
move semantics. I.e. any argument passed by value can assumed to 
be moved or constructed in place (that's the main difference from 
C++, where it must be explicitly specified).


Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
On Wednesday, 3 October 2018 at 13:56:29 UTC, Stanislav Blinov 
wrote:


Amendment, this should of course be:


this(Tracker oldLocation) {
    localCounter = oldLocation.localCounter;
    counter = oldLocation.counter;
    if (counter is &oldLocation.localCounter)
        counter = &localCounter;
}





Re: DIP 1014

2018-10-03 Thread Stanislav Blinov via Digitalmars-d
Shachar, as I don't see a better place of discussing that DIP at 
the moment, I'll pour some observations and thoughts in here if 
you don't mind, will add some comments on GitHub later.
As I see it right now, it's a case of over-engineering of a quite 
simple concept.


1. A new function, called __move_post_blt, will be added to 
DRuntime.


That's unnecessary, if not downright harmful for the language. We 
should strive to remove things from DRuntime, not add to it. The 
core language should deal with type memory, not a .so or dll. And 
it's extraneous, because...


2 and 3. onPostMove and __move_post_blt:

They're unnecessary as well. All that's required is to allow a 
by-value constructor, e.g.:


struct S {
this(S rhs);
}

Any function in D that has a signature of the form

ReturnType foo(Type x);

in C++ would have an equivalent signature of

ReturnType foo(Type&& x); // NOT ReturnType foo(Type x);

because passing by value in D always implies a possible move. The 
'x' in such functions can be safely cannibalized without any 
repercussions, as it is either a temporary at the call site, or, 
which is especially pertinent to the original bugzilla 
discussion, constructed in place via copy elision.


Thus in effect this(S) would be an equivalent of C++'s move 
constructor. We already have a de-facto move-assignment in the 
form of opAssign(S), this(S) would be a natural extension to that.


Note, per above, that it is NOT a copy constructor, although user 
code may want to create a copy *before* calling it, to create the 
temporary.


Such an approach reduces added complexity. The only potential 
problem with it would be a need to "special-case" initialization 
from .init, although at the moment I think even that may be 
unnecessary: this is a hook, after all.


Your example from the DIP would become:

struct Tracker {
    static uint globalCounter;
    uint localCounter;
    uint* counter;

    @disable this(this);

    this(bool local) {
        localCounter = 0;
        if (local)
            counter = &localCounter;
        else
            counter = &globalCounter;
    }

    this(Tracker oldLocation) {
        if (counter is &oldLocation.localCounter)
            counter = &localCounter;
    }

    void increment() {
        (*counter)++;
    }
}

Usage:

auto old = Tracker(true);
// ...
auto fresh = move(old); // calls Tracker.this(Tracker); `new` is a keyword, so pick another name

...this avoids any need to inject special postblits into user 
code.


As I see it, in addition to the above, what would be really 
desirable is for move() and emplace() family of calls to become 
compiler intrinsics instead of library constructs. Those at the 
moment are complete poison: being templates they infect user code 
with dependencies on libc (moveEmplace calls memset and memcpy of 
all things) and unnecessary calls to DRuntime (typeid), and they 
of course blow up the amount of generated code in the form of 
template instantiations. That's despite the fact that the 
compiler possesses ALL the necessary knowledge at the time of 
those calls.


Re: One for experts in std.parallelism

2017-06-12 Thread Stanislav Blinov via Digitalmars-d
On Monday, 12 June 2017 at 14:21:29 UTC, Andrei Alexandrescu 
wrote:

On 06/12/2017 12:13 AM, Stanislav Blinov wrote:
On Sunday, 11 June 2017 at 19:06:48 UTC, Andrei Alexandrescu 
wrote:

I tried to eliminate the static shared ~this as follows:

https://github.com/dlang/phobos/pull/5470/commits/a4b2323f035b663727349d390719509d0e3247ba


However, unittesting fails at src/core/thread.d(2042). I 
suspect it's because the atexit call occurs too late, after 
the shared static this in src/core/thread.d has already been 
called.


Thoughts on how to make this work?


Thanks,

Andrei


Which atexit call? The shared static ~this is still present in 
that commit.


Eh, the link doesn't work anymore; I hoped it would be 
immutable. Anyhow, I just tried this again:


private final class ParallelismThread : Thread
{
    this(void delegate() dg)
    {
        super(dg);
        static shared bool once;
        import std.concurrency;
        initOnce!once({
            import core.stdc.stdlib;
            atexit(&doThisAtExit);
            return true;
        }());
    }

    TaskPool pool;
}

// Kill daemon threads.
//shared static ~this()
extern(C) void doThisAtExit()
{
    foreach (ref thread; Thread)
    {
        auto pthread = cast(ParallelismThread) thread;
        if (pthread is null) continue;
        auto pool = pthread.pool;
        if (!pool.isDaemon) continue;
        pool.stop();
        pthread.join();
    }
}

There's no more shared static ~this(). Instead, we have an 
atexit registration. It fails in src/core/thread.d(2042).



Andrei


To me, it feels like the options are:
1. Take a performance hit and unregister "parallel" threads from 
the runtime at creation, and keep a separate lock-protected list.

2. Modify runtime to allow creating unregistered Threads
3. Push thread_term() call back if at all possible (i.e. by also 
registering it with atexit).


I'll put the PR for (1) up for discussion a bit later. (2) and 
(3) I'll need to dig into runtime somewhat first.


Re: One for experts in std.parallelism

2017-06-11 Thread Stanislav Blinov via Digitalmars-d
On Sunday, 11 June 2017 at 19:06:48 UTC, Andrei Alexandrescu 
wrote:

I tried to eliminate the static shared ~this as follows:

https://github.com/dlang/phobos/pull/5470/commits/a4b2323f035b663727349d390719509d0e3247ba

However, unittesting fails at src/core/thread.d(2042). I 
suspect it's because the atexit call occurs too late, after the 
shared static this in src/core/thread.d has already been called.


Thoughts on how to make this work?


Thanks,

Andrei


Which atexit call? The shared static ~this is still present in 
that commit.


Re: Expressing range constraints in CNF form

2017-06-11 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 11 June 2017 at 16:28:23 UTC, Timon Gehr wrote:


I'd prefer

bool msg(bool constraint, string message){ return constraint; }

This does not require the compiler to dive into a branch it 
wouldn't consider otherwise, and the pairing of constraint to 
message is less ad-hoc.


Where were you while Steven was destroying me? :)

http://forum.dlang.org/thread/mcxeymbslqtvfijxi...@forum.dlang.org


Re: Concept proposal: Safely catching error

2017-06-08 Thread Stanislav Blinov via Digitalmars-d
On Thursday, 8 June 2017 at 14:13:53 UTC, Steven Schveighoffer 
wrote:



void foo(Mutex m, Data d) pure
{
    synchronized(m)
    {
        // ... manipulate d
    } // no guarantee m gets unlocked
}

-Steve


Isn't synchronized(m) not nothrow?


Re: better string

2017-06-07 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 7 June 2017 at 23:57:44 UTC, Mike B Johnson wrote:

Or will simply setting "alias string = wstring;" at the top of 
my program end up having the entire program, regardless of what 
it is, use wstring's instead of strings?


It doesn't work that way and it can't work that way: you'd never 
be able to link against anything if it did.


The reason I say this is because I converted my program to use 
wstrings...


Why? Why trade one variable-width encoding for another, 
especially a nasty one like UTF-16?


Re: C++17 cannot beat D surely

2017-06-06 Thread Stanislav Blinov via Digitalmars-d

On Wednesday, 7 June 2017 at 03:06:34 UTC, Ali Çehreli wrote:

On 06/06/2017 06:09 PM, Steven Schveighoffer wrote:

> But it is pretty well known that enum'ing an array can have it
> allocate wherever it is used.

One of the most effective examples is comparing .ptr with 
(seemingly) itself:


void main() {
    enum e = [ 1 ];
    static assert(e.ptr != e.ptr);
    assert(e.ptr != e.ptr);
}

Both asserts pass. I'm surprised that e.ptr is usable at 
compile time. Fine, I guess... :)


Ali


A bit OT, but how about this one?

struct Tree {
    struct Node {
        Node* left, right;
    }

    Node* root = new Node;
}

void main() {
    Tree tree1, tree2;
    assert(tree1.root);
    assert(tree2.root);
    assert(tree1.root != tree2.root); // fails
}

Not `static`, not `enum`, but `new` at compile time and `root` is 
of course statically initialized :)


Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-06 Thread Stanislav Blinov via Digitalmars-d

On Tuesday, 6 June 2017 at 13:29:32 UTC, Stanislav Blinov wrote:

Structs, on the other hand, are another matter. Although, even 
creating a block with BlkAttr.FINALIZE *and* passing the struct 
typeinfo to GC.malloc() doesn't seem to compel it to run the 
struct dtors on collect, which is weird, considering it does so 
for a new'ed array of structs :\


Ah, but of course I forgot to .init, never mind.


Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-06 Thread Stanislav Blinov via Digitalmars-d
On Tuesday, 6 June 2017 at 13:05:51 UTC, Steven Schveighoffer 
wrote:


Actually, separate blocks might be a decent choice: there's 
another problem with sharing blocks for different data in that 
finalizers can't be run. So such a function would either not 
support types with finalizers at all, or would have to allocate 
separate blocks for such types anyway. Don't know, need to 
actually implement and measure if it's even worth the trouble.


Looking at your code, you have allocated separate blocks for 
the objects and the bytes, but your objects still are just 
pointers (e.g. C.sizeof == (void *).sizeof). You are allocating 
a new block for each class instance anyway in the loop inside 
main.


Which is exactly why I used them. The code is a short example of 
single-block array allocation, not a generic demonstration of 
intent. Classes were just an easy way to stuff pointers into an 
array while being able to see if they're being looked at by the 
GC or not. Perhaps I should've used pointers to structs, but oh 
well.


I don't think there is any support in the GC to run the 
finalizers of an array of class instances (read: an array of 
class instance storage that you point at), so this whole 
exercise may be moot if you are using classes :)


Heh, well, there is no support in the GC to *create* an array of 
class instances in the first place ;) We can, of course, do it 
manually, but that's a different story.


Structs, on the other hand, are another matter. Although, even 
creating a block with BlkAttr.FINALIZE *and* passing the struct 
typeinfo to GC.malloc() doesn't seem to compel it to run the 
struct dtors on collect, which is weird, considering it does so 
for a new'ed array of structs :\


Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-06 Thread Stanislav Blinov via Digitalmars-d
On Tuesday, 6 June 2017 at 12:07:28 UTC, Steven Schveighoffer 
wrote:

On 6/6/17 6:36 AM, Stanislav Blinov wrote:
On Tuesday, 6 June 2017 at 09:57:31 UTC, Steven Schveighoffer 
wrote:



No, 2 allocations instead of N.


Oh, I misunderstood you. You mean two blocks total, for scanned 
and non-scanned data. This could indeed work for cases when more 
than two arrays are needed. Still, it's one extra allocation.



True. But O(2) is better than O(N) :)

-Steve


Heh, this isn't malloc. We could end up in a situation where even 
one allocation is slower than N, depending on when they happen :)


Actually, separate blocks might be a decent choice: there's 
another problem with sharing blocks for different data in that 
finalizers can't be run. So such a function would either not 
support types with finalizers at all, or would have to allocate 
separate blocks for such types anyway. Don't know, need to 
actually implement and measure if it's even worth the trouble.


Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-06 Thread Stanislav Blinov via Digitalmars-d
On Tuesday, 6 June 2017 at 09:57:31 UTC, Steven Schveighoffer 
wrote:



No, 2 allocations instead of N.


Oh, I misunderstood you. You mean two blocks total, for scanned 
and non-scanned data. This could indeed work for cases when more 
than two arrays are needed. Still, it's one extra allocation.




Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-06 Thread Stanislav Blinov via Digitalmars-d
On Tuesday, 6 June 2017 at 09:57:31 UTC, Steven Schveighoffer 
wrote:


Note that p[0 .. sz] is treated as an opaque range of memory 
assumed to be suitably managed by the caller. In particular, 
if p points into a GC-managed memory block, addRange does not 
mark this block as live.


Is that paragraph wrong?


No, I am wrong. I assumed that because the GC is part of the 
static space, its metadata would also be scanned. But the data 
is allocated using C malloc, so it's not scanned. This makes 
any partial insertion using addRange more dangerous, actually, 
because you have to take care to keep a pointer to that block 
somewhere.


You mean an explicit pointer to the block itself? It is being kept 
with the first array.
Or do you mean the pointer passed to addRange? That one would not 
necessarily belong to the first array. But I thought the GC does 
track pointers that point to block interiors; after all, that's 
how slices work.


Then it's just the obvious() function. The whole point of the 
exercise is to make one GC allocation instead of N :) But 
still GC, so as not to put additional responsibility on the 
caller.


No, 2 allocations instead of N.


In this example. But obviously I'd like to make this generic, 
i.e.:


struct S { /* ... */ }

auto arrays = allocateArrays!(int, "ints", numInts, S, "Ss", 
numSs, void*, "pointers", numPtrs);


So it won't necessarily be 2. And of course the function would 
calculate the alignment, initialize the contents (or not, e.g. 
via passing a dummy type Uninitialized!T), etc.


I do have a use case for allocating up to 4 arrays at once, for 
example.


Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-05 Thread Stanislav Blinov via Digitalmars-d
On Monday, 5 June 2017 at 13:11:25 UTC, Steven Schveighoffer 
wrote:


adding a range marks it as having pointers to scan, AND stores 
a reference to that array, so it won't be GC collected (nor 
will anything it points to). The intention is for it to be used 
on non-GC memory, like C malloc'd memory, where it doesn't 
matter that the GC is pointing at it.




Huh?

https://dlang.org/phobos/core_memory.html#.GC.addRange :

Note that p[0 .. sz] is treated as an opaque range of memory 
assumed to be suitably managed by the caller. In particular, if 
p points into a GC-managed memory block, addRange does not mark 
this block as live.


Is that paragraph wrong?

I would say that you are better off allocating 2 arrays -- one 
with NO_SCAN where you put your non-pointer-containing data, 
and one without the flag to put your other data. This is 
similar to your "selective" function, but instead of allocating 
1 array, with a tuple of slices into it, just allocate 2 arrays 
and return the tuple of those 2 arrays.


Then it's just the obvious() function. The whole point of the 
exercise is to make one GC allocation instead of N :) But still 
GC, so as not to put additional responsibility on the caller.




Re: Question to GC experts: NO_SCAN for a part of the block

2017-06-04 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 4 June 2017 at 09:38:45 UTC, ag0aep6g wrote:


Should this work, and if not, why?


As far as I can tell, the `addRange` call works correctly, but 
maybe too well in a sense. It keeps the `new`ed `C`s alive as 
long as `arrays3.Cs` has pointers to them. And `arrays3.Cs` has 
those pointers until the very end.


Yeah, after playing around with the code a bit, shuffling the 
calls, making new allocations, etc., I saw those dtors indeed 
being run. I was just expecting the behavior to be the same as 
for normal 'new'ed arrays, but I guess there are nuances.


If you add `GC.removeRange(arrays3.Cs.ptr);` at the end of 
`main`, the dtors show up.


Yeah, well, calling it manually sort of defeats the purpose :)

Overwriting `arrays3.Cs`'s elements with `null`s also works. My 
guess is that the `removeRange` call isn't done automatically 
before the final run of the GC. Maybe it should be?


If at all possible, I think that'd be good, if only for 
consistency.


But I have a vague memory that the GC isn't required to call 
destructors on everything in the final run. Or maybe it's not 
guaranteed that there is a final run when the program ends?


That's what puzzled me, seeing as dtors from the other two arrays 
ran at the end. I understand how particular dtors may not be run 
due to circular or self-references, but there are none in this 
case.


Anyway, that would mean everything's working as intended, and 
you just can't rely on destructors like that.


Seems it is, and that is great. Now all that's left is some 
benchmarking to see if saving on allocations actually yields any 
fruit.


Thanks!


Re: Sorting in D Blog Post Review Request

2017-06-04 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 4 June 2017 at 08:34:23 UTC, Mike Parker wrote:

I'd like to publish it tomorrow (Monday), so I've put the draft 
in a gist [2] and I'm looking to get anyone who's willing to 
read it and let me know here if I've made any errors.



https://gist.github.com/mdparker/c674888dea1e0ead0c6a8fd28b0333d3


Added a couple of comments.


Re: C++17 cannot beat D surely

2017-06-03 Thread Stanislav Blinov via Digitalmars-d

On Sunday, 4 June 2017 at 05:38:24 UTC, H. S. Teoh wrote:

And if you need a particular functionality both at compile-time 
and during runtime, there's no need to write it twice in two 
different sublanguages. You just write one function once, and 
call it from both CTFE and at runtime. It Just Works(tm).


...except when you have `if (__ctfe)` :P

