[Issue 24100] proposal to implement "partially pure" functions

2023-08-22 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=24100

--- Comment #1 from andr...@yopmail.com ---
"pure" will become an attribute of the deterministic function.
pure(static) - current pure.

--


[Issue 24100] New: proposal to implement "partially pure" functions

2023-08-22 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=24100

  Issue ID: 24100
   Summary: proposal to implement "partially pure" functions
   Product: D
   Version: D2
  Hardware: x86
OS: Windows
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: andr...@yopmail.com

proposal to implement "partially pure" functions
(in the footsteps of Issue 24096)

Rationale:
1. Expand the range of access restrictions to arbitrary program data.

2. Make a pure function deterministic.


  usage relationship diagram:

                 no pure
              /     |     \
      pure(in)  pure(out)  pure(ref)
              \     |     /    \
                  pure         I/O



A possible syntax for defining a "partially pure" function is:

pure(in) type fun(...);   // does not change external variables
pure(out) type fun(...);  // is not changed by external variables
pure(ref) type fun(...);  // can use I/O functions but cannot use
                          // pure(in)/pure(out) functions

example:

int a;

void f(){int x = a;} //ok
void f(){int x; a = x;} //ok
void f(){writeln("no ok");} //ok

pure(in) void f(){int x = a;} // ok
pure(in) void f(){int x; a = x;} //error
pure(in) void f(){writeln("no ok");} // error

pure(out) void f(){int x; a = x;} // ok
pure(out) void f(){int x = a;} //error
pure(out) void f(){writeln("no ok");} // error

pure(ref) void f(){ writeln("ok");} // ok
pure(ref) void f(){int x = a;} // error
pure(ref) void f(){int x; a = x;} //error

pure void f(){int x = a;} // error
pure void f(){int x; a = x;} //error
pure void f(){writeln("no ok");} // error


pure(in) -  can be thought of as a closure.
pure(out) - can be thought of as a method/subprogram.
pure(ref) - can be thought of as non-monadic I/O / a function for resource
management.

--


Re: SLF4D - A proposal for a common logging interface for Dub projects

2023-02-25 Thread Andrew via Digitalmars-d-announce

On Saturday, 25 February 2023 at 09:32:18 UTC, max haughton wrote:
On the contrary I would argue that it's much easier (not 
necessarily better) to provide extensibility using classes. 
Nobody ever got fired for writing a class, as the saying goes.


I'd actually sort of agree with you here, but my opinion (which 
isn't necessarily correct, mind you) is that it's better to 
provide a fixed interface (a struct with a set of pre-defined 
methods) for developers to use, than to let them use a class 
which could be extended from another. Essentially I just tried to 
make the developer-experience as simple as possible, while 
letting logging providers customize pretty much everything that 
happens after a log message is generated.
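
A minimal sketch of that idea (illustrative only, not SLF4D's actual 
types; the `LogHandler` alias and the field/method names here are 
assumptions):

```d
import std.datetime : Clock, SysTime;

// Hypothetical handler type that a logging provider would supply.
alias LogHandler = void delegate(string level, string message, SysTime time);

// Fixed struct interface: application code only ever calls these
// methods; everything that happens after the message is created is
// delegated to whatever handler the provider configured.
struct Logger
{
    private LogHandler handler;

    void info(string msg)  { handler("INFO", msg, Clock.currTime); }
    void error(string msg) { handler("ERROR", msg, Clock.currTime); }
}
```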


Re: SLF4D - A proposal for a common logging interface for Dub projects

2023-02-25 Thread max haughton via Digitalmars-d-announce

On Saturday, 25 February 2023 at 07:52:09 UTC, Andrew wrote:

[...]


On the contrary I would argue that it's much easier (not 
necessarily better) to provide extensibility using classes. 
Nobody ever got fired for writing a class, as the saying goes.


Re: SLF4D - A proposal for a common logging interface for Dub projects

2023-02-24 Thread Andrew via Digitalmars-d-announce
On Friday, 24 February 2023 at 22:01:18 UTC, Tobias Pankrath 
wrote:

Why not std.experimental.logger?


There are a few reasons that I had for this:
- There are already quite a few different logging implementations 
entrenched in the D ecosystem besides just std.logger (it's no 
longer experimental!), and I don't think everyone is going to 
switch to std.logger just by virtue of it being in phobos.
- std.logger uses a mutable public shared logger instance. I 
personally don't like this approach, and I think it's cleaner if 
the logging configuration for an entire application is set up in 
one place, on application startup. That's why with SLF4D you 
(optionally) configure a logging provider as soon as you enter 
`main()`.
- The base `Logger` from std.logger is a class when, in my 
opinion, it doesn't need to be, if you want to provide extensible 
ways for handling the log messages that an application produces. 
It's easier for developers to learn to use a simple struct and 
its fixed set of methods, than to understand a type hierarchy.
- std.logger takes some steps towards filtering, formatting, 
and distributing log messages, but it still seems very primitive 
to me, compared to mature logging frameworks in other languages 
(primarily python and java), and I would rather have the 
application logic completely separated from the logic controlling 
what happens to log messages after they're created.




Re: SLF4D - A proposal for a common logging interface for Dub projects

2023-02-24 Thread Tobias Pankrath via Digitalmars-d-announce

On Wednesday, 22 February 2023 at 21:46:32 UTC, Andrew wrote:
I've been spending some time in the last few weeks to prototype 
a logging framework that's inspired by 
[SLF4J](https://www.slf4j.org/). To that end, I've created 
[SLF4D](https://github.com/andrewlalis/slf4d), which provides a 
common logging interface, and a pluggable architecture to allow 
third-parties to handle log messages generated by any logger in 
an application.




Why not std.experimental.logger?


Re: SLF4D - A proposal for a common logging interface for Dub projects

2023-02-24 Thread psyscout via Digitalmars-d-announce

On Wednesday, 22 February 2023 at 21:46:32 UTC, Andrew wrote:
I've been spending some time in the last few weeks to prototype 
a logging framework that's inspired by 
[SLF4J](https://www.slf4j.org/). To that end, I've created 
[SLF4D](https://github.com/andrewlalis/slf4d), which provides a


Hi Andrew,

this looks great.
I have a Java background, so it resonates :)

Thank you for your effort!


SLF4D - A proposal for a common logging interface for Dub projects

2023-02-22 Thread Andrew via Digitalmars-d-announce
I've been spending some time in the last few weeks to prototype a 
logging framework that's inspired by 
[SLF4J](https://www.slf4j.org/). To that end, I've created 
[SLF4D](https://github.com/andrewlalis/slf4d), which provides a 
common logging interface, and a pluggable architecture to allow 
third-parties to handle log messages generated by any logger in 
an application.


Here's a short example of how it can be used in your code:

```d
import slf4d;

void main() {
  auto log = getLogger();
  log.info("This is an info message.");
  log.errorF!"This is an error message: %d"(42);
}
```

The library includes a default "logging provider" that just 
outputs formatted messages to stdout and stderr, but a 
third-party provider can be used by calling 
`configureLoggingProvider(provider)`.
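
For example, a hedged sketch of that startup flow (`getLogger` and 
`configureLoggingProvider` are taken from the examples in this thread; 
the commented-out provider is a placeholder, not a real SLF4D type):

```d
import slf4d;

void main() {
  // Hypothetical third-party provider; concrete provider types are
  // not shown in this thread, so the call is left commented out.
  // configureLoggingProvider(new MyThirdPartyProvider());

  // With no provider configured, the default stdout/stderr provider is used.
  auto log = getLogger();
  log.info("Uses whichever provider was configured above.");
}
```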


The idea is that I can create logging providers to wrap the 
various logging facilities available in the D ecosystem already 
(Phobos, Vibe-D, etc.), so SLF4D can serve as a common interface 
to any provider.


I'd appreciate any feedback on this so far! This first version 
should be mostly stable, but there may of course be bugs. Thanks!


Re: Proposal for porting D runtime to WebAssembly

2020-01-15 Thread Elronnd via Digitalmars-d-announce

(_start is the wasm's equivalent of _Dmain)
Not really; _start (in libc) is used on Linux too, which sets 
up the C runtime, then calls C main, which calls druntime's 
_d_run_main which in turn calls _Dmain.


Small correction: _start generally calls __libc_start_main() or 
similar, with the addresses of main, argc, argv, envp, module init 
and fini, and possibly some other stuff I forgot about.


Re: Proposal for porting D runtime to WebAssembly

2020-01-07 Thread Petar via Digitalmars-d-announce
On Tuesday, 7 January 2020 at 08:17:37 UTC, Sebastiaan Koppe 
wrote:
On Sunday, 5 January 2020 at 08:24:21 UTC, Denis Feklushkin 
wrote:
On Friday, 3 January 2020 at 10:34:40 UTC, Sebastiaan Koppe 
wrote:



- reals (probably are going to be unsupported)


It seems to me for now they can be treated as double without 
any problems


Yeah, that is what I have done so far.


I believe that's the best choice even long term. `real` is 
supposed to represent the largest natively supported FP type by 
the underlying ISA. In WebAssembly that's f64, so there's no need 
emulate anything. Of course, people who need wider 
integer/fixed/floating types can use third-party libraries for 
that.
There are other platforms where D's real type is the same as 
double, so I don't see a reason to worry.
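
A small illustrative check of what that means in practice (an 
assumption: a wasm32 target where `real` is lowered to f64; the 
`WebAssembly` version identifier is the one discussed later in this 
thread):

```d
// On such a target, real and double would both be IEEE binary64,
// so these hold there (they do not hold on x86).
version (WebAssembly)
{
    static assert(real.sizeof == double.sizeof);
    static assert(real.mant_dig == double.mant_dig);
}
```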


Re: Proposal for porting D runtime to WebAssembly

2020-01-07 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Saturday, 4 January 2020 at 16:28:24 UTC, kinke wrote:
On Friday, 3 January 2020 at 10:34:40 UTC, Sebastiaan Koppe 
wrote:
You can track the work here: 
https://github.com/skoppe/druntime/tree/wasm


I gave it a quick glance; looks pretty good, and like pretty 
much work. ;) - Thx.


Great. Thanks for looking.

The compiler should probably help a bit by firstly predefining 
a version `CRuntime_WASI` (either for all wasm targets, or for 
triples like wasm32-unknown-unknown-wasi) and secondly emitting 
TLS globals as regular globals for now, so that you don't have 
to add `__gshared` everywhere.


Yes. I will probably manage to do the first, but for the second 
one I definitely need some pointers.



- reals (probably are going to be unsupported)


It's probably just a matter of checking which type clang uses 
for C `long double` when targeting wasm, and making LDC use the 
same type.


Could be. I personally prefer to avoid them because wasm only 
supports f32/f64, which I guess means they will be emulated (I 
have no idea though, maybe some wasm hosts do the right thing). 
But some people might need them, so if fixing the ABI is not a 
big deal, we could include them.


- wasi libc needs to be distributed (either in source and 
compiled into wasm druntime) or statically linked


I'd prefer a static lib (and referencing that one via 
`-defaultlib=druntime-ldc,phobos2-ldc,wasi` in ldc2.conf's wasm 
section).


Good.

Building it via LDC CI for inclusion in (some?) prebuilt LDC 
packages is probably not that much of a hassle with a clang 
host compiler.


I don't think so either. I have already got it building, so I 
just need to go over my notes.



once ldc-build-druntime works


If you need some CMake help (excluding C files etc.), 
https://github.com/ldc-developers/ldc/pull/2787 might have 
something useful.


Thanks.


(_start is the wasm's equivalent of _Dmain)


Not really; _start (in libc) is used on Linux too, which sets 
up the C runtime, then calls C main, which calls druntime's 
_d_run_main which in turn calls _Dmain.


Ahh, fumbling as I go along. Thanks for the correction.


Re: Proposal for porting D runtime to WebAssembly

2020-01-07 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Sunday, 5 January 2020 at 08:24:21 UTC, Denis Feklushkin wrote:
On Friday, 3 January 2020 at 10:34:40 UTC, Sebastiaan Koppe 
wrote:



- reals (probably are going to be unsupported)


It seems to me for now they can be treated as double without 
any problems


Yeah, that is what I have done so far.


Re: Proposal for porting D runtime to WebAssembly

2020-01-05 Thread Denis Feklushkin via Digitalmars-d-announce

On Friday, 3 January 2020 at 10:34:40 UTC, Sebastiaan Koppe wrote:


- reals (probably are going to be unsupported)


It seems to me for now they can be treated as double without any 
problems


Re: Proposal for porting D runtime to WebAssembly

2020-01-04 Thread kinke via Digitalmars-d-announce

On Friday, 3 January 2020 at 10:34:40 UTC, Sebastiaan Koppe wrote:
You can track the work here: 
https://github.com/skoppe/druntime/tree/wasm


I gave it a quick glance; looks pretty good, and like pretty much 
work. ;) - Thx.


The compiler should probably help a bit by firstly predefining a 
version `CRuntime_WASI` (either for all wasm targets, or for 
triples like wasm32-unknown-unknown-wasi) and secondly emitting 
TLS globals as regular globals for now, so that you don't have to 
add `__gshared` everywhere.
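
A minimal sketch of how druntime code could branch on that version 
once it exists (illustrative only; `CRuntime_WASI` is the proposed 
identifier, `CRuntime_Glibc` is an existing one shown just for 
contrast):

```d
// Hypothetical version blocks, assuming the compiler predefines CRuntime_WASI.
version (CRuntime_WASI)
{
    // call into wasi-libc / the WASI API here
}
else version (CRuntime_Glibc)
{
    // existing glibc-based code path
}
```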



- reals (probably are going to be unsupported)


It's probably just a matter of checking which type clang uses for 
C `long double` when targeting wasm, and making LDC use the same 
type.


- wasi libc needs to be distributed (either in source and 
compiled into wasm druntime) or statically linked


I'd prefer a static lib (and referencing that one via 
`-defaultlib=druntime-ldc,phobos2-ldc,wasi` in ldc2.conf's wasm 
section).
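
A hedged sketch of what such a wasm section in ldc2.conf might look 
like (the triple-matching pattern is an assumption, not a tested 
configuration; only the `-defaultlib` value is quoted from the 
suggestion above):

```
// Hypothetical ldc2.conf fragment for wasm targets.
"^wasm(32|64)-":
{
    switches = [
        "-defaultlib=druntime-ldc,phobos2-ldc,wasi",
    ];
};
```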
Building it via LDC CI for inclusion in (some?) prebuilt LDC 
packages is probably not that much of a hassle with a clang host 
compiler.



once ldc-build-druntime works


If you need some CMake help (excluding C files etc.), 
https://github.com/ldc-developers/ldc/pull/2787 might have 
something useful.



(_start is the wasm's equivalent of _Dmain)


Not really; _start (in libc) is used on Linux too, which sets up 
the C runtime, then calls C main, which calls druntime's 
_d_run_main which in turn calls _Dmain.


Re: Proposal for porting D runtime to WebAssembly

2020-01-03 Thread Sebastiaan Koppe via Digitalmars-d-announce
On Saturday, 23 November 2019 at 10:29:24 UTC, Johan Engelen 
wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


I'm assuming you already started some work in this area? Where 
can we track it?


Great initiative!
  Johan


You can track the work here: 
https://github.com/skoppe/druntime/tree/wasm


Almost all unittests pass.

I am in the process of getting `ldc-build-druntime` to build it, 
as well as hooking into main().


I really wanted to make a PR, so that others can build it as 
well, but I am pressed for time due to a family weekend trip. It is 
on my list once I get back, as well as incorporating all the info 
from this thread back into the proposal.


Some things to tackle before going beta:

- AA unittests fail
- reals (probably are going to be unsupported)
- wasi libc needs to be distributed (either in source and 
compiled into wasm druntime) or statically linked

- CI (but should be doable once ldc-build-druntime works)
- hooking into main() (I thought about making a @weak _start() in 
druntime so that users can still override it when they want; see 
the sketch after this list) (_start is the wasm's equivalent of _Dmain)
- probably need help from LDC to spill i32 pointer on the shadow 
stack
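
A minimal sketch of the @weak _start idea mentioned in the list above 
(not actual druntime code; it assumes LDC's `@weak` attribute from 
`ldc.attributes` and druntime's `_d_run_main` entry point, and passes 
no command-line arguments since WASI's _start takes none):

```d
import ldc.attributes : weak;

// Declarations matching druntime's C entry points (signatures assumed).
extern(C) int _d_run_main(int argc, char** argv, void* mainFunc);
extern(C) int _Dmain(char[][] args);

// Weakly linked, so a user-provided _start overrides this one.
@weak extern(C) void _start()
{
    _d_run_main(0, null, cast(void*) &_Dmain);
}
```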




Re: Proposal for porting D runtime to WebAssembly

2019-11-27 Thread Robert M. Münch via Digitalmars-d-announce

On 2019-11-23 09:51:13 +, Sebastiaan Koppe said:

This is my proposal for porting D runtime to WebAssembly. I would like 
to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Not sure if you are aware of this:

https://wasmtime.dev/

Maybe it helps or gives some inspiration.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster



Re: Proposal for porting D runtime to WebAssembly

2019-11-26 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Tuesday, 26 November 2019 at 09:18:05 UTC, Thomas Brix wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


An alternative idea would be to use emscripten's fork of musl 
to have a full C library. AFAIK this includes threading.


LLVM is supposed to support TLS in wasm since version 9.


Yes, indeed. https://reviews.llvm.org/D64537 gives a good 
overview.


I believe it is best to first actually have a version of druntime 
on wasm, rather than eagerly pulling in all the latest features. 
I find the scope I set in the proposal to be quite reasonable.


Adding tls, threading and exception handling would be much easier 
after this work is done and merged. And it would also be 
something others might want to contribute to.


Re: Proposal for porting D runtime to WebAssembly

2019-11-26 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Monday, 25 November 2019 at 13:50:20 UTC, Georgi D wrote:

Hi Sebastiaan,

If you are looking at the C++ coroutines I would recommend 
looking into the  proposal for "First-class symmetric 
coroutines in C++".


The official paper can be found here: 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1430r1.pdf


There is also a presentation with some nice animations 
explaining the proposal here:

https://docs.google.com/presentation/d/1B5My9nh-P2HLGI8Dtfm6Q7ZfHD9hJ27kJUgnfA2syv4/edit?usp=sharing

The paper is still in early development; for example, the 
syntax has changed since then, as well as some other pieces.


If you are interested I can connect you with the author of the 
paper who can explain it with more details.


Georgi


Thanks for that. It would be great, but I don't have time for 
that at the moment.


Re: Proposal for porting D runtime to WebAssembly

2019-11-26 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Monday, 25 November 2019 at 18:44:01 UTC, thedeemon wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Please correct me where I'm wrong, but on the level of 
WebAssembly there are no registers, there is an operand stack 
outside the address space, there are local variables to the 
current function, again outside the accessible address space of 
program's linear memory, and there is the linear memory itself. 
So scanning the stack becomes a really hard (should I say 
impossible?) part. What some compilers do is they organize 
another stack manually in the linear memory and store the 
values that would otherwise be on the normal stack, there.


Yeah, that accurately describes the situation. I will update the 
wording in the document to use 'stack', 'shadow stack' (also 
sometimes called 'user stack') and 'local variables'. Thanks.


One solution that I employed in spasm's experimental gc is to 
only run it directly from javascript. This way there can't be 
anything hiding in the stack or in a local variable. Although 
that approach doesn't work for all use cases.


Which means in case of D you'll have to seriously change the 
codegen, to change how local variables are stored, and to use a 
kind of shadow stack for temporaries in expressions that may be 
pointers. Do you really have a plan about it?


Well, no, not fully. That is why I said 'unknown'. But there must 
be a solution somewhere.


LLVM already puts pointers to stack or local variables in the 
shadow stack. As well as for structs-by-val that don't fit the 
stack. We could adjust LDC to nudge LLVM to maintain live roots 
on the shadow stack as well.


Go's approach is to put everything on the shadow stack. (see: 
https://docs.google.com/document/d/131vjr4DH6JFnb-blm_uRdaC0_Nv3OUwjEY5qVCxCup4/preview#heading=h.mjo1bish3xni)


There is also the possibility of a code transformation. Binaryen 
has a spill-the-pointer pass that effectively gets you go's 
solution (but only for i32's) (see: 
https://github.com/WebAssembly/binaryen/blob/master/src/passes/pass.cpp#L310)


I am favoring the first option, but I don't know how hard that 
would be. Will update the document with this info.


Thank you for questioning this.


Re: Proposal for porting D runtime to WebAssembly

2019-11-26 Thread Thomas Brix via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


An alternative idea would be to use emscripten's fork of musl to 
have a full C library. AFAIK this includes threading.


LLVM is supposed to support TLS in wasm since version 9.


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread thedeemon via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


On the GC part. It says "The only unknown part is how to dump the 
registers to the stack to ensure no pointers are held in the 
registers only."


Please correct me where I'm wrong, but on the level of 
WebAssembly there are no registers, there is an operand stack 
outside the address space, there are local variables to the 
current function, again outside the accessible address space of 
program's linear memory, and there is the linear memory itself. 
So scanning the stack becomes a really hard (should I say 
impossible?) part. What some compilers do is they organize 
another stack manually in the linear memory and store the values 
that would otherwise be on the normal stack, there. Which means 
in case of D you'll have to seriously change the codegen, to 
change how local variables are stored, and to use a kind of 
shadow stack for temporaries in expressions that may be pointers. 
Do you really have a plan about it?


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Steven Schveighoffer via Digitalmars-d-announce

On 11/25/19 7:52 AM, Sebastiaan Koppe wrote:
So it became clear to me I need to have druntime available. It will 
allow people to use the (almost) complete set of D features and it opens 
up some metaprogramming avenues that are closed off right now. With that 
I will be able to create some nice DSL, in line with JSX/SwiftUI or 
.


There are plenty of opportunities here. It is not unfeasible to connect 
spasm to Qt, or dlangui, and create a cross-platform UI library, 
something like flutter.


On the other hand, I am very excited about WebAssembly in general. It is 
certainly at the beginning of the hype curve and I suspect some very 
exciting things will appear in the future. Some of them are already here 
right now. For instance, you can target ARM by compiling D code to wasm 
and then use wasmer to compile it to ARM. With D connecting itself to 
the wasm world it exposes itself to a lot of cool things, which we 
mostly get for free.


As an example, it is just a matter of time before a PaaS provider fully 
embraces wasm. Instead of having docker containers you just compile to 
wasm, which will be pretty small and can boot in (sub) milli-seconds 
(plus they don't necessarily need a linux host kernel running and can 
run it closer to the hypervisor.)


As someone who does web application development, all of this sounds 
awesome. I would LOVE to have a real programming language to do the 
client-side stuff.


-Steve


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Ola Fosheim Grøstad via Digitalmars-d-announce
On Monday, 25 November 2019 at 13:52:29 UTC, Sebastiaan Koppe 
wrote:
You don't have to wait for that. That future is already here. 
The in and output could also be distributed storage, event 
streams or some queue.


Yes, I am most familiar with Google Cloud. Earlier this year 
Google Functions was not available in European datacenters IIRC, 
but now it is at least available in London and Belgium. So things 
are moving in that direction, somewhat slowly. It is annoying to 
not have Google Functions when working with Google Firebase, so 
if webworkers is possible then that could make things much better 
(even for simple things like generating thumbnail images).


Like AWS' glue that focuses on Scala or Python, or google's 
functions that only support js/python and go. Understandable, 
but I'd rather choose my own language. Wasm makes that possible.


Let's hope there is a way for other services than CloudFlare. 
CloudFlare Workers look cool, but their KV store has very low 
propagation guarantees on updates (60 seconds).




Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Joseph Rushton Wakeling via Digitalmars-d-announce
On Monday, 25 November 2019 at 13:00:23 UTC, Sebastiaan Koppe 
wrote:
Yes, definitely. But what do you mean with improved support? 
Like better pattern matching over either types?


Yes, that sort of thing.  And maybe a move towards trying to use 
this kind of error handling in newer editions of the standard 
library (I'm reluctant to push too strongly on that, but I get 
the impression there is some inclination to move in this 
direction, as a reflection of wider design trends).


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Sebastiaan Koppe via Digitalmars-d-announce
On Monday, 25 November 2019 at 13:28:17 UTC, Ola Fosheim Grøstad 
wrote:
On Monday, 25 November 2019 at 12:52:46 UTC, Sebastiaan Koppe 
wrote:
As an example, it is just a matter of time before a PaaS 
provider fully embraces wasm.


This sounds interesting, I've been pondering about serverless 
FaaS (function as a service), where you basically (hopefully) 
get functions triggered by NoSQL database updates and not have 
to bother with your own webserver.


This is already doable with dynamodb, or kinesis streams. Or 
google's dataflow.


Using wasm just makes that more seamless (and faster).

I see that CloudFlare has support for webassembly in their 
workers, but for Google Functions I only see Node10, but maybe 
they can run webassembly as well? I haven't found anything 
definitive on it though...


Node has good wasm support, I don't know how you would get the 
wasm binary in, but it probably can be done.


Instead of having docker containers you just compile to wasm, 
which will be pretty small and can boot in (sub) milli-seconds 
(plus they don't necessarily need a linux host kernel running 
and can run it closer to the hypervisor.)


Yes, but the biggest potential I see is when you don't have to 
set up servers to process data.


I'd rather not set up servers for anything.

Just throw the data into the distributed database, which 
triggers a Function that updates other parts of the database 
and then triggers another function that pushes the resulting PDF 
(or whatever) to a service that serves the files directly (i.e. 
cached close to the user like CloudFlare).


You don't have to wait for that. That future is already here. The 
in and output could also be distributed storage, event streams or 
some queue.


The problem, however, is often when using those tools you get 
pushed into a small set of supported programming languages. Like 
AWS' glue that focuses on Scala or Python, or google's functions 
that only support js/python and go. Understandable, but I'd rather 
choose my own language. Wasm makes that possible.


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Georgi D via Digitalmars-d-announce
On Saturday, 23 November 2019 at 23:21:49 UTC, Nick Sabalausky 
(Abscissa) wrote:

On 11/23/19 3:48 PM, Sebastiaan Koppe wrote:
On Saturday, 23 November 2019 at 15:23:41 UTC, Alexandru 
Ermicioi wrote:


I was wondering what's your position on Fibers?


I am not going to support them in this initial port. And to be


I did start working on a couple of DIPs for them, though. 
Interestingly, I just found out today about C++'s proposed 
coroutines and was shocked by how similar they are to what I 
was designing; even right down to details like how the 
existence of a yield instruction is what triggers the compiler 
to treat the function as a coroutine, and the requirement that 
a coroutine's return type be a special type that includes the 
state information.


Still, a few differences, though. For example, unlike the C++ 
proposal, I'm hoping to avoid the need for additional keywords 
and heap allocation. And I also started a secondary DIP that 
builds on the coroutine foundation to make a much cleaner 
user-experience using the coroutines to generate ranges (what I 
would expect to be the most common use-case).


Hi Sebastiaan,

If you are looking at the C++ coroutines I would recommend 
looking into the  proposal for "First-class symmetric coroutines 
in C++".


The official paper can be found here: 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1430r1.pdf


There is also a presentation with some nice animations explaining 
the proposal here:

https://docs.google.com/presentation/d/1B5My9nh-P2HLGI8Dtfm6Q7ZfHD9hJ27kJUgnfA2syv4/edit?usp=sharing

The paper is still in early development; for example, the syntax 
has changed since then, as well as some other pieces.


If you are interested I can connect you with the author of the 
paper who can explain it with more details.


Georgi







Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Ola Fosheim Grøstad via Digitalmars-d-announce
On Monday, 25 November 2019 at 12:52:46 UTC, Sebastiaan Koppe 
wrote:
As an example, it is just a matter of time before a PaaS 
provider fully embraces wasm.


This sounds interesting, I've been pondering about serverless 
FaaS (function as a service), where you basically (hopefully) get 
functions triggered by NoSQL database updates and not have to 
bother with your own webserver.


I see that CloudFlare has support for webassembly in their 
workers, but for Google Functions I only see Node10, but maybe 
they can run webassembly as well? I haven't found anything 
definitive on it though...


https://blog.cloudflare.com/webassembly-on-cloudflare-workers/
https://cloud.google.com/functions/docs/

Instead of having docker containers you just compile to wasm, 
which will be pretty small and can boot in (sub) milli-seconds 
(plus they don't necessarily need a linux host kernel running 
and can run it closer to the hypervisor.)


Yes, but the biggest potential I see is when you don't have to 
set up servers to process data.


Just throw the data into the distributed database, which triggers 
a Function that updates other parts of the database and then 
triggers another function that pushes the resulting PDF (or 
whatever) to a service that serves the files directly (i.e. 
cached close to the user like CloudFlare).


Seems like it could be less hassle, but not sure if it will catch on 
or fizzle out... I think I'll wait and see what happens. :-)




Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Sebastiaan Koppe via Digitalmars-d-announce
On Monday, 25 November 2019 at 12:19:30 UTC, Joseph Rushton 
Wakeling wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Thanks for putting this together, it looks very carefully 
thought out.


Thanks!

Exceptions can be thrown but not caught. A thrown exception will 
terminate the program. Exceptions are still in the proposal phase. 
When the proposal is accepted, exceptions can be fully supported.


This would suggest that there may be some benefit in D 
providing improved support for return-type-based error 
propagation (as with `Result` from Rust), no ... ?


Yes, definitely. But what do you mean with improved support? Like 
better pattern matching over either types?


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Monday, 25 November 2019 at 09:01:15 UTC, Dukc wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


This proposal is so perfectly balanced between value and 
implementability that I can find nothing to add or remove.


Thanks!

I'm interested, what's your motivation in doing all this? If I 
understood correctly, your primary motivation to write Spasm 
was to write better optimized front-end programs than you get 
with JS frameworks.


That is a fair question. Spasm has been very successful if you 
look at rendering speed. It (almost) beats everything else out 
there [1]. Well, that is not surprising since everything is known 
at compile time; it literally compiles down to the same code as 
if you issued low-level dom calls manually. I am very happy about 
that.


With regards to developer experience it is behind. First of all 
you have to deal with betterC. This alone is already a hurdle for 
many. Second is the DSL, or lack of it. It doesn't come close to 
something like e.g. SwiftUI. In fact, I wrote an (unfinished) 
material-ui component library on top of spasm and I was 
struggling at times.


So it became clear to me I need to have druntime available. It 
will allow people to use the (almost) complete set of D features 
and it opens up some metaprogramming avenues that are closed off 
right now. With that I will be able to create some nice DSL, in 
line with JSX/SwiftUI or 
.


There are plenty of opportunities here. It is not unfeasible to 
connect spasm to Qt, or dlangui, and create a cross-platform UI 
library, something like flutter.


On the other hand, I am very excited about WebAssembly in 
general. It is certainly at the beginning of the hype curve and I 
suspect some very exciting things will appear in the future. Some 
of them are already here right now. For instance, you can target 
ARM by compiling D code to wasm and then use wasmer to compile it 
to ARM. With D connecting itself to the wasm world it exposes 
itself to a lot of cool things, which we mostly get for free.


As an example, it is just a matter of time before a PaaS provider 
fully embraces wasm. Instead of having docker containers you just 
compile to wasm, which will be pretty small and can boot in (sub) 
milli-seconds (plus they don't necessarily need a linux host 
kernel running and can run it closer to the hypervisor.)


There are tons of possibilities here, and I want D to be a viable 
option when that day comes.


So it is not just about frontends anymore.

But wouldn't it be easier to just use Rust since it has already 
implemented all this?


All the rust frameworks for web apps that I have seen rely on 
runtime techniques like the virtual dom. As a consequence they 
spend more cpu time and result in bigger files. That may be 
perfectly fine for most (and it probably is), but I wanted to 
squeeze it as much as I could. Maybe it is possible to do that in 
rust as well, I don't know. D's metaprogramming seemed a more 
natural fit.


[1] except Svelte, which is a little bit smaller in code size, 
and a tiny bit faster. But they build a whole compiler just for 
that. Let's wait for host bindings support in wasm and measure 
again.


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Joseph Rushton Wakeling via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Thanks for putting this together, it looks very carefully thought 
out.


On this particular part:

Exceptions can be thrown but not caught. A thrown exception will 
terminate the program. Exceptions are still in the proposal phase.

When the proposal is accepted, exceptions can be fully supported.


This would suggest that there may be some benefit in D providing 
improved support for return-type-based error propagation (as with 
`Result` from Rust), no ... ?


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Dukc via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


This proposal is so perfectly balanced between value and 
implementability that I can find nothing to add or remove.


I'm interested, what's your motivation in doing all this? If I 
understood correctly, your primary motivation to write Spasm was 
to write better optimized front-end programs than you get with JS 
frameworks. But wouldn't it be easier to just use Rust since it 
has already implemented all this?


Re: Proposal for porting D runtime to WebAssembly

2019-11-25 Thread Dukc via Digitalmars-d-announce
On Sunday, 24 November 2019 at 20:42:24 UTC, Sebastiaan Koppe 
wrote:


LLVM errors out saying it can't select tls for wasm. We could 
modify ldc to not emit TLS instructions under WebAssembly.


No need to make that rule WASM-specific. Do this for all programs 
that have threading disabled.





Re: Proposal for porting D runtime to WebAssembly

2019-11-24 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Sunday, 24 November 2019 at 18:46:04 UTC, Jacob Carlborg wrote:

On 2019-11-23 10:51, Sebastiaan Koppe wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


What will happen to code that uses TLS? Will it be promoted to 
a global variable or will it fail to compile?


LLVM errors out saying it can't select tls for wasm. We could 
modify ldc to not emit TLS instructions under WebAssembly.


But yeah, right now, you need to __gshared everything.

I know.
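
A minimal illustration of what that restriction means in practice (a 
sketch, not code from the port):

```d
// Module-level variables in D are thread-local by default, and LLVM
// currently cannot lower TLS for wasm targets, so for now such
// variables have to be marked __gshared.
int tlsCounter;               // TLS by default -> fails codegen on wasm
__gshared int globalCounter;  // plain global   -> works today
```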


Re: Proposal for porting D runtime to WebAssembly

2019-11-24 Thread Jacob Carlborg via Digitalmars-d-announce

On 2019-11-23 10:51, Sebastiaan Koppe wrote:
This is my proposal for porting D runtime to WebAssembly. I would like 
to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


What will happen to code that uses TLS? Will it be promoted to a global 
variable or will it fail to compile?


--
/Jacob Carlborg


Re: Proposal for porting D runtime to WebAssembly

2019-11-24 Thread Alexandru Ermicioi via Digitalmars-d-announce
On Saturday, 23 November 2019 at 23:21:49 UTC, Nick Sabalausky 
(Abscissa) wrote:

I did start working on a couple of DIPs for them, though.


Can you share a link to DIP draft?
I'd like to read how it would work.

Thank you,
Alexandru.




Re: Proposal for porting D runtime to WebAssembly

2019-11-24 Thread Ola Fosheim Grøstad via Digitalmars-d-announce
On Saturday, 23 November 2019 at 23:21:49 UTC, Nick Sabalausky 
(Abscissa) wrote:

On 11/23/19 3:48 PM, Sebastiaan Koppe wrote:
years, but never got the impression anyone else cared. The fact 
that C# has had them for eons and D still seems to have no 
interest in coroutines that *don't* involve the overhead of 
fibers bothers me to no end.


Fun fact: Simula had stackless coroutines in the 1960s... :-)

Well, I guess I have to add that they were stackless because the 
language was implemented with closure-like-objects, so there was 
no stack, only activation records on the heap. Actually, I 
believe the MIPS architecture had this as their default too (or 
maybe it was another CPU, anyway, it has been a thing.)


Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Nick Sabalausky (Abscissa) via Digitalmars-d-announce

On 11/23/19 3:48 PM, Sebastiaan Koppe wrote:

On Saturday, 23 November 2019 at 15:23:41 UTC, Alexandru Ermicioi wrote:


I was wondering what's your position on Fibers?


I am not going to support them in this initial port. And to be honest I'd 
rather see us moving towards stackless coroutines.


I really hope you're right. I've been pushing for those for years, but 
never got the impression anyone else cared. The fact that C# has had 
them for eons and D still seems to have no interest in coroutines that 
*don't* involve the overhead of fibers bothers me to no end.


I did start working on a couple of DIPs for them, though. Interestingly, 
I just found out today about C++'s proposed coroutines and was shocked 
by how similar they are to what I was designing; even right down to 
details like how the existence of a yield instruction is what triggers 
the compiler to treat the function as a coroutine, and the requirement 
that a coroutine's return type be a special type that includes the state 
information.


Still, a few differences, though. For example, unlike the C++ proposal, 
I'm hoping to avoid the need for additional keywords and heap 
allocation. And I also started a secondary DIP that builds on the 
coroutine foundation to make a much cleaner user-experience using the 
coroutines to generate ranges (what I would expect to be the most common 
use-case).


Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Sebastiaan Koppe via Digitalmars-d-announce
On Saturday, 23 November 2019 at 15:23:41 UTC, Alexandru Ermicioi 
wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


I was wondering what's your position on Fibers?


I am not going to support them in this initial port. And to be 
honest I'd rather see us moving towards stackless coroutines.



Can they be implemented in current WebAssembly?


I haven't looked into it. I suppose they could be, since go has 
their goroutines supported in wasm as well.


But I don't think it is easy. WebAssembly uses the Harvard 
architecture, which means code and data are separate and code 
isn't addressable. That is why wasm uses a function table and 
indexes instead of function pointer addresses. So things like 
moving the instruction pointer are out.


If so I'd guess they would be a nice match for async related 
functionality javascript is known for.


You can still use the JavaScript eventloop, either browser or 
node.




Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Sebastiaan Koppe via Digitalmars-d-announce
On Saturday, 23 November 2019 at 12:40:20 UTC, Ola Fosheim Gr 
wrote:
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Yes, if I read this right the plan is to keep the runtime 
small. That is good, small footprint is important.


Small footprint is super important, especially when targeting the 
browser.


The first stage is getting something to work though, but I will 
definitely chisel bytes off afterwards.


Also, if applicable, structure the object file in a way that 
compresses well (gzip). E.g. the layout of compiler-emitted data 
structures and constants on the heap.


I don't know how much control we have (or want) over this. In the 
end LLVM and wasm-lld do that and we just piggyback that.


Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Sebastiaan Koppe via Digitalmars-d-announce
On Saturday, 23 November 2019 at 10:29:24 UTC, Johan Engelen 
wrote:
Perhaps you can explicitly clarify that "port" in this context 
means that you will add the required version(WebAssembly) 
blocks in the official druntime, rather than in a fork of 
druntime.


Indeed. It will not be a fork, but the changes will be upstreamed 
into the official druntime.


(WebAssembly predefined version now explicitly mentions that it 
is for 32bit. Do you want to broaden this to 64bit as well, or 
add a new version identifier?)


I haven't seen anybody working on wasm64. I know it exists, but 
that is about it.


I do not know what the future of wasm64 will hold. Probably there 
will come a time somebody needs it, but as of yet everybody 
focuses on wasm32, and I don't see that changing anytime soon.


Still, I think it is a good idea to be prepared. Personally I 
would add wasm32 and wasm64 and also define WebAssembly whenever 
one of them is. Don't know if that is the smart thing to do.


I read that Clang uses a triple with explicit mention of WASI: 
--target wasm32-wasi
Are you planning for the same with LDC? Will you need a new 
predefined version identifier for WASI-libc? Perhaps group all 
required compiler features in a section (and move the `real` 
story there).


Rust uses that as well. It would make sense for us to do the 
same. Good idea.


The ultimate goal is to not use libc, but directly call the wasi 
api. In the meantime, yes, we should introduce the WASI-libc version. 
I have now put all that under the WebAssembly version, but that 
is conflating things. (although it is not a big deal, since the 
linker will strip them out if unused.)


Will add to a separate compiler section in the gist.


Can you elaborate on how you envision CI testing?


We can use any of the WASI runtimes. I personally use Wasmer 
(written in rust, uses cranelift which is also used in Firefox). 
Another option (or in parallel) would be using V8 in either 
node or a headless browser (although that would be better suited 
for testing JavaScript interoperability).


I would go with wasmer first.

Do you want to add that to LDC testing? (this may also mean 
that you first add a new change to LDC's druntime, confirming 
functionality with LDC CI, and then upstreaming the change)


Yes, in fact, I am already targeting LDC's druntime.

I'm assuming you already started some work in this area? Where 
can we track it?


Will post the link here after some clean up. A few days.


Great initiative!
  Johan


Thanks, these are some very good points.




Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Alexandru Ermicioi via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


I was wondering what's your position on Fibers?
Can they be implemented in current WebAssembly?

If so I'd guess they would be a nice match for async related 
functionality javascript is known for.


Best regards,
Alexandru.


Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Ola Fosheim Gr via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Yes, if I read this right the plan is to keep the runtime small. 
That is good, small footprint is important.


Also, if applicable, structure the object file in a way that 
compresses well (gzip). E.g. the layout of compiler-emitted data 
structures and constants on the heap.





Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Johan Engelen via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


Perhaps you can explicitly clarify that "port" in this context 
means that you will add the required version(WebAssembly) blocks 
in the official druntime, rather than in a fork of druntime.
(WebAssembly predefined version now explicitly mentions that it 
is for 32bit. Do you want to broaden this to 64bit as well, or add 
a new version identifier?)


I read that Clang uses a triple with explicit mention of WASI: 
--target wasm32-wasi
Are you planning for the same with LDC? Will you need a new 
predefined version identifier for WASI-libc? Perhaps group all 
required compiler features in a section (and move the `real` 
story there).


Can you elaborate on how you envision CI testing?
Do you want to add that to LDC testing? (this may also mean that 
you first add a new change to LDC's druntime, confirming 
functionality with LDC CI, and then upstreaming the change)


I'm assuming you already started some work in this area? Where 
can we track it?


Great initiative!
  Johan





Re: Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Andre Pany via Digitalmars-d-announce
On Saturday, 23 November 2019 at 09:51:13 UTC, Sebastiaan Koppe 
wrote:
This is my proposal for porting D runtime to WebAssembly. I 
would like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d


While I can't say anything on the details, the document looks 
well prepared. Thanks a lot for your work, it is a very good 
starting point.


Kind regards
Andre


Proposal for porting D runtime to WebAssembly

2019-11-23 Thread Sebastiaan Koppe via Digitalmars-d-announce
This is my proposal for porting D runtime to WebAssembly. I would 
like to ask you to review it. You can find it here: 
https://gist.github.com/skoppe/7617ceba6afd67b2e20c6be4f922725d





Re: How is your DConf proposal looking?

2019-03-10 Thread Luís Marques via Digitalmars-d-announce

On Friday, 8 March 2019 at 20:19:07 UTC, Stefan Koch wrote:

I'll submit mine in a few hours.

It's going to be about a hot topic :)


This year I started working early on my proposal, but I only 
finished the final revision today. It's, huh, ambitious. 
Hopefully not too much, but there really wasn't any other way to 
slice it, IMO.


I've been super busy since I moved abroad and started a new job, 
but I've been slowly cleaning up my past projects for public 
release, so I think 2019 and 2020 are going to be great years for 
me ;-)


Re: How is your DConf proposal looking?

2019-03-08 Thread Stefan Koch via Digitalmars-d-announce

On Friday, 8 March 2019 at 18:53:27 UTC, Ali Çehreli wrote:

On 03/08/2019 02:18 AM, Bastiaan Veelo wrote:

On Thursday, 7 March 2019 at 16:57:11 UTC, Ali Çehreli wrote:

Reminder... :)

  http://dconf.org/2019/index.html

Ali


It's shaping up :-)

Bastiaan.


Great! :)

I've decided to submit a proposal myself this year because I 
did use D at work recently. There must be some interesting 
things to talk about in there. Actually, I know that's the case 
because when I mention my D work to colleagues, I feel like I 
can talk for hours.


I'm pretty sure it's the same with others; so, please submit 
something.


Ali


I'll submit mine in a few hours.

It's going to be about a hot topic :)


Re: How is your DConf proposal looking?

2019-03-08 Thread Bastiaan Veelo via Digitalmars-d-announce

On Thursday, 7 March 2019 at 16:57:11 UTC, Ali Çehreli wrote:

Reminder... :)

  http://dconf.org/2019/index.html

Ali


It's shaping up :-)

Bastiaan.


How is your DConf proposal looking?

2019-03-07 Thread Ali Çehreli via Digitalmars-d-announce

Reminder... :)

  http://dconf.org/2019/index.html

Ali


Re: Proposal: __not(keyword)

2018-09-15 Thread Steven Schveighoffer via Digitalmars-d

On 9/14/18 11:06 AM, Adam D. Ruppe wrote:


It also affects attrs brought through definitions though:

shared class foo {
    int a; // automatically shared cuz of the above line of code
    __not(shared) int b; // no longer shared
}


Aside from Jonathan's point, which I agree with, that the attr(bool) 
mechanism would be preferable in generic code (think not just negating 
existing attributes, but determining how to forward them), the above is 
different than just negation.


Making something unshared *inside* something that is shared breaks 
transitivity, and IMO the above simply would be the same as not having 
any attribute there.


In other words, I would expect:

shared foo f;

static assert(is(typeof(f.b) == shared(int)));

I'm not sure how the current behavior works, but definitely wanted to 
clarify that we can't change something like that without a major 
language upheaval.


-Steve


Re: Proposal: __not(keyword)

2018-09-15 Thread Adam D. Ruppe via Digitalmars-d
On Friday, 14 September 2018 at 18:13:49 UTC, Eugene Wissner 
wrote:

Makes the code unreadable.


It is the foo: that causes this, not the __not...

For @nogc, pure and so forth there was imho a better proposal 
with a boolean value:
@gc(true), @gc(false), pure(true), pure(false) etc. It is also 
consistent with the existing UDA syntax.


Yes, I still actually prefer that proposal, but it has been 
around for a long time and still isn't here.


I want something, ANYTHING to unset these things. I don't care 
which proposal or which syntax, I just want it to be possible.


Re: Proposal: __not(keyword)

2018-09-14 Thread Jonathan M Davis via Digitalmars-d
On Friday, September 14, 2018 12:44:11 PM MDT Neia Neutuladh via 
Digitalmars-d wrote:
> On Friday, 14 September 2018 at 18:13:49 UTC, Eugene Wissner
>
> wrote:
> > Makes the code unreadable. You have to count all attributes in
> > the file, then negate them. Nobody should write like this and
> > therefore it is good, that there isn't something like __not.
> >
> > For @nogc, pure and so forth there were imho a better proposal
> > with a boolean value:
> > @gc(true), @gc(false), pure(true), pure(false) etc. It is also
> > consistent with the existing UDA syntax.
>
> The two proposals are extremely similar in effect. Under Adam D
> Ruppe's proposal, I could write:
>
> __not(@nogc) void foo() {}
>
> Here, @nogc wasn't set, so I didn't need to specify any
> attributes. If @nogc: had been specified a thousand times just
> above this function, __not(@nogc) would still make `foo` be
> not-@nogc.
>
> Identically, under your proposal, I could write:
>
> @gc(true) void foo() {}
>
> If this is the entire file, the annotation has no effect. If
> @gc(false) had been specified a thousand times just above this
> function, the annotation would still make `foo` be not-@nogc.
>
> There's no counting of attributes to negate. You just negate
> everything that doesn't apply to this function.

The main reason that attr(bool) is better is that it would allow you to do
stuff like use an enum for the bool, so its value could then depend on other
code, meaning that it would work better with metaprogramming. IIRC, at one
point, Andrei actually proposed that we add attr(bool), but it never
actually went anywhere. I expect that it would stand a good chance of being
accepted if proposed via DIP (especially if a dmd PR were provided at the
same time).
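
To illustrate the metaprogramming angle (hypothetical syntax only; 
attr(bool) is a proposal, not valid D today - the sketch assumes the 
attribute argument may be any compile-time boolean expression):

```d
import std.traits : functionAttributes, FunctionAttribute;

// Sketch: a wrapper that inherits purity and @nogc-ness from the
// wrapped function, because the attribute arguments are computed.
void forward(alias fun)()
    pure (!!(functionAttributes!fun & FunctionAttribute.pure_))
    @nogc(!!(functionAttributes!fun & FunctionAttribute.nogc))
{
    fun();
}
```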

- Jonathan M Davis





Re: Proposal: __not(keyword)

2018-09-14 Thread Neia Neutuladh via Digitalmars-d
On Friday, 14 September 2018 at 18:13:49 UTC, Eugene Wissner 
wrote:
Makes the code unreadable. You have to count all attributes in 
the file, then negate them. Nobody should write like this, and 
therefore it is good that there isn't something like __not.


For @nogc, pure and so forth there was imho a better proposal 
with a boolean value:
@gc(true), @gc(false), pure(true), pure(false) etc. It is also 
consistent with the existing UDA syntax.


The two proposals are extremely similar in effect. Under Adam D 
Ruppe's proposal, I could write:


__not(@nogc) void foo() {}

Here, @nogc wasn't set, so I didn't need to specify any 
attributes. If @nogc: had been specified a thousand times just 
above this function, __not(@nogc) would still make `foo` be 
not-@nogc.


Identically, under your proposal, I could write:

@gc(true) void foo() {}

If this is the entire file, the annotation has no effect. If 
@gc(false) had been specified a thousand times just above this 
function, the annotation would still make `foo` be not-@nogc.


There's no counting of attributes to negate. You just negate 
everything that doesn't apply to this function.


Re: Proposal: __not(keyword)

2018-09-14 Thread Eugene Wissner via Digitalmars-d

On Friday, 14 September 2018 at 18:06:55 UTC, Adam D. Ruppe wrote:
Here's the simple idea: __not(anything) just turns off whatever 
`anything` does in the compiler.


__not(final) void foo() {} // turns off the final flag (if it 
is set)
__not(@nogc) void foo() {} // turns off the @nogc flag (if it 
is set)


__not(const)(int) a; // not const

All it does is invert the flags; the implementation would be 
like `flags &= ~WHATEVER;` so unless it was already set, it 
does nothing and does not check for contradictions.



const:
   int b; // const
  __not(const)(int) a; // not const
immutable:
   int c; // immutable int
   __not(const)(int) a; // still immutable int; there was no 
const set to turn off.



It also affects attrs brought through definitions though:

shared class foo {
   int a; // automatically shared cuz of the above line of code
   __not(shared) int b; // no longer shared
}



This is just a generic way to get the flipped attributes WHICH 
WE DESPERATELY NEED IN ALL SITUATIONS and I don't want to argue 
over keywords line impure and whatever __not(shared) would be 
called etc.


const:
   int b; // const
  __not(const)(int) a; // not const
immutable:
   int c; // immutable int
   __not(const)(int) a; // still immutable int; there was no
const set to turn off.

Makes the code unreadable. You have to count all attributes in 
the file, then negate them. Nobody should write like this and 
therefore it is good, that there isn't something like __not.


For @nogc, pure and so forth there was imho a better proposal 
with a boolean value:
@gc(true), @gc(false), pure(true), pure(false) etc. It is also 
consistent with the existing UDA syntax.


Re: Proposal: __not(keyword)

2018-09-14 Thread Neia Neutuladh via Digitalmars-d

On Friday, 14 September 2018 at 18:06:55 UTC, Adam D. Ruppe wrote:
Here's the simple idea: __not(anything) just turns off whatever 
`anything` does in the compiler.


From your lips to G*d's ears.


Proposal: __not(keyword)

2018-09-14 Thread Adam D. Ruppe via Digitalmars-d
Here's the simple idea: __not(anything) just turns off whatever 
`anything` does in the compiler.


__not(final) void foo() {} // turns off the final flag (if it is 
set)
__not(@nogc) void foo() {} // turns off the @nogc flag (if it is 
set)


__not(const)(int) a; // not const

All it does is invert the flags; the implementation would be like 
`flags &= ~WHATEVER;` so unless it was already set, it does 
nothing and does not check for contradictions.



const:
   int b; // const
  __not(const)(int) a; // not const
immutable:
   int c; // immutable int
   __not(const)(int) a; // still immutable int; there was no 
const set to turn off.



It also affects attrs brought through definitions though:

shared class foo {
   int a; // automatically shared cuz of the above line of code
   __not(shared) int b; // no longer shared
}



This is just a generic way to get the flipped attributes WHICH WE 
DESPERATELY NEED IN ALL SITUATIONS and I don't want to argue over 
keywords line impure and whatever __not(shared) would be called 
etc.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Kagamin via Digitalmars-d

On Friday, 14 September 2018 at 12:00:19 UTC, Arafel wrote:
Since I think this is commonly agreed, I was only trying to 
suggest a possible way to improve it (see my other messages in 
the thread), that's it.


Well, indeed synchronized classes are already treated a little 
special, e.g. they don't allow public fields. As they are 
unlikely to be used in low-level multithreading, I think their 
behavior can be changed to provide a middle ground between C and 
D concurrency - java style: 
https://docs.oracle.com/javase/tutorial/essential/concurrency/syncmeth.html
If `synchronized` class is changed to not imply `shared`, then it 
all can work.
1. `synchronized` class won't imply `shared` (java disallows 
synchronized constructors because it doesn't make sense)
2. methods of a synchronized class are callable on both shared 
and unshared instances
3. maybe even make it applicable only to class, not individual 
methods


AFAIK Andrei wanted some sort of compiler-assisted concurrency, 
maybe he will like such proposal.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Arafel via Digitalmars-d

On 09/14/2018 01:37 PM, Kagamin wrote:

On Friday, 14 September 2018 at 11:13:00 UTC, Arafel wrote:
As I recently discovered, "__gshared" means "static", so not so easy 
to use for class instance members. In fact, that's exactly what I'd 
like to have.


__gshared is for global storage. If you don't use global storage, you 
can simply not qualify anything shared, and you won't have to deal with it.


Sure, then let's remove the "shared" keyword altogether.

Now, seriously, I understand that manually managed shared classes are 
not the preferred paradigm for many (most?) D programmers, and I'm not 
even against removing it altogether and then making it clear that it's 
not supported.


But in my view this situation where there *seems* to be support for it 
in the language, but it's just a minefield once you try, just gives a 
bad impression of the language.


Since I think this is commonly agreed, I was only trying to suggest a 
possible way to improve it (see my other messages in the thread), that's 
it.


I'll anyway keep working with (against) "shared" and finding workarounds 
because for me the benefits in other areas of the language compensate, 
but I'm sure many people won't, and it's a pity because I think it has 
the potential to become something useful.


A.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Kagamin via Digitalmars-d

On Friday, 14 September 2018 at 11:13:00 UTC, Arafel wrote:
As I recently discovered, "__gshared" means "static", so not so 
easy to use for class instance members. In fact, that's exactly 
what I'd like to have.


__gshared is for global storage. If you don't use global storage, 
you can simply not qualify anything shared, and you won't have to 
deal with it.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Arafel via Digitalmars-d


If you prefer C-style multithreading, D supports it with __gshared. 


As I recently discovered, "__gshared" means "static", so not so easy to 
use for class instance members. In fact, that's exactly what I'd like to 
have.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Kagamin via Digitalmars-d

On Friday, 14 September 2018 at 09:36:44 UTC, Arafel wrote:
1) It's not transparent, not even remotely clear how to get 
this working.
2) It should be if shared is to be used. If shared is to be 
disowned / left as it is, then there needs to be an alternative 
to casting voodoo because right now doing "manual" 
multithreading is hell.


If you prefer C-style multithreading, D supports it with 
__gshared. There's std.concurrency too.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Arafel via Digitalmars-d


You can do
     private Unshared!(S*) s;
     this(){ s = new S(1); }


Yeah, there are workarounds, also some other minor issues. For example, 
I wanted to use it with a pointer type, and then take the pointer of 
that (don't ask me: C library), and I had to find a workaround for this 
as well.


My point is that:

1) It's not transparent, not even remotely clear how to get this working.
2) It should be if shared is to be used. If shared is to be disowned / 
left as it is, then there needs to be an alternative to casting voodoo 
because right now doing "manual" multithreading is hell.


A.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Kagamin via Digitalmars-d

On Friday, 14 September 2018 at 08:18:25 UTC, Arafel wrote:

shared synchronized class A
{
private Unshared!S s; // Should this even be possible? What 
about the @disable this??
	// I would expect at least one, if not both of the following, 
to work

//private Unshared!S s2 = S(1);
//private Unshared!S s3 = 1;

this(){
s = S(1);
//s = 1;
s.f;
}
void f() {
writeln(s.i);
}
}


You can do
private Unshared!(S*) s;
this(){ s = new S(1); }


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Arafel via Digitalmars-d

On 09/14/2018 09:32 AM, Kagamin wrote:

struct Unshared(T)
{
     private T value;
     this(T v) shared { opAssign(v); }
     T get() shared { return *cast(T*)&value; }
     alias get this;
     void opAssign(T v) shared { *cast(T*)&value = v; }
}

shared synchronized class A
{
     private Unshared!(int[]) a;
     private Unshared!SysTime t;
     this(){ t=Clock.currTime; }
     int[] f()
     {
     return a;
     }
}


Almost there, you have to make "get" ref for it to work when calling 
methods that mutate the instance, though.



```
import std.datetime.systime;
import std.stdio;
import core.time;

shared struct Unshared(T)
{
private T value;
this(T v) shared { opAssign(v); }
ref T get() shared { return *cast(T*)&value; }
alias get this;
void opAssign(T v) shared { *cast(T*)&value = v; }
}

shared synchronized class A
{
private Unshared!(int[]) a;
private Unshared!SysTime t;
this(){ t=Clock.currTime; }
int[] f()
{
return a;
}
void g() {
writeln(t);
t += 1.dur!"minutes"; // Doesn't work without "ref"
writeln(t);
t = t + 1.dur!"minutes";
writeln(t);
}
}

void main() {
shared A a = new shared A;
a.g;
}
```

Still, I don't know how to deal with structs with @disable this() 
and/or specific constructors and other corner cases; it currently 
doesn't work 100% as it should:


```
import std.stdio;

struct S {
@disable this();
int i;
this(int i_) {
i = 2 * i_;
}

void opAssign(int i_) {
i = 2 * i_;
}

void f() {
i *= 2;
}
}

shared struct Unshared(T)
{
private T value;
this(T v) shared { *cast(T*)&value = v; }
ref T get() shared { return *cast(T*)&value; }
alias get this;
void opAssign(T v) shared { *cast(T*)&value = v; }
}

shared synchronized class A
{
private Unshared!S s; // Should this even be possible? What about 
the @disable this??

// I would expect at least one, if not both of the following, to work
//private Unshared!S s2 = S(1);
//private Unshared!S s3 = 1;

this(){
s = S(1);
//s = 1;
s.f;
}
void f() {
writeln(s.i);
}
}

void main() {
// This is rightly not possible: @disable this()
// S s1;
S s = 2; // This doesn't work in Unshared
s = 1; // Neither does this
s.f;
writeln(s.i); // 4 as it should

shared A a = new shared A;
a.f; // 4 as it should
}
```

That's why I think it should be in the language itself, or at a minimum 
in Phobos once all the bugs are found and ironed out, if possible.


A.


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Kagamin via Digitalmars-d

struct Unshared(T)
{
private T value;
this(T v) shared { opAssign(v); }
T get() shared { return *cast(T*)&value; }
alias get this;
void opAssign(T v) shared { *cast(T*)&value = v; }
}

shared synchronized class A
{
private Unshared!(int[]) a;
private Unshared!SysTime t;
this(){ t=Clock.currTime; }
int[] f()
{
return a;
}
}


Re: Proposal to make "shared" (more) useful

2018-09-14 Thread Arafel via Digitalmars-d

On 09/13/2018 09:49 PM, Jonathan M Davis wrote:


Have you read the concurrency chapter in The D Programming Language by
Andrei? It sounds like you're trying to describe something very similar to
the synchronized classes from TDPL (which have never been fully implemented
in the language). They would make it so that you had a class with shared
members but where the outer layer of shared was stripped away inside member
functions, because the compiler is able to guarantee that they don't escape
(though it can only guarantee that for the outer layer). Every member
function is synchronized and no direct access to the member variables
outside of the class (even in the same module) is allowed. It would make
shared easier to use in those cases where it makes sense to wrapped
everything protected by a mutex in a class (though since it can only safely
strip away the outer layer of shared, it's more limited than would be nice,
and there are plenty of cases where it doesn't make sense to stuff something
in a class just to use it as shared).



I hadn't read the book, but that's indeed the gist of what I'm 
proposing. I think it could be enough to restrict it to value types, 
where it's easier to assume (and even check) that there are no external 
references.




[snip]

If we're going to find ways to make shared require less manual work, it
means finding a way to protect a shared object (or group of shared objects)
with a mutex in a way that is able to guarantee that when you operate on the
data, it's protected by that mutex and that no reference to that data has
escaped. TDPL's synchronized classes are one attempt to do that, but the
requirement that no references escape (so that shared can safely be cast
away) makes it so that only the outer layer of shared can be cast away, and
it's extremely difficult to do better than that with having holes such that
it isn't actually guaranteed to be thread-safe when shared is cast away.
Maybe someone will come up with something that will work, but I wouldn't bet
on it. Either way, I don't see how any solution is going to be acceptable
which does not actually guarantee thread-safety, because it would be
violating the guarantees of shared otherwise. A programmer can choose to
cast away shared in an unsafe manner (or use __gshared) and rely on their
ability to ensure that the code is thread-safe rather than letting shared do
its job, but that's not the sort of thing that we're going to do with a
language construct, and given that the compiler assumes that anything that
isn't shared or immutable is thread-local, it's very much a risky thing to
do.



I completely agree with this argument, however please note that there 
must be a sensible way to work with shared, otherwise we enter the 
"the perfect is the enemy of the good" area.


For reference types it's somehow workable, because you can just cast 
away and store it in a new variable:


```
class A {
this() { }
}

shared synchronized class B {
this(A a) {
a_ = cast (shared) new A; // no shared this()
}
void foo() {
A a = cast () a_;
// Work with it
}
private:
A a_;
}
```

It's still somewhat cumbersome, especially if you have many such members, 
but still doable.


However, this is not possible for value types, and it makes it nigh on 
impossible to work with them in a sensible way. You either have to use 
pointers, or cast shared away every time you want to use it. Neither of 
them is what I would call "practical".


While not the biggest problem (see the later point), I still think that 
synchronized classes are a good compromise, especially with the 
restriction of only applying to full value types (no internal references 
allowed). Of course it is still perhaps possible to bypass that 
mechanism, but so is the case with many other ones (assumeUnique?).


If it's hard enough to do by mistake, it can be assumed that the people 
messing with it should know what they are doing.


Finally, you suggest using __gshared, and I'm not sure you're not having 
the same misunderstanding I had: __gshared implies "static", so it's not 
a valid solution for class fields in most cases.



As for __gshared, it's intended specifically for C globals, and using it for
anything else is just begging for bugs. Because the compiler assumes that
anything which is not marked as shared or immutable is thread-local, having
such an object actually be able to be mutated by another thread risks subtle
bugs of the sort that shared was supposed to prevent in the first place.
Unfortunately, due to some of the difficulties in using shared and some of
the misunderstandings about it, a number of folks have just used __gshared
instead of shared, but once you do that, you're risking subtle bugs, because
that's not at all what __gshared is intended for. If you're using __gshared
for anything other than a C global, it's arguably a bug. Certainly, it's a
risky 

Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Jonathan M Davis via Digitalmars-d
On Thursday, September 13, 2018 7:53:49 AM MDT Arafel via Digitalmars-d 
wrote:
> Hi all,
>
> I know that many (most?) D users don't like using classes or old,
> manually controlled, concurrency using "shared" & co., but still, since
> they *are* in the language, I think they should at least be usable.
>
> After having had my share (no pun intended) of problems using shared,
> I've finally settled for the following:
>
> * Encapsulate all the shared stuff in classes (personal preference,
> easier to pass around).
> * When possible, try to use "shared synchronized" classes, because even
> if there are potential losses of performance, the simplicity is often
> worth it. This means that the class is declared:
>
> ```
> shared synchronized class A { }
> ```
>
> and now, the important point:
>
> * Make all _private non-reference fields_ of shared, synchronized
> classes __gshared.
>
> AIUI the access of those fields is already guaranteed to be safe by the
> fact that *all* the methods of the class are already synchronized on
> "this", and nothing else can access them.
>
> Of course, assuming you then don't escape references to them, but I
> think that would be a *really* silly thing to do, at least in the most
> common case... why on earth are they then private in the first place?.
>
> Now, the question is, would it make sense to have the compiler do this
> for me in a transparent way? i.e. the compiler would automatically store
> private fields of shared *and* synchronized classes in the global storage.
>
> Bonus points if it detects and forbids escaping references to them,
> although it could also be enough to warn the user.

Have you read the concurrency chapter in The D Programming Language by
Andrei? It sounds like you're trying to describe something very similar to
the synchronized classes from TDPL (which have never been fully implemented
in the language). They would make it so that you had a class with shared
members but where the outer layer of shared was stripped away inside member
functions, because the compiler is able to guarantee that they don't escape
(though it can only guarantee that for the outer layer). Every member
function is synchronized and no direct access to the member variables
outside of the class (even in the same module) is allowed. It would make
shared easier to use in those cases where it makes sense to wrapped
everything protected by a mutex in a class (though since it can only safely
strip away the outer layer of shared, it's more limited than would be nice,
and there are plenty of cases where it doesn't make sense to stuff something
in a class just to use it as shared).

> This way I think there would be an easy and sane way of using shared,
> because many of its worst quirks (for one, try using a struct like
> SysTime that overrides opAssign, but not for shared objects, as a field)
> would be transparently dealt with.

The fact that most operations are not allowed with shared is _on purpose_.
If anything, too many operations are currently legal. What's really supposed
to be happening is that every single operation on a shared object is either
guaranteed to be thread-safe, or it's illegal. And if it's illegal, that
means that you either need to use atomics to do an operation (since they're
thread-safe), or you need to protect the shared object with a mutex and
temporarily cast away shared while the mutex is locked so that you can
actually do something with the object - and then make sure that no
thread-local references exist when the mutex is released.
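
A minimal sketch of that pattern with today's tools (all identifiers here
are made up for illustration):

```
import core.atomic : atomicOp;
import core.sync.mutex : Mutex;

struct Stats { int hits; }

shared int counter;          // simple enough for atomics
shared Stats stats;          // protected by mtx below
__gshared Mutex mtx;

shared static this() { mtx = new Mutex; }

void record()
{
    atomicOp!"+="(counter, 1);        // thread-safe without a lock

    mtx.lock();
    scope (exit) mtx.unlock();
    auto s = cast(Stats*) &stats;     // shared stripped only while the lock is held
    s.hits += 1;                      // no thread-local reference escapes the lock
}

void main() { record(); }
```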

Something like copying a shared object shouldn't even be legal in general.
An object that defines opAssign prevents it now, but the fact that it's
legal on any type where copying is not guaranteed to be thread-safe is a
bug. It's one of those details of shared that has never been fully fleshed
out like it should be. Walter and Andrei have been discussing finishing
shared, but it hasn't been a high enough priority for it actually get fully
sorted out yet. Once it is, unless you're dealing with a type that isn't
guaranteed to be thread-safe when copying it, it won't be legal copy it
without first casting away shared. Anything less than that would violate
what shared is supposed to do.

What you should be thinking when dealing with any shared object and whether
a particular operation should be allowed is whether that operation is
guaranteed to be thread-safe. If the compiler can't guarantee that the
operation is thread-safe, then it's not supposed to be legal. The main area
that Walter and Andrei haven't agreed upon yet is how much the compiler can
or should do to ensure that something is thread-safe rather than just making
an operation illegal (e.g. whether memory barriers should be involved). So,
_maybe_ some operations will end up as legal thanks to the compiler adding
extra code to do something to ensure thread-safety, but in most situations,
it's just going to be illegal.

So, ultimately, every type is either going to need to be 

Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Arafel via Digitalmars-d

On 09/13/2018 05:11 PM, Adam D. Ruppe wrote:

On Thursday, 13 September 2018 at 14:43:51 UTC, Arafel wrote:
 Why must __gshared be static?? (BTW, thanks a lot, you have just 
saved me a lot of debugging!!).


The memory location differences of shared doesn't apply to class 
members. All members are stored with the instance, and shared only 
changes the type. (Contrast to global variables, where shared changes 
where they are stored - the default is to put them in thread-local 
storage, and shared moves it back out of that.)


Class static variables btw follow the same TLS rules. A static member is 
really the same as a global thing, just in a different namespace.



Now, the rule of __gshared is it puts it in that global memory storage 
using the unshared type. Unshared type you like here, but also, since 
normally, class members are stored in the object, changing the memory 
storage to the global shared area means it is no longer with the 
object... thus it becomes static.


Then, how on earth are we supposed to have a struct like SysTime as a 
field in a shared class? Other than the "fun" of having a shared 
*pointer* to such a struct that then you can cast as non-shared as 
needed...


What you want is an unshared type without changing the memory layout. 
There's no syntax for this at declaration, but there is one at usage 
point: you can cast away attributes on an lvalue:


shared class Foo {
     void update() {
     // the cast below strips it of all attributes, 
including shared,

     // allowing the assignment to succeed
     cast() s = Clock.currTime;
     }
     SysTime s;
}



Using the private field with a public/protected/whatever accessor 
method, you can encapsulate this assignment a little and make it more 
sane to the outside world.


Thanks a lot!! I remember having tried casting shared away, and ending 
up with a duplicate, but I have just tried it now and indeed it seems to 
work, will have to try with more complex use cases (comparing, adding 
dates and intervals, etc.), but it looks promising.


The problem might have been that I think I tried:

shared SysTime s_;
SysTime s = cast () s_; // Now I've got a duplicate! Ugh!

Because that works with classes... but (in hindsight) obviously not with 
value types.


I still think that it would be useful:

1) Allow __gshared for non-static fields to have this meaning, it would 
make it much more intuitive. A library solution is perhaps possible, but 
cumbersome.


2) Make it (sometimes) automatic as the original proposal.

Of course 1) is the most important part.

A.


Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Arafel via Digitalmars-d

On 09/13/2018 05:16 PM, Kagamin wrote:

struct Unshared(T)
{
     private T value;
     T get() shared { return cast(T)value; }
     alias get this;
     void opAssign(T v) shared { value=cast(shared)v; }
}

shared synchronized class A
{
     private Unshared!(int[]) a;
     int[] f()
     {
     return a;
     }
}



My current attempt, still work in progress:


```
import std.stdio;
import std.datetime.systime;

shared struct GShared(T) {
ubyte[T.sizeof] payload;

this(T t) {
*(cast(T*) &payload) = t;
}
this(shared T t) {
*(cast(T*) &payload) = cast() t;
}
void opAssign(T t) {
*(cast(T*) &payload) = t;
}
void opAssign(shared T t) {
*(cast(T*) &payload) = cast() t;
}
ref T data() {
return *(cast(T*) &payload);
}
alias data this;
}

shared synchronized class A {
this() {
t = Clock.currTime;
}

void printIt() {
writeln(t);
}

private:
GShared!SysTime t;
}

void main() {
shared A a = new shared A;
a.printIt;
}
```


Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Arafel via Digitalmars-d

On 09/13/2018 05:16 PM, Kagamin wrote:

struct Unshared(T)
{
     private T value;
     T get() shared { return cast(T)value; }
     alias get this;
     void opAssign(T v) shared { value=cast(shared)v; }
}

shared synchronized class A
{
     private Unshared!(int[]) a;
     int[] f()
     {
     return a;
     }
}



Doesn't work:

```
import std.datetime.systime;

struct Unshared(T)
{
private T value;
T get() shared { return cast(T)value; }
alias get this;
void opAssign(T v) shared { value=cast(shared)v; }
}

shared synchronized class A {
private Unshared!SysTime t;

this() {
t = Clock.currTime;
}
}

void main() {
shared A a = new shared A;
}
```

Gives you:

onlineapp.d(6): Error: non-shared const method 
std.datetime.systime.SysTime.opCast!(SysTime).opCast is not callable 
using a shared mutable object
onlineapp.d(6):Consider adding shared to 
std.datetime.systime.SysTime.opCast!(SysTime).opCast
onlineapp.d(8): Error: template std.datetime.systime.SysTime.opAssign 
cannot deduce function from argument types !()(shared(SysTime)) shared, 
candidates are:
/dlang/dmd/linux/bin64/../../src/phobos/std/datetime/systime.d(612): 
   std.datetime.systime.SysTime.opAssign()(auto ref const(SysTime) rhs)
onlineapp.d(12): Error: template instance `onlineapp.Unshared!(SysTime)` 
error instantiating


Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Kagamin via Digitalmars-d

struct Unshared(T)
{
private T value;
T get() shared { return cast(T)value; }
alias get this;
void opAssign(T v) shared { value=cast(shared)v; }
}

shared synchronized class A
{
private Unshared!(int[]) a;
int[] f()
{
return a;
}
}



Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Adam D. Ruppe via Digitalmars-d

On Thursday, 13 September 2018 at 14:43:51 UTC, Arafel wrote:
 Why must __gshared be static?? (BTW, thanks a lot, you have 
just saved me a lot of debugging!!).


The memory location differences of shared doesn't apply to class 
members. All members are stored with the instance, and shared 
only changes the type. (Contrast to global variables, where 
shared changes where they are stored - the default is to put them 
in thread-local storage, and shared moves it back out of that.)


Class static variables btw follow the same TLS rules. A static 
member is really the same as a global thing, just in a different 
namespace.



Now, the rule of __gshared is it puts it in that global memory 
storage using the unshared type. Unshared type you like here, but 
also, since normally, class members are stored in the object, 
changing the memory storage to the global shared area means it is 
no longer with the object... thus it becomes static.
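
A quick sketch of where each kind of declaration ends up under those rules:

```
int tlsGlobal;                // module level: one copy per thread (TLS)
shared int sharedGlobal;      // one copy per process, typed shared
__gshared int rawGlobal;      // one copy per process, plain (unshared) type

class C
{
    int perInstance;          // stored in each object
    static int perThread;     // follows the same TLS rule as globals
    __gshared int perProcess; // global storage, so effectively static
}
```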


Then, how on earth are we supposed to have a struct like 
SysTime as a field in a shared class? Other than the "fun" of 
having a shared *pointer* to such a struct that then you can 
cast as non-shared as needed...


What you want is an unshared type without changing the memory 
layout. There's no syntax for this at declaration, but there is 
one at usage point: you can cast away attributes on an lvalue:


shared class Foo {
void update() {
// the cast below strips it of all attributes, 
including shared,

// allowing the assignment to succeed
cast() s = Clock.currTime;
}
SysTime s;
}



Using the private field with a public/protected/whatever accessor 
method, you can encapsulate this assignment a little and make it 
more sane to the outside world.


Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Arafel via Digitalmars-d

On 09/13/2018 04:27 PM, Adam D. Ruppe wrote:

On Thursday, 13 September 2018 at 13:53:49 UTC, Arafel wrote:
* Make all _private non-reference fields_ of shared, synchronized 
classes __gshared.


so __gshared implies static. Are you sure that's what you want?


Indeed it isn't! Why must __gshared be static?? (BTW, thanks a lot, you 
have just saved me a lot of debugging!!).


Then, how on earth are we supposed to have a struct like SysTime as a 
field in a shared class? Other than the "fun" of having a shared 
*pointer* to such a struct that then you can cast as non-shared as needed...


But let's say that this solution seems quite.. sub-optimal... at so 
many levels...


Re: Proposal to make "shared" (more) useful

2018-09-13 Thread Adam D. Ruppe via Digitalmars-d

On Thursday, 13 September 2018 at 13:53:49 UTC, Arafel wrote:
* Make all _private non-reference fields_ of shared, 
synchronized classes __gshared.


so __gshared implies static. Are you sure that's what you want?


Proposal to make "shared" (more) useful

2018-09-13 Thread Arafel via Digitalmars-d

Hi all,

I know that many (most?) D users don't like using classes or old, 
manually controlled, concurrency using "shared" & co., but still, since 
they *are* in the language, I think they should at least be usable.


After having had my share (no pun intended) of problems using shared, 
I've finally settled for the following:


* Encapsulate all the shared stuff in classes (personal preference, 
easier to pass around).
* When possible, try to use "shared synchronized" classes, because even 
if there are potential losses of performance, the simplicity is often 
worth it. This means that the class is declared:


```
shared synchronized class A { }
```

and now, the important point:

* Make all _private non-reference fields_ of shared, synchronized 
classes __gshared.


AIUI the access of those fields is already guaranteed to be safe by the 
fact that *all* the methods of the class are already synchronized on 
"this", and nothing else can access them.


Of course, assuming you then don't escape references to them, but I 
think that would be a *really* silly thing to do, at least in the most 
common case... why on earth are they then private in the first place?.


Now, the question is, would it make sense to have the compiler do this 
for me in a transparent way? i.e. the compiler would automatically store 
private fields of shared *and* synchronized classes in the global storage.


Bonus points if it detects and forbids escaping references to them, 
although it could also be enough to warn the user.


This way I think there would be an easy and sane way of using shared, 
because many of its worst quirks (for one, try using a struct like 
SysTime that overrides opAssign, but not for shared objects, as a field) 
would be transparently dealt with.


A.


Re: DIP Proposal: @manualScoped to prevent automatic field destruction

2018-07-27 Thread FeepingCreature via Digitalmars-d

Update:

https://github.com/dlang/dmd/pull/5830

Turns out unions with fields that have destructors are **defined 
behavior**. I did not know this. Probably the spec needs updating.


So that pretty much alleviates the need for this DIP. Though I 
kind of half-believe that this DIP is actually *better* than 
unrestricted unions, so I'd still leave it open.
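
For reference, the union hack in question looks roughly like this - a field
placed inside a union gets no automatic destructor call, so the enclosing
type calls it by hand (sketch):

```
import std.stdio;

struct Noisy { int id; ~this() { if (id) writeln("destroyed ", id); } }

struct Holder
{
    union { Noisy manual; }          // union members get no automatic dtor call
    this(int id) { manual = Noisy(id); }
    ~this() { destroy(manual); }     // so it is called explicitly, exactly once
}

void main() { auto h = Holder(1); }  // prints "destroyed 1" once
```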


Re: DIP Proposal: @manualScoped to prevent automatic field destruction

2018-07-27 Thread FeepingCreature via Digitalmars-d

On Friday, 27 July 2018 at 11:44:10 UTC, aliak wrote:
A) I'd suggest "@nodestruct" instead, since it sounds like that 
what it's supposed to do?
Yes-ish, but it's also supposed to fill the hole in the 
typesystem created by T.init, and "you can only assign T.init to 
types marked @nodestruct" sounds kind of magic.


B) is this basically for the case of invariants being run 
before destructors where T.init are not valid runtime instances 
of T?

Yep, that's the reason I'm looking at it.

C) If it is, then this seems to me that this is something that 
should just work without a programmer needing to know about how 
T.init and invariants are implemented, so an implementation 
that doesn't call invariants before a destructor only if an 
instance was never constructed at runtime is maybe the way to 
go? Though I have no idea how possible that is.


Basically impossible without giving every type a hidden "bool 
initialized" field. So, basically impossible.
The advantage of doing it with @manualScoped is twofold. First, 
it also covers the case of opAssign methods taking parameters 
that don't need to be destroyed at scope end. Second, even a 
constructor may return a T.init (and, for instance, increment a 
static variable), so if being T.init skipped the destructor we'd 
again get a constructor/destructor mismatch. @manualScoped makes 
it clear this is a variable that contains a value with a manually 
managed lifetime, so users take responsibility to call 
moveEmplace/destroy as required to make constructor/destructor 
calls match up, which is the goal.


Basically, think of a @manualScoped variable as a "weak value" in 
analogy to weak references.
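
A tiny sketch of that discipline with today's tools (the attribute itself
is only proposed; moveEmplace and destroy do the manual bookkeeping):

```
import std.algorithm.mutation : moveEmplace;
import std.stdio;

struct Resource
{
    int id;
    ~this() { if (id) writeln("released ", id); }
}

void main()
{
    auto src = Resource(1);
    Resource dst = void;       // uninitialized slot: no constructor ran for it yet
    moveEmplace(src, dst);     // dst now owns the value, src is left as .init
}                              // only dst's destructor releases id 1
```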



Cheers,
- Ali

maybe your PR where invariants are not called before a 
destructor if an instance is a T.init is the way to go? 
[0]


The PR itself was just a way to hack around this. The problem 
isn't the invariant check on destruction, the problem is the 
destruction without the matching construction.





Re: DIP Proposal: @manualScoped to prevent automatic field destruction

2018-07-27 Thread aliak via Digitalmars-d

On Friday, 27 July 2018 at 09:30:00 UTC, FeepingCreature wrote:
A new UDA is introduced: @manualScoped. It is valid for fields 
in structs and classes, as well as variables and parameters. 
Fields marked with @manualScoped are not automatically 
destructed on scope end.


For instance, a function taking a struct as a @manualScoped 
value will lead to a copy constructor call, but no destructor 
call. It is assumed the passed value will be moved into another 
field via move() or moveEmplace().


In @safe, only @manualScoped fields may be initialized with 
.init. This is to indicate that init represents a hole in the 
typesystem, and using it forces you to engage in manual 
lifecycle management.


The goal of this DIP is to make the union hack unnecessary and 
resolve the value/variable problem with .init initialized 
struct destruction, where { S s = S.init; } led to a destructor 
call but no corresponding constructor call.


Opinions?


A) I'd suggest "@nodestruct" instead, since it sounds like that 
what it's supposed to do?
B) is this basically for the case of invariants being run before 
destructors where T.init are not valid runtime instances of T?
C) If it is, then this seems to me that this is something that 
should just work without a programmer needing to know about how 
T.init and invariants are implemented, so an implementation that 
doesn't call invariants before a destructor only if an instance 
was never constructed at runtime is maybe the way to go? Though I 
have no idea how possible that is.


Cheers,
- Ali

maybe your PR where invariants are not called before a destructor 
if an instance is a T.init is the way to go? [0]


DIP Proposal: @manualScoped to prevent automatic field destruction

2018-07-27 Thread FeepingCreature via Digitalmars-d
A new UDA is introduced: @manualScoped. It is valid for fields in 
structs and classes, as well as variables and parameters. Fields 
marked with @manualScoped are not automatically destructed on 
scope end.


For instance, a function taking a struct as a @manualScoped value 
will lead to a copy constructor call, but no destructor call. It 
is assumed the passed value will be moved into another field via 
move() or moveEmplace().


In @safe, only @manualScoped fields may be initialized with 
.init. This is to indicate that init represents a hole in the 
typesystem, and using it forces you to engage in manual lifecycle 
management.


The goal of this DIP is to make the union hack unnecessary and 
resolve the value/variable problem with .init initialized struct 
destruction, where { S s = S.init; } led to a destructor call but 
no corresponding constructor call.
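
A minimal reproduction of that mismatch, for reference:

```
struct S
{
    int x;
    this(int x_) { x = x_; }
    ~this() { /* expects a constructed value */ }
    invariant { /* may not hold for S.init */ }
}

void f()
{
    S s = S.init;   // no constructor call...
}                   // ...but ~this() (and the invariant check) run here

void main() { f(); }
```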


Opinions?


C++ static exceptions proposal

2018-07-24 Thread Trass3r via Digitalmars-d

http://wg21.link/P0709
Interesting read. Looks like they want to bake something like 
llvm::Expected into the language. I wonder if D shares all 
these dynamic exception issues.
In any case it should become relevant if they really change the C 
ABI as well.


Re: DUB colored output proposal/showcase

2018-06-19 Thread Guillaume Piolat via Digitalmars-d

On Friday, 8 June 2018 at 20:35:50 UTC, Jacob Carlborg wrote:

On 2018-06-08 15:38, Steven Schveighoffer wrote:


Looks excellent! Two thumbs up from me. Is it cross-platform?

Note on some platforms (ahem, Macos) the background is white, 
so this should be correctly colored for that possibility.


On macOS everyone should use iTerm :), which has a dark 
background by default.


+1


Re: DUB colored output proposal/showcase

2018-06-19 Thread Anton Fediushin via Digitalmars-d

On Tuesday, 19 June 2018 at 19:22:09 UTC, Jacob Carlborg wrote:

On 2018-06-09 00:45, gdelazzari wrote:

Actually, I was thinking about that too. In fact, what if a user is 
using a "classic" dark-background theme on macOS's terminal? Or another 
terminal which by default uses a dark background, like the one mentioned 
above? He would get all the colors and the text contrast messed up if I 
put a different color scheme for macOS only. The only valid option would 
be to check the background color of the terminal, but I don't think 
that's possible at all in a standardized way, unless someone can prove 
me wrong. That would be cool.


As I mentioned, I think the only way to do this is to avoid 
using white and black colors and assume all other colors (at 
least the standard ones) work with the selected theme. For 
regular text, reset to the default foreground color instead of 
explicitly using black or white.


Not just black and white but also some shades of grey. Recently I 
fixed the same problem in vibe.d's logger 
(https://github.com/vibe-d/vibe-core/pull/82) and all I can say 
is that the only way to deal with colours is to test them on both 
black and white backgrounds.


Re: DUB colored output proposal/showcase

2018-06-19 Thread Jacob Carlborg via Digitalmars-d

On 2018-06-09 00:45, gdelazzari wrote:


Actually, I was thinking about that too. In fact, what if a user is
using a "classic" dark-background theme on macOS's terminal? Or another
terminal which by default uses a dark background, like the one mentioned
above? He would get all the colors and the text contrast messed up if I
put a different color scheme for macOS only. The only valid option would
be to check the background color of the terminal, but I don't think
that's possible at all in a standardized way, unless someone can prove
me wrong. That would be cool.


As I mentioned, I think the only way to do this is to avoid using white 
and black colors and assume all other colors (at least the standard 
ones) work with the selected theme. For regular text, reset to the 
default foreground color instead of explicitly using black or white.
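
In escape-code terms that amounts to something like this (sketch, standard 
SGR codes only):

```
import std.stdio;

enum green   = "\x1b[32m";
enum yellow  = "\x1b[33m";
enum resetFg = "\x1b[39m";   // back to the theme's own default foreground

void main()
{
    writeln(green, "Building", resetFg, " some-package ~master");
    writeln(yellow, "Warning", resetFg, ": something worth a look");
}
```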


--
/Jacob Carlborg


Re: DUB colored output proposal/showcase

2018-06-09 Thread gdelazzari via Digitalmars-d

On Saturday, 9 June 2018 at 01:17:26 UTC, Nick Sabalausky wrote:

On Friday, 8 June 2018 at 13:35:36 UTC, gdelazzari wrote:

Take a look at these screenshots:

https://imgur.com/a/3RJyd6m



Very nice!

But related to your motivation for this, I do really wish dub 
had far less output by default. For example, I don't need to be 
told over and over that each of my dependencies are up-to-date 
(or which version was chosen. I can always look at 
dub.selections.json or use --verbose if I need to check that.) 
And I don't need to be reminded about --force every time my 
build succeeds. Or for that matter, be told whether or not the 
compiler gave an error. If there are errors I can already see 
that they're there. Etc.


While I mostly agree with you, I have to note that the reminder 
that you can use --force to force rebuilding everything only pops 
up when you "dub build" a package you just built, i.e. if you 
make some changes to the code and build, it won't show up in the 
output, as you can see in the first screenshot at my link. I 
think this is fine and makes sense, since a first time user may 
try to run "dub build" to rebuild the project without obtaining 
that effect, and that message will be useful to understand how to 
actually force rebuilding it if he/she really wants to.


I agree that the "Up-to-date" that pops up for every dependency 
is too verbose, and can definitely be removed. Also the "Failed, 
dub exited with code X" doesn't make a lot of sense as you said. 
Also it's not consistent since, if we keep that message, then why 
not also print something like "Success, dub exited with code 
0, build completed" at the end of a successful build? So yeah, 
agreed, it should go and doesn't carry more information than what 
you already know.


If anyone has other suggestions regarding Dub's output I'll be 
happy to take them, since I'm working on that anyway.


Re: DUB colored output proposal/showcase

2018-06-08 Thread Nick Sabalausky via Digitalmars-d

On Friday, 8 June 2018 at 13:35:36 UTC, gdelazzari wrote:

Take a look at these screenshots:

https://imgur.com/a/3RJyd6m



Very nice!

But related to your motivation for this, I do really wish dub 
had far less output by default. For example, I don't need to be 
told over and over that each of my dependencies are up-to-date 
(or which version was chosen. I can always look at 
dub.selections.json or use --verbose if I need to check that.) 
And I don't need to be reminded about --force every time my build 
succeeds. Or for that matter, be told whether or not the compiler 
gave an error. If there are errors I can already see that they're 
there. Etc.


Re: DUB colored output proposal/showcase

2018-06-08 Thread gdelazzari via Digitalmars-d
For anyone interested, I'm implementing everything in this branch 
of my fork.


https://github.com/gdelazzari/dub/tree/colored-output

Seems like I managed to cleanly substitute the logging module 
with a brand new one :P actually it's mostly copy-paste with 
changes to allow colored output, but yeah. I also "imported" 
d-colorize (by copying all its files in Dub's codebase - to avoid 
having a Dub package dependency on Dub itself).


This is the relevant part (the new logging module) if anyone 
wants to contribute with feedback or enhancements, since is some 
of the first D code I'm writing. It's still missing the no-TTY 
detection, but it's coming.


https://github.com/gdelazzari/dub/blob/colored-output/source/dub/logging.d


Re: DUB colored output proposal/showcase

2018-06-08 Thread gdelazzari via Digitalmars-d

On Friday, 8 June 2018 at 20:41:41 UTC, Jacob Carlborg wrote:

On 2018-06-08 15:38, Steven Schveighoffer wrote:

Note on some platforms (ahem, Macos) the background is white, 
so this should be correctly colored for that possibility.


I would assume that the theme of the terminal is set up so that 
all colors (except white and black) work together with the 
background color of the theme.


Actually, I was thinking about that too. In fact, what if a user 
is using a "classic" dark-background theme on macOS's terminal? 
Or another terminal which by default uses a dark background, like 
the one mentioned above? He would get all the colors and the text 
contrast messed up if I put a different color scheme for macOS 
only. The only valid option would be to check the background 
color of the terminal, but I don't think that's possible at all 
in a standardized way, unless someone can prove me wrong. That 
would be cool.


Re: DUB colored output proposal/showcase

2018-06-08 Thread Steven Schveighoffer via Digitalmars-d

On 6/8/18 4:35 PM, Jacob Carlborg wrote:

On 2018-06-08 15:38, Steven Schveighoffer wrote:


Looks excellent! Two thumbs up from me. Is it cross-platform?

Note on some platforms (ahem, Macos) the background is white, so this 
should be correctly colored for that possibility.


On macOS everyone should use iTerm :), which has a dark background by 
default.




I gotta say, I'm quite satisfied with the default console. I guess I'm 
not everyone :P


-Steve


Re: DUB colored output proposal/showcase

2018-06-08 Thread Jacob Carlborg via Digitalmars-d

On 2018-06-08 15:38, Steven Schveighoffer wrote:

Note on some platforms (ahem, Macos) the background is white, so this 
should be correctly colored for that possibility.


I would assume that the theme of the terminal is set up so that all 
colors (except white and black) work together with the background color 
of the theme.


--
/Jacob Carlborg


Re: DUB colored output proposal/showcase

2018-06-08 Thread Jacob Carlborg via Digitalmars-d

On 2018-06-08 15:38, Steven Schveighoffer wrote:


Looks excellent! Two thumbs up from me. Is it cross-platform?

Note on some platforms (ahem, Macos) the background is white, so this 
should be correctly colored for that possibility.


On macOS everyone should use iTerm :), which has a dark background by 
default.


--
/Jacob Carlborg


Re: DUB colored output proposal/showcase

2018-06-08 Thread gdelazzari via Digitalmars-d

On Friday, 8 June 2018 at 16:33:29 UTC, Basile B. wrote:

On Friday, 8 June 2018 at 16:20:12 UTC, gdelazzari wrote:

On Friday, 8 June 2018 at 16:11:27 UTC, Basile B. wrote:
While this look okay please in the initial PR don't forget to 
add code to deactivate colors when DUB's output is piped.


Sure, I won't forget about that. On Linux (and I guess also 
MacOS) it should be enough to check if stdout is a tty 
(isatty() from C std) or not, while I don't know how that 
could be done on Windows. Do you have any idea?


Certainly. look at how DMD does that ;)


I had a look at it, really helpful, thanks!


Re: DUB colored output proposal/showcase

2018-06-08 Thread Basile B. via Digitalmars-d

On Friday, 8 June 2018 at 16:20:12 UTC, gdelazzari wrote:

On Friday, 8 June 2018 at 16:11:27 UTC, Basile B. wrote:
While this look okay please in the initial PR don't forget to 
add code to deactivate colors when DUB's output is piped.


Sure, I won't forget about that. On Linux (and I guess also 
MacOS) it should be enough to check if stdout is a tty 
(isatty() from C std) or not, while I don't know how that could 
be done on Windows. Do you have any idea?


Certainly. look at how DMD does that ;)


Re: DUB colored output proposal/showcase

2018-06-08 Thread gdelazzari via Digitalmars-d

On Friday, 8 June 2018 at 16:11:27 UTC, Basile B. wrote:
While this look okay please in the initial PR don't forget to 
add code to deactivate colors when DUB's output is piped.


Sure, I won't forget about that. On Linux (and I guess also 
MacOS) it should be enough to check if stdout is a tty (isatty() 
from C std) or not, while I don't know how that could be done on 
Windows. Do you have any idea?
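
The POSIX side of that check is tiny (sketch; the Windows branch is the 
part that needs the DMD-style handling):

```
version (Posix) import core.sys.posix.unistd : isatty, STDOUT_FILENO;

bool stdoutIsTty()
{
    version (Posix)
        return isatty(STDOUT_FILENO) != 0;
    else
        return false;   // Windows: query the console mode instead (see how DMD does it)
}
```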


Re: DUB colored output proposal/showcase

2018-06-08 Thread Basile B. via Digitalmars-d

On Friday, 8 June 2018 at 13:35:36 UTC, gdelazzari wrote:
Hello everyone, I'm a new user of the language (I've been 
playing around with it for some months) and I'm really liking 
it.


[...]

I started this thread to have a discussion on this before 
submitting any pull request (which, in the case of this change 
being apprecciated, I'll happily take the time to make).


Thanks to anyone in advance,
Giacomo


While this looks okay, please don't forget in the initial PR to add 
code to deactivate colors when DUB's output is piped.


Re: DUB colored output proposal/showcase

2018-06-08 Thread gdelazzari via Digitalmars-d

On Friday, 8 June 2018 at 15:34:06 UTC, Uknown wrote:
I love it! I have very little experience with terminal colours, 
but as far as colourizing text on POSIX it's fairly easy. You 
just need to emit the right ANSI escape sequences [0]. This is 
what the colorize-d library does. For Windows before Windows 
10, things are more messy. You need to use `handle`s to get 
the current state and then correctly set the colours. The real 
hard part here is adjusting the colour scheme based on the 
terminal background colour.


[0]: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors


Sure, I sort of know how that works, I did some stuff some time 
ago. Also, regarding Windows, the colorize-d library already 
handles what you described. I think I'll simply take the source 
of colorize-d and copy-paste the files in the Dub source, as I 
saw they've done with some parts of Vibe.d in order to avoid 
having to fetch Dub packages while compiling Dub itself.


I would love to help you on this. Is there anything in 
particular you need help with?


Actually, it's not a difficult task. But, being a bit of a 
perfectionist, I already spotted the possibility of writing a 
separate module handling all the terminal output, in order to 
better separate things. At the moment Dub uses the log module 
from Vibe.d (as I wrote before, which it seems they've just 
copy-pasted in dub/internal/vibecompat/core/log.d) to print stuff 
to the terminal. I think replacing it with a module that handles 
colors and "tags" like in the screenshots I attached would be 
the best option. But in order to do something like this cleanly 
we should first define well how to structure the output and thus 
the module handling it. Also documenting stuff a bit. Then we'll 
need to replace all the calls to logInfo, logDiagnostic, 
logError, etc... in the entire codebase :P


A quicker option is to just leave the log calls there, add the 
escape sequences in order to color the wanted words (as it's 
currently done in the proof-of-concept) and then fix the Vibe.d 
log module to handle colors on Windows with the same "workaround" 
that colorize-d uses. That's a faster way indeed, but a bit dirty 
IMHO. Also, to handle different color schemes (for MacOS/white 
background terminals) it may become a mess.


So, having a module which handles all the terminal output seems 
the best option to me, if we want to do stuff cleanly. The main 
problem is to define its requirements, how it should interface 
with the rest of the code, how to structure it, etc... then 
writing the code is the simplest part, as always. I can handle 
this by myself, but if anyone wants to help that would be really 
appreciated, especially on planning how to structure the changes. 
Maybe we can discuss about the implementation on IRC or some 
other platform?


Re: DUB colored output proposal/showcase

2018-06-08 Thread Johannes Loher via Digitalmars-d

On Friday, 8 June 2018 at 13:35:36 UTC, gdelazzari wrote:
Hello everyone, I'm a new user of the language (I've been 
playing around with it for some months) and I'm really liking 
it.


[...]


I really like this very much! I think this is a great improvement 
for dub and I believe it is very important for us to get our 
tooling to be on par with the tools from other languages.


I would love to help you on this. Is there anything in particular 
you need help with?


Re: DUB colored output proposal/showcase

2018-06-08 Thread Uknown via Digitalmars-d

On Friday, 8 June 2018 at 13:51:05 UTC, gdelazzari wrote:
On Friday, 8 June 2018 at 13:38:59 UTC, Steven Schveighoffer 
wrote:

Looks excellent! Two thumbs up from me. Is it cross-platform?

Note on some platforms (ahem, Macos) the background is white, 
so this should be correctly colored for that possibility.


-Steve


At the moment it's "probably" Linux-only, but that's because I 
only wanted a proof of concept and I worked on it on my Linux 
installation. I imported this library/Dub package 
https://github.com/yamadapc/d-colorize and just used it. Which, 
by the way, it's no-good at the moment since I saw that Dub 
doesn't use Dub packages itself - probably because, otherwise, 
you don't have a way to easily compile it without Dub itself, I 
guess :P so I'll need to either write my custom color 
outputting code within Dub's source or just import that library.


Of course making it cross-platform is a mandatory thing to me. 
Windows also needs some specific stuff to output colors, as you 
can see in the library I linked, so there are definitely some 
things to do to support all the platforms. I may even take a 
look at how DMD itself outputs colored output, I guess it will 
be nice to keeps things consistent.


As for MacOS having a different background... I don't really 
own a Mac nor I have ever used one before, so I don't even know 
how tools usually output their colored text on it. At the 
moment it just sets the foreground color to 
green/yellow/blue/whatever, without changing the background, if 
that was your concern. If you meant that yellow-on-white is not 
readable... well... I guess so. Maybe two different color 
palettes should be used? IDK, as I said I never used a Mac 
before so I don't really know how other tools handle this, 
maybe if some Mac user could help on this, it would be great.


Thanks for the appreciation by the way!


I love it! I have very little experience with terminal colours, 
but as far as colourizing text on POSIX it's fairly easy. You just 
need to emit the right ANSI escape sequences [0]. This is what 
the colorize-d library does. For Windows before Windows 10, 
things are more messy. You need to use `handle`s to get the 
current state and then correctly set the colours. The real hard 
part here is adjusting the colour scheme based on the terminal 
background colour.


[0]: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
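
For reference, the pre-Windows-10 dance looks roughly like this (sketch; 
writeColored is a made-up name, the WinAPI calls are the real ones):

```
version (Windows)
{
    import core.sys.windows.windows;
    import std.stdio : write;

    void writeColored(string s, WORD color)
    {
        auto h = GetStdHandle(STD_OUTPUT_HANDLE);
        CONSOLE_SCREEN_BUFFER_INFO info;
        GetConsoleScreenBufferInfo(h, &info);          // remember the current state
        SetConsoleTextAttribute(h, color);
        write(s);
        SetConsoleTextAttribute(h, info.wAttributes);  // restore it afterwards
    }
}
```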


Re: DUB colored output proposal/showcase

2018-06-08 Thread Bastiaan Veelo via Digitalmars-d

On Friday, 8 June 2018 at 13:35:36 UTC, gdelazzari wrote:
Hello everyone, I'm a new user of the language (I've been 
playing around with it for some months) and I'm really liking 
it.

[...]

Take a look at these screenshots:

https://imgur.com/a/3RJyd6m


Nice!!






Re: DUB colored output proposal/showcase

2018-06-08 Thread Steven Schveighoffer via Digitalmars-d

On 6/8/18 9:51 AM, gdelazzari wrote:

On Friday, 8 June 2018 at 13:38:59 UTC, Steven Schveighoffer wrote:

Looks excellent! Two thumbs up from me. Is it cross-platform?

Note on some platforms (ahem, Macos) the background is white, so this 
should be correctly colored for that possibility.




At the moment it's "probably" Linux-only, but that's because I only 
wanted a proof of concept and I worked on it on my Linux installation. I 
imported this library/Dub package https://github.com/yamadapc/d-colorize 
and just used it. Which, by the way, it's no-good at the moment since I 
saw that Dub doesn't use Dub packages itself - probably because, 
otherwise, you don't have a way to easily compile it without Dub itself, 
I guess :P so I'll need to either write my custom color outputting code 
within Dub's source or just import that library.


Yeah, I would expect that the colorization is simply a matter of 
outputting the right control characters. You probably just need to 
include some simple stuff inside dub source itself. But I'm far from 
experienced on this.




Of course making it cross-platform is a mandatory thing to me. Windows 
also needs some specific stuff to output colors, as you can see in the 
library I linked, so there are definitely some things to do to support 
all the platforms. I may even take a look at how DMD itself outputs 
colored output, I guess it will be nice to keeps things consistent.


As for MacOS having a different background... I don't really own a Mac 
nor I have ever used one before, so I don't even know how tools usually 
output their colored text on it. 


I'm assuming it's similar to Linux, it's just that the background is 
white instead of black.


At the moment it just sets the 
foreground color to green/yellow/blue/whatever, without changing the 
background, if that was your concern. If you meant that yellow-on-white 
is not readable... well... I guess so.


Yes. In fact, I've used the new vibe.d and it appears not to adjust its 
colorization to my screen, it's light grey on white (almost impossible 
to read).


Maybe two different color 
palettes should be used? IDK, as I said I never used a Mac before so I 
don't really know how other tools handle this, maybe if some Mac user 
could help on this, it would be great.


The way I would solve it is to have a "light" mode and a "dark" mode, 
and version the default mode based on the OS (Linux, windows, etc. all 
dark mode by default, macos light mode by default).
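
Something like this, presumably (sketch of the versioned default):

```
enum Palette { dark, light }

version (OSX)
    enum Palette defaultPalette = Palette.light;
else
    enum Palette defaultPalette = Palette.dark;
```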




Thanks for the appreciation by the way!


Thanks for the effort!

-Steve


Re: DUB colored output proposal/showcase

2018-06-08 Thread Atila Neves via Digitalmars-d

On Friday, 8 June 2018 at 13:35:36 UTC, gdelazzari wrote:
Hello everyone, I'm a new user of the language (I've been 
playing around with it for some months) and I'm really liking 
it.


[...]


I like it!

Atila

