Re: Beta 2.094.0

2020-09-12 Thread Manu via Digitalmars-d-announce
On Fri, Sep 11, 2020 at 5:50 PM Martin Nowak via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> Glad to announce the first beta for the 2.094.0 release, ♥ to the
> 49 contributors.
>
> This is the first release to be built with LDC on all platforms,
> so we'd welcome some more thorough beta testing.
>
> http://dlang.org/download.html#dmd_beta
> http://dlang.org/changelog/2.094.0.html
>
> As usual please report any bugs at
> https://issues.dlang.org
>
> -Martin


What a monster release! We haven't had one like this for a while!


Re: Visual D 1.0.0 released

2020-07-09 Thread Manu via Digitalmars-d-announce
The tooling needs detailed build configuration knowledge, which is
relatively easy to extract from the msbuild runtime. Makefiles are no fun
to extract such knowledge from, and I'm not aware of standard tooling to
hook into here.
dub should be simple, but that only works for simple D projects and small
libraries; it all falls over at scale. Even DMD itself is too large a D
project for Code-D to work well with.
There's also no sense of 'active configuration', which makes it impossible
to apply the proper build configuration when navigating or highlighting
code.

For example: VisualD not only *works*, it can even do goto-definition
between languages; if you extern(C++) some function and then "go to
definition" from your D code, it'll find it in the C++ code and navigate
there because of the centralised code database engine.
Code-D often can't even go to the definition of D functions in D code
reliably ;)
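
To make the cross-language point concrete, here's a trivial sketch of the
kind of pairing I mean (the names are made up for illustration, and you'd
link against the C++ side):

// D side: drawTriangle is only *declared* here; its body lives in some
// renderer.cpp. In VisualD, "go to definition" on a call jumps straight
// into the C++ source thanks to the shared code database.
extern(C++) void drawTriangle(float x0, float y0,
                              float x1, float y1,
                              float x2, float y2);

void main()
{
    drawTriangle(0, 0, 1, 0, 0, 1);  // implemented on the C++ side
}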

There is so much more work in VisualD than people can easily see at first
glance.

On Thu, Jul 9, 2020 at 8:55 PM rikki cattermole via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On 09/07/2020 10:22 PM, Manu wrote:
> > Then the general autocomplete engine, which is fairly dependent on the
> > detail expressed in the project files.
>
> DCD is due for a rewrite into using dmd-fe.
>
> However as it stands, I do not believe it is mature enough to use as a
> library for this purpose. So I commend Rainer for helping to mature it!
>
> It'll help in the long run to get IDEs up to VisualD's experience for
> everything but debugging.
>


Re: Visual D 1.0.0 released

2020-07-09 Thread Manu via Digitalmars-d-announce
FWIW, I actually agree with everything you said about linux as a dev
environment vs windows. But that wasn't the question... as an IDE and
debugger integration, there is absolutely no comparison to VisualD, not by
miles.

It would be really cool if parts from VisualD were more suitable for
VSCode, but I can't see that being easy or practical.
One is the Concorde integration, which is pretty deep; GDB is just not
remotely as good, and the VSCode debug UX is embarrassing by contrast.
Then the general autocomplete engine, which is fairly dependent on the
detail expressed in the project files. While vcxproj files are very shit to
write, it's much easier on the tooling than trying to extract sufficient
build config from make.
Nobody writes VS project files; you generate them, just the same as
makefiles... nobody writes makefiles.


On Thu, Jul 9, 2020 at 6:45 PM Petar via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On Thursday, 9 July 2020 at 00:03:02 UTC, Manu wrote:
> >
> > Not really. VisualD is objectively the most functional and
> > competent
> > IDE/Debugger solution, BY FAR.
> > It's not an opinion, it's a measurable fact.
> >
> > Obviously, if you are into vim/emacs/whatever, then you don't
> > actually
> > really care much about IDE support and debugging, and in that
> > case, this
> > question is not relevant to you.
> > I agree that Code-D + VSCode is probably the second best
> > solution, but
> > there's really no comparison; the debugger is a kind of
> > funny/sad joke, the
> > D debug experience is poorly integrated, and the
> > intellisense/autocomplete
> > is nowhere near the same standard. There's no competition.
> >
> > Code-D is great work, but it's still catching up, and it may
> > never do so because VSCode just has an embarrassingly bad
> > debugger :(
>
> Professionally, I've used Visual Studio for the first 3-4 years
> of my career. Back then the company I worked for was a MSFT
> partner, so we all had the Professional or Ultimate edition that
> had all the bells and whistles. I agree that VS has probably the
> best debugger, though I'd actually say that the debugging
> experience is much better with C# than C++. Debugging C++ (with
> /Od and with or without /Zo) feels wanky compared to C#, which has
> always been rock-solid.
>
> However, I've since moved to Linux and I couldn't be happier. I
> haven't had to fire up Windows for the past 1-2 years. On my work
> machine, I neither have a dual boot, nor even a Windows VM, just
> Linux. Windows really sucks as a dev environment. And I'm telling
> this as someone who would for years be one of the first among my
> colleagues and friends to install the latest Windows, VS, MSVC,
> .NET FX /.NET Core preview builds, Chocolatey, vcpkg, WSL,
> Windows Terminal, Cygwin, Msys, Msys2 and so on.
> The only salvation I see is WSL2, but still, it's overall a
> pretty bad dev UX. No matter how much effort is put into a GUI IDE,
> nothing beats Unix as an IDE, especially modern distros such
> as NixOS (my daily driver). Yes, it takes much more effort for
> beginners than VS, but it's all worth it.
>
> Coming back to VS Code, for what I do in my daily job it really
> destroys the "real" VS:
> * It's cross-platform, so I can take my dev environment on
> whichever OS I work.
> * You don't need to create a "project file" to effectively work
> on a project
> * On Windows, an admin user is not necessary to install & update.
> This makes the update process unnoticeable, whereas VS, before
> its new modular installer, was unbearably slow (1h minimum).
> * Start time is much better. Additionally, in many cases, you
> don't need to restart when you install/uninstall an extension -
> this makes it much easier to test extensions for 1-2 mins and
> then throw them away.
> * The extensions integrate much better - in many cases it takes <
> 10 secs to install something, while with VS it takes at least
> 1min in my experience, sometimes even several minutes, depending
> on the size of the extension.
> * VS Code integrates much better with the system - on Windows you
> just right-click to open a folder or file and it's opened in less
> than 1-3 secs. In the terminal you just type `code ` and
> it's done. I know this works already with full VS and I have used
> it, but its much slower startup time defeats this workflow.
> * For beginners (who don't know vim), VS Code is actually not a
> bad choice as the default git editor (it's just `git config
> --global core.editor "code --wait"`) (e.g. for interactive
> rebase, writing commit messages, git add -p edit, and so on)
> * Given that I spend at least at 30-7

Re: Visual D 1.0.0 released

2020-07-08 Thread Manu via Digitalmars-d-announce
On Wed, Jul 8, 2020 at 10:15 PM aberba via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On Wednesday, 8 July 2020 at 01:26:55 UTC, Manu wrote:
> > On Tue, Jul 7, 2020 at 10:00 PM JN via Digitalmars-d-announce <
> > digitalmars-d-announce@puremagic.com> wrote:
> >
> >> On Saturday, 4 July 2020 at 13:00:16 UTC, Rainer Schuetze
> >> wrote:
> >> > See
> >> > https://rainers.github.io/visuald/visuald/VersionHistory.html for
> the complete list of changes.
> >> >
> >> > Cheers,
> >> > Rainer
> >>
> >> Anyone who uses VisualD and Code-D can compare the two? (Yes,
> >> I know the difference between Visual Studio and Visual Studio
> >> Code).
> >>
> >
> > The difference is night vs day... VisualD is, by far, like
> > REALLY FAR, the
> > most mature and useful IDE and debug environment for D.
>
> That's depends on what you're comfortable with and if you're a
> core windows guy... how you use it too.
>

Not really. VisualD is objectively the most functional and competent
IDE/Debugger solution, BY FAR.
It's not an opinion, it's a measurable fact.

Obviously, if you are into vim/emacs/whatever, then you don't actually
really care much about IDE support and debugging, and in that case, this
question is not relevant to you.
I agree that Code-D + VSCode is probably the second best solution, but
there's really no comparison; the debugger is a kind of funny/sad joke, the
D debug experience is poorly integrated, and the intellisense/autocomplete
is nowhere near the same standard. There's no competition.

Code-D is great work, but it's still catching up, and it may never do so
because VSCode just has an embarrassingly bad debugger :(


Re: Visual D 1.0.0 released

2020-07-08 Thread Manu via Digitalmars-d-announce
On Wed, Jul 8, 2020 at 7:05 PM Greatsam4sure via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On Wednesday, 8 July 2020 at 01:26:55 UTC, Manu wrote:
> > On Tue, Jul 7, 2020 at 10:00 PM JN via Digitalmars-d-announce <
> > digitalmars-d-announce@puremagic.com> wrote:
> >
> >> On Saturday, 4 July 2020 at 13:00:16 UTC, Rainer Schuetze
> >> wrote:
> >> > See
> >> > https://rainers.github.io/visuald/visuald/VersionHistory.html for
> the complete list of changes.
> >> >
> >> > Cheers,
> >> > Rainer
> >>
> >> Anyone who uses VisualD and Code-D can compare the two? (Yes,
> >> I know the difference between Visual Studio and Visual Studio
> >> Code).
> >>
> >
> > The difference is night vs day... VisualD is, by far, like
> > REALLY FAR, the
> > most mature and useful IDE and debug environment for D.
> > TL;DR: if you are a D dev, and you use Windows, you should
> > definitely try
> > Visual Studio + VisualD. I for one couldn't work without it!
>
>
> VisualD is great. I appreciate the people behind it. Great thanks
> to you all.
>
> Setting up Visual D is not user friendly. I have downloaded
> VisualD+DMD+LDC since version 0.52 and could not run an ordinary
> Hello World. All kinds of errors. I sought help on the learn group
> several times to no avail. My experience with Visual D is bad.
>

I've been testing the first-install process for almost 10 years.
I haven't had any problems with first-install for at least 6 years.

Make sure to create bug reports for issues like that; what version of VS
are you using? Are there any non-standard elements to your installation or
dev environment?


Re: Visual D 1.0.0 released

2020-07-07 Thread Manu via Digitalmars-d-announce
On Tue, Jul 7, 2020 at 10:00 PM JN via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On Saturday, 4 July 2020 at 13:00:16 UTC, Rainer Schuetze wrote:
> > See
> > https://rainers.github.io/visuald/visuald/VersionHistory.html
> > for the complete list of changes.
> >
> > Cheers,
> > Rainer
>
> Anyone who uses VisualD and Code-D can compare the two? (Yes, I
> know the difference between Visual Studio and Visual Studio Code).
>

The difference is night vs day... VisualD is, by far, like REALLY FAR, the
most mature and useful IDE and debug environment for D.
TL;DR: if you are a D dev, and you use Windows, you should definitely try
Visual Studio + VisualD. I for one couldn't work without it!


Re: Visual D 1.0.0 released

2020-07-04 Thread Manu via Digitalmars-d-announce
This is huge!

Congrats on the super cool milestone with a bunch of really great new stuff.
Thanks so much for your tireless work Rainer!
I wouldn't be here without all your effort on this.

On Sat, Jul 4, 2020 at 11:05 PM Rainer Schuetze via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> Hello,
>
> after having passed the 10 year anniversary of public availability
> recently, it is finally time to release version 1.0 of Visual D, the
> Visual Studio extension that adds D language support to VS 2008-2019.
>
> You can find the installer at
> http://rainers.github.io/visuald/visuald/StartPage.html
>
> Highlights from this release:
>
> - semantic engine based on dmd front end now enabled by default and
> updated to 2.092. If you are low on memory or run a 32-bit Windows, you
> should switch back to the legacy engine.
>
> - debugger extension mago will now evaluate struct or class properties
> (methods or fields) __debugOverview, __debugExpanded and __debugTextView
> to customize the debugger display. mago can even display forward ranges
> as a list, but that is currently rather slow, so it is disabled by
> default (see debugger options).
>
> - the bar on the top of the edit window now displays the current edit
> scope and allows faster navigation within a source file (needs the dmd
> based engine)
>
> - ever wondered how to navigate to the type of a variable declared by
> `auto` inference? clicking an identifier in a tool tip from intellisense
> will now jump to its definition (only with the dmd based engine)
>
> See https://rainers.github.io/visuald/visuald/VersionHistory.html for
> the complete list of changes.
>
> Cheers,
> Rainer
>
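
For anyone wondering what those __debug* hooks might look like in practice,
here is one plausible shape, going only by the description above (the exact
expected signatures may differ, so treat this as a sketch):

import std.format : format;

struct Fraction
{
    int num, den;

    // assumed usage: shown as the one-line value summary in the watch window
    string __debugOverview() const { return format("%s/%s", num, den); }

    // assumed usage: shown in the expanded / text view of the value
    string __debugTextView() const { return format("numerator=%s, denominator=%s", num, den); }
}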


Re: Mir updates

2020-04-02 Thread Manu via Digitalmars-d-announce
On Tue, Mar 31, 2020 at 12:15 AM 9il via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On Monday, 30 March 2020 at 12:23:03 UTC, jmh530 wrote:
> > On Monday, 30 March 2020 at 06:33:13 UTC, 9il wrote:
> >> [snip]
> >
> > Thanks, I like 'em.
> >
> > I noticed that the little icon in the tabs has changed from
> > most of them. However, the mir random is unchanged from before.
>
> Probably it is because of your browser cache; it will likely be
> updated after a while.
>
> > Also, on the mir.glas page, one of the lines says
> > "matrix-vector operations %3 done, partially optimized for
> > now". Another line says "l3 was moved to mir-glas", which is
> > confusing because it should be at the mir-glas documentation
> > page anyway.
>
> We don't have documentation for mir-glas library, only for mir
> (backports) package, which has mir.glas package.
>
> I don't know what to do with mir-glas, it is too good to be
> forgotten, but I don't see a commercial perspective in it.
>

Why not? Where does it fall short of being useful?


Re: Bison 3.5 is released, and features a D backend

2020-01-29 Thread Manu via Digitalmars-d-announce
On Tue, Jan 28, 2020 at 11:05 AM Akim Demaille via
Digitalmars-d-announce  wrote:
>
> On Wednesday, 1 January 2020 at 09:47:11 UTC, Akim Demaille wrote:
> > Hi all!
> >[...]
> > If you would like to contribute, please reach out to us via
> > bison-patc...@gnu.org, or help-bi...@gnu.org.
>
> Hi,
>
> There was no answer.  Should I understand that there's no need
> for Bison in D?

This is very interesting to me for one. I have some projects that glue
Bison C output to my D applications, and that's a hassle to manage; a
whole lot of little C shims that call through to my D code. I wasn't
aware Bison has attempted to emit D code, or I would have been using
that!
So, for what it's worth, I think this is definitely useful to the D community.
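
To give a sense of what that glue looks like today, a heavily simplified
sketch (all of these names are made up for illustration):

// Bison generates a C parser exposing yyparse(); the grammar's semantic
// actions call tiny C shims, which in turn call extern(C) functions
// implemented in D, like this one:
extern(C) void onStatementParsed(const(char)* text, int line)
{
    // ...hand the parsed statement over to the D application...
}

// Bison's generated entry point, linked in from the C side.
extern(C) int yyparse();

void runParser()
{
    yyparse();  // drive the generated C parser from D
}

A D backend would let the semantic actions live in D directly and make
that whole shim layer disappear.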

That said, you appeared to be asking for contributors in your OP. As a
Bison end-user, I just treat it like a black box, and I don't know
anything about Bison's implementation, or really even very much about
how it works beyond the fact that it just does.
I read your post, but it didn't occur to me that I was the person you
were looking for, so I didn't reply. It's possible there are many
people with a similar thought?

Depending on what you need, I may be able to offer some sort of help,
most likely in terms of advice for how the D output presents and folds
into the user's project, and whether it works or not. I don't have time
to become a Bison dev though; there'd be a huge learning curve for me,
and I'm really time-poor as is.

I don't think your takeaway should be that it's not useful to people,
but finding contributors who can hack on Bison is a different
question.


Re: DIP 1024---Shared Atomics---Accepted

2020-01-13 Thread Manu via Digitalmars-d-announce
On Mon, Jan 13, 2020 at 1:40 AM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/10/2020 2:48 PM, Manu wrote:
> > On Thu, Jan 9, 2020 at 6:35 PM Walter Bright via
> > Digitalmars-d-announce  wrote:
> >>
> >> On 1/7/2020 6:31 PM, Manu wrote:
> >>> It will still do that, either now... or later. So, why wait?
> >>
> >> Because customers have their own schedules.
> >
> > Customers update their compilers according to their schedules, and
> > they can use `-revert` if they're not ready to migrate, that's the
> > whole point...
> > You didn't answer me though, if it's accepted, and it's implemented...
> > why not enable it? and when will we do it?
> > Explain the reason for the delay or choice in timing? The transition
> > you describe must happen at some time... and delay changes nothing;
> > the transition is exactly the same.
>
> We decided a couple years ago to implement disruptive new features first with
> -preview=feature, and some time later make it the default and have a
> -revert=feature.
>
> So far, it has worked well. I don't see any reason to change it.

Yes, but we've had the -preview for close to a year now... I'm asking
what "some time later" means?
Obviously the lib needs to be fixed (as Rainer pointed out).


Re: DIP 1024---Shared Atomics---Accepted

2020-01-10 Thread Manu via Digitalmars-d-announce
On Thu, Jan 9, 2020 at 6:35 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/7/2020 6:31 PM, Manu wrote:
> > It will still do that, either now... or later. So, why wait?
>
> Because customers have their own schedules.

Customers update their compilers according to their schedules, and
they can use `-revert` if they're not ready to migrate, that's the
whole point...
You didn't answer me though, if it's accepted, and it's implemented...
why not enable it? and when will we do it?
Explain the reason for the delay or choice in timing? The transition
you describe must happen at some time... and delay changes nothing;
the transition is exactly the same.


Re: DIP 1024---Shared Atomics---Accepted

2020-01-07 Thread Manu via Digitalmars-d-announce
On Wed, Jan 8, 2020 at 12:20 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/6/2020 10:17 PM, Manu wrote:
> > Well it was a preview for an unaccepted DIP, so it could have been
> > withdrawn. I guess I have increased confidence now, but it still seems
> > unnecessary to delay.
>
> Preview means for accepted DIPs as well when they break existing code.
>
> >> Don't really have a schedule at the moment. It'll likely be at least a 
> >> year.
> > A year? That's disappointing. What is the reason to delay this? It
> > doesn't break anything, and it likely fixes bugs on contact.
>
> It breaks all code that manipulates shared data directly, whether it was
> correctly written or not.

It will still do that, either now... or later. So, why wait?


Re: DIP 1024---Shared Atomics---Accepted

2020-01-06 Thread Manu via Digitalmars-d-announce
On Sat, Jan 4, 2020 at 2:15 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/3/2020 3:41 AM, Manu wrote:
> > We've already had this -preview for quite a while; I have enabled it
> > in an experimental context, but I don't tend to write and deploy code
> > that depends on future-features. I stick to the current language when
> > writing code I intend to share.
> > Do you have some sense of when we will make this part of the language?
> > The DIP is accepted, but it didn't describe that it would be enabled
> > at some future time...?
>
> You shouldn't be reluctant to use preview switches. It's only that way to ease
> the transition for people, not because we're going to withdraw it.

Well it was a preview for an unaccepted DIP, so it could have been
withdrawn. I guess I have increased confidence now, but it still seems
unnecessary to delay.

> Don't really have a schedule at the moment. It'll likely be at least a year.

A year? That's disappointing. What is the reason to delay this? It
doesn't break anything, and it likely fixes bugs on contact.


Re: DIP 1024---Shared Atomics---Accepted

2020-01-03 Thread Manu via Digitalmars-d-announce
On Fri, Jan 3, 2020 at 8:35 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/2/2020 11:31 PM, Manu wrote:
> > Okay, although I don't really understand; if we have accepted the
> > feature, but we don't enable the feature... then nobody will use it,
> > and no code will be written that's compatible.
> > This kinda seems like a future-acceptance?
> > Nobody enables `-preview`s.
>
> Those who need it (you!) will use it. That's what matters.

We've already had this -preview for quite a while; I have enabled it
in an experimental context, but I don't tend to write and deploy code
that depends on future-features. I stick to the current language when
writing code I intend to share.
Do you have some sense of when we will make this part of the language?
The DIP is accepted, but it didn't describe that it would be enabled
at some future time...?


Re: DIP 1024---Shared Atomics---Accepted

2020-01-02 Thread Manu via Digitalmars-d-announce
On Fri, Jan 3, 2020 at 9:20 AM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/2/2020 4:17 AM, Manu wrote:
> > On Thu, Jan 2, 2020 at 7:45 PM Walter Bright via
> > Digitalmars-d-announce  wrote:
> >>
> >> On 1/2/2020 12:01 AM, Manu wrote:
> >>> Quick quick, we need a PR to issue deprecation messages for those
> >>> invalid read/writes! :)
> >>
> >> It's already been merged!
> >>
> >> https://github.com/dlang/dmd/pull/10209
> >>
> >> Some really fast work there :-)
> >
> > Doesn't the acceptance of the DIP suggest that it should no longer be
> > `-preview`; it should be enabled and an option to disable the feature
> > via `-revert` should be introduced?
>
> We switch to -revert after some time has passed (a year or two) so people have
> time to adapt.
>
> > Or short of that, a deprecation message should be emitted when compiling?
>
> I'm not sure that's necessary with the preview/revert switches.

Okay, although I don't really understand; if we have accepted the
feature, but we don't enable the feature... then nobody will use it,
and no code will be written that's compatible.
This kinda seems like a future-acceptance?
Nobody enables `-preview`s.


Re: DIP 1024---Shared Atomics---Accepted

2020-01-02 Thread Manu via Digitalmars-d-announce
On Thu, Jan 2, 2020 at 7:45 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/2/2020 12:01 AM, Manu wrote:
> > Quick quick, we need a PR to issue deprecation messages for those
> > invalid read/writes! :)
>
> It's already been merged!
>
> https://github.com/dlang/dmd/pull/10209
>
> Some really fast work there :-)

Doesn't the acceptance of the DIP suggest that it should no longer be
`-preview`; it should be enabled and an option to disable the feature
via `-revert` should be introduced?
Or short of that, a deprecation message should be emitted when compiling?


Re: DIP 1024---Shared Atomics---Accepted

2020-01-02 Thread Manu via Digitalmars-d-announce
On Thu, Jan 2, 2020 at 4:45 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/1/2020 9:53 PM, Manu wrote:
> > On Thu, Jan 2, 2020 at 3:40 PM Mike Parker via Digitalmars-d-announce
> >  wrote:
> >>
> >> DIP 1024, "Shared Atomics", was accepted without comment.
> >>
> >> https://github.com/dlang/DIPs/blob/master/DIPs/accepted/DIP1024.md
> >
> > This has been a long time coming!
>
> A New Year's present for all of us!

Quick quick, we need a PR to issue deprecation messages for those
invalid read/writes! :)


Re: DIP 1024---Shared Atomics---Accepted

2020-01-01 Thread Manu via Digitalmars-d-announce
On Thu, Jan 2, 2020 at 3:40 PM Mike Parker via Digitalmars-d-announce
 wrote:
>
> DIP 1024, "Shared Atomics", was accepted without comment.
>
> https://github.com/dlang/DIPs/blob/master/DIPs/accepted/DIP1024.md

This has been a long time coming!
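
For anyone who hasn't followed the DIP, the practical effect is roughly
this (my own minimal sketch, not taken from the DIP text; I believe the
relevant switch is -preview=nosharedaccess, but check your compiler's
help):

import core.atomic;

shared int counter;

void increment()
{
    // counter = counter + 1;    // direct read/write of shared data:
                                 // rejected once the preview is the default
    counter.atomicFetchAdd(1);   // explicit atomic access instead
}

void reset()
{
    atomicStore(counter, 0);     // stores likewise go through core.atomic
}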


Re: Release D 2.089.0

2019-11-06 Thread Manu via Digitalmars-d-announce
On Tue., 5 Nov. 2019, 11:35 pm John Chapman via Digitalmars-d-announce, <
digitalmars-d-announce@puremagic.com> wrote:

> On Wednesday, 6 November 2019 at 01:16:00 UTC, Manu wrote:
> > On Tue, Nov 5, 2019 at 5:14 PM Manu  wrote:
> >>
> >> On Tue, Nov 5, 2019 at 1:20 PM John Chapman via
> >> Digitalmars-d-announce 
> >> wrote:
> >> >
> >> > On Tuesday, 5 November 2019 at 19:05:10 UTC, Manu wrote:
> >> > > Incidentally, in your sample above there, `a` and `b` are
> >> > > not shared... why not just write: `cas(, null, b);` ??
> >> > > If source data is not shared, you shouldn't cast to shared.
> >> >
> >> > Because casts were needed in 2.088 and earlier and I just
> >> > updated to 2.089, unaware of the API change. Should I log
> >> > `null` not working as a bug?
> >>
> >> Yes
> >
> > But I also think you should update your code to not perform the
> > casts. Can you confirm that the null works when removing the
> > shared casts?
>
> Yes and no - it compiles when removing the casts, but AVs at
> runtime.
>
> Bug filed: https://issues.dlang.org/show_bug.cgi?id=20359


Thanks! I'll look into these as soon as I have a moment. Sorry for the
inconvenience.


Re: Release D 2.089.0

2019-11-05 Thread Manu via Digitalmars-d-announce
On Tue, Nov 5, 2019 at 5:14 PM Manu  wrote:
>
> On Tue, Nov 5, 2019 at 1:20 PM John Chapman via Digitalmars-d-announce
>  wrote:
> >
> > On Tuesday, 5 November 2019 at 19:05:10 UTC, Manu wrote:
> > > Incidentally, in your sample above there, `a` and `b` are not
> > > shared... why not just write: `cas(&a, null, b);` ?? If source
> > > data is not shared, you shouldn't cast to shared.
> >
> > Because casts were needed in 2.088 and earlier and I just updated
> > to 2.089, unaware of the API change. Should I log `null` not
> > working as a bug?
>
> Yes

But I also think you should update your code to not perform the casts.
Can you confirm that the null works when removing the shared casts?
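
To be explicit, the cast-free form I have in mind is just:

import core.atomic;
void main()
{
    Object a, b = new Object;
    cas(&a, null, b);  // no shared casts; a and b are thread-local references
}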


Re: Release D 2.089.0

2019-11-05 Thread Manu via Digitalmars-d-announce
On Tue, Nov 5, 2019 at 1:20 PM John Chapman via Digitalmars-d-announce
 wrote:
>
> On Tuesday, 5 November 2019 at 19:05:10 UTC, Manu wrote:
> > Incidentally, in your sample above there, `a` and `b` are not
> > shared... why not just write: `cas(&a, null, b);` ?? If source
> > data is not shared, you shouldn't cast to shared.
>
> Because casts were needed in 2.088 and earlier and I just updated
> to 2.089, unaware of the API change. Should I log `null` not
> working as a bug?

Yes


Re: Release D 2.089.0

2019-11-05 Thread Manu via Digitalmars-d-announce
On Mon, Nov 4, 2019 at 11:55 PM John Chapman via
Digitalmars-d-announce  wrote:
>
> On Tuesday, 5 November 2019 at 06:44:29 UTC, Manu wrote:
> > On Mon., 4 Nov. 2019, 2:05 am John Chapman via
> > Digitalmars-d-announce, < digitalmars-d-announce@puremagic.com>
> > wrote:
> >
> >> Something has changed with core.atomic.cas - it used to work
> >> with `null` as the `ifThis` argument, now it throws an AV. Is
> >> this intentional?
> >>
> >> If I use `cast(shared)null` it doesn't throw but if the change
> >> was deliberate shouldn't it be mentioned?
> >>
> >
> > Changes were made because there were a lot of problems with
> > that module...
> > but the (reasonably comprehensive) unit tests didn't reveal any
> > such
> > regressions. We also build+test many popular OSS projects via
> > buildkite,
> > and there weren't problems.
> > Can you show the broken code?
>
> Sure - this AVs on DMD 2.088 Windows:
>
> import core.atomic;
> void main() {
>     Object a, b = new Object;
>     cas(cast(shared)&a, null, cast(shared)b);
> }

Oh... a class.
Yeah, that's an interesting case that I actually noted had a low
testing surface area.
It's also theoretically broken; despite what's practical, I think it's
improperly spec-ed that shared classes can be used with atomics.
With a struct, you can declare `shared(T)* s_ptr`, but with classes
you can only declare `shared(C) c_ptr`, where the difference is that
`s_ptr` can be read/written... but `c_ptr` is typed such that the
pointer itself is shared (because classes are implicitly a pointer),
so `c_ptr` can't safely be read or written...
So, I actually think that atomic API is malformed and should not
support `shared` arguments, but I tried to preserve existing
behaviour while being more strict about what is valid. I obviously
missed something with `null` here.
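
A minimal sketch of the struct-vs-class distinction I mean (illustration
only, not from the actual druntime tests):

import core.atomic;

void structCase()
{
    shared int data;         // the *data* is shared...
    shared(int)* p = &data;  // ...but the pointer itself is a thread-local value
    cas(p, 0, 1);            // well-typed: cas gets a pointer to shared data
}

class C {}

void classCase()
{
    shared C c;  // here the reference *itself* is typed shared, so there's no
                 // way to spell "thread-local reference to a shared instance",
                 // which is the hole in spec-ing atomics over classes.
}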

Incidentally, in your sample above there, `a` and `b` are not
shared... why not just write: `cas(&a, null, b);` ?? If source data is
not shared, you shouldn't cast to shared.


Re: Release D 2.089.0

2019-11-04 Thread Manu via Digitalmars-d-announce
On Mon., 4 Nov. 2019, 2:05 am John Chapman via Digitalmars-d-announce, <
digitalmars-d-announce@puremagic.com> wrote:

> On Sunday, 3 November 2019 at 13:35:36 UTC, Martin Nowak wrote:
> > Glad to announce D 2.089.0, ♥ to the 44 contributors.
> >
> > This release comes with corrected extern(C) mangling in mixin
> > templates, atomicFetchAdd and atomicFetchSub in core.atomic,
> > support for link driver arguments, better support of LDC in
> > dub, and plenty of other bug fixes and improvements.
> >
> > http://dlang.org/download.html
> > http://dlang.org/changelog/2.089.0.html
> >
> > -Martin
>
> Something has changed with core.atomic.cas - it used to work with
> `null` as the `ifThis` argument, now it throws an AV. Is this
> intentional?
>
> If I use `cast(shared)null` it doesn't throw but if the change
> was deliberate shouldn't it be mentioned?
>

Changes were made because there were a lot of problems with that module...
but the (reasonably comprehensive) unit tests didn't reveal any such
regressions. We also build+test many popular OSS projects via buildkite,
and there weren't problems.
Can you show the broken code?

>


Re: When will you announce DConf 2020?

2019-11-03 Thread Manu via Digitalmars-d-announce
On Sun, Nov 3, 2019 at 8:20 AM Murilo via Digitalmars-d-announce
 wrote:
>
> On Sunday, 3 November 2019 at 06:33:48 UTC, Mike Parker wrote:
> > On Sunday, 3 November 2019 at 00:51:38 UTC, Murilo wrote:
> >> Hi guys. I'm eager to attend the next DConf, which is why I'm
> >> already planning everything about how I will travel from
> >> Brazil to the UK(or maybe Germany). When will you announce the
> >> place and date of the next DConf?
> >
> > When our plans are finalized.
>
> Okay but keep in mind that the earlier I buy the airplane ticket
> the cheaper it gets so please don't take too long to finish that.

If you have ideas for an interesting talk, submit a proposal, and
perhaps you may have your costs covered?
It's great to see people coming from such different parts of the world!


Re: Release D 2.088.0

2019-09-07 Thread Manu via Digitalmars-d-announce
On Sat, Sep 7, 2019 at 9:05 AM jmh530 via Digitalmars-d-announce
 wrote:
>
> On Saturday, 7 September 2019 at 07:16:36 UTC, Manu wrote:
> > [snip]
> >
> > What's the story with string though; the second line (linking
> > back to the C++ reference) of the doco isn't there... O_o
>
> Hmm, I didn't notice that. It also is a problem for
> core.stdcpp.array. I'm looking at other uses of LINK2 in druntime
> and they are usually of the form $(LINK2 http:/xxx, some_text).
> Without digging in to the documentation generator, I think LINK2
> is meant to be used when you want to replace the link with some
> text. I don't see many cases of raw links being used (the source
> shows up as a raw link, but it's not done that way in the files),
> someone else may know better than I do.

The text before the link is gone too.
I don't know how to iterate on the docs, since they only appear from
CI, and I have no idea how to create them myself :/
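
If I'm reading the Ddoc macro right, the fix should just be giving LINK2
its display text, something like this (my guess at the intended form):

/**
 * D language counterpart to C++ std::basic_string.
 *
 * C++ reference: $(LINK2 https://en.cppreference.com/w/cpp/string/basic_string, std::basic_string)
 */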


Re: Release D 2.088.0

2019-09-07 Thread Manu via Digitalmars-d-announce
On Fri, Sep 6, 2019 at 3:50 AM jmh530 via Digitalmars-d-announce
 wrote:
>
> On Thursday, 5 September 2019 at 20:55:15 UTC, Manu wrote:
> > [snip]
> >
> > Interesting... you can see in the code, there are doco comments
> > everywhere, but the docs are empty O_o
> > Also the second line of the description linking to the C++ docs
> > is
> > missing too... where did all the docs go?
> >
>
> The point I was trying to make wrt basic_string was that the top
> of it looks like
>
> /**
>   * D language counterpart to C++ std::basic_string.
>   *
>   * C++ reference: $(LINK2
> https://en.cppreference.com/w/cpp/string/basic_string)
>   */
> extern(C++, class)
> extern(C++, (StringNamespace))
> struct basic_string(T, Traits = char_traits!T, Alloc =
> allocator!T)
>
> whereas the top of vector looks like
>
> extern(C++, class) struct vector(T, Alloc = allocator!T)
>
> It has no top-level comment. With no top-level comment, all the
> other documentation won't show up.

I'll give it a good do-over.

What's the story with string though; the second line (linking back to
the C++ reference) of the doco isn't there... O_o


Re: Release D 2.088.0

2019-09-05 Thread Manu via Digitalmars-d-announce
On Tue, Sep 3, 2019 at 7:30 AM jmh530 via Digitalmars-d-announce
 wrote:
>
> On Tuesday, 3 September 2019 at 14:02:43 UTC, bachmeier wrote:
> > [snip]
> >
> > Those are a big deal. From a marketing perspective, those are
> > gold IMO.
>
> If these are as big a deal as people seem to think, the
> documentation could be improved by including a brief example of
> how to use.
>
> In addition, the documentation page for vector [1] seems a bit
> thin. It doesn't have the top-level comment like basic_string
> does [2]. At a minimum, that should be fixed before going on a
> marketing blitz...
>
> [1] https://dlang.org/phobos/core_stdcpp_vector.html
> [2]
> https://github.com/dlang/druntime/blob/f07859b9b33740d7d7357ca3e27077f91c02dfc8/src/core/stdcpp/string.d#L59

Interesting... you can see in the code, there are doco comments
everywhere, but the docs are empty O_o
Also the second line of the description linking to the C++ docs is
missing too... where did all the docs go?

I've tried to iterate on the docs a couple of times, but I have no
idea how I'm supposed to do it, because they're only published when
the PR is merged... how are you supposed to iterate locally?
That empty doco is not what I expect from looking at the source.

But yeah, I agree. More will come online soon-ish. We can give one
release to harden them up a bit before making a fuss about it.


Re: Release D 2.088.0

2019-09-05 Thread Manu via Digitalmars-d-announce
On Tue, Sep 3, 2019 at 4:51 AM Daniel Kozak via Digitalmars-d-announce
 wrote:
>
> On Tue, Sep 3, 2019 at 10:48 AM Manu via Digitalmars-d-announce
>  wrote:
> >
> > On Tue., 3 Sep. 2019, 1:00 am Martin Nowak via Digitalmars-d-announce, 
> >  wrote:
> >>
> >> Glad to announce D 2.088.0, ♥ to the 58 contributors.
> >>
> >> This release comes with a new getLocation trait, a getAvailableDiskSpace
> >> in std.file, removal and deprecation of lots of various outdated APIs,
> >> an core.atomic.cas with result value, and a couple of more changes.
> >>
> >> http://dlang.org/download.html
> >> http://dlang.org/changelog/2.088.0.html
> >>
> >> -Martin
> >
> >
> > Huzzah!
> >
> > I like to think std::string and std::vector are a pretty big deal too ;)
>
> It will be as soon as GCC with the new ABI is supported ;-)

The old ABI works now at least. The new ABI is blocked on move
constructors; libstdc++ has an interior pointer >_<
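
For anyone curious why an interior pointer is a problem for us, a simplified
sketch of the shape of the issue (this is not the real libstdc++ layout):

struct SsoString
{
    char* ptr;     // when the string is short, this points at buf below
    size_t len;
    char[16] buf;  // small-string buffer stored inside the object itself
}

// D moves values with a bitwise copy. After such a move, ptr still points
// into the *old* object's buf; without move constructors there is no hook
// to patch it up, which is why the new ABI is blocked on them.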



Re: Visual D 0.50.0 released

2019-09-04 Thread Manu via Digitalmars-d-announce
On Wed, Jun 26, 2019 at 1:30 AM a11e99z via Digitalmars-d-announce
 wrote:
>
> On Wednesday, 26 June 2019 at 02:35:53 UTC, Bart wrote:
> > On Tuesday, 25 June 2019 at 19:47:40 UTC, Rainer Schuetze wrote:
>
> Earlier I mentioned problems with VD on my laptop.
> Most of the time I use a desktop with VS2019 and VD 0.49.2 - it's working.
> I don't have a few days for "debugging" my installation right now, so
> I've put my laptop back in its case, and I'm afraid to install the new VD
> on my desktop. I will plunge deep into that jungle in July.
> I filed some issues/enhancements to the bugtracker yesterday.
> In any case, thanks for VD - I like it and I need it.

Your problems are easy to resolve.
BuildTools has some weird paths... but you're running VS; why are you
using the separate build tools distribution when you have VS
installed? This is thoroughly non-standard and weird. Just install the
proper C++ tools?
The path issue that led to optlink rather than MS link should be
trivial to resolve; then you will not have linking problems.


Re: Release D 2.088.0

2019-09-03 Thread Manu via Digitalmars-d-announce
On Tue., 3 Sep. 2019, 1:00 am Martin Nowak via Digitalmars-d-announce, <
digitalmars-d-announce@puremagic.com> wrote:

> Glad to announce D 2.088.0, ♥ to the 58 contributors.
>
> This release comes with a new getLocation trait, a getAvailableDiskSpace
> in std.file, removal and deprecation of lots of various outdated APIs,
> an core.atomic.cas with result value, and a couple of more changes.
>
> http://dlang.org/download.html
> http://dlang.org/changelog/2.088.0.html
>
> -Martin
>

Huzzah!

I like to think std::string and std::vector are a pretty big deal too ;)

>


Re: Visual D 0.50.0 released

2019-09-03 Thread Manu via Digitalmars-d-announce
On Tue, Sep 3, 2019 at 12:10 AM Rainer Schuetze via
Digitalmars-d-announce  wrote:
>
>
>
> On 23/06/2019 19:58, Rainer Schuetze wrote:
> > Hi,
> >
> > today a new version of Visual D has been released. Its main new features are
> >
> > - additional installer available that includes DMD and LDC
> >
> > - now checks for updates for Visual D, DMD and LDC, assisted download
> > and install
> >
> > - debugger improvements: better support for dynamic type of classes,
> > show exception messages, conditional breakpoints
> >
> > - highlight references to symbol at caret (experimental)
> >
> > See https://rainers.github.io/visuald/visuald/VersionHistory.html for
> > the complete list of changes
> >
> > Visual D is a Visual Studio extension that adds D language support to
> > VS2008-2019. It is written in D, its source code can be found on github:
> > https://github.com/D-Programming-Language/visuald, pull requests welcome.
> >
> > The installers can be found at
> > http://rainers.github.io/visuald/visuald/StartPage.html
> >
> > Visual D is now also available in the Visual Studio Marketplace:
> > https://marketplace.visualstudio.com/items?itemName=RainerSchuetze.visuald
> >
> > Happy coding,
> > Rainer
> >
>
> I just released a bug fix version 0.50.1 with a few enhancements:
>
> - fixes some integration issues with VS 2019 16.2
> - mago: improve function call in watch window
> - better version highlighting for files not in project
>
> Full list of changes as usual here:
> https://rainers.github.io/visuald/visuald/VersionHistory.html



Thanks again Rainer!



Re: Silicon Valley C++ Meetup - August 28, 2019 - "C++ vs D: Let the Battle Commence"

2019-08-27 Thread Manu via Digitalmars-d-announce
On Tue, Aug 27, 2019 at 12:25 PM Ali Çehreli via
Digitalmars-d-announce  wrote:
>
> I will be presenting a comparison of D and C++. RSVP so that we know how
> much food to order:
>
>https://www.meetup.com/ACCU-Bay-Area/events/263679081/
>
> It will not be streamed live but some people want to record it; so, it
> may appear on YouTube soon.
>
> As always, I have way too many slides. :) The contents are
>
> - Introduction
> - Generative programming with D
> - Thousand cuts of D
> - C++ vs. D
> - Soapboxing
>
> Ali

Wednesday? :(
I briefly pondered skipping up for the weekend...



Re: D GUI Framework (responsive grid teaser)

2019-05-27 Thread Manu via Digitalmars-d-announce
On Mon, May 27, 2019 at 2:00 PM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Monday, 27 May 2019 at 20:14:26 UTC, Manu wrote:
> > Computers haven't had only one thread for almost 20 years. Even
> > mobile
> > phones have 8 cores!
> > This leads me back to my original proposition.
>
> If Robert is aiming for embedded and server rendering then he
> probably wants a simple structure with limited multi-threading.

Huh? Servers take loads-of-cores as far as you possibly can! The Zen2
parts announced the other day will give our servers something like
256 threads!

Even embedded parts have many cores; look at every mobile processor.
But here's the best part: if you design your software to run well on
computers... it does!
Multi-core focused software tends to perform better on single-core
setups than software that was written for single-core in my
experience.
My most surprising example was when we rebooted our engine in 2005 for
XBox360 and PS3 because we needed to fill 6-8 cores with work and our
PS2 era architecture did not do that effectively. At the time, we
worried about how the super-scalar choices we were making would affect
Gamecube which still had just one core. It was a minor platform so we
thought we'd just wear the loss to minimise tech baggage... boy were
we wrong! Right out of the gate, our scalability-focused architecture
ran better on the single-core machines than the previous highly mature
code that had received years of optimisation. It looked like there
were more moving parts in the architecture, but it still ran
meaningfully faster.
The key reason was proper partitioning of work. If you write a
single-threaded app, you are almost 100% guaranteed to blatantly
disregard software engineering in favour of a laser focus on your API
and user experience, and you will write bad software as a result.
Every time.



Re: D GUI Framework (responsive grid teaser)

2019-05-27 Thread Manu via Digitalmars-d-announce
On Mon, May 27, 2019 at 1:05 AM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Monday, 27 May 2019 at 05:31:29 UTC, Manu wrote:
> > How does the API's threadsafety mechanisms work? How does it
> > scale to my 64-core PC? How does it schedule the work? etc...
>
> Ah yes, if you don't run the GUI on a single thread then you have
> a lot to take into account.

Computers haven't had only one thread for almost 20 years. Even mobile
phones have 8 cores!
This leads me back to my original proposition.



Re: D GUI Framework (responsive grid teaser)

2019-05-26 Thread Manu via Digitalmars-d-announce
On Sun, May 26, 2019 at 10:25 PM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Monday, 27 May 2019 at 05:01:36 UTC, Manu wrote:
> > Performance is a symptom of architecture, and architecture *is*
> > the early stage.
>
> I expected that answer, but the renderer itself can just be a
> placeholder.

Actually, I'm not really interested in rendering much. From the
original posts, the rendering time is the least interesting part because
it's the end of the pipeline; the time I was commenting on at the start
is the non-rendering time, which was substantial.

> So yes, you need to think about where accelerating
> datastructures/processes fit in. That is clear. But you don't
> need to have them implemented.

How does the API's threadsafety mechanisms work? How does it scale to
my 64-core PC? How does it schedule the work? etc...



Re: D GUI Framework (responsive grid teaser)

2019-05-26 Thread Manu via Digitalmars-d-announce
On Sun, May 26, 2019 at 8:50 PM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Monday, 27 May 2019 at 03:35:48 UTC, Nick Sabalausky
> (Abscissa) wrote:
> > suggestion that Robert could get this going an order of
> > magnitude faster without too terribly much trouble. Luckily,
> > Ethan explained my stance better than I was able to.
>
> I think you guys overestimate the importance of performance at
> this early stage.

Performance is a symptom of architecture, and architecture *is* the early stage.

> The hardest problem is to create a good usability experience and
> also provide an easy to use API for the programmer.

They're somewhat parallel problems, although the architecture will
inform the API design substantially.
If you don't understand your architecture up front, then you'll likely
just write a typical ordinary thing, and then it doesn't matter what
the API looks like; someone will always feel compelled to re-write a
mediocre library.
I think it's possible to check both boxes, but it begins with
architectural concerns. That doesn't work as an afterthought... (or
you get Unity, or [insert library that you're not satisfied with])



Re: D GUI Framework (responsive grid teaser)

2019-05-26 Thread Manu via Digitalmars-d-announce
On Sun, May 26, 2019 at 6:35 PM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Monday, 27 May 2019 at 00:33:45 UTC, Nick Sabalausky
> (Abscissa) wrote:
> > flat-out wrong) to say about game programming. People hear the
> > word "game", associate it with "insignificant" and promptly
> > shut their brains off.
>
> Not insignificant, but also not necessarily relevant for the
> project in this thread.
>
> There is nothing wrong with Robert's approach from a software
> engineering and informatics perspective.
>
> Why do you guys insist on him doing it your way?

I don't insist, I was just inviting him to the chat channel where a
similar effort is already ongoing, and where there are perf experts
who can help.

> Anyway, if you were to pick up a starting point for a generic GUI
> engine then you would be better off with Skia than with Unity,
> that is pretty certain. And it is not an argument that is
> difficult to make.

Unity is perhaps the worst possible comparison point. That's not an
example of "designing computer software like a game engine", it's more
an example of "designing a game engine like a GUI application", which
is completely backwards. Optimising Unity games is difficult and
tiresome, and doesn't really have much relation to high-end games.
There are virtually no high-end games written in Unity; it's made for
small hobby or indie stuff. They favour accessibility over efficiency
at virtually all costs.
They do have the new HPC# ECS framework bolted on the side though,
that's the start of something sensible in Unity.



Re: D GUI Framework (responsive grid teaser)

2019-05-26 Thread Manu via Digitalmars-d-announce
On Sun, May 26, 2019 at 4:10 AM NaN via Digitalmars-d-announce
 wrote:
>
> On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
> > On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
> >>
> >> Browsers are actually doing quite well with simple 2D graphics
> >> today.
> >
> > Browsers have been rendering that on GPU for years.
>
> Just because (for example) Chrome supports GPU rendering doesn't
> mean every device it runs on does too. For example...
>
> Open an SVG in your browser, take a screenshot and zoom in on an
> almost vertical / horizontal edge, EG..
>
> https://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg
>
> If you look for an almost vertical or almost horizontal line and
> check whether the antialiasing is stepped or smooth. GPU
> typically maxes out at 16x for path rendering, CPU you generally
> get 256x analytical.

What? ... this thread is bizarre.

Why would a high quality SVG renderer decide to limit to 16x AA? Are
you suggesting that they use hardware super-sampling to render the
SVG?
Why would you use SSAA to render an SVG that way?
I can't speak for their implementation, which you can only speculate
about unless you've read the source code... but I would, for each
pixel, calculate the distance from the line and use that as the
falloff value relative to the line weighting property.
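
Roughly what I mean, as a toy sketch (nothing to do with any browser's
actual code):

import std.math : sqrt, fabs;
import std.algorithm : clamp;

// Coverage of a pixel centre against a stroked line through (x0,y0)-(x1,y1):
// perpendicular distance from the line, turned into a 0..1 falloff relative
// to the stroke's half-width.
double coverage(double px, double py,
                double x0, double y0, double x1, double y1,
                double halfWidth)
{
    double dx = x1 - x0, dy = y1 - y0;
    double dist = fabs((px - x0) * dy - (py - y0) * dx) / sqrt(dx * dx + dy * dy);
    return clamp(halfWidth + 0.5 - dist, 0.0, 1.0);  // 1 inside, 0 outside, linear edge
}

No super-sampling involved; the quality is limited by the distance
computation, not by a fixed sample count.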

How is the web browser's SVG renderer even relevant? I have absolutely
no idea how this 'example' (or almost anything in this thread) could
be tied to the point I made way back at the start before it went way
off the rails. Just stop, it's killing me.



Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread Manu via Digitalmars-d-announce
On Wed, May 22, 2019 at 5:34 PM H. S. Teoh via Digitalmars-d-announce
 wrote:
>
> On Wed, May 22, 2019 at 05:11:06PM -0700, Manu via Digitalmars-d-announce 
> wrote:
> > On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce
> >  wrote:
> > >
> > > On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce 
> > > wrote:
> [...]
> > > > I couldn't possibly agree less; I think cool kids would design
> > > > literally all computer software like a game engine, if they
> > > > generally cared about fluid experience, perf, and battery life.
> > > [...]
> > >
> > > Wait, wha...?!  Write game-engine-like code if you care about
> > > *battery life*??  I mean... fluid experience, sure, perf, OK, but
> > > *battery life*?!  Unless I've been living in the wrong universe all
> > > this time, that's gotta be the most incredible statement ever.  I've
> > > yet to see a fluid, high-perf game engine *not* drain my battery
> > > like there's no tomorrow, and now you're telling me that I have to
> > > write code like a game engine in order to extend battery life?
> >
> > Yes. Efficiency == battery life. Game engines tend to be the most
> > efficient software written these days.
> >
> > You don't have to run applications at an unbounded rate. I mean, games
> > will run as fast as possible maximising device resources, but assuming
> > it's not a game, then you only execute as much as required rather than
> > trying to produce frames at the highest rate possible. Realtime
> > software is responding to constantly changing simulation, but non-game
> > software tends to only respond to input-driven entropy; if entropy
> > rate is low, then exec-to-sleeping ratio heavily biases towards
> > sleeping.
> >
> > If you have a transformation to make, and you can do it in 1ms, or
> > 100us, then you burn 10 times less energy doing it in 100us.
> [...]
>
> But isn't that just writing good code in general?

Yes, but I can't point at many industries that systemically do that.

>  'cos when I think of
> game engines, I think of framerate maximization, which equals maximum
> battery drain because you're trying to do as much as possible in any
> given time interval.

And how do you do "as much as possible"? I mean, if you write some
code, and then push data through the pipe until resources are at
100%... where do you go from there?
... make the pipeline more efficient.
Hardware isn't delivering much improvement these days; we have had to
get MUCH better at efficiency in the last few years to maintain
competitive advantage.
I don't know any other industry so laser focused on raising the bar on
that front in a hyper-competitive way. We don't write code like we
used to... we're all doing radically different shit these days.


> Moreover, I've noticed a recent trend of software trying to emulate
> game-engine-like behaviour, e.g., smooth scrolling, animations, etc..
> In the old days, GUI apps primarily only respond to input events and
> that was it -- click once, the code triggers once, does its job, and
> goes back to sleep.  These days, though, apps seem to be bent on
> animating *everything* and smoothing *everything*, so one click
> translates to umpteen 60fps animation frames / smooth-scrolling frames
> instead of only triggering once.

That's a different discussion. I don't actually endorse this. I'm a
fan of instantaneous response from my productivity software...
'Instantaneous' being key, and running without delay means NOT waiting
many cycles of the event pump to flow typical modern event-driven code
through some complex latent machine to finally produce an output.

> All of which *increases* battery drain rather than decrease it.

I'm with you. Don't unnecessarily animate!

> And this isn't just for mobile apps; even the pervasive desktop browser
> nowadays seems bent on eating up as much CPU, memory, and disk as
> physically possible -- everybody and their neighbour's dog wants ≥60fps
> hourglass / spinner animations and smooth scrolling, eating up GBs of
> memory, soaking up 99% CPU, and cluttering the disk with caches of
> useless paraphernalia like spinner animations.

You're conflating a lot of things here... running smooth and eating
GBs of memory are actually at odds with each other. If you try and do
both things, then you're almost certainly firmly engaged in gratuitous
systemic inefficiency.
I'm entirely against that, that's my whole point!

You should use as little memory as possible. I have no idea how a
webpage eats as much memory as it does... that's a perfect example of
the sort of terrible software engineering I'm against!

> Such is the result of trying to em

Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread Manu via Digitalmars-d-announce
On Wed, May 22, 2019 at 3:40 PM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Wednesday, 22 May 2019 at 21:18:58 UTC, Manu wrote:
> > I couldn't possibly agree less; I think cool kids would design
> > literally all computer software like a game engine, if they
> > generally
> > cared about fluid experience, perf, and battery life.
>
> A game engine is designed for full redraw on every frame.

I mean, you don't need to *draw* anything... it's really just a style
of software design that lends itself to efficiency.
Our servers don't draw anything!

> He said he wanted to draw pixel by pixel and only update pixels
> that change. I guess this would be useful on a slow I2C serial
> bus. It is also useful for X-Windows. Or any other scenario where
> you transmit graphics over a wire.
>
> Games aren't really relevant in those two scenarios, but I don't
> know what the framework is aiming for either.

Minimising wasted calculation is always relevant. If you don't change
part of an image, then you'd better have the tech to skip rendering it
(or skip transmitting it in this scenario), otherwise you're wasting
resources like a boss ;)

> > There's a reason games can simulate a rich world full of
> > dynamic data and produce hundreds of frames a second, is
>
> Yes, it is because they cut corners and make good use of special
> cases... The cool kids in the demo-scene even more so. That does
> not make them good examples to follow for people who care about
> accuracy and correctness. But I don't know the goal for this GUI
> framework is.

I don't think you know what you're talking about.
I don't think we 'cut corners' (I'm not sure what that even means)...
we have data to process, and aim to maximise efficiency, that is all.
Architecture is carefully designed towards that goal; it changes your
patterns. You won't tend to have OO hierarchies and sparsely allocated
graphs, and you will naturally tend to arrange data in tables destined
for batch processing. These are key to software efficiency in general.
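
A tiny sketch of the difference in shape (illustrative only):

// "Table" style: one contiguous array of plain data, processed in a batch.
struct Particle { float x, y, vx, vy; }

void integrate(Particle[] table, float dt)
{
    foreach (ref p; table)  // tight, cache-friendly, trivially parallelisable
    {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}

// Contrast: a sparsely allocated graph of objects, each updating itself via
// a virtual call, touches scattered memory and defeats the cache/prefetcher.
class SceneNode
{
    SceneNode[] children;
    void update(float dt) { foreach (c; children) c.update(dt); }
}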

> So could you make good use of a GPU, even in the early stages in
> this case? Yes. If you keep it as a separate stage so that you
> have no dependencies to the object hierarchy.

'Object hierarchy' is precisely where it tends to go wrong. There are
a million ways to approach this problem space; some are naturally much
more efficient, some rather follow design pattern books and propagate
ideas taught in university to kids.

> I would personally
> have done it in two passes for a prototype. Basically translating
> the object hierarchy into geometric data every frame then use a
> GPU to take that and push it to the screen. Not very efficient,
> perhaps, but good enough to get 60FPS with max flexibility.

Sure, maybe that's a reasonable design. Maybe you can go a step
further and transform your arrangement away from a 'hierarchy'? Data structures
are everything.

> Is that related to games, yes sure, or any other realt-time
> simulation software. So not really game specific.

Right. I only advocate good software engineering!
But when I look around, the only field I can see that's doing a really
good job at scale is gamedev. Some libs here and there enclose some
tight worker code, but nothing much at the systemic level.



Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread Manu via Digitalmars-d-announce
On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce
 wrote:
>
> On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce 
> wrote:
> > On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
> > Digitalmars-d-announce  wrote:
> [...]
> > > But you shouldn't design a UI framework like a game engine.
> > >
> > > Especially not if you also want to run on embedded devices
> > > addressing pixels over I2C.
> >
> > I couldn't possibly agree less; I think cool kids would design
> > literally all computer software like a game engine, if they generally
> > cared about fluid experience, perf, and battery life.
> [...]
>
> Wait, wha...?!  Write game-engine-like code if you care about *battery
> life*??  I mean... fluid experience, sure, perf, OK, but *battery
> life*?!  Unless I've been living in the wrong universe all this time,
> that's gotta be the most incredible statement ever.  I've yet to see a
> fluid, high-perf game engine *not* drain my battery like there's no
> tomorrow, and now you're telling me that I have to write code like a
> game engine in order to extend battery life?

Yes. Efficiency == battery life. Game engines tend to be the most
efficient software written these days.
You don't have to run applications at an unbounded rate. I mean, games
will run as fast as possible maximising device resources, but assuming
it's not a game, then you only execute as much as required rather than
trying to produce frames at the highest rate possible. Realtime
software is responding to a constantly changing simulation, but
non-game software tends to only respond to input-driven entropy; if
the entropy rate is low, then the exec-to-sleeping ratio biases
heavily towards sleeping.
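
Roughly, the shape of a non-game loop looks like this (a toy sketch; waitForEvent and render are just stand-ins):

```d
// Hypothetical stand-ins: waitForEvent() blocks in the OS until there is
// input-driven entropy to consume; render() redraws only what changed.
bool waitForEvent() { return true; }
void render() {}

void uiLoop()
{
    for (;;)
    {
        if (!waitForEvent())   // while blocked, the process is asleep: ~zero energy
            break;
        render();              // execute only as much as the entropy demands
    }
}
```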

If you have a transformation to make, and you can do it in 1ms, or
100us, then you burn 10 times less energy doing it in 100us.

> I think I need to sit down.

If you say so :)



Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread Manu via Digitalmars-d-announce
On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
Digitalmars-d-announce  wrote:
>
> On Wednesday, 22 May 2019 at 17:01:39 UTC, Manu wrote:
> > You can make a UI run realtime ;)
> > I mean, there are video games that render a complete screen
> > full of
> > zillions of high-detail things every frame!
>
> But you shouldn't design a UI framework like a game engine.
>
> Especially not if you also want to run on embedded devices
> addressing pixels over I2C.

I couldn't possibly agree less; I think cool kids would design
literally all computer software like a game engine, if they generally
cared about fluid experience, perf, and battery life.
This extends to server software in data-centers, even more so in that
case. People really should look at games for how to write good
software in general.

There's a reason games can simulate a rich world full of dynamic data
and produce hundreds of frames a second: the industry has spent
decades getting really good at software design and patterns that
treat computers like computers with respect to perf.



Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread Manu via Digitalmars-d-announce
On Tue, May 21, 2019 at 12:55 PM Robert M. Münch via
Digitalmars-d-announce  wrote:
>
> On 2019-05-21 16:51:43 +, Manu said:
>
> >> The screencast shows a responsive 40x40 grid. Layouting the grid takes
> >> about 230ms, drawing it about 10ms.
> >
> > O_o ... I feel like 230 *microseconds* feels about the right time, and
> > ~100 microseconds for rendering.
>
> I don't think that's fast enough :-)

It probably is :P

> >> So this gives us 36 FPS which is IMO pretty good for a desktop app target
> >
> > Umm, no. I would expect 240fps is the modern MINIMUM for a desktop
> > app, you can easily make it that fast.
>
> ;-) Well, they key is to layout & render only changes. A responsive
> grid is an evil test-case as this requires a full cylce on every frame.

The worst case defines your application performance, and grids are
pretty normal.
You can make a UI run realtime ;)
I mean, there are video games that render a complete screen full of
zillions of high-detail things every frame!



Re: D GUI Framework (responsive grid teaser)

2019-05-21 Thread Manu via Digitalmars-d-announce
On Sun, May 19, 2019 at 2:05 PM Robert M. Münch via
Digitalmars-d-announce  wrote:
>
> Hi, we are currently build up our new technology stack and for this
> create a 2D GUI framework.
>
> https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0
>
>
> The screencast shows a responsive 40x40 grid. Layouting the grid takes
> about 230ms, drawing it about 10ms.

O_o ... I feel like 230 *microseconds* feels about the right time, and
~100 microseconds for rendering.

> So this gives us 36 FPS which is IMO pretty good for a desktop app target

Umm, no. I would expect 240fps is the modern MINIMUM for a desktop
app, you can easily make it that fast.

Incidentally, we have a multimedia library workgroup happening to
build out flexible and as-un-opinionated-as-we-can gfx and gui
libraries which may serve a wider number of users than most existing
libraries; perhaps you should join that effort, and leverage the perf
experts we have?

There's a channel #graphics on the dlang discord.



Re: Visual D 0.49.0 released

2019-04-21 Thread Manu via Digitalmars-d-announce
On Sun, Apr 21, 2019 at 1:40 AM Rainer Schuetze via
Digitalmars-d-announce  wrote:
>
>
>
> On 09/04/2019 22:34, Crayo List wrote:
> > On Sunday, 7 April 2019 at 19:41:43 UTC, Rainer Schuetze wrote:
> >> Hello,
> >>
> >> the new release of Visual D has just been uploaded. Some major
> >> improvements of 0.49.0:
> >>
> >> * support for Visual Studio 2019
> >> * parallel compilation supported by VC projects
> >> * catch up with recent language changes
> >> * new "Language" configuration page for -transition=/-preview=/-revert=
> >> options
> >>
> >> See http://rainers.github.io/visuald/visuald/VersionHistory.html for
> >> the full list of changes.
> >>
> >> Visual D is a Visual Studio extension that adds D language support to
> >> VS2008-2019. It is written in D, its source code can be found on
> >> github: https://github.com/D-Programming-Language/visuald, pull
> >> requests welcome.
> >>
> >> The installer can be found at
> >> http://rainers.github.io/visuald/visuald/StartPage.html
> >>
> >> Rainer
> >
> > Is there a way to donate to this project?
>
> Thanks for considering a donation, but there is nothing setup to do so.
>
> > Or maybe buy you a beer or a six-pack?
>
> Maybe at DConf, though I'm not yet sure I can make it.

They recently convinced me to make the long flight, I really hope you
can join us too!

I'd love to sit down with you and study some productivity process
stuff together.
There's a lot of little stuff that I never find is worth bugging or
complaining about when I'm working, but we should go through some edit
and debug sessions on different projects and take a careful look at
where we're at together.

Your work is perhaps the most important work for commercial adoption
in my industry, and as I introduce D to my colleagues, VisualD is
their first impression of the ecosystem. I've noticed such a
difference in response recently where VisualD is starting to feel
quite robust and useful in how the entire value proposition of D is
received by them.


Re: LDC 1.15.0-beta1

2019-03-09 Thread Manu via Digitalmars-d-announce
On Sat, Mar 9, 2019 at 12:00 PM kinke via Digitalmars-d-announce
 wrote:
>
> Glad to announce the first beta for LDC 1.15:
>
> * Based on D 2.085.0.
> * Support for LLVM 8.0. The prebuilt packages ship with LLVM
> 8.0.0-rc4 and include the Khronos SPIRV-LLVM-Translator, so that
> dcompute can now emit OpenCL too.
> * New -lowmem switch to enable the GC for the front-end, trading
> compile times for less required memory (in some cases, by more
> than 60%).
> * Dropped support for 32-bit macOS. Min macOS version for
> prebuilt package raised to 10.9.
> * Fix: functions annotated with `pragma(inline, true)` are
> implicitly cross-module-inlined again.
>
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.15.0-beta1
>
> Please help test, and thanks to all contributors!

Can you explain what this means:

* Fix: functions annotated with `pragma(inline, true)` are
implicitly cross-module-inlined again.

??


Re: Release D 2.085.0

2019-03-03 Thread Manu via Digitalmars-d-announce
On Sat, Mar 2, 2019 at 10:25 AM Martin Nowak via
Digitalmars-d-announce  wrote:
>
> Glad to announce D 2.085.0, ♥ to the 49 contributors.
>
> This release comes with context-aware assertion messages, lower GC
> memory usage, a precise GC, support to link custom GCs, lots of
> Objective-C improvements¹, and toolchainRequirements for dub.
> This release also ended official support for OSX-32.
>
> http://dlang.org/download.html
> http://dlang.org/changelog/2.085.0.html
>
> ¹: There is a pending Objective-C fix
> (https://github.com/dlang/dmd/pull/9402) that slipped 2.085.0 but will
> be released with 2.085.1 soon (~1.5 weeks).
>
> -Martin

The windows installer is not signed?
Did something happen? O_o



Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-25 Thread Manu via Digitalmars-d-announce
On Mon, Feb 25, 2019 at 9:30 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 2/25/2019 7:17 PM, Manu wrote:
> > break my DIP
>
> The review process is not about "why not add this feature" , but "why should 
> we
> have this feature".
>
> Additionally, it is most assuredly about finding flaws in it. Isn't it best to
> find out the flaws before going further with it than finding them in the 
> field?
>
> As I mentioned before, it's supposed to be brutal. Any
> testing/certification/review process is about trying to break it.
>
> It has (hopefully) nothing to do with how hard (or little) you worked on it, 
> nor
> the cut of your jib, nor acceptance (or not) of mediocrity/merit in other 
> DIPs.

I'm talking about this DIP. Allowing a mutable copy argument feels super weird.
The reasons are clear, but that doesn't make it feel less weird.
I feel like the problem is with const, not with this DIP, but I'm not
about to convince anybody, so we're all good here.


Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-25 Thread Manu via Digitalmars-d-announce
On Mon, Feb 25, 2019 at 3:10 PM Olivier FAURE via
Digitalmars-d-announce  wrote:
>
> On Monday, 25 February 2019 at 16:00:54 UTC, Andrei Alexandrescu
> wrote:
> > Thorough feedback has been given, likely more so than for any
> > other submission. A summary for the recommended steps to take
> > can be found here:
> >
> > https://forum.dlang.org/post/q2u429$1cmg$1...@digitalmars.com
> >
> > It is not desirable to demand reviewers to do more work on the
> > review or to defend it. Acceptance by bullying is unlikely to
> > create good results. The target of work is squarely the
> > proposal itself.
>
> Agreed.
>
> Honestly, I am not impressed with the behavior of several members
> here.
>
> I understand that the rvalue DIP went through a long process,
> that some people really wanted it to be accepted, and that it was
> frustrating to wait so long only for it to be refused, but at
> some point, you guys have to accept that the people in charge
> refused it.

No, you've missed the point **completely**.
I'm not even remotely surprised it was rejected; I never imagined that
I'd change people's minds on this after trying to do so for 10 years
running.

> They explained why they did, their reasons matched
> concerns other users had, and they explained how to move the
> proposal forward.

This sentence couldn't be more wrong.

I'm going to write this again because you prompted me to, I've said it
elsewhere lots, but apparently you've missed it;
What pissed me off was that the rejection text was almost completely
wrong, it almost felt like they just skimmed it and made up details
according to presumption, and then when I raised the topic on what was
actually wrong looking for actionable feedback, it was made clear that
it was not open to amendment, I *must* write a whole new DIP and
completely reboot the process because all the text was rubbish, and I
should employ someone else competent to do it with me. Then I was
insulted a couple more times; it was implied that the DIP was so bad I
didn't even understand the implications of my own text (I did), and
that it had holes large enough to drive a truck through (it
doesn't)... and only then after a few cycles of referring to the
*actual* text that was written, it was conceded that those criticisms
were indeed incorrect, and then we were able to arrive at some useful
feedback, all of which is of trivial-amendment magnitude; fix the
rewrite to address exceptions, and add some additional text to clarify
a point of misunderstanding that I thought I couldn't have made more
clear if I tried.
Even at the tail end of that though, the result remained the same:
rewrite the DIP, reboot the process, another few hundred days later...
it was expressly rejected that an amendment would be accepted for
consideration, despite agreeing at the end of the thread that that's
all that's required to address the *true* criticisms.

That was a worthless experience, and it didn't help anyone.

> So again, I get that this is frustrating, but repeatedly
> complaining and asking for an appeal and protesting about other
> DIPs being accepted is *not* professional behavior.

I'm not a bloody professional, I'm a volunteer!
I do think it would have been useful to amend the rejection text to be
true at the very least, and match the proposal that is written.
I held that position before the thread had played out to where useful
action points emerged, simply because I wanted to have any idea how to
move forward. At the conclusion of that thread, we have the data, and
I don't care, although still no path to have it reconsidered with
amendments, and I'm not gonna take a few hundred more days to start
over.

The reason I bring it up here is not that I'm salty (I am), but
because I'm literally astonished that it's been agreed it's fine that
a copy constructor can mutate the source... and I can't help but draw
contrast to the exact same sorts of arguments that people were using
to break my DIP, and countless other proposals that I've seen over the
years. My DIP was just one of very very many instances of where this
class of issue (unexpected mutation of caller-owned data) would be
used to destroy something, but we're accepting it here at a very
fundamental level of the language.
I just can't see how it's fine in this case, after being show-stopping
for as long as I've been watching.

And to circle right back to the start; I suspect the only reason that
it's considered acceptable here, is that this is an issue of extremely
high importance, and nobody has any better ideas.
To repeat my comment; the problem as I see it, is that `const` as
defined is extremely problematic, and rather than address that hard
issue, we'll just make a compromise in this case.

Anyway, I actually support this DIP, I'm for practical solutions to
problems... the only point I was trying to make at the start of this
thread is that this sets a precedent, which if we're fair, requires a
re-examination of so many rejected ideas gone by.


Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-25 Thread Manu via Digitalmars-d-announce
On Mon, Feb 25, 2019 at 12:20 PM Andrei Alexandrescu via
Digitalmars-d-announce  wrote:
>
> On 2/25/19 2:41 PM, bachmeier wrote:
> > On Monday, 25 February 2019 at 19:24:55 UTC, Mike Parker wrote:
> >
> >> From the process document:
> >>
> >> “the DIP Manager or the Language Maintainers may allow for exceptions
> >> which waive requirements or responsibilities at their discretion.”
> >>
> >> If you were to write a DIP for a feature they think important enough,
> >> it could be fast tracked, too.
> >
> > I hate to be so negative, but when I see D's corporate management
> > structure, the lack of community contribution is obvious. It doesn't
> > exactly motivate contributions. This is no way to run an open source
> > project. I understand that it works well for Facebook because everyone
> > on the team is paid six figures, and they can be replaced in two hours,
> > but an open source project is not Facebook.
> >
> > I know the whole argument about why it is that way. That doesn't mean
> > it's going to work.
>
> What do you recommend? Should we carry a final review here?

In my case, you could have produced useful and not-completely-wrong
rejection text with the rejection, and then not insulted me a few
times before eventually producing some actionable feedback.
I mean, it's in your interest to foster contribution, not repel it.



Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-24 Thread Manu via Digitalmars-d-announce
On Sun, Feb 24, 2019 at 6:35 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> I agree with your point that C++ const can be used in a lot more places than D
> const. Absolutely true.
>
> Missing from the post, however, is an explanation of what value C++ const
> semantics have. How does it:
>
> 1. make code easier to understand?

const code is self-documenting and protects against modification by
issuing the user helpful error messages.


> 2. prevent common programming bugs?

You can't modify const data, for instance, a copy constructor can't
freely modify the source value...
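
For instance, a tiny sketch of the point (names made up):

```d
struct S { int refs; }

// `const` on the source is a compile-time promise to the caller: the callee
// can read the source but can't quietly modify it.
void copyInto(ref const S src, ref S dst)
{
    dst.refs = src.refs;   // reading through const: fine
    // src.refs = 0;       // error: cannot modify `const` expression src.refs
}
```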


> 3. help with multithreaded coding problems?

This is a different conversation about `immutable` and `shared`.
`const` doesn't say anything about D's decisions relating to
thread-locality by default, which obviously still applies.

Maybe you're trying to argue that a const object which contains an
escape-pointer to mutable data may lead to races? But that's not the
case, because all data is thread-local in D, so there's no races on
the mutable data either way unless it's `shared`... and then we need
to refer back to the thread I created months ago where `shared` is
useless and broken, and we REALLY need to fix that. (that is; `shared`
must have NO READ OR WRITE ACCESS to data members, only shared
methods, otherwise it's completely hollow)
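
The shape I'm arguing for looks roughly like this (a sketch of the intended usage pattern, not a description of how `shared` is enforced today):

```d
import core.atomic : atomicLoad, atomicOp;

struct Counter
{
    private int n;

    // All access is funnelled through `shared` methods, which own the
    // synchronisation internally; under the rule above, touching `n`
    // directly from outside would simply not compile.
    void increment() shared { atomicOp!"+="(n, 1); }
    int  value()     shared { return atomicLoad(n); }
}
```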


> 4. improve code generation?

Not a lot. But this is a red-herring; D's const won't improve code
generation where it's not present in the code.
Contrary to C++, D has a much higher probability of seeing the whole
AST and not encountering opaque extern barriers, which means it would
be relatively easy for D to recognise that the const object contains
no pointers to mutable data (assessed recursively), and then enable
any such optimisations that const offers to D today.


> I know technically what it does (after all, I implemented it), but its value
> escapes me.

I mean, you speak as if `const` is a synonym for `mutable` in C++...
const things are const. It is however possible that they contain a
pointer that leads out of the const data back into the mutable world,
and that's *desirable* in a whole lot of circumstances. Take that
away, and we arrive where we are in D.

It's also easy to NOT have pointers to mutable data escaping const
objects; make them const too!
If you want to implement a semantic where the const-ness of a member
tracks the const-ness of the owner, maybe we can apply `inout` to
behave that way.
Assuming we apply rules similar to C++, it looks like:

  const(S) const_s; // const instance of S
  struct S
  {
      int* a;        // becomes `int const(*)`
      const(int)* b; // const(int*)
      inout(int)* c; // becomes const(int*) (or immutable(int*), etc)
  }

Alternatively, if const were spec'd similar to C++ const, it would be
very easy to implement TransitiveConst!T as a tool. By any of these
means, we could deploy it deliberately instead of unwillingly.


Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-24 Thread Manu via Digitalmars-d-announce
On Sun, Feb 24, 2019 at 4:25 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> Thanks for letting me know you're abandoning the rvalue ref DIP.

It's not an "rvalue ref" DIP (which I think has confused a lot of
people), it's an rvalue *by-ref* DIP.
In my head, an "rvalue ref" DIP is something completely different,
useful for implementing move semantics of mismatching types.

Are you talking about my DIP or that other thing?

> I had held off
> working on it because I didn't want to duplicate efforts; we're short-staffed
> enough as it is.

'abandoning's a strong word, but I don't have motivation to work on it
right now. Please, be my guest!


Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-24 Thread Manu via Digitalmars-d-announce
On Sun, Feb 24, 2019 at 4:40 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> The problem with C++ const is it only goes one level, i.e. what I call
> "head-const". If you pass a T to a const parameter, anything T references
> remains mutable. It's more of a suggestion than anything reliable or
> enforceable. It only works if your data structures are simple aggregates, not
> graphs.
>
> D's const has teeth. Nothing can be modified through T. If you're used to
> writing code that tweaks const data under the hood, D's const will never work
> for you. Yes, it means rethinking how the data and code is organized, and that
> can be painful. But it is how FP works. FP offers a number of advantages, and
> D's const offers a path into that.
>
> For example, most of DMD is written in the C++ style where functions 
> frequently
> are written to both return some information *and* tweak the data structure. 
> This
> does not work with const. It needs to be reorganized so that getting 
> information
> about a data structure is separated from modifying the data structure. I've 
> made
> such changes in a few places in DMD, and have been very pleased with the 
> results
> - the code is much easier to understand.
>
> To sum up, you're quite right that you cannot write C++ style code using D
> const. It hast to be in a much more FP style. If you're not accustomed with FP
> style, this can be a difficult learning process. I know this from firsthand
> experience :-)

I agree with these facts, but your case-study is narrow, and you have
to stop saying "C++ style", which it really isn't.
It's very much D-style... almost all D code is written this way.
It's in conflict with too many other patterns, and they're not "C++
patterns", they're very legitimate D patterns.

Function pointers and delegates are often incompatible with const;
practically any code with some sort of call-back behaviour, and
anything that forms *any form* of traversible network where you'd like
any part of it to const fails. I've never written a program that was a
perfect tree. A small feature library maybe, but not a program that
does anything interesting.

It's great that we can write FP-ish code in D, it's particularly
useful for narrow, self-contained tasks; it helps me intellectually
factor some potentially complex leaf-level call-trees out of the
program structure, and I appreciate when libraries take that form; it
helps them have a smaller footprint in the larger complex suite. But
const doesn't play into that much, and if that can't interact with
normal D code, which is most code, then it's just not a useful piece
of language.

The proposition that everyone start writing straight-up FP code in D
is unrealistic, and if they wanted that, they'd use Rust every time.
People are here because they don't want to write Rust.

> For me the only real annoyance with const is I often cannot use "single
> assignment" style declarations with pointers:
>
> I.e.:
>
>  const int* p = 
>  p =  // error, good
>  *p = 4; // also error, not what I wished
>
> This C++ const does provide, and it's good, but it's not really worth that 
> much.

Are you serious? You can't honestly say C++ const is worthless?
Especially in comparison to D's const, which is _actually_ almost
completely worthless.
It really doesn't make anything better, and there's a whole class of
troublesome language issues that emerge from it being defined this
way.
The way C++ defines const is such that const can be used, and you can
integrate that code with other code.

I mean it seriously where I say I've tried to defend D's const for as
long as I've used D, but I can't escape the plain and honest reality
that D's const is not useful for almost anything practical.
Even the way you describe it above is like indulging in a little bit
of fetish, and I understand that, I try that every time thinking "I'm
gonna get it right... this time for sure! What a cool guy I am!", but
that never works out beyond a very small scope. const with a narrow
scope is where it's least impactful.

Then to make matters worse, `const` is a combinatorial testing
nightmare; you write your code mostly without const (because
conventional wisdom), and then you try and call into your lib from
various contexts and it just doesn't work. You need to set-up heaps of
tests to try and prove out that your code is const-robust that are
very easy to miss otherwise.
Then someone else tries to use your code with their code which is
using const (attempting at least); I've seen lots of libraries where
it would have been possible to support const, at least to some extent,
but they just didn't because "don't use const", but the result is that
the client of that library can't use const in their own code because
the lib undermines their effort in some way.

I don't like this concept that a piece of library code 'supports'
const, but that's where we are.

None of this is an issue with C++ const, because it's defined in a way
that's useful, 

Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-24 Thread Manu via Digitalmars-d-announce
On Sun, Feb 24, 2019 at 1:25 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 2/24/2019 1:02 PM, Manu wrote:
> > I mean like, my DIP was almost violently rejected,
>
> I thought it was clear what was needed to be done with it,

To be fair, initial criticism was 75% just plain wrong (like the text
wasn't even read properly, with no request for clarifications), and
100% unproductive.
True actionable criticisms became clear only after quite a laborious
and somewhat insulting series of exchanges.

> and I thought you were going to rewrite it. Was I mistaken?

It's not on my short list. I don't really even wanna look at it at
this point, my motivation couldn't be more depleted. There's no part
of me that has any desire to re-engage that process right now.
I'd encourage anybody else to take it and run though. It's still my #1
frustration... it's not getting less annoying!

Incidentally, the key problems that upset people about my proposal,
and probably the reason it wasn't that way from the very start are all
predicated on this same `const` issue.

> > but in here there's text like this:
> >
> > "The parameter of the copy constructor is passed by a mutable
> > reference to the source object. This means that a call to the copy
> > constructor may legally modify the source object:"
> >
> > I can't see how that could be seen in any way other than what might
> > reasonably be described as "a hole large enough to drive a truck
> > through"...
>
> What's the hole? BTW, the D copy-ctor semantics are nearly identical to that 
> of C++.

Mutable copy-from argument is one of the same arguments people made
against my DIP, except about 100x worse being a live object owned by
someone else that may be undesirably mutated, rather than an expiring
rvalue that nobody will ever see again.

I'm mostly just amazed that the same bunch of minds that historically
take such strong issue with this sort of thing can find that it's okay
in this case...
I can't imagine a more concerning case of this class of problem being
manifest, but in this case, we've judged that it's fine?

If this is acceptable now, then I think it's in order that we comb
back over decades of other rejected opportunities and revisit them
with this precedent.

> > But anyway, that's pretty wild. I think there's a clear pattern we've
> > been seeing here with practically every lifetime management DIP, and
> > also in general for forever, is that D's `const` just fundamentally
> > doesn't work.
>
> I don't see what const has to do with lifetime management. For example, it is
> irrelevant to dip25 and dip1000.

I say lifetime *management*; I feel copying/moving and friends are an
associated part of lifetime management beyond just tracking ownership.
Construction/destruction are features of lifetime management in my
brain.
We've had const problems with copying and constructors forever,
including this DIP, and the problems that this DIP exists for to
address.

> > Couple this with the prevailing wisdom which is to
> > recommend that people "don't use const, because you can't write
> > programs and use const"
>
> That is true for writing C++ style code. D const is much more in line with FP
> programming style.

It's true for writing D style code; most D-style code is not FP
code... at best, a few call-trees at the leaves of the application.
The overwhelming recommendation I see posted very frequently in the
forum is "don't use const", and the nature of all the articles I've
read on the topic as the years progress are moving towards a more
clearly stated and unashamed position of "don't use const".

I understand the narrow use case where it can be applicable to FP
style programming, but it comes up quite infrequently as an
opportunity, and attempts are often met with a rude awakening: once
you work far enough into your project, the fantasy of your flawless
design starts to break down while the true details of the program
structure begin to emerge.

Almost every attempt I've made to try and use D's const effectively
has failed at some point down the path as I reach some level of
complexity where the program structure has relationships that start to
look like a graph. It just naturally occurs that data in a const
structure may point back to the outer non-const world again, and
that's totally *fine* structurally and intellectually, it's just that
D can't express it.
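
A contrived sketch of the shape I mean (names made up):

```d
struct App { int frame; }     // caller-owned, mutable world state

struct Widget
{
    App* owner;   // back-pointer out of this structure into the mutable world
    int  value;
}

void inspect(ref const Widget w)
{
    int v = w.value;      // reading through const: fine
    // w.owner.frame++;   // error: transitivity makes the back-pointer const
                          // too, even though mutating App is structurally fine
}
```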

You basically have 2 options when this inevitably emerges; you sweep
your code removing const from a lot of things (which sadly highlights
a whole lot of wasted energy in doing so, and in your foolishly trying
in the first place), or you make some HeadConst!(T) thing which casts
const away, whereby you deploy UB and a quiet prayer that the compiler
doesn't do anything bad.
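
The second option looks roughly like this (a sketch; the cast is the UB-and-a-prayer part):

```d
// Hypothetical helper in the spirit described above: hold the pointer, and
// hand back a mutable view by casting const away -- undefined behaviour if
// the underlying data really is const/immutable, hence the quiet prayer.
struct HeadConst(T)
{
    private T* ptr;

    this(T* p) { ptr = p; }

    @property T* get() const { return cast(T*) ptr; }
}

unittest
{
    int x;
    const h = HeadConst!int(&x);
    *h.get = 42;           // mutating through a const-qualified wrapper
    assert(x == 42);
}
```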

I've tried to defend D's cons

Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-24 Thread Manu via Digitalmars-d-announce
On Sun, Feb 24, 2019 at 2:50 AM Mike Parker via Digitalmars-d-announce
 wrote:
>
> Walter and Andrei have requested the Final Review round be
> dropped for DIP 1018, "The Copy Constructor", and have given it
> their formal approval. They consider copy constructors a critical
> feature for the language.
>
> Walter provided feedback on Razvan's implementation. When it
> reached a state with which he was satisfied, he gave the green
> light for acceptance.
>
> The DIP:
> https://github.com/dlang/DIPs/blob/master/DIPs/accepted/DIP1018.md
>
>
> The implementation:
> https://github.com/dlang/dmd/pull/8688

I mean like, my DIP was almost violently rejected, but in here there's
text like this:

"The parameter of the copy constructor is passed by a mutable
reference to the source object. This means that a call to the copy
constructor may legally modify the source object:"

I can't see how that could be seen in any way other than what might
reasonably be described as "a hole large enough to drive a truck
through"...

But anyway, that's pretty wild. I think there's a clear pattern we've
been seeing here with practically every lifetime management DIP, and
also in general for forever, is that D's `const` just fundamentally
doesn't work. Couple this with the prevailing wisdom which is to
recommend that people "don't use const, because you can't write
programs and use const"

I think we need to throw in the towel, C++'s const is right, and D's
const is just wrong, and no amount of pretending that's not true will
resolve the endless stream of issues.
Where's the DIP to migrate to C++-style const? That is the predicate
for basically every important development I've seen lately...
including this one.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Manu via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 7:35 PM Steven Schveighoffer via
Digitalmars-d-announce  wrote:
>
> On 1/30/19 10:05 PM, Manu wrote:
> > On Wed, Jan 30, 2019 at 6:40 PM Nicholas Wilson via
> > Digitalmars-d-announce  wrote:
> >> You should clarify that ;)
> >
> > Yes, as said above, read `short(10)`. I can understand the confusion
> > that it may look like a variable when taken out of context; but listed
> > beneath the heading immediately above which says:
> > "This inconvenience extends broadly to every manner of **rvalue**
> > passed to functions"
> > It didn't occur to me the reader might interpret the clearly stated
> > list of cases of rvalues passed to functions to include arguments that
> > are not rvalues.
> > The name was just chosen to indicate the argument is a short, perhaps
> > an enum, or any expression that is a short... I could have used
> > `short(10)`, but apparently I didn't think of it at the time.
> >
> > Is this the basis for the claims of "a hole you could drive a truck
> > through"? Again, a request for clarification, and a
> > couldn't-possibly-be-more-trivial revision may resolve this.
> >
>
> I think changing it to `short(10)` helps the argument that you didn't
> intend it to mean conversions from lvalues, but I'd recommend still
> spelling out that they are forbidden.

I mean, the heading of the DIP is "ref T accepts r-values", the whole
abstract talks about nothing but rvalues, the header of the confusing
block couldn't say 'rvalues' more clearly... I didn't consider that it
was possible to confuse this as anything other than an rvalue DIP...
but yes, I can certainly spell it out.

> Leaving the reader to infer intent is not as good as clarifying intent
> directly. The whole rvalue vs. lvalue thing is confusing to me, because
> I assumed an lvalue converted to a different type changes it to an
> rvalue. I think of it like an implicit function that returns that new value.

Obviously all of this is true, but I didn't think of it that way;
didn't realise there was a point of confusion, and nobody during the
community reviews appeared to raise confusion either.
I'll obviously revise it, except that it's rejected and moved to the
rejected folder.

For reference, the key point that justifies its mention in the first
place is a little further down:
"It is important that T be defined as the parameter type, and not auto
(ie, the argument type), because it will allow implicit conversions to
occur naturally, with identical behavior as when the parameter is not
ref."
It was important to consider mis-matching types (implicit
conversions), because there is detail in the rules that allows them to
work properly and make the call uniform with the same function if it
passed by-val.
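
In other words, roughly (a sketch of the lowering; the rvalue call itself is only legal under the DIP, so it's left in a comment):

```d
void fun(ref int x) {}

void test()
{
    // Under the DIP:  fun(short(10));
    // lowers roughly to:
    {
        int __tmp = short(10); // the temporary takes the *parameter* type (int),
                               // so short -> int promotion happens exactly as it
                               // would for a by-value fun(int)
        fun(__tmp);
    }
}
```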


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Manu via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 7:05 PM Nicholas Wilson via
Digitalmars-d-announce  wrote:
>
> On Thursday, 31 January 2019 at 02:10:05 UTC, Manu wrote:
> > On Wed, Jan 30, 2019 at 1:05 PM Andrei Alexandrescu via
> >> fun(my_short); // implicit type conversions (ie, short->int
> >> promotion)
> >> 
> >
> > Oh I see.
> >
> >> fun(short(10)); // implicit type conversions (ie, short->int
> >> promotion)
> >
> > I did not intend for this DIP to apply to anything other than
> > rvalues.
> > I can totally see how that's not clear. `my_short` should be an
> > rvalue
> > of some form, like the rest.
> > Is that the only such line?
>
> I think so.
>
> >> Presumably my_short is a variable of type short. Is that
> >> correct?
> >
> > It is not. It should be an rvalue like everything else. Perhaps
> > it's an enum... but I should write `short(10)`, that would be
> > clear.
>
> It would.
>
> >> * DIP 1016 proposes a hole in the language one could drive a
> >> truck through.
> >
> > I still can't see a truck-sized hole.
> >
> >> * The problem goes undetected in community review.
> >
> > I don't know how I could have influenced this outcome.
> >
> >> * Its own author seems to not have an understanding of what
> >> the DIP proposes.
> >
> > More classy comments. I can't get enough of the way you
> > belittle people.
> >
> > I made a 1-word error, where I should have written `short(10)`
> > to be clear.
> > 1-word error feels amendment-worthy, and not a call for "let's
> > start
> > over from scratch".
>
> You should just PR it back to review

I can't do that, it's been rejected, with mostly incorrect rejection
text affixed to the bottom.

> with that fix and a note
> about how it lowers to statements (incl. an example of
> lambdification for if/while/for/switch statements (see
> https://forum.dlang.org/post/qysmnatmjquuhylaq...@forum.dlang.org
> ))

I'm pretty sure that's not necessary. I haven't understood why this
noise about expressions. This DIP applies to statements.
I can't see how there's any problem with the lowering if the statement
is a control statement?

if (ref_fun(10)) { ... }
==>
{
  int __tmp = 10;
  if (ref_fun(__tmp)) { ... }
}

What's the trouble?


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Manu via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 6:40 PM Nicholas Wilson via
Digitalmars-d-announce  wrote:
>
> On Wednesday, 30 January 2019 at 18:29:37 UTC, Manu wrote:
> > On Wed, Jan 30, 2019 at 9:20 AM Neia Neutuladh via
> > Digitalmars-d-announce 
> > wrote:
> >> The result of a CastExpression is an rvalue. An implicit cast
> >> is a compiler-inserted CastExpression. Therefore all lvalues
> >> with a potential implicit cast are rvalues.
> >
> > But there's no existing language rule that attempts to perform
> > an implicit cast where an lvalue is supplied to a ref arg...?
> > Why is the cast being attempted? 'p' is an lvalue, and whatever
> > that does should remain exactly as is (ie, emits a compile
> > error).
> >
> > We could perhaps allow this for `const` args, but that feels
> > like separate follow-up work to me, and substantially lesser
> > value. This DIP doesn't want to change anything about lvalues.
>
> It appears to say it does:
>
> fun(my_short); // implicit type conversions (ie, short->int
> promotion)
>
> You should clarify that ;)

Yes, as said above, read `short(10)`. I can understand the confusion
that it may look like a variable when taken out of context; but listed
beneath the heading immediately above which says:
"This inconvenience extends broadly to every manner of **rvalue**
passed to functions"
It didn't occur to me the reader might interpret the clearly stated
list of cases of rvalues passed to functions to include arguments that
are not rvalues.
The name was just chosen to indicate the argument is a short, perhaps
an enum, or any expression that is a short... I could have used
`short(10)`, but apparently I didn't think of it at the time.

Is this the basis for the claims of "a hole you could drive a truck
through"? Again, a request for clarification, and a
couldn't-possibly-be-more-trivial revision may resolve this.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Manu via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 12:40 PM 12345swordy via
Digitalmars-d-announce  wrote:
>
> On Wednesday, 30 January 2019 at 18:29:37 UTC, Manu wrote:
> > On Wed, Jan 30, 2019 at 9:20 AM Neia Neutuladh via
> > Digitalmars-d-announce 
> > wrote:
> >>
> >> On Wed, 30 Jan 2019 09:15:36 -0800, Manu wrote:
> >> > Why are you so stuck on this case? The DIP is about
> >> > accepting rvalues,
> >> > not lvalues...
> >> > Calling with 'p', an lvalue, is not subject to this DIP.
> >>
> >> The result of a CastExpression is an rvalue. An implicit cast
> >> is a compiler-inserted CastExpression. Therefore all lvalues
> >> with a potential implicit cast are rvalues.
> >
> > But there's no existing language rule that attempts to perform
> > an
> > implicit cast where an lvalue is supplied to a ref arg...?
> > Why is the cast being attempted?
> Because of the rewrite that your proposed in your dip.
>
> void fun(ref int x);
>
> fun(10);
>
> {
>T __temp0 = void;
>fun(__temp0 := 10);
> }
>
> lets replace 10 with a short variable named: S

"a short variable named: S" is an lvalue, so why would the rewrite be
attempted? S must be an rvalue for any rewrite to occur. We're talking
about rvalues here.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Manu via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 9:20 AM Neia Neutuladh via
Digitalmars-d-announce  wrote:
>
> On Wed, 30 Jan 2019 09:15:36 -0800, Manu wrote:
> > Why are you so stuck on this case? The DIP is about accepting rvalues,
> > not lvalues...
> > Calling with 'p', an lvalue, is not subject to this DIP.
>
> The result of a CastExpression is an rvalue. An implicit cast is a
> compiler-inserted CastExpression. Therefore all lvalues with a potential
> implicit cast are rvalues.

But there's no existing language rule that attempts to perform an
implicit cast where an lvalue is supplied to a ref arg...?
Why is the cast being attempted? 'p' is an lvalue, and whatever that
does should remain exactly as is (ie, emits a compile error).

We could perhaps allow this for `const` args, but that feels like
separate follow-up work to me, and substantially lesser value. This
DIP doesn't want to change anything about lvalues.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Manu via Digitalmars-d-announce
On Tue., 29 Jan. 2019, 10:25 pm Walter Bright via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com wrote:

> On 1/29/2019 3:45 AM, Andrei Alexandrescu wrote:
> > I am talking about this:
> >
> > int[] a = cast(int[]) alloc.allocate(100 * int.sizeof);
> > if (alloc.reallocate(a, 200 * int.sizeof)
> > {
> >  assert(a.length == 200);
> > }
>
> Even simpler:
>
>void func(ref void* p) {
>  free(p); // frees (1)
>  p = malloc(100);  // (2)
>}
>
>int* p = cast(int*)malloc(16);  // (1)
>func(p);// p copied to temp for conversion to
> void*
>free(p);// frees (1) again
>// (2) is left dangling
>
> It's a memory corruption issue, with no way to detect it.
>

Why are you so stuck on this case? The DIP is about accepting rvalues, not
lvalues...
Calling with 'p', an lvalue, is not subject to this DIP.



Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-29 Thread Manu via Digitalmars-d-announce
On Mon, Jan 28, 2019 at 9:25 AM Andrei Alexandrescu via
Digitalmars-d-announce  wrote:
>
> On 1/24/19 2:18 AM, Mike Parker wrote:
> > Walter and Andrei have declined to accept DIP 1016, "ref T accepts
> > r-values", on the grounds that it has two fundamental flaws that would
> > open holes in the language. They are not opposed to the feature in
> > principle and suggested that a proposal that closes those holes and
> > covers all the bases will have a higher chance of getting accepted.
> >
> > You can read a summary of the Formal Assessment at the bottom of the
> > document:
> >
> > https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1016.md
>
> Hi everyone, I've followed the responses to this, some conveying
> frustration about the decision and some about the review process itself.
> As the person who carried a significant part of the review, allow me to
> share a few thoughts of possible interest.
>
> * Fundamentally: a DIP should stand on its own and be judged on its own
> merit, regardless of rhetoric surrounding it, unstated assumptions, or
> trends of opinion in the forums. There has been a bit of material in
> this forum discussion that should have been argued properly as a part of
> the DIP itself.
>
> * The misinterpretation of the rewrite (expression -> statement vs.
> statement -> statement) is mine, apologies. (It does not influence our
> decision and should not be construed as an essential aspect of the
> review.) The mistake was caused by the informality of the DIP, which
> shows rewrites as a few simplistic examples instead of a general rewrite
> rule. Function calls are expressions, so I naturally assumed the path
> would be to start with the function call expression. Formulating a
> general rule as a statement rewrite is possible but not easy and fraught
> with peril, as discussion in this thread has shown. I very much
> recommend going the expression route (e.g. with the help of lambdas)
> because that makes it very easy to expand to arbitrarily complex
> expressions involving function calls. Clarifying what temporaries get
> names and when in a complex expression is considerably more difficult
> (probably not impossible but why suffer).
>
> * Arguments of the form: "You say DIP 1016 is bad, but look at how bad
> DIP XYZ is!" are great when directed at the poor quality of DIP XYZ.
> They are NOT good arguments in favor of DIP 1016.
>
> * Arguments of the form "Functions that take ref parameters just for
> changing them are really niche anyway" should be properly made in the
> DIP, not in the forums and assumed without stating in the DIP. Again,
> what's being evaluated is "DIP" not "DIP + surrounding rhetoric". A good
> argument would be e.g. analyzing a number of libraries and assess that
> e.g. 91% uses of ref is for efficiency purposes, 3% is unclear, and only
> 6% is for side-effect purpose. All preexisting code using ref parameters
> written under the current rule assumes that only lvalues will be bound
> to them. A subset of these functions take by ref for changing them only.
> The DIP should explain why that's not a problem, or if it is one it is a
> small problem, etc. My point is - the DIP should _approach_ the matter
> and build an argument about it. One more example from preexisting code
> for illustration, from the standard library:
>
> // in the allocators API
> bool expand(ref void[] b, size_t delta);
> bool reallocate(ref void[] b, size_t s);
>
> These primitives modify their first argument in essential ways. The
> intent is to fill b with the new slice resulted after
> expansion/reallocation. Under the current rules, calling these
> primitives is cumbersome, but usefully so because the processing done
> requires extra care if typed data is being reallocated. Under DIP 1016,
> a call with any T[] will silently "succeed" by converting the slice to
> void[], passing the temporary to expand/reallocate, then return as if
> all is well - yet the original slice has not been changed. The DIP
> should create a salient argument regarding these situations (and not
> only this example, but the entire class). It could perhaps argue that:
>
> - Such code is bad to start with, and should not have been written.
> - Such code is so rare, we can take the hit. We then have a
> recommendation for library writers on how to amend their codebase (use
> @disable or some other mechanisms).
> - The advantages greatly outweigh this problem.
> - The bugs caused are minor easy to find.
> - ...
>
> Point being: the matter, again should be _addressed_ by the DIP.
>
> * Regarding our recommendation that the proposal is resubmited as a
> distinct DIP as opposed to a patch on the existing DIP: this was not
> embracing bureaucracy. Instead, we considered that the DIP was too poor
> to be easily modified into a strong proposal, and recommended that it be
> rewritten simply because it would be easier and would engender a
> stronger DIP.
>
> * Regarding the argument "why not make this an iterative 

Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-28 Thread Manu via Digitalmars-d-announce
On Fri, Jan 25, 2019 at 10:20 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/25/2019 7:44 PM, Manu wrote:
> > I never said anything about 'rvalue references',
>
> The DIP mentions them several times in the "forum threads" section. I see you
> want to distinguish the DIP from that; I recommend a section clearing that up.
>
> However, my points about the serious problems with @disable syntax remain.

I think the `@disable` semantic is correct; I understand your
criticism that you have to search for the negative to understand the
restriction, but that perspective arises presumably from a presumption
that you want to explicitly state inclusion, which is the opposite of
the intent.
The goal is to state exclusion, we are *adding* restrictions (ie,
removing potential calls) from the default more-inclusive behaviour,
and from that perspective, `@disable` is in the proper place.
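
As a sketch of that reading (illustrating the DIP's opt-out, not today's behaviour):

```d
// Default (inclusive): under the DIP, rvalues may bind to the ref parameter.
void fun(ref int x) {}

// Stating the exclusion: the author removes the potential by-value call,
// which is exactly what @disable is for -- so it sits on the by-value overload.
@disable void fun(int x);
```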

> A section comparing with the C++ solution is necessary as well, more than the
> one sentence dismissal. For example, how C++ deals with the:
>
>  void foo(const int x);
>  void foo(const int& x);
>
> situation needs to be understood and compared. Failing to understand it can 
> lead
> to serious oversights. For example, C++ doesn't require an @disable syntax to
> make it work.

C++ doesn't need a @disable semantic; it just works as described in this DIP.

Eg:

```c++
void fun(const int& x) {}
void test()
{
fun(10);
fun(short(10)); // <- no problem!
}
```

It's the dlang critics of this functionality that demand explicit
controls on functions accepting one kind or the other.
I personally see no value in all that noise, but I added it in due to
popular demand.

> >> [...]
> >> Should `s` be promoted to an int temporary, then pass the temporary by
> >> reference? I can find no guidance in the DIP. What if `s` is a uint (i.e. 
> >> the
> >> implicit conversion is a type paint and involves no temporary)?
> > As per the DIP; yes, that is the point.
> > The text you seek is written: "[...]. The user should not experience
> > edge cases, or differences in functionality when calling fun(int x) vs
> > fun(ref int x)."
>
> I don't see how that addresses implicit type conversion at all.

It explicitly permits it as one of the goals of the DIP. Uniformity in
function calling is one of the main goals here.

> > Don't accept naked ref unless you want these semantics. There is a
> > suite of tools offered to use where this behaviour is undesirable.
> > Naked `ref` doesn't do anything particularly interesting in the
> > language today that's not *identical* semantically to using a pointer
> > and adding a single '&' character at the callsite.
>
> It's not good enough. The DIP needs to specifically address what happens with
> implicit conversions. The reader should not be left wondering about what is
> implied.

As I said above, it couldn't be stated more clearly in the DIP; it is
very explicitly permitted, and stated that "the user should not
experience any difference in calling semantics when using ref".

> I often read a spec and think yeah, yeah, of course it must be that
> way. But it is spelled out in the spec, and reading it gives me confidence 
> that
> I'm understanding the semantics, and it gives me confidence that whoever wrote
> the spec understood it.

Okay, but it is spelled out. How could I make it clearer?

> (Of course, writing out the implications sometimes causes the writer to 
> realize
> he didn't actually understand it at all.)
>
> Furthermore, D has these match levels:
>
>  1. exact
>  2. const
>  3. conversion
>  4. no match
>
> If there are two or more matches at the same level, the decision is made based
> on partial ordering. How does adding the new ref/value overloading fit into 
> that?

I haven't described this well. I can try and improve this.
Where can I find these existing rules detailed comprehensively? I have
never seen them mentioned in the dlang language reference.
It's hard for me to speak in these terms, when I've never seen any
text in the language spec that does so.

Note; this criticism was nowhere to be found in your rejection text,
and it would have been trivial during community reviews to make this
note.
I feel like this is a mostly simple revision to make.

> >> It should never have gotten this far without giving a precise explanation 
> >> of how
> > exception safety is achieved when faced with multiple parameters.
> >
> > I apologise. I've never used exceptions in any code I've ever written,
> > so it's pretty easy for me to overlook that detail.
>
> It's so, so easy to get that wrong. C++ benefits from decades of compiler bug
> fixes wit

Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-28 Thread Manu via Digitalmars-d-announce
On Mon, Jan 28, 2019 at 12:00 PM Andrei Alexandrescu via
Digitalmars-d-announce  wrote:
>
> On 1/28/19 1:00 PM, Andrei Alexandrescu wrote:
> > On 1/24/19 3:01 PM, kinke wrote:
> >> On Thursday, 24 January 2019 at 09:49:14 UTC, Manu wrote:
> >>> We discussed and concluded that one mechanism to mitigate this issue
> >>> was already readily available, and it's just that 'out' gains a much
> >>> greater sense of identity (which is actually a positive side-effect if
> >>> you ask me!).
> >>> You have a stronger motivation to use 'out' appropriately, because it
> >>> can issue compile errors if you accidentally supply an rvalue.
> >>
> >> `out` with current semantics cannot be used as drop-in replacement for
> >> shared in-/output ref params, as `out` params are default-initialized
> >> on entry. Ignoring backwards compatibility for a second, I think
> >> getting rid of that would actually be beneficial (most args are
> >> probably already default-initialized by the callee in the line above
> >> the call...) - and I'd prefer an explicitly required `out` at the call
> >> site (C# style), to make the side effect clearly visible.
> >>
> >> I'd have otherwise proposed a `@noRVal` param UDA, but redefining
> >> `out` is too tempting indeed. ;)
> >
> > It seems to me that a proposal adding the "@rvalue" attribute in
> > function signatures to each parameter that would accept either an rvalue
> > or an lvalue would be easy to argue.
> >
> > - No exposing existing APIs to wrong uses
> > - The function's writer makes the decision ("I'm fine with this function
> > taking an rvalue")
> > - Appears in the function's documentation
> > - Syntax is light and localized where it belongs
> > - Scales well with number of parameters
> > - Transparent to callers
> >
> > Whether existing keyword combinations ("in", "out", "ref" etc) could be
> > used is a secondary point.
> >
> > The advantage is there's a simple and clear path forward for API
> > definition and use.
> >
> >
> > Andrei
>
> One more thought.
>
> The main danger is restricted to a specific conversion: lvalue of type T
> is converted to ref of type U. That way both the caller and the function
> writer believe the value gets updated, when in fact it doesn't. Consider:
>
> real modf(real x, ref real i);
>
> Stores integral part in i, returns the fractional part. At this point
> there are two liabilities:
>
> 1. User passes the wrong parameter type:
>
> double integral;
> double frac = modf(x, integral);
> // oops, integral is always NaN
>
> The function silently converts integral from double to real and passes
> the resulting temporary into the function. The temporary is filled and
> lost, leaving user's value unchanged.
>
> 2. The API gets changed:
>
> // Fine, let's use double
> real modf(real x, ref double i);
>
> At this point all correct callers are silently broken - everybody who
> correctly used a real for the integral part now has their call broken
> (real implicitly converts to a double temporary, and the change does not
> propagate to the user's value).
>
> (If the example looks familiar it may be because of
> https://dlang.org/library/std/math/modf.html.)
>
> So it seems that the real problem is that the participants wrongly
> believe an lvalue is updated.
>
> But let's say the caller genuinely doesn't care about the integral part.
> To do so is awkward:
>
> real unused;
> double frac = modf(x, unused);
>
> That code isn't any better or less dangerous than:
>
> double frac = modf(x, double());
>
> Here the user created willingly created an unnamed temporary of type
> double. Given that there's no doubt the user is not interested in that
> value after the call, the compiler could (in a proposed semantics) allow
> the conversion of the unnamed temporary to ref.
>
> TL;DR: it could be argued that the only dangerous conversions are lvalue
> -> temp rvalue -> ref, so only disable those. The conversion rvalue ->
> temp rvalue -> ref is not dangerous because the starting value on the
> caller side could not be inspected after the call anyway.

I started reading this post, and I was compelled to reply with this
same response, and then I realised you got there yourself.
I understand your concern, and it has actually been discussed lightly,
but regardless, you'll find that the issue you describe is not
suggested anywhere in this DIP.
This DIP is about passing rvalues to ref... so the issue you describe
passing lvalues to ref does not apply here.
There is no suggestion to change lvalue rules anywhere in this DIP.
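
That distinction in a nutshell (a sketch; the second call is the only thing the DIP adds, and the first stays an error either way):

```d
void fun(ref int x) {}

void test()
{
    short s = 10;
    // fun(s);          // lvalue of a mismatched type: an error today, and
                        // still an error under the DIP -- no lvalue rule changes
    // fun(short(10));  // rvalue: the only case the DIP makes legal
}
```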


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Manu via Digitalmars-d-announce
On Fri, Jan 25, 2019 at 7:44 PM Manu  wrote:
>
> On Fri, Jan 25, 2019 at 4:00 AM Walter Bright via
> Digitalmars-d-announce  wrote:
> >
> > The DIP should not invent its own syntax
>
> I removed it, and replaced it with simpler code (that I think is
> exception-correct) in my prior post here. It's also a super-trivial
> amendment.

Incidentally, the reason I invented a syntax in this DIP was that we
have no initialisation syntax in D, despite the language clearly
having the ability to initialise values (when they're declared); we
have an amazingly complex and awkward library implementation of
`emplace`, which is pretty embarrassing really.
The fact that I needed to invent a syntax to perform an initialisation
is a very serious problem in its own right.

But forget about that; I removed the need to express initialisation
from the rewrite.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Manu via Digitalmars-d-announce
On Fri, Jan 25, 2019 at 4:00 AM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/24/2019 11:53 PM, Nicholas Wilson wrote:
> > That the conflation of pass by reference to avoid copying and mutation is 
> > not
> > only deliberate but also mitigated by @disable.
>
> The first oddity about @disable is it is attached to the foo(int), not the
> foo(ref int). If I wanted to know if foo(ref int) takes rvalue references,

And right here, I can see our fundamental difference of perspective...

I never said anything about 'rvalue references', and I never meant
anything like that; at least, not in the C++ sense, which you seem to
be alluding to.
In C++, rval references are syntactically distinct and identifiable as
such, for the purposes of implementing move semantics.

If we want to talk about "rvalue references", then we need to be
having a *completely* different conversation.
That said, I'm not sure why you've raised this matter, since it's not
written anywhere in the DIP.

What I'm talking about is "not-rvalue-references accepting rvalues",
which if you want to transpose into C++ terms, is like `const T&`.

> There are indeed
> unlikable things about the C++ rules, but the DIP needs to pay more attention 
> to
> how C++ does this, and justify why D differs. Particularly because D will 
> likely
> have to have some mechanism of ABI compatibility with C++ functions that take
> rvalue references.

I'm not paying attention to C++ T&& rules, because this DIP has
nothing to do with T&&, and there would be no allusion to connecting
this to a T&& method. Again, I find that to be a very interesting
topic of conversation, but it has nothing to do with this DIP.

> [...]
> Should `s` be promoted to an int temporary, then pass the temporary by
> reference? I can find no guidance in the DIP. What if `s` is a uint (i.e. the
> implicit conversion is a type paint and involves no temporary)?

As per the DIP; yes, that is the point.
The text you seek is written: "[...]. The user should not experience
edge cases, or differences in functionality when calling fun(int x) vs
fun(ref int x)."
That text appears at least 2 times through the document as the stated goal.
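
For concreteness, a rough sketch of that stated goal under the DIP's proposed semantics (illustrative names; this is not current D behaviour):

```
void byVal(int x) {}
void byRef(ref int x) { x += 1; }

void demo()
{
    byVal(10);      // fine today
    byRef(10);      // under the DIP: lowered to { int __tmp = 10; byRef(__tmp); }

    short s = 1;
    byVal(s);       // fine today: s implicitly converts to int
    byRef(s);       // under the DIP: also accepted -- s converts to an int
                    // temporary, and the temporary is passed by ref
}
```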

Don't accept naked ref unless you want these semantics. There is a
suite of tools on offer for cases where this behaviour is undesirable.
Naked `ref` doesn't do anything particularly interesting in the
language today that's not *identical* semantically to using a pointer
and adding a single '&' character at the callsite. This DIP attempts
to make `ref` interesting and useful as a feature in its own right.
In the discussions designing this thing, I've come to appreciate the UFCS
advantages as the most compelling opportunity, among all the other
things that burn me practically every time I write D code.
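
To make the 'pointer plus a single &' comparison concrete, a small current-D sketch (names are mine), with the UFCS case the DIP is really after noted in comments:

```
void addOneRef(ref int x) { x += 1; }
void addOnePtr(int* x)    { *x += 1; }

void demo()
{
    int v = 1;
    addOneRef(v);   // by ref
    addOnePtr(&v);  // semantically identical; the only difference is the '&'

    // The pipeline/UFCS case (proposed semantics, illustrative):
    //   double offset(double v) { return v + 1; }
    //   double scale(ref double v) { v *= 2; return v; }
    //   auto r = 10.0.offset.scale;   // rejected today, accepted under the DIP
}
```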

> The DIP should not invent its own syntax

I removed it, and replaced it with simpler code (that I think is
exception-correct) in my prior post here. It's also a super-trivial
amendment.

> It should never have gotten this far without giving a precise explanation of 
> how
> exception safety is achieved when faced with multiple parameters.

I apologise. I've never used exceptions in any code I've ever written,
so it's pretty easy for me to overlook that detail.
Nobody else that did the community reviews flagged it, and that
includes you and Andrei, as members of the community.

> All that criticism aside, I'd like to see rvalue references in D. But the DIP
> needs significant work.

This is *NOT* an "rvalue-references" DIP; this is a "references" DIP.
If you want to see an rvalue references DIP, I agree that's a
completely different development, and it's also interesting to me... I
had *absolutely no idea* that an rvalue-references DIP was welcome. I
thought D was somewhat aggressively proud of the fact that we don't
have rvalue-references... apparently I took the wrong impression.

That said, this remains infinitely more important to me than an
rvalue-references DIP. It's been killing me for 10 years, and I'm
personally yet to feel hindered by our lack of rvalue-reference
support.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Manu via Digitalmars-d-announce
On Fri, Jan 25, 2019 at 6:50 PM Neia Neutuladh via
Digitalmars-d-announce  wrote:
>
> On Fri, 25 Jan 2019 18:14:56 -0800, Manu wrote:
> > Removing the `void` stuff and expanding such that the declaration +
> > initialisation is at the appropriate moments; any function can throw
> > normally, and the unwind works naturally?
>
> The contention was that, if the arguments are constructed properly,
> ownership is given to the called function, which is responsible for
> calling destructors.

No, that was never the intent, and it's certainly not written anywhere.
Ownership is assigned to the calling scope that we introduce
surrounding the statement. That's where the temporaries are declared; I
didn't consider that ownership unclear.

> I'm not sure what the point of that was. The called function doesn't own
> its parameters and shouldn't ever call destructors. So now I'm confused.

Correct. You're not confused. The callee does NOT own ref parameters.
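
A hedged sketch of the ownership model being described, under the DIP's proposed lowering (names are illustrative):

```
struct S
{
    this(int) {}
    ~this() {}            // somebody must run this exactly once
}

void callee(ref S s)
{
    // uses s, but never destructs it: the callee does not own ref parameters
}

void caller()
{
    // Under the DIP, `callee(S(1));` lowers roughly to:
    {
        S __tmp = S(1);   // owned by this introduced scope in the caller
        callee(__tmp);
    }                     // __tmp destructed here, by the caller, after the call
}
```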


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Manu via Digitalmars-d-announce
On Fri, Jan 25, 2019 at 4:20 PM Neia Neutuladh via
Digitalmars-d-announce  wrote:
>
> On Fri, 25 Jan 2019 23:08:52 +, kinke wrote:
>
> > On Friday, 25 January 2019 at 19:08:55 UTC, Walter Bright wrote:
> >> On 1/25/2019 2:57 AM, kinke wrote:
> >>> On Thursday, 24 January 2019 at 23:59:30 UTC, Walter Bright wrote:
>  On 1/24/2019 1:03 PM, kinke wrote:
> > (bool __gate = false;) , ((A __pfx = a();)) , ((B __pfy =
> > b();)) , __gate = true , f(__pfx, __pfy);
> 
>  There must be an individual gate for each of __pfx and pfy.
>  With the rewrite above, if b() throws then _pfx won't be destructed.
> >>>
> >>> There is no individual gate, there's just one to rule the
> >>> caller-destruction of all temporaries.
> >>
> >> What happens, then, when b() throws?
> >
> > `__pfx` goes out of scope, and its dtor expression (cleanup/finally) is
> > run as part of stack unwinding. Rewritten as block statement:
>
> And nested calls are serialized as you'd expect:
>
> int foo(ref S i, ref S j);
> S bar(ref S i, ref S j);
> S someRvalue(int i);
>
> foo(
> bar(someRvalue(1), someRvalue(2)),
> someRvalue(4));
>
> // translates to something like:
> {
> bool __gate1 = false;
> S __tmp1 = void;
> S __tmp2 = void;
> S __tmp3 = void;
> __tmp1 = someRvalue(1);
> try
> {
> __tmp2 = someRvalue(2);
> __gate1 = true;
> __tmp3 = bar(__tmp1, __tmp2);
> }
> finally
> {
> if (!__gate1) __tmp1.__xdtor();
> }
> S __tmp4 = void;
> bool __gate2 = false;
> try
> {
> __tmp4 = someRvalue(4);
> __gate2 = true;
> return foo(__tmp3, __tmp4);
> }
> finally
> {
> if (!__gate2)
> {
> __tmp3.__xdtor();
> }
> }
> }

Is this fine?

Given above example:

int foo(ref S i, ref S j);
S bar(ref S i, ref S j);
S someRvalue(int i);

foo(
bar(someRvalue(1), someRvalue(2)),
someRvalue(4));

===>

{
  S __tmp0 = someRvalue(1);
  S __tmp1 = someRvalue(2);
  S __tmp2 = bar(__tmp0, __tmp1);
  S __tmp3 = someRvalue(4);
  foo(__tmp2, __tmp3);
}

Removing the `void` stuff and expanding such that the declaration +
initialisation is at the appropriate moments; any function can throw
normally, and the unwind works naturally?
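
A hedged reading of how that simplified rewrite behaves if, say, `someRvalue(4)` throws (relying only on ordinary scope-based destruction; annotations are mine):

```
{
    S __tmp0 = someRvalue(1);
    S __tmp1 = someRvalue(2);
    S __tmp2 = bar(__tmp0, __tmp1);
    S __tmp3 = someRvalue(4);   // suppose this throws...
    foo(__tmp2, __tmp3);        // ...then this call never happens,
}                               // and unwinding destructs __tmp2, __tmp1, __tmp0
                                // in reverse order; __tmp3 was never constructed,
                                // so no explicit gates appear to be needed.
```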


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 11:35 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> No, it is not rejected in principle. Finding serious errors in it on the eve 
> of
> approval is disappointing, and is not auspicious for being in a hurry to 
> approve it.

I'm very clearly NOT in a hurry here. We've been sitting on this for 10 years.
What's weird is that you closed the door on iteration, and instead
suggested I should write a new one with someone competent.

More strangely, part of your feedback is broken, it appears you've
reviewed and made assessment against code and text that's just not
there, and you've neglected to respond to those points several times
now.
The error you found seems entirely revision-worthy rather than
rejection-worthy. Your comments about treating expressions as
statements are just wrong, and from that point on, where you've
mis-interpreted something so fundamental to the DIP, I don't think
it's possible to trust any outcome from your 'formal assessment'.

I appreciate that you identified the exception issue, we'll fix it,
but I think you need to reconsider the formal rejection.

> but it is a bit unfair to the
> implementor to dump an incomplete spec on him and have him fill in the gaps

How is that the intent? We can get the rewrite semantics right with an
iteration.
So is it rejected on that premise? I don't understand how re-reading
it with satisfactory rewrite logic is going to change your assessment of
the DIP in general? No surrounding text would change, and assuming
that the rewrite is corrected, then do you just find a different
reason to reject it?
If so, then that needs to be the reason for the rejection, and not the
error in the rewrite.

I presume the real reason for rejection is this part:
"They say that with the current semantics, this function only operates
on long values as it should. With the proposed semantics, the call
will accept all shared integral types. Any similar proposal must
address this hole in order to be accepted."

But to make that criticism is to miss the point entirely. The point is
to do the thing you say needs to be addressed... and a whole bunch of
techniques to motivate the compiler to emit desirable compile errors
can be deployed in various circumstances. None of them are
particularly unpleasant or awkward.
TL;DR: use `out`, use `*`, use @disable, use const. These can and
should all be deployed appropriately anyway.
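
To spell those out, a rough sketch of each mitigation as it would apply under the DIP's proposed semantics (all signatures are illustrative):

```
// 1. `out`: rvalues stay rejected, and intent to write is explicit at both ends.
void getInteger(out long result);

// 2. A pointer: the caller must write `&someLvalue`, so an rvalue can't slip through.
void fillBuffer(ubyte* buffer, size_t len);

// 3. @disable a by-value overload to opt one particular `ref` API out of rvalues.
void update(ref long x);
@disable void update(long x);

// 4. const ref: no mutation is expected, so nothing can be silently discarded.
void inspect(const ref long x);
```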

Is that the reason it was rejected? If so, then I can't fix that by
rewriting the DIP, that *is* the DIP.
If you're not persuaded by the advantages, and that (rather extensive)
set of tools to mitigate the side effects you're concerned about, then
that's really more of an opinion than a technical rejection.
I guess you're entitled to say "I don't like it, and I reject it
because I don't like it", but you have to *say* that, and not make up
some other stuff.

> The statement thing is a "do what I meant, not what I wrote" example, and 
> DIPs need
> to be better than that. You're leaving him to design where the temporaries go,
> where the gates go, and ensure everything is properly exception safe.

I agree, we'll fix the temporaries; but getting that correct is a
revision surely. There's no spec to change there.
The criticism talking about rewriting expressions as statements is
still mysterious to me. I don't understand how a rejection can be
presented based on an incorrect reading of the DIP... and how am I
supposed to accept the rejection text containing those criticisms when
the criticisms don't address what's written?
You had to change the code (removing the semicolon from the statement)
to make the claim that I was rewriting expressions as statements, and
I honestly have no idea why you did that?

Anyway...


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 6:35 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/24/2019 4:31 PM, 12345swordy wrote:
> > And wait for another 180+ days for a fix? Come on dude, can you understand 
> > the
> > frustration being display here?
>
> Of course it's frustrating. On the other hand, we've had a lot of problems
> stemming from implementing features without thoroughly understanding them.
>
> Rvalue references have a lot of subtleties to them, and we should not rush 
> into
> it, especially since these issues only turned up at the last minute.

"Rush"? We've literally been debating this since my first post on this
forum... like, 10 years ago.
It's the issue I specifically joined this forum to complain about.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 6:35 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/24/2019 4:31 PM, 12345swordy wrote:
> > And wait for another 180+ days for a fix? Come on dude, can you understand 
> > the
> > frustration being display here?
>
> Of course it's frustrating. On the other hand, we've had a lot of problems
> stemming from implementing features without thoroughly understanding them.
>
> Rvalue references have a lot of subtleties to them, and we should not rush 
> into
> it, especially since these issues only turned up at the last minute.

Which issues? The initialization order issue? That's relatively
trivial, isolated, and doesn't change the substance of the proposal in
any way (unless a working rewrite is impossible, which I'm confident
is not the case).
The rest of your criticisms certainly did not 'turn up at last
minute', they were extensively discussed, and discussion material is
available, and present in the community review summary.

And then there are the weird expression vs statement comments, which are
bizarre, because you literally had to modify my code snippets
(removing the semicolons) to read it that way... I can't accept that
feedback, that just demonstrates a mis-reading of the DIP. If the DIP
could be misunderstood that way, then that's surely revision-worthy,
not throw-it-out-and-start-over worthy, and it has nothing to say
about the substance of the design.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 3:50 PM Rubn via Digitalmars-d-announce
 wrote:
>
> On Thursday, 24 January 2019 at 23:18:11 UTC, kinke wrote:
> > Proposed `out` semantics:
> > ---
> > void increment(out long value) { ++value; }
> > increment(out value);
> > ---
> >
> > vs. pointer version with current `out` semantics:
> > ---
> > void increment(long* pValue) { ++(*pValue); }
> > increment(&value);
> > ---
> >
> > The pointer workaround is both ugly (C) and unsafe (you can
> > pass null).
>
> @safe void safestFunction() {
>  int* ptr;
>  increment(out *ptr); // can also pass null to ref/out even in
> @safe
> }
>
> It's probably going to be a hard sell to change the behavior of
> out now as well. It'd break quite a bit of code I think, did a
> search through druntime and phobos and quite a few functions use
> it. Maybe user code uses it less, I know I never use it.

I think any issues with `out` are tangential though. `out` does 'work'
right now, and it's a valid way to address the concern with respect to
one broad use case for ref.
Theoretically, `out` is the right solution for that particular class
of calling contexts, and any further issues with `out` should be taken
as a separate issue/discussion.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 3:45 PM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 1/24/2019 1:31 AM, Manu wrote:
> > This process is pretty unsatisfying, because it ships off to a
> > black-box committee, who were apparently able to misunderstand the
> > substance of the proposal and then not seek clarification, and despite
> > the only legitimate issue from my perspective being easily corrected,
> > it's been suggested to start a whole new DIP.
>
> It's no problem if you want to rework the existing text, just submit it as a 
> new
> DIP.

This process has a long and deep pipe; why should it be a new DIP?
There's nothing from the rejection text that would motivate me to
change any words... Is it that you reject it 'in principle'? If so,
there's nothing I can ever do about that.
This took a substantial amount of my life, and you could have sought
clarification, or a revision before a rejection.
The rejection appears to be premised on misunderstanding more than
anything, plus one very real (but isolated) technical issue that I
believe can be corrected readily enough without affecting the
surrounding text.

The only improvement I could make is to better fold the discussion
from the community review into the core text, but it's not like that
digest wasn't already right there during consideration.

I have no idea how you guys managed to edit and re-frame my DIP as
applying to expressions? You removed the semicolons from the
statements, and then told me I had no idea what I was doing, mixing
expressions with statements that way... why did you do that?


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 1:05 PM kinke via Digitalmars-d-announce
 wrote:
>
> On Thursday, 24 January 2019 at 09:04:41 UTC, Nicholas Wilson
> wrote:
> > On Thursday, 24 January 2019 at 07:18:58 UTC, Mike Parker wrote:
> >> The second problem is the use of := (which the DIP Author
> >> defines as representing "the initial construction, and not a
> >> copy operation as would be expected if this code were written
> >> with an = expression"). This approach shows its deficiencies
> >> in the multiple arguments case; if the first constructor
> >> throws an exception, all remaining values will be destroyed in
> >> the void state as they never have the chance to become
> >> initialized.
> >
> > Although not specified by the DIP, I think this could be easily
> > remedied by saying that the order of construction is the same
> > as if the temporaries were not bound to ref, i.e.
> >
> > ---
> > struct A {~this();} struct B{ ~this();}
> > A a();
> > B b();
> >
> > void f(A a, B b);
> > void g(ref A a, ref B b);
> >
> > f(a(),b());  //(1)
> > g(a(),b()); //(2)
> > ---
> >
> > and a() or b() may throw (and are pure), that (1) and (2)
> > exhibit the same exception/destructor semantics.
>
> Describing this stuff in detail (rewritten expression?!), isn't
> trivial and requires knowledge about how calls and
> construction/destruction of argument expressions works.

Sure, it's not 'trivial', but it is 'simple' in that it's isolated,
and it only affects the part of the DIP that defines the rewrite
semantics. It doesn't lead to "practically a completely different DIP"
as was suggested.
Changing the detail of the rewrite such that it has the proper effect
required by the surrounding text and handles exceptions correctly can
probably be done in such a way that not a single line of text requires
any changes.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 12:05 PM kinke via Digitalmars-d-announce
 wrote:
>
> On Thursday, 24 January 2019 at 09:49:14 UTC, Manu wrote:
> > We discussed and concluded that one mechanism to mitigate this
> > issue
> > was already readily available, and it's just that 'out' gains a
> > much
> > greater sense of identity (which is actually a positive
> > side-effect if
> > you ask me!).
> > You have a stronger motivation to use 'out' appropriately,
> > because it
> > can issue compile errors if you accidentally supply an rvalue.
>
> `out` with current semantics cannot be used as drop-in
> replacement for shared in-/output ref params, as `out` params are
> default-initialized on entry.

Shared in/out parameters are very rare by contrast with out parameters.

> Ignoring backwards compatibility
> for a second, I think getting rid of that would actually be
> beneficial (most args are probably already default-initialized by
> the callee in the line above the call...) - and I'd prefer an
> explicitly required `out` at the call site (C# style), to make
> the side effect clearly visible.
>
> I'd have otherwise proposed a `@noRVal` param UDA, but redefining
> `out` is too tempting indeed. ;)

Maybe... but there are satisfying options for basically any case we
could imagine; and worst case, use a pointer rather than ref.
Adding stuff like @norval feels heavy-handed, and I personally don't
judge this issue as being severe enough to warrant that baggage.

What are some legit cases where, assuming a world where we want to
avoid naked ref in cases where we want to receive compile errors when
users pass rvalues, aren't satisfied by other options?


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-24 Thread Manu via Digitalmars-d-announce
On Thu, Jan 24, 2019 at 1:25 AM Nicholas Wilson via
Digitalmars-d-announce  wrote:
>
> On Thursday, 24 January 2019 at 07:18:58 UTC, Mike Parker wrote:
> > Walter and Andrei have declined to accept DIP 1016, "ref T
> > accepts r-values", on the grounds that it has two fundamental
> > flaws that would open holes in the language. They are not
> > opposed to the feature in principle and suggested that a
> > proposal that closes those holes and covers all the bases will
> > have a higher chance of getting accepted.
> >
> > You can read a summary of the Formal Assessment at the bottom
> > of the document:
> >
> > https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1016.md
>
> > void atomicIncrement(ref shared long x);
> > atomicIncrement(myInt);
>
> Raises a good point, not covered by @disable where the intent is
> to modify it and modifying a temporary is wrong. `out ref`
> perhaps?

Actually, this was discussed, but somehow this piece of discussion
didn't get folded back into the DIP. It is mentioned in the
'Community Review Round 1' digest though.

We discussed and concluded that one mechanism to mitigate this issue
was already readily available, and it's just that 'out' gains a much
greater sense of identity (which is actually a positive side-effect if
you ask me!).
You have a stronger motivation to use 'out' appropriately, because it
can issue compile errors if you accidentally supply an rvalue.

That doesn't address the specific `atomicIncrement` case here, but now
we're in VERY niche territory; we analysed a lot of cases, and
concluded that such cases were relatively few, and other choices exist
to mitigate those cases.
There are cases that want to do mutation to rvalues (like in pipeline
functions), and then most cases can use 'out' instead. Remaining cases
are quite hard to find, and in this particular case, I'd suggest that
`atomicIncrement`, a very low-level implementation-detail function,
should just receive a pointer.
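
For concreteness, a hedged sketch of that suggestion, reusing the signature from the assessment's example:

```
void atomicIncrement(shared(long)* x);   // pointer instead of `ref shared long`

shared long counter;

void demo()
{
    atomicIncrement(&counter);   // fine; the '&' makes the mutation visible
    // atomicIncrement(&5);      // error: cannot take the address of an rvalue
    int myInt;
    // atomicIncrement(&myInt);  // error: int* does not convert to shared(long)*
}
```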


Re: A brief survey of build tools, focused on D

2018-12-10 Thread Manu via Digitalmars-d-announce
On Mon, Dec 10, 2018 at 10:30 AM Neia Neutuladh via
Digitalmars-d-announce  wrote:
>
> I wrote a post about language-agnostic (or, more accurately, cross-
> language) build tools, primarily using D as an example and Dub as a
> benchmark.
>
> Spoiler: dub wins in speed, simplicity, dependency management, and
> actually working without modifying the tool's source code.
>
> https://blog.ikeran.org/?p=339

Why isn't premake in the list? It's the only build tool that works
reasonably well with IDEs, and it has had D well supported for almost
6-7 years.
It also doesn't depend on a horrible runtime language distro.


Re: Visual D 0.48.0 released

2018-12-03 Thread Manu via Digitalmars-d-announce
On Mon, Dec 3, 2018 at 2:30 AM Petar via Digitalmars-d-announce
 wrote:
>
> On Monday, 3 December 2018 at 10:04:48 UTC, M.M. wrote:
> > On Sunday, 2 December 2018 at 21:23:31 UTC, Manu wrote:
> >> On Sun, Dec 2, 2018 at 8:05 AM Rainer Schuetze via
> >> Digitalmars-d-announce 
> >> wrote:
> >>> [...]
> >>
> >> Bravo!
> >> Thank you for your awesome work as always Rainer!
> >>
> >> For those following, this release is something really special.
> >
> > I am not following... why is special? Because of the new
> > debugging function?
>
> Just have a look at
> http://rainers.github.io/visuald/visuald/VersionHistory.html ;)

A big thing that didn't seem to make it into the changelog is that
the syntax colouring is MUCH more detailed. It is now competitive with
VisualAssist for C++.


Re: Visual D 0.48.0 released

2018-12-02 Thread Manu via Digitalmars-d-announce
On Sun, Dec 2, 2018 at 8:05 AM Rainer Schuetze via
Digitalmars-d-announce  wrote:
>
> Hi,
>
> I have made a new release of Visual D available. Some highlights of
> version 0.48.0:
>
> * installer and binaries now digitally signed by the "D Language Foundation"
> * experimental: option to enable semantic identifier highlighting
> * mago debugger: show return value, closure and capture variables as
> locals (with dmd 2.084/nightly)
>
> See http://rainers.github.io/visuald/visuald/VersionHistory.html for the
> full list of changes.
>
> Visual D is a Visual Studio extension that adds D language support to
> VS2008-2017. It is written in D, its source code can be found on github:
> https://github.com/D-Programming-Language/visuald, pull requests welcome.
>
> The installer can be found at
> http://rainers.github.io/visuald/visuald/StartPage.html
>
> Happy coding,
> Rainer

Bravo!
Thank you for your awesome work as always Rainer!

For those following, this release is something really special.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-16 Thread Manu via Digitalmars-d-announce
On Thu, Nov 15, 2018 at 8:00 PM Vladimir Panteleev via
Digitalmars-d-announce  wrote:
>
> On Thursday, 15 November 2018 at 19:18:27 UTC, Manu wrote:
> > I'm not sure how VisualStudio (read: MSBuild) should behave
> > differently than make?
> > It's not like the build script is taking a long time, it's the
> > invocation of DMD that takes 100% of that time.
>
> That seems to take about half the time for me (2 out of the 4
> seconds).
>
> > 36% slower seems highly optimistic? Perhaps you're building a
> > debug build of DMD with a debug build of DMD? I guess that
> > wouldn't be so slow... I suspect it's the optimiser that's very
> > slow?
>
> 36% slower for compiling a real-life program with the built D
> compiler. The comparison is between a normal and debug build of
> DMD (used to build the program).

What was the batch size for module grouping? I wonder if batching too
many modules together into a single compiler invocation makes too much
data for the optimiser? Is optimisation time strictly linear with the
size of the AST? Or is having more functions available for the inliner
to consider increasingly expensive?


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-15 Thread Manu via Digitalmars-d-announce
Wed, Nov 14, 2018 at 12:25 AM Vladimir Panteleev via
Digitalmars-d-announce  wrote:
>
> On Thursday, 8 November 2018 at 07:54:56 UTC, Manu wrote:
> > And all builds are release builds... what good is a debug
> > build? DMD
> > is unbelievably slow in debug. If it wasn't already slow
> > enough... if
> > I try and build with a debug build, it takes closer to 5
> > minutes.
>
> I just got to try a side-by-side comparison of a release and
> debug (as in `make -f posix.mak BUILD=debug`) build of DMD.
>
> With a 25KLOC project, the debug build is only 36% slower.
>
> Maybe the experience on Windows / Visual Studio is very different.

I'm not sure how VisualStudio (read: MSBuild) should behave
differently than make?
It's not like the build script is taking a long time, it's the
invocation of DMD that takes 100% of that time.

36% slower seems highly optimistic? Perhaps you're building a debug
build of DMD with a debug build of DMD? I guess that wouldn't be so
slow... I suspect it's the optimiser that's very slow?


Re: NES emulator written in D

2018-11-13 Thread Manu via Digitalmars-d-announce
On Mon, Nov 12, 2018 at 10:30 PM blahness via Digitalmars-d-announce
 wrote:
>
> On Tuesday, 13 November 2018 at 05:59:52 UTC, Manu wrote:
> >
> > Nice work.
> >
> > Oh wow, this is pretty rough!
> > ```
> > void createTable() {
> >   this.table = [
> > , , , , ,
> > ,
> > , , , , ,
> > ,
> > , , , ,
> > ...
> > ```
> >
> > Here's one I prepared earlier:
> > https://github.com/TurkeyMan/superemu (probably doesn't work
> > with DMD from the last year or 2!) Extensible architecture,
> > supports a bunch of systems.
>
> That's an artifact from the original code which was written in
> Go. My main focus was adding missing instructions & fixing any
> timing issues. It now passes nearly every NES specific CPU
> instruction & timing test I can throw at it so I'm fairly happy
> with it. Any improvements are always welcome though.

A great test is to emulate an Atari2600; you'll know your 6502 is 100%
perfect if you can play Pitfall or some other complex 2600 game ;)
I can't see how your cycle counting logic works, it looks like it's
missing a lot of cycles. How do you clock your scanlines against your
CPU?
Can you run Battletoads or Super Mario Bros? They're pretty sensitive
to proper timing.


Re: NES emulator written in D

2018-11-12 Thread Manu via Digitalmars-d-announce
On Sat, Feb 3, 2018 at 5:55 AM blahness via Digitalmars-d-announce
 wrote:
>
> Hi everyone,
>
> Not sure how interested people here will be with this but I've
> ported https://github.com/fogleman/nes from Go to D [1]. I should
> point out that I'm not the author of the original Go version.
>
> The emulator code itself is 100% D with no dependencies. I've
> also created a little app using SDL to show how you'd put this
> library to use [2].
>
> Its PPU & APU timing isn't 100% accurate (same as the Go version)
> so not all games will work correctly but this should be pretty
> easy to fix.
>
> Links
> --
> [1] https://github.com/blahness/nes
> [2] https://github.com/blahness/nes_test

Nice work.

Oh wow, this is pretty rough!
```
void createTable() {
  this.table = [
, , , , , ,
, , , , , ,
, , , ,
...
```

Here's one I prepared earlier: https://github.com/TurkeyMan/superemu
(probably doesn't work with DMD from the last year or 2!)
Extensible architecture, supports a bunch of systems.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-08 Thread Manu via Digitalmars-d-announce
On Thu, Nov 8, 2018 at 12:55 AM Joakim via Digitalmars-d-announce
 wrote:
>
> On Thursday, 8 November 2018 at 08:29:28 UTC, Manu wrote:
> > On Thu, Nov 8, 2018 at 12:10 AM Joakim via
> > Digitalmars-d-announce 
> > wrote:
> >>
> >> On Thursday, 8 November 2018 at 07:54:56 UTC, Manu wrote:
> >
> > I didn't configure the build infrastructure!
>
> Maybe you can? I have no experience with VS, but surely it has
> some equivalent of ninja -j5?

msbuild does parallel builds quite effectively. I expect it perceives
a dependency between jobs which causes it to serialise. Maybe there's a
legit dependency, or maybe the msbuild script has a problem? Either
way, it's not acceptable.
I would log this as a maximum-priority bug.
(https://issues.dlang.org/show_bug.cgi?id=19377)

> >> > And all builds are release builds... what good is a debug
> >> > build? DMD
> >> > is unbelievably slow in debug. If it wasn't already slow
> >> > enough... if
> >> > I try and build with a debug build, it takes closer to 5
> >> > minutes.
> >> >
> >> > I suspect one part of the problem is that DMD used to be
> >> > built with a C compiler, and now it's built with DMD... it
> >> > really should be built with LDC at least?
> >>
> >> Could be part of the problem on Windows, dunno.
> >
> > Well... ffs... people need to care about this! >_<
>
> I agree that the official release of DMD for Windows should be
> faster, and we should be building it with ldc... if that's the
> problem.

I think it's a combination of problems, the primary problem being
criminal negligence!


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-08 Thread Manu via Digitalmars-d-announce
On Thu, Nov 8, 2018 at 12:10 AM Walter Bright via
Digitalmars-d-announce  wrote:
>
> On 11/7/2018 11:41 PM, Manu wrote:
> > I'm on an i7 with 8 threads and plenty of ram... although threads are
> > useless, since DMD only uses one ;)
>
> So does every other compiler.
>
> To do a multicore build, you'll need to use a makefile that supports -j.

Right.
So...?

Also, MSBuild is what people use on Windows... but same applies.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-08 Thread Manu via Digitalmars-d-announce
On Thu, Nov 8, 2018 at 12:10 AM Joakim via Digitalmars-d-announce
 wrote:
>
> On Thursday, 8 November 2018 at 07:54:56 UTC, Manu wrote:
> > On Wed, Nov 7, 2018 at 10:30 PM Vladimir Panteleev via
> > Digitalmars-d-announce 
> > wrote:
> >>
> >> On Thursday, 8 November 2018 at 06:08:20 UTC, Vladimir
> >> Panteleev wrote:
> >> > It was definitely about 4 seconds not too long ago, a few
> >> > years at most.
> >>
> >> No, it's still 4 seconds.
> >>
> >> digger --offline --config-file=/dev/null -j auto -c
> >> local.cache=none build 7.31s user 1.51s system 203% cpu
> >> 4.340 total
> >>
> >> > It does seem to take more time now; I wonder why.
> >>
> >> If it takes longer, then it's probably because it's being
> >> built in one CPU core, or in the release build.
> >
> > https://youtu.be/msWuRlD3zy0
>
> Lol, I saw that link and figured it was either some comedy video,
> like the Python ones Walter sometimes posts, or you were actually
> showing us how long it takes. Pretty funny to see the latter.

It's not so funny when every one-line tweak burns 2 minutes of my life away.

> > DMD only builds with one core, since it builds everything all at once.
>
> Yes, but your build time is unusually long even with one core.
> Are the D backend and frontend at least built in parallel to each
> other?

That doesn't matter, you can clearly see the backend built in less
than 2 seconds.

> It doesn't seem to be even doing that, though they're
> separate invocations of DMD.

I didn't configure the build infrastructure!

> > And all builds are release builds... what good is a debug
> > build? DMD
> > is unbelievably slow in debug. If it wasn't already slow
> > enough... if
> > I try and build with a debug build, it takes closer to 5
> > minutes.
> >
> > I suspect one part of the problem is that DMD used to be built
> > with a C compiler, and now it's built with DMD... it really
> > should be built with LDC at least?
>
> Could be part of the problem on Windows, dunno.

Well... ffs... people need to care about this! >_<


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-08 Thread Manu via Digitalmars-d-announce
On Wed, Nov 7, 2018 at 11:55 PM Joakim via Digitalmars-d-announce
 wrote:
>
> On Thursday, 8 November 2018 at 07:41:58 UTC, Manu wrote:
> > On Wed, Nov 7, 2018 at 10:30 PM Joakim via
> > Digitalmars-d-announce 
> > wrote:
> >>
> >> On Thursday, 8 November 2018 at 04:16:44 UTC, Manu wrote:
> >> > On Tue, Nov 6, 2018 at 10:05 AM Vladimir Panteleev via
> >> > Digitalmars-d-announce
> >> >  wrote:
> >> >> [...]
> >> >
> >> > "Indeed, a clean build of DMD itself (about 170’000 lines of
> >> > D and 120’000 lines of C/C++) takes no longer than 4 seconds
> >> > to build on a rather average developer machine."
> >> >
> >> > ...what!? DMD takes me... (compiling) ... 1 minute 40
> >> > seconds to build! And because DMD does all-files-at-once
> >> > compilation, rather than separate compilation for each
> >> > source file, whenever you change just one line in one file,
> >> > you incur that entire build time, every time, because it
> >> > can't just rebuild the one source file that changed. You
> >> > also can't do multi-processor builds with all-in-one build
> >> > strategies.
> >> >
> >> > 4 seconds? That's just untrue. D is actually kinda slow
> >> > these days... In my experience it's slower than modern C++
> >> > compilers by quite a lot.
> >>
> >> It sounds like you're not using "a rather average developer
> >> machine" then, as there's no way DMD should be that slow to
> >> build on a core i5 or better:
> >>
> >> https://forum.dlang.org/post/rqukhkpxcvgiefrdc...@forum.dlang.org
> >
> > I'm on an i7 with 8 threads and plenty of ram... although
> > threads are useless, since DMD only uses one ;)
>
> Running Windows XP? ;) That does sound like Windows though, as I
> do remember being surprised how long dmd took to build on Win7
> when I tried it 8-9 years back. I still don't think the toolchain
> should be _that_ much slower than linux though.
>
> Btw, the extra cores are _not_ useless for the DMD backend, which
> has always used separate compilation, whether written in C++ or D.

No, you're right, the backend builds in 2-3 seconds.



Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Manu via Digitalmars-d-announce
On Wed, Nov 7, 2018 at 10:30 PM Vladimir Panteleev via
Digitalmars-d-announce  wrote:
>
> On Thursday, 8 November 2018 at 06:08:20 UTC, Vladimir Panteleev
> wrote:
> > It was definitely about 4 seconds not too long ago, a few years
> > at most.
>
> No, it's still 4 seconds.
>
> digger --offline --config-file=/dev/null -j auto -c
> local.cache=none build 7.31s user 1.51s system 203% cpu 4.340
> total
>
> > It does seem to take more time now; I wonder why.
>
> If it takes longer, then it's probably because it's being built
> in one CPU core, or in the release build.

https://youtu.be/msWuRlD3zy0

DMD only builds with one core, since it builds everything all at once.
And all builds are release builds... what good is a debug build? DMD
is unbelievably slow in debug. If it wasn't already slow enough... if
I try and build with a debug build, it takes closer to 5 minutes.

I suspect one part of the problem is that DMD used to be built with a
C compiler, and now it's built with DMD... it really should be built
with LDC at least?


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Manu via Digitalmars-d-announce
On Wed, Nov 7, 2018 at 10:30 PM Joakim via Digitalmars-d-announce
 wrote:
>
> On Thursday, 8 November 2018 at 04:16:44 UTC, Manu wrote:
> > On Tue, Nov 6, 2018 at 10:05 AM Vladimir Panteleev via
> > Digitalmars-d-announce 
> > wrote:
> >> [...]
> >
> > "Indeed, a clean build of DMD itself (about 170’000 lines of D
> > and 120’000 lines of C/C++) takes no longer than 4 seconds to
> > build on a rather average developer machine."
> >
> > ...what!? DMD takes me... (compiling) ... 1 minute 40 seconds
> > to build! And because DMD does all-files-at-once compilation,
> > rather than separate compilation for each source file, whenever
> > you change just one line in one file, you incur that entire
> > build time, every time, because it can't just rebuild the one
> > source file that changed. You also can't do multi-processor
> > builds with all-in-one build strategies.
> >
> > 4 seconds? That's just untrue. D is actually kinda slow these
> > days... In my experience it's slower than modern C++ compilers
> > by quite a lot.
>
> It sounds like you're not using "a rather average developer
> machine" then, as there's no way DMD should be that slow to build
> on a core i5 or better:
>
> https://forum.dlang.org/post/rqukhkpxcvgiefrdc...@forum.dlang.org

I'm on an i7 with 8 threads and plenty of ram... although threads are
useless, since DMD only uses one ;)



Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Manu via Digitalmars-d-announce
On Wed, Nov 7, 2018 at 10:10 PM Vladimir Panteleev via
Digitalmars-d-announce  wrote:
>
> On Thursday, 8 November 2018 at 04:16:44 UTC, Manu wrote:
> > ...what!? DMD takes me... (compiling) ... 1 minute 40 seconds
> > to build! And because DMD does all-files-at-once compilation,
> > rather than separate compilation for each source file, whenever
> > you change just one line in one file, you incur that entire
> > build time, every time, because it can't just rebuild the one
> > source file that changed. You also can't do multi-processor
> > builds with all-in-one build strategies.
> >
> > 4 seconds? That's just untrue. D is actually kinda slow these
> > days... In my experience it's slower than modern C++ compilers
> > by quite a lot.
>
> It was definitely about 4 seconds not too long ago, a few years
> at most.
>
> It does seem to take more time now; I wonder why.

100 seconds is a lot more than 4... 25x even, that's a pretty big
productivity decline ;)


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Manu via Digitalmars-d-announce
On Tue, Nov 6, 2018 at 10:05 AM Vladimir Panteleev via
Digitalmars-d-announce  wrote:
>
> This is a tool + article I wrote in February, but never got
> around to finishing / publishing until today.
>
> https://blog.thecybershadow.net/2018/02/07/dmdprof/
>
> Hopefully someone will find it useful.

"Indeed, a clean build of DMD itself (about 170’000 lines of D and
120’000 lines of C/C++) takes no longer than 4 seconds to build on a
rather average developer machine."

...what!? DMD takes me... (compiling) ... 1 minute 40 seconds to build!
And because DMD does all-files-at-once compilation, rather than
separate compilation for each source file, whenever you change just
one line in one file, you incur that entire build time, every time,
because it can't just rebuild the one source file that changed. You
also can't do multi-processor builds with all-in-one build strategies.

4 seconds? That's just untrue. D is actually kinda slow these days...
In my experience it's slower than modern C++ compilers by quite a lot.



Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Manu via Digitalmars-d-announce
On Wed, Nov 7, 2018 at 8:18 PM Manu  wrote:
>
> On Wed, Nov 7, 2018 at 8:16 PM Manu  wrote:
> >
> > On Tue, Nov 6, 2018 at 10:05 AM Vladimir Panteleev via
> > Digitalmars-d-announce  wrote:
> > >
> > > This is a tool + article I wrote in February, but never got
> > > around to finishing / publishing until today.
> > >
> > > https://blog.thecybershadow.net/2018/02/07/dmdprof/
> > >
> > > Hopefully someone will find it useful.
> >
> > "Indeed, a clean build of DMD itself (about 170’000 lines of D and
> > 120’000 lines of C/C++) takes no longer than 4 seconds to build on a
> > rather average developer machine."
> >
> > ...what!? DMD takes me... (compiling) ... 1 minute 40 seconds to build!
> > And because DMD does all-files-at-once compilation, rather than
> > separate compilation for each source file, whenever you change just
> > one line in one file, you incur that entire build time, every time,
> > because it can't just rebuild the one source file that changed. You
> > also can't do multi-processor builds with all-in-one build strategies.
> >
> > 4 seconds? That's just untrue. D is actually kinda slow these days...
> > In my experience it's slower than modern C++ compilers by quite a lot.
>
> Also, in my experience, DMD seems to build a LOT slower now that it's
> in D than it used to when it was C++.

Oh, and also, nice work Vladimir! This is awesome! :)



Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Manu via Digitalmars-d-announce
On Wed, Nov 7, 2018 at 8:16 PM Manu  wrote:
>
> On Tue, Nov 6, 2018 at 10:05 AM Vladimir Panteleev via
> Digitalmars-d-announce  wrote:
> >
> > This is a tool + article I wrote in February, but never got
> > around to finishing / publishing until today.
> >
> > https://blog.thecybershadow.net/2018/02/07/dmdprof/
> >
> > Hopefully someone will find it useful.
>
> "Indeed, a clean build of DMD itself (about 170’000 lines of D and
> 120’000 lines of C/C++) takes no longer than 4 seconds to build on a
> rather average developer machine."
>
> ...what!? DMD takes me... (compiling) ... 1 minute 40 seconds to build!
> And because DMD does all-files-at-once compilation, rather than
> separate compilation for each source file, whenever you change just
> one line in one file, you incur that entire build time, every time,
> because it can't just rebuild the one source file that changed. You
> also can't do multi-processor builds with all-in-one build strategies.
>
> 4 seconds? That's just untrue. D is actually kinda slow these days...
> In my experience it's slower than modern C++ compilers by quite a lot.

Also, in my experience, DMD seems to build a LOT slower now that it's
in D than it used to when it was C++.



Re: usable @nogc Exceptions with Mir Runtime

2018-11-02 Thread Manu via Digitalmars-d-announce
On Tue, Oct 30, 2018 at 9:30 AM Oleg via Digitalmars-d-announce
 wrote:
>
> Thanks for your work!
>
> > Example
> > ===
> > ///
> > @safe pure nothrow @nogc
> > unittest
> > {
> > import mir.exception;
> > import mir.format;
> > try throw new MirException(stringBuf() << "Hi D" << 2 <<
> > "!" << getData);
> > catch(Exception e) assert(e.msg == "Hi D2!");
> > }
> >
> > ===
>
> I don't understand why you choose C++ format style instead of
> D-style format?

Perhaps this is a stupid question... but there's clearly `new
MirException` right there in that code.
How is this @nogc?


Re: Copy Constructor DIP and implementation

2018-09-24 Thread Manu via Digitalmars-d-announce
On Mon, 24 Sep 2018 at 16:22, Jonathan M Davis via
Digitalmars-d-announce  wrote:
>
> On Monday, September 24, 2018 3:20:28 PM MDT Manu via Digitalmars-d-announce
> wrote:
> > copy-ctor is good, @implicit is also good... we want both. Even though
> > copy-ctor is not strictly dependent on @implicit, allowing it will
> > satisfy that there's not a breaking change, and it will also
> > self-justify expansion of @implicit as intended without a separate and
> > time-consuming fight, which is actually the true value of this DIP!
>
> @implicit on copy constructors is outright bad. It would just be a source of
> bugs. Every time that someone forgets to use it (which plenty of programmers
> will forget, just like they forget to use @safe, pure, nothrow, etc.),
> they're going to have a bug in their program.

perhaps a rule where declaring a copy-ctor WITHOUT @implicit emits a
compile error...?
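
A hedged sketch of that suggested rule (entirely hypothetical; neither the diagnostic nor the requirement exists today):

```
struct Oops
{
    this(ref Oops other) {}             // suggested rule: compile error,
                                        // "copy constructor must be marked @implicit"
}

struct Fine
{
    @implicit this(ref Fine other) {}   // OK: implicit invocation is opted into
}
```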


Re: Copy Constructor DIP and implementation

2018-09-24 Thread Manu via Digitalmars-d-announce
On Mon, 24 Sep 2018 at 12:40, 12345swordy via Digitalmars-d-announce
 wrote:
>
> On Monday, 24 September 2018 at 17:34:58 UTC, Manu wrote:
> > On Mon, 24 Sep 2018 at 00:55, Gary Willoughby via
> > Digitalmars-d-announce 
> > wrote:
> >>
> >> On Sunday, 23 September 2018 at 02:40:15 UTC, Nicholas Wilson
> >> wrote:
> >> > It appears that @implicit has been removed from the
> >> > implementation [1], but not yet from the DIP.
> >> >
> >> > https://github.com/dlang/dmd/commit/cdd8100
> >>
> >> Good, It's not needed.
> >
> > @implicit is desperately needed (just not for copy
> > constructors!). Do you have confidence that an @implicit
> > proposal will happen if you all insist that it's removed here?
> > This is a great driving motivator to support @implicit's
> > introduction.
>
> If we are going to introduce the keyword/attribute implicit then
> it needs its own DIP. As of now, this DIP have a very weak
> justification for it.

I certainly agree; I'm fairly sure I produced the very first critical
comment on this issue when it first landed, which went like "@implicit
is a dependency and needs a dependent dip", which Andrei brushed off.
I still believe it's a separate feature, and it's a dependency for
this particular DIP, so that should come first... but here's the
thing; that's just not how dlang works around here.
We like the idea that there's process and structure, but there's not,
and you just need to be practical about maneuvering towards the goals
you want in the ways that manifest.

In this case, having @implicit is a very real and desirable goal, it's
been a sore hole in the language since ever... so anything that moves
it towards reality is preferable to nothing.
While I felt strongly about my conviction initially (that there should
be a dependent DIP), I realised that a much more useful and practical
position was to allow this DIP to introduce @implicit implicitly
(heh)... that's a much better reality than waiting an additional year or
2 (if ever!) for the thing otherwise.

I encourage people to consider this holistically and consider the
practicality of allowing it, even though it's not strictly
principled in terms of process ;)

copy-ctor is good, @implicit is also good... we want both. Even though
copy-ctor is not strictly dependent on @implicit, allowing it will
satisfy that there's not a breaking change, and it will also
self-justify expansion of @implicit as intended without a separate and
time-consuming fight, which is actually the true value of this DIP!


Re: Copy Constructor DIP and implementation

2018-09-24 Thread Manu via Digitalmars-d-announce
On Mon, 24 Sep 2018 at 00:55, Gary Willoughby via
Digitalmars-d-announce  wrote:
>
> On Sunday, 23 September 2018 at 02:40:15 UTC, Nicholas Wilson
> wrote:
> > It appears that @implicit has been removed from the
> > implementation [1], but not yet from the DIP.
> >
> > https://github.com/dlang/dmd/commit/cdd8100
>
> Good, It's not needed.

@implicit is desperately needed (just not for copy constructors!).
Do you have confidence that an @implicit proposal will happen if you
all insist that it's removed here? This is a great driving motivator
to support @implicit's introduction.


Re: Copy Constructor DIP and implementation

2018-09-17 Thread Manu via Digitalmars-d-announce
On Mon, 17 Sep 2018 at 13:55, 12345swordy via Digitalmars-d-announce
 wrote:
>
> On Tuesday, 11 September 2018 at 15:08:33 UTC, RazvanN wrote:
> > Hello everyone,
> >
> > I have finished writing the last details of the copy
> > constructor DIP[1] and also I have published the first
> > implementation [2]. As I wrongfully made a PR for the DIP queue
> > in the early stages of the development of the DIP, I want to
> > announce this way that the DIP is ready for the draft review
> > now. Those who are familiar with the compiler, please take a
> > look at the implementation and help me improve it!
> >
> > Thanks,
> > RazvanN
> >
> > [1] https://github.com/dlang/DIPs/pull/129
> > [2] https://github.com/dlang/dmd/pull/8688
>
> The only thing I object is adding yet another attribute to a
> already big bag of attributes. What's wrong with adding keywords?
>
> -Alexander

I initially felt strongly against @implicit; it shouldn't be
necessary, and we could migrate without it.
But... assuming that @implicit should make an appearance anyway (it
should! being able to mark implicit constructors will fill a massive
usability hole in D!), then it doesn't hurt to use it eagerly here and
avoid a breaking change at this time, since it will be the correct
expression for the future regardless.


Re: Copy Constructor DIP and implementation

2018-09-12 Thread Manu via Digitalmars-d-announce
On Wed, 12 Sep 2018 at 04:40, Dejan Lekic via Digitalmars-d-announce
 wrote:
>
> On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole
> wrote:
> >
> > Here is a question (that I don't think has been asked) why not
> > @copy?
> >
> > @copy this(ref Foo other) { }
> >
> > It can be read as copy constructor, which would be excellent
> > for helping people learn what it is doing (spec lookup).
> >
> > Also can we really not come up with an alternative bit of code
> > than the tupleof to copying wholesale? E.g. super(other);
>
> I could not agree more. @implicit can mean many things, while
> @copy is much more specific... For what is worth I vote for @copy
> ! :)

@implicit may be attributed to any constructor, allowing it to be
invoked implicitly. It's the inverse of C++'s `explicit` keyword.
As such, @implicit is overwhelmingly useful in its own right.

This will address my single biggest usability complaint of D as
compared to C++. @implicit is super awesome, and we must embrace it.
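
A hedged sketch of what a general @implicit might enable (hypothetical syntax and semantics, as the inverse of C++'s `explicit`):

```
struct Metres
{
    double value;
    @implicit this(double v) { value = v; }     // may be invoked implicitly
    this(int feet) { value = feet * 0.3048; }   // ordinary ctor: explicit only
}

void drive(Metres distance) {}

void demo()
{
    drive(Metres(3.0));   // works today: explicit construction
    drive(3.0);           // with @implicit: the constructor is invoked implicitly,
                          // much like passing a double to a C++ parameter whose
                          // constructor isn't marked `explicit`
}
```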


Re: DIP Draft Reviews

2018-09-05 Thread Manu via Digitalmars-d-announce
On Wed, 5 Sep 2018 at 20:30, Mike Parker via Digitalmars-d-announce
 wrote:
>
> On Wednesday, 5 September 2018 at 14:30:14 UTC, rikki cattermole
> wrote:
>
> >
> > Last time I checked, it should be me and yshui's named
> > parameter DIP's next, they really need to be reviewed together
> > though, at least initially.
>
> I'm not at all thrilled by the idea of running two DIPs through
> the queue in concert and would prefer to avoid that circumstance.
>
> I've already discussed this with Yuxuan and asked if he'd be
> willing work together with you on a single DIP. His response was
> that the two proposals are not mutually exclusive and that yours
> could be built on top of his.
>
> I need to take the time to fully absorb both DIPs and then I'll
> decide how to approach it. But you'll be hearing from me as soon
> as I do.

Out of curiosity... what's going on with mine? Is there something I'm
meant to have done? It's kinda just hanging out, no?

