Re: The New Fundraising Campaign

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 19, 2019 at 03:11:55AM +0000, bachmeier via Digitalmars-d-announce 
wrote:
> On Friday, 4 January 2019 at 10:30:07 UTC, Martin Tschierschke wrote:
> 
> > Cool, what a wonderful start to the year 2019!
> > A big thank you to all pushing the development of D with money and time!
> > What next Mike?
> 
> Hopefully a campaign to put together a working forum. Would you invest major
> resources in a language that doesn't even have a usable forum?

This forum is very functional.  I would participate less in a forum that
requires loading up a browser to use. But then again, maybe people would
be happier if I wasn't around to blab about vim and symmetry and why dub
sux, so perhaps that might be for the better. :-P


T

-- 
The peace of mind---from knowing that viruses which exploit Microsoft system 
vulnerabilities cannot touch Linux---is priceless. -- Frustrated system 
administrator.


DIP 1006--Providing more selective control over contracts--Superseded

2019-01-18 Thread Mike Parker via Digitalmars-d-announce
DIP 1006, "Providing more selective control over contracts", had 
quite a bit of bad luck going through the DIP process. It started 
with my forgetting that I had scheduled it for review after DConf 
2017. From then on, there were several points along the way that 
caused progress on the DIP to stall.


The DIP was ultimately accepted in principle, but Walter objected 
to the proposed implementation. He and Andrei asked that the DIP 
be revised to propose an alternative implementation that could be 
more broadly applied to enable and disable more features than 
just contracts.


Before the revision was completed, it was superseded when Walter 
added the -check command line switch, which was released in DMD 
2.084.0.


We discussed whether the DIP should be revised to reflect the new 
feature and decided not to do so. Instead, I've revised the 
Procedure document to add a new DIP status, "Superseded". This 
will be applied to any DIP that is made obsolete by the 
implementation of a similar/alternative feature or the acceptance 
of a similar/alternative proposal.


Thanks to Mathias Lang for authoring the DIP, and apologies for 
the amount of time it took to get to this point. The initial 
delay early in the review process was one of the motivations for 
revising the procedure, and now, here at the end, this DIP has 
motivated a new addition to the procedure. So even though it 
wasn't accepted as written, it did contribute to improving the 
DIP process and was the impetus for the -check compiler switch.




Re: The New Fundraising Campaign

2019-01-18 Thread bachmeier via Digitalmars-d-announce
On Friday, 4 January 2019 at 10:30:07 UTC, Martin Tschierschke 
wrote:



Cool, what a wonderful start to the year 2019!
A big thank you to all pushing the development of D with money 
and time!

What next Mike?


Hopefully a campaign to put together a working forum. Would you 
invest major resources in a language that doesn't even have a 
usable forum?


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 09:41:14PM +0100, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> On 2019-01-18 21:23, H. S. Teoh wrote:
> 
> > Haha, that's just an old example from back in the bad ole days when
> > NTP syncing was rare, and everyone's PC was slightly off anywhere from
> > seconds to minutes (or, if it was really badly managed, hours, or maybe
> > the wrong timezone or whatever).
> 
> I had one of those issues at work. One day when I came in to work it
> was suddenly not possible to SSH into a remote machine. It worked the
> day before. Turns out the ntpd daemon was not running on the remote
> machine (for some reason), and we're using Kerberos with SSH, which
> means that if the clocks are too far out of sync you can't log in.
> That was a ... fun debugging experience.
[...]

Ouch.  Ouch!  That must not have been a pleasant experience in any sense
of the word.  Knowing all too well how these things tend to go, the
errors you get from the SSH log probably were very unhelpful, mostly
stemming from C's bad ole practice of returning a generic unhelpful
"failed" error code for all failures indiscriminately.  I had to work on
SSH-based code recently, and it's just ... not a nice experience overall
due to the way the C code was written.


T

-- 
GEEK = Gatherer of Extremely Enlightening Knowledge


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Paul Backus via Digitalmars-d-announce

On Friday, 18 January 2019 at 20:03:48 UTC, Mark wrote:

[...]

Represent types as strings, CTFE them as you see fit, and 
output a string that can then be mixin'ed to use the actual 
type. :)


Two problems:

1) Mixing in a string is unhygienic. If two modules (or two 
scopes in the same module) define types with the same name, you 
might get the wrong one (see the sketch after these two points).


2) You can't mixin the name of a Voldemort type.
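
To make problem 1 concrete, here is a minimal two-file sketch (module 
and symbol names are made up). The mixed-in string is looked up at the 
mixin site, so the local S silently wins:

// lib.d
module lib;
struct S { int x; }
string makeVar() { return "S tmp;"; }  // written with lib.S in mind

// app.d
module app;
import lib : makeVar;
struct S { string s; }  // a different S, local to app

void main()
{
    mixin(makeVar());                     // declares tmp using app.S, not lib.S
    static assert(is(typeof(tmp) == S));  // the local S won, silently
}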


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Stefan Koch via Digitalmars-d-announce

On Friday, 18 January 2019 at 20:32:35 UTC, Jacob Carlborg wrote:

On 2019-01-18 20:28, Stefan Koch wrote:

The only difference between type functions and what you describe 
is that they do not need to occupy a keyword 'type'.


You're using "alias" instead of my "type" keyword?


Yes. After all, what type functions do when returning types is to 
return an alias.

Since they are unable to create types, this is all they can do.


Re: D-lighted, I'm Sure

2019-01-18 Thread Jacob Carlborg via Digitalmars-d-announce

On 2019-01-18 21:23, H. S. Teoh wrote:


Haha, that's just an old example from back in the bad ole days when NTP
syncing was rare, and everyone's PC was slightly off anywhere from seconds
to minutes (or, if it was really badly managed, hours, or maybe the wrong
timezone or whatever).


I had one of those issues at work. One day when I came in to work it was 
suddenly not possible to SSH into a remote machine. It worked the day 
before. Turns out the ntpd daemon was not running on the remote machine 
(for some reason), and we're using Kerberos with SSH, which means that if 
the clocks are too far out of sync you can't log in. That was a ... fun 
debugging experience.


--
/Jacob Carlborg


Re: D-lighted, I'm Sure

2019-01-18 Thread Jacob Carlborg via Digitalmars-d-announce

On 2019-01-18 15:29, Mike Parker wrote:
Not long ago, in my retrospective on the D Blog in 2018, I invited folks 
to write about their first impressions of D. Ron Tarrant, who you may 
have seen in the Learn forum, answered the call. The result is the latest 
post on the blog, the first guest post of 2019. Thanks, Ron!


The blog:
https://dlang.org/blog/2019/01/18/d-lighted-im-sure/


Regarding Dub. If you only have a project without any dependencies, or 
perhaps only system dependencies already available on the system, it 
might not add that much. But as soon as you want to use someone else's D 
code it helps tremendously. Dub acts as both a build tool and a package 
manager. It will automatically download the source code for the 
dependencies, build them, and handle the import paths. As for JSON 
files, it's possible to use the alternative format SDL. One extremely 
valuable feature this has over JSON is that it supports comments.
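
For instance, a minimal dub.sdl sketch (the package name and version 
constraint are illustrative):

name "myapp"
description "A GtkD example application"
// comments like this are legal in SDL, but not in dub.json
dependency "gtk-d" version="~>3.8.0"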


To address some of the direct questions in the blog post:

"information about how I would go about packaging a D app (with GtkD) 
for distribution".


When it comes to distributing D applications there isn't much that is 
specific to D. Most of the approaches and documentation that apply to 
any native language apply to D as well. There are two D-specific 
things (that I can think of for now) that are worth mentioning:


* When you compile a release build for distribution, use the LDC [1] 
compiler. It produces better code. You can also add things like LTO 
(Link Time Optimization) and possibly PGO (Profile Guided Optimization).


* If you have any static assets for your application, like images, 
sounds, videos, config files, or similar, it's possible to embed those 
directly in the executable using the "import expression" [2] feature. 
This reads a file, at compile time, into a string literal in the code.
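
A minimal sketch of the idea (file and directory names are made up):

// compile with: dmd -Jassets app.d
// -J tells the compiler which directories import("...") may read from
immutable logo = import("logo.png");  // assets/logo.png, baked in at compile time

void main()
{
    assert(logo.length > 0);  // logo is an ordinary string at run time
}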


Some more general things about distribution: I think it's more 
platform-specific than language-specific. I can only speak for macOS 
(since that's the main platform I use). There it's expected to have the 
application distributed as a disk image (DMG). This image would contain 
an application bundle. An application bundle is a regular directory with 
the ".app" extension and a specific directory and file structure. Some 
applications in the OS treat these bundles specially. For example, 
double-clicking on the bundle in the file browser will launch the 
application. The bundle will contain the actual executable and 
resources like libraries and assets like images and audio. In your case, 
don't expect a Mac user to have GTK installed; ship it inside the 
application bundle.


Then there's the issue of which versions of the platforms you want to 
support. For macOS it's possible to specify a minimum deployment target 
using the "MACOSX_DEPLOYMENT_TARGET" environment variable. This allows 
you to build the application on the latest version of the OS but still 
have it work on older versions.


On *BSD and Linux it's not that easy, and Linux has the additional axis 
of distros, which adds another level of issues. The best approach is to 
compile for each distro and version you want to support, but that's a 
lot of work. I would provide fully statically linked binaries, 
statically linked even to the C standard library. This way you can 
provide one binary for all versions and distros of Linux, and you know 
it will always work.


"how to build on one platform for distribution on another (if that’s 
even possible)"


I can say that it's possible, but unless you're targeting a platform 
that doesn't provide a compiler, like mobile or an embedded platform, I 
think it's rare to need to cross-compile. I'll tell you why:


When building an application that targets multiple platforms, you need 
to test it at some point. That means running the application on all the 
supported platforms, which in turn means you need access to those 
platforms -- usually a lot in the beginning, while developing the 
application, and at the end, when doing the final verification before a 
release.


Having said that, I'm all for automating as much as possible. That means 
automatically running all the tests, building the final release, and 
packaging it for distribution. For that I recommend hooking up your 
project to one of the publicly available and free CI services. Travis CI 
[3] is one of them that supports Linux, macOS, and Windows (early 
release [4]). AppVeyor is an alternative with much more mature support 
for Windows.


If you really want to cross-compile, it's possible if you use LDC. DMD 
can compile for the same platform for either 32 bit or 64 bit, but not 
for a different platform. I think it's simplest to use Docker. I have 
two Dockerfiles for containers with LDC set up for cross-compiling to 
macOS [6] and to Windows [7]. Unfortunately these Dockerfiles pull down 
the SDKs from someone's (not mine) Dropbox account.


[1] 

Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Jacob Carlborg via Digitalmars-d-announce

On 2019-01-18 20:28, Stefan Koch wrote:

The only difference that type-functions have from what you describe is 
that it does not need to occupy a keyword 'type'.


You're using "alias" instead of my "type" keyword?

--
/Jacob Carlborg


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 08:03:48PM +0000, Mark via Digitalmars-d-announce wrote:
[...]
> Why not do away with AliasSeq and use strings all the way?
> 
> string Constify(string type)
> {
> // can add input checks here
> return "const(" ~ type ~ ")";
> }
> 
> void main()
> {
> import std.algorithm : map;
> enum someTypes = ["int", "char", "bool"];
> enum constTypes = map!Constify(someTypes);
> mixin(constTypes[0] ~ "myConstInt = 42;"); // const(int) myConstInt = 42;
> }
> 
> Represent types as strings, CTFE them as you see fit, and output a
> string that can then be mixin'ed to use the actual type. :)

That would work, but it would also suffer from all the same problems as
macro-based programming in C.  The compiler would be unable to detect
when you accidentally pasted type names together where you intended them
to be separate, the strings may not actually represent real types, and
generating code by pasting / manipulating strings is very error-prone.
And you could write very unmaintainable code, like pasting partial tokens
together as strings, etc., which makes it hard for anyone else
(including yourself after 3 months) to understand just what the code is
trying to do.

Generally, you want some level of syntactic / semantic enforcement by
the compiler when you manipulate lists (or whatever other structures) of
types.
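
For comparison, a sketch of the same mapping done with std.meta, where 
every step stays inside the type system:

import std.meta : AliasSeq, staticMap;

alias Constify(T) = const(T);
alias Types = AliasSeq!(int, char, bool);

// a typo in a type name here is an immediate compile error, not a bad
// string that only blows up at some distant mixin site
static assert(is(staticMap!(Constify, Types)[0] == const(int)));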


T

-- 
INTEL = Only half of "intelligence".


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 08:03:09PM +0000, Neia Neutuladh via 
Digitalmars-d-announce wrote:
> On Fri, 18 Jan 2019 11:43:58 -0800, H. S. Teoh wrote:
> > (1) it often builds unnecessarily -- `touch source.d` and it
> > rebuilds source.d even though the contents haven't changed; and
> 
> Timestamp-based change detection is simple and cheap. If your
> filesystem supports a revision id for each file, that might work
> better, but I haven't heard of such a thing.

Barring OS/filesystem support, there are recent OS features like inotify
that let a build daemon listen for changes to files within a
subdirectory. Tup, for example, uses this to make build times
proportional to the size of the changeset rather than the size of the
entire workspace.  I consider this an essential feature of a modern
build system.

Timestamp-based change detection also does needless work even when there
*is* a change.  For example, edit source.c, change a comment, and make
will recompile it all the way down -- .o file, .so file or executable,
all dependent targets, etc.  Whereas content-based change detection
(e.g. md5-checksum-based) will stop at the .o step, because the comment
did not cause the .o file to change, so further actions like linking
into the executable are superfluous and can be elided.  For small
projects the difference is negligible, but for large-scale projects this
can mean the difference between a few seconds -- usable for a
high-productivity code-compile-test cycle -- and half an hour, which
completely breaks the productivity cycle.
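
A minimal sketch of that kind of content-based check (the stamp-file 
scheme is illustrative, not how any particular build tool does it):

import std.digest.md : md5Of;
import std.file : exists, read, write;

// true if source changed since the hash recorded in stamp was written
bool needsRebuild(string source, string stamp)
{
    const sum = md5Of(read(source));  // hash of the current contents
    if (stamp.exists && cast(const(ubyte)[]) read(stamp) == sum[])
        return false;                 // same content: skip the rebuild
    write(stamp, sum[]);              // remember the new hash
    return true;
}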


> If you're only dealing with a small number of small files,
> content-based change detection might be a reasonable option.

Content-based change detection is essential IMO. It's onerous if you use
the old scan-the-entire-source-tree model of change detection; it's
actually quite practical if you use a modern inotify- (or equivalent)
based system.


> > (2) it often fails to build necessary targets -- if for whatever
> > reason your system clock is out-of-sync or whatever, and a newer
> > version of source.d has an earlier date than a previously-built
> > object.
> 
> I'm curious what you're doing that you often have clock sync errors.

Haha, that's just an old example from back in the bad ole days when NTP
syncing was rare, and everyone's PC was slightly off anywhere from seconds
to minutes (or, if it was really badly managed, hours, or maybe the wrong
timezone or whatever).  The problem was most manifest when networked
filesystems were involved.

These days, clock sync isn't really a problem anymore, generally
speaking, but there's still something else about make that makes it fail
to pick up changes.  I still regularly have to `make clean; make`
makefile-based projects just to get the lousy system to pick up the
changes.  I don't have that problem with more modern build systems.
Probably it's an issue of undetected dependencies.


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't 
seem to remove the bugs on my system! -- Mike Dresser


Re: D-lighted, I'm Sure

2019-01-18 Thread Neia Neutuladh via Digitalmars-d-announce
On Fri, 18 Jan 2019 11:43:58 -0800, H. S. Teoh wrote:
> (1) it often builds unnecessarily -- `touch source.d` and it rebuilds
> source.d even though the contents haven't changed; and

Timestamp-based change detection is simple and cheap. If your filesystem 
supports a revision id for each file, that might work better, but I 
haven't heard of such a thing.

If you're only dealing with a small number of small files, content-based 
change detection might be a reasonable option.

> (2) it often fails to build necessary targets -- if for whatever reason
> your system clock is out-of-sync or whatever, and a newer version of
> source.d has an earlier date than a previously-built object.

I'm curious what you're doing that you often have clock sync errors.


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Mark via Digitalmars-d-announce
On Thursday, 17 January 2019 at 20:47:38 UTC, Steven 
Schveighoffer wrote:


well, there was no static foreach for that article (which I 
admit I didn't read, but I know what you mean).


But it's DEFINITELY not as easy as it could be:

import std.conv;

alias AliasSeq(P...) = P;

template staticMap(alias Transform, Params...)
{
    alias seq0 = Transform!(Params[0]);
    static foreach(i; 1 .. Params.length)
    {
        mixin("alias seq" ~ i.to!string ~ " = AliasSeq!(seq"
            ~ (i-1).to!string ~ ", Transform!(Params[" ~ i.to!string ~ "]));");
    }
    mixin("alias staticMap = seq" ~ (Params.length - 1).to!string ~ ";");
}

alias Constify(T) = const(T);

void main()
{
    alias someTypes = AliasSeq!(int, char, bool);
    pragma(msg, staticMap!(Constify, someTypes));
    // (const(int), const(char), const(bool))
}

Note that this would be a LOT easier with string interpolation...

mixin("alias seq${i} = AliasSeq!(seq${i-1}, Transform!(Params[${i}]));".text);


-Steve


Why not do away with AliasSeq and use strings all the way?

string Constify(string type)
{
    // can add input checks here
    return "const(" ~ type ~ ")";
}

void main()
{
    import std.algorithm : map;
    enum someTypes = ["int", "char", "bool"];
    enum constTypes = map!Constify(someTypes);
    mixin(constTypes[0] ~ "myConstInt = 42;"); // const(int) myConstInt = 42;
}

Represent types as strings, CTFE them as you see fit, and output 
a string that can then be mixin'ed to use the actual type. :)


Re: D-lighted, I'm Sure

2019-01-18 Thread Meta via Digitalmars-d-announce

On Friday, 18 January 2019 at 16:42:15 UTC, Ron Tarrant wrote:
Just to set the record straight, I only had access to that 
Coleco Adam for the few weeks I was in that Newfoundland 
outport. Within a year, I too had my very own C-64 plugged into 
a monster Zenith console job. Remember those? I don't remember 
what I paid for a used C-64, but the Zenith 26" was $5 at a 
garage sale up the street and another $5 for delivery.


Great read, Ron. Can I ask which town in Newfoundland it was where 
you stayed back in 1985?


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 06:59:59PM +0000, JN via Digitalmars-d-announce wrote:
[...]
> The trick with makefiles is that they work well for a single
> developer, or a single project, but become an issue when dealing with
> multiple libraries, each one coming with its own makefile (if you're
> lucky; if you're not, you have multiple CMake/SCons/etc. systems to
> deal with). Makefiles are very tricky to make cross-platform,
> especially on Windows, and usually they aren't enough; I've often seen
> people use bash/python/ruby scripts to drive the building process
> anyway.

Actually, the problems I had with makefiles come from within single
projects.  One of the most fundamental problems with Make, which is also
part of its core design, is that it's timestamp-based.  This means:

(1) it often builds unnecessarily -- `touch source.d` and it rebuilds
source.d even though the contents haven't changed; and

(2) it often fails to build necessary targets -- if for whatever reason
your system clock is out-of-sync or whatever, and a newer version of
source.d has an earlier date than a previously-built object.

Furthermore, makefiles generally do not have a global view of your
workspace, so builds are not reproducible (unless you go out of your way
to make them so).  Running `make` after editing some source files does
not guarantee you'll end up with the same executables as if you checked
in your changes, did a fresh checkout, and ran `make`.  I've had horrible
all-nighters looking for heisenbugs that have no representation in the
source code, but are caused by make picking up stale object files from
who knows how many builds ago.  You end up having to `make clean; make`
every other build "just to be sure", which is really stupid in this day
and age.  (And even `make clean` does not guarantee a clean workspace --
more projects than I care to count exhibit this problem.)

Then there's parallel building (which again requires explicit effort),
the macro hell typical of tools from that era, etc.  I've already ranted
about this at great length before, so I'm not going to repeat it here.
But make is currently near (if not at) the bottom of my list of
build tools, for many, many reasons.


Ultimately, as I've already said elsewhere, what is needed is a
*standard tool-independent dependency graph declaration* attached to
every project, that captures the dependency graph of the project in a
way that any tool that understands the standard format can parse and act
on.  At the core of it, every build system out there is essentially just
an implementation of a directed acyclic graph walk -- a standard problem
with standard algorithms to solve.  But everybody rolls their own
implementation, gratuitously incompatible with everything else, and so we
find ourselves today with multiple, incompatible build systems that, in
large-scale software, often have to somehow co-exist within the same
project.
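
To illustrate, the core of every one of those tools is little more than 
this sketch of a post-order DAG walk (target names are made up; the 
graph is assumed acyclic):

import std.stdio : writeln;

struct Target { string[] deps; }

// build each target at most once, dependencies first
void build(string name, const Target[string] graph, ref bool[string] done)
{
    if (name in done) return;
    done[name] = true;
    foreach (dep; graph[name].deps)
        build(dep, graph, done);
    writeln("building ", name);  // stand-in for running the actual command
}

void main()
{
    const graph = [
        "app" : Target(["a.o", "b.o"]),
        "a.o" : Target([]),
        "b.o" : Target([]),
    ];
    bool[string] done;
    build("app", graph, done);  // prints a.o, b.o, then app
}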


> The big thing dub provides is package management. Having a package
> manager is an important thing for a language nowadays. Gone are the
> days of hunting for library sources and figuring out where to put
> includes. Just add a line to your dub.json file and you have the
> library. Need to upgrade to a newer version? Just change the version
> in the dub.json file. Need to download the project from scratch? No
> problem, dub can use the json file to download all the dependencies
> in the proper versions.

Actually, I have the opposite problem.  All too often, my projects that
depend on some external library become uncompilable because said library
has upgraded from version X to version Z, and version X doesn't exist
anymore (the oldest version is now Y), or upstream made an incompatible
change, or the network is down and dub can't download the right version,
etc..

These days, I'm very inclined to just download the exact version of the
source code that I need, and include it as part of my source tree, just
so there will be no gratuitous breakage due to upstream changes, old
versions being no longer supported, or OS changes that break pre-shipped
.so files, and all of that nonsense.  Just compile the damn thing from
scratch from the exact version of the sources that you KNOW works --
sources that you have in hand RIGHT HERE instead of somewhere out there
in the nebulous "cloud" which happens to be unreachable right now,
because your network is down and in order to fix the network you need to
compile this tool that depends on said missing sources.

I understand it's convenient for the package manager to "automatically"
install dependencies for you, refresh to the latest version, and
what-not. But frankly, I find that the amount of effort it takes to
download the source code of some library and set up the include paths
manually is minuscule compared to the dependency hell I have to deal
with in a system like dub.

These days I almost automatically write off 3rd party libraries that
have too many dependencies.  The best kind of 3rd party code is the
standalone kind, 

Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Stefan Koch via Digitalmars-d-announce

On Friday, 18 January 2019 at 10:23:11 UTC, Jacob Carlborg wrote:

On 2019-01-17 23:44, H. S. Teoh wrote:

YES!  This is the way it should be.  Type-tuples become first class 
citizens, and you can pass them around to functions and return them 
from functions

No no no, not only type-tuples, you want types to be first 
class citizens. This makes it possible to store a type in a 
variable, pass it to and return from functions. Instead of a 
type-tuple, you want a regular array of types. Then it would be 
possible to use the algorithms in std.algorithm to manipulate 
the arrays. I really hate that today one needs to resort to 
things like staticMap and staticIndexOf.


Of course, if we get both tuples and types as first class 
citizens it would be possible to store types in these tuples as 
well. But a tuple is usually immutable and I'm not sure if it 
would be possible to use std.algorithm on that.


It would be awesome to be able to do things like this:

type foo = int;

type bar(type t)
{
return t;
}

auto u = [byte, short, int, long].map!(t => t.unsigned).array;
assert(u == [ubyte, ushort, uint, ulong]);


Yes, you will be able to do exactly what you describe above. 
Type-tuples are strictly a superset of types; they also include 
true compile-time constants (e.g. things you can use to 
instantiate a template with).


Within type functions you are able to create an `alias[]`, which 
is in some ways equivalent to a type-tuple (and will be converted 
to one upon being returned out of a type function), which you can 
append to if you own it, and type functions can also take other 
type functions as parameters. Therefore it's perfectly possible 
to implement staticMap in terms of type functions.

I already did the semantic sanity checks, and it shows promise.

The only difference between type functions and what you describe 
is that they do not need to occupy a keyword 'type'.


Cheers,
Stefan


Re: D-lighted, I'm Sure

2019-01-18 Thread JN via Digitalmars-d-announce

On Friday, 18 January 2019 at 18:48:00 UTC, H. S. Teoh wrote:
I'm also not a big fan of dub, but I'm in the minority around 
these parts.  Having grown up on makefiles and dealt with them 
in a large project at my day job, I've developed a great 
distaste for them, and nowadays the standard build tool I reach 
for is SCons.  Though possibly in the not-so-distant future I 
might start using something more scalable like Tup, or Button, 
written by one of our very own D community members. But for 
small projects, just plain ole dmd is Good Enough(tm) for me.


The trick with makefiles is that they work well for a single 
developer, or a single project, but become an issue when dealing 
with multiple libraries, each one coming with its own makefile 
(if you're lucky; if you're not, you have multiple 
CMake/SCons/etc. systems to deal with). Makefiles are very tricky 
to make cross-platform, especially on Windows, and usually they 
aren't enough; I've often seen people use bash/python/ruby 
scripts to drive the building process anyway.


The big thing dub provides is package management. Having a 
package manager is an important thing for a language nowadays. 
Gone are the days of hunting for library sources and figuring out 
where to put includes. Just add a line to your dub.json file and 
you have the library. Need to upgrade to a newer version? Just 
change the version in the dub.json file. Need to download the 
project from scratch? No problem, dub can use the json file to 
download all the dependencies in the proper versions.


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 12:06:54PM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 1/18/19 11:42 AM, Ron Tarrant wrote:
[...]
> > Just to set the record straight, I only had access to that Coleco
> > Adam for the few weeks I was in that Newfoundland outport. Within a
> > year, I too had my very own C-64 plugged into a monster Zenith
> > console job.  Remember those? I don't remember what I paid for a
> > used C-64, but the Zenith 26" was $5 at a garage sale up the street
> > and another $5 for delivery.
> 
> I had to use my parents' TV in the living room :) And I was made to
> learn typing before I could play games on it, so cruel...
[...]

Wow, what cruelty! ;-)  The Apple II was my first computer ever, and I
spent 2 years playing computer games on it until they were oozing out of
my ears.  Then I got so fed up with them that I decided I'm gonna write
my own.  So began my journey into BASIC, and then 6502 assembly, etc..

A long road later, I ended up here with D.


T

-- 
This is a tpyo.


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 02:29:14PM +0000, Mike Parker via 
Digitalmars-d-announce wrote:
[...]
> The blog:
> https://dlang.org/blog/2019/01/18/d-lighted-im-sure/
[...]

Very nice indeed!  Welcome aboard, Ron!

And wow... 6502?  That's what I grew up on too!  I used to remember most
of the opcodes by heart... though nowadays that memory has mostly faded
away.  The thought of it still evokes nostalgic feelings, though.

I'm also not a big fan of dub, but I'm in the minority around these
parts.  Having grown up on makefiles and dealt with them in a large
project at my day job, I've developed a great distaste for them, and
nowadays the standard build tool I reach for is SCons.  Though possibly
in the not-so-distant future I might start using something more scalable
like Tup, or Button, written by one of our very own D community members.
But for small projects, just plain ole dmd is Good Enough(tm) for me.

I won't bore you with my boring editor, vim (with no syntax highlighting
-- yes I've been told I'm crazy, and in fact I agree -- just plain ole
text, with little things like autoindenting, no fancy IDE features --
Linux is my IDE, the whole of it :-P).  Vim users seem to be out in force
around these parts for some reason, besides the people clamoring for a
"proper" IDE, but I suspect I'm the only one who deliberately turns
*off* syntax highlighting, and indeed, any sort of color output from dmd
or any other tools (I find it distracting). So don't pay too much heed
to what I say, at least on this subject. :-D


T

-- 
Живёшь только однажды.


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 11:23:11AM +0100, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> On 2019-01-17 23:44, H. S. Teoh wrote:
> 
> > YES!  This is the way it should be.  Type-tuples become first class
> > citizens, and you can pass them around to functions and return them
> > from functions
> No no no, not only type-tuples, you want types to be first class
> citizens.  This makes it possible to store a type in a variable, pass
> it to and return from functions. Instead of a type-tuple, you want a
> regular array of types.  Then it would be possible to use the
> algorithms in std.algorithm to manipulate the arrays. I really hate
> that today one needs to resort to things like staticMap and
> staticIndexOf.

Yes, that would be the next level of symmetry. :-D  Types as first class
citizens would eliminate another level of distinctions that leads to the
necessity of staticMap, et al.  But it will also require changing the
language in a much more fundamental, invasive way.

So I'd say, let's take it one step at a time.  Start with first-class
type-tuples, then once that's ironed out and working well, take it to
the next level and have first-class types.  Trying to leap from here to
there in one shot is probably a little too ambitious, with too high a
chance of failure.


[...]
> It would be awesome to be able to do things like this:
> 
> type foo = int;
> 
> type bar(type t)
> {
> return t;
> }
> 
> auto u = [byte, short, int, long].map!(t => t.unsigned).array;
> assert(u == [ubyte, ushort, uint, ulong]);
[...]

Yes, this would be awesome.  But in order to avoid unmanageable
implementation complexity, all of these would have to be
compile-time-only constructs.


T

-- 
Your inconsistency is the only consistent thing about you! -- KD


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 17, 2019 at 05:32:52PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
> On 1/17/2019 11:31 AM, H. S. Teoh wrote:
> > [...]
> 
> Thanks for the thoughtful and well-written piece.
> 
> But there is a counterpoint: symmetry in mathematics is one thing, but
> symmetry in human intuition is not. Anytime one is dealing in human
> interfaces, one runs into this.  I certainly did with the way imports
> worked in D. The lookups worked exactly the same for any sort of
> symbol lookup. I thought it was great.
> 
> But I was unable to explain it to others. Nobody could understand it
> when I said imported symbol lookup worked exactly like any lookup in a
> name space.  They universally said it was "unintuitive", filed bug
> reports, etc.  Eventually, I had to give it up. Now import lookup
> follows special different rules, people are happy, and I learned
> (again) that symmetry doesn't always produce the best outcomes.

Alas, it's true, it's true, 100% symmetry is, in the general case,
impossible to achieve.  If we wanted 100% mathematical symmetry, one
could argue lambda calculus is the best programming language ever,
because it's Turing complete, the syntax is completely uniform with no
quirky exceptions, and the semantics are very clearly defined with no
ambiguity anywhere.  Unfortunately, these very characteristics are also
what make lambda calculus impossible to work with for anything but the
most trivial of programs. It's completely unmaintainable, extremely hard
to read, and has non-trivial semantics that vary wildly with the
smallest changes to the code.

For a human-friendly programming language, any symmetry must necessarily
be based on human expectations.  Unfortunately, as you learned, human
intuition varies from person to person, and indeed, is often
inconsistent even with the same person, so trying to maximise symmetry
in a way that doesn't become "counterintuitive" is a pretty tall order.

As somebody (perhaps you) said once, in Boeing's experience with
designing intuitive UIs, they discovered that what people consider
"intuitive" is partly colored by their experience, and their experience
is in turn shaped by the UIs they interact with.  So it's a feedback
loop, which means what's "intuitive" is not some static set of rules
(even allowing for arbitrarily complex rules), but it's a *moving
target*, the hardest thing to design for.  What's considered "intuitive"
today may be considered "totally counter-intuitive" 10 years from now.

In the case of imports, I'd argue that the problem is with how people
understand the word "import".  From a compiler's POV, the simplest, most
straightforward (and most symmetric!) definition is "pull in the symbols
into the local scope".  Unfortunately, that's not the understanding most
programmers have.  Perhaps in an older, bygone era people might have
been more open to that sort of definition, but in this day and age of
encapsulation and modularity, "pull in symbols into the local scope"
does not adequately capture people's expectations: it violates
encapsulation, in the following sense: symbols from the imported module
shadow local symbols, which goes against the expectation that the local
module is an encapsulated thing, inviolate from outside interference.
It breaks the expectation of encapsulation.  It breaks the symmetry that
everywhere else, outside code cannot interfere with local symbols.

Consequently, the expectation is that imported symbols are somehow
"second class" relative to local symbols -- imported symbols don't
shadow local symbols (unless you explicitly ask for it), and thus
encapsulation is preserved (in some sense).  So we have here a conflict
between different axes of symmetry: the symmetry of every module being
an inviolate, self-contained unit (encapsulation), and the symmetry of
having the same rules for symbol lookup no matter where the symbol came
from.  It's a toss-up which axis of symmetry one should strive for, and
which one should be compromised.
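
As a concrete sketch of that compromise in today's D (the symbol choice 
is arbitrary), a local declaration wins over an imported one instead of 
being shadowed by it:

import std.algorithm;  // brings in std.algorithm.sort, among others

// module-local declaration with the same name as the imported symbol
int[] sort(int[] arr) { return arr; }

void main()
{
    // lookup finds the local sort first; the import does not shadow it.
    // reaching the library version requires writing std.algorithm.sort.
    auto r = sort([3, 1, 2]);
}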

I'd say the general principle ought to be that the higher-level symmetry
(encapsulation of modules) should override the lower-level symmetry (the
mechanics of symbol lookup).  But this is easy to say because hindsight
is 20/20; it's not so simple at the time of decision-making because it's
not obvious which symmetries are in effect and what their relative
importance should be.  And there's always the bugbear that symmetry from
the implementor's (compiler writer's) POV does not necessarily translate
to symmetry from the user's (language user's) POV.

Still, I'd say that in a general sense, symmetry ought to be a
relatively high priority as far as designing language features or
adding/changing features are concerned.  Adding a new feature with
little regard for how it interacts with existing features, what new
corner cases it might introduce, etc., is generally a bad idea. Striving
for maximal symmetry should at least give you a ballpark idea for where
things should be headed, 

Re: D-lighted, I'm Sure

2019-01-18 Thread Steven Schveighoffer via Digitalmars-d-announce

On 1/18/19 11:42 AM, Ron Tarrant wrote:

On Friday, 18 January 2019 at 15:08:48 UTC, Steven Schveighoffer wrote:

Nice read! And welcome to Ron! I too, started with BASIC, but on a 
Commodore 64 :)



Thanks, Steve.

Just to set the record straight, I only had access to that Coleco Adam 
for the few weeks I was in that Newfoundland outport. Within a year, I 
too had my very own C-64 plugged into a monster Zenith console job. 
Remember those? I don't remember what I paid for a used C-64, but the 
Zenith 26" was $5 at a garage sale up the street and another $5 for 
delivery.


I had to use my parents' TV in the living room :) And I was made to 
learn typing before I could play games on it, so cruel...


-Steve


Re: D-lighted, I'm Sure

2019-01-18 Thread Ron Tarrant via Digitalmars-d-announce
On Friday, 18 January 2019 at 15:08:48 UTC, Steven Schveighoffer 
wrote:


Nice read! And welcome to Ron! I too, started with BASIC, but 
on a Commodore 64 :)


-Steve

Thanks, Steve.

Just to set the record straight, I only had access to that Coleco 
Adam for the few weeks I was in that Newfoundland outport. Within 
a year, I too had my very own C-64 plugged into a monster Zenith 
console job. Remember those? I don't remember what I paid for a 
used C-64, but the Zenith 26" was $5 at a garage sale up the 
street and another $5 for delivery.





Re: D-lighted, I'm Sure

2019-01-18 Thread Steven Schveighoffer via Digitalmars-d-announce

On 1/18/19 9:29 AM, Mike Parker wrote:
Not long ago, in my retrospective on the D Blog in 2018, I invited folks 
to write about their first impressions of D. Ron Tarrant, who you may 
have seen in the Learn forum, answered the call. The result is the latest 
post on the blog, the first guest post of 2019. Thanks, Ron!


As a reminder, I'm still looking for new-user impressions and guest 
posts on any D-related topic. Please contact me if you're interested. 
And don't forget, there's a bounty for guest posts, so you can make a 
bit of extra cash in the process.


The blog:
https://dlang.org/blog/2019/01/18/d-lighted-im-sure/

Reddit:
https://www.reddit.com/r/programming/comments/ahawhz/dlighted_im_sure_the_first_two_months_with_d/ 



Nice read! And welcome to Ron! I too, started with BASIC, but on a 
Commodore 64 :)


-Steve


D-lighted, I'm Sure

2019-01-18 Thread Mike Parker via Digitalmars-d-announce
Not long ago, in my retrospective on the D Blog in 2018, I 
invited folks to write about their first impressions of D. Ron 
Tarrant, who you may have seen in the Learn forum, answered the 
call. The result is the latest post on the blog, the first guest 
post of 2019. Thanks, Ron!


As a reminder, I'm still looking for new-user impressions and 
guest posts on any D-related topic. Please contact me if you're 
interested. And don't forget, there's a bounty for guest posts, 
so you can make a bit of extra cash in the process.


The blog:
https://dlang.org/blog/2019/01/18/d-lighted-im-sure/

Reddit:
https://www.reddit.com/r/programming/comments/ahawhz/dlighted_im_sure_the_first_two_months_with_d/


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Jacob Carlborg via Digitalmars-d-announce

On 2019-01-17 23:44, H. S. Teoh wrote:


Interesting.  Is it possible to assign a "fake" mangle to type functions
that never actually gets emitted into the object code, but just enough
to make various internal compiler stuff that needs to know the mangle
work properly?


Not sure that would be possible. I tried to add support for 
pragma(mangle) on alias declarations. That opened a can of worms. It 
turns out that the compiler uses the mangling of a type to compare 
types internally.


--
/Jacob Carlborg


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread Jacob Carlborg via Digitalmars-d-announce

On 2019-01-17 23:44, H. S. Teoh wrote:


YES!  This is the way it should be.  Type-tuples become first class
citizens, and you can pass them around to functions and return them from
functions

No no no, not only type-tuples, you want types to be first class 
citizens. This makes it possible to store a type in a variable, pass it 
to and return from functions. Instead of a type-tuple, you want a 
regular array of types. Then it would be possible to use the algorithms 
in std.algorithm to manipulate the arrays. I really hate that today one 
needs to resort to things like staticMap and staticIndexOf.


Of course, if we get both tuples and types as first class citizens it 
would be possible to store types in these tuples as well. But a tuple is 
usually immutable and I'm not sure if it would be possible to use 
std.algorithm on that.


It would be awesome to be able to do things like this:

type foo = int;

type bar(type t)
{
return t;
}

auto u = [byte, short, int, long].map!(t => t.unsigned).array;
assert(u == [ubyte, ushort, uint, ulong]);

--
/Jacob Carlborg


Re: Top Five World’s Most Underrated Programming Languages

2019-01-18 Thread JN via Digitalmars-d-announce

On Friday, 18 January 2019 at 08:55:23 UTC, Paulo Pinto wrote:
Apparently Google is ramping up the use of Rust in Fuchsia and 
hiring quite a few devs.


Azure IoT Edge uses a mix of C# and Rust.

Rust has lately gotten a lot of attention from game developers. 
Several game studios have announced they are switching from C++ to 
Rust. I think the developing compile-to-WebAssembly story is 
helping with that as well, because people can compile their games 
for the web platform.


Re: Top Five World’s Most Underrated Programming Languages

2019-01-18 Thread Paulo Pinto via Digitalmars-d-announce

On Friday, 18 January 2019 at 03:41:38 UTC, Brian wrote:
On Monday, 14 January 2019 at 20:21:25 UTC, Andrei Alexandrescu 
wrote:

Of possible interest:

https://www.technotification.com/2019/01/most-underrated-programming-languages.html


Because no software uses it.

Examples:
1. Docker uses Golang.
2. Middleware systems use Java.
3. Shell scripting uses Python.
4. AI uses Python and R.
5. Desktop applications use Qt / C#.
6. Web frameworks & databases use PHP's Laravel and Java's 
Spring Boot.

7. The web uses JavaScript / TypeScript.


Google is using Go for gVisor and Fuchsia, MIT for Biscuit, TUM 
(Munich) for userspace high-performance network drivers, in 
spite of the naysayers regarding Go and systems programming.


Apparently Google is ramping up the use of Rust in Fuchsia and 
hiring quite a few devs.


Azure IoT Edge uses a mix of C# and Rust.

C# support for low-level systems programming has been looking 
better every release since they started integrating Midori 
lessons into it, while making it excel in the TechEmpower 
benchmarks and working closely with Unity.


Now C# support is starting to be a thing all AAA devs wish for in 
their game engines, even if only for gameplay scripts, while 
Unity is doubling down on improving C# AOT compilation via their 
HPC# subset (C#'s -betterC, via IL2CPP).


D really needs its killer use case if it is to move away from 
that list.