[OT] uncovering x86 hardware bugs and unknown instructions by fuzzing.

2017-07-31 Thread Guillaume Chatelet via Digitalmars-d

Some people here might find this interesting:
https://github.com/xoreaxeaxeax/sandsifter

White paper here:
https://github.com/xoreaxeaxeax/sandsifter/blob/master/references/domas_breaking_the_x86_isa_wp.pdf


The progress of D since 2013

2017-07-31 Thread Maxim Fomin via Digitalmars-d

Hi!

Good to see D is progressing! I was an active forum and Bugzilla 
participant in 2011-2013. Since then I have not touched D.


What is the progress of D (2014-2017) in the following dimensions:
1) Support for linking on Win64? AFAIK Walter introduced Win64 
support circa 2012, which was big progress. However, support for 
Win64 linking was limited because dmd defaulted to the old DMC 
linker, and Walter didn't plan to do anything about this.
2) What is the support for other platforms? AFAIK there was 
progress on Android. From my recollection, full support of 
Android was expected at that time.
3) What is the state of the GC? AFAIK there were some improvements 
for the GC submitted as GSoC projects, but they were not merged as 
of 2013. I see in the changelog that there are some improvements 
in speed and that configuration support was added.
4) What is the state of GDC/LDC? The GDC team was actively working 
on getting GDC included in the GCC project. Do GDC and LDC still 
pull the D frontend, so there is essentially 1 frontend (where the 
GDC and LDC frontends lag several versions behind) + 3 backends? I 
see in the changelog some dmd backend improvements. How does the 
dmd backend compare with C++/GDC/LDC? AFAIK in 2013 there was a 
tradeoff: either you use dmd with the brand-new frontend, or 
gdc/ldc where performance is comparable to gcc but the frontend 
lags behind. Is it still true?
5) What is the progress with CTFE? I see a lot of discussions in 
the forum archive devoted to the development of CTFE. What is the 
summary of CTFE development in recent years?
6) I don't see any significant changes in D's core from the dlang 
documentation (except those mentioned in the changelog for 
2014-2017). Is that true, or is the official spec delayed as usual 
:)? Is the dlang spec fully and frequently updated, or is it 
sparse as in the past? Is the TDPL book still relevant?
7) Are UDAs still compile-time only? Are there plans to make them 
available at runtime as well?
8) What is the progress with shared and immutable? AFAIK the 
compiler support for shared was not complete, and the Phobos 
library itself was not 'immutable-' and 'shared-correct'.

9) Is D gaining popularity?
10) Anything else a 2013 D user must know? :) I don't ask about 
Phobos because according to the changelog the progress is 
enormous, incremental and targets several directions - I doubt 
it can be easily summarised...


Thanks!



Re: The progress of D since 2013

2017-07-31 Thread rikki cattermole via Digitalmars-d

On 31/07/2017 8:22 AM, Maxim Fomin wrote:

Hi!

Good to see D is progressing! I was an active forum and Bugzilla 
participant in 2011-2013. Since then I have not touched D.


What is the progress of D (2014-2017) in the following dimensions:
1) Support for linking on Win64? AFAIK Walter introduced Win64 support 
circa 2012, which was big progress. However, support for Win64 
linking was limited because dmd defaulted to the old DMC linker, and 
Walter didn't plan to do anything about this.


Optlink is still the default. The MSVC linker can be used for both 
32-bit and 64-bit.


2) What is the support for other platforms? AFAIK there was progress on 
Android. From my recollection, full support of Android was expected at 
that time.
3) What is the state of the GC? AFAIK there were some improvements for 
the GC submitted as GSoC projects, but they were not merged as of 2013. 
I see in the changelog that there are some improvements in speed and 
that configuration support was added.
4) What is the state of GDC/LDC? The GDC team was actively working on 
getting GDC included in the GCC project. Do GDC and LDC still pull the 
D frontend, so there is essentially 1 frontend (where the GDC and LDC 
frontends lag several versions behind) + 3 backends? I see in the 
changelog some dmd backend improvements. How does the dmd backend 
compare with C++/GDC/LDC? AFAIK in 2013 there was a tradeoff: either 
you use dmd with the brand-new frontend, or gdc/ldc where performance 
is comparable to gcc but the frontend lags behind. Is it still true?
5) What is the progress with CTFE? I see a lot of discussions in the 
forum archive devoted to the development of CTFE. What is the summary 
of CTFE development in recent years?
5) What is the progress with CTFE? I see a lot of discussions in forum 
archive devoted to the development of CTFE. What is the summary of CTFE 
development in recent years?


There is a new implementation (newCTFE) by Stefan Koch, aimed at 
making it faster and cheaper.


6) I don't see any significant changes in D's core from the dlang 
documentation (except those mentioned in the changelog for 2014-2017). 
Is that true, or is the official spec delayed as usual :)? Is the 
dlang spec fully and frequently updated, or is it sparse as in the 
past? Is the TDPL book still relevant?

7) Are UDAs still compile-time only? Are there plans to make them 
available at runtime as well?


No runtime reflection has been added during this period (unfortunately).
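
To illustrate the status quo: UDAs are attached to symbols during 
compilation and read back with `__traits(getAttributes)`. A minimal 
sketch (the `Serializable` attribute and field names here are made 
up for the example):

```d
// UDAs are a purely compile-time facility: they decorate a symbol
// and are retrieved during compilation via __traits(getAttributes).
struct Serializable { string name; }

struct User
{
    @Serializable("user_id") int id;
}

// Read the attribute back at compile time:
static assert(__traits(getAttributes, User.id)[0].name == "user_id");

void main() {}
```

There is no built-in way to enumerate these attributes on an object 
whose type is only known at runtime; any runtime lookup has to be 
generated from the compile-time data by the programmer.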

8) What is the progress with shared and immutable? AFAIK the compiler 
support for shared was not complete, and the Phobos library itself was 
not 'immutable-' and 'shared-correct'.

9) Is D gaining popularity?


Considerably. See the daily dmd download counts:
http://erdani.com/d/downloads.daily.png

10) Anything else a 2013 D user must know? :) I don't ask about Phobos 
because according to the changelog the progress is enormous, 
incremental and targets several directions - I doubt it can be easily 
summarised...


Thanks!




Re: The progress of D since 2013

2017-07-31 Thread Eugene Wissner via Digitalmars-d

On Monday, 31 July 2017 at 07:22:06 UTC, Maxim Fomin wrote:
4) What is the state of GDC/LDC? The GDC team was actively working 
on getting GDC included in the GCC project. Do GDC and LDC still 
pull the D frontend, so there is essentially 1 frontend (where the 
GDC and LDC frontends lag several versions behind) + 3 backends? 
I see in the changelog some dmd backend improvements. How does 
the dmd backend compare with C++/GDC/LDC? AFAIK in 2013 there was 
a tradeoff: either you use dmd with the brand-new frontend, or 
gdc/ldc where performance is comparable to gcc but the frontend 
lags behind. Is it still true?


It is still similar: LDC/GDC for performance, dmd for the latest 
version.
GDC is currently being updated to 2.072, but it still doesn't use 
the new frontend written in D; instead it ports the frontend 
changes to its C++ frontend.


Re: How do you use D?

2017-07-31 Thread Michael via Digitalmars-d

On Friday, 28 July 2017 at 14:58:01 UTC, Ali wrote:

How do you use D?
In work, (key projects or smaller side projects)


I did my undergraduate in CS where I picked up Python, Java and a 
little bit of C/C++, but Java was my most familiar language. When 
I started my PhD in an Engineering Maths department, I picked up 
Andrei's book on D as I had come across the language several 
times earlier but never had a good excuse to pick it up properly. 
My supervisor is more of a mathematician so I did not have any 
dependencies or limitations in the tools I chose to use for 
research. For the first year of my PhD I built models in Java 
with Python for scripting on the side. I was incredibly 
disappointed with the performance in Java, and having been 
learning D on the side during that year, I decided to rewrite it 
using D. I essentially chose D for the one reason many people do 
NOT choose D: I wanted a GC language that offered a decent level 
of control like C/C++ and was much nicer to write than Java, but 
with the convenience of not having to concern myself too much 
with memory management. I was happy to tune for efficiency, but 
did not want memory management to interrupt my workflow when 
writing a new model. D was perfect for this.



in your side project, (github, links please)


I've been lazy with side projects since I am always trying to 
work on my maths and writing skills, which are pretty lacking 
given my choice of degree.


Did you introduce D to your work place? How? What challenges 
did you face?


I've tried to inform people of the merits of D but in this 
department, we're heavily tied to Matlab for teaching. When I 
started, they switched up the undergrad courses and started 
teaching Python as an alternative to Matlab alongside C/Java, but 
there's still a lot of reliance on Matlab. I'd like to see them 
chuck Java and teach C/D but we'll see. At university, there's a 
lot of difficulty in balancing the necessities (C for embedded 
systems/robotics and Matlab for modelling).



What is your D setup at work, which compiler, which IDE?


I've been a long-time Sublime Text user, using DMD (rdmd is a 
life saver) and that's about it. I'm interested in VS Code with 
the dlang extension though.



And any other fun facts you may want to share :)


It makes me sad to see so many people disgruntled by the mere 
presence of a garbage collector. I like it a lot and while I am 
completely on board with moving toward making it more optional, I 
am glad it's there and would welcome speed improvements. I think 
there's a balance to be struck between allowing programmers to 
forget about the low-level memory management when writing 
programs and tuning memory management when optimising for 
performance.


Re: DIP 1009--Improve Contract Usability--Preliminary Review Round 2 Begins

2017-07-31 Thread Nick Treleaven via Digitalmars-d

On Friday, 28 July 2017 at 16:44:24 UTC, MysticZach wrote:

On Friday, 28 July 2017 at 11:04:23 UTC, Nick Treleaven wrote:
One option to solve the out contract ambiguity and aid parsing 
by tools is to require 'do' after out contract expressions.


BTW `do` would only be required before the {} function body - 
further `in` and `out` clauses can also be used to disambiguate, 
see below.


One of the main goals of this DIP is to eliminate the need for 
`body/do` in the common case. It would significantly reduce 
this DIP's value if it couldn't do that, IMO.


This is subjective. If you put `do` on the end of the line, it is 
trivial:


in(x > 4)
out(works)
out(r; r.test)
out(flag) do
{
  // body
}


Re: DIP 1009--Improve Contract Usability--Preliminary Review Round 2 Begins

2017-07-31 Thread Nick Treleaven via Digitalmars-d

On Friday, 28 July 2017 at 16:58:41 UTC, Moritz Maxeiner wrote:
Having a keyword delimit the end of an optional is both 
redundant and inconsistent


You are arguing against the current syntax, not my proposal. In 
my case the `do` keyword would disambiguate between out 
expressions and out blocks. It is not redundant; by the same 
logic I could argue that the `;` in `(; identifier)` is redundant. 
They are different valid options for disambiguation.


Re: How do you use D?

2017-07-31 Thread Nicholas Wilson via Digitalmars-d

On Sunday, 30 July 2017 at 01:53:15 UTC, Zwargh wrote:
I am using D to develop a system for rational drug design. The 
main application for D is for protein 3D structure prediction 
and statistical analysis using Differential Geometry and Knot 
Theory.


Cool! Have you considered using dcompute for this once it has 
matured a bit?




Re: [OT] Generative C++

2017-07-31 Thread Joakim via Digitalmars-d

On Friday, 28 July 2017 at 07:49:02 UTC, Yuxuan Shui wrote:
Someone made an interesting proposal to C++: 
https://herbsutter.files.wordpress.com/2017/07/p0707r1.pdf


Thoughts?


Thanks for mentioning this: I just watched the video linked from 
his blog post, but didn't read the paper.


It's an elegant design, but strikes me as a bad idea for the 
language. This is the seductive kind of architecture produced by 
architecture astronauts: it solves real needs, i.e. 
interface, value, etc. as "keywords," but generalizes them in such 
a way that it's very customizable (as opposed to architecture 
that shuffles code around and solves no real need).


To start off, this will basically lead to a ton of metaclass 
"keywords" added to every codebase, which simplifies how much 
code is written but still requires the programmer to understand 
how the underlying metaclass works.  It balkanizes the language 
further, because every codebase will have its own metaclasses, 
possibly even naming the exact same metaclass implementation 
differently.  You could work around the syntax problem of a 
keyword explosion a bit by making coders type 
"MetaClass::interface" instead of just "interface", but that 
still leaves the other issues.


The job of language designers like Sutter is to find abstractions 
that would make programmers' lives easier and bake them into the 
language.  Sometimes, the way people use these abstractions is so 
varied that you need what he calls "encapsulated abstractions" or 
"user-written extensions" like functions, classes, or modules, as 
on one of his last slides (with 7 mins. to go).


Other times, there are some really common ways to use the 
abstraction, and you're better off adding the most common 
customization of that abstraction to the language with a 
particular keyword, and saying you can't do all those other niche 
customizations.  That is the job of the language designer, and 
what he excludes is as important as what he includes.


I think he'd be better off just adding keywords like "interface" 
for the metaclasses he thinks are really common, rather than 
allowing programmers to define them any way they want.  However, 
this is an empirical question: how widely do C++ programmers need 
to customize their "metaclasses" as they're implemented now, and 
is it worth the additional keywords that noobs would see 
sprinkled all over codebases and get confused by?  I don't write 
C++, so I can't definitively answer this question, but my guess is 
that it isn't worth it.


If he's right that C++ use is so balkanized, this will simplify 
some code but further balkanize the language.  That might be 
worth it for them, but rather than simplifying the language, it 
makes it more powerful and more complex, heading higher up into 
the hills rather than the lower ground he claims to be heading 
for.


Re: The progress of D since 2013

2017-07-31 Thread kinke via Digitalmars-d

Hi,

On Monday, 31 July 2017 at 07:22:06 UTC, Maxim Fomin wrote:

1) Support of linking in win64?


LDC: MSVC targets, both 32 and 64 bits, fully supported for a 
year or so now. Requires Visual Studio 2015+.


2) What is the support of other platforms? AFAIK there was 
progress on Android.


LDC: Quite good. All tests pass on Android, see Joakim Noah's 
work, but it currently requires a tiny LLVM patch. That will be 
taken care of by LDC 1.4. All tests also pass on ARMv6+ on 
Linux. Someone got a vibe.d app to work successfully on an ARMv5 
industrial controller. AArch64 support is underway...


4) What is the state of GDC/LDC? GDC team was actively working 
on including gdc in gcc project.


And they succeeded; it has recently been accepted.

Do gdc and ldc still pull D frontend, so there is essentially 1 
frontend (where gdc and ldc frontends lag several versions 
behind) + 3 backends?


More or less. LDC uses a slightly modified D front-end (yep, 
that's been officially converted to D in case you missed it), 
whereas Iain/GDC still uses a C++ one, with backports from newer 
D versions.
The lag isn't that bad for LDC; LDC 1.3 uses the 2.073.2 
front-end, current master the 2.074.1 one, and there's a WIP PR 
for 2.075.0, which already compiles.


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-31 Thread Vladimir Panteleev via Digitalmars-d

On Wednesday, 26 July 2017 at 17:48:21 UTC, Walter Bright wrote:

On 7/26/2017 6:29 AM, Kagamin wrote:

Should we still try to mark them safe at all?


Marking ones that are safe with @safe is fine. OS APIs pretty 
much never change.


Sometimes operating systems add new flags to their API which 
change how some values are interpreted. Some API functions may, 
for example, normally take a pointer to a such-and-such struct, 
but if a certain flag is specified, the parameter is instead 
interpreted as a pointer to a different data type. That would be 
one case where an API call becomes un-@safe due to the addition 
of a flag.
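
A concrete sketch of that failure mode (all names here are 
invented for illustration, not a real OS API):

```d
// Hypothetical OS-style API (names made up): the meaning of `data`
// depends on `flags`. If FLAG_WIDE is added later and reinterprets
// `data` as a WideRequest*, a binding that was once soundly marked
// @trusted silently becomes unsound for callers using the new flag.
enum FLAG_WIDE = 0x1;

struct Request { int id; }
struct WideRequest { int id; long extra; }

int handle(void* data, uint flags)
{
    if (flags & FLAG_WIDE)
        return cast(int) (cast(WideRequest*) data).extra; // reads past a Request!
    return (cast(Request*) data).id;
}

void main()
{
    auto r = Request(7);
    assert(handle(&r, 0) == 7); // fine today; unsound with FLAG_WIDE
}
```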




Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-31 Thread Shachar Shemesh via Digitalmars-d

On 31/07/17 16:33, Vladimir Panteleev wrote:

On Wednesday, 26 July 2017 at 17:48:21 UTC, Walter Bright wrote:

On 7/26/2017 6:29 AM, Kagamin wrote:

Should we still try to mark them safe at all?


Marking ones that are safe with @safe is fine. OS APIs pretty much 
never change.


Sometimes operating systems add new flags to their API which change how 
some values are interpreted. Some API functions may, for example, 
normally take a pointer to a such-and-such struct, but if a certain flag 
is specified, the parameter is instead interpreted as a pointer to a 
different data type. That would be one case where an API call becomes 
un-@safe due to the addition of a flag.




One of the things that really bothers me about the D community is the 
"100% or nothing" approach.


System programming is, by definition, an exercise in juggling 
conflicting aims. The more absolute the language, the less useful it is 
for performing real life tasks.


Shachar


Re: The progress of D since 2013

2017-07-31 Thread Guillaume Piolat via Digitalmars-d

On Monday, 31 July 2017 at 07:22:06 UTC, Maxim Fomin wrote:

Hi!

Good to see D is progressing! I was an active forum and Bugzilla 
participant in 2011-2013. Since then I have not touched D.


Welcome back :)

3) What is the state of the GC? AFAIK there were some improvements 
for the GC submitted as GSoC projects, but they were not merged as 
of 2013. I see in the changelog that there are some improvements in 
speed and that configuration support was added.


"GC" is still the thing that come up as objection from native 
programmers. Recently the perception shifted a bit thanks to the 
D blog (and indeed most users have no irredeemable problems with 
it).


-profile=gc and @nogc make GC avoidance much simpler than in the 
past.
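
For readers returning from 2013, a small sketch of what @nogc 
gives you (the attribute has been in the language since 2.066):

```d
// @nogc is enforced at compile time: GC allocations inside the
// function body are compile errors, not runtime surprises.
@nogc int sum(const int[] slice)
{
    int total;
    foreach (x; slice)
        total += x;
    return total; // no allocation, compiles fine
}

void main()
{
    int[3] buf = [1, 2, 3]; // stack array, no GC involved
    assert(sum(buf[]) == 6);

    // Inside @nogc code this would be rejected at compile time:
    //   @nogc int[] bad() { return [1, 2, 3]; }
    //   // Error: array literal may cause a GC allocation
}
```

-profile=gc complements it at runtime by reporting where the 
remaining GC allocations happen, with file and line info.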



4) What is the state of GDC/LDC? The GDC team was actively working 
on getting GDC included in the GCC project. Do GDC and LDC still 
pull the D frontend, so there is essentially 1 frontend (where the 
GDC and LDC frontends lag several versions behind) + 3 backends? 
I see in the changelog some dmd backend improvements. How does 
the dmd backend compare with C++/GDC/LDC? AFAIK in 2013 there was 
a tradeoff: either you use dmd with the brand-new frontend, or 
gdc/ldc where performance is comparable to gcc but the frontend 
lags behind. Is it still true?


LDC got Win32 and Win64 backends. DMD still compiles faster, but 
generates slower code (about 2x).


6) I don't see any significant changes in D's core from the dlang 
documentation (except those mentioned in the changelog for 
2014-2017). Is that true, or is the official spec delayed as usual 
:)? Is the dlang spec fully and frequently updated, or is it 
sparse as in the past? Is the TDPL book still relevant?


There are several relevant books to own now.



9) Is D gaining popularity?


Yes, from all directions. More and more risk-averse programmers 
are considering using it; it's not exclusively early adopters 
anymore.



10) Anything else a 2013 D user must know? :) I don't ask about 
Phobos because according to the changelog the progress is 
enormous, incremental and targets several directions - I doubt 
it can be easily summarised...


Use DUB, it makes everyone's life easier.
http://code.dlang.org/




Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-31 Thread Timon Gehr via Digitalmars-d

On 31.07.2017 15:56, Shachar Shemesh wrote:


One of the things that really bothers me about the D community is the 
"100% or nothing" approach.

...


Personally, I'm more bothered by this kind of lazy argument that sounds 
good but has no substance.


System programming is, by definition, an exercise in juggling 
conflicting aims. The more absolute the language, the less useful it is 
for performing real life tasks.


Why do you think @trusted exists?


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-31 Thread Shachar Shemesh via Digitalmars-d

On 31/07/17 17:08, Timon Gehr wrote:

On 31.07.2017 15:56, Shachar Shemesh wrote:


One of the things that really bothers me about the D community is the 
"100% or nothing" approach.

...


Personally, I'm more bothered by this kind of lazy argument that sounds 
good but has no substance.


System programming is, by definition, an exercise in juggling 
conflicting aims. The more absolute the language, the less useful it 
is for performing real life tasks.


Why do you think @trusted exists?


That's fine, but since, according to the logic presented here, no OS 
function can ever be @safe, all code calling such a function can't 
be @safe either. At this point, half your code, give or take, is 
@trusted. That's the point where you give up and just write 
everything as @system.


And what we have here is that you started out trying to be 100% pure 
(and, in this case, there is no problem with current code, only 
*hypothetical* future changes), and ended up not getting any 
protection from @safe at all.


There is a proverb in Hebrew that says:
תפסת מרובה, לא תפסת.
Try to grab too much, and you end up holding nothing.

Shachar


Re: The progress of D since 2013

2017-07-31 Thread inevzxui via Digitalmars-d

On Monday, 31 July 2017 at 07:22:06 UTC, Maxim Fomin wrote:

Hi!

Good to see D is progressing! I was an active forum and Bugzilla 
participant in 2011-2013. Since then I have not touched D.


[...]

10) Anything else a 2013 D user must know? :)


Yes, bug 314 has been fixed!
https://issues.dlang.org/show_bug.cgi?id=314



Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-31 Thread Timon Gehr via Digitalmars-d

On 31.07.2017 16:15, Shachar Shemesh wrote:


Why do you think @trusted exists?


That's fine, but since, according to the logic presented here, no OS 
function can ever be @safe,


This is actually not true. Vladimir was just pointing out a complication 
of which to be aware. Are you arguing against applying due diligence 
when specifying library interfaces?




There is a proverb in Hebrew that says:
תפסת מרובה, לא תפסת.
Try to grab too much, and you end up holding nothing. 


I.e. if you mark too many functions as @trusted, you will have no memory 
safety.
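
Timon's point can be made concrete with a small sketch 
(hypothetical code, not from any real library):

```d
// An over-eager @trusted annotation: the function *claims* memory
// safety but bypasses the bounds check via .ptr, so the guarantee
// that @safe callers rely on is gone.
@trusted int at(const int[] arr, size_t i)
{
    return arr.ptr[i]; // no bounds check; out-of-range i reads wild memory
}

@safe void caller()
{
    int[] a = [1, 2, 3];
    auto ok = at(a, 1);          // fine
    // auto boom = at(a, 1000);  // still compiles as @safe, yet is
    //                           // undefined behavior at runtime
}

void main() { caller(); }
```

The more such annotations a codebase accumulates, the less the 
@safe attribute on the callers actually means.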


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-31 Thread Vladimir Panteleev via Digitalmars-d

On Monday, 31 July 2017 at 14:51:22 UTC, Timon Gehr wrote:

On 31.07.2017 16:15, Shachar Shemesh wrote:
That's fine, but since, according to the logic presented here, 
no OS function can ever be @safe,


This is actually not true. Vladimir was just pointing out a 
complication of which to be aware. Are you arguing against 
applying due diligence when specifying library interfaces?


Indeed. @safe is not a sandbox, there is no need to actually go 
to extreme measures to safeguard against potential changes beyond 
our control; just something to keep in mind.




Dlang + compile-time contracts

2017-07-31 Thread Marco Leise via Digitalmars-d
Coming from D.learn, where someone asked for some automatic way to
turn runtime format strings passed to `format()` into the
equivalent `format!()` form to benefit from compile-time
type checks, I started wondering...

The OP wasn't looking for benefits of the template
version other than argument checking, and didn't consider the
downsides either. So maybe there is room for improvement using
runtime arguments.
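
For reference, the two call forms under discussion look like this 
in current std.format (the template overload with a compile-time 
format string was added around 2.074):

```d
import std.format : format;

void main()
{
    // Runtime format string: a specifier/argument mismatch only
    // surfaces as a thrown FormatException when the call executes.
    string a = format("%d", 42);

    // Compile-time format string: the template overload checks the
    // specifiers against the argument types during compilation.
    string b = format!"%d"(42);

    assert(a == "42" && b == "42");
}
```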

So let's add some features:
1) compile-time "in" contract, run on the argument list
2) functionality to promote runtime arguments to compile-time


string format(string fmt)
in(ctfe) {
  // Test if argument 'fmt' is based off a compile-time
  // readable literal/enum/immutable
  static if (__traits(isCtfeConvertible, fmt))
  {
// Perform the actual promotion
enum ctfeFmt = __traits(ctfeConvert, fmt);
static assert(ctfeFmt == "%s", "fmt string is not '%s'");
  }
}
body
{
  return "...";
}


Note that this idea is based on existing technology in the
front-end. Compare how an alias can stand in for a CT or RT
argument at the same time:


void main()
{
const fmt1 = "%x";
auto fmt2 = "%s";
aliasTest!fmt1;
aliasTest!fmt2;
}

void aliasTest(alias fmt)()
{
import std.stdio;
static if (__traits(compiles, {enum ctfeFmt = fmt;}))
// "Promotion" to compile time value
enum output = "'fmt' is '" ~ fmt ~ "' at compile-time";
else
string output = "'fmt' is '" ~ fmt ~ "' at runtime";
writeln(output);
}


This prints:
'fmt' is '%x' at compile-time
'fmt' is '%s' at runtime

For technical reasons a compile-time "in" contract cannot
work in nested functions, so all the CTFE contracts need to be
in top-level, user-facing code. That means in practice,
when there are several formatting functions, they'd extract
the implementation of the compile-time contract into separate
functions. I have no idea how exactly that scales, as the
`static if (__traits(isCtfeConvertible, …))` stuff has to
remain in the contract. (It's probably ok.)

Extending the CTFE promotion to any variables that can be
const-folded is not part of this idea, as it leaves a lot of
fuzziness in language specification documents and results in
code that produces errors in one compiler but not in another.
Since some people will still find it beneficial, it should be a
compiler vendor extension and print a warning only on contract
violations.



-- 
Marco



Re: newCTFE Status July 2017

2017-07-31 Thread Marco Leise via Digitalmars-d
Am Sun, 30 Jul 2017 14:44:07 +
schrieb Stefan Koch :

> On Thursday, 13 July 2017 at 12:45:19 UTC, Stefan Koch wrote:
> > [ ... ]  
> 
> Hi Guys,
> 
> After getting the brainfuck-to-D transcompiler to work,
> I have now made its output compatible with newCTFE.
> 
> See it here:
> https://gist.github.com/UplinkCoder/002b31572073798897552af4e8de2024
> 
> Unfortunately the above code does seem to get miscompiled,
> as it does not output Hello World, but rather:
> 
> 

Funny, it is working and mis-compiling at the same time.
I figure with such complex code, it is working if it ends up
*printing anything* at all and not segfaulting. :)

-- 
Marco



Re: newCTFE Status July 2017

2017-07-31 Thread Stefan Koch via Digitalmars-d

On Monday, 31 July 2017 at 17:58:56 UTC, Marco Leise wrote:

Am Sun, 30 Jul 2017 14:44:07 +
schrieb Stefan Koch :


On Thursday, 13 July 2017 at 12:45:19 UTC, Stefan Koch wrote:
> [ ... ]

Hi Guys,

After getting the brainfuck-to-D transcompiler to work,
I have now made its output compatible with newCTFE.

See it here: 
https://gist.github.com/UplinkCoder/002b31572073798897552af4e8de2024


Unfortunately the above code does seem to get miscompiled,
as it does not output Hello World, but rather:




Funny, it is working and mis-compiling at the same time.
I figure with such complex code, it is working if it ends up
*printing anything* at all and not segfaulting. :)


I fixed the bug which caused this to miscompile; it works now at 
CTFE.

This code is not really that complex, it only looks confusing.
Complex code uses slices of struct types, pointer slicing, and 
that kind of thing.




Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-31 Thread Jesse Phillips via Digitalmars-d

On Friday, 28 July 2017 at 23:25:35 UTC, Nicholas Wilson wrote:

On Friday, 28 July 2017 at 21:47:32 UTC, Jesse Phillips wrote:
* Remove the whole program defaults, I'm ok with it being 
changed in a re-implementation of the runtime (like the 
embedded example), we just don't need the extra confusion 
within user code.


The program defaults are there to make @safe by default a 
thing, and betterC by default a thing, which are covered in the 
_three primary points_ of this semester's Vision document[1]. 
These would not normally be set (i.e. they are opt-in).


I read that as more: improve druntime's and Phobos's use of @safe 
so that it becomes more usable.


* Specifying inferred needs to be within druntime only, and 
thus may not need to exist.


I think there is usefulness in having it, particularly as the 
meta-default (defaultAttributeSet). Walter has been pushing for 
some time to infer attributes by default. This would provide a 
directive for the compiler to do so. Please elaborate.


I don't expect inference to happen outside templates. Besides, if 
the compiler infers the appropriate attributes by default, isn't 
1. a default attribute redundant, and 2. specifying inference 
redundant, as the compiler defaults to inferring?


* I'm concerned user defined attributes could define a 
"defaults" for functions.


You mean user-supplied UDAs (if not, please explain)? The default 
application of UDAs is for core attributes only.


Yes, I mean user-defined attributes (UDAs). I expected they 
wouldn't apply outside the context of core.attributes.



The compiler will default functions to the first value of the 
enum [for my example].


(That would provide a pessimistic default and defeats the 
ability of the compiler to infer.)


Yes, just throwing in an example structure, and why I mentioned 
[for my example]. But as I said earlier, inference and defaults 
are at odds.



[So on and so forth]


Thanks for your suggestions.

[1]: https://wiki.dlang.org/Vision/2017H2





Re: D client for ROS

2017-07-31 Thread aberba via Digitalmars-d

On Saturday, 29 July 2017 at 10:21:32 UTC, Johan Engelen wrote:

Hi all,
  Are there any robot folks out here that are working with ROS 
and would be able to work on creating a D client library for it?


ROS is used a lot at our university and in robot research in 
general, and most people use the C++ client (the main one, next 
to Python). In arguing for teaching D at our university, it 
would help a lot if students could use D with ROS.


Cheers,
  Johan

(fyi: I'm an assistant prof in Robotics and Mechatronics and an 
LDC developer)


Mike has been doing some foundation work on ARM Cortex 
microcontrollers... His work is the closest I know of.


Re: D client for ROS

2017-07-31 Thread Johan Engelen via Digitalmars-d

On Monday, 31 July 2017 at 21:41:47 UTC, aberba wrote:

On Saturday, 29 July 2017 at 10:21:32 UTC, Johan Engelen wrote:

Hi all,
  Are there any robot folks out here that are working with ROS 
and would be able to work on creating a D client library for 
it?


Mike has been doing some foundation work on ARM Cortex 
microcontrollers... His work is the closest I know of.


I know of Mike's work, but ROS is not only for embedded 
platforms. (in fact, I think the majority of people run ROS on 
unconstrained systems with Linux available)




Re: newCTFE Status July 2017

2017-07-31 Thread Temtaime via Digitalmars-d

On Sunday, 30 July 2017 at 20:40:24 UTC, Stefan Koch wrote:

On Thursday, 13 July 2017 at 12:45:19 UTC, Stefan Koch wrote:

[...]


Hello Guys,

The bug preventing newCTFE from executing bf_ctfe[1] correctly 
(a peculiarity in which `for` and `if` statement conditions 
other than 32-bit integers were ignored) is now fixed.


[...]


Aren't you disabling codegen by passing -o- to your engine, so 
it compiles faster?


Re: [OT] uncovering x86 hardware bugs and unknown instructions by fuzzing.

2017-07-31 Thread deadalnix via Digitalmars-d

On Monday, 31 July 2017 at 07:17:33 UTC, Guillaume Chatelet wrote:

Some people here might find this interesting:
https://github.com/xoreaxeaxeax/sandsifter

White paper here:
https://github.com/xoreaxeaxeax/sandsifter/blob/master/references/domas_breaking_the_x86_isa_wp.pdf


This man is a superhero.

See also https://www.youtube.com/watch?v=lR0nh-TdpVg for 
in-hardware privilege escalation, and 
https://www.youtube.com/watch?v=HlUe0TUHOIc . We should consider 
building a shrine for this guy.


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-31 Thread Nicholas Wilson via Digitalmars-d

On Monday, 31 July 2017 at 19:27:46 UTC, Jesse Phillips wrote:

On Friday, 28 July 2017 at 23:25:35 UTC, Nicholas Wilson wrote:

On Friday, 28 July 2017 at 21:47:32 UTC, Jesse Phillips wrote:
* Remove the whole program defaults, I'm ok with it being 
changed in a re-implementation of the runtime (like the 
embedded example), we just don't need the extra confusion 
within user code.


The program defaults are there to make @safe by default a 
thing, and betterC by default a thing, which are covered in 
the _three Primary points_ of the semester's Vision 
document[1]. These would not normally be set (i.e. they are 
opt-in).


I read that as more: improve druntime and Phobos's use of @safe 
so that it becomes more usable.


Improving druntime and Phobos is obviously important, but so is 
the ability for the end user to use it. If it's hard to use, 
fewer people will use it; conversely, the easier it is, the more 
likely people are to use it. Note that this also provides an 
easy way to find out which functions are not @safe and fix them, 
without slapping @safe on main, i.e. one build unit 
(package/library) at a time.
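
A sketch of my own (not from the thread) of what the checking 
buys you: once an entry point is marked @safe, the compiler 
rejects every path that reaches @system code, which is exactly 
how one would hunt down the non-@safe functions in a build unit.

```d
// Hypothetical example: @safe propagates checking through callers.
void poke(int* p) @system
{
    *(p + 1) = 0; // pointer arithmetic: only allowed in @system code
}

void entry() @safe
{
    int x;
    // Uncommenting the call below is a compile error:
    //   @safe function 'entry' cannot call @system function 'poke'
    // poke(&x);
}
```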


* Specifying inferred needs to be within druntime only, and 
thus may not need to exist.


I think there is usefulness in having it, particularly as the 
meta-default (defaultAttributeSet). Walter has been pushing for 
some time to infer attributes by default. This would provide a 
directive for the compiler to do so. Please elaborate.


I don't expect inference to happen outside templates. Besides, 
if the compiler infers the appropriate attributes by default, 
isn't 1. a default attribute redundant and 2. specifying 
inference redundant, since the compiler defaults to inferring?


It happens already for dip1000 IIRC, and I'd be surprised if, 
particularly for @safe, Walter didn't want more inference, 
especially with minimal user effort.


The compiler will default functions to the first value of the 
enum [for my example].


(That would provide a pessimistic default and defeats the 
ability of the compiler to infer.)


Yes, just throwing in an example structure, and why I mentioned 
[for my example]. But as said earlier, inference and defaults 
are at odds.


I disagree. Even if inference weren't the default, I would 
certainly like to be able to have attributes inferred at the 
flick of a compiler switch. It goes back to the ease-of-use 
argument: it's more effort for me to manually annotate things.
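
For context, a small sketch of my own (not from the thread): D 
already performs attribute inference for template functions, 
which is the behavior the discussion is about extending to 
non-template code.

```d
// Template function: @safe, pure, nothrow, @nogc are inferred
// from the body, with no annotations written by the user.
T twice(T)(T x) { return x + x; }

// Non-template function: no inference today, attributes must be spelled out.
int twiceInt(int x) @safe pure nothrow @nogc { return x + x; }

void main() @safe
{
    auto y = twice(21); // legal: the inferred @safe makes this callable here
}
```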


Re: newCTFE Status July 2017

2017-07-31 Thread Stefan Koch via Digitalmars-d

On Monday, 31 July 2017 at 23:03:21 UTC, Temtaime wrote:

On Sunday, 30 July 2017 at 20:40:24 UTC, Stefan Koch wrote:

On Thursday, 13 July 2017 at 12:45:19 UTC, Stefan Koch wrote:

[...]


Hello Guys,

The bug preventing newCTFE from executing bf_ctfe[1] correctly 
(a peculiarity in which for- and if-statement conditions other 
than 32-bit integers were ignored) is now fixed.


[...]


Aren't you disabling codegen by passing -o- to your engine, so 
that it compiles faster?


Ah yes.
An oversight while posting the results. It does not affect the 
measurements in any real way, though; the difference is 3-5 
milliseconds.


In the test case the CTFE workload is totally dominant.
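
For readers new to the thread, a minimal example of my own (not 
the bf_ctfe benchmark) of the kind of compile-time workload 
being measured: any value the compiler must know at compile time 
forces the function call through the CTFE interpreter.

```d
// An ordinary function: nothing CTFE-specific about its definition.
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

// 'enum' requires a compile-time constant, so fib(25) runs under CTFE.
enum answer = fib(25);
static assert(answer == 75_025); // verified by the compiler, not at runtime
```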


Re: The progress of D since 2013

2017-07-31 Thread Mike via Digitalmars-d

On Monday, 31 July 2017 at 07:22:06 UTC, Maxim Fomin wrote:

Good to see D is progressing! I was an active forum and 
Bugzilla participant in 2011-2013. Since then I have not touched 
D.


Good to see you back.  I also took a hiatus from D in 2015 and 
just recently returned after GDC fixed a blocker for me.  I'll 
comment on what I've observed.


1) Support of linking in win64? AFAIK Walter introduced win64 
support circa 2012, which was big progress. However, support for 
win64 linking was limited because dmd defaulted to the old DMC 
linker, and Walter didn't plan to do anything about this.


Haven't used D on Windows.  Don't know.


2) What is the support of other platforms?


I'm currently only using D for bare-metal type projects and some 
desktop utilities to support that development.  DMD has added 
some improvements to -betterC 
(http://forum.dlang.org/post/cwzmbpttbaqqzdetw...@forum.dlang.org), 
but I'm not really interested in that feature as I'd like to use 
all of D for bare-metal, just in a pay-as-you-go fashion. The 
compiler is still too tightly coupled to the runtime, but there 
have been some improvements 
(https://www.youtube.com/watch?v=endKC3fDxqs).



3) What is the state of GC?


From what I can tell, aside from a few improvements to metering 
the GC, most endeavors have not materialized.



4) What is the state of GDC/LDC?


GDC was recently accepted for inclusion in GCC (version 8 I 
believe): https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html


5) What is the progress with CTFE? I see a lot of discussions 
in forum archive devoted to the development of CTFE. What is 
the summary of CTFE development in recent years?


I believe there is an effort to overhaul CTFE, but it is ongoing 
and not yet committed.


6) I don't see any significant changes in D core from dlang 
documentation (except those mentioned in changelog for 
2014-2017). Is it true, or is the official spec as usual delayed 
:)? Is the dlang spec fully and frequently updated, or is it 
sparse as in the past?


I haven't seen any improvements to filling holes in the spec.  I 
believe the semantics of 'shared' are still undefined.


I have seen significant improvements to the website/documentation 
with runnable examples and such.


8) What is the progress with shared and immutable? AFAIK the 
compiler support for shared was not complete and Phobos library 
itself was not 'immutable-' and 'shared-correct'.


AFAIK nothing in that regard has changed.


9) Does D gains popularity?


Not sure.  I've seen some talent in the D community depart, some 
new talent emerge, some talent participating less, and some 
talent taking on more.


10) Anything else 2013 D user must know? :) I don't ask about 
Phobos because according to the changelog the progress is 
enormous, incremental and targets several directions - I doubt 
it can be easily summarised...


* Formal creation of the D Language Foundation
* DMD frontend converted to D
* DMD backend converted to boost license 
(http://forum.dlang.org/post/oc8acc$1ei9$1...@digitalmars.com)
* DIP1000 merged under the -dip1000 feature gate 
(https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md)
* Walter claims memory safety will kill C 
(https://www.reddit.com/r/cpp/comments/6b4xrc/walter_bright_believes_memory_safety_will_kill_c/), and if you have any faith in the TIOBE index, it may already be happening (https://www.tiobe.com/tiobe-index/)
* Lots of infrastructure improvements (dlang-bot and other CI 
automation)


Overall, though, I'd say D is just further along on the path it 
was on in 2013.  If you were hoping for a new direction, you'll 
probably be disappointed.


Mike