Re: It makes me sick!

2017-07-27 Thread FoxyBrown via Digitalmars-d-learn

On Friday, 28 July 2017 at 01:10:03 UTC, Mike Parker wrote:

On Friday, 28 July 2017 at 00:28:52 UTC, FoxyBrown wrote:


You are not being very logical.

The zip file has N files in it. No matter what those files are, 
it should be a closed system. That is, if I insert or add (not 
replace) M files to the directory structure it should not break 
D, period!


That's *not* what happened here. Jonathan explained it quite 
well. std.datetime was refactored into a package, its contents 
split into new modules. When you import a module foo, dmd looks 
for:


1. foo.di
2. foo.d
3. foo/package.d

When it finds one, it stops looking. It's not an error for all 
three to exist. Your error came because it found 
std/datetime.d, but you linked to a library that included 
symbols for std/datetime/package.d. It's not the compiler's 
responsibility to error in that case. It's your responsibility 
to properly install.




Sorry, wrong.



Why? Because NO file in the zip should be referencing any file 
not in the zip unless it is designed to behave that way (e.g., 
an external lib or whatever).


If an old external program is referencing a file in dmd2 that 
isn't in the new zip it should err.


Why? Because suppose you have an old program that references 
some old file in the dmd2 dir and you upgrade dmd2 by extracting 
the zip. The program MAY still work and use broken 
functionality that will go undetected but be harmful.


Why? Because dmd.exe is referencing a file it shouldn't, and it 
should know it shouldn't, yet it does so anyways. It really has 
nothing to do with the file being in the dir but that dmd is 
being stupid because no one bothered to add sanity checks, because 
they are too lazy/think it's irrelevant because it doesn't 
affect them.


That's unreasonable.


Nope, you're unreasonable, expecting the end user to clean up the 
mess "you" leave.




I should be able to put any extra files anywhere in the dmd2 
dir structure and it should NOT break dmd.


There are numerous applications out there that can break if you 
simply overwrite a directory with a newer version of the app. 
DMD is not alone with this. You should always delete the 
directory first. It's precisely why the compiler does so.


Nope. Virtually all apps, at least on Windows, work fine if you 
replace their contents with new versions. Generally, only 
generated files such as settings could break the app... 
but this is not the problem here.



If dmd breaks in strange and unpredictable ways IT IS DMD's 
fault! No exceptions, no matter what you believe, what you say, 
what lawyer you pay to create a law for you to make you think you 
are legally correct! You can make any claim you want like: "The 
end user should install in to a clean dir so that DMD doesn't get 
confused and load a module that doesn't actually have any 
implementation" but that's just your opinion. At the end of the 
day it only makes you and dmd look bad when it doesn't work 
because of some lame minor issue that could be easily fixed. It 
suggests laziness["Oh, there's a fix but I'm too lazy to add 
it"], arrogance["Oh, it's the end users fault, let them deal with 
it"], and a bit of ignorance.


In the long run, mentalities like yours are hurting D rather than 
helping it. Sure, you might contribute significantly to D's 
infrastructure, but if no one uses it because there are so many 
"insignificant" issues then you've just wasted a significant 
portion of your life for absolutely nothing.


So, I'd suggest you rethink your position and the nearsighted 
rhetoric that you use. You can keep the mentality of kicking the 
can down the road and blaming the end user but it will ultimately 
get you nowhere.







Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Friday, 28 July 2017 at 01:50:24 UTC, Jonathan M Davis wrote:
On Friday, July 28, 2017 01:13:10 Nicholas Wilson via 
Digitalmars-d wrote:
IIRC the reason they lack a leading @ is purely historical and 
considered not good but not worth breaking code over. I 
believe this DIP presents an opportunity and reason to make 
that change. Existing code will still work i.e. we can 
deprecate the old form, since both the new and the old are 
implementation controlled, and make it become a positional 
keyword or the like.


The only reason that _any_ of them have @ on them was to avoid 
creating a new keyword. And I for one would hate to see @ on 
all of them.


Fair enough, but it's always slightly annoyed me that `pure` and 
`nothrow` don't have leading '@'s.


It's just trading one inconsistency for another. Should public 
have @ on it? Should static have @ on it? What about scope, 
const, or shared? You're just taking a subset of the attributes 
and turning them into enums with @ on them and leaving some of 
them as-is.


This DIP is in the process of being amended to explicitly exclude 
linkage, storage class & visibility attributes. That subset is the 
function attributes listed under the 'Encompassed' and 'Optionally 
encompassed' subsections of "Attributes & attribute-like compiler 
behaviour encompassed in this DIP".



How is that making things more
consistent? It's just shuffling the attributes around and for 
some reason turns some of them into enums while leaving others 
as they are.


It's turning keyword-like compiler-magic attributes into regular 
compiler attributes.


IMHO, doing anything to change the current attributes had 
better have an _extremely_ good reason, and this DIP does not 
give that. Yes, being able to negate attributes would be 
valuable, but that really doesn't seem to be what this DIP is 
about, as much as that's what it gives as a rationale. Instead, it 
seems to be talking about altering attributes in a manner which 
makes them way more complicated than they are now.


I also _really_ don't like the idea of having aliases for 
built-in attributes.


That is a feature.

If we had that, instead of looking at a function and seeing 
@safe, pure, nothrow, etc., we could end up seeing something 
like @VibeDefault, and then you'd have to go figure out what on 
earth that was, and even after you figured out what it was, it 
could change later. At least with what we have now, I can know 
what I'm looking at.


I don't mean to be snide, but either
1) ignore them (I see AliasSeqs of attributes as more useful for end 
users, i.e. application developers), or

2) use an IDE.



In addition, it looks like the DIP is talking about what the 
default attributes in general are. It's one thing to slap a 
default set of attributes at the top of a module and then 
negate them later in the module (which we can already do with 
attributes like public or @safe but can't do with some of the 
others like pure or nothrow). It's a different thing entirely 
to basically change the default attributes via a compiler 
switch. That's just dividing the language. You end up with code 
that works with one set of compiler switches but not another 
and is thus incompatible with other code - because of a 
compiler switch. Walter has been against such compiler flags 
every time that they've come up, and I am in complete 
agreement. We've only used them as transitional flags that are 
supposed to go away eventually (like -dip25 or -dip1000). 
Whether the code is legal or not should not depend on the 
compiler flags.


And honestly, showing stuff like 
@core.attribute.GarbageCollectedness.gc in the DIP makes it 
look _really_ bad.


That was an illustrative mistake and I regret the confusion it 
has caused. I should have used `@gc` with `@gc` being an alias 
for @core.attribute.GarbageCollectedness.gc.


Sure, it might make sense from the standpoint of extensibility, 
but it's ridiculously long. We already arguably have too much 
of an attribute mess on your average non-templated function 
signature without stuff like that.


This DIP is intended to reduce the amount of attribute spam by 
enabling defaults.


IMHO, if what we're trying to do is to be able to negate 
attributes, then we should be looking at doing something like 
@nogc(false) or some other syntax that is about negation of an 
existing attribute.


The DIP is about more than that: the benefit of being regular 
attributes (manipulation) and the ability to have configurable 
defaults.


This DIP is going off on a huge tangent from that with no 
justification as to why it would be worth the extra 
complication or the breakage that it would cause.


This would cause _very_ little, if any, non-deprecatable breakage.

And it looks like a _lot_ of extra complication in comparison 
to what we have now.


The keyword-like attributes become regular attributes. I fail to 
see how that makes them any more complicated; IMO it makes them 
_less_ complicated (I am revising the DIP to remove the module 

Re: DDox and filters.

2017-07-27 Thread Danni Coy via Digitalmars-d-learn
Yup, figured it out -
module documentation needs to go *above*
the module declaration or you get nothing.

On Fri, Jul 28, 2017 at 11:53 AM, Soulsbane via Digitalmars-d-learn <
digitalmars-d-learn@puremagic.com> wrote:

> On Thursday, 27 July 2017 at 03:01:50 UTC, Danni Coy wrote:
>
>> I am trying to build my projects documentation via the ddox system via
>> dub. It seems that my modules are being documented and then filtered out.
>>
>> Ironically for a documentation system there isn't a lot of documentation.
>> What is the minimum I need in order for documentation to show up?
>> how do I control the filter options.
>>
>
> I think I had this problem and solved it by adding a comment block at the
> top describing the module.
>


Re: DDox and filters.

2017-07-27 Thread Soulsbane via Digitalmars-d-learn

On Thursday, 27 July 2017 at 03:01:50 UTC, Danni Coy wrote:
I am trying to build my project's documentation via the ddox 
system via dub. It seems that my modules are being documented 
and then filtered out.


Ironically for a documentation system there isn't a lot of 
documentation.
What is the minimum I need in order for documentation to show 
up?

How do I control the filter options?


I think I had this problem and solved it by adding a comment 
block at the top describing the module.


Re: using DCompute

2017-07-27 Thread jmh530 via Digitalmars-d-learn

On Friday, 28 July 2017 at 01:30:58 UTC, Nicholas Wilson wrote:


Yes, although I'll have to add an attribute shim layer for the 
dcompute druntime symbols to be accessible for DMD. When you 
compile, LDC will produce .ptx and .spv files in the object file 
directory, which can then be used in any project. The 
only thing that will be more fragile is lambda kernels, as they 
are mangled numerically (`__lambda1`, `__lambda2`, and so on).


I imagine that using dcompute this way with DMD for development 
would be popular. For instance, the GPU part might be only a 
small part of a project, so you wouldn't want to be forced to use 
LDC the moment the tiniest bit of GPU code is in it.


Once you've ensured everything is working correctly, you might 
add something about this to the wiki, or whatever.


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Jonathan M Davis via Digitalmars-d
On Friday, July 28, 2017 01:13:10 Nicholas Wilson via Digitalmars-d wrote:
> IIRC the reason they lack a leading @ is purely historical and
> considered not good but not worth breaking code over. I believe
> this DIP presents an opportunity and reason to make that change.
> Existing code will still work i.e. we can deprecate the old form,
> since both the new and the old are implementation controlled, and
> make it become a positional keyword or the like.

The only reason that _any_ of them have @ on them was to avoid creating a
new keyword. And I for one would hate to see @ on all of them. It's just
trading one inconsistency for another. Should public have @ on it? Should
static have @ on it? What about scope, const, or shared? You're just taking
a subset of the attributes and turning them into enums with @ on them and
leaving some of them as-is. How is that making things more consistent? It's
just shuffling the attributes around and for some reason turns some of them
into enums while leaving others as they are.

IMHO, doing anything to change the current attributes had better have an
_extremely_ good reason, and this DIP does not give that. Yes, being able to
negate attributes would be valuable, but that really doesn't seem to be what
this DIP is about, as much as that's what it gives as a rationale. Instead, it
seems to be talking about altering attributes in a manner which makes them
way more complicated than they are now.

I also _really_ don't like the idea of having aliases for built-in
attributes. If we had that, instead of looking at a function and seeing
@safe, pure, nothrow, etc., we could end up seeing something like
@VibeDefault, and then you'd have to go figure out what on earth that was,
and even after you figured out what it was, it could change later. At least
with what we have now, I can know what I'm looking at.

In addition, it looks like the DIP is talking about what the default
attributes in general are. It's one thing to slap a default set of
attributes at the top of a module and then negate them later in the module
(which we can already do with attributes like public or @safe but can't do
with some of the others like pure or nothrow). It's a different thing
entirely to basically change the default attributes via a compiler switch.
That's just dividing the language. You end up with code that works with one
set of compiler switches but not another and is thus incompatible with other
code - because of a compiler switch. Walter has been against such compiler
flags every time that they've come up, and I am in complete agreement. We've
only used them as transitional flags that are supposed to go away eventually
(like -dip25 or -dip1000). Whether the code is legal or not should not
depend on the compiler flags.
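
A minimal sketch of that asymmetry (the function names are made up): a 
blanket @safe at the top of a module can be walked back per function with 
@system, but there is no syntax today to undo a blanket nothrow.

```
@safe:           // blanket attribute for everything that follows

void checked() { }            // @safe via the blanket attribute
void lowLevel() @system { }   // can opt back out of @safe

nothrow:         // blanket nothrow from here on

void alwaysNothrow() { }      // fine, but there is no syntax to mark a
                              // later function as "may throw" again
```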

And honestly, showing stuff like @core.attribute.GarbageCollectedness.gc in
the DIP makes it look _really_ bad. Sure, it might make sense from the
standpoint of extensibility, but it's ridiculously long. We already arguably
have too much of an attribute mess on your average non-templated function
signature without stuff like that.

IMHO, if what we're trying to do is to be able to negate attributes, then we
should be looking at doing something like @nogc(false) or some other syntax
that is about negation of an existing attribute. This DIP is going off on a
huge tangent from that with no justification as to why it would be worth the
extra complication or the breakage that it would cause. And it looks like a
_lot_ of extra complication in comparison to what we have now.

- Jonathan M Davis



Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Mike via Digitalmars-d

On Friday, 28 July 2017 at 01:26:19 UTC, Mike wrote:

On Friday, 28 July 2017 at 01:13:10 UTC, Nicholas Wilson wrote:


Terminology:
I was confused by the term "attribute group". Although the 
term is defined in the DIP, it implies a combination of 
attributes rather than a mutually exclusive attribute 
category.  Still having trouble understanding the DIP in 
detail due to this.


If you have a better name, please do tell.


Yeah, naming is hard.  I suggest "attribute class".


Or "attribute category", maybe if the word "class" causes too 
much ambiguity.


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Friday, 28 July 2017 at 01:26:19 UTC, Mike wrote:

On Friday, 28 July 2017 at 01:13:10 UTC, Nicholas Wilson wrote:


Terminology:
I was confused by the term "attribute group". Although the 
term is defined in the DIP, it implies a combination of 
attributes rather than a mutually exclusive attribute 
category.  Still having trouble understanding the DIP in 
detail due to this.


If you have a better name, please do tell.


Yeah, naming is hard.  I suggest "attribute class".


I like it.


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread sarn via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md


Like others in this thread have said, it needs more rationale.  
The rationale only mentions one actual problem: attributes can't 
be undone (which is a really important problem, by the way).  But 
in the abstract it says


[This DIP] does not (yet) propose any mechanism to disable 
compiler attributes directly (e.g. @!nogc).


Instead of coming up with more problems to solve, it then dives 
into describing an entire framework for doing *things* with 
attributes.  To be totally honest, as it stands it feels like 
architecture astronautics:

https://www.joelonsoftware.com/2001/04/21/dont-let-architecture-astronauts-scare-you/


Re: using DCompute

2017-07-27 Thread Nicholas Wilson via Digitalmars-d-learn

On Friday, 28 July 2017 at 00:39:43 UTC, James Dean wrote:

On Friday, 28 July 2017 at 00:23:35 UTC, Nicholas Wilson wrote:

On Thursday, 27 July 2017 at 21:33:29 UTC, James Dean wrote:
I'm interested in trying it out, says it's just for ldc. Can 
we simply compile it using ldc then import it and use dmd, 
ldc, or gdc afterwards?


The ability to write kernels is limited to LDC, though there 
is no practical reason that, once compiled, you couldn't use 
resulting generated files with GDC or DMD (as long as the 
mangling matches, which it should). This is not a priority to 
get working, since the assumption is if you're trying to use 
the GPU to boost your computing power, then you likely care 
enough to use LDC, as opposed to DMD (GDC is still a bit 
behind DMD so I don't consider it) to get good optimisations 
in the first place.




Yes, but dmd is still good for development since LDC sometimes 
has problems.


If you have problems please tell us!

Can we compile kernels in LDC and import them in to a D project 
seamlessly? Basically keep an LDC project that deals with the 
kernels while using dmd for the bulk of the program. I mean, is 
it a simple import/export type of issue?


Yes, although I'll have to add an attribute shim layer for the 
dcompute druntime symbols to be accessible for DMD. When you 
compile, LDC will produce .ptx and .spv files in the object file 
directory, which can then be used in any project. The only 
thing that will be more fragile is lambda kernels, as they are 
mangled numerically (`__lambda1`, `__lambda2`, and so on).


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Mike via Digitalmars-d

On Friday, 28 July 2017 at 01:13:10 UTC, Nicholas Wilson wrote:


Terminology:
I was confused by the term "attribute group". Although the 
term is defined in the DIP, it implies a combination of 
attributes rather than a mutually exclusive attribute 
category.  Still having trouble understanding the DIP in 
detail due to this.


If you have a better name, please do tell.


Yeah, naming is hard.  I suggest "attribute class".


[Issue 17661] New isInputRange rejects valid input range

2017-07-27 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=17661

--- Comment #4 from Andrei Alexandrescu  ---
hsteoh, could you please submit those as a separate bug report for dmd and
create a phobos PR using the simplest workaround you can find? That PR would
close this bug, and the other bug will be for the compiler. Thanks!

--


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Friday, 28 July 2017 at 00:32:33 UTC, Mike wrote:

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md


Terminology:
I was confused by the term "attribute group". Although the term 
is defined in the DIP, it implies a combination of attributes 
rather than a mutually exclusive attribute category.  Still 
having trouble understanding the DIP in detail due to this.


If you have a better name, please do tell.



Rationale:
The rationale is weak, but reading the "Description", it appears 
that there's more to this DIP than what the rationale 
describes.  I suggest an enumerated list of problem/solution 
pairs that this DIP addresses.


Good idea.


Description:
It is also possible for the end user to directly control 
core.attribute.defaultAttributeSet by editing DRuntime 
directly.


Does this mean we can create an @safe-by-default or 
@final-by-default runtime?  If so, cool! But that should be 
spelled out in more detail in the rationale.


Hmm, the runtime may have to be a special case for attribute 
inference; I suspect that it does a whole bunch of things that 
are unsafe, and the GC itself being @nogc is a bit weird (though 
I suppose you could just link it out anyway). Not to mention 
global state being impure.



@core.attribute.GarbageCollectedness.inferred
That is way too verbose.  Is that just an illustration or is 
that really what we would need to be typing out?




An illustration. I expect that one will be able to write

@infer!(GarbageCollectedness, FunctionSafety)
or
@infer!(nogc, safe)

to both mean the same thing (or a combination of the above), but 
infer will be the default anyway, I suspect, where `infer` just 
selects the inferred value of each enum and builds an AliasSeq from 
them.



Breaking changes / deprecation process:
It would be nice to get a decision early from the leadership on 
whether they would be willing to deprecate the no-leading-@ on 
attributes that are used with such proliferation in D code, as 
otherwise a lot of time will be spent reviewing and debating 
this for nothing.  Sounds like a risky gamble.


Mike


IIRC the reason they lack a leading @ is purely historical and 
considered not good but not worth breaking code over. I believe 
this DIP presents an opportunity and reason to make that change. 
Existing code will still work i.e. we can deprecate the old form, 
since both the new and the old are implementation controlled, and 
make it become a positional keyword or the like.




Re: Silent error when using hashmap

2017-07-27 Thread FatalCatharsis via Digitalmars-d

On Thursday, 27 July 2017 at 14:09:37 UTC, Mike Parker wrote:
1. You can't expect exceptions thrown in a callback called from 
C to be propagated through the C side back into the D side. 
That includes errors. It happens on some platforms, but not 
all. On Windows, it does not (at least, not with DMD -- I can't 
speak for LDC).


Figures that the different calling conventions inside the depths 
of Windows wouldn't be unwound properly. I'll just make sure to 
do blanket try/catches for all throwables in each of these 
extern(Windows) functions so that the buffer flushes for debug 
output and then explicitly exit the program.
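
As a hedged sketch of that blanket-catch approach (Windows-only; the handler 
name and body are placeholders, not the original code):

```
import core.stdc.stdlib : exit;
import core.sys.windows.windows;
import std.stdio : stderr;

// Nothing may propagate back into the C side, so trap every Throwable,
// report what we can, and terminate explicitly.
extern (Windows) LRESULT wndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) nothrow
{
    try
    {
        // ... real message handling would go here ...
        return 0;
    }
    catch (Throwable t)
    {
        try { stderr.writeln(t); } catch (Throwable) {}
        exit(1);
        assert(0); // unreachable; satisfies the compiler's return analysis
    }
}
```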


You could try to catch the error and stash it for later, but 
better to do change the way you access the map:


```
if(auto pwin = hwnd in winMap) window = *pwin;
```


Knew about this but didn't think you could do this in one line. 
This is fantastic.


3. As I mentioned in another post in this thread, you are doing 
the wrong thing with your window reference. In your call to 
CreateWindowEx, you are correctly casting the reference to 
void*. Everywhere else, you're treating it as a pointer. That's 
wrong! To prove it, you can do this. Declare an instance of 
WinThing at the top of the file.


In CreateWindowEx, you've treated it as such. But then when you 
fetch it out of lpCreateParams, you cast it to a WinThing*. For 
that to be correct, you would have to change CreateWindowEx to 
pass a pointer to the reference (i.e. cast(void*)). But 
actually, that's still not correct because you're taking the 
address of a local variable. So the correct thing to do is to 
leave CreateWindowEx as is, and change every WinThing* to 
WinThing.


So in the language, references to class objects are treated the 
same syntactically as pointers? So if we were in C++, me 
declaring a WinThing** (reference to a reference to an object) 
is the same as WinThing* in D? Tried out your changes, that 
definitely cleaned up the mess. Why can I not also do the same 
with the create struct like:


CREATESTRUCT createStruct = cast(CREATESTRUCT) lparam;

I take it this is because CREATESTRUCT is not a D class, but a 
struct somewhere?


Note that you don't have to dereference the createStruct 
pointer to access its fields.


Nice tip, didn't realize D implicitly dereferenced pointers when 
you apply the dot operator. Very nice.


There's no reason to make this a static member or to call toUTFz 
when you use it. You can use a manifest constant with a wchar 
literal. Unlike char -> char*, wchar does not implicitly 
convert to wchar*, but you can use the .ptr property.


enum baseClass = "BaseClass"w;
wc.lpszClassName = baseClass.ptr;


Is the only difference between this and private static immutable 
values that the enum could not be an lvalue and has no reference 
at runtime?
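
A minimal sketch of that distinction (identifier names are illustrative): a 
manifest constant has no storage and no address, while a static immutable is 
real data in the binary.

```
enum wstring classNameEnum = "BaseClass"w;            // manifest constant: substituted
                                                      // at each use site, no storage
static immutable wstring classNameImm = "BaseClass"w; // one copy with an address

void addressDemo()
{
    // auto bad = &classNameEnum; // error: a manifest constant is not an lvalue
    auto ok = &classNameImm;      // fine: static immutable data exists at runtime
}
```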





Re: It makes me sick!

2017-07-27 Thread Mike Parker via Digitalmars-d-learn

On Friday, 28 July 2017 at 00:28:52 UTC, FoxyBrown wrote:


You are not being very logical.

The zip file has N files in it. No matter what those files are, 
it should be a closed system. That is, if I insert or add (not 
replace) M files to the directory structure it should not break 
D, period!


That's *not* what happened here. Jonathan explained it quite 
well. std.datetime was refactored into a package, its contents 
split into new modules. When you import a module foo, dmd looks 
for:


1. foo.di
2. foo.d
3. foo/package.d

When it finds one, it stops looking. It's not an error for all 
three to exist. Your error came because it found std/datetime.d, 
but you linked to a library that included symbols for 
std/datetime/package.d. It's not the compiler's responsibility 
to error in that case. It's your responsibility to properly 
install.
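
To make that concrete, a hedged illustration of the failure mode (the paths 
in the comments are examples only):

```
// dmd2/src/phobos/std/datetime.d          <- stale module left over from the old release
// dmd2/src/phobos/std/datetime/package.d  <- new package layout shipped in the zip
//
// With both present, `import std.datetime;` resolves to std/datetime.d (rule 2
// matches before rule 3 is ever tried), so the compiler emits references to the
// old module while the shipped phobos library only carries symbols for the new
// package -- hence missing-symbol linker errors and no compile-time diagnostic.
import std.datetime;

void main()
{
    auto t = Clock.currTime(); // links only if the import matched the library
}
```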




Why? Because NO file in the zip should be referencing any file 
not in the zip unless it is designed to behave that way (e.g., 
an external lib or whatever).


If an old external program is referencing a file in dmd2 that 
isn't in the new zip it should err.


Why? Because suppose you have an old program that references 
some old file in dmd2 dir and you upgrade dmd2 by extracting 
the zip. The program MAY still work and use broke functionality 
that will go undetected but be harmful.


Why? Because dmd.exe is referencing a file it shouldn't, and it 
should know it shouldn't, yet it does so anyways. It really has 
nothing to do with the file being in the dir but that dmd is 
being stupid because no one bothered to add sanity checks, because 
they are too lazy/think it's irrelevant because it doesn't 
affect them.


That's unreasonable.



I should be able to put any extra files anywhere in the dmd2 
dir structure and it should NOT break dmd.


There are numerous applications out there that can break if you 
simply overwrite a directory with a newer version of the app. DMD 
is not alone with this. You should always delete the directory 
first. It's precisely why the compiler does so.





Re: using DCompute

2017-07-27 Thread James Dean via Digitalmars-d-learn

On Friday, 28 July 2017 at 00:23:35 UTC, Nicholas Wilson wrote:

On Thursday, 27 July 2017 at 21:33:29 UTC, James Dean wrote:
I'm interested in trying it out, says it's just for ldc. Can 
we simply compile it using ldc then import it and use dmd, 
ldc, or gdc afterwards?


The ability to write kernels is limited to LDC, though there is 
no practical reason that, once compiled, you couldn't use 
resulting generated files with GDC or DMD (as long as the 
mangling matches, which it should). This is not a priority to 
get working, since the assumption is if you're trying to use 
the GPU to boost your computing power, then you likely care 
enough to use LDC, as opposed to DMD (GDC is still a bit behind 
DMD so I don't consider it) to get good optimisations in the 
first place.




Yes, but dmd is still good for development since LDC sometimes 
has problems.


Can we compile kernels in LDC and import them in to a D project 
seamlessly? Basically keep an LDC project that deals with the 
kernels while using dmd for the bulk of the program. I mean, is 
it a simple import/export type of issue?






Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Mike via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md


Terminology:
I was confused by the term "attribute group". Although the term 
is defined in the DIP, it implies a combination of attributes 
rather than a mutually exclusive attribute category.  Still 
having trouble understanding the DIP in detail due to this.


Rationale:
The rationale is weak, but reading the "Description", it appears that 
there's more to this DIP than what the rationale describes.  I 
suggest an enumerated list of problem/solution pairs that this 
DIP addresses.


Description:
It is also possible for the end user to directly control 
core.attribute.defaultAttributeSet by editing DRuntime directly.


Does this mean we can create an @safe-by-default or 
@final-by-default runtime?  If so, cool! But that should be 
spelled out in more detail in the rationale.



@core.attribute.GarbageCollectedness.inferred
That is way too verbose.  Is that just an illustration or is that 
really what we would need to be typing out?


Breaking changes / deprecation process:
It would be nice to get a decision early from the leadership on 
whether they would be willing to deprecate the no-leading-@ on 
attributes that are used with such proliferation in D code, as 
otherwise a lot of time will be spent reviewing and debating this 
for nothing.  Sounds like a risky gamble.


Mike


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Friday, 28 July 2017 at 00:20:25 UTC, jmh530 wrote:
On Thursday, 27 July 2017 at 23:27:53 UTC, Nicholas Wilson 
wrote:



Might be useful to mention why not included.


This DIP focuses on function attributes (i.e. @-like 
attributes); the rest of those attributes are storage 
classes/visibility attributes or parametric in a way that 
doesn't fit with this DIP (extern(C++, A.B), package(foo), align(N)).


So then you might make that more clear, such as by re-titling 
it "Function Attributes" instead of "Attributes" and change 
language in certain locations, like in the abstract, to refer 
to function attributes specifically instead of all attributes.


That's a good idea.


Re: It makes me sick!

2017-07-27 Thread FoxyBrown via Digitalmars-d-learn

On Thursday, 27 July 2017 at 23:37:41 UTC, Jonathan M Davis wrote:
On Thursday, July 27, 2017 11:55:21 Ali Çehreli via 
Digitalmars-d-learn wrote:

On 07/27/2017 11:47 AM, Adam D. Ruppe wrote:
> On Thursday, 27 July 2017 at 18:35:02 UTC, FoxyBrown wrote:
>> But the issue was about missing symbols, not anything 
>> "extra". If datatime.d is there but nothing is using it, 
>> why should it matter?

>
> YOU were using it with an `import std.datetime;` line. With 
> the file still there, it sees it referenced from your code 
> and loads the file... but since it is no longer used 
> upstream, the .lib doesn't contain it and thus missing 
> symbol.


So, the actual problem is that given both

   datetime/package.d and
   datetime.d,

the import statement prefers the file. It could produce a 
compilation error.


If we don't want that extra check by the compiler, it would be 
better to keep datetime.d with a warning in it about the 
change. The warning could say "please remove this file". :)


I think that this should obviously be a compilation error as 
should any case where you've basically declared the same module 
twice. And really, I don't see any reason to support extracting 
the new zip on the old folder. We've never said that that would 
work, and if you think it through, it really isn't all that 
reasonable to expect that it would work. The list of files 
changes from release to release (even if removals are rare), 
and the layout of the directories could change. So long as the 
sc.ini or dmd.conf does the right thing, then that really 
isn't a problem. Obviously, it's more of a pain if folks are 
making manual changes, but we've never promised that the 
directory structure of each release would be identical or that 
copying one compiler release on top of another would work.


- Jonathan M Davis



You are not being very logical.

The zip file has N files in it. No matter what those files are, it 
should be a closed system. That is, if I insert or add (not 
replace) M files to the directory structure it should not break D, 
period!


Why? Because NO file in the zip should be referencing any file 
not in the zip unless it is designed to behave that way (e.g., an 
external lib or whatever).


If an old external program is referencing a file in dmd2 that 
isn't in the new zip it should err.


Why? Because suppose you have an old program that references some 
old file in the dmd2 dir and you upgrade dmd2 by extracting the zip. 
The program MAY still work and use broken functionality that will 
go undetected but be harmful.


Why? Because dmd.exe is referencing a file it shouldn't, and it 
should know it shouldn't, yet it does so anyways. It really has 
nothing to do with the file being in the dir but that dmd is 
being stupid because no one bothered to add sanity checks, because 
they are too lazy/think it's irrelevant because it doesn't affect 
them.


I should be able to put any extra files anywhere in the dmd2 dir 
structure and it should NOT break dmd.


It's like if I put a text file in some OS directory and the OS 
decides to use that file and crash the OS and not boot... it 
could happen, but it shouldn't.


In fact, all of phobos should be versioned. Each module should 
have a version id embedded in it. Each release, all the versions 
would be updated before shipping.





Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread jmh530 via Digitalmars-d

On Thursday, 27 July 2017 at 23:27:53 UTC, Nicholas Wilson wrote:



Might be useful to mention why not included.


This DIP focuses on function attributes (i.e. @-like attributes); the rest 
of those attributes are storage classes/visibility attributes or 
parametric in a way that doesn't fit with this DIP (extern(C++, 
A.B), package(foo), align(N)).


So then you might make that more clear, such as by re-titling it 
"Function Attributes" instead of "Attributes" and change language 
in certain locations, like in the abstract, to refer to function 
attributes specifically instead of all attributes.


Re: using DCompute

2017-07-27 Thread Nicholas Wilson via Digitalmars-d-learn

On Thursday, 27 July 2017 at 21:33:29 UTC, James Dean wrote:
I'm interested in trying it out, says it's just for ldc. Can we 
simply compile it using ldc then import it and use dmd, ldc, or 
gdc afterwards?


The ability to write kernels is limited to LDC, though there is 
no practical reason that, once compiled, you couldn't use 
resulting generated files with GDC or DMD (as long as the 
mangling matches, which it should). This is not a priority to get 
working, since the assumption is if you're trying to use the GPU 
to boost your computing power, then you like care enough to use 
LDC, as opposed to DMD (GDC is still a bit behind DMD so I don't 
consider it) to get good optimisations in the first place.



---
a SPIRV capable LLVM (available here) to build ldc to support 
SPIRV (required for OpenCL).
or LDC built with any LLVM 3.9.1 or greater that has the NVPTX 
backend enabled, to support CUDA.

---

Does the LDC from the download pages have these enabled?


I don't think so, although future releases will likely have the 
NVPTX backend enabled.


Also, can DCompute or any GPU stuff efficiently render stuff 
because it is already on the GPU or does one sort of have to 
jump through hoops to, say, render a buffer?


There are memory sharing extensions that allow you to give access 
to and from OpenGL/DirectX so you shouldn't suffer a perf penalty 
for doing so.


e.g., suppose I want to compute a 3D mathematical function and 
visualize its volume. Do I go into the GPU, do the compute, 
back out to the CPU, then to the graphics system (OpenGL/DirectX), or 
can I just essentially do it all from the GPU?


there should be no I/O overhead.


Re: @safe and null dereferencing

2017-07-27 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 27, 2017 at 09:32:12PM +, Moritz Maxeiner via Digitalmars-d 
wrote:
> On Thursday, 27 July 2017 at 20:48:51 UTC, H. S. Teoh wrote:
[...]
> > If someone malicious has root access to your server, you already
> > have much bigger things to worry about than D services hanging. :-D
> 
> That depends on how valuable you are as a target, how hard it was to
> gain root access, and what the attacker's intentions are.  If you are
> a high value target for which root access was hard to get, the
> attacker is unlikely to risk detection by doing things that someone
> (or an IDS) will categorize as an attack; the attacker is much more
> likely to try and subvert the system without being detected; see for
> example how Stuxnet was used to slowly damage centrifuge machines.

Yes, and therefore "you already have much bigger things to worry about
than D services hanging".  That you're ignorant of the compromise does
not negate the fact that you do have bigger things to worry about,
you're just blissfully unaware of them. :-P  Until the good stuff hits
the proverbial fan, of course.


T

-- 
Tell me and I forget. Teach me and I remember. Involve me and I understand. -- 
Benjamin Franklin


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Thursday, 27 July 2017 at 18:06:41 UTC, jmh530 wrote:
I think those are only for overriding a module-level @nogc, but the 
DIP should be clearer on this matter. I would assume you can 
import core.attribute to simplify that.


core.attribute will be implicitly imported. That is the FQN. As a 
regular attribute it can be aliased.


Also, the DIP doesn't provide names for the attribute groups 
for the other ones. I assume GarbageCollectedness is just named 
that for the purpose of the example and is something that could 
be changed. Ideally, it would provide the names for each of the 
different groups as part of the DIP.


Heh, I know how much fun bike shedding is on the D forums...


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Thursday, 27 July 2017 at 16:56:14 UTC, ketmar wrote:

Mike Parker wrote:


DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md

All review-related feedback on and discussion of the DIP 
should occur in this thread. The review period will end at 
11:59 PM ET on August 10 (3:59 AM GMT August 11), or when I 
make a post declaring it complete.


At the end of Round 1, if further review is deemed necessary, 
the DIP will be scheduled for another round. Otherwise, it 
will be queued for the formal review and evaluation by the 
language authors.


Thanks in advance to all who participate.

Destroy!


didn't get the rationale of the DIP at all. the only important 
case -- attribute cancellation -- is either missing, or so 
well-hidden that i didn't find it (except a passing mention). 
everything else looks like astronautical complexity for the 
sake of having some "abstract good" (that is, for all my years 
of using D as the only language i'm writing code in, i never 
had any need to "group defaults" or something -- only to 
selectively cancel attrs).


tl;dr: ketmar absolutely didn't get what this DIP is about.


Hmm, maybe a "last applied wins" could work, although this may 
require some complex changes to the compiler if the order that 
attributes apply is unspecified.


As in my reply to a sibling comment, the change is very simple: 
keyword-like function attributes instead become regular 
attributes.





Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Thursday, 27 July 2017 at 17:35:34 UTC, Adrian Matoga wrote:
I don't want to see monsters like 
"@core.attribute.GarbageCollectedness.inferred" as part of any 
declaration, ever.
I agree that the problem is valid, but I don't think adding the 
complexity and verboseness presented in the DIP can solve it.


You almost certainly won't, although 
"@core.attribute.GarbageCollectedness.inferred" would still be 
valid. GarbageCollectedness.inferred is a regular attribute and 
can be aliased to whatever you want, put in an AliasSeq or hidden 
behind some template that generates a whole bunch of attributes.


Re: @safe and null dereferencing

2017-07-27 Thread Jonathan M Davis via Digitalmars-d
On Thursday, July 27, 2017 13:48:51 H. S. Teoh via Digitalmars-d wrote:
> On Thu, Jul 27, 2017 at 07:50:52PM +, Moritz Maxeiner via Digitalmars-
d wrote:
> > On Thursday, 27 July 2017 at 18:46:16 UTC, Jonathan M Davis wrote:
> [...]
>
> > > I see no problem whatsoever requiring that the platform segfaults
> > > when you dereference null. Anything even vaguely modern will do
> > > that. Adding extra null checks is therefore redundant and
> > > complicates the compiler for no gain whatsoever.
> >
> > Except that when someone gets (root) access to any modern Linux
> > servers running D services he now has an easy way to create a denial
> > of service attack the owner of the server won't easily be able to find
> > the cause of, because pretty much everything *looks* right, except
> > that somehow the D services hang.
>
> If someone malicious has root access to your server, you already have
> much bigger things to worry about than D services hanging. :-D

Agreed. And Safe D has never made any promises about denial of service
attacks and whatnot, let alone preventing things going wrong if someone has
root access. If you don't want segfaulting to open a window for someone to
hit you with a DoS attack, then don't dereference null pointers, and if you
don't want someone to do nasty things to your server that would require them
to be root, then do the appropriate things to protect your machine so that
they don't have root. We can _always_ find ways that a badly written program
can have issues with DoS attacks or have trouble if someone malicious has
access to the machine that it's running on. @safe is about guaranteeing
memory safety, not about stopping people from screwing you over when you
write bad code or fail to protect your computer from attacks.

- Jonathan M Davis



Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Thursday, 27 July 2017 at 15:48:04 UTC, Olivier FAURE wrote:

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md


This DIP proposes a very complex change (treating attributes as 
Enums), but doesn't really provide a rationale for these 
changes.


It is actually a very simple change, from the end user's 
perspective.
* Function attributes that were keyword-like become regular 
attributes.
* They can be applied to modules, acting as a default for 
applicable symbols in the module.






Re: It makes me sick!

2017-07-27 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, July 27, 2017 11:55:21 Ali Çehreli via Digitalmars-d-learn 
wrote:
> On 07/27/2017 11:47 AM, Adam D. Ruppe wrote:
> > On Thursday, 27 July 2017 at 18:35:02 UTC, FoxyBrown wrote:
> >> But the issue was about missing symbols, not anything "extra". If
> >> datatime.d is there but nothing is using it, why should it matter?
> >
> > YOU were using it with an `import std.datetime;` line. With the file
> > still there, it sees it referenced from your code and loads the file...
> > but since it is no longer used upstream, the .lib doesn't contain it and
> > thus missing symbol.
>
> So, the actual problem is that given both
>
>datetime/package.d and
>datetime.d,
>
> the import statement prefers the file. It could produce a compilation
> error.
>
> If we don't want that extra check by the compiler, it would be better to
> keep datetime.d with a warning in it about the change. The warning could
> say "please remove this file". :)

I think that this should obviously be a compilation error as should any case
where you've basically declared the same module twice. And really, I don't
see any reason to support extracting the new zip on the old folder. We've
never said that that would work, and if you think it through, it really
isn't all that reasonable to expect that it would work. The list of files
changes from release to release (even if removals are rare), and the layout
of the directories could change. So long as the sc.ini or dmd.conf does the
right thing, then that really isn't a problem. Obviously, it's more of a
pain if folks are making manual changes, but we've never promised that the
directory structure of each release would be identical or that copying one
compiler release on top of another would work.

- Jonathan M Davis




Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Thursday, 27 July 2017 at 15:40:01 UTC, jmh530 wrote:

On Thursday, 27 July 2017 at 14:58:22 UTC, Atila Neves wrote:



_Why_ it works like that I have no idea.



I thought that the attributes were just using the same behavior 
as public/private/etc.


Anyway, isn't that the same type of behavior this DIP is 
suggesting? There is an @nogc module foo; example in the DIP 
that has a gc function included and doesn't say anything about 
it being an error.


The DIP has a list of attributes not encompassed, but there are 
missing attributes from [1]. For instance, the visibility 
attributes are not encompassed, but that is not mentioned. In 
this case, they are grouped and have a default (public) and an 
opposite (private). However, it would break a lot of code to 
force them to use @. Might be useful to mention why not 
included.


https://dlang.org/spec/attribute.html


Hmm. With  private/package/protected/public/export you can mix 
and match them as you please:


public:

void foo() {}
void bar() {}
private:
void baz() {}
int a,b;
public int c;

whereas if it were to be encompassed by this DIP, that would no 
longer work. Maybe it should work; perhaps the last attribute wins 
(assuming previous `@attributes:` come before it in the list)?



Might be useful to mention why not included.


This DIP focuses on function (i.e. @-like attributes), the rest 
of those attributes are storage classes/visibility classes or 
parametric in a way that doesn't fit with this DIP (extern(C++, 
A.B), package(foo) align(N).




Re: @safe and null dereferencing

2017-07-27 Thread Marco Leise via Digitalmars-d
Am Thu, 27 Jul 2017 17:59:41 +
schrieb Adrian Matoga :

> On Thursday, 27 July 2017 at 17:43:17 UTC, H. S. Teoh wrote:
> > On Thu, Jul 27, 2017 at 05:33:22PM +, Adrian Matoga via 
> > Digitalmars-d wrote: [...]  
> >> Why can't we just make the compiler insert null checks in 
> >> @safe code?  
> >
> > Because not inserting null checks is a sacred cow we inherited 
> > from the C/C++ days of POOP (premature optimization oriented 
> > programming), and we are loathe to slaughter it.  :-P  We 
> > should seriously take some measurements of this in a large D 
> > project to determine whether or not inserting null checks 
> > actually makes a significant difference in performance.  
> 
> That's exactly what I thought.

A typical non-synthetic worst-case candidate would be a
test suite that invokes a lot of null checks. (The check could be a
function call at first, to count checks per run of the executable
and pick a good project.)

-- 
Marco



Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Nicholas Wilson via Digitalmars-d

On Thursday, 27 July 2017 at 14:58:22 UTC, Atila Neves wrote:
"at the top of a file means that one can never "undo" those 
attributes"


That's not true for `@safe`. This is perfectly legal:

@safe:

void foo()  { ... }    // foo is @safe
void bar() @system { } // bar is @system


_Why_ it works like that I have no idea.

Atila


Huh.  I guess it's because there are three values in that group, 
unlike the rest of them, and the compiler handles them 
differently.


Re: Pass range to a function

2017-07-27 Thread Ali Çehreli via Digitalmars-d-learn

On 07/27/2017 02:16 PM, Chris wrote:

> What is the value of `???` in the following program:

> void categorize(??? toks) {
>   foreach (t; toks) {
> writeln(t);
>   }
> }

The easiest solution is to make it a template (R is a suitable template 
parameter name for a range type):


void categorize(R)(R toks) {
  foreach (t; toks) {
    writeln(t);
  }
}

Your function will work with any type that can be iterated with foreach 
and can be passed to writeln. However, you can use template constraints 
to limit its usage, document its usage, or produce better compilation 
errors when it's called with an incompatible type (the error message 
would point at the call site as opposed to the body of categorize):


import std.range;
import std.stdio;
import std.traits;

void categorize(R)(R toks)
if (isInputRange!R && isSomeString!(ElementType!R)) {
  foreach (t; toks) {
    writeln(t);
  }
}
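
A short usage sketch, with made-up inputs standing in for the matchAll 
result; any lazy range of strings satisfies the constraint:

```
import std.algorithm.iteration : map;
import std.range : only;

void usage()
{
    categorize(only("alpha", "beta"));          // a small fixed range of strings
    categorize(["x", "y"].map!(s => s ~ "!"));  // a lazy mapped range works too
}
```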

Ali



Re: @safe and null dereferencing

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 27 July 2017 at 20:48:51 UTC, H. S. Teoh wrote:
On Thu, Jul 27, 2017 at 07:50:52PM +, Moritz Maxeiner via 
Digitalmars-d wrote:
On Thursday, 27 July 2017 at 18:46:16 UTC, Jonathan M Davis 
wrote:

[...]
> I see no problem whatsoever requiring that the platform 
> segfaults when you dereference null. Anything even vaguely 
> modern will do that. Adding extra null checks is therefore 
> redundant and complicates the compiler for no gain 
> whatsoever.


Except that when someone gets (root) access to any modern 
Linux servers running D services he now has an easy way to 
create a denial of service attack the owner of the server 
won't easily be able to find the cause of, because pretty much 
everything *looks* right, except that somehow the D services 
hang.


If someone malicious has root access to your server, you 
already have much bigger things to worry about than D services 
hanging. :-D


That depends on how valuable you are as a target, how hard it was 
to gain root access, and what the attacker's intentions are.
If you are a high value target for which root access was hard to 
get, the attacker is unlikely to risk detection by doing things 
that someone (or an IDS) will categorize as an attack; the 
attacker is much more likely to try and subvert the system 
without being detected; see for example how Stuxnet was used to 
slowly damage centrifuge machines.


using DCompute

2017-07-27 Thread James Dean via Digitalmars-d-learn
I'm interested in trying it out, says it's just for ldc. Can we 
simply compile it using ldc then import it and use dmd, ldc, or 
gdc afterwards?


---
a SPIRV capable LLVM (available here) to build ldc to support 
SPIRV (required for OpenCL).
or LDC built with any LLVM 3.9.1 or greater that has the NVPTX 
backend enabled, to support CUDA.

---

Does the LDC from the download pages have these enabled?

Also, can DCompute or any GPU stuff efficiently render stuff 
because it is already on the GPU or does one sort of have to jump 
through hoops to, say, render a buffer?


e.g., suppose I want to compute a 3D mathematical function and 
visualize its volume. Do I go into the GPU, do the compute, 
back out to the CPU, then to the graphics system (OpenGL/DirectX), or 
can I just essentially do it all from the GPU?




Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Iakh via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

Destroy!


Extend the rationale: e.g., application to templates and use 
with CTFE.


"inferred" is not consistent. As I understand inferred applies to 
templates only. And default value is so called 
inferred_or_system. So it is inferred for templates and system 
for everything else. So whole safety group is:

 - safe
 - system
 - trusted
 - inferred_or_safe / soft_safe
 - inferred_or_system / soft_system


Pass range to a function

2017-07-27 Thread Chris via Digitalmars-d-learn
I'm using regex `matchAll`, and mapping it to get a sequence of 
strings. I then want to pass that sequence to a function. What is 
the general "sequence of strings" type declaration I'd need to 
use?


In C#, it'd be `IEnumerable`. I'd rather not do a 
to-array on the sequence, if possible. (e.g. It'd be nice to just 
pass the lazy sequence into my categorize function.)


What is the value of `???` in the following program:


```
import std.array, std.stdio, std.regex, std.string, std.algorithm.iteration;

auto regexToStrSeq(RegexMatch!string toks) {
  return toks.map!(t => t[0].strip());
}

void categorize(??? toks) {
  foreach (t; toks) {
    writeln(t);
  }
}

void main()
{
  auto reg = regex("[\\s,]*(~@|[\\[\\]{\\}()'`~^@]|\"(?:.|[^\"])*\"|;.*|[^\\s\\[\\]{}('\"`,;)]*)");

  auto line = "(+ 1 (* 2 32))";
  auto baz = matchAll(line, reg);

  categorize(regexToStrSeq(baz).array);
}
```


Re: @safe and null dereferencing

2017-07-27 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 27, 2017 at 07:50:52PM +, Moritz Maxeiner via Digitalmars-d 
wrote:
> On Thursday, 27 July 2017 at 18:46:16 UTC, Jonathan M Davis wrote:
[...]
> > I see no problem whatsoever requiring that the platform segfaults
> > when you dereference null. Anything even vaguely modern will do
> > that. Adding extra null checks is therefore redundant and
> > complicates the compiler for no gain whatsoever.
> 
> Except that when someone gets (root) access to any modern Linux
> servers running D services he now has an easy way to create a denial
> of service attack the owner of the server won't easily be able to find
> the cause of, because pretty much everything *looks* right, except
> that somehow the D services hang.

If someone malicious has root access to your server, you already have
much bigger things to worry about than D services hanging. :-D


T

-- 
Don't get stuck in a closet---wear yourself out.


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Andrei Alexandrescu via Digitalmars-d

On 07/25/2017 10:54 PM, Walter Bright wrote:

On 7/25/2017 8:26 AM, Andrei Alexandrescu wrote:

A suite of safe wrappers on OS primitives might be useful.


The idea of fixing the operating system interface(s) has come up now and 
then. I've tried to discourage that on the following grounds:



* We are not in the operating system business.

* Operating system APIs grow like weeds. We'd set ourselves an 
impossible task.


* It's a huge job simply to provide accurate declarations for the APIs.

* We'd have to write our own documentation for the operating system 
APIs. It's hard enough writing such for Phobos.


* A lot are fundamentally unfixable, like free() and strlen().

* The API import files should be focused solely on direct access to the 
APIs, not adding a translation layer. The user of them will expect this.


* We already have safe wrappers for the commonly used APIs. For read(), 
there is std.stdio.


The standard library would not be in the position to provide such, but 
the project seems a good choice for a crowdsourced and crowd-maintained 
library. -- Andrei
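
A minimal sketch of what such a wrapper could look like, assuming a POSIX 
platform and that druntime's core.sys.posix.unistd binding for read is 
declared nothrow @nogc; the raw pointer/length pair never leaves the 
@trusted boundary, so callers deal only in slices:

```
import core.sys.posix.unistd : read;

auto safeRead(int fd, ubyte[] buf) @trusted nothrow @nogc
{
    // Pass the slice's pointer and length ourselves so @safe callers never
    // touch raw pointers; the return value is the byte count or -1 on error.
    return read(fd, buf.ptr, buf.length);
}
```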





Re: @safe and null dereferencing

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d
On Thursday, 27 July 2017 at 20:09:46 UTC, Steven Schveighoffer 
wrote:


Well, let's not forget that the services should not be 
dereferencing null. It's still a bug in the code.


Of course, but statistically speaking, all software is buggy so 
it's not an unreasonable assumption on the attacker's part that 
there is at least one null dereference in complex server code 
that will eventually trigger.




It just may result in something other than a process exit.


Which is really bad for process supervision, because it'll likely 
not detect a problem and not kill+respawn the service.




I bet if you lowered that limit, you would cause all sorts of 
trouble, not just in D safe code. Imagine, any function that 
returns null specifically to mean an error, now may return it 
casually as the address of a valid item! You are going to screw 
up all checks for null!


Indeed, but atm I was only concerned about the implications for D 
@safe code.





Re: why no statements inside mixin teplates?

2017-07-27 Thread John Colvin via Digitalmars-d
On Friday, 12 May 2017 at 00:20:13 UTC, سليمان السهمي (Soulaïman 
Sahmi) wrote:
Is there a rationale behind not allowing statements inside mixin 
templates? I know mixin does accept code containing statements, 
but using mixin is much uglier, so I was wondering.


example use case:
//-
int compute(string)
{
    return 1;
}

mixin template testBoilerPlate(alias arg, alias expected)
{
    {
        import std.format : format;
        auto got = compute(arg);
        assert(got == expected, "expected %s got %s".format(expected, got));
    }
}

unittest
{
    mixin testBoilerPlate("12345", 1);
    mixin testBoilerPlate("00" ~ "0", 2 - 1);
}
//


If you can put up with the limitation of what can be done in a 
nested function then this convention works (choose whatever names 
you want, A and __ are just for example):


mixin template A()
{
    auto __()
    {
        ++a;
    }
}

void main()
{
    int a = 0;

    mixin A!() __; __.__;

    assert(a == 1);
}



Re: Problem with dtor behavior

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d-learn

On Thursday, 27 July 2017 at 19:19:27 UTC, SrMordred wrote:

//D-CODE
import core.stdc.stdlib : malloc;
import std.stdio : writeln;

struct MyStruct{
    int id;
    this(int id){
        writeln("ctor");
    }
    ~this(){
        writeln("dtor");
    }
}

MyStruct* obj;
void push(T)(auto ref T value){
    obj[0] = value;
}

void main()
{
    obj = cast(MyStruct*)malloc( MyStruct.sizeof );
    push(MyStruct(1));
}

OUTPUT:
ctor
dtor
dtor

I didn't expect to see two dtors in D (this destroys any 
attempt to free resources properly in the destructor).


AFAICT it's because opAssign (`obj[0] = value` is an opAssign) 
creates a temporary struct object (you can see it being destroyed 
by printing the value of `cast(void*)&this` in the destructor).


Can someone explain why this is happening and how to achieve 
the same behavior as C++?


Use std.conv.emplace:

---
import std.conv : emplace;

void push(T)(auto ref T value){
    emplace(obj, value);
}
---


Re: @safe and null dereferencing

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 3:50 PM, Moritz Maxeiner wrote:

On Thursday, 27 July 2017 at 18:46:16 UTC, Jonathan M Davis wrote:
On Thursday, July 27, 2017 11:03:02 Steven Schveighoffer via 
Digitalmars-d wrote:

A possibility:

"@safe D does not support platforms or processes where dereferencing 
a null pointer does not crash the program. In such situations, 
dereferencing null is not defined, and @safe code will not prevent 
this from happening."


In terms of not marking C/C++ code safe, I am not convinced we need 
to go that far, but it's not as horrible a prospect as having to 
unmark D @safe code that might dereference null.


I see no problem whatsoever requiring that the platform segfaults when 
you dereference null. Anything even vaguely modern will do that. 
Adding extra null checks is therefore redundant and complicates the 
compiler for no gain whatsoever.


Except that when someone gets (root) access to any modern Linux servers 
running D services he now has an easy way to create a denial of service 
attack the owner of the server won't easily be able to find the cause 
of, because pretty much everything *looks* right, except that somehow 
the D services hang.


Well, let's not forget that the services should not be dereferencing 
null. It's still a bug in the code.


It just may result in something other than a process exit.

I bet if you lowered that limit, you would cause all sorts of trouble, 
not just in D safe code. Imagine, any function that returns null 
specifically to mean an error, now may return it casually as the address 
of a valid item! You are going to screw up all checks for null!


-Steve


Randomized/encoded code

2017-07-27 Thread James Dean via Digitalmars-d-learn
I would like to encode code in such a way that each compilation 
produces "random" code as compared to what the previous 
compilation produced, but ultimately the same code is run each 
time (same effect).


Basically we can code a function that does a job X in many 
different ways. Each way looks different in binary but does the 
same job(Same effect). I'd like a way to sort of randomly 
sample/generate the different functions that do the same job.



The easiest way to wrap your head around this is to realize that 
certain instructions and groups of instructions can be 
rearranged, producing a binary that is different but the effect 
is the same. Probably, ultimately, that is all that can be 
done(certain other tricks could possibly be added to increase the 
sampling coverage such as nop like instructions, dummy 
instructions, etc).


The main issue is how to take an actual D function and transform 
it in to a new D function, which, when ran, ultimately does the 
same thing as the original but is not the same "binary".


Encrypting is a subset of this problem as we can take any string 
and use it to encode the code then decrypt it. And this may be 
usable, but then the encryption and decryption instructions must 
somehow be randomizable, else we are back at square one. It might 
be easier though, to use the encryption method to randomize the 
original function since the encryption routine is known while the 
original function is not(as it could be any function).


I'm not looking for a mathematical solution, just something that 
works well in practice. i.e., The most skilled human reading the 
disassembly would find it very difficult to interpret what is 
going on. He might be able to figure out one encryption routine, 
say, but when he sees the "same code"(same effect) he will have 
to start from scratch to understand it because its been 
"randomized".


The best way I can see how to do this is to have a list of well 
known encoding routines that take an arbitrary function, encrypt 
it. Each routine can be "randomized" by using various techniques 
to disguise it such as those mentioned earlier. This expands the 
list of functions tremendously. If there are N functions and M 
different ways to alter each of those functions then there are 
N*M total functions that we can use to encrypt the original 
function. If we further allow function composition of these 
functions, then we can get several orders of magnitude more 
complexity with just a few N.


The goal though, is to do this efficiently and effectively in a 
way that can be amended. It will be useful in copy protection and 
used with other techniques to make it much more effective. 
Ultimately the weak point with the encryption techniques is 
decrypting the functions, but composing encryption routines 
makes it stronger.


Any ideas how to achieve this in D nicely?
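
One hedged sketch of a possible direction: derive a per-build "seed" at 
compile time and use it to select among behaviorally equivalent 
implementations, so each build can emit different code with the same effect. 
Everything below (the __TIMESTAMP__-based seed, the twice function) is made 
up for the example; a real setup would more likely feed the seed in through 
-version flags or a string import, since two builds within the same second 
would otherwise pick the same variant.

---
// Pick one of several equivalent implementations via static if,
// keyed on a value that changes from build to build.
enum size_t buildSeed = ()
{
    size_t h = 0;
    foreach (c; __TIMESTAMP__)
        h = h * 31 + c;
    return h;
}();

int twice(int x)
{
    static if (buildSeed % 2 == 0)
        return x + x;   // variant A
    else
        return x * 2;   // variant B: different code, same effect
}

void main()
{
    assert(twice(21) == 42);   // holds whichever variant was compiled in
}
---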







Re: @safe and null dereferencing

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 27 July 2017 at 18:46:16 UTC, Jonathan M Davis wrote:
On Thursday, July 27, 2017 11:03:02 Steven Schveighoffer via 
Digitalmars-d wrote:

A possibility:

"@safe D does not support platforms or processes where 
dereferencing a null pointer does not crash the program. In 
such situations, dereferencing null is not defined, and @safe 
code will not prevent this from happening."


In terms of not marking C/C++ code safe, I am not convinced we 
need to go that far, but it's not as horrible a prospect as 
having to unmark D @safe code that might dereference null.


I see no problem whatsoever requiring that the platform 
segfaults when you dereference null. Anything even vaguely 
modern will do that. Adding extra null checks is therefore 
redundant and complicates the compiler for no gain whatsoever.


Except that when someone gets (root) access to any modern Linux 
servers running D services he now has an easy way to create a 
denial of service attack the owner of the server won't easily be 
able to find the cause of, because pretty much everything *looks* 
right, except that somehow the D services hang.


Re: It makes me sick!

2017-07-27 Thread Jesse Phillips via Digitalmars-d-learn

On Thursday, 27 July 2017 at 03:34:19 UTC, FoxyBrown wrote:
Knowing that every time I upgrade to the latest "official" D 
compiler I run in to trouble:


I recompiled gtkD with the new compiler, same result.  My code 
was working before the upgrade just fine and I did not change 
anything.


I've had to delete my previous install at least 2 times before. 
It is an infrequent headache I hit because I'm not following 
appropriate install steps. I cannot expect upstream to support a 
DMD folder which has additional files from what they have 
provided.


Here is my attempt to explain the problem.

* std/datetime.d has a different mangled name than 
std/datetime/package.d.
* The phobos.lib contains the std.datetime.package module and no 
longer contains the std.datetime module.
* When the compiler is reading your code it sees imports for 
std.datetime and looks at the import location 
/install/directory/dmd2/src/std and it writes a reference to the 
std/datetime.d file.
* The linker takes over, loads up phobos.lib and barfs since the 
referenced symbol was not compiled into your obj file nor the 
released phobos.lib.


More recently the headache I've been hitting with upgrades is 
improvements to @safe and such. The bug fixes around this cause 
libraries I'm using to fail compilation. Even this isn't so bad, 
but the library that fails is a dependency of a dub package I'm 
using. This means I have to wait for the dependent package to 
update and release followed by the dub package I'm actually 
referencing. So even if I create the needed patches, I have to 
wait at each step for the author to merge and tag their release. 
(or create a branch of the project and throw it in dub so I can 
control all the updates)


Re: @safe and null dereferencing

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 27 July 2017 at 17:52:09 UTC, H. S. Teoh wrote:
On Thu, Jul 27, 2017 at 11:03:02AM -0400, Steven Schveighoffer 
via Digitalmars-d wrote: [...]
However, there do exist places where dereferencing null may 
NOT cause a segmentation fault. For example, see this post by 
Moritz Maxeiner: 
https://forum.dlang.org/post/udkdqogtrvanhbotd...@forum.dlang.org


In such cases, the compiled program can have no knowledge that 
the zero page is mapped somehow. There is no way to prevent 
it, or guarantee it during compilation.

[...]

There is one flaw with Moritz's example: if the zero page is 
mapped somehow, that means 0 is potentially a valid address of 
a variable, and therefore checking for null is basically not 
only useless but wrong: a null check of the address of this 
variable will fail, yet the pointer is actually pointing at a 
valid address that just happens to be 0.  IOW, if the zero page 
is mapped, we're *already* screwed anyway, might as well just 
give up now.


The point of the example was to show that exploiting the "null 
dereferences segfault" assumption on a modern Linux system to 
create completely unexpected behaviour (in the case I showed 
fgetc is going to make the process hang -> denial of service with 
hard to detect cause) and break any D program's @safe correctness 
is almost trivial.
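
For readers without the linked post at hand, a rough sketch of the kind of 
setup being described. This is an approximation, not Moritz's original 
example: it assumes Linux and a vm.mmap_min_addr setting that permits mapping 
page zero (normally root-only), and it is deliberately @system.

---
import core.sys.posix.sys.mman;

void main()
{
    // Ask for a page at address 0; on stock Linux this fails unless
    // vm.mmap_min_addr allows it.
    auto p = mmap(cast(void*) 0, 4096, PROT_READ | PROT_WRITE,
                  MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
    if (p is MAP_FAILED)
        return;

    int* q = null;
    *q = 42;   // a "null dereference" that no longer segfaults
}
---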


Re: It makes me sick!

2017-07-27 Thread Ali Çehreli via Digitalmars-d-learn

On 07/27/2017 12:24 PM, Steven Schveighoffer wrote:

On 7/27/17 3:00 PM, Ali Çehreli wrote:

On 07/27/2017 11:54 AM, Jonathan M Davis via Digitalmars-d-learn wrote:

 > You ended up with two versions of std.datetime. One was the module,
and the
 > other was the package.

I don't know how many people install from the zip file but I think the
zip file should include a datetime.d file with the following statement
in it:

static assert(false, "std.datetime is now a package; please remove
this file");


If std/datetime.d is there, how does one import std/datetime/package.d?

-Steve


Currently not possible. (Thank you for the bug report. :) ) I tried to 
find a band aid to the issue. Otherwise, I agree that the compiler 
should issue an error.


Ali



Re: It makes me sick!

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/27/17 3:00 PM, Ali Çehreli wrote:

On 07/27/2017 11:54 AM, Jonathan M Davis via Digitalmars-d-learn wrote:

 > You ended up with two versions of std.datetime. One was the module, 
and the

 > other was the package.

I don't know how many people install from the zip file but I think the 
zip file should include a datetime.d file with the following statement 
in it:


static assert(false, "std.datetime is now a package; please remove this 
file");


If std/datetime.d is there, how does one import std/datetime/package.d?

-Steve


Re: @safe and null dereferencing

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 2:46 PM, Jonathan M Davis via Digitalmars-d wrote:


However, one issue that has been brought up from time to time and AFAIK has
never really been addressed is that apparently if an object is large enough,
when you access one of its members when the pointer is null, you won't get a
segfault (I think that it was something like if the object was greater than
a page in size). So, as I understand it, ludicrously large objects _could_
result in @safety problems with null pointers. This would not happen in
normal code, but it can happen. And if we want @safe to make the guarantees
that it claims, we really should either disallow such objects or insert null
checks for them. For smaller objects though, what's the point? It buys us
nothing if the hardware is already doing it, and the only hardware that
wouldn't do it should be too old to matter at this point.



Yes: https://issues.dlang.org/show_bug.cgi?id=5176

There is a way to "fix" this: any time you access an object field that 
goes outside the page size, do a null check on the base pointer.
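
A rough illustration of the case Jonathan describes; the struct name and 
sizes are invented for the example, and the actual threshold depends on the 
platform's page size and how much address space past 0 is left unmapped.

---
struct Huge
{
    ubyte[1024 * 1024] padding;   // much larger than a 4 KiB page
    int field;
}

void poke(Huge* p) @safe
{
    // With p == null this writes near address 1 MiB, far past the guard
    // page(s) at 0, so whether it faults depends on what is mapped there.
    p.field = 42;
}
---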


-Steve


Problem with dtor behavior

2017-07-27 Thread SrMordred via Digitalmars-d-learn

//D-CODE
import core.stdc.stdlib : malloc;
import std.stdio : writeln;

struct MyStruct{
    int id;
    this(int id){
        writeln("ctor");
    }
    ~this(){
        writeln("dtor");
    }
}

MyStruct* obj;
void push(T)(auto ref T value){
    obj[0] = value;
}

void main()
{
    obj = cast(MyStruct*)malloc( MyStruct.sizeof );
    push(MyStruct(1));
}

OUTPUT:
ctor
dtor
dtor


//C++ CODE
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;
void writeln(string s){ cout << s << '\n'; }

struct MyStruct{
    int id;
    MyStruct(int id){
        writeln("ctor");
    }
    ~MyStruct(){
        writeln("dtor");
    }
};

MyStruct* obj;
template<typename T>
void push(T&& value){
    obj[0] = value;
}


int main()
{
    obj = (MyStruct*)malloc( sizeof(MyStruct) );
    push(MyStruct(1));
    return 0;
}

OUTPUT:
ctor
dtor


I didn't expect to see two dtors in D (this destroys any attempt 
to free resources properly in the destructor).
Can someone explain why this is happening and how to achieve the 
same behavior as C++?

Thanks :)


Re: It makes me sick!

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/27/17 2:54 PM, Jonathan M Davis via Digitalmars-d-learn wrote:


You ended up with two versions of std.datetime. One was the module, and the
other was the package. importing std.datetime could have imported either of
them. dmd _should_ generate an error in that case, but I don't know if it
does or not.


It does not (obviously, we are discussing it :)

https://issues.dlang.org/show_bug.cgi?id=17699

-Steve


Re: It makes me sick!

2017-07-27 Thread Ali Çehreli via Digitalmars-d-learn

On 07/27/2017 11:54 AM, Jonathan M Davis via Digitalmars-d-learn wrote:

> You ended up with two versions of std.datetime. One was the module, 
and the

> other was the package.

I don't know how many people install from the zip file but I think the 
zip file should include a datetime.d file with the following statement 
in it:


static assert(false, "std.datetime is now a package; please remove this 
file");


Of course the problem is, the user would have to remove the file every 
time they unzipped potentially a later release. :/ We need more 
prominent information about the change I guess.


Ali



[Issue 17699] New: Importing a module that has both modulename.d and modulename/package.d should be an error

2017-07-27 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=17699

  Issue ID: 17699
   Summary: Importing a module that has both modulename.d and
modulename/package.d should be an error
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: schvei...@yahoo.com

This has happened to several people, and can be reproduced easily:

extract dmd 2.074.1
extract dmd 2.075.0 over it.

Write some code that imports std.datetime.

result: linker errors.

What is happening is 2.075 split std.datetime into a std.datetime package. This
creates a situation where both std/datetime.d and std/datetime/package.d exist.
The compiler imports the former and ignores the latter.

However, the library is built with only the package file, so the symbol
mangling is completely different for the new version.

I propose that if the compiler sees such a situation, it should throw an
ambiguity error instead of picking one or the other. This would at least
prevent such head-scratching errors.

--


Re: D easily overlooked?

2017-07-27 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 26 July 2017 at 22:18:37 UTC, kinke wrote:
My point was improving vs. complaining. Both take some analysis 
to figure out an issue, but then some people step up and try to 
help improving things and some just let out their frustration, 
wondering why no one has been working on that particular 
oh-so-obvious thing, and possibly drop out, like all the 
like-minded guys before them.


Yeah, providing details about what failed so that others can 
improve on it if they want to is of course a sensible thing to 
do, but I think most of such "complaints" could be avoided by 
being up-front explicit about what a tool does well and what it 
doesn't do well.


If I use a product and it breaks in an unexpected way I assume 
that not only that part is broken but that a lot of other parts 
are broken as well, i.e. I would assume it was alpha quality. And 
then the provider would be better off by marking it as such.




Re: It makes me sick!

2017-07-27 Thread Ali Çehreli via Digitalmars-d-learn

On 07/27/2017 11:47 AM, Adam D. Ruppe wrote:

On Thursday, 27 July 2017 at 18:35:02 UTC, FoxyBrown wrote:

But the issue was about missing symbols, not anything "extra". If
datatime.d is there but nothing is using it, why should it matter?


YOU were using it with an `import std.datetime;` line. With the file
still there, it sees it referenced from your code and loads the file...
but since it is no longer used upstream, the .lib doesn't contain it and
thus missing symbol.



So, the actual problem is that given both

  datetime/package.d and
  datetime.d,

the import statement prefers the file. It could produce a compilation error.

If we don't want that extra check by the compiler, it would be better to 
keep datetime.d with a warning in it about the change. The warning could 
say "please remove this file". :)


Ali



Re: It makes me sick!

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/27/17 2:35 PM, FoxyBrown wrote:

On Thursday, 27 July 2017 at 18:14:52 UTC, Steven Schveighoffer wrote:

On 7/27/17 1:58 PM, FoxyBrown wrote:

On Thursday, 27 July 2017 at 12:23:52 UTC, Jonathan M Davis wrote:
On Wednesday, July 26, 2017 22:29:00 Ali Çehreli via 
Digitalmars-d-learn wrote:

On 07/26/2017 09:20 PM, FoxyBrown wrote:
 >> Somebody else had the same problem which they solved by removing
 >>
 >> "entire dmd":
 >> http://forum.dlang.org/thread/ejybuwermnentslcy...@forum.dlang.org
 >>
 >> Ali
 >
 > Thanks, that was it. So I guess I have to delete the original 
dmd2 dir

 > before I install each time... didn't use to have to do that.

Normally, it shouldn't be necessary. The splitting of the datetime 
package[1] had this effect but I'm not sure why the installation 
process can't take care of it.


Ali

[1] http://dlang.org/changelog/2.075.0.html#split-std-datetime


It _should_ take care of it. The fact that multiple people have run 
into this problem and that the solution was to remove dmd and then 
reinstall it implies that there's a bug in the installer.


- Jonathan M Davis


I do not use the installer, I use the zip file. I assumed that 
everything would be overwritten and any old stuff would simply go 
unused.. but it seems it doesn't. If the other person used the 
installer then it is a problem with dmd itself not designed properly 
and using files that it shouldn't. I simply unzip the zip file in to 
the dmd2 dir and replace sc.ini... that has been my MO for since I've 
been trying out dmd2 and only recently has it had a problem.


If you extracted the zip file over the original install, then it 
didn't get rid of std/datetime.d (as extracting a zipfile doesn't 
remove items that exist on the current filesystem but aren't in the 
zipfile). So I can totally see this happening.


I don't know of a good way to solve this except to tell people, don't 
do that.


But the issue was about missing symbols, not anything "extra". If 
datatime.d is there but nothing is using it, why should it matter? Why 
would it have any effect on the compilation process and create errors 
with D telling me something is being used that isn't?


dmd shouldn't be picking up extraneous and non-connected files just for 
the fun of it.


They aren't non-connected. If you import std.datetime, the compiler is 
first going to look for std/datetime.d. Not finding that, it will look 
for std/datetime/package.d.


The latter is what is supported and built into the library for 2.075. 
The former is a ghost of the original installation, but it's what *your* 
code is importing. You might not even import std.datetime, but rather 
something else that imports it. Either way, the compiler generates the 
wrong mangled names, and they don't match up with the ones in the library.


-Steve


Re: It makes me sick!

2017-07-27 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, July 27, 2017 18:35:02 FoxyBrown via Digitalmars-d-learn wrote:
> On Thursday, 27 July 2017 at 18:14:52 UTC, Steven Schveighoffer
>
> wrote:
> > On 7/27/17 1:58 PM, FoxyBrown wrote:
> >> On Thursday, 27 July 2017 at 12:23:52 UTC, Jonathan M Davis
> >>
> >> wrote:
> >>> On Wednesday, July 26, 2017 22:29:00 Ali Çehreli via
> >>>
> >>> Digitalmars-d-learn wrote:
>  On 07/26/2017 09:20 PM, FoxyBrown wrote:
>   >> Somebody else had the same problem which they solved by
> 
>  removing
> 
>   >> "entire dmd":
>  http://forum.dlang.org/thread/ejybuwermnentslcy...@forum.dlang.org
> 
>   >> Ali
>   >
>   > Thanks, that was it. So I guess I have to delete the
> 
>  original dmd2 dir
> 
>   > before I install each time... didn't use to have to do
> 
>  that.
> 
>  Normally, it shouldn't be necessary. The splitting of the
>  datetime package[1] had this effect but I'm not sure why the
>  installation process can't take care of it.
> 
>  Ali
> 
>  [1]
>  http://dlang.org/changelog/2.075.0.html#split-std-datetime
> >>>
> >>> It _should_ take care of it. The fact that multiple people
> >>> have run into this problem and that the solution was to
> >>> remove dmd and then reinstall it implies that there's a bug
> >>> in the installer.
> >>>
> >>> - Jonathan M Davis
> >>
> >> I do not use the installer, I use the zip file. I assumed that
> >> everything would be overwritten and any old stuff would simply
> >> go unused.. but it seems it doesn't. If the other person used
> >> the installer then it is a problem with dmd itself not
> >> designed properly and using files that it shouldn't. I simply
> >> unzip the zip file in to the dmd2 dir and replace sc.ini...
> >> that has been my MO for since I've been trying out dmd2 and
> >> only recently has it had a problem.
> >
> > If you extracted the zip file over the original install, then
> > it didn't get rid of std/datetime.d (as extracting a zipfile
> > doesn't remove items that exist on the current filesystem but
> > aren't in the zipfile). So I can totally see this happening.
> >
> > I don't know of a good way to solve this except to tell people,
> > don't do that.
> >
> > -Steve
>
> But the issue was about missing symbols, not anything "extra". If
> datatime.d is there but nothing is using it, why should it
> matter? Why would it have any effect on the compilation process
> and create errors with D telling me something is being used that
> isn't?
>
> dmd shouldn't be picking up extraneous and non-connected files
> just for the fun of it.
>
> Basically, if no "references" escape out side of the D ecosystem,
> then there shouldn't be a problem.

You ended up with two versions of std.datetime. One was the module, and the
other was the package. importing std.datetime could have imported either of
them. dmd _should_ generate an error in that case, but I don't know if it
does or not. And depending on what you were doing, if you were dealing with
previously generated object files rather than fully building your project
from scratch, they would have depended on symbols that did not exist
anymore, because they were moved to other modules. And in that case, dmd
would not have generated an error about conflicting symbols, because the
code that was using the symbols had already been compiled. It would have
just complained about the missing symbols - which is what you saw.

If you'd just make sure that you uninstall the previous version before
installing the new one, you wouldn't have to worry about any such problems.
The installer would take care of that for you, but if you want to use the
zip file, then you're going to have to do it manually, and deleting the
directory and then unzipping instead of just unzipping on top of it would
take you less time than you've spent complaining about how it should have
worked.

- Jonathan M Davis




Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread rjframe via Digitalmars-d
On Thu, 27 Jul 2017 14:44:23 +, Mike Parker wrote:

> DIP 1012 is titled "Attributes".
> 
> https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md


1. I would like to see consistency; I'd rather see @nogc and @gc than @nogc 
and @core.attributes.[whatever].gc, so all these attributes should be 
aliased alike.

2. I don't really understand the need for this. The two times I wanted a 
whole module @safe but failed (so note I'm speaking from a lack of 
experience), I just placed the one or two functions above the attribute.

Though mixing combinations of attributes on functions would be greatly 
simplified by this proposal, I personally don't know how common that is, 
so I don't know the value in the proposal.

3. I don't like @inferred. If I'm going to call a function, I need to know 
whether I can call it from a @safe/@nogc/@whatever function. I can't 
imagine trying to work with Phobos (or any other library) if it documented 
@inferred everywhere. Unless I've missed the point.


Re: It makes me sick!

2017-07-27 Thread Adam D. Ruppe via Digitalmars-d-learn

On Thursday, 27 July 2017 at 18:47:57 UTC, Adam D. Ruppe wrote:

YOU were using it with an `import std.datetime;` line


Of course, it is also possible that import was through a 
dependency of something you used, possibly including the standard 
library.




Re: @safe and null dereferencing

2017-07-27 Thread Jonathan M Davis via Digitalmars-d
On Thursday, July 27, 2017 11:03:02 Steven Schveighoffer via Digitalmars-d 
wrote:
> A possibility:
>
> "@safe D does not support platforms or processes where dereferencing a
> null pointer does not crash the program. In such situations,
> dereferencing null is not defined, and @safe code will not prevent this
> from happening."
>
> In terms of not marking C/C++ code safe, I am not convinced we need to
> go that far, but it's not as horrible a prospect as having to unmark D
> @safe code that might dereference null.

I see no problem whatsoever requiring that the platform segfaults when you
dereference null. Anything even vaguely modern will do that. Adding extra
null checks is therefore redundant and complicates the compiler for no gain
whatsoever.

However, one issue that has been brought up from time to time and AFAIK has
never really been addressed is that apparently if an object is large enough,
when you access one of its members when the pointer is null, you won't get a
segfault (I think that it was something like if the object was greater than
a page in size). So, as I understand it, ludicrously large objects _could_
result in @safety problems with null pointers. This would not happen in
normal code, but it can happen. And if we want @safe to make the guarantees
that it claims, we really should either disallow such objects or insert null
checks for them. For smaller objects though, what's the point? It buys us
nothing if the hardware is already doing it, and the only hardware that
wouldn't do it should be too old to matter at this point.

So, I say that we need to deal with the problem with ludicrously large
objects, but beyond that, we should just change the spec, because inserting
the checks buys us nothing.

- Jonathan M Davis



Re: It makes me sick!

2017-07-27 Thread FoxyBrown via Digitalmars-d-learn
On Thursday, 27 July 2017 at 18:14:52 UTC, Steven Schveighoffer 
wrote:

On 7/27/17 1:58 PM, FoxyBrown wrote:
On Thursday, 27 July 2017 at 12:23:52 UTC, Jonathan M Davis 
wrote:
On Wednesday, July 26, 2017 22:29:00 Ali Çehreli via 
Digitalmars-d-learn wrote:

On 07/26/2017 09:20 PM, FoxyBrown wrote:
 >> Somebody else had the same problem which they solved by 
removing

 >>
 >> "entire dmd":
 >> 
http://forum.dlang.org/thread/ejybuwermnentslcy...@forum.dlang.org

 >>
 >> Ali
 >
 > Thanks, that was it. So I guess I have to delete the 
original dmd2 dir
 > before I install each time... didn't use to have to do 
that.


Normally, it shouldn't be necessary. The splitting of the 
datetime package[1] had this effect but I'm not sure why the 
installation process can't take care of it.


Ali

[1] 
http://dlang.org/changelog/2.075.0.html#split-std-datetime


It _should_ take care of it. The fact that multiple people 
have run into this problem and that the solution was to 
remove dmd and then reinstall it implies that there's a bug 
in the installer.


- Jonathan M Davis


I do not use the installer, I use the zip file. I assumed that 
everything would be overwritten and any old stuff would simply 
go unused.. but it seems it doesn't. If the other person used 
the installer then it is a problem with dmd itself not 
designed properly and using files that it shouldn't. I simply 
unzip the zip file in to the dmd2 dir and replace sc.ini... 
that has been my MO for since I've been trying out dmd2 and 
only recently has it had a problem.


If you extracted the zip file over the original install, then 
it didn't get rid of std/datetime.d (as extracting a zipfile 
doesn't remove items that exist on the current filesystem but 
aren't in the zipfile). So I can totally see this happening.


I don't know of a good way to solve this except to tell people, 
don't do that.


-Steve


But the issue was about missing symbols, not anything "extra". If 
datetime.d is there but nothing is using it, why should it 
matter? Why would it have any effect on the compilation process 
and create errors with D telling me something is being used that 
isn't?


dmd shouldn't be picking up extraneous and non-connected files 
just for the fun of it.


Basically, if no "references" escape out side of the D ecosystem, 
then there shouldn't be a problem.






Static initialization of pointers

2017-07-27 Thread Adrian Matoga via Digitalmars-d
The D language specification under "Global and static 
initializers" [1], says the following:



The Initializer for a global or static variable must be
evaluatable at compile time. Whether some pointers can be
initialized with the addresses of other functions or data
is implementation defined. Runtime initialization can be
done with static constructors.


What is the rationale of making this implementation defined? 
What's the range of possible behaviors? Are there any 
circumstances in which a pointer can't be initialized with an 
address of a function or data? If so, couldn't a subset of cases 
have an explicitly defined, portable behavior?


As far as I've tested this, DMD, GDC and LDC can handle static 
initialization of pointers to functions or data (except that LDC 
fails if function pointer(s) are obtained via 
__traits(getUnitTests, module_name)), even across separately 
compiled modules, which is consistent with a similar feature of C 
and C++.
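
For concreteness, a minimal example of the kind of initialization meant 
above; the names are illustrative, and immutable is used for the pointee so 
that the example does not touch the separate question of thread-local 
variables.

---
immutable int value = 42;

int getValue() { return value; }

// Module-level pointers initialized with compile-time address constants.
immutable(int)* dataPtr = &value;    // address of static data
int function() funcPtr = &getValue;  // address of a function

void main()
{
    assert(*dataPtr == 42 && funcPtr() == 42);
}
---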


IIUC, the C standard always allows such initialization. In 6.6 
Constant expressions, N1570 [2] says:



7 More latitude is permitted for constant expressions
in initializers. Such a constant expression shall be,
or evaluate to, one of the following:
— an arithmetic constant expression,
— a null pointer constant,
— an address constant, or
— an address constant for a complete object type
  plus or minus an integer constant expression.


and


9 An address constant is a null pointer, a pointer to an
lvalue designating an object of static storage duration,
or a pointer to a function designator; it shall be created
explicitly using the unary & operator or an integer constant
cast to pointer type, or implicitly by the use of an
expression of array or function type. The array-subscript []
and member-access . and -> operators, the address & and
indirection * unary operators, and pointer casts may be used
in the creation of an address constant, but the value of an
object shall not be accessed by use of these operators.


[1] https://dlang.org/spec/declaration.html#global_static_init
[2] http://iso-9899.info/n1570.html


Re: It makes me sick!

2017-07-27 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, July 27, 2017 14:14:52 Steven Schveighoffer via Digitalmars-d-
learn wrote:
> On 7/27/17 1:58 PM, FoxyBrown wrote:
> > I do not use the installer, I use the zip file. I assumed that
> > everything would be overwritten and any old stuff would simply go
> > unused.. but it seems it doesn't. If the other person used the installer
> > then it is a problem with dmd itself not designed properly and using
> > files that it shouldn't. I simply unzip the zip file in to the dmd2 dir
> > and replace sc.ini... that has been my MO for since I've been trying out
> > dmd2 and only recently has it had a problem.
>
> If you extracted the zip file over the original install, then it didn't
> get rid of std/datetime.d (as extracting a zipfile doesn't remove items
> that exist on the current filesystem but aren't in the zipfile). So I
> can totally see this happening.
>
> I don't know of a good way to solve this except to tell people, don't do
> that.

Yeah, there are plenty of releases where nothing gets removed, but files do
get removed from time to time, so simply extracting the zip on top of the
old directory will cause problems at least occasionally. Also, in the case
of Linux, an .so is generated with a specific version number in it, so every
release has different files. I don't think that Windows currently has
anything like that, but it could in the future. So, if you want to use the
zip, then you should always remove the old version and not simply overwrite
it.

- Jonathan M Davis



Re: @safe and null dereferencing

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 2:09 PM, Steven Schveighoffer wrote:
there's nothing in the spec to require it. And it does seem apparent 
that we handle this situation.


that we *should* handle this situation.

-Steve


Re: @safe and null dereferencing

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 1:52 PM, H. S. Teoh via Digitalmars-d wrote:

On Thu, Jul 27, 2017 at 11:03:02AM -0400, Steven Schveighoffer via 
Digitalmars-d wrote:
[...]

However, there do exist places where dereferencing null may NOT cause
a segmentation fault. For example, see this post by Moritz Maxeiner:
https://forum.dlang.org/post/udkdqogtrvanhbotd...@forum.dlang.org

In such cases, the compiled program can have no knowledge that the
zero page is mapped somehow. There is no way to prevent it, or
guarantee it during compilation.

[...]

There is one flaw with Moritz's example: if the zero page is mapped
somehow, that means 0 is potentially a valid address of a variable, and
therefore checking for null is basically not only useless but wrong: a
null check of the address of this variable will fail, yet the pointer is
actually pointing at a valid address that just happens to be 0.  IOW, if
the zero page is mapped, we're *already* screwed anyway, might as well
just give up now.


Very true. You wouldn't want to store anything there as any @safe code 
could easily get a pointer to that data at any time!


Either way, the guarantees of @safe go out the window if dereferencing 
null is not a crashing error.


-Steve


Re: It makes me sick!

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/27/17 1:58 PM, FoxyBrown wrote:

On Thursday, 27 July 2017 at 12:23:52 UTC, Jonathan M Davis wrote:
On Wednesday, July 26, 2017 22:29:00 Ali Çehreli via 
Digitalmars-d-learn wrote:

On 07/26/2017 09:20 PM, FoxyBrown wrote:
 >> Somebody else had the same problem which they solved by removing
 >>
 >> "entire dmd":
 >> http://forum.dlang.org/thread/ejybuwermnentslcy...@forum.dlang.org
 >>
 >> Ali
 >
 > Thanks, that was it. So I guess I have to delete the original dmd2 
dir

 > before I install each time... didn't use to have to do that.

Normally, it shouldn't be necessary. The splitting of the datetime 
package[1] had this effect but I'm not sure why the installation 
process can't take care of it.


Ali

[1] http://dlang.org/changelog/2.075.0.html#split-std-datetime


It _should_ take care of it. The fact that multiple people have run 
into this problem and that the solution was to remove dmd and then 
reinstall it implies that there's a bug in the installer.


- Jonathan M Davis


I do not use the installer, I use the zip file. I assumed that 
everything would be overwritten and any old stuff would simply go 
unused.. but it seems it doesn't. If the other person used the installer 
then it is a problem with dmd itself not designed properly and using 
files that it shouldn't. I simply unzip the zip file in to the dmd2 dir 
and replace sc.ini... that has been my MO for since I've been trying out 
dmd2 and only recently has it had a problem.


If you extracted the zip file over the original install, then it didn't 
get rid of std/datetime.d (as extracting a zipfile doesn't remove items 
that exist on the current filesystem but aren't in the zipfile). So I 
can totally see this happening.


I don't know of a good way to solve this except to tell people, don't do 
that.


-Steve


Re: @safe and null dereferencing

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 1:33 PM, Adrian Matoga wrote:


Why can't we just make the compiler insert null checks in @safe code? We 
can afford bounds checking even in @system -O -release. C++ can afford 
null check upon executing an std::function. The pointer would most 
likely be in a register anyway, and the conditional branch would almost 
always not be taken, so the cost of that check would be barely 
measurable. Moreover, the compiler can elide the check e.g. if the 
access via pointer is made in a loop in which the pointer doesn't 
change. And if you prove that this tiny little check ruins performance 
of your code, there's @trusted to help you.


The rationale from Walter has always been that the hardware is already 
doing this for us. I was always under the assumption that D only 
supported environments/systems where this happens. But technically 
there's nothing in the spec to require it. And it does seem apparent 
that we handle this situation.


This question/query is asking whether we should amend the spec with 
(what I think is) Walter's view, or if we should change the compiler to 
insert the checks.


-Steve


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread jmh530 via Digitalmars-d

On Thursday, 27 July 2017 at 17:35:34 UTC, Adrian Matoga wrote:


I don't want to see monsters like 
"@core.attribute.GarbageCollectedness.inferred" as part of any 
declaration, ever.
I agree that the problem is valid, but I don't think adding the 
complexity and verboseness presented in the DIP can solve it.


I think those are only for overriding a module-level @nogc, but 
the DIP should be clearer on this matter. I would assume you can 
import core.attribute to simplify that.


Also, the DIP doesn't provide names for the attribute groups for 
the other ones. I assume GarbageCollectedness is just named that 
for the purpose of the example and is something that could be 
changed. Ideally, it would provide the names for each of the 
different groups as part of the DIP.


Re: @safe and null dereferencing

2017-07-27 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 27, 2017 at 11:03:02AM -0400, Steven Schveighoffer via 
Digitalmars-d wrote:
[...]
> However, there do exist places where dereferencing null may NOT cause
> a segmentation fault. For example, see this post by Moritz Maxeiner:
> https://forum.dlang.org/post/udkdqogtrvanhbotd...@forum.dlang.org
> 
> In such cases, the compiled program can have no knowledge that the
> zero page is mapped somehow. There is no way to prevent it, or
> guarantee it during compilation.
[...]

There is one flaw with Moritz's example: if the zero page is mapped
somehow, that means 0 is potentially a valid address of a variable, and
therefore checking for null is basically not only useless but wrong: a
null check of the address of this variable will fail, yet the pointer is
actually pointing at a valid address that just happens to be 0.  IOW, if
the zero page is mapped, we're *already* screwed anyway, might as well
just give up now.

One workaround for this is to redefine a null pointer as size_t.max
(i.e., all bits set) instead of 0.  It's far less likely for a valid
address to be size_t.max than for 0 to be a valid address in a system
where the zero page is mappable (due to alignment issues, the only
possibility is if you have a ubyte* pointing to data stored at the
address size_t.max, whereas address 0 can be a valid address for any
data type).  However, this will break basically *all* code out there in
C/C++/D land, so I don't see it ever happening in this lifetime.


T

-- 
Those who've learned LaTeX swear by it. Those who are learning LaTeX swear at 
it. -- Pete Bleackley


Re: @safe and null dereferencing

2017-07-27 Thread Adrian Matoga via Digitalmars-d

On Thursday, 27 July 2017 at 17:43:17 UTC, H. S. Teoh wrote:
On Thu, Jul 27, 2017 at 05:33:22PM +, Adrian Matoga via 
Digitalmars-d wrote: [...]
Why can't we just make the compiler insert null checks in 
@safe code?


Because not inserting null checks is a sacred cow we inherited 
from the C/C++ days of POOP (premature optimization oriented 
programming), and we are loathe to slaughter it.  :-P  We 
should seriously take some measurements of this in a large D 
project to determine whether or not inserting null checks 
actually makes a significant difference in performance.


That's exactly what I thought.



Re: It makes me sick!

2017-07-27 Thread FoxyBrown via Digitalmars-d-learn

On Thursday, 27 July 2017 at 12:23:52 UTC, Jonathan M Davis wrote:
On Wednesday, July 26, 2017 22:29:00 Ali Çehreli via 
Digitalmars-d-learn wrote:

On 07/26/2017 09:20 PM, FoxyBrown wrote:
 >> Somebody else had the same problem which they solved by 
removing

 >>
 >> "entire dmd":
 >>   
http://forum.dlang.org/thread/ejybuwermnentslcy...@forum.dlang.org

 >>
 >> Ali
 >
 > Thanks, that was it. So I guess I have to delete the 
original dmd2 dir

 > before I install each time... didn't use to have to do that.

Normally, it shouldn't be necessary. The splitting of the 
datetime package[1] had this effect but I'm not sure why the 
installation process can't take care of it.


Ali

[1] http://dlang.org/changelog/2.075.0.html#split-std-datetime


It _should_ take care of it. The fact that multiple people have 
run into this problem and that the solution was to remove dmd 
and then reinstall it implies that there's a bug in the 
installer.


- Jonathan M Davis


I do not use the installer, I use the zip file. I assumed that 
everything would be overwritten and any old stuff would simply go 
unused.. but it seems it doesn't. If the other person used the 
installer then it is a problem with dmd itself not designed 
properly and using files that it shouldn't. I simply unzip the 
zip file into the dmd2 dir and replace sc.ini... that has been 
my MO since I've been trying out dmd2 and only recently has 
it had a problem.


Re: @safe and null dereferencing

2017-07-27 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 27, 2017 at 05:33:22PM +, Adrian Matoga via Digitalmars-d wrote:
[...]
> Why can't we just make the compiler insert null checks in @safe code?

Because not inserting null checks is a sacred cow we inherited from the
C/C++ days of POOP (premature optimization oriented programming), and we
are loathe to slaughter it.  :-P  We should seriously take some
measurements of this in a large D project to determine whether or not
inserting null checks actually makes a significant difference in
performance.


> We can afford bounds checking even in @system -O -release. C++ can
> afford null check upon executing an std::function. The pointer would
> most likely be in a register anyway, and the conditional branch would
> almost always not be taken, so the cost of that check would be barely
> measurable. Moreover, the compiler can elide the check e.g. if the
> access via pointer is made in a loop in which the pointer doesn't
> change. And if you prove that this tiny little check ruins performance
> of your code, there's @trusted to help you.

The compiler can (and should, if it doesn't already) also propagate
non-nullness (ala VRP) as part of its dataflow analysis, so that once a
pointer has been established to be non-null, all subsequent checks of
that pointer can be elided (until the next assignment to the pointer, of
course).
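
A small hand-written sketch of the elision being described (whether a given 
compiler front end actually performs it today is a separate question):

---
int use(int* p) @safe
{
    if (p is null)
        return 0;
    int a = *p;   // the test above establishes p != null
    int b = *p;   // a flow-aware compiler could elide any inserted check here
    return a + b;
}
---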


T

-- 
Public parking: euphemism for paid parking. -- Flora


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Adrian Matoga via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md

All review-related feedback on and discussion of the DIP should 
occur in this thread. The review period will end at 11:59 PM ET 
on August 10 (3:59 AM GMT August 11), or when I make a post 
declaring it complete.


At the end of Round 1, if further review is deemed necessary, 
the DIP will be scheduled for another round. Otherwise, it will 
be queued for the formal review and evaluation by the language 
authors.


Thanks in advance to all who participate.

Destroy!


I don't want to see monsters like 
"@core.attribute.GarbageCollectedness.inferred" as part of any 
declaration, ever.
I agree that the problem is valid, but I don't think adding the 
complexity and verboseness presented in the DIP can solve it.


Re: @safe and null dereferencing

2017-07-27 Thread Adrian Matoga via Digitalmars-d
On Thursday, 27 July 2017 at 15:03:02 UTC, Steven Schveighoffer 
wrote:
Inside the thread for adding @safe/@trusted attributes to OS 
functions, it has come to light that @safe has conflicting 
rules.


For the definition of safe, it says:

"Safe functions are functions that are statically checked to 
exhibit no possibility of undefined behavior."


In the definition of @trusted, it says:

"Trusted functions are guaranteed by the programmer to not 
exhibit any undefined behavior if called by a safe function."


Yet, safe functions allow dereferencing of null pointers. 
Example:


void foo() @safe
{
   int *x;
   *x = 5;
}

There are various places on the forum where Walter argues that 
null pointer dereferencing should cause a segmentation fault 
(or crash) and is checked by the hardware/OS. Therefore, 
checking for null pointers before any dereferencing would be a 
waste of cycles.


However, there do exist places where dereferencing null may NOT 
cause a segmentation fault. For example, see this post by 
Moritz Maxeiner: 
https://forum.dlang.org/post/udkdqogtrvanhbotd...@forum.dlang.org


In such cases, the compiled program can have no knowledge that 
the zero page is mapped somehow. There is no way to prevent it, 
or guarantee it during compilation.


It's also worth noting that C/C++ identifies null dereferencing 
as undefined behavior. So if we are being completely pedantic, 
we could say that no C/C++ code could be marked safe if there 
is a possibility that a null pointer would be dereferenced.


The way I see it, we have 2 options. First, we can disallow 
null pointer dereferencing in @safe code. This would be hugely 
disruptive. We may not have to instrument all @safe code with 
null checks, we could do it with flow analysis, and assuming 
that all pointers passed into a @safe function are not null. 
But it would likely disallow a lot of existing @safe code.


The other option is to explicitly state what happens in such 
cases. I would opt for this second option, as the likelihood of 
these situations is very low.


If we were to update the spec to take this into account, how 
would it look?


A possibility:

"@safe D does not support platforms or processes where 
dereferencing a null pointer does not crash the program. In 
such situations, dereferencing null is not defined, and @safe 
code will not prevent this from happening."


In terms of not marking C/C++ code safe, I am not convinced we 
need to go that far, but it's not as horrible a prospect as 
having to unmark D @safe code that might dereference null.


Thoughts?

-Steve


Why can't we just make the compiler insert null checks in @safe 
code? We can afford bounds checking even in @system -O -release. 
C++ can afford null check upon executing an std::function. The 
pointer would most likely be in a register anyway, and the 
conditional branch would almost always not be taken, so the cost 
of that check would be barely measurable. Moreover, the compiler 
can elide the check e.g. if the access via pointer is made in a 
loop in which the pointer doesn't change. And if you prove that 
this tiny little check ruins performance of your code, there's 
@trusted to help you.
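
As a hedged illustration of what such inserted checks would amount to, here 
is a hand-written equivalent; deref is a made-up helper, not an existing 
language feature, and a compiler would emit the guard inline rather than call 
a function.

---
// Each @safe dereference *p would behave roughly like deref(p),
// which halts deterministically on null instead of relying on a segfault.
ref T deref(T)(T* p) @safe
{
    if (p is null)
        assert(0, "null pointer dereference");  // kept even with -release
    return *p;
}

void demo() @safe
{
    int* p = new int;
    deref(p) = 7;
    assert(deref(p) == 7);
}
---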




[Issue 17697] Ddoc: get rid of `_` detection in URLs

2017-07-27 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=17697

--- Comment #2 from Vladimir Panteleev  ---
(In reply to Adam D. Ruppe from comment #1)
> The correct solution is to get rid of the utterly counterproductive
> identifier highlighting entirely, then remove the useless _ suppression
> entirely.

We only need to restrict it to $(D ...) or `backtick` blocks, as any
identifiers that would benefit from being highlighted would need to be there
anyway.

--


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread ketmar via Digitalmars-d

Mike Parker wrote:


DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md

All review-related feedback on and discussion of the DIP should occur in 
this thread. The review period will end at 11:59 PM ET on August 10 (3:59 
AM GMT August 11), or when I make a post declaring it complete.


At the end of Round 1, if further review is deemed necessary, the DIP 
will be scheduled for another round. Otherwise, it will be queued for the 
formal review and evaluation by the language authors.


Thanks in advance to all who participate.

Destroy!


didn't get the rationale of the DIP at all. the only important case -- 
attribute cancellation -- is either missing, or so well-hidden that i didn't 
find it (except a brief mention). everything else looks like astronautical 
complexity for the sake of having some "abstract good" (that is, for all my 
years of using D as the only language i'm writing code in, i never had 
any need to "group defaults" or something -- only to selectively cancel attrs).


tl;dr: ketmar absolutely didn't get what this DIP is about.


Re: Hacking the compiler: Get Scope after some line of function

2017-07-27 Thread unDEFER via Digitalmars-d-learn

On Thursday, 27 July 2017 at 11:59:51 UTC, unDEFER wrote:

So how to get scope e.g. after line "B b;"?


I have found it. For the symbols from the declarations to be found 
in the scopes, you must iterate over the declarations 
(DeclarationExp) and add the symbols with sc.insert(decexp.declaration);


Re: Profiling after exit()

2017-07-27 Thread Mario Kröplin via Digitalmars-d-learn

On Thursday, 27 July 2017 at 14:44:31 UTC, Temtaime wrote:
Also there was an issue that profiling doesn't work with 
multi-threaded apps and leads to a crash.

Don't know if it is fixed.


Was fixed two years ago:
http://forum.dlang.org/post/mia2kf$djb$1...@digitalmars.com


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Olivier FAURE via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md


This DIP proposes a very complex change (treating attributes as 
Enums), but doesn't really provide a rationale for these changes.


The DIP's written rationale is fairly short, and only mentions 
"We need a way to conveniently change default values for 
attributes" which I feel doesn't really justifies these complex 
new semantics.


Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread jmh530 via Digitalmars-d

On Thursday, 27 July 2017 at 14:58:22 UTC, Atila Neves wrote:



_Why_ it works like that I have no idea.



I thought that the attributes were just using the same behavior 
as public/private/etc.


Anyway, isn't that the same type of behavior this DIP is 
suggesting? There is an @nogc module foo; example in the DIP that 
has a gc function included and doesn't say anything about it 
being an error.


The DIP has a list of attributes not encompassed, but there are 
missing attributes from [1]. For instance, the visibility 
attributes are not encompassed, but that is not mentioned. In 
this case, they are grouped and have a default (public) and an 
opposite (private). However, it would break a lot of code to 
force them to use @. Might be useful to mention why not included.


https://dlang.org/spec/attribute.html




Re: Profiling after exit()

2017-07-27 Thread Eugene Wissner via Digitalmars-d-learn

On Thursday, 27 July 2017 at 14:52:18 UTC, Stefan Koch wrote:

On Thursday, 27 July 2017 at 14:30:33 UTC, Eugene Wissner wrote:
I have a multi-threaded application, whose threads normally 
run forever. But I need to profile this program, so I compile 
the code with -profile, send a SIGTERM and call exit(0) from 
my signal handler to exit the program. The problem is that I 
get the profiling information only from the main thread, but 
not from the other ones.


[...]


You will need to run it single-threaded if you want to use the 
built-in profiler.


Are there profilers that work well with dmd? valgrind? OProfile?


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d
On Thursday, 27 July 2017 at 14:45:03 UTC, Steven Schveighoffer 
wrote:

On 7/27/17 10:20 AM, Moritz Maxeiner wrote:
On Thursday, 27 July 2017 at 13:56:00 UTC, Steven 
Schveighoffer wrote:


I'm fine with saying libraries or platforms that do not 
segfault when accessing zero page are incompatible with @safe 
code.


So we can't have @safe in shared libraries on Linux? Because 
there's no way for the shared lib author to know what programs 
using it are going to do.


You can't guarantee @safe on such processes or systems. The 
compiler has to assume that the scenario you describe doesn't 
happen.


It's not that we can't have @safe because of what someone might 
do, it's that @safe guarantees can only work if you don't do 
such things.


Which essentially means that any library written in @safe D 
exposing a C API needs to write in big fat red letters "Don't do 
this or you break our safety guarantees".



It is nice to be aware of these possibilities, since they could 
be an effective attack on D @safe code.


Well, yeah, that's the consequence of @safe correctness depending 
on UB always resulting in a crash.


@safe and null dereferencing

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d
Inside the thread for adding @safe/@trusted attributes to OS functions, 
it has come to light that @safe has conflicting rules.


For the definition of safe, it says:

"Safe functions are functions that are statically checked to exhibit no 
possibility of undefined behavior."


In the definition of @trusted, it says:

"Trusted functions are guaranteed by the programmer to not exhibit any 
undefined behavior if called by a safe function."


Yet, safe functions allow dereferencing of null pointers. Example:

void foo() @safe
{
    int* x;     // x is null by default
    *x = 5;     // a null dereference, yet accepted as @safe
}

There are various places on the forum where Walter argues that null 
pointer dereferencing should cause a segmentation fault (or crash) and 
is checked by the hardware/OS. Therefore, checking for null pointers 
before any dereferencing would be a waste of cycles.


However, there do exist places where dereferencing null may NOT cause a 
segmentation fault. For example, see this post by Moritz Maxeiner: 
https://forum.dlang.org/post/udkdqogtrvanhbotd...@forum.dlang.org


In such cases, the compiled program can have no knowledge that the zero 
page is mapped somehow. There is no way to prevent it, or guarantee it 
during compilation.


It's also worth noting that C/C++ identifies null dereferencing as 
undefined behavior. So if we are being completely pedantic, we could say 
that no C/C++ code could be marked safe if there is a possibility that a 
null pointer would be dereferenced.


The way I see it, we have two options. First, we can disallow null 
pointer dereferencing in @safe code. This would be hugely disruptive. 
We may not have to instrument all @safe code with null checks; we could 
do it with flow analysis and by assuming that all pointers passed into 
a @safe function are not null. But it would likely disallow a lot of 
existing @safe code.
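
To make the first option concrete, here is a minimal sketch of what such 
instrumentation could look like if written by hand (the helper 
`checkedStore` is hypothetical, not something the compiler emits today):

```d
// Hypothetical illustration of "option 1": lower a @safe pointer write
// into a checked store that halts deterministically instead of relying
// on a hardware segfault.
void checkedStore(int* p, int value) @safe
{
    if (p is null)
        assert(0, "null dereference in @safe code");
    *p = value;
}

void foo() @safe
{
    int* x;
    checkedStore(x, 5); // halts via assert(0) rather than segfaulting
}
```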


The other option is to explicitly state what happens in such cases. I 
would opt for this second option, as the likelihood of these situations 
is very low.


If we were to update the spec to take this into account, how would it look?

A possibility:

"@safe D does not support platforms or processes where dereferencing a 
null pointer does not crash the program. In such situations, 
dereferencing null is not defined, and @safe code will not prevent this 
from happening."


In terms of not marking C/C++ code safe, I am not convinced we need to 
go that far, but it's not as horrible a prospect as having to unmark D 
@safe code that might dereference null.


Thoughts?

-Steve


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Patrick Schluter via Digitalmars-d
On Thursday, 27 July 2017 at 11:46:24 UTC, Steven Schveighoffer 
wrote:

On 7/27/17 2:48 AM, Jacob Carlborg wrote:
And then the compiler runs the "Dead Code Elimination" pass 
and we're left with:


void contains_null_check(int* p)
{
 *p = 4;
}


So the result is that it will segfault. I don't see a problem 
with this. It's what I would have expected.


Except that that code was used in the Linux kernel where page 0 
was mapped and thus de-referencing the pointer did not segfault.
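
(For context, the pre-optimization function in that well-known example 
presumably looked roughly like the sketch below; it is reconstructed from 
memory and transliterated to D for readability, not quoted from the 
earlier message, and the reasoning in the comments describes the C 
compilers being discussed, not DMD.)

```d
void contains_null_check(int* p)
{
    int dead = *p;   // the pointer is dereferenced before the null check
    if (p is null)   // a C compiler may treat this as always false,
        return;      // since the earlier dereference would be UB for a null p,
    *p = 4;          // and after dead code elimination only this line remains
}
```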


The issue missed here is the purpose for which the compiler is 
used: will the code always run in a hosted environment, or is it 
used in a freestanding implementation (kernels and embedded 
work)? The C standard distinguishes between the two, but the 
compiler gurus apparently do not care.
As for D, Walter's list of constraints for a D compiler makes 
it, imho, impossible to use the language on smaller embedded 
platforms or in ring 0 mode on x86.
That's why calling D a systems language seems somewhat 
disingenuous; calling it an application language would be truer.




Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Atila Neves via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md

All review-related feedback on and discussion of the DIP should 
occur in this thread. The review period will end at 11:59 PM ET 
on August 10 (3:59 AM GMT August 11), or when I make a post 
declaring it complete.


At the end of Round 1, if further review is deemed necessary, 
the DIP will be scheduled for another round. Otherwise, it will 
be queued for the formal review and evaluation by the language 
authors.


Thanks in advance to all who participate.

Destroy!


"at the top of a file means that one can never "undo" those 
attributes"


That's not true for `@safe`. This is perfectly legal:

@safe:

void foo() { ... }     // foo is @safe
void bar() @system { } // bar is @system


_Why_ it works like that I have no idea.

Atila


Re: Profiling after exit()

2017-07-27 Thread Stefan Koch via Digitalmars-d-learn

On Thursday, 27 July 2017 at 14:30:33 UTC, Eugene Wissner wrote:
I have a multi-threaded application, whose threads normally run 
forever. But I need to profile this program, so I compile the 
code with -profile, send a SIGTERM and call exit(0) from my 
signal handler to exit the program. The problem is that I get 
the profiling information only from the main thread, but not 
from the other ones.


[...]


You will need to run it single-threaded if you want to use the 
built-in profiler.


DIP 1012--Attributes--Preliminary Review Round 1 Begins

2017-07-27 Thread Mike Parker via Digitalmars-d-announce
The first preliminary review round of DIP 1012, "Attributes", has 
begun.


http://forum.dlang.org/thread/rqebssbxgrchphyur...@forum.dlang.org

Two reminders:

The first preliminary round for DIP 1011 ends in ~24 hours.

http://forum.dlang.org/thread/topmfucguenqpucsb...@forum.dlang.org

The second preliminary round for DIP 1009 finishes at the end of 
next week.


http://forum.dlang.org/thread/luhdbjnsmfomtgpyd...@forum.dlang.org



Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 10:20 AM, Moritz Maxeiner wrote:

On Thursday, 27 July 2017 at 13:56:00 UTC, Steven Schveighoffer wrote:

On 7/27/17 9:24 AM, Moritz Maxeiner wrote:

On Wednesday, 26 July 2017 at 01:09:50 UTC, Steven Schveighoffer wrote:
I think we can correctly assume no fclose implementations exist that 
do anything but access data pointed at by stream. Which means a 
segfault on every platform we support.


On platforms that may not segfault, you'd be on your own.

In other words, I think we can assume for any C functions that are 
passed pointers that dereference those pointers, passing null is 
safely going to segfault.


Likewise, because D depends on hardware flagging of dereferencing 
null as a segfault, any platforms that *don't* have that for C also 
won't have it for D. And then @safe doesn't even work in D code either.


As we have good support for different prototypes for different 
platforms, we could potentially unmark those as @trusted in those 
cases.


--- null.d ---
version (linux):

import core.stdc.stdio : FILE;
import core.sys.linux.sys.mman;

extern (C) @safe int fgetc(FILE* stream);

void mmapNull()
{
    void* mmapNull = mmap(null, 4096, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_POPULATE, -1, 0);
    assert (mmapNull == null,
        "Do `echo 0 > /proc/sys/vm/mmap_min_addr` as root");

    *(cast (char*) null) = 'D';
}

void nullDeref() @safe
{
    fgetc(null);
}

void main(string[] args)
{
    mmapNull();
    nullDeref();
}
---

For some fun on Linux, try out
# echo 0 > /proc/sys/vm/mmap_min_addr
$ rdmd null.d

Consider `mmapNull` being run in some third party shared lib you 
don't control.


Again, all these hacks are just messing with the assumptions D is making.


Which aren't in the official D spec (or at the very least I can't seem 
to find them there).


You are right. I have asked Walter to add such an update. I should pull 
that out to its own thread, will do.



You don't need C functions to trigger such problems.


Sure, but it was relevant to the previous discussion.


Right, but what I'm saying is that it's a different argument. We could 
say "you can't mark fgetc @safe", and still have this situation occur.


I'm fine with saying libraries or platforms that do not segfault when 
accessing zero page are incompatible with @safe code.


So we can't have @safe in shared libraries on Linux? Because there's no 
way for the shared lib author to know what programs using it are going 
to do.


You can't guarantee @safe on such processes or systems. The compiler has 
to assume that code like the example you provided doesn't happen.


It's not that we can't have @safe because of what someone might do, it's 
that @safe guarantees can only work if you don't do such things.


It is nice to be aware of these possibilities, since they could be an 
effective attack on D @safe code.


And it's on you not to do this, the compiler will assume the segfault 
will occur.


It's not a promise the author of the D code can (always) make.
In any case, the @trusted and @safe spec need to be explicit about the 
assumptions made.


I agree. The promise only works as well as the environment. @safe is not 
actually safe if it's based on incorrect assumptions.


-Steve


DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Mike Parker via Digitalmars-d

DIP 1012 is titled "Attributes".

https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md

All review-related feedback on and discussion of the DIP should 
occur in this thread. The review period will end at 11:59 PM ET 
on August 10 (3:59 AM GMT August 11), or when I make a post 
declaring it complete.


At the end of Round 1, if further review is deemed necessary, the 
DIP will be scheduled for another round. Otherwise, it will be 
queued for the formal review and evaluation by the language 
authors.


Thanks in advance to all who participate.

Destroy!


Re: Profiling after exit()

2017-07-27 Thread Temtaime via Digitalmars-d-learn
Also there was an issue that profiling doesn't work with 
multi-threaded apps and leads to a crash.

Don't know if it is fixed.


Re: Profiling after exit()

2017-07-27 Thread Temtaime via Digitalmars-d-learn

exit() is not a "normal exit" for D programs, so do not use it.
Your threads should stop at some point so that the app can exit 
successfully.

There's a "join" method; you can use it together with your "done" 
variable.


Profiling after exit()

2017-07-27 Thread Eugene Wissner via Digitalmars-d-learn
I have a multi-threaded application, whose threads normally run 
forever. But I need to profile this program, so I compile the 
code with -profile, send a SIGTERM and call exit(0) from my 
signal handler to exit the program. The problem is that I get the 
profiling information only from the main thread, but not from the 
other ones.


Is there a way to get the profiling information from all threads 
before terminating the program? Maybe some way to finish the 
threads gracefully? or manully call "write trace.log"-function 
for a thread?


Here is a small example that demonstrates the problem:

import core.thread;
import core.time;
import core.stdc.stdlib;

shared bool done = false;

void run()
{
    while (!done)
    {
        foo;
    }
}

void foo()
{
    new Object;
}

void main()
{
    auto thread = new Thread(&run);
    thread.start;
    Thread.sleep(3.seconds);

    exit(0); // Replace with "done = true;" to get the expected behaviour.
}

There is already an issue: 
https://issues.dlang.org/show_bug.cgi?id=971
The hack was to call trace_term() in internal/trace. call_term() 
doesn't exist anymore, so I tried to export the static destructor 
from druntime/src/rt/trace.d with:


extern (C) void _staticDtor449() @nogc nothrow;

(on my system) and call it manually. I get some more information 
this way, but the numbers in the profiling report are still wrong.


[Issue 5176] Limit static object sizes

2017-07-27 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=5176

Steven Schveighoffer  changed:

   What|Removed |Added

 CC||schvei...@yahoo.com

--


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d
On Thursday, 27 July 2017 at 13:56:00 UTC, Steven Schveighoffer 
wrote:

On 7/27/17 9:24 AM, Moritz Maxeiner wrote:
On Wednesday, 26 July 2017 at 01:09:50 UTC, Steven 
Schveighoffer wrote:
I think we can correctly assume no fclose implementations 
exist that do anything but access data pointed at by stream. 
Which means a segfault on every platform we support.


On platforms that may not segfault, you'd be on your own.

In other words, I think we can assume for any C functions 
that are passed pointers that dereference those pointers, 
passing null is safely going to segfault.


Likewise, because D depends on hardware flagging of 
dereferencing null as a segfault, any platforms that *don't* 
have that for C also won't have it for D. And then @safe 
doesn't even work in D code either.


As we have good support for different prototypes for 
different platforms, we could potentially unmark those as 
@trusted in those cases.


--- null.d ---
version (linux):

import core.stdc.stdio : FILE;
import core.sys.linux.sys.mman;

extern (C) @safe int fgetc(FILE* stream);

void mmapNull()
{
    void* mmapNull = mmap(null, 4096, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_POPULATE, -1, 0);
    assert (mmapNull == null,
        "Do `echo 0 > /proc/sys/vm/mmap_min_addr` as root");

    *(cast (char*) null) = 'D';
}

void nullDeref() @safe
{
    fgetc(null);
}

void main(string[] args)
{
    mmapNull();
    nullDeref();
}
---

For some fun on Linux, try out
# echo 0 > /proc/sys/vm/mmap_min_addr
$ rdmd null.d

Consider `mmapNull` being run in some third party shared lib 
you don't control.


Again, all these hacks are just messing with the assumptions D 
is making.


Which aren't in the official D spec (or at the very least I can't 
seem to find them there).



You don't need C functions to trigger such problems.


Sure, but it was relevant to the previous discussion.

I'm fine with saying libraries or platforms that do not 
segfault when accessing zero page are incompatible with @safe 
code.


So we can't have @safe in shared libraries on Linux? Because 
there's no way for the shared lib author to know what programs 
using it are going to do.


And it's on you not to do this, the compiler will assume the 
segfault will occur.


It's not a promise the author of the D code can (always) make.
In any case, the @trusted and @safe spec need to be explicit 
about the assumptions made.
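
As an aside, the per-platform prototype idea quoted above might look 
roughly like this hedged sketch, using fgetc as the example (the version 
identifier below is made up for illustration, not an existing compiler 
flag):

```d
// Illustrative only: mark the C prototype @trusted solely on platforms
// where a null dereference is known to fault; leave it @system elsewhere.
import core.stdc.stdio : FILE;

version (NullDerefFaults)   // hypothetical version identifier
    extern (C) @trusted nothrow @nogc int fgetc(FILE* stream);
else
    extern (C) @system nothrow @nogc int fgetc(FILE* stream);
```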


[Issue 5176] Limit static object sizes

2017-07-27 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=5176

ag0ae...@gmail.com changed:

   What|Removed |Added

   Keywords||safe
 CC||ag0ae...@gmail.com

--


Re: Silent error when using hashmap

2017-07-27 Thread Mike Parker via Digitalmars-d

On Thursday, 27 July 2017 at 00:07:39 UTC, FatalCatharsis wrote:

I figured this was the case. WM_NCCREATE is probably sent first 
and the lookup fails. I'm more concerned with why there was no 
exceptions/debug output of any kind.


There are a few things going on with your code. I'll break it 
down one at a time.


1. You can't expect exceptions thrown in a callback called from C 
to be propagated through the C side back into the D side. That 
includes errors. It happens on some platforms, but not all. On 
Windows, it does not (at least, not with DMD -- I can't speak for 
LDC).


2. The error you're getting is because neither WM_CREATE nor 
WM_NCCREATE is the first message sent. You can see this by 
inserting the following into your WndProc:


```
static UINT first;
if(msg == WM_NCCREATE) {
    try {
        if(first == 0) writeln("NCCREATE is first");
        else writeln("First is ", first);
    }
    catch(Exception e) {}
}
else {
    // window = winMap[hwnd];
    if(first == 0) first = msg;
}
```

This prints 36 which, if you look it up (hex code on MSDN, but I 
like the list at WineHQ [1]), turns out to be WM_GETMINMAXINFO. 
That means that when you try to fetch the window instance from 
winMap, there's nothing there. When you try to access a 
non-existent element in an AA, you get a RangeError. Because of 
point 1 above, you're never seeing it and instead are getting a 
crash.


You could try to catch the error and stash it for later, but 
it's better to change the way you access the map:


```
if(auto pwin = hwnd in winMap) window = *pwin;
```

The in operator returns a pointer, so it needs to be dereferenced.
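
If it helps, here is a tiny self-contained illustration of that pattern 
(the names are made up, not taken from your code):

```d
void lookupDemo()
{
    int[string] winMapLike = ["a": 1];

    if (auto p = "a" in winMapLike)
        assert(*p == 1);                 // `in` yields a pointer to the value

    assert(("b" in winMapLike) is null); // missing key: null pointer, no RangeError
}
```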

3. As I mentioned in another post in this thread, you are doing 
the wrong thing with your window reference. In your call to 
CreateWindowEx, you are correctly casting the reference to void*. 
Everywhere else, you're treating it as a pointer. That's wrong! 
To prove it, you can do this: declare an instance of WinThing at 
the top of the file.


```
WinThing testMe;
```

Then, add this in the class constructor *before* the call to 
CreateWindowEx.


```
testMe = this;
```

Finally, where you handle the WM_NCCREATE message, do this:

```
try writeln("window is testMe = ", *window is testMe);
catch(Exception e) {}
```

This will print false. Why? Because the window instance you 
created is a reference. In CreateWindowEx, you've treated it as 
such. But then when you fetch it out of lpCreateParams, you cast 
it to a WinThing*. For that to be correct, you would have to 
change CreateWindowEx to pass a pointer to the reference (i.e. 
cast(void*)). But actually, that's still not correct 
because you're taking the address of a local variable. So the 
correct thing to do is to leave CreateWindowEx as is and change 
every WinThing* to WinThing.


```
WinThing[HWND] winMap;

WinThing window;
window = cast(WinThing)createStruct.lpCreateParams;
```

Note that you don't have to dereference the createStruct pointer 
to access its fields.
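
A quick aside on that last point, as a sketch with made-up names: D has 
no `->`, so member access through a struct pointer uses `.` directly.

```d
struct CreateStructLike
{
    void* lpCreateParams;
}

void demo(CreateStructLike* createStruct, void* expected)
{
    // No explicit dereference (and no ->) is needed to reach the field.
    assert(createStruct.lpCreateParams is expected);
}
```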


Fix these issues and it should compile. One other thing, in 
case you are unaware (not an error, just a matter of style):


```
private static string BASE_CLASS = "BaseClass";
```

There's no reason to make this a static member or to call toUTFz 
when you use it. You can use a manifest constant with a wchar 
literal. Unlike char -> char*, wchar does not implicitly convert 
to wchar*, but you can use the .ptr property.


enum baseClass = "BaseClass"w;
wc.lpszClassName = baseClass.ptr;

[1] https://wiki.winehq.org/List_Of_Windows_Messages



Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 27 July 2017 at 13:45:21 UTC, ag0aep6g wrote:

On 07/27/2017 03:24 PM, Moritz Maxeiner wrote:

--- null.d ---
version (linux):

import core.stdc.stdio : FILE;
import core.sys.linux.sys.mman;

extern (C) @safe int fgetc(FILE* stream);

void mmapNull()
{
    void* mmapNull = mmap(null, 4096, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_POPULATE, -1, 0);
    assert (mmapNull == null,
        "Do `echo 0 > /proc/sys/vm/mmap_min_addr` as root");

    *(cast (char*) null) = 'D';
}

void nullDeref() @safe
{
    fgetc(null);
}

void main(string[] args)
{
    mmapNull();
    nullDeref();
}
---

For some fun on Linux, try out
# echo 0 > /proc/sys/vm/mmap_min_addr
$ rdmd null.d


The gist of this is that Linux can be configured so that null 
can be a valid pointer. Right?


In summation, yes. To be technical about it:
- Linux can be configured so that the bottom page of a process' 
virtual address space is not protected from being mapped to valid 
memory (by default, `mmap_min_addr` is 4096, i.e. the bottom page 
can't be mapped)
- C's `NULL` is in pretty much all implementations (not the C 
spec) defined as the value `0`, which corresponds to the virtual 
address `0` in a process, i.e. lies in the bottom page of the 
process' virtual address space
- The null dereference segmentation fault on Linux stems from the 
fact that the bottom page (which `NULL` maps to) isn't mapped to 
valid memory
- If you map the bottom page of a process' virtual address space 
to valid memory, then accessing it doesn't cause a segmentation 
fault




That seems pretty bad for @safe at large, not only when C 
functions are involved.


Yes:
- In C land, since dereferencing `NULL` is UB by definition, this 
is perfectly valid behaviour
- In D land, because we require `null` dereferences to crash, we 
break @safe with it


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread Steven Schveighoffer via Digitalmars-d

On 7/27/17 9:24 AM, Moritz Maxeiner wrote:

On Wednesday, 26 July 2017 at 01:09:50 UTC, Steven Schveighoffer wrote:
I think we can correctly assume no fclose implementations exist that 
do anything but access data pointed at by stream. Which means a 
segfault on every platform we support.


On platforms that may not segfault, you'd be on your own.

In other words, I think we can assume for any C functions that are 
passed pointers that dereference those pointers, passing null is 
safely going to segfault.


Likewise, because D depends on hardware flagging of dereferencing null 
as a segfault, any platforms that *don't* have that for C also won't 
have it for D. And then @safe doesn't even work in D code either.


As we have good support for different prototypes for different 
platforms, we could potentially unmark those as @trusted in those cases.


--- null.d ---
version (linux):

import core.stdc.stdio : FILE;
import core.sys.linux.sys.mman;

extern (C) @safe int fgetc(FILE* stream);

void mmapNull()
{
    void* mmapNull = mmap(null, 4096, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_POPULATE, -1, 0);
    assert (mmapNull == null,
        "Do `echo 0 > /proc/sys/vm/mmap_min_addr` as root");

    *(cast (char*) null) = 'D';
}

void nullDeref() @safe
{
    fgetc(null);
}

void main(string[] args)
{
    mmapNull();
    nullDeref();
}
---

For some fun on Linux, try out
# echo 0 > /proc/sys/vm/mmap_min_addr
$ rdmd null.d

Consider `mmapNull` being run in some third party shared lib you don't 
control.


Again, all these hacks are just messing with the assumptions D is 
making. You don't need C functions to trigger such problems. I'm fine 
with saying libraries or platforms that do not segfault when accessing 
zero page are incompatible with @safe code. And it's on you not to do 
this, the compiler will assume the segfault will occur.


-Steve


Re: all OS functions should be "nothrow @trusted @nogc"

2017-07-27 Thread ag0aep6g via Digitalmars-d

On 07/27/2017 03:24 PM, Moritz Maxeiner wrote:

--- null.d ---
version (linux):

import core.stdc.stdio : FILE;
import core.sys.linux.sys.mman;

extern (C) @safe int fgetc(FILE* stream);

void mmapNull()
{
 void* mmapNull = mmap(null, 4096, PROT_READ | PROT_WRITE, 
MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_POPULATE, -1, 0);
 assert (mmapNull == null, "Do `echo 0 > /proc/sys/vm/mmap_min_addr` 
as root");

 *(cast (char*) null) = 'D';
}

void nullDeref() @safe
{
 fgetc(null);
}

void main(string[] args)
{
 mmapNull();
 nullDeref();
}
---

For some fun on Linux, try out
# echo 0 > /proc/sys/vm/mmap_min_addr
$ rdmd null.d


The gist of this is that Linux can be configured so that null can be a 
valid pointer. Right?


That seems pretty bad for @safe at large, not only when C functions are 
involved.


Re: Destructors vs. Finalizers

2017-07-27 Thread Guillaume Piolat via Digitalmars-d
For cycles in RC you either do it like other RC systems and break 
cycles manually, or you create a parent owner that owns everything 
pointing to each other in a cycle.
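
A minimal sketch of the parent-owner idea (illustrative names only; this 
shows the ownership shape rather than an actual reference-counting 
implementation):

```d
// The parent holds the only owning references; the children keep plain,
// non-owning back references, so there is no ownership cycle to leak.
class Child
{
    Parent owner;   // back reference, not an owning link
    this(Parent p) { owner = p; }
}

class Parent
{
    Child a, b;     // the only owning references
    this()
    {
        a = new Child(this);
        b = new Child(this);
    }
}

void main()
{
    auto root = new Parent(); // releasing root releases everything it owns
}
```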


On Thursday, 27 July 2017 at 11:43:37 UTC, Steven Schveighoffer 
wrote:

This is an unworkable solution.


Not at all unworkable, it's much easier than mixed strategies.


Forget just once to clean up deterministically, and you corrupt 
memory by using members that are already cleaned up.


Once again, this is the reason for the existence of the 
GC-proof resource class, which has the GC warn you of 
non-deterministic destruction at debug time.


In the very case you mention, it will tell you "you have forgotten 
to release one resource T deterministically".


Or, you disable calling destructors in the GC, and instead leak 
resources.


As I said in previous messages, not all resources can be 
destroyed by the GC: they have to satisfy three different 
constraints.


I'll leave the discussion; you seem to ignore my arguments in 
what looks like an attempt to have the last word.

