Re: wut: std.datetime.systime.Clock.currStdTime is offset from Jan 1st, 1 AD

2018-01-23 Thread drug via Digitalmars-d

24.01.2018 10:25, Jonathan M Davis wrote:


If you need to interact with time_t, there's SysTime.toUnixTime,
SysTime.fromUnixTime, stdTimeToUnixTime, and unixTimeToStdTime - assuming of
course that time_t is unix time. But if it's not, you're kind of screwed in
general with regards to interacting with anything else, since time_t is
technically opaque. It's just _usually_ unix time and most stuff is going to
assume that it is. There's also SysTime.toTM, though tm isn't exactly a fun
data type to deal with if you're looking to convert anything.

But if you care about calendar stuff, using January 1st, 1 A.D. as your
epoch is far cleaner than an arbitrary date like January 1st, 1970. My guess
is that that epoch was originally selected to try and keep the values small
in a time where every bit mattered. It's not a particularly good choice
otherwise, but we've been stuck dealing with it ever since, because that's
what C and C++ continue to use and what OS APIs typically use.

- Jonathan M Davis




I agree with you that 1 A.D. is a better epoch than 1970. IIRC, C++11 by 
default uses 1 ns precision, so even 64 bits are not enough to represent a 
datetime from January 1st, 1 A.D. to the present day.
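A quick check of that arithmetic (a sketch; the constant is the documented value of `unixTimeToStdTime(0)`):

```d
import std.datetime.systime : unixTimeToStdTime;

void main()
{
    // "std time": hnsecs (100 ns units) since midnight, January 1st, 1 A.D.
    long stdTime1970 = unixTimeToStdTime(0); // the Unix epoch in std time
    assert(stdTime1970 == 621_355_968_000_000_000L);

    // The same instant counted in 1 ns units would be 100x larger,
    // which no longer fits in a signed 64-bit integer:
    assert(stdTime1970 > long.max / 100);
}
```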


And by the way, I'd like to thank you for your great work - compared to the 
(at least for me) very inconsistent means C/C++ provide for handling date 
and time, std.datetime is a great pleasure to work with.


Re: The most confusing error message

2018-01-23 Thread Petar via Digitalmars-d
On Wednesday, 24 January 2018 at 07:32:17 UTC, Petar Kirov 
[ZombineDev] wrote:
On Wednesday, 24 January 2018 at 07:21:09 UTC, Shachar Shemesh 
wrote:

test.d(6): Error: struct test.A(int var = 3) is used as a type

Of course it is. That's how structs are used.

Program causing this:
struct A(int var = 3) {
int a;
}

void main() {
A a;
}

To resolve, you need to change A into A!(). For some reason I 
have not been able to fathom, default template parameters on 
structs don't work like they do on functions.


Because IFTI is short for implicit function


(Heh, the Send button is too easy to press on the mobile version 
of the forum.)


Because IFTI is short for implicit function template 
instantiation. We don't have this feature for any other type of 
template, though IIRC it has been discussed before. Though one 
may argue that we have this feature for mixin templates:


https://dlang.org/spec/template-mixin.html
If the TemplateDeclaration has no parameters, the mixin form that 
has no !(TemplateArgumentList) can be used.
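A minimal illustration of that exception for mixin templates (sketch):

```d
mixin template Flag()
{
    bool enabled = true;
}

struct S
{
    mixin Flag; // no !() needed, since the template has no parameters
}

void main()
{
    S s;
    assert(s.enabled);
}
```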


Re: The most confusing error message

2018-01-23 Thread Petar via Digitalmars-d
On Wednesday, 24 January 2018 at 07:21:09 UTC, Shachar Shemesh 
wrote:

test.d(6): Error: struct test.A(int var = 3) is used as a type

Of course it is. That's how structs are used.

Program causing this:
struct A(int var = 3) {
int a;
}

void main() {
A a;
}

To resolve, you need to change A into A!(). For some reason I 
have not been able to fathom, default template parameters on 
structs don't work like they do on functions.


Because IFTI is short for implicit function


Re: wut: std.datetime.systime.Clock.currStdTime is offset from Jan 1st, 1 AD

2018-01-23 Thread Jonathan M Davis via Digitalmars-d
On Wednesday, January 24, 2018 10:05:12 drug via Digitalmars-d wrote:
> 24.01.2018 03:15, Jonathan M Davis wrote:
> > On Tuesday, January 23, 2018 23:27:27 Nathan S. via Digitalmars-d wrote:
> >> https://dlang.org/phobos/std_datetime_systime.html#.Clock.currStdTime
> >> """
> >> @property @trusted long currStdTime(ClockType clockType =
> >> ClockType.normal)();
> >> Returns the number of hnsecs since midnight, January 1st, 1 A.D.
> >> for the current time.
> >> """
> >>
> >> This choice of offset seems Esperanto-like: deliberately chosen
> >> to equally inconvenience every user. Is there any advantage to
> >> this at all on any platform, or is it just pure badness?
> >
> > Your typical user would use Clock.currTime and get a SysTime. The badly
> > named "std time" is the internal representation used by SysTime. Being
> > able to get at it to convert to other time representations can be
> > useful, but most code doesn't need to do anything with it.
> >
> > "std time" is from January 1st 1 A.D. because that's the perfect
> > representation for implementing ISO 8601, which is the standard that
> > std.datetime follows, implementing the proleptic Gregorian calendar
> > (i.e. it assumes that the calendar was always the Gregorian calendar
> > and doesn't do anything with the Julian calendar).
> >
> > https://en.wikipedia.org/wiki/ISO_8601
> > https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar
> >
> > The math is greatly simplified by using January 1st 1 A.D. as the start
> > date and by assuming Gregorian for the whole way.
> >
> > C# does the same thing with its date/time stuff - it even uses
> > hecto-nanoseconds exactly like we do. hnsecs gives you the optimal
> > balance between precision and range that can be gotten with 64 bits (it
> > covers from about 22,000 B.C. to about 22,000 A.D., whereas IIRC, going
> > one decimal place more precise would reduce it to about 200 years in
> > either direction).
> >
> > - Jonathan M Davis
>
> I guess he meant that it's inconvenient when working with C/C++, for
> example having to add/subtract the difference between the C/C++ epoch and D's

If you need to interact with time_t, there's SysTime.toUnixTime,
SysTime.fromUnixTime, stdTimeToUnixTime, and unixTimeToStdTime - assuming of
course that time_t is unix time. But if it's not, you're kind of screwed in
general with regards to interacting with anything else, since time_t is
technically opaque. It's just _usually_ unix time and most stuff is going to
assume that it is. There's also SysTime.toTM, though tm isn't exactly a fun
data type to deal with if you're looking to convert anything.
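In code, that interaction looks roughly like this (a sketch, assuming time_t on the platform really is Unix time):

```d
import core.stdc.time : time_t;
import std.datetime.systime : Clock, SysTime, stdTimeToUnixTime,
    unixTimeToStdTime;

void main()
{
    // Round-trip between "std time" (hnsecs since 1 A.D.) and Unix time:
    assert(stdTimeToUnixTime(unixTimeToStdTime(0)) == 0);

    // And between SysTime and time_t:
    SysTime now = Clock.currTime();
    time_t t = now.toUnixTime();
    SysTime back = SysTime.fromUnixTime(t);
    assert(back <= now); // Unix time only has seconds precision
}
```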

But if you care about calendar stuff, using January 1st, 1 A.D. as your
epoch is far cleaner than an arbitrary date like January 1st, 1970. My guess
is that that epoch was originally selected to try and keep the values small
in a time where every bit mattered. It's not a particularly good choice
otherwise, but we've been stuck dealing with it ever since, because that's
what C and C++ continue to use and what OS APIs typically use.

- Jonathan M Davis




The most confusing error message

2018-01-23 Thread Shachar Shemesh via Digitalmars-d

test.d(6): Error: struct test.A(int var = 3) is used as a type

Of course it is. That's how structs are used.

Program causing this:
struct A(int var = 3) {
int a;
}

void main() {
A a;
}

To resolve, you need to change A into A!(). For some reason I have not 
been able to fathom, default template parameters on structs don't work 
like they do on functions.
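For reference, the working form looks like this (sketch):

```d
struct A(int var = 3)
{
    int a;
}

void main()
{
    // A a;   // Error: struct A(int var = 3) is used as a type
    A!() a;   // explicit instantiation, picking up the default var = 3
    A!(5) b;  // or with an explicit argument
    static assert(is(typeof(a) == A!3));
}
```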


Re: wut: std.datetime.systime.Clock.currStdTime is offset from Jan 1st, 1 AD

2018-01-23 Thread drug via Digitalmars-d

24.01.2018 03:15, Jonathan M Davis wrote:

On Tuesday, January 23, 2018 23:27:27 Nathan S. via Digitalmars-d wrote:

https://dlang.org/phobos/std_datetime_systime.html#.Clock.currStdTime
"""
@property @trusted long currStdTime(ClockType clockType =
ClockType.normal)();
Returns the number of hnsecs since midnight, January 1st, 1 A.D.
for the current time.
"""

This choice of offset seems Esperanto-like: deliberately chosen
to equally inconvenience every user. Is there any advantage to
this at all on any platform, or is it just pure badness?


Your typical user would use Clock.currTime and get a SysTime. The badly
named "std time" is the internal representation used by SysTime. Being able
to get at it to convert to other time representations can be useful, but
most code doesn't need to do anything with it.

"std time" is from January 1st 1 A.D. because that's the perfect
representation for implementing ISO 8601, which is the standard that
std.datetime follows, implementing the proleptic Gregorian calendar (i.e. it
assumes that the calendar was always the Gregorian calendar and doesn't do
anything with the Julian calendar).

https://en.wikipedia.org/wiki/ISO_8601
https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar

The math is greatly simplified by using January 1st 1 A.D. as the start date
and by assuming Gregorian for the whole way.

C# does the same thing with its date/time stuff - it even uses
hecto-nanoseconds exactly like we do. hnsecs gives you the optimal balance
between precision and range that can be gotten with 64 bits (it covers from
about 22,000 B.C. to about 22,000 A.D., whereas IIRC, going one decimal
place more precise would reduce it to about 200 years in either direction).

- Jonathan M Davis



I guess he meant that it's inconvenient when working with C/C++, for 
example having to add/subtract the difference between the C/C++ epoch and D's


Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Mike Franklin via Digitalmars-d

On Wednesday, 24 January 2018 at 03:46:41 UTC, lobo wrote:

Well if your embedded device has all that on it you should be 
sitting on an OS with proper memory management support.


I don't see how the OS can help if the underlying hardware 
doesn't have an MMU.  That being said, admittedly, the more 
capable microcontrollers do have an MPU that can be configured to 
throw a hardware exception.



We don't use D, it is all C++ and some Ada in the older systems.


Why don't you use D?

Mike




Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread lobo via Digitalmars-d
On Wednesday, 24 January 2018 at 02:28:12 UTC, Mike Franklin 
wrote:
On Wednesday, 24 January 2018 at 01:44:51 UTC, Walter Bright 
wrote:


Microcontroller code tends to be small and so it's unlikely 
that you'll need to worry about it.


I think you need to get involved in programming 
microcontrollers again because the landscape has changed 
drastically.  The microcontrollers I use now are more powerful 
than PCs of the 90's.


The project I'm currently working on is an HMI for industrial 
control with a full touchscreen 2D GUI.  The code base  is 
240,084 lines of code and that doesn't even include the 3rd 
party libraries I'm using (e.g. 2D graphics library, newlib C 
library, FreeType font rendering library).  That's not "small" 
by my standard of measure.


And with devices such as this being increasingly connected to 
the Internet, such carelessness can easily be exploited as 
evident in https://en.wikipedia.org/wiki/2016_Dyn_cyberattack   
And that's not to mention the types of critical systems that 
run on such platforms that we are increasingly becoming more 
dependent on.


We better start worrying about it.

Mike


Well if your embedded device has all that on it you should be 
sitting on an OS with proper memory management support. Even the 
hokey FreeRTOS can be configured to throw a hardware exception on 
nullptr access.


I work on critical systems software, developing life support and 
pacemakers. For us, null pointers and memory management are not an 
issue. It is not hard to design these problems out of the critical 
component architecture.


The bigger problem is code logic bugs and for that we make heavy 
use of asserts and in-out contracts. We don't use D, it is all 
C++ and some Ada in the older systems.


bye,
lobo


Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Jonathan M Davis via Digitalmars-d
On Wednesday, January 24, 2018 02:28:12 Mike Franklin via Digitalmars-d 
wrote:
> On Wednesday, 24 January 2018 at 01:44:51 UTC, Walter Bright
>
> wrote:
> > Microcontroller code tends to be small and so it's unlikely
> > that you'll need to worry about it.
>
> I think you need to get involved in programming microcontrollers
> again because the landscape has changed drastically.  The
> microcontrollers I use now are more powerful than PCs of the 90's.
>
> The project I'm currently working on is an HMI for industrial
> control with a full touchscreen 2D GUI.  The code base  is
> 240,084 lines of code and that doesn't even include the 3rd party
> libraries I'm using (e.g. 2D graphics library, newlib C library,
> FreeType font rendering library).  That's not "small" by my
> standard of measure.
>
> And with devices such as this being increasingly connected to the
> Internet, such carelessness can easily be exploited as evident in
> https://en.wikipedia.org/wiki/2016_Dyn_cyberattack   And that's
> not to mention the types of critical systems that run on such
> platforms that we are increasingly becoming more dependent on.
>
> We better start worrying about it.

Well, we can just mandate that dereferencing null be @safe such that if it's
not guaranteed that dereferencing null will segfault, the compiler will have
to insert additional checks. We need to do that anyway for overly large
objects (and unfortunately we don't, last I heard). But as long as null checks
aren't inserted when the target is going to segfault on dereferencing null,
then we're not inserting unnecessary checks. That way, stuff running on a
normal CPU would be the same as now (save for the objects that are too large
for segfaulting to work), and targets like a microcontroller would get the
extra checks so that they behaved more like if they were going to segfault
on dereferencing null.
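What those inserted checks might lower to, roughly (my own sketch, not the compiler's actual strategy):

```d
// On a target where address 0 is valid memory and dereferencing null
// would not fault, @safe dereferences could be lowered to something like:
T checkedDeref(T)(T* p) @safe
{
    if (p is null)
        assert(0, "null dereference");
    return *p;
}

void main() @safe
{
    int* p = new int(5);
    assert(checkedDeref(p) == 5);
}
```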

But making dereferencing null @system makes no sense, because that would
mean that dereferencing pointers and references in general could not be
@safe. So, basically, anything that's not on the stack would then be
@system. And that would destroy @safe.

- Jonathan M Davis



Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Mike Franklin via Digitalmars-d
On Wednesday, 24 January 2018 at 01:44:51 UTC, Walter Bright 
wrote:


Microcontroller code tends to be small and so it's unlikely 
that you'll need to worry about it.


I think you need to get involved in programming microcontrollers 
again because the landscape has changed drastically.  The 
microcontrollers I use now are more powerful than PCs of the 90's.


The project I'm currently working on is an HMI for industrial 
control with a full touchscreen 2D GUI.  The code base  is 
240,084 lines of code and that doesn't even include the 3rd party 
libraries I'm using (e.g. 2D graphics library, newlib C library, 
FreeType font rendering library).  That's not "small" by my 
standard of measure.


And with devices such as this being increasingly connected to the 
Internet, such carelessness can easily be exploited as evident in 
https://en.wikipedia.org/wiki/2016_Dyn_cyberattack   And that's 
not to mention the types of critical systems that run on such 
platforms that we are increasingly becoming more dependent on.


We better start worrying about it.

Mike


Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Walter Bright via Digitalmars-d

On 1/23/2018 4:42 PM, Mike Franklin wrote:
That's what kind of ticks me off about this "null is memory safe" argument; it 
seems to be only applicable to a specific platform and environment.


It's an extremely useful argument, though, as modern computers have virtual 
memory systems that map 0 to a seg fault, and have since the 80's, specifically 
because it DOES catch lots and lots of bugs.


I always thought the IBM PC should have put the ROMs at address 0 instead of 
the RAM. It probably would have saved billions of dollars.



I have a microcontroller in front of me where an address of null (essentially 0) is a 
perfectly valid memory address.


Microcontroller code tends to be small and so it's unlikely that you'll need to 
worry about it.




Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Mike Franklin via Digitalmars-d
On Tuesday, 23 January 2018 at 21:53:24 UTC, Steven Schveighoffer 
wrote:


[1] Note: the reason they are safe is because they generally 
result in a segfault, which doesn't harm any memory. This is 
very much a user-space POV, and doesn't take into account 
kernel-space where null dereferences may actually be valid 
memory! It also doesn't (currently) take into account possible 
huge objects that could extend into valid memory space, even in 
user space.


That's what kind of ticks me off about this "null is memory safe" 
argument; it seems to be only applicable to a specific platform 
and environment.  I have a microcontroller in front of me where an 
address of null (essentially 0) is a perfectly valid memory 
address.


Mike


Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Mike Franklin via Digitalmars-d
On Tuesday, 23 January 2018 at 21:53:24 UTC, Steven Schveighoffer 
wrote:



Interestingly, `destroy` is an unsafe operation for classes.


Because it's calling a @system function, rt_finalize. This 
function calls whatever is in the destructor, and because it 
works on Object level, it has no idea what the actual 
attributes of the derived destructor are.


This needs to be fixed, but a whole host of issues like this 
exist with Object.


Are there any bugzilla issues that you are aware of that document 
this?


Mike


Re: wut: std.datetime.systime.Clock.currStdTime is offset from Jan 1st, 1 AD

2018-01-23 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, January 23, 2018 23:27:27 Nathan S. via Digitalmars-d wrote:
> https://dlang.org/phobos/std_datetime_systime.html#.Clock.currStdTime
> """
> @property @trusted long currStdTime(ClockType clockType =
> ClockType.normal)();
> Returns the number of hnsecs since midnight, January 1st, 1 A.D.
> for the current time.
> """
>
> This choice of offset seems Esperanto-like: deliberately chosen
> to equally inconvenience every user. Is there any advantage to
> this at all on any platform, or is it just pure badness?

Your typical user would use Clock.currTime and get a SysTime. The badly
named "std time" is the internal representation used by SysTime. Being able
to get at it to convert to other time representations can be useful, but
most code doesn't need to do anything with it.

"std time" is from January 1st 1 A.D. because that's the perfect
representation for implementing ISO 8601, which is the standard that
std.datetime follows, implementing the proleptic Gregorian calendar (i.e. it
assumes that the calendar was always the Gregorian calendar and doesn't do
anything with the Julian calendar).

https://en.wikipedia.org/wiki/ISO_8601
https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar

The math is greatly simplified by using January 1st 1 A.D. as the start date
and by assuming Gregorian for the whole way.

C# does the same thing with its date/time stuff - it even uses
hecto-nanoseconds exactly like we do. hnsecs gives you the optimal balance
between precision and range that can be gotten with 64 bits (it covers from
about 22,000 B.C. to about 22,000 A.D., whereas IIRC, going one decimal
place more precise would reduce it to about 200 years in either direction).
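A back-of-the-envelope check of that precision/range trade-off (my own arithmetic; the exact years depend on rounding):

```d
void main()
{
    enum long secsPerYear = 31_556_952; // average Gregorian year in seconds

    // Signed 64-bit range at hnsec (100 ns) resolution, in years:
    long hnsecYears = long.max / 10_000_000 / secsPerYear;
    assert(hnsecYears > 29_000); // tens of thousands of years each way

    // At 1 ns resolution the range collapses to a few centuries:
    long nsecYears = long.max / 1_000_000_000 / secsPerYear;
    assert(nsecYears < 300);
}
```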

- Jonathan M Davis



Re: Developing blockchain software with D, not C++

2018-01-23 Thread deadalnix via Digitalmars-d

On Thursday, 18 January 2018 at 09:02:38 UTC, Walter Bright wrote:
I don't remember how long, but it took me a fair while to do 
the divide:


  https://github.com/dlang/druntime/blob/master/src/rt/llmath.d

It could be upscaled by rote to 128 bits, but even that would 
take me much longer than an hour. And it would still leave the 
issue of making ucent work with 32 bit code gen.


It could also be translated to D, but I doubt the generated 
code would be as good.


Nevertheless, we do have the technology, we just need someone 
to put it together.


All the code to split 64 bits into 32 bits was generic and could 
be reused.


wut: std.datetime.systime.Clock.currStdTime is offset from Jan 1st, 1 AD

2018-01-23 Thread Nathan S. via Digitalmars-d

https://dlang.org/phobos/std_datetime_systime.html#.Clock.currStdTime
"""
@property @trusted long currStdTime(ClockType clockType = 
ClockType.normal)();
Returns the number of hnsecs since midnight, January 1st, 1 A.D. 
for the current time.
"""

This choice of offset seems Esperanto-like: deliberately chosen 
to equally inconvenience every user. Is there any advantage to 
this at all on any platform, or is it just pure badness?


Re: Implementing tail-const in D

2018-01-23 Thread sarn via Digitalmars-d

On Tuesday, 23 January 2018 at 09:36:03 UTC, Simen Kjærås wrote:
Since tail-const (more correctly called head-mutable) was 
mentioned here lately (in the 'I closed a very old bug!'[1] 
thread), I've been racking my brain to figure out what needs 
doing to make a viable solution.


Have you seen Rebindable in Phobos?  I know it's not the same 
thing as what you're talking about, but it's relevant.

https://dlang.org/library/std/typecons/rebindable.html
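For reference, a small Rebindable usage sketch:

```d
import std.typecons : rebindable;

class C
{
    int x;
    this(int x) pure { this.x = x; }
}

void main()
{
    // The reference itself stays assignable while the object is immutable:
    auto r = rebindable(new immutable C(1));
    assert(r.x == 1);
    r = new immutable C(2); // rebinding is fine; mutating *r is not
    assert(r.x == 2);
}
```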


Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Steven Schveighoffer via Digitalmars-d

On 1/22/18 11:11 PM, Mike Franklin wrote:

On Tuesday, 23 January 2018 at 02:25:57 UTC, Mike Franklin wrote:

Should `destroy` be `@system` so it can't be called in `@safe` code, 
or should the compiler be smart enough to figure out the flow control 
and throw an error?


Interestingly, `destroy` is an unsafe operation for classes.


Because it's calling a @system function, rt_finalize. This function 
calls whatever is in the destructor, and because it works on Object 
level, it has no idea what the actual attributes of the derived 
destructor are.


This needs to be fixed, but a whole host of issues like this exist with 
Object.



But it's not an unsafe operation for structs


Because struct destructors are not virtual. The compiler can tell when a 
struct destructor is unsafe:


https://run.dlang.io/is/o3ujrP

Note, I had to call destroy in a sub-function because if I made main 
@safe, it would fail to compile due to automatic destruction.




Not sure if that's a bug or not.


Not a bug.

Also, as others have pointed out, null dereferences are also considered 
@safe [1]. destroying an object doesn't actually deallocate it. It puts 
it into a state that is @safe to call, but will likely crash.


On 1/22/18 9:43 PM, Nicholas Wilson wrote:
>
> The compiler should be taught that any access to a `.destroy()`ed object
> is invalid i.e. that its lifetime ends when destroy is called.

destroy is just a function, there shouldn't be any special magic for it 
(we have enough of that already). And in fact its lifetime has not 
ended, it's just destructed, and left as an empty shell.


The idea behind destroy is to decouple destruction from deallocation (as 
delete combines the two). @safe is all about memory safety, nothing 
else. As long as you can't corrupt memory, it is @safe.


-Steve

[1] Note: the reason they are safe is because they generally result in a 
segfault, which doesn't harm any memory. This is very much a user-space 
POV, and doesn't take into account kernel-space where null dereferences 
may actually be valid memory! It also doesn't (currently) take into 
account possible huge objects that could extend into valid memory space, 
even in user space.
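A tiny illustration of that decoupling (sketch):

```d
class C
{
    bool* destructed;
    this(bool* flag) { destructed = flag; }
    ~this() { *destructed = true; }
}

void main()
{
    bool ranDtor;
    auto c = new C(&ranDtor);
    destroy(c);          // runs the destructor...
    assert(ranDtor);
    assert(c !is null);  // ...but deallocates nothing and nulls no reference
}
```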


Vision for 2018 H1?

2018-01-23 Thread Dukc via Digitalmars-d
Is it intended to be updated? No pressure, just making sure it's 
not forgotten...


Re: Tuple DIP

2018-01-23 Thread Atila Neves via Digitalmars-d
On Tuesday, 23 January 2018 at 14:58:07 UTC, Petar Kirov 
[ZombineDev] wrote:

On Tuesday, 23 January 2018 at 11:04:33 UTC, Atila Neves wrote:

On Sunday, 14 January 2018 at 18:17:38 UTC, Timon Gehr wrote:

On 14.01.2018 19:14, Timothee Cour wrote:
actually I just learned that indeed 
sizeof(typeof(tuple()))=1, but why

is that? (at least for std.typecons.tuple)
maybe worth mentioning that in the DIP (with rationale)


It's inherited from C, where all struct instances have size 
at least 1. (Such that each of them has a distinct address.)


Inherited from C++. In C empty structs have size 0. This 
caused me all sorts of problems when importing C headers from 
C++ in funky codebases.


foo.c:
#include <stdio.h>

struct Foo {};

int main() {
printf("%zu\n", sizeof(struct Foo));
return 0;
}


% clear && gcc foo.c && ./a.out
0

% clear && gcc -xc++ foo.c && ./a.out
1


Atila


AFAIR the ISO C standard does not allow empty structs (as they 
would have no meaning). If you use the warnings as errors 
option, it won't compile:


<source>:3:8: error: struct has no members [-Werror=pedantic]
 struct Foo {};
^~~


That's a warning treated as error. I checked and it seems that 
you're right about the C standard, although in practice compilers 
seem to accept empty structs. I knew that in C++ it's explicit 
that empty classes have size 1. Live and learn.
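Incidentally, D sides with C++ here (sketch):

```d
struct Empty {}

void main()
{
    // As in C++, an empty struct still occupies one byte, so that
    // distinct instances have distinct addresses.
    static assert(Empty.sizeof == 1);
}
```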


Atila


Re: Tuple DIP

2018-01-23 Thread Petar via Digitalmars-d

On Tuesday, 23 January 2018 at 11:04:33 UTC, Atila Neves wrote:

On Sunday, 14 January 2018 at 18:17:38 UTC, Timon Gehr wrote:

On 14.01.2018 19:14, Timothee Cour wrote:
actually I just learned that indeed 
sizeof(typeof(tuple()))=1, but why

is that? (at least for std.typecons.tuple)
maybe worth mentioning that in the DIP (with rationale)


It's inherited from C, where all struct instances have size at 
least 1. (Such that each of them has a distinct address.)


Inherited from C++. In C empty structs have size 0. This caused 
me all sorts of problems when importing C headers from C++ in 
funky codebases.


foo.c:
#include <stdio.h>

struct Foo {};

int main() {
printf("%zu\n", sizeof(struct Foo));
return 0;
}


% clear && gcc foo.c && ./a.out
0

% clear && gcc -xc++ foo.c && ./a.out
1


Atila


AFAIR the ISO C standard does not allow empty structs (as they 
would have no meaning). If you use the warnings as errors option, 
it won't compile:


<source>:3:8: error: struct has no members [-Werror=pedantic]
 struct Foo {};
^~~


Re: Implementing tail-const in D

2018-01-23 Thread Simen Kjærås via Digitalmars-d

On Tuesday, 23 January 2018 at 14:17:26 UTC, Andrea Fontana wrote:

On Tuesday, 23 January 2018 at 12:39:12 UTC, Simen Kjærås wrote:
On Tuesday, 23 January 2018 at 12:12:42 UTC, Nicholas Wilson 
wrote:
On Tuesday, 23 January 2018 at 09:36:03 UTC, Simen Kjærås 
wrote:

Questions: Is a DIP required for this?


A DIP is required for language changes. So yes.


No language changes are proposed - this is all library code.

--
  Simen


It would be useful to have one or more short examples. Just to 
see what actually change in a common scenario.


Andrea


Your wish is my command. For the most part, the changes will 
require that instead of storing Unqual!Ts, use HeadMutable!Ts, 
and when assigning to a HeadMutable!T, remember to assign 
headMutable(rhs).


Here's a somewhat simplistic map function. As you can see, not a 
whole lot is changed - map passes head-mutable versions of its 
arguments to MapResult, and MapResult implements opHeadMutable(), 
otherwise everything is exactly as you'd expect.


import std.range;

auto map(alias fn, R)(R r) if (isInputRange!(HeadMutable!R))
{
// Pass head-mutable versions to MapResult.
return MapResult!(fn, HeadMutable!R)(headMutable(r));
}

struct MapResult(alias fn, R) if (isInputRange!R)
{
R range;

this(R rng)
{
range = rng;
}

@property
auto front()
{
return fn(range.front);
}

void popFront()
{
range.popFront();
}

@property
bool empty()
{
return range.empty;
}

// The only change to MapResult:
auto opHeadMutable(this This)()
{
import std.traits : CopyTypeQualifiers;
return MapResult!(fn, 
HeadMutable!(CopyTypeQualifiers!(This, R)))(range);

}
}


unittest
{
import std.algorithm : equal;

const a = [1,2,3,4].map!(v => v*2);
assert(!isInputRange!(typeof(a)));

// Here, std.algorithm.map gives up, since a const MapResult 
is not
// an input range, and calling Unqual on it doesn't give a 
sensible

// result.
// HeadMutable makes this work, since the type system now 
knows how

// to make a head-mutable version of the type.
auto b = a.map!(v => v/2);

assert(equal([1,2,3,4], b));
}

--
  Simen


Re: Implementing tail-const in D

2018-01-23 Thread Andrea Fontana via Digitalmars-d

On Tuesday, 23 January 2018 at 12:39:12 UTC, Simen Kjærås wrote:
On Tuesday, 23 January 2018 at 12:12:42 UTC, Nicholas Wilson 
wrote:
On Tuesday, 23 January 2018 at 09:36:03 UTC, Simen Kjærås 
wrote:

Questions: Is a DIP required for this?


A DIP is required for language changes. So yes.


No language changes are proposed - this is all library code.

--
  Simen


It would be useful to have one or more short examples. Just to 
see what actually change in a common scenario.


Andrea




Re: Implementing tail-const in D

2018-01-23 Thread Simen Kjærås via Digitalmars-d
On Tuesday, 23 January 2018 at 12:12:42 UTC, Nicholas Wilson 
wrote:

On Tuesday, 23 January 2018 at 09:36:03 UTC, Simen Kjærås wrote:

Questions: Is a DIP required for this?


A DIP is required for language changes. So yes.


No language changes are proposed - this is all library code.

--
  Simen


Re: Implementing tail-const in D

2018-01-23 Thread Nicholas Wilson via Digitalmars-d

On Tuesday, 23 January 2018 at 09:36:03 UTC, Simen Kjærås wrote:

Questions: Is a DIP required for this?


A DIP is required for language changes. So yes.


Re: Tuple DIP

2018-01-23 Thread Atila Neves via Digitalmars-d

On Sunday, 14 January 2018 at 18:17:38 UTC, Timon Gehr wrote:

On 14.01.2018 19:14, Timothee Cour wrote:
actually I just learned that indeed sizeof(typeof(tuple()))=1, 
but why

is that? (at least for std.typecons.tuple)
maybe worth mentioning that in the DIP (with rationale)


It's inherited from C, where all struct instances have size at 
least 1. (Such that each of them has a distinct address.)


Inherited from C++. In C empty structs have size 0. This caused 
me all sorts of problems when importing C headers from C++ in 
funky codebases.


foo.c:
#include <stdio.h>

struct Foo {};

int main() {
printf("%zu\n", sizeof(struct Foo));
return 0;
}


% clear && gcc foo.c && ./a.out
0

% clear && gcc -xc++ foo.c && ./a.out
1


Atila


Implementing tail-const in D

2018-01-23 Thread Simen Kjærås via Digitalmars-d
Since tail-const (more correctly called head-mutable) was 
mentioned here lately (in the 'I closed a very old bug!'[1] 
thread), I've been racking my brain to figure out what needs 
doing to make a viable solution.


Unqual is the standard way today to get a head-mutable version of 
something. For dynamic arrays, static arrays, pointers and value 
types, including structs without aliasing, this works. For AAs, 
classes, and structs with aliasing, Unqual is the wrong tool, but 
it's the tool we have, so it's what we use.


Unqual has other uses, so HeadMutable!T should be a separate 
template. This means parts of Phobos will need to be reworked to 
support types not currently supported. However, given these types 
are not currently supported, this should not break any existing 
code.


While it is generally desirable for T to be implicitly castable 
to HeadMutable!T (just like const(int[]) is implicitly castable 
to const(int)[]), the rules for such implicit casting in the 
language today are inconsistent[2] and incompatible with alias 
this[3], opDispatch, opDot, subclassing, and constructors.


Instead of implicit casting, I therefore propose we use a method 
headMutable(), which will attempt to call the appropriate 
functions to do the conversion. With these two building blocks, 
we have what we need for tail-const (head-mutable) ranges and 
other constructs.


What does your code need to do to support HeadMutable? If you 
have a templated struct that holds an array or pointer, the type 
of which depends on a template parameter, you can define a 
function opHeadMutable that returns a head-mutable version. 
That's it.


If you use HeadMutable!T anywhere, you almost definitely should 
use headMutable() when assigning to it, since T might not be 
implicitly castable to HeadMutable!T.


So what does all of this look like? An example templated struct 
with opHeadMutable hook:


struct R(T) {
T[] arr;
auto opHeadMutable(this This)() {
import std.traits : CopyTypeQualifiers;
return R!(CopyTypeQualifiers!(This, T))(arr);
}
}

This is the code you will need to write to ensure your types can 
be converted to head-mutable. opHeadMutable provides both a 
method for conversion, and a way for the HeadMutable!T template 
to extract the correct type.


The actual implementation of HeadMutable!T and headMutable is 
available here:

https://gist.github.com/Biotronic/67bebfe97f17e73cc610d9bcd119adfb


My current issues with this:
1) I don't like the names much. I called them Decay, decay and 
opDecay for a while. Name suggestions are welcome.
2) As mentioned above, implicit conversions would be nice, but 
that'd require an entirely new type of implicit conversion in 
addition to alias this, opDispatch, opDot and interfaces/base 
classes. This would require some pretty darn good reasons, and I 
don't think a call to headMutable() is that much of a problem.


Questions:
Is a DIP required for this? Should I create a PR implementing 
this for the range types in Phobos? What other types would 
benefit from this?


I welcome any and all... feck it. Destroy!

--
  Simen

[1]: 
https://forum.dlang.org/post/egpcfhpediicvkjuk...@forum.dlang.org

[2]: https://issues.dlang.org/show_bug.cgi?id=18268
[3]: Alias this is too eager, and allows for calling mutating 
methods on the temporary value it returns. If alias this was used 
to allow const(int[]) to convert to const(int)[], 
isInputRange!(const(int[])) would return true.


Re: Please provide a channel for D ecosystem ideas

2018-01-23 Thread JN via Digitalmars-d

On Saturday, 20 January 2018 at 20:37:45 UTC, Andre Pany wrote:

Hi,

the GSOC wiki page inspired me to write this request. If I have 
an idea how the improve the D ecosystem but cannot do it 
myself, there is at the moment no good channel to provide this 
idea to someone other in the D community. Maybe someone other 
is searching for an opportunity to help the D ecosystem but 
does not know how.


In the past, dsource used to serve this purpose. It was during 
the Tango vs Phobos times, but there were so many projects being 
born, each project had a separate webforum where people could 
discuss stuff. I feel like we are missing something now. GtkD has 
its own forums, vibe.d has its own, and most other projects are on 
GitHub, where discussion is hidden behind the issues section.


I think wiki could work, or just a subsection of this forum here.


Re: Shouldn't invalid references like this fail at compile time?

2018-01-23 Thread Kagamin via Digitalmars-d

On Monday, 22 January 2018 at 23:30:16 UTC, Aedt wrote:
I was asked on Reddit 
(https://www.reddit.com/r/learnprogramming/comments/7ru82l/i_was_thinking_of_using_d_haxe_or_another/) how D would handle the following similar D code. I'm surprised that both dmd and ldc provide no warnings even with the -w argument passed.


Well, if you want to check much at compile time, you probably 
want SPARK or F* (fstar).