^^ limitation

2012-04-24 Thread Tyro[17]
I believe the following two lines of code should produce the same 
output. Is there a specific reason why D doesn't allow this? Of course the 
only way to store the result would be to put it into a BigInt variable 
or convert it to a string, but I don't think that should prevent the compiler 
from producing the correct value.


(101^^1000).to!string.writeln;
(BigInt(101)^^1000).writeln;

Regards,
Andrew
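
For reference, a minimal runnable form of the BigInt line, as a sketch (the
imports below assume the std.bigint and std.stdio modules of current Phobos):

import std.bigint : BigInt;
import std.stdio : writeln;

void main()
{
    // The base is a BigInt from the start, so ^^ is evaluated in arbitrary
    // precision and the full value of 101^^1000 is printed exactly.
    (BigInt(101) ^^ 1000).writeln;
}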


Re: ^^ limitation

2012-04-24 Thread Marco Leise
Am Wed, 25 Apr 2012 06:00:31 +0900
schrieb "Tyro[17]" :

> I believe the following two lines of code should produce the same 
> output. Is there a specific reason why doesn't allow this? Of course the 
> only way to store the result would be to put in into a BigInt variable 
> or convert it to string but I don't that shouldn't prevent the compiler 
> from producing the correct value.
> 
> (101^^1000).to!string.writeln;
> (BigInt(101)^^1000).writeln;
> 
> Regards,
> Andrew

Well... what do you want to hear? I like to know that the result of 
mathematical operations doesn't change its type depending on whether it can be 
evaluated at compile time and on the magnitude of the result. Imagine the mess 
when the numbers are replaced by constants that are defined elsewhere. This may 
work in languages that are not strongly typed, but we rely on the exact data 
type of an expression. You are calling a function called to!string with the 
overload that takes an int. A BigInt or a string may be handled entirely 
differently by to!string. The compiler doesn't know what BigInt is or 
what to!string is supposed to do. It cannot assume that passing a 
string to it will work the same way as passing an int. What you would need is 
for int and BigInt to have the same semantics everywhere. But once you leave the 
language, by calling a C function for example, you need an explicit 32-bit int 
again.
If you need this functionality, use a programming language that has type classes 
and seamlessly switches between int/BigInt types, but drops the systems-language 
attribute. You'll find languages that support unlimited integers and 
floats without friction. Or use BigInt everywhere. Maybe Python or 
Mathematica.

-- 
Marco



Re: ^^ limitation

2012-04-24 Thread bearophile

Marco Leise:

If you need this functionality use a programming language that 
has type classes and seamlessly switches between int/BigInt 
types, but drops the systems language attribute.


I think Lisp (which, besides allowing fixnums that can't grow, is 
often used with tagged integers that switch to multi-precision when 
the number grows) was used as a systems language too (Symbolics?).


Bye,
bearophile


Re: ^^ limitation

2012-04-25 Thread Don Clugston

On 24/04/12 23:00, Tyro[17] wrote:

I believe the following two lines of code should produce the same
output. Is there a specific reason why doesn't allow this? Of course the
only way to store the result would be to put in into a BigInt variable
or convert it to string but I don't that shouldn't prevent the compiler
from producing the correct value.

(101^^1000).to!string.writeln;
(BigInt(101)^^1000).writeln;

Regards,
Andrew


Because BigInt is part of the library, not part of the compiler, so the 
compiler doesn't know it exists.


What would be the type of 3^^5 ? Would it be a BigInt as well?

This kind of thing doesn't work well in C-family languages.



Re: ^^ limitation

2012-04-26 Thread Tryo[17]

On Tuesday, 24 April 2012 at 22:45:37 UTC, Marco Leise wrote:

Am Wed, 25 Apr 2012 06:00:31 +0900
schrieb "Tyro[17]" :

I believe the following two lines of code should produce the 
same output. Is there a specific reason why doesn't allow 
this? Of course the only way to store the result would be to 
put in into a BigInt variable or convert it to string but I 
don't that shouldn't prevent the compiler from producing the 
correct value.


(101^^1000).to!string.writeln;
(BigInt(101)^^1000).writeln;

Regards,
Andrew


Well... what do you want to hear? I like to know that the


Honestly, I just want to hear the rationale for why things are
the way they are. I see things possible in other languages that
I know are not as powerful as D, and I get to wondering why... If
I don't understand enough to make a determination on my
own, I simply ask.

result of mathematical operations doesn't change its type 
depending on the ability to  compile-time evaluate it and the 
magnitude of the result. Imagine the mess when the numbers are 
replaced by constants that are defined else where. This may


D provides an auto type facility that determines the type
that can best accommodate a particular value. What prevents
it from determining that the only type that can accommodate
that value is a BigInt, the same way it decides between int,
long, ulong, etc.?

work in languages that are not strongly typed, but we rely on 
the exact data type of an expression. You are calling a 
function called to!string with the overload that takes an int.


Why couldn't to!string be overloaded to take a BigInt?

A BigInt or a string may be handled entirely differently by 
to!string. The compiler doesn't know what either BigInt is or 
what to!string is supposed to do. It cannot make the assumption


The point is this: currently 2^^31 will produce a negative
value on my system. Not that the value is wrong as such; the
variable simply cannot hold the magnitude of the result of this
calculation, so it wraps around and produces a negative value.
However, 2^^n for n>=32 produces a value of 0. Why not
produce the value and let the user choose what to put it into?
Why not make the language BigInt-aware? What is the
negative effect of taking BigInt out of the library and making it
an official part of the language?

that passing a string to it will work the same way as passing 
an int. What you would need is that int and BigInt have the 
same semantics everywhere. But once you leave the language by 
calling a C function for example you need an explicit 32-bit 
int again.
If you need this functionality use a programming language that 
has type classes and seamlessly switches between int/BigInt 
types, but drops the systems language attribute. You'll find 
languages that support unlimited integers and floats without 
friction. Or you use BigInt everywhere. Maybe Python or 
Mathematica.


I am not interested in another language (maybe in the future),
simply in an understanding of why things are the way they are.

Andrew
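
A small runnable sketch of the wrap-around described above, together with the
explicit opt-in to a wider or arbitrary-precision type (std.bigint assumed;
the wrapped int result is the negative value reported in this thread):

import std.bigint : BigInt;
import std.stdio : writefln;

void main()
{
    auto wrapped = 2 ^^ 31;      // does not fit in a 32-bit int, wraps to a negative value
    writefln("2^^31 as int:   %s", wrapped);

    long asLong = 2L ^^ 31;      // widening must be requested explicitly
    writefln("2L^^31 as long: %s", asLong);            // 2147483648

    writefln("BigInt(2)^^31:  %s", BigInt(2) ^^ 31);   // arbitrary precision
}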



Re: ^^ limitation

2012-04-26 Thread James Miller

On Friday, 27 April 2012 at 00:56:13 UTC, Tryo[17] wrote:


D provides an auto type facility that determines the type
that can best accommodate a particular value. What prevents
it from determining that the only type that can accommodate
that value is a BigInt, the same way it decides between int,
long, ulong, etc.?
Because the compiler doesn't know how to make a BigInt, BigInt is 
part of the library, not the language.


Why couldn't to!string be overloaded to take a BigInt?

It is; it's the same overload that takes other objects.
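
For instance, a minimal sketch (std.conv and std.bigint assumed) showing that
to!string already handles a BigInt value:

import std.bigint : BigInt;
import std.conv : to;

void main()
{
    // to!string formats the value via BigInt's own toString, so the compiler
    // needs no special knowledge of BigInt.
    assert((BigInt(2) ^^ 64).to!string == "18446744073709551616");
}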


The point is this: currently 2^^31 will produce a negative
value on my system. Not that the value is wrong as such; the
variable simply cannot hold the magnitude of the result of this
calculation, so it wraps around and produces a negative value.
However, 2^^n for n>=32 produces a value of 0. Why not
produce the value and let the user choose what to put it into?
Why not make the language BigInt-aware? What is the
negative effect of taking BigInt out of the library and making it
an official part of the language?


Because this is a native language. The idea is to be close to the 
hardware, and that means fixed-size integers, fixed-size floats, 
and having to live with that. Making BigInt part of the language 
opens the door for a whole host of other things to become 
"part of the language". While we're at it, why don't we make 
matrices part of the language, and regexes, and we might as well 
move all that datetime stuff into the language too. Oh, and I 
would love to see all the signals stuff in there too.


The reason we don't put everything in the language is that the 
more you put into the language, the harder it is to change. There 
are more than enough bugs in D right now, and adding more 
features to the language means a higher burden for core 
development. There is a trend of trying to move away from tight 
integration with the compiler, and by extension the language. 
Associative arrays are being reworked so that most of the work 
is done in object.d, with the end result being that the compiler only 
has to convert T[U] into AA(T, U) and do a similar conversion for 
AA literals. This means there is no extra fancy work for the 
compiler to do to support AAs.


Also, D is designed for efficiency: if I don't want a BigInt, and 
all of the extra memory that comes with it, then I would rather have 
an error. I don't want what /should/ be a fast system to slow 
down because I accidentally typed 1 << 33 instead of 1 << 23; I 
want an error of some sort.


The real solution here isn't to just blindly allow arbitrary 
features to be "in the language" as it were, but to make it 
easier to integrate library solutions so they feel like part of 
the language.


--
James Miller


Re: ^^ limitation

2012-04-27 Thread Timon Gehr

On 04/27/2012 03:55 AM, James Miller wrote:

On Friday, 27 April 2012 at 00:56:13 UTC, Tryo[17] wrote:


D provides an auto type facility that determines the type
that can best accommodate a particular value. What prevents
it from determining that the only type that can accommodate
that value is a BigInt, the same way it decides between int,
long, ulong, etc.?

Because the compiler doesn't know how to make a BigInt, BigInt is part
of the library, not the language.


Why couldn't to!string be overloaded to take a BigInt?

It is, its the same overload that takes other objects.


The point is this: currently 2^^31 will produce a negative
value on my system. Not that the value is wrong as such; the
variable simply cannot hold the magnitude of the result of this
calculation, so it wraps around and produces a negative value.
However, 2^^n for n>=32 produces a value of 0. Why not
produce the value and let the user choose what to put it into?
Why not make the language BigInt-aware? What is the
negative effect of taking BigInt out of the library and making it
an official part of the language?


Because this is a native language. The idea is to be close to the
hardware, and that means fixed-sized integers, fixed-sized floats and
having to live with that. Making BigInt part of the language opens up
the door for a whole host of other things to become "part of the
language". While we're at it, why don't we make matrices part of the
language, and regexes, and we might aswell move all that datetime stuff
into the language too. Oh and I would love to see all the signals stuff
in there too.

The reason we don't put everything in the language is because the more
you put into the language, the harder it is to move. There are more than
enough bugs in D


s/in D/in the DMD frontend/


right now, and adding more features into the language
means a higher burden for core development. There is a trend of trying
to move away from tight integration into the compiler, and by extension
the language. Associative arrays are being worked on to make most of the
work be done in object.d, with the end result being the compiler only
has to convert T[U] into AA(T, U) and do a similar conversion for aa
literals. This means that there is no extra fancy work for the compiler
to do to support AA's

Also, D is designed for efficiency, if I don't want a BigInt, and all of
the extra memory that comes with, then I would rather have an error. I
don't want what /should/ be a fast system to slow down because I
accidentally type 1 << 33 instead of 1 << 23, I want an error of some sort.

The real solution here isn't to just blindly allow arbitrary features to
be "in the language" as it were, but to make it easier to integrate
library solutions so they feel like part of the language.

--
James Miller




Re: ^^ limitation

2012-04-27 Thread Marco Leise
Am Fri, 27 Apr 2012 02:56:11 +0200
schrieb "Tryo[17]" :

> On Tuesday, 24 April 2012 at 22:45:37 UTC, Marco Leise wrote:
> > Well... what do you want to hear? I like to know that the
> 
> Honestly, I just want to hear the rationale for why things are
> the way they are. I see thing possible in other languages that
> I know are not as powerful as D and I get to wonder why... If
> I don't understand enough to make a determination on my
> own, I simply ask.

At first I wasn't sure whether you were trolling. It seems so obvious and 
clear to me that the result of a calculation cannot change its type depending 
on the exact magnitudes of the operands that I interpreted ^^ as *g* or :p. 
"Ha! Ha! Limitation!"
Considering that you probably have more experience with higher-level languages, 
where the actual data type can be more or less hidden and dynamically changed, 
I can understand the confusion. The word "powerful" can mean different things to 
different people. Powerful can mean that you have a high-level foreach loop, 
but it can also mean that you are able to implement a foreach loop in low-level 
assembly.

A warning could be useful. I don't know about: (3 ^^ 99) & 0x though. 
I.e. cases where you may be aware of the overflow, but want the 2^32 modulo 
anyway for some kind of hash function.

-- 
Marco



Limitation with current regex API

2012-01-16 Thread Jerry
Hi all,

In general, I'm enjoying the regex respin.  However, I ran into one
issue that seems to have no clean workaround.

Generally, I want to be able to get the start and end indices of
matches.  With the complete match, this info can be pieced together with
match.pre().length and match.hit.length().  However, I can't do this
with captures.

For example: I have a string and the regex .*(a).*(b).*(c).*.  I want
to find where a, b, and c are located when I match.  As far as I can
tell, the only way to do this would be to capture every chunk of text,
then iterate to determine the offsets.  That seems wasteful.

If you look at the ICU and Java regex APIs, you'll see that this
information is retrievable.  I believe it's available under the covers
of the D regex library API too.

Can this please be exposed?  It's very helpful for doing text processing
where you need to be able to align the results of multiple
transformations to the input text.

Thanks
Jerry




[challenge] Limitation in D's metaprogramming

2010-10-18 Thread Nick Sabalausky
I don't know if others have noticed this before, but I think I've found a 
notable limitation in D's metaprogramming potential: there doesn't appear to 
be a way to have mutable global state at compile-time.

Challenge:

Create two..."things"...they can be functions, templates, variables, 
mix-n-match, whatever. One of them increments a counter, and the other can 
be used to retrieve the value. But both of these must operate at 
compile-time, and they must both be callable (directly or indirectly, 
doesn't matter) from within the context of any module that imports them.

This is an example, except it operates at run-time, not compile-time:

---
// a.d
module a;
int value=0;
void inc()
{
    value++;
}
int get()
{
    return value;
}

void incFromA()
{
    inc();
}

//b.d
module b;
import a;
void incFromB()
{
    inc();
}

//main.d
import a;
import b;
import std.stdio;
void main()
{
    inc();
    incFromA();
    incFromB();
    writeln(get());
}
---

The goal of this challenge is to define a global-level manifest constant 
enum that holds a value that has been incremented from multiple modules:

enum GOAL = ;

It can, of course, then be displayed via:
pragma(msg, std.conv.to!string(GOAL));

At this point, I'm not concerned about order-of-execution issues resulting 
in unexpected or unpredictable values. As long as a value can be incremented 
at compile-time from multiple modules and used to initialize an enum 
manifest constant, that satisfies this challenge.




Re: Limitation with current regex API

2012-01-16 Thread Vladimir Panteleev

On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:
As far as I can tell, the only way to do this would be to 
capture every chunk of text, then iterate to determine the 
offsets.


Not sure if this is what you were referring to, but you can do...

m.pre.length + m.captures[1].ptr - m.hit.ptr



Re: Limitation with current regex API

2012-01-16 Thread Vladimir Panteleev
On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev 
wrote:

On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:
As far as I can tell, the only way to do this would be to 
capture every chunk of text, then iterate to determine the 
offsets.


Not sure if this is what you were referring to, but you can 
do...


Even simpler: m.captures[1].ptr - s.ptr

(s is the string being matched)
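
Putting the suggestion together, a minimal sketch using the current std.regex
API (matchFirst), relying on the fact noted here that captures are slices of
the original input string:

import std.regex : matchFirst;
import std.stdio : writefln;

void main()
{
    auto s = "__a__b__c__";
    auto m = matchFirst(s, `.*(a).*(b).*(c).*`);

    // Each capture is a slice of s, so subtracting the base pointer gives
    // its start offset; adding the capture's length gives the end offset.
    foreach (i; 1 .. m.length)
    {
        auto start = m[i].ptr - s.ptr;
        writefln("capture %s: [%s, %s) = %s", i, start, start + m[i].length, m[i]);
    }
}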


Re: Limitation with current regex API

2012-01-16 Thread Jerry
"Vladimir Panteleev"  writes:

> On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:
>> On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:
>>> As far as I can tell, the only way to do this would be to capture every
>>> chunk of text, then iterate to determine the offsets.
>>
>> Not sure if this is what you were referring to, but you can do...
>
> Even simpler: m.captures[1].ptr - s.ptr
>
> (s is the string being matched)

Ah ok, that'll work.



Re: Limitation with current regex API

2012-01-16 Thread Mail Mantis
2012/1/17 Mail Mantis :
> Correct me if I'm wrong, but wouldn't this be better:
> (m_captures[1].ptr - s.ptr) / s[0].sizeof;

No, it wouldn't. Somehow, I forgot the rules for pointer arithmetic. Sorry.
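
A tiny illustration of that rule: pointer subtraction in D, as in C, is already
measured in elements of the pointed-to type, not in bytes, so no division by
the element size is needed:

void main()
{
    int[] a = [10, 20, 30, 40];
    auto p = &a[3];
    assert(p - a.ptr == 3);   // 3 elements apart, even though that is 12 bytes
}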


Re: Limitation with current regex API

2012-01-16 Thread Nick Sabalausky
"Vladimir Panteleev"  wrote in message 
news:klzeekkilpzwmjmku...@dfeed.kimsufi.thecybershadow.net...
> On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:
>> On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:
>>> As far as I can tell, the only way to do this would be to capture every 
>>> chunk of text, then iterate to determine the offsets.
>>
>> Not sure if this is what you were referring to, but you can do...
>
> Even simpler: m.captures[1].ptr - s.ptr
>
> (s is the string being matched)

That wouldn't work in @safe mode, would it?




Re: Limitation with current regex API

2012-01-16 Thread Mail Mantis
2012/1/17 Vladimir Panteleev :
> On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:
>>
>> On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:
>>>
>>> As far as I can tell, the only way to do this would be to capture every
>>> chunk of text, then iterate to determine the offsets.
>>
>>
>> Not sure if this is what you were referring to, but you can do...
>
>
> Even simpler: m.captures[1].ptr - s.ptr
>
> (s is the string being matched)

Correct me if I'm wrong, but wouldn't this be better:
(m_captures[1].ptr - s.ptr) / s[0].sizeof;


Re: Limitation with current regex API

2012-01-16 Thread Timon Gehr

On 01/17/2012 04:03 AM, Nick Sabalausky wrote:

"Vladimir Panteleev"  wrote in message
news:klzeekkilpzwmjmku...@dfeed.kimsufi.thecybershadow.net...

On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:

On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:

As far as I can tell, the only way to do this would be to capture every
chunk of text, then iterate to determine the offsets.


Not sure if this is what you were referring to, but you can do...


Even simpler: m.captures[1].ptr - s.ptr

(s is the string being matched)


That wouldn't work in @safe mode, would it?




There is nothing unsafe about the operation, so I'd actually expect it 
to work.


Re: Limitation with current regex API

2012-01-16 Thread Nick Sabalausky
"Timon Gehr"  wrote in message 
news:jf2p5d$2ria$1...@digitalmars.com...
> On 01/17/2012 04:03 AM, Nick Sabalausky wrote:
>> "Vladimir Panteleev"  wrote in message
>> news:klzeekkilpzwmjmku...@dfeed.kimsufi.thecybershadow.net...
>>> On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:
 On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:
> As far as I can tell, the only way to do this would be to capture 
> every
> chunk of text, then iterate to determine the offsets.

 Not sure if this is what you were referring to, but you can do...
>>>
>>> Even simpler: m.captures[1].ptr - s.ptr
>>>
>>> (s is the string being matched)
>>
>> That wouldn't work in @safe mode, would it?
>>
>>
>
> There is nothing unsafe about the operation, so I'd actually expect it to 
> work.

I thought pointer arithmetic was forbidden in @safe?




Re: Limitation with current regex API

2012-01-16 Thread Timon Gehr

On 01/17/2012 05:00 AM, Nick Sabalausky wrote:

"Timon Gehr"  wrote in message
news:jf2p5d$2ria$1...@digitalmars.com...

On 01/17/2012 04:03 AM, Nick Sabalausky wrote:

"Vladimir Panteleev"   wrote in message
news:klzeekkilpzwmjmku...@dfeed.kimsufi.thecybershadow.net...

On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:

On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:

As far as I can tell, the only way to do this would be to capture
every
chunk of text, then iterate to determine the offsets.


Not sure if this is what you were referring to, but you can do...


Even simpler: m.captures[1].ptr - s.ptr

(s is the string being matched)


That wouldn't work in @safe mode, would it?




There is nothing unsafe about the operation, so I'd actually expect it to
work.


I thought pointer arithmetic was forbidden in @safe?




I don't know exactly, since @safe is neither fully specified nor 
implemented. In my understanding, in @safe code, operations that may 
lead to memory corruption are forbidden. Pointer - pointer cannot, other 
kinds of pointer arithmetic may.


Re: Limitation with current regex API

2012-01-16 Thread Jerry
Mail Mantis  writes:

> 2012/1/17 Vladimir Panteleev :
>> On Tuesday, 17 January 2012 at 01:44:37 UTC, Vladimir Panteleev wrote:
>>>
>>> On Monday, 16 January 2012 at 19:28:42 UTC, Jerry wrote:

 As far as I can tell, the only way to do this would be to capture every
 chunk of text, then iterate to determine the offsets.
>>>
>>>
>>> Not sure if this is what you were referring to, but you can do...
>>
>>
>> Even simpler: m.captures[1].ptr - s.ptr
>>
>> (s is the string being matched)
>
> Correct me if I'm wrong, but wouldn't this be better:
> (m_captures[1].ptr - s.ptr) / s[0].sizeof;

I *think* pointer arithmetic handles that.  However this is much uglier
than:

m_captures[1].begin
m_captures[1].end

Jerry


Re: Limitation with current regex API

2012-01-17 Thread Jonathan M Davis
On Tuesday, January 17, 2012 05:04:39 Timon Gehr wrote:
> I don't know exactly, since @safe is neither fully specified nor
> implemented. In my understanding, in @safe code, operations that may
> lead to memory corruption are forbidden. Pointer - pointer cannot, other
> kinds of pointer arithmetic may.

Pointer arithmetic is definitely forbidden in @safe, but I'm not sure that that 
forbids pointer - pointer, since it's not dangerous. It's changing a pointer 
via arithmetic which is dangerous.

- Jonathan M Davis


Re: Limitation with current regex API

2012-01-17 Thread Don Clugston

On 17/01/12 10:40, Jonathan M Davis wrote:

On Tuesday, January 17, 2012 05:04:39 Timon Gehr wrote:

I don't know exactly, since @safe is neither fully specified nor
implemented. In my understanding, in @safe code, operations that may
lead to memory corruption are forbidden. Pointer - pointer cannot, other
kinds of pointer arithmetic may.


Pointer arithmetic is definitely forbidden in @safe, but I'm not sure that that
forbids pointer - pointer, since it's not dangerous. It's changing a pointer
via arithmetic which is dangerous.

- Jonathan M Davis


My guess is that safe D is supposed to enforce C pointer semantics.
At least, code which is both @safe and pure must do so.
The semantics are currently enforced in CTFE.

pointer - pointer is undefined behaviour in C, if the pointers come from 
different arrays. It's OK if they are from the same array, which is true 
in this case.




Re: Limitation with current regex API

2012-01-17 Thread Andrei Alexandrescu

On 1/17/12 6:59 AM, Don Clugston wrote:

On 17/01/12 10:40, Jonathan M Davis wrote:

On Tuesday, January 17, 2012 05:04:39 Timon Gehr wrote:

I don't know exactly, since @safe is neither fully specified nor
implemented. In my understanding, in @safe code, operations that may
lead to memory corruption are forbidden. Pointer - pointer cannot, other
kinds of pointer arithmetic may.


Pointer arithmetic is definitely forbidden in @safe, but I'm not sure
that that
forbids pointer - pointer, since it's not dangerous. It's changing a
pointer
via arithmetic which is dangerous.

- Jonathan M Davis


My guess is that safe D is supposed to enforce C pointer semantics.
At least, code which is both @safe and pure must do so.
The semantics are currently enforced in CTFE.

pointer - pointer is undefined behaviour in C, if the pointers come from
different arrays. It's OK if they are from the same array, which is true
in this case.



Yah, that C rule is there to allow segmented memory architectures to work 
properly. One possibility for D is to require a flat memory model, in 
which the difference between any two pointers can be taken.


Andrei


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread bearophile
Nick Sabalausky:

> The goal of this challenge is to define a global-level manifest constant 
> enum that holds a value that has been incremented from multiple modules:

And I hope people will try to solve it!
So if someone solves it, we can later patch that bug ;-)

Bye,
bearophile


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Denis Koroskin

On Tue, 19 Oct 2010 04:07:16 +0400, Nick Sabalausky  wrote:


I don't know if others have noticed this before, but I think I've found a
notable limitation in D's metaprogramming potential: There doesn't  
appear to

be a way to have mutable global state at compile-time.

Challange:

Create two..."things"...they can be functions, templates, variables,
mix-n-match, whatever. One of them increments a counter, and the other  
can

be used to retreive the value. But both of these must operate at
compile-time, and they must both be callable (directly or indirectly,
doesn't matter) from within the context of any module that imports them.

This is an example, except it operates at run-time, not compile-time:

---
// a.d
module a;
int value=0;
void inc()
{
value++;
}
int get()
{
return value;
}

void incFromA()
{
inc();
}

//b.d
module b;
import a;
void incFromB()
{
inc();
}

//main.d
import a;
import b;
import std.stdio;
void main()
{
inc();
incFromA();
incFromB();
writeln(get());
}
---

The goal of this challenge is to define a global-level manifest constant
enum that holds a value that has been incremented from multiple modules:

enum GOAL = ;

It can, of course, then be displayed via:
pragma(msg, std.conv.to!string(GOAL));

At this point, I'm not concerned about order-of-execution issues  
resulting
in unexpected or unpredictable values. As long as a value can be  
incremented

at compile-time from multiple modules and used to initialize an enum
manifest constant, that satisfies this challenge.




I hope that's not a limitation but rather a deliberate design decision.
CTFE needs to be pure, otherwise the order of evaluation would have an
impact.


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Jonathan M Davis
On Monday, October 18, 2010 17:07:16 Nick Sabalausky wrote:
> I don't know if others have noticed this before, but I think I've found a
> notable limitation in D's metaprogramming potential: There doesn't appear
> to be a way to have mutable global state at compile-time.
> 
> Challange:
> 
> Create two..."things"...they can be functions, templates, variables,
> mix-n-match, whatever. One of them increments a counter, and the other can
> be used to retreive the value. But both of these must operate at
> compile-time, and they must both be callable (directly or indirectly,
> doesn't matter) from within the context of any module that imports them.
> 
> This is an example, except it operates at run-time, not compile-time:
> 
> ---
> // a.d
> module a;
> int value=0;
> void inc()
> {
> value++;
> }
> int get()
> {
> return value;
> }
> 
> void incFromA()
> {
> inc();
> }
> 
> //b.d
> module b;
> import a;
> void incFromB()
> {
> inc();
> }
> 
> //main.d
> import a;
> import b;
> import std.stdio;
> void main()
> {
> inc();
> incFromA();
> incFromB();
> writeln(get());
> }
> ---
> 
> The goal of this challenge is to define a global-level manifest constant
> enum that holds a value that has been incremented from multiple modules:
> 
> enum GOAL = ;
> 
> It can, of course, then be displayed via:
> pragma(msg, std.conv.to!string(GOAL));
> 
> At this point, I'm not concerned about order-of-execution issues resulting
> in unexpected or unpredictable values. As long as a value can be
> incremented at compile-time from multiple modules and used to initialize
> an enum manifest constant, that satisfies this challenge.

One word: monads.

Now, to get monads to work, you're going to have to be fairly organized about 
it, but that would be the classic solution to not being able to have or alter 
global state and yet still be able to effectively have global state.

- Jonathan M Davis


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Nick Sabalausky
"Jonathan M Davis"  wrote in message 
news:mailman.715.1287449256.858.digitalmar...@puremagic.com...
>
> One word: monads.
>
> Now, to get monads to work, you're going to have to be fairly organized 
> about
> it, but that would be the classic solution to not being able to have or 
> alter
> global state and yet still be able to effectively have global state.
>

Oh yea, I've heard about them but don't have any real experience with them. 
Any FP experts know whether or not monads are known to be constructible from 
purely-FP building blocks? I always assumed "no", and that monads really 
just came from a deliberate compiler-provided hole in the whole purity 
thing, but maybe I'm wrong? (That would be quite interesting: constructing 
impurity from purity.)




Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Jonathan M Davis
On Monday, October 18, 2010 17:54:50 Nick Sabalausky wrote:
> "Jonathan M Davis"  wrote in message
> news:mailman.715.1287449256.858.digitalmar...@puremagic.com...
> 
> > One word: monads.
> > 
> > Now, to get monads to work, you're going to have to be fairly organized
> > about
> > it, but that would be the classic solution to not being able to have or
> > alter
> > global state and yet still be able to effectively have global state.
> 
> Oh yea, I've heard about them but don't have any real experience with them.
> Any FP experts know whether or not monads are known to be constructible
> from purely-FP building blocks? I always assumed "no", and that monads
> really just came from a deliberate compiler-provided hole in the whole
> purity thing, but maybe I'm wrong? (That would be quite interesting:
> constructing impurity from purity.)

You can think of a monad as an extra parameter which is passed to each function 
and holds the global state. It isn't a hole in purity at all. For instance, 
it's 
how Haskell manages to have I/O and yet be functionally pure. You don't need 
the 
compiler's help to do monads - it's just easier if you have it.

- Jonathan M Davis


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread bearophile
Jonathan M Davis:

> You can think of a monad as an extra parameter which is passed to each 
> function 
> and holds the global state. It isn't a hole in purity at all. For instance, 
> it's 
> how Haskell manages to have I/O and yet be functionally pure. You don't need 
> the 
> compiler's help to do monads - it's just easier if you have it.

Yet, sooner or later the compiler has to help you by giving you a hole to let the 
contents of those I/O monads pass through to/from the outside world, otherwise 
you will not see any program input/output unless you use something like a 
post-mortem debugger :-) So I think the Haskell compiler has to manage your I/O 
monads in a special way anyway. Purity can't be absolute.

Bye,
bearophile


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Michael Stone
bearophile Wrote:

> Jonathan M Davis:
> 
> > You can think of a monad as an extra parameter which is passed to each 
> > function 
> > and holds the global state. It isn't a hole in purity at all. For instance, 
> > it's 
> > how Haskell manages to have I/O and yet be functionally pure. You don't 
> > need the 
> > compiler's help to do monads - it's just easier if you have it.
> 
> Yet, sooner or later the compiler has to help you giving you a hole to let 
> the contents of those I/O monads pass though to/from the outside world, 
> otherwise you will not see any program input/output unless you use something 
> like a post-mortem debugger :-) So I think the Haskell compiler has to manage 
> your I/O monads in a special way anyway. Purity can't be absolute.

The stdin/stdout/stderr streams/pipes form a pure system with eager 
evaluation of the application, in my opinion. You can use monads to propagate 
the state through the application naturally in this way.

The arguments about the real world are silly in this context. Even the operating 
system, the compiler and the processes are all abstractions. The real world 
operates with resistances, capacitances, voltages etc.


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Jonathan M Davis
On Monday 18 October 2010 18:49:41 bearophile wrote:
> Jonathan M Davis:
> > You can think of a monad as an extra parameter which is passed to each
> > function and holds the global state. It isn't a hole in purity at all.
> > For instance, it's how Haskell manages to have I/O and yet be
> > functionally pure. You don't need the compiler's help to do monads -
> > it's just easier if you have it.
> 
> Yet, sooner or later the compiler has to help you giving you a hole to let
> the contents of those I/O monads pass though to/from the outside world,
> otherwise you will not see any program input/output unless you use
> something like a post-mortem debugger :-) So I think the Haskell compiler
> has to manage your I/O monads in a special way anyway. Purity can't be
> absolute.
> 
> Bye,
> bearophile

I've been dealing primarily with D in my free time these days instead of 
Haskell, so I don't remember all of the details, but the side effects are 
essentially removed by making the I/O part of the output at the end. If there 
_is_ a hole of some kind in the functional purity of the language, it's confined 
to one point and does not affect your program overall at all. It would only show 
up when the monad was finally consumed. And many other types of monads don't have 
anything to do with I/O and _definitely_ don't need any kind of hole in the 
functional purity of the system. Haskell wouldn't work if it weren't 
functionally pure (thanks to the fact that it's lazy). Monads were an ingenious 
solution to the problem of how to deal with stuff like I/O that needs side 
effects and/or global state without actually allowing either. Monads _do_ end up 
being rather viral, in that once you have one, pretty much everything in the 
call chain after that has to pass it along too, but they allow for functional 
purity while still allowing global state and side effects.

Regardless, you can implement basic monads in D just fine without any language 
support whatsoever. What you basically end up doing is passing around the global 
state to every function. It's returned as part of the result of the function. 
Think of passing the global state to every function and having those functions 
return a tuple of their actual return value and the global state. The caller 
takes out the actual return value to do whatever it does, passes on the 
global state variable to the next function that it calls, and finally returns it 
in a tuple along with its own return value.

Tuple!(Retval, GlobalState) func(GlobalState gs, otherparams...)
{
//
return tuple(retval, gs);
}
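
A hypothetical concrete instance of the sketch above (the names State and next
are illustrative only), showing the state threaded through pure functions:

import std.typecons : Tuple, tuple;

alias State = int;   // the "global" counter, passed around instead of stored globally

Tuple!(int, State) next(State s) pure
{
    // Return value plus the updated state, exactly as described above.
    return tuple(s + 1, s + 1);
}

void main()
{
    State s = 0;
    auto r1 = next(s); s = r1[1];
    auto r2 = next(s); s = r2[1];
    assert(r2[0] == 2 && s == 2);
}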

Monads can be a bit hard to wrap your mind around, but they're ingenious. 
Haskell couldn't really exist without them.

- Jonathan M Davis


Re: [challenge] Limitation in D's metaprogramming

2010-10-18 Thread Robert Jacques

On Mon, 18 Oct 2010 20:07:16 -0400, Nick Sabalausky  wrote:

I don't know if others have noticed this before, but I think I've found a
notable limitation in D's metaprogramming potential: There doesn't  
appear to

be a way to have mutable global state at compile-time.

Challange:

Create two..."things"...they can be functions, templates, variables,
mix-n-match, whatever. One of them increments a counter, and the other  
can

be used to retreive the value. But both of these must operate at
compile-time, and they must both be callable (directly or indirectly,
doesn't matter) from within the context of any module that imports them.

This is an example, except it operates at run-time, not compile-time:

---
// a.d
module a;
int value=0;
void inc()
{
value++;
}
int get()
{
return value;
}

void incFromA()
{
inc();
}

//b.d
module b;
import a;
void incFromB()
{
inc();
}

//main.d
import a;
import b;
import std.stdio;
void main()
{
inc();
incFromA();
incFromB();
writeln(get());
}
---

The goal of this challenge is to define a global-level manifest constant
enum that holds a value that has been incremented from multiple modules:

enum GOAL = ;

It can, of course, then be displayed via:
pragma(msg, std.conv.to!string(GOAL));

At this point, I'm not concerned about order-of-execution issues  
resulting
in unexpected or unpredictable values. As long as a value can be  
incremented

at compile-time from multiple modules and used to initialize an enum
manifest constant, that satisfies this challenge.



This isn't exactly what you're looking for, but you can abuse conditional  
compilation + D's symbol table to create a scoped counter:


import std.conv : to;
import std.stdio : writeln;

// Each mixin(incGlobal()) declares the next __Global_N symbol; getGlobal
// counts how many of these symbols already compile, giving the counter value.
string nthLabel(int n) {
    return "__Global_" ~ to!string(n);
}
string incGlobal() {
    return `mixin("alias int " ~ nthLabel(getGlobal!(__FILE__, __LINE__)) ~ ";");`;
}
template getGlobal(string file = __FILE__, int line = __LINE__, int N = 0)
{
    static if (!__traits(compiles, mixin(nthLabel(N))))
        enum getGlobal = N;
    else
        enum getGlobal = getGlobal!(file, line, N + 1);
}

enum Foo = getGlobal!(__FILE__, __LINE__);
mixin( incGlobal() );
enum Bar = getGlobal!(__FILE__, __LINE__);
mixin( incGlobal() );
enum FooBar = getGlobal!(__FILE__, __LINE__);

void main(string[] args) {
    writeln(Foo, '\t', Bar, '\t', FooBar);   // prints: 0    1    2
    return;
}


enum pointers or class references limitation

2017-08-30 Thread Dmitry Olshansky via Digitalmars-d
The subject (enum pointers or class references) is no longer 
supported by the compiler. In fact, it used to produce wrong code 
sometimes, and now it just plainly rejects it.


It's freaking inconvenient because I can't deploy new 
compile-time std.regex w/o it.


The example:

enum ctr = ctRegex!"blah";

after my changes must be:

static immutable ctr = ctRegex!"blah";

However, I devised a trick to get equivalent functionality, as 
follows:


template ctRegexImpl(alias pattern, string flags=[])
{
   static immutable staticRe = ...;
   struct Wrapper
   {
  @property auto getRe(){ return staticRe; }
  alias getRe this;
   }
   enum wrapper = Wrapper();
}

public enum ctRegex(alias pattern, alias flags=[]) = 
ctRegexImpl!(pattern, flags).wrapper;


Now ctRegex returns a neat forwarding struct that bypasses the 
strange limitation. The question remains though: why can't the 
compiler do it automatically?




Monads (Re: [challenge] Limitation in D's metaprogramming)

2010-10-19 Thread Graham Fawcett
Hi Jonathan,

On Mon, 18 Oct 2010 18:02:58 -0700, Jonathan M Davis wrote:

> On Monday, October 18, 2010 17:54:50 Nick Sabalausky wrote:
>> "Jonathan M Davis"  wrote in message
>> news:mailman.715.1287449256.858.digitalmar...@puremagic.com...
>> 
>> > One word: monads.
>> > 
>> > Now, to get monads to work, you're going to have to be fairly
>> > organized about it, but that would be the classic solution to not
>> > being able to have or alter global state and yet still be able to
>> > effectively have global state.
>> 
>> Oh yea, I've heard about them but don't have any real experience with
>> them. Any FP experts know whether or not monads are known to be
>> constructible from purely-FP building blocks? I always assumed "no",
>> and that monads really just came from a deliberate compiler-provided
>> hole in the whole purity thing, but maybe I'm wrong? (That would be
>> quite interesting: constructing impurity from purity.)
> 
> You can think of a monad as an extra parameter which is passed to each
> function and holds the global state. It isn't a hole in purity at all.
> For instance, it's how Haskell manages to have I/O and yet be
> functionally pure. You don't need the compiler's help to do monads -
> it's just easier if you have it.

I don't see how monads will help here. Monads are useful for threading
state through pure computations, and are an enabler for I/O in Haskell
by threading the "real world" through a computation as a series of
computational states. Something has to initiate the thread, and tie it
up at the end: there's an implicit scope here, and I think here's
where the hard questions start to crop up in the context of CTFE.

Without permitting I/O, you could use a state-carrying monad to
implement a kind of dynamically-scoped namespace during CTFE; the
dynamically-scoped, mutable values would be global from the internal
perspective of the CTFE computations, but they would not leak out of
the monad into "truly global" state; and the overall effect would be
pure.

So, you can achieve an implicit, dynamic scope using monads. But if
all CTFE expansions are done within a single such scope, it would be
indistinguishable from running all CTFE expansions with access to
globally shared state (though not I/O). So then, why not just use
global state? Therefore, the monads could only practically be used as
an isolation technique, to isolate dynamically scoped values during
different CTFE expansions -- for example, at a compilation-unit level
-- without sacrificing purity. But then, it seems you would lose the
very thing you need to implement the challenge on the table: two
modules would want access to the same counter during expansion, not
two isolated copies of the counter. So again, you're back at global
state, or mimicking it using a state-carrying monad, with no
significant benefits accomplished by using a monad.

Just my two cents,
Graham


Re: enum pointers or class references limitation

2017-08-30 Thread Timon Gehr via Digitalmars-d

On 30.08.2017 11:36, Dmitry Olshansky wrote:
The subj is not (any longer) supported by compiler. In fact it used to 
produce wrong code sometimes and now it just plainly rejects it.


It's freaking inconvenient because I can't deploy new compile-time 
std.regex w/o it.


The example:

enum ctr = ctRegex!"blah";

after my changes must be:

static immutable ctr = ctRegex!"blah";

Howeever I divised a trick to get equivalent functionality as follows:

template ctRegexImpl(alias pattern, string flags=[])
{
static immutable staticRe = ...;
struct Wrapper
{
   @property auto getRe(){ return staticRe; }
   alias getRe this;
}
enum wrapper = Wrapper();
}

public enum ctRegex(alias pattern, alias flags=[]) = 
ctRegexImpl!(pattern, flags).wrapper;


Now ctRegex returns a neat forwarding struct that bypasses the strange 
limitation. The question remains though - why can't compiler do it 
automatically?




I think the underlying reason why it does not work is that dynamic array 
manifest constants are messed up. I.e. class reference `enum`s are 
disallowed in order to avoid having to make a decision for either 
inconsistent or insane semantics.


Re: enum pointers or class references limitation

2017-08-31 Thread Dmitry Olshansky via Digitalmars-d

On Wednesday, 30 August 2017 at 12:28:10 UTC, Timon Gehr wrote:

On 30.08.2017 11:36, Dmitry Olshansky wrote:
The subj is not (any longer) supported by compiler. In fact it 
used to produce wrong code sometimes and now it just plainly 
rejects it.



[..]


I think the underlying reason why it does not work is that 
dynamic array manifest constants are messed up. I.e. class 
reference `enum`s are disallowed in order to avoid having to 
make a decision for either inconsistent or insane semantics.


Well, from my point of view, enum just means "evaluate this expression 
at the usage site". So any array or class instance will be created 
anew at the point of usage.


What are the problems with enums and dynamic arrays?


Re: enum pointers or class references limitation

2017-08-31 Thread Nicholas Wilson via Digitalmars-d
On Thursday, 31 August 2017 at 08:40:03 UTC, Dmitry Olshansky 
wrote:

On Wednesday, 30 August 2017 at 12:28:10 UTC, Timon Gehr wrote:

On 30.08.2017 11:36, Dmitry Olshansky wrote:
The subj is not (any longer) supported by compiler. In fact 
it used to produce wrong code sometimes and now it just 
plainly rejects it.



[..]


I think the underlying reason why it does not work is that 
dynamic array manifest constants are messed up. I.e. class 
reference `enum`s are disallowed in order to avoid having to 
make a decision for either inconsistent or insane semantics.


Well from my point of view enum is just evaluate this 
expression at the usage site. So any array or class instance 
will be created anew at the point of usage.


What are the problems with enums and dynamic arrays?


I think Timon is referring to:

enum int[] foo = [1,2,3];

auto bar = foo;
auto baz = foo;

assert(!(bar is baz)); // Passes


Re: enum pointers or class references limitation

2017-08-31 Thread Ali Çehreli via Digitalmars-d

On 08/31/2017 01:52 AM, Nicholas Wilson wrote:


I think Timon is referring to:

enum int[] foo = [1,2,3];

auto bar = foo;
auto baz = foo;

assert(!(bar is baz)); // Passes


Even better:

enum int[] foo = [1,2,3];
assert(!(foo is foo)); // Passes

Ali



Re: enum pointers or class references limitation

2017-09-01 Thread Dmitry Olshansky via Digitalmars-d

On Thursday, 31 August 2017 at 14:28:57 UTC, Ali Çehreli wrote:

On 08/31/2017 01:52 AM, Nicholas Wilson wrote:


I think Timon is referring to:

enum int[] foo = [1,2,3];

auto bar = foo;
auto baz = foo;

assert(!(bar is baz)); // Passes


Even better:

enum int[] foo = [1,2,3];
assert(!(foo is foo)); // Passes



I guess

assert(!([1,2,3] is [1,2,3]));

Which is exactly what enum expands to and totally expected. Where 
is the surprise?






Re: enum pointers or class references limitation

2017-09-01 Thread Ali Çehreli via Digitalmars-d

On 09/01/2017 11:48 AM, Dmitry Olshansky wrote:
> On Thursday, 31 August 2017 at 14:28:57 UTC, Ali Çehreli wrote:
>> On 08/31/2017 01:52 AM, Nicholas Wilson wrote:
>>
>>> I think Timon is referring to:
>>>
>>> enum int[] foo = [1,2,3];
>>>
>>> auto bar = foo;
>>> auto baz = foo;
>>>
>>> assert(!(bar is baz)); // Passes
>>
>> Even better:
>>
>> enum int[] foo = [1,2,3];
>> assert(!(foo is foo)); // Passes
>>
>
> I guess
>
> assert(!([1,2,3] is [1,2,3]));
>
> Which is exactly what enum expands to and totally expected. Where is the
> surprise?

In the surprising case foo is a symbol, seemingly of a variable. Failing 
the 'is' test is surprising in that case. I've just remembered that the 
actual surprising case is the following explicit check:


assert(!(foo.ptr is foo.ptr)); // Passes

I find it surprising because it looks like an entity does not have a 
well-behaving .ptr.


(Aside: I think your code might be surprising to at least newcomers as 
well.)


Ali



Re: enum pointers or class references limitation

2017-09-01 Thread Q. Schroll via Digitalmars-d

On Friday, 1 September 2017 at 21:08:20 UTC, Ali Çehreli wrote:

[snip]
> assert(!([1,2,3] is [1,2,3]));
>
> Which is exactly what enum expands to and totally expected.
Where is the
> surprise?


This is not a surprise. Array literals are not identical.

In the surprising case foo is a symbol, seemingly of a 
variable. Failing the 'is' test is surprising in that case. 
I've just remembered that the actual surprising case is the 
following explicit check:


assert(!(foo.ptr is foo.ptr)); // Passes

I find it surprising because it looks like an entity does not 
have a well-behaving .ptr.


(Aside: I think your code might be surprising to at least 
newcomers as well.)


That's a good reason to unrecommend/disallow enums with 
indirections. The compiler should recommend/suggest using static 
immutable instead, as it does not have such oddities. The only 
advantage of enum is being guaranteed to be known at compile time, 
and that enums can be templatized (which can also be done for 
static immutable via an eponymous template).
I'd vote for a warning/error when the type of an enum has 
indirections, together with a pragma to switch the warning off for 
the rare case where you know exactly what you are doing.
Just as Scott Meyers said: make it easy to use correctly and hard 
to use incorrectly. Today it's easy to use incorrectly.
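
A small sketch of that recommendation (illustrative names), contrasting the two
on the identity checks discussed in this thread:

enum int[] e = [1, 2, 3];
static immutable int[] s = [1, 2, 3];

void main()
{
    // The enum is pasted as a fresh array literal at each use site...
    assert(!(e is e));
    // ...while the static immutable refers to one statically allocated array.
    assert(s is s);
    assert(s.ptr is s.ptr);
}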




Re: enum pointers or class references limitation

2017-09-01 Thread Timon Gehr via Digitalmars-d

On 01.09.2017 20:48, Dmitry Olshansky wrote:

On Thursday, 31 August 2017 at 14:28:57 UTC, Ali Çehreli wrote:

On 08/31/2017 01:52 AM, Nicholas Wilson wrote:


I think Timon is referring to:

enum int[] foo = [1,2,3];

auto bar = foo;
auto baz = foo;

assert(!(bar is baz)); // Passes


Even better:

enum int[] foo = [1,2,3];
assert(!(foo is foo)); // Passes



I guess

assert(!([1,2,3] is [1,2,3]));

Which is exactly what enum expands to and totally expected.


I know what it does and do expect it. I still consider it problematic.
It's conceivable that someone just wants an enum slice of statically 
allocated array data (or an enum struct instance that has a field that is 
a statically allocated array, etc.). The enum/array literal design makes 
this impossible.



Where is the surprise?
My main point is that I don't see why a ctRegex should runtime-allocate 
a class instance at each usage site. (And if it does not, the difference 
from array literals is not so easy to justify.)




But, there are a number of perhaps surprising behaviors of array 
literals at compile time that come to mind, some, but not all closely 
related to the issue at hand:


struct S{ int[] x; }
static assert(S([1,2,3]) is S([1,2,3])); // ok
auto x = S([1,2,3]), y = S([1,2,3]);

struct C{ int[] x; this(int[] x){ this.x=x; } }
static assert(new C([1,2,3]).x is new C([1,2,3]).x); // ok
static assert((){
auto c=new C([1,2,3]);
auto d=new C([1,2,3]);
assert(c.x is d.x);
c.x[0]=2;
assert(c.x !is d.x);
return true;
}()); // ok

enum s = S([1,2,3]);
immutable t = S([1,2,3]);
enum u = t;
void main()@nogc{
assert(x is y); // fails
// auto v = s; // error: gc allocation
auto w1 = t; // ok
// auto w2 = u; // error (!)
}

Basically, I think implicitly making expressions confused about their 
aliasing is just not a good idea. You can see that your: 'enum is just 
evaluate this expression at the usage site' is not the full story as 
otherwise w1 and w2 would behave the same.


Re: enum pointers or class references limitation

2017-09-01 Thread Q. Schroll via Digitalmars-d

On Friday, 1 September 2017 at 23:13:50 UTC, Q. Schroll wrote:

[..]
Just as Scott Meyers said: make it easy to use correctly and 
hard to use incorrectly. Today it's easy to use incorrectly.


While
  enum foo = [1,2,3];
  assert(foo is foo);
fails,
  enum bla = "123";
  assert(bla is bla);
passes.

Enhancement request submitted: 
https://issues.dlang.org/show_bug.cgi?id=17799


Unfortunately, I found out afterwards that the second one does not have 
anything to do with mutability. Making foo immutable(int)[] does not change 
anything. It only works for const(char)[], immutable(char)[], and 
probably their w/dchar friends. That's odd.


Re: enum pointers or class references limitation

2017-09-01 Thread Timon Gehr via Digitalmars-d

On 02.09.2017 01:13, Q. Schroll wrote:

On Friday, 1 September 2017 at 21:08:20 UTC, Ali Çehreli wrote:

[snip]
> assert(!([1,2,3] is [1,2,3]));
>
> Which is exactly what enum expands to and totally expected.
Where is the
> surprise?


This is not a surprise. Array literals are not identical.

In the surprising case foo is a symbol, seemingly of a variable. 
Failing the 'is' test is surprising in that case. I've just remembered 
that the actual surprising case is the following explicit check:


assert(!(foo.ptr is foo.ptr)); // Passes

I find it surprising because it looks like an entity does not have a 
well-behaving .ptr.


(Aside: I think your code might be surprising to at least newcomers as 
well.)


That's a good reason to unrecommend/disallow enums with indirections. 


It can be useful to have enums with indirections available for 
metaprogramming, for intermediate data typed as mutable that is not 
necessarily required while running the program, yet can be read during 
compilation.


The compiler should recommend/suggest using static immutable instead as 
it does not have such oddities.


It types your data as immutable.

The only advantage of enum is being 
guaranteed to be known at compile-time


That is not an advantage of enum, as the same is true for static immutable.

and they can be templatized (can 
be also done for static immutable via eponymous template).
I'd vote for a warning/error when the type of an enum has indirections 
together with a pragma to switch the warning off for the rare case where 
you know exactly what you are doing.
Just as Scott Meyers said: make it easy to use correctly and hard to use 
incorrectly. Today it's easy to use incorrectly.




Note that enums with indirections are already disallowed, except for 
dynamic arrays.
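
A hedged sketch of the "static immutable via eponymous template" 
alternative mentioned above (the names are mine): the data stays 
readable at compile time, yet has a single, stable address at run time.

template lookup(T)
{
    static immutable T[] lookup = [T.init, T.init];
}

void main()
{
    static assert(lookup!int.length == 2); // readable at compile time
    assert(lookup!int is lookup!int);      // one array, one address
}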


Re: enum pointers or class references limitation

2017-09-01 Thread Timon Gehr via Digitalmars-d

On 02.09.2017 01:37, Q. Schroll wrote:

On Friday, 1 September 2017 at 23:13:50 UTC, Q. Schroll wrote:

[..]
Just as Scott Meyers said: make it easy to use correctly and hard to 
use incorrectly. Today it's easy to use incorrectly.


While
   enum foo = [1,2,3];
   assert(foo is foo);
fails,
   enum bla = "123";
   assert(bla is bla);
passes.

Enhancement request submitted: 
https://issues.dlang.org/show_bug.cgi?id=17799


Unfortunately, I afterwards found out that the second one does not have to 
do with mutability. Making foo immutable(int)[] does not change anything. It 
only works for const(char)[], immutable(char)[], and probably their w/dchar 
friends. That's odd.


This is called string pooling. The following passes too:

void main(){
assert("123" is "123");
}

D (at least sometimes) allows the identities of different immutable 
locations to become conflated.


Re: Monads (Re: [challenge] Limitation in D's metaprogramming)

2010-10-19 Thread Jonathan M Davis
On Tuesday 19 October 2010 12:13:19 Graham Fawcett wrote:
> Hi Jonathan,
> 
> On Mon, 18 Oct 2010 18:02:58 -0700, Jonathan M Davis wrote:
> > On Monday, October 18, 2010 17:54:50 Nick Sabalausky wrote:
> >> "Jonathan M Davis"  wrote in message
> >> news:mailman.715.1287449256.858.digitalmar...@puremagic.com...
> >> 
> >> > One word: monads.
> >> > 
> >> > Now, to get monads to work, you're going to have to be fairly
> >> > organized about it, but that would be the classic solution to not
> >> > being able to have or alter global state and yet still be able to
> >> > effectively have global state.
> >> 
> >> Oh yea, I've heard about them but don't have any real experience with
> >> them. Any FP experts know whether or not monads are known to be
> >> constructible from purely-FP building blocks? I always assumed "no",
> >> and that monads really just came from a deliberate compiler-provided
> >> hole in the whole purity thing, but maybe I'm wrong? (That would be
> >> quite interesting: constructing impurity from purity.)
> > 
> > You can think of a monad as an extra parameter which is passed to each
> > function and holds the global state. It isn't a hole in purity at all.
> > For instance, it's how Haskell manages to have I/O and yet be
> > functionally pure. You don't need the compiler's help to do monads -
> > it's just easier if you have it.
> 
> I don't see how monads will help here. Monads are useful for threading
> state through pure computations, and are an enabler for I/O in Haskell
> by threading the "real world" through a computation as a series of
> computational states. Something has to initiate the thread, and tie it
> up at the end: there's an implicit scope here, and I think here's
> where the hard questions start to crop up in the context of CTFE.
> 
> Without permitting I/O, you could use a state-carrying monad to
> implement a kind of dynamically-scoped namespace during CTFE; the
> dynamically-scoped, mutable values would be global from the internal
> perspective of the CTFE computations, but they would not leak out of
> the monad into "truly global" state; and the overall effect would be
> pure.
> 
> So, you can achieve an implicit, dynamic scope using monads. But if
> all CTFE expansions are done within a single such scope, it would be
> indistinguishable from running all CTFE expansions with access to
> globally shared state (though not I/O). So then, why not just use
> global state? Therefore, the monads could only practically be used as
> an isolation technique, to isolate dynamically scoped values during
> different CTFE expansions -- for example, at a compilation-unit level
> -- without sacrificing purity. But then, it seems you would lose the
> very thing you need to implement the challenge on the table: two
> modules would want access to the same counter during expansion, not
> two isolated copies of the counter. So again, you're back at global
> state, or mimicking it using a state-carrying monad, with no
> significant benefits accomplished by using a monad.
> 
> Just my two cents,
> Graham

I haven't really looked in detail at how you'd solve the problem in question, 
but I believe that you'd pretty much have to chain all of the initializations 
so 
that there is a clear one which is initialized first and that each successive 
initialization relies on the previous one, with the monad being passed along 
through each. I believe that that would work between modules as well, though 
I'm 
not 100% certain. Regardless, the solution is rather messy because you have to 
have the monad specifically passed along everywhere and have a fairly explicit 
ordering to the instantiation (though you might be able to decouple it a bit if 
all you care about was the final count and some of the initializations depended 
on multiple other instantiations and combined their monads), so it's not a 
particularly pretty solution even if it works. However, if you're trying to 
have 
state in an effectively stateless environment, that's pretty much the only way 
that I know to go about doing it. What you really need is some sort of global 
state for CTFE, but static initialization is specifically designed so that it 
doesn't have global state (if nothing else to avoid order issues screwing with 
initializations), so I don't really see that happening any time soon, if ever.

- Jonathan M Davis
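
As a hedged illustration of the "thread the state along explicitly" 
approach described above (entirely an illustrative sketch with made-up 
names; it shows the chaining, not cross-module global CTFE state):

struct Counter { int value; }
struct Tagged  { int id; Counter next; }

// CTFE-evaluable step: consumes the current Counter and yields an id
// plus the successor Counter that the next step must be chained onto.
Tagged nextId(Counter c)
{
    return Tagged(c.value, Counter(c.value + 1));
}

enum a = nextId(Counter(0)); // first consumer gets id 0
enum b = nextId(a.next);     // the next one must explicitly chain off a

static assert(a.id == 0 && b.id == 1);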


std.variant holding bigger structs and std.concurrency message limitation

2013-04-27 Thread Tavi Cacina
Is it a bug that a Variant may be initialized with a struct 
bigger than 32 bytes? Even if this does work, it is not 
consistent, because you cannot assign such an 'inflated' 
variant to another one: you get an assertion failure. This also 
limits the maximum size of a std.concurrency message (right now 
it is not documented that such a restriction exists).


---
import std.variant;

struct S
{
  int[9] s;
}

void main()
{
  Variant v1, v2; // maximum size 32 bytes
  v1 = S(); // works, even if sizeof(S) > 32
  v2 = v1; // AssertError: target must be non-null
}
---


Re: std.variant holding bigger structs and std.concurrency message limitation

2013-04-27 Thread Idan Arye

On Saturday, 27 April 2013 at 11:37:38 UTC, Tavi Cacina wrote:
Is it a bug that a Variant may be initialized with a struct 
bigger than 32 bytes? Even if this does work, it is not 
consistent, because you cannot assign such an 'inflated' variant 
to another one: you get an assertion failure. This also limits 
the maximum size of a std.concurrency message (right now it is 
not documented that such a restriction exists).


---
import std.variant;

struct S
{
  int[9] s;
}

void main()
{
  Variant v1, v2; // maximum size 32 bytes
  v1 = S(); // works, even if sizeof(S) > 32
  v2 = v1; // AssertError: target must be non-null
}
---


There used to be a maximum size check for placing things in 
variants, but it was removed back in 2009: 
https://github.com/D-Programming-Language/phobos/commit/0c142994d9b5cb9f379eca28f3a625c749370e4a#L20L189


The way it works now, is that if the size is too big they use a 
reference instead: 
https://github.com/D-Programming-Language/phobos/blob/master/std/variant.d#L544#L555
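
A hedged sketch of one possible workaround (my own code, and whether it 
sidesteps the assertion in the Phobos of that era is an assumption on 
my part): pick the inline storage size explicitly with VariantN instead 
of relying on Variant's default, so the struct is stored in place and 
copying should take the normal path.

import std.variant;

struct S
{
    int[9] s;   // 36 bytes, larger than Variant's default slot
}

void main()
{
    alias BigVariant = VariantN!(S.sizeof); // slot sized to fit S
    BigVariant v1, v2;
    v1 = S();
    v2 = v1;    // stored inline, so the copy takes the ordinary route
    assert(v2.get!S == S());
}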


Re: std.variant holding bigger structs and std.concurrency message limitation

2013-04-29 Thread David Eagen

On Saturday, 27 April 2013 at 17:42:54 UTC, Idan Arye wrote:
The way it works now, is that if the size is too big they use a 
reference instead: 
https://github.com/D-Programming-Language/phobos/blob/master/std/variant.d#L544#L555


So is the bug in std.concurrency and the way it uses Variant, or 
is the bug in Variant?




Re: std.variant holding bigger structs and std.concurrency message limitation

2013-05-09 Thread Anonimous

On Tuesday, 30 April 2013 at 00:04:04 UTC, David Eagen wrote:

On Saturday, 27 April 2013 at 17:42:54 UTC, Idan Arye wrote:
The way it works now, is that if the size is too big they use 
a reference instead: 
https://github.com/D-Programming-Language/phobos/blob/master/std/variant.d#L544#L555


So is the bug in std.concurrency and the way it uses Variant, 
or is the bug in Variant?


Ping.


Re: std.variant holding bigger structs and std.concurrency message limitation

2013-05-09 Thread David Nadlinger

On Tuesday, 30 April 2013 at 00:04:04 UTC, David Eagen wrote:

On Saturday, 27 April 2013 at 17:42:54 UTC, Idan Arye wrote:
The way it works now, is that if the size is too big they use 
a reference instead: 
https://github.com/D-Programming-Language/phobos/blob/master/std/variant.d#L544#L555


So is the bug in std.concurrency and the way it uses Variant, 
or is the bug in Variant?


It's a variant bug, please make sure it is on Bugzilla.

David


Re: std.variant holding bigger structs and std.concurrency message limitation

2013-05-09 Thread Anonimous

On Thursday, 9 May 2013 at 12:36:36 UTC, David Nadlinger wrote:

On Tuesday, 30 April 2013 at 00:04:04 UTC, David Eagen wrote:

On Saturday, 27 April 2013 at 17:42:54 UTC, Idan Arye wrote:
The way it works now, is that if the size is too big they use 
a reference instead: 
https://github.com/D-Programming-Language/phobos/blob/master/std/variant.d#L544#L555


So is the bug in std.concurrency and the way it uses Variant, 
or is the bug in Variant?


It's a variant bug, please make sure it is on Bugzilla.

David


Yes, it is reported by Tavi Cacina:
http://d.puremagic.com/issues/show_bug.cgi?id=10017

Sorry, I meant that it's an important bug, but maybe everybody was 
busy with dconf and just didn't do anything with it.

There is already a pull request, could you tell me about its status?


Re: std.variant holding bigger structs and std.concurrency message limitation

2013-05-09 Thread David Nadlinger

On Thursday, 9 May 2013 at 12:42:25 UTC, Anonimous wrote:
Sorry, I meant that it's an important bug, but maybe everybody was 
busy with dconf and just didn't do anything with it.


Yep, things are only slowly getting back to normality as jet lag 
wears off and the recordings are taken care of… ;)



There is already a pull request, could you tell me about its status?


See my comment there.

David


post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread deadalnix via Digitalmars-d

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread Walter Bright via Digitalmars-d

On 1/17/2015 12:33 AM, deadalnix wrote:

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?


There was no known reason to.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread zeljkog via Digitalmars-d

On 17.01.15 09:33, deadalnix wrote:

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?


I think it improves readability. A little :)
Often users don't care to read the if-part.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread ketmar via Digitalmars-d
On Sat, 17 Jan 2015 08:33:49 +
deadalnix via Digitalmars-d  wrote:

> This is accepted :
> auto fun(T)(T T) inout if(...) { ... }
> 
> This is not :
> auto fun(T)(T T) if(...) inout { ... }
> 
> Is there a reason ?
the first is easier to parse, and i think it looks better. the second is
just unnecessary code in the parser and will not be used in the wild to the
extent that justifies the increased complexity.




Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread deadalnix via Digitalmars-d
On Saturday, 17 January 2015 at 16:02:16 UTC, ketmar via 
Digitalmars-d wrote:

On Sat, 17 Jan 2015 08:33:49 +
deadalnix via Digitalmars-d  wrote:


This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?
the first is easier to parse, and i think it looks better. the 
second is just unnecessary code in the parser and will not be used 
in the wild to the extent that justifies the increased complexity.


You obviously have data to back your point, both in terms of 
readability, use in the wild, and complexity added in the parser. 
Because if you don't, you have no point whatsoever and should 
probably not be posting.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread deadalnix via Digitalmars-d

On Saturday, 17 January 2015 at 10:05:29 UTC, Walter Bright wrote:

On 1/17/2015 12:33 AM, deadalnix wrote:

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?


There was no known reason to.


Is that possible to make it work then ? Should I open a bug ?


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread ketmar via Digitalmars-d
On Sat, 17 Jan 2015 16:55:31 +
deadalnix via Digitalmars-d  wrote:

> On Saturday, 17 January 2015 at 16:02:16 UTC, ketmar via 
> Digitalmars-d wrote:
> > On Sat, 17 Jan 2015 08:33:49 +
> > deadalnix via Digitalmars-d  wrote:
> >
> >> This is accepted :
> >> auto fun(T)(T T) inout if(...) { ... }
> >> 
> >> This is not :
> >> auto fun(T)(T T) if(...) inout { ... }
> >> 
> >> Is there a reason ?
> > the first is easier to parse, and i it's looking better. the 
> > second is
> > just unnecessary code in parser and will not be used in the 
> > wild to the
> > extent that justifies increased complexity.
> 
> You obviously have data to back your point, both in term of 
> readability, use in the wild and complexity added in the parser. 
> Because if you don't, you have no point whatsoever and should 
> probably not be posting.
sure i have. i made a lot of patches to the parser, so i know how it
is written. to make this work the parser needs to be changed no less than
to accept '@' before `pure`, `nothrow` and so on, and that change was
rejected by the devteam due to the added complexity of supporting it.

as for "will not be used" -- you can use google to count requests for
this feature. the numbers will show you how much people miss it.

i have no habit of writing tales from the faery world, you know.




Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread deadalnix via Digitalmars-d
On Saturday, 17 January 2015 at 17:08:12 UTC, ketmar via 
Digitalmars-d wrote:
sure i have. i made a lot of patches to the parser, so i know how it 
is written. to make this work the parser needs to be changed no less 
than to accept '@' before `pure`, `nothrow` and so on, and that change 
was rejected by the devteam due to the added complexity of supporting it.



I'm sorry, but this is not a good reason. It would be fairly easy 
to add this to SDC's parser, so now what? It tells nothing about 
the feature and everything about DMD's parser.


as for "will not be used" -- you can use google to count requests 
for this feature. the numbers will show you how much people miss it.

i have no habit of writing tales from the faery world, you know.


Absence of information is not information.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread ketmar via Digitalmars-d
On Sat, 17 Jan 2015 17:34:21 +
deadalnix via Digitalmars-d  wrote:

> On Saturday, 17 January 2015 at 17:08:12 UTC, ketmar via 
> Digitalmars-d wrote:
> > sure i have. i made alot of patches to the parser, so i know 
> > how it
> > is written. to make this work parser need to be changed not 
> > less than
> > to accept '@' before `pure`, `nothrow` and so on, and this 
> > change was
> > rejected due to added complexity for supporting it by devteam.
> >
> I'm sorry but this is not a good reason. It would be failry easy 
> to add this in SDC's parser, so now what ? it tells nothing about 
> the feature and everything about DMD's parser.
this was one of the good reasons to reject `@pure` syntax, so i can't
see why it's not a good reason to reject OP's syntax.

> > as for "will not be used" -- you can use google to count 
> > requests for
> > this feature. the numbers will show you how much people miss it.
> >
> > i have no habit of writing tales from the faery world, you know.
> 
> Absence of information is not information.
i don't think that you are right here. but i'm not in the right mood to
argue.




Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread Walter Bright via Digitalmars-d

On 1/17/2015 8:56 AM, deadalnix wrote:

On Saturday, 17 January 2015 at 10:05:29 UTC, Walter Bright wrote:

On 1/17/2015 12:33 AM, deadalnix wrote:

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?


There was no known reason to.


Is that possible to make it work then ? Should I open a bug ?


Sure, but you'll need a rationale that is better than "why not" :-)


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread bearophile via Digitalmars-d

Walter Bright:

Sure, but you'll need a rationale that is better than "why not" 
:-)


Often in a language it's a good idea to have only one way to do 
something. To have two places to put those attributes generates 
the question: where do you want to put them? And it's a question 
that wastes time. In Python you don't have "wars" regarding where 
to put the { } because there is just one way to format code and 
indentations... and it's a good way.


Bye,
bearophile


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread deadalnix via Digitalmars-d

On Saturday, 17 January 2015 at 21:15:53 UTC, Walter Bright wrote:

On 1/17/2015 8:56 AM, deadalnix wrote:
On Saturday, 17 January 2015 at 10:05:29 UTC, Walter Bright 
wrote:

On 1/17/2015 12:33 AM, deadalnix wrote:

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?


There was no known reason to.


Is that possible to make it work then ? Should I open a bug ?


Sure, but you'll need a rationale that is better than "why not" 
:-)


Because I can never remember which one it is and run into the 
wrong case 50% of the time. I'd assume that I'm not the only one, 
but, as I have done for ages, did not consider this an issue 
big enough to complain about. This is the kind of thing that 
drains your productivity minute by minute.


Kind of like

class C(T) : B if(...) {} vs class C(T) if(...) : B {}

that Brian mentioned in his DConf talk. Only one used to be 
accepted, but now both are valid. It looks to me like another 
instance of the same problem.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread Walter Bright via Digitalmars-d

On 1/17/2015 4:06 PM, deadalnix wrote:

Because I can never remember which one it is and run into the wrong case 50% of
the time. I'd assume that I'm not the only one, but, as I have done for ages, did
not consider this an issue big enough to complain about. This is the kind of thing
that drains your productivity minute by minute.

Kind of like

class C(T) : B if(...) {} vs class C(T) if(...) : B {}

that Brian mentioned in his DConf talk. Only one used to be accepted, but now
both are valid. It looks to me like another instance of the same problem.


On the other hand, I think having only one way to do it is better for 
consistency and stylistic reasons.


For example, I never liked that:

int short unsigned

is valid in C. I don't believe it adds value.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread deadalnix via Digitalmars-d

On Sunday, 18 January 2015 at 00:19:47 UTC, Walter Bright wrote:
On the other hand, I think having only one way to do it is 
better for consistency and stylistic reasons.


For example, I never liked that:

int short unsigned

is valid in C. I don't believe it adds value.


You are basically telling me that consistency matters. If so, we 
either roll back the class case, or go forward on that one.


Considering how many times I ran into both of them, we are better 
off without.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread Walter Bright via Digitalmars-d

On 1/17/2015 5:33 PM, deadalnix wrote:

On Sunday, 18 January 2015 at 00:19:47 UTC, Walter Bright wrote:

On the other hand, I think having only one way to do it is better for
consistency and stylistic reasons.

For example, I never liked that:

int short unsigned

is valid in C. I don't believe it adds value.


You are basically telling me that consistency matters. If so, we either roll back
the class case, or go forward on that one.


I don't really know where the class change came from :-(



Considering how many times I ran into both of them, we are better off without.




Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-17 Thread Brian Schott via Digitalmars-d

On Sunday, 18 January 2015 at 07:47:04 UTC, Walter Bright wrote:

On 1/17/2015 5:33 PM, deadalnix wrote:
You are basically telling me that consistency matters. If so, 
we either roll back the class case, or go forward on that one.


I don't really know where the class change came from :-(


I could write a dfix rule to clean up class declarations. I 
prefer consistency because it makes creating tools for D easier 
and because I don't have to explain to people why there's more 
than one right way to do exactly the same thing.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Daniel Murphy via Digitalmars-d
"Walter Bright"  wrote in message news:m9fodo$18lu$1...@digitalmars.com... 


I don't really know where the class change came from :-(


https://github.com/D-Programming-Language/dmd/pull/1227


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Walter Bright via Digitalmars-d

On 1/17/2015 11:52 PM, Brian Schott wrote:

On Sunday, 18 January 2015 at 07:47:04 UTC, Walter Bright wrote:

On 1/17/2015 5:33 PM, deadalnix wrote:

You are basically telling me that consistency matters. If so, we either roll back
the class case, or go forward on that one.


I don't really know where the class change came from :-(


I could write a dfix rule to clean up class declarations. I prefer consistency
because it makes creating tools for D easier and because I don't have to explain
to people why there's more than one right way to do exactly the same thing.


Sounds like a good idea. If I wasn't clear, I think that class change was a 
mistake.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Brian Schott via Digitalmars-d

On Sunday, 18 January 2015 at 08:40:19 UTC, Walter Bright wrote:
Sounds like a good idea. If I wasn't clear, I think that class 
change was a mistake.


Now that I see from that pull request that the ugly syntax was 
the original, I'm not so sure. The dfix feature I'm planning is 
to convert


class A if (B) : C

to

class A : C if (B)


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Jonathan M Davis via Digitalmars-d
On Saturday, January 17, 2015 08:33:49 deadalnix via Digitalmars-d wrote:
> This is accepted :
> auto fun(T)(T T) inout if(...) { ... }
>
> This is not :
> auto fun(T)(T T) if(...) inout { ... }
>
> Is there a reason ?

Well, inout is part of the signature. It's debatable as to whether the
template constraint is, particularly when you consider that what you're
really dealing with is

template fun(T)
if(...)
{
auto fun(T t) inout {...}
}

just with a shorter syntax. And I'd guess that you have trouble remembering
where the inout goes primarily due to your coding style. I don't think
that I _ever_ put the template constraint on the same line as the signature,
so I've never had any trouble remembering where the function attributes go
in comparison to the template constraint, and it would never have occurred
to me that anyone would have a problem with that.

But in general, I think that having multiple ways to do the same thing needs
a good reason, especially when it means adding a new way to do something,
and I think that the fact that the template constraint isn't really part of
the function signature is a good reason not to allow the function attributes
after it.

- Jonathan M Davis
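
For reference, a hedged sketch of the lowering described above (my own 
code with made-up names and an arbitrary constraint, not taken from the 
thread): the accepted shorthand and, roughly, the eponymous template it 
corresponds to, which is why the constraint sits outside the member 
function's attribute list.

struct A
{
    // accepted shorthand: attributes first, then the constraint
    auto fun(T)(T t) inout if (is(T : long)) { return t; }
}

struct B
{
    // roughly what the shorthand lowers to
    template fun(T) if (is(T : long))
    {
        auto fun(T t) inout { return t; }
    }
}

void main()
{
    assert(A.init.fun(1) == 1);
    assert(B.init.fun(1) == 1);
}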



Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Walter Bright via Digitalmars-d

On 1/18/2015 12:46 AM, Brian Schott wrote:

On Sunday, 18 January 2015 at 08:40:19 UTC, Walter Bright wrote:

Sounds like a good idea. If I wasn't clear, I think that class change was a
mistake.


Now that I see from that pull request that the ugly syntax was the original, I'm
not so sure. The dfix feature I'm planning is to convert

class A if (B) : C

to

class A : C if (B)


The other way around. Consider:

  class A(T) : C!(args), D!(more args), E!(lots of stuff) if (B)

the 'if' becomes significantly separated from A.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Walter Bright via Digitalmars-d

On 1/18/2015 12:16 AM, Daniel Murphy wrote:

"Walter Bright"  wrote in message news:m9fodo$18lu$1...@digitalmars.com...

I don't really know where the class change came from :-(


https://github.com/D-Programming-Language/dmd/pull/1227


Thanks for digging it up. I see I missed that one.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread deadalnix via Digitalmars-d

On Monday, 19 January 2015 at 02:24:00 UTC, Walter Bright wrote:

On 1/18/2015 12:46 AM, Brian Schott wrote:
On Sunday, 18 January 2015 at 08:40:19 UTC, Walter Bright 
wrote:
Sounds like a good idea. If I wasn't clear, I think that 
class change was a

mistake.


Now that I see from that pull request that the ugly syntax was 
the original, I'm

not so sure. The dfix feature I'm planning is to convert

class A if (B) : C

to

class A : C if (B)


The other way around. Consider:

  class A(T) : C!(args), D!(more args), E!(lots of stuff) if (B)

the 'if' becomes significantly separated from A.


That's exactly why I think both should be allowed.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread Walter Bright via Digitalmars-d

On 1/18/2015 7:07 PM, deadalnix wrote:

On Monday, 19 January 2015 at 02:24:00 UTC, Walter Bright wrote:

On 1/18/2015 12:46 AM, Brian Schott wrote:

On Sunday, 18 January 2015 at 08:40:19 UTC, Walter Bright wrote:

Sounds like a good idea. If I wasn't clear, I think that class change was a
mistake.


Now that I see from that pull request that the ugly syntax was the original, I'm
not so sure. The dfix feature I'm planning is to convert

class A if (B) : C

to

class A : C if (B)


The other way around. Consider:

  class A(T) : C!(args), D!(more args), E!(lots of stuff) if (B)

the 'if' becomes significantly separated from A.


That's exactly why I think both should be allowed.


No. Constraints belong after the template declaration, not embedded in the 
template's implementation.


Furthermore, there's no useful purpose to enabling style wars and then requiring 
people to put one way in their coding standard document.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-18 Thread deadalnix via Digitalmars-d

On Monday, 19 January 2015 at 03:57:14 UTC, Walter Bright wrote:
No. Constraints belong after the template declaration, not 
embedded in the template's implementation.


Furthermore, there's no useful purpose to enabling style wars 
and then requiring people to put one way in their coding 
standard document.


IMO style is the role of the formatter. Prompting the programmer 
with "don't write this, write that instead" only creates reactions à 
la "If you know what I meant, why don't you compile that, you 
asshole?"


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-19 Thread Walter Bright via Digitalmars-d

On 1/18/2015 8:23 PM, deadalnix wrote:

IMO style is the role of the formatter. Prompting the programmer with "don't
write this, write that instead" only creates reactions à la "If you know what I
meant, why don't you compile that, you asshole?"


Redundancy is built in to the language design on purpose. If there was no 
redundancy, any random sequence of bytes would be a valid program.


It's why statements end in ; even though it is not strictly necessary.

For an example from another industry, it's why double-entry bookkeeping was 
invented. Errors are reduced by introducing redundancy.


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-19 Thread Steven Schveighoffer via Digitalmars-d

On 1/18/15 10:57 PM, Walter Bright wrote:

On 1/18/2015 7:07 PM, deadalnix wrote:

On Monday, 19 January 2015 at 02:24:00 UTC, Walter Bright wrote:

On 1/18/2015 12:46 AM, Brian Schott wrote:

On Sunday, 18 January 2015 at 08:40:19 UTC, Walter Bright wrote:

Sounds like a good idea. If I wasn't clear, I think that class
change was a
mistake.


Now that I see from that pull request that the ugly syntax was the
original, I'm
not so sure. The dfix feature I'm planning is to convert

class A if (B) : C

to

class A : C if (B)


The other way around. Consider:

  class A(T) : C!(args), D!(more args), E!(lots of stuff) if (B)

the 'if' becomes significantly separated from A.


That's exactly why I think both should be allowed.


No. Constraints belong after the template declaration, not embedded in
the template's implementation.


I just want to point out, then, that the OP is asking for this same thing 
(the template constraint to be allowed after the template declaration).


-Steve


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-19 Thread Steven Schveighoffer via Digitalmars-d

On 1/17/15 3:33 AM, deadalnix wrote:

This is accepted :
auto fun(T)(T T) inout if(...) { ... }

This is not :
auto fun(T)(T T) if(...) inout { ... }

Is there a reason ?



I kind of agree with you. Because this is short for:

template fun(T) if(...) { auto fun(T t) inout {...}}

I think it makes the most sense to put the constraint right after the 
template.


BUT:

1. I don't think there should be 2 ways to do this
2. The current requirement is not so horrible.

I would leave it alone.

-Steve


Re: post qualifier and template constraint limitation, is there a reason ?

2015-01-22 Thread deadalnix via Digitalmars-d

On Monday, 19 January 2015 at 10:49:52 UTC, Walter Bright wrote:

On 1/18/2015 8:23 PM, deadalnix wrote:
IMO style is the role of the formater. Prompting the 
programmer with "don't
write this, write that instead" only crate reaction à la "If 
you know what I

meant, why don't you compile that you asshole ?"


Redundancy is built in to the language design on purpose. If 
there was no redundancy, any random sequence of bytes would be 
a valid program.


It's why statements end in ; even though it is not strictly 
necessary.


For an example from another industry, it's why double-entry 
bookkeeping was invented. Errors are reduced by introducing 
redundancy.


Ideally, the redundancy is there to catch useful errors, or it is 
just noise. I'm not sure what useful error we are catching here, 
as the type system already does the check.