Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jacob Carlborg

On 2012-11-09 08:28, Jonathan M Davis wrote:


But the types are already tested by the template constraints and the fact that
they compile at all. It's the functions' runtime behaviors that can't be
tested, and no language can really test that at compile time, whereas unit
tests _do_ test the runtime behavior. So, you get both static and dynamic
checks.


Well, I guess you're right.

--
/Jacob Carlborg


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jonathan M Davis
On Friday, November 09, 2012 08:21:38 Jacob Carlborg wrote:
> On 2012-11-09 07:20, H. S. Teoh wrote:
> > Well, unittests are a runtime check, and they don't *guarantee*
> > anything. (One could, in theory, write a pathological pseudo-range that
> > passes basic unittests but fails to behave like a range in some obscure
> > corner case. Transient ranges would fall under that category, should we
> > decide not to admit them as valid ranges. :-))
> > 
> > But of course that's just splitting hairs.
> 
> But since we do have a language with static typing we can at least do
> our best to try to catch as many errors as possible at compile time. We
> don't want to end up like a dynamic language, testing for types in the
> unit tests.

But the types are already tested by the template constraints and the fact that 
they compile at all. It's the functions' runtime behaviors that can't be 
tested, and no language can really test that at compile time, whereas unit 
tests _do_ test the runtime behavior. So, you get both static and dynamic 
checks.

- Jonathan M Davis


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jacob Carlborg

On 2012-11-09 07:20, H. S. Teoh wrote:


Well, unittests are a runtime check, and they don't *guarantee*
anything. (One could, in theory, write a pathological pseudo-range that
passes basic unittests but fails to behave like a range in some obscure
corner case. Transient ranges would fall under that category, should we
decide not to admit them as valid ranges. :-))

But of course that's just splitting hairs.


But since we do have a language with static typing we can at least do 
our best to try to catch as many errors as possible at compile time. We 
don't want to end up like a dynamic language, testing for types in the 
unit tests.


--
/Jacob Carlborg


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jonathan M Davis
On Friday, November 09, 2012 01:53:40 Nick Sabalausky wrote:
> Looking at one set of interfaces in isolation, sure the chances might
> be low. (Just like the chances of name collisions when hygiene is
> lacking, and yet we thankfully have a real module system, instead of C's
> clumsy "Well, it should usually work ok!" garbage.) But it's a terrible
> precedent. Scale things up, use ducks as common practice, and all of a
> sudden you're right back into the same old land of "no-hygiene". Bad,
> sloppy, lazy precedent. AND the presumed benefit of the duckness is
> minimal at best. Just not a good design, it makes all the wrong
> tradeoffs.

As long as your template constraints require more than a couple of functions 
and actually test much of anything about what those functions return or what 
arguments can be passed to them, I find it very unlikely that anything will 
accidentally pass them. Too many coincidences would be required to end up with 
a set of functions that matched when they weren't supposed to. It's only 
likely to be a problem if you're checking only one or two functions and don't 
check much beyond their existence. It just doesn't take very many checks 
before it's highly unlikely for anyone to have created a type with functions 
with the same names and whose signatures are close enough to pass a template 
constraint when they're not supposed to.
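
To make that concrete, here is a rough sketch (hypothetical names, not actual 
Phobos code) of a constraint that checks what the functions accept and return, 
not merely that they exist:

// Hypothetical "sink" constraint: put and flush existing by name is not
// enough; put must accept the element type and flush must return a count.
template isSink(S, E)
{
    enum isSink = is(typeof((S s, E e)
    {
        s.put(e);               // must accept an E
        size_t n = s.flush();   // must return something convertible to size_t
    }));
}

struct Logger
{
    void put(string line) {}
    size_t flush() { return 0; }
}

struct Impostor
{
    void put(int x) {}   // same names...
    void flush() {}      // ...but the wrong signatures
}

static assert( isSink!(Logger, string));
static assert(!isSink!(Impostor, string));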

- Jonathan M Davis


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Andrej Mitrovic
On 11/9/12, H. S. Teoh  wrote:
> Yeah, that's one major missing feature from D/Phobos/etc.: a mixin
> template called EliminateBugs that will fix all your program's bugs for
> you. I think that should be the next top priority on D's to-do list! ;-)

Considering you can do a string import of the module you're in
(provided the -J switch), and the fact that Pegged works at
compile-time, that wouldn't be a far-fetched dream at all. :p


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 21:24:49 -0800
Jonathan M Davis  wrote:

> On Thursday, November 08, 2012 21:10:55 Walter Bright wrote:
> > Many algorithms (at least the ones in Phobos) already do a check
> > to ensure the inputs are the correct kind of range. I don't think
> > you'll get very far trying to use a range that isn't a range.
> > 
> > Of course, you can always still have bugs in your range
> > implementation.
> 
> Given that a range requires a very specific set of functions, I find
> it highly unlikely that anything which isn't a range will qualify as
> one. It's far more likely that you screw up and a range isn't the
> right kind of range because one of the functions wasn't quite right.
> 
> There is some danger in a type being incorrectly used with a function
> when that function requires and tests for only one function, or maybe
> when it requires two functions. But I would expect that as more is
> required by a template constraint, it very quickly becomes the case
> that there's no way that any type would ever pass it with similarly
> named functions that didn't do the same thing as what they were
> expected to do. It's just too unlikely that the exact same set of
> function names would be used for different things, especially as that
> list grows. And given that ranges are a core part of D's standard
> library, I don't think that there's much excuse for having a type
> that has the range functions but isn't supposed to be a range. So, I
> really don't see this as a problem.
> 

Looking at one set of interfaces in isolation, sure the chances might
be low. (Just like the chances of name collisions when hygiene is
lacking, and yet we thankfully have a real module system, instead of C's
clumsy "Well, it should usually work ok!" garbage.) But it's a terrible
precedent. Scale things up, use ducks as common practice, and all of a
sudden you're right back into the same old land of "no-hygiene". Bad,
sloppy, lazy precedent. AND the presumed benefit of the duckness is
minimal at best. Just not a good design, it makes all the wrong
tradeoffs.



Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jonathan M Davis
On Friday, November 09, 2012 01:44:51 Nick Sabalausky wrote:
> On Thu, 08 Nov 2012 21:10:55 -0800
> 
> Walter Bright  wrote:
> > Many algorithms (at least the ones in Phobos) already do a check
> > to ensure the inputs are the correct kind of range. I don't think
> > you'll get very far trying to use a range that isn't a range.
> 
> It can't check semantics. If something "looks" like a range function,
> but wasn't written with the explicit intent of actually being one, then
> it's a crapshoot as to whether the semantics actually conform. But
> the ducktyping D does do will go and blindly assume.

True, but how likely is it that a type will define all of the necessary range 
functions and _not_ be supposed to be a range? The type must define a specific 
set of functions, and those functions must compile with isInputRange at a 
minimum, which checks more than just the function names, and the more complex 
the range required, the more functions are required and the more specific the 
tests are (e.g. while isInputRange doesn't test the type of front, 
isBidirectionalRange does test that front and back have the same type). The 
odds of accidentally matching isInputRange are already low, but they dwindle 
to nothing pretty darn quickly as the type of range gets more complex, simply 
because the number of functions and the tests made on them increase to the 
point that there's pretty much no way that anything will ever accidentally 
pass them without being intended to be a range.
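
Roughly speaking (simplified from what Phobos actually does, so treat the 
details as illustrative), the extra checks look like this:

import std.range : isInputRange;

// Simplified sketch of a bidirectional-style check that also requires front
// and back to have the same type, not just the right function names.
template isBidiLike(R)
{
    enum isBidiLike = isInputRange!R
        && is(typeof((inout int _ = 0)
        {
            R r = R.init;
            r.popBack();       // must exist
            auto b = r.back;   // must exist and be readable
        }))
        && is(typeof(R.init.front) == typeof(R.init.back));
}

struct NotQuite
{
    // Right names, but front and back disagree on type,
    // so the stricter constraint rejects it.
    @property bool empty() { return true; }
    @property int front() { return 0; }
    void popFront() {}
    @property string back() { return ""; }
    void popBack() {}
}

static assert( isInputRange!NotQuite);  // the weakest check still passes
static assert(!isBidiLike!NotQuite);    // the stricter one does not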

I just don't think that the odds of anything accidentally passing the range 
traits - even isInputRange - are very high at all. And given that ranges are 
part of the standard library, I don't think that there's really any excuse for 
anyone using the names of range functions for something else, not more than 
one or two of them at once anyway. So, I really think that any worries about 
this are unfounded.

- Jonathan M Davis


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Nick Sabalausky
On Thu, 8 Nov 2012 21:37:00 -0800
"H. S. Teoh"  wrote:
> 
> IOW, you want the user-defined type to declare that it's an input
> range, and not just some random struct that happens to have input
> range like functions?
> 

Yes.

> What about modifying isInputRange to be something like this:
> 
>   template isInputRange(R) {
>   static if (R.implementsInputRange &&
>   /* ... check for input range properties here
> */) {
>   enum isInputRange = true;
>   } else {
>   enum isInputRange = false;
>   }
>   }
> 
> Then all input ranges will have to explicitly declare they are an
> input range thus:
> 
>   struct MyInpRange {
>   // This asserts that we're trying to be an input range
>   enum implementsInputRange = true;
> 
>   // ... define .empty, .front, .popFront here
>   }
> 

Close. At that point you need two steps:

struct MyInpRange {
// Declare this as an input range
enum implementsInputRange = true;

// Enforce this really *IS* an input range
static assert(isInputRange!MyInpRange,
"Dude, your struct isn't a range!");

// ... define .empty, .front, .popFront here
}

My suggestion was to take basically that, and then wrap up the "Declare
and Enfore" in one simple step:

struct MyInpRange {

// Generate & mixin *both* the "enum" and the "static assert"
mixin(implements!InputRange);

// ... define .empty, .front, .popFront here
}

/Dreaming:

Of course, it'd be even nicer still to have all this wrapped up in
some language sugar (D3? ;) ) and just do something like:

struct interface InputRange {
// ... *declare* .empty, .front, .popFront here
}

struct interface ForwardRange : InputRange {
// ... *declare* .save here
}

struct MyForwardRange : ForwardRange {
// ... define .empty, .front, .popFront, .save here
// Actually validated by the compiler
}

Which would then amount to what we're doing by hand up above. So kinda
like Go, except not error-prone and ducky and all shitty.



Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 21:10:55 -0800
Walter Bright  wrote:
> 
> Many algorithms (at least the ones in Phobos) already do a check
> to ensure the inputs are the correct kind of range. I don't think
> you'll get very far trying to use a range that isn't a range.
> 

It can't check semantics. If something "looks" like a range function,
but wasn't written with the explicit intent of actually being one, then
it's a crapshoot as to whether the semantics actually conform. But
the ducktyping D does do will go and blindly assume.




Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Walter Bright

On 11/8/2012 10:20 PM, H. S. Teoh wrote:

But of course that's just splitting hairs.


"Let's not go splittin' hares!"
   -- Bugs Bunny



Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread H. S. Teoh
On Thu, Nov 08, 2012 at 10:03:03PM -0800, Jonathan M Davis wrote:
> On Thursday, November 08, 2012 21:49:52 Walter Bright wrote:
> > BTW, there's no compiler magic in the world that will statically
> > guarantee you have a non-buggy implementation of a range.

Yeah, that's one major missing feature from D/Phobos/etc.: a mixin
template called EliminateBugs that will fix all your program's bugs for
you. I think that should be the next top priority on D's to-do list! ;-)


> That's what unit tests are for. :)
[...]

Well, unittests are a runtime check, and they don't *guarantee*
anything. (One could, in theory, write a pathological pseudo-range that
passes basic unittests but fails to behave like a range in some obscure
corner case. Transient ranges would fall under that category, should we
decide not to admit them as valid ranges. :-))
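
A contrived sketch of such a pathological case (not anything from Phobos): a 
range whose front is a buffer that popFront reuses, so isInputRange and a 
naive unittest are both perfectly happy with it:

import std.range : isInputRange;

struct TransientLines
{
    char[] buf;
    int n;

    this(int n) { this.n = n; buf = ['a', '0']; }  // one buffer per instance

    @property bool empty() const { return n == 0; }
    @property const(char)[] front() { return buf; }
    void popFront() { --n; buf[1]++; }  // overwrites the previous front in place
}

static assert(isInputRange!TransientLines);  // looks like a perfectly good range

unittest
{
    // A basic unittest like this passes happily...
    auto r = TransientLines(3);
    assert(!r.empty && r.front == "a0");
    r.popFront();
    assert(r.front == "a1");
}

unittest
{
    // ...but any caller that holds on to front gets surprised.
    auto r = TransientLines(3);
    auto saved = r.front;
    r.popFront();
    assert(saved == "a1");  // the "a0" we saved was silently overwritten
}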

But of course that's just splitting hairs.


T

-- 
Amateurs built the Ark; professionals built the Titanic.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread H. S. Teoh
On Thu, Nov 08, 2012 at 09:37:00PM -0800, H. S. Teoh wrote:
> On Thu, Nov 08, 2012 at 11:51:29PM -0500, Nick Sabalausky wrote:
[...]
> > Those are only half-solutions as they only prevent false-negatives,
> > not false-positives. Plus, there's nothing to prevent people from
> > forgetting to do it in the first place.
> 
> IOW, you want the user-defined type to declare that it's an input
> range, and not just some random struct that happens to have input
> range like functions?
> 
> What about modifying isInputRange to be something like this:
> 
>   template isInputRange(R) {
>   static if (R.implementsInputRange &&
>   /* ... check for input range properties here */)
>   {
>   enum isInputRange = true;
>   } else {
>   enum isInputRange = false;
>   }
>   }
> 
> Then all input ranges will have to explicitly declare they are an
> input range thus:
> 
>   struct MyInpRange {
>   // This asserts that we're trying to be an input range
>   enum implementsInputRange = true;
> 
>   // ... define .empty, .front, .popFront here
>   }
> 
> Any prospective input range that doesn't define implementsInputRange
> will be rejected by all input range functions. (Of course, that's just
> a temporary name, you can probably think of a better one.)
> 
> You can also make it a mixin, or something like that, if you want to
> avoid the tedium of defining an enum to be true every single time.
[...]

Here's a slight refinement:

// Note: untested code
mixin template imAnInputRange() {
static assert(isInputRange!(typeof(this)));
enum implementsInputRange = true;
}

struct MyInpRange {
// This should croak loudly if this struct isn't a valid
// input range. Omitting this line makes range functions
// reject it too (provided we modify isInputRange as
// described above).
mixin imAnInputRange;

// implement range functions here
}


T

-- 
It only takes one twig to burn down a forest.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Andrej Mitrovic
On 11/9/12, H. S. Teoh  wrote:
> Then all input ranges will have to explicitly declare they are an input
> range thus:

Or:

Change the definition of isInputRange to:

---
template isInputRange(T)
{
enum bool isInputRange = (__attributes(T, RequiresInputRangeCheck)
  && __attributes(T, IsInputRangeAttr))
|| is(typeof(
(inout int _dummy=0)
{
T r = void;   // can define a range object
if (r.empty) {}   // can test for empty
r.popFront(); // can invoke popFront()
auto h = r.front; // can get the front of the range
}));
}
---

and in your user module:

---
module foo;
@RequiresInputRangeCheck:  // apply attribute to all declarations in this module

@IsInputRangeAttr struct MyRange { /* front/popFront/empty */ }
struct NotARange { /* front/popFront/empty defined but not designed to
be a range */ }
---

---
static assert(isInputRange!MyRange);
static assert(!(isInputRange!NotARange));
---

That way you keep compatibility with existing ranges and introduce an
extra safety check for new types that want to be checked.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jonathan M Davis
On Thursday, November 08, 2012 21:49:52 Walter Bright wrote:
> BTW, there's no compiler magic in the world that will statically guarantee
> you have a non-buggy implementation of a range.

That's what unit tests are for. :)

- Jonathan M Davis


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Walter Bright

On 11/8/2012 9:24 PM, Jonathan M Davis wrote:

So, I really don't see this as a problem.


Neither do I.

BTW, there's no compiler magic in the world that will statically guarantee you 
have a non-buggy implementation of a range.




Re: Walter should start a Seattle D interest group

2012-11-08 Thread Walter Bright

On 11/8/2012 9:47 PM, Jesse Phillips wrote:

On Friday, 9 November 2012 at 00:40:07 UTC, Walter Bright wrote:

Actually, a bunch of us local D heads use the NWC++ user group meeting as an
excuse to get together and go out for beers afterwards.

http://nwcpp.org/

The next meeting is Nov. 21, so see ya all there!


Seems it is already officially accepted: "We are interested in C++, C, the D
language, concurrency..."


I've given many D presentations at those meetings. It's a friendly audience to 
try out new material on.


Re: Walter should start a Seattle D interest group

2012-11-08 Thread Jesse Phillips

On Friday, 9 November 2012 at 00:40:07 UTC, Walter Bright wrote:
Actually, a bunch of us local D heads use the NWC++ user group 
meeting as an excuse to get together and go out for beers 
afterwards.


http://nwcpp.org/

The next meeting is Nov. 21, so see ya all there!


Seems it is already officially accepted: "We are interested in 
C++, C, the D language, concurrency..."


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread H. S. Teoh
On Thu, Nov 08, 2012 at 11:51:29PM -0500, Nick Sabalausky wrote:
> On Thu, 8 Nov 2012 20:17:24 -0800
> "H. S. Teoh"  wrote:
> > 
> > Actually, I just thought of a solution to the whole duck-typing range
> > thing:
> > 
> > struct MyRange {
> > // Self-documenting: this struct is intended to be a
> > // range.
> > static assert(isInputRange!MyRange,
> > "Dude, your struct isn't a range!"); //
> > asserts
> > 
> > 
> On Fri, 09 Nov 2012 05:18:59 +0100
> "Adam D. Ruppe"  wrote:
> > 
> > Just a note, of course it still wouldn't *force*, but maybe it'd 
> > be a good habit to start writing this:
> > 
> > struct myrange {...}
> > static assert(isInputRange!myrange);
> > 
> 
> Those are only half-solutions as they only prevent false-negatives,
> not false-positives. Plus, there's nothing to prevent people from
> forgetting to do it in the first place.

IOW, you want the user-defined type to declare that it's an input range,
and not just some random struct that happens to have input range like
functions?

What about modifying isInputRange to be something like this:

template isInputRange(R) {
static if (R.implementsInputRange &&
/* ... check for input range properties here */)
{
enum isInputRange = true;
} else {
enum isInputRange = false;
}
}

Then all input ranges will have to explicitly declare they are an input
range thus:

struct MyInpRange {
// This asserts that we're trying to be an input range
enum implementsInputRange = true;

// ... define .empty, .front, .popFront here
}

Any prospective input range that doesn't define implementsInputRange
will be rejected by all input range functions. (Of course, that's just a
temporary name, you can probably think of a better one.)

You can also make it a mixin, or something like that, if you want to
avoid the tedium of defining an enum to be true every single time.


T

-- 
Long, long ago, the ancient Chinese invented a device that lets them see
through walls. It was called the "window".


Issue 8340: dmd backend bug

2012-11-08 Thread H. S. Teoh
See: http://d.puremagic.com/issues/show_bug.cgi?id=8340

Looks like the dmd backend sometimes produces wrong code by generating
128-bit instructions on an array of 64-bit integers.  It also appears to
generate 64-bit instructions for an int[], which violates the spec that
int==32 bits (in that case no bug is apparent because the int[] appears
to be constructed as (64-bit)[] for some reason, but for the long[]
case, the array is constructed as (64-bit)[] but the negation as negq,
so the negation spills over into the next array element).
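
(The reduced test case is in the report itself; purely to illustrate the kind
of construct being described, an array op negating 64-bit elements, it is
roughly this shape:)

void main()
{
    long[] a = [1, 2, 3, 4];
    long[] b = new long[a.length];
    b[] = -a[];                        // element-wise negation via an array op
    assert(b == [-1L, -2, -3, -4]);    // with the bad codegen, neighbouring
                                       // elements end up corrupted instead
}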

But I've not the remotest idea how to even begin fixing this, so I'm
bringing attention to this issue here. :)

I tested gdc (git gdc-4.6 branch) and it didn't have this problem.


T

-- 
One reason that few people are aware there are programs running the
internet is that they never crash in any significant way: the free
software underlying the internet is reliable to the point of
invisibility. -- Glyn Moody, from the article "Giving it all away"


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jonathan M Davis
On Thursday, November 08, 2012 21:10:55 Walter Bright wrote:
> Many algorithms (at least the ones in Phobos) already do a check to
> ensure the inputs are the correct kind of range. I don't think you'll get
> very far trying to use a range that isn't a range.
> 
> Of course, you can always still have bugs in your range implementation.

Given that a range requires a very specific set of functions, I find it highly 
unlikely that anything which isn't a range will qualify as one. It's far more 
likely that you screw up and a range isn't the right kind of range because one 
of the functions wasn't quite right.

There is some danger in a type being incorrectly used with a function when 
that function requires and tests for only one function, or maybe when it 
requires two functions. But I would expect that as more is required by a 
template constraint, it very quickly becomes the case that there's no way that 
any type would ever pass it with similarly named functions that didn't do the 
same thing as what they were expected to do. It's just too unlikely that the 
exact same set of function names would be used for different things, especially 
as that list grows. And given that ranges are a core part of D's standard 
library, I don't think that there's much excuse for having a type that has the 
range functions but isn't supposed to be a range. So, I really don't see this 
as a problem.

- Jonathan M Davis


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Walter Bright

On 11/8/2012 8:18 PM, Adam D. Ruppe wrote:

On Friday, 9 November 2012 at 03:45:11 UTC, Nick Sabalausky wrote:

the *one* thing I hate about D ranges is that they don't force you to
explicitly say "Yes, I *intend* this to be an InputRange" (what are we, Go
users?).


Just a note, of course it still wouldn't *force*, but maybe it'd be a good habit
to start writing this:

struct myrange {...}
static assert(isInputRange!myrange);

It'd be a simple way to get a check at the point of declaration and to document
your intent.


Interestingly, we could also do this if the attributes could run through a
template:

[check!isInputRange] struct myrange{}

@attribute template check(something, Decl) {
static assert(something!Decl);
alias check = Decl;
}



Many algorithms (at least the ones in Phobos) already do a check to ensure 
the inputs are the correct kind of range. I don't think you'll get very far 
trying to use a range that isn't a range.


Of course, you can always still have bugs in your range implementation.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Nick Sabalausky
On Thu, 8 Nov 2012 20:17:24 -0800
"H. S. Teoh"  wrote:
> 
> Actually, I just thought of a solution to the whole duck-typing range
> thing:
> 
>   struct MyRange {
>   // Self-documenting: this struct is intended to be a
>   // range.
>   static assert(isInputRange!MyRange,
>   "Dude, your struct isn't a range!"); // asserts
> 
> 
On Fri, 09 Nov 2012 05:18:59 +0100
"Adam D. Ruppe"  wrote:
> 
> Just a note, of course it still wouldn't *force*, but maybe it'd 
> be a good habit to start writing this:
> 
> struct myrange {...}
> static assert(isInputRange!myrange);
> 

Those are only half-solutions as they only prevent false-negatives, not
false-positives. Plus, there's nothing to prevent people from
forgetting to do it in the first place.

I outlined and implemented a proof-of-concept for a full solution
middle of last year:

http://www.semitwist.com/articles/EfficientAndFlexible/MultiplePages/Page5/

The basic gist (and there's definitely still room for plenty of
improvement):

// The General Tool:
string declareInterface(string interfaceName, string thisType)
{
return `
// Announce what interface this implements.
static enum _this_implements_interface_`~interfaceName~`_ = true;

// Verify this actually does implement the interface
static assert(
is`~interfaceName~`!(`~thisType~`),
"This type fails to implement `~interfaceName~`"
);
`;
}

// Sample usage:
template isFoo(T)
{
immutable bool isFoo = __traits(compiles,
(){
T t;
static assert(T._this_implements_interface_Foo_);
t.fooNum = 5;
int x = t.fooFunc("");
// Check everything else here
});
} 

struct MyFoo
{
// Can probably be more DRY with fancy trickery
mixin(declareInterface("Foo", "MyFoo"));

int fooNum;
int fooFunc(string a) {...}
}

What I'd really like to see is a way to implement declareInterface so
that the isFoo is replaced by an ordinary "interface Foo {...}",
which MyFoo's members are automatically checked against. I suspect that
should be possible with some fancy metaprogramming-fu.



Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Adam D. Ruppe

On Friday, 9 November 2012 at 03:45:11 UTC, Nick Sabalausky wrote:
the *one* thing I hate about D ranges is that they don't force 
you to explicitly say "Yes, I *intend* this to be an 
InputRange" (what are we, Go users?).


Just a note, of course it still wouldn't *force*, but maybe it'd 
be a good habit to start writing this:


struct myrange {...}
static assert(isInputRange!myrange);

It'd be a simple way to get a check at the point of declaration 
and to document your intent.



Interestingly, we could also do this if the attributes could run 
through a template:


[check!isInputRange] struct myrange{}

@attribute template check(something, Decl) {
   static assert(something!Decl);
   alias check = Decl;
}

Same thing, diff syntax.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread H. S. Teoh
On Thu, Nov 08, 2012 at 10:45:05PM -0500, Nick Sabalausky wrote:
> On Thu, 08 Nov 2012 10:05:30 +0100
> Jacob Carlborg  wrote:
> > 
> > I think we should only allow user defined types marked with
> > @attribute, i.e.
> > 
> > @attribute struct foo {}
> > @attribute class foo {}
> > @attribute interface foo {}
> > @attribute enum foo {}
> > 
> > And so on.
> > 
> 
> I completely agree. I really hate when languages "play it loose" and
> leave things up to arbitrary convention. It's like duck/structural
> typing: the *one* thing I hate about D ranges is that they don't force
> you to explicitly say "Yes, I *intend* this to be an InputRange" (what
> are we, Go users?). I don't want to see the same unhelpful sloppiness
> here.

Actually, I just thought of a solution to the whole duck-typing range
thing:

struct MyRange {
// Self-documenting: this struct is intended to be a
// range.
static assert(isInputRange!MyRange,
"Dude, your struct isn't a range!"); // asserts

@property bool empty() { ... }
@property auto front() { ... }
int popFront() { ... }
}

struct WhatItShouldBe {
// Junior programmer modifies the function signatures
// below and the compiler gives him a scolding.
static assert(isInputRange!WhatItShouldBe,
"Dude, you just broke my range!"); // passes

@property bool empty() { ... }
@property auto front() { ... }
void popFront() { ... }
}

void main() {
auto needle = "abc";
auto x = find(WhatItShouldBe(), needle);
}


T

-- 
I see that you JS got Bach.


Re: What's C's biggest mistake?

2012-11-08 Thread H. S. Teoh
On Thu, Nov 08, 2012 at 10:47:10PM -0500, Nick Sabalausky wrote:
> On Thu, 08 Nov 2012 21:04:06 +0100
> "Kagamin"  wrote:
> 
> > Well, in the same vein one could argue that write(a,b) looks as 
> > if the function is called first and then the arguments are computed 
> > and passed, so the call should be written (a,b)write instead. The 
> > language has not only syntax, but also semantics.

In that case, we should just switch wholesale to reverse Polish
notation, and get rid of parentheses completely. Why write hard-to-read
expressions like a+2*(b-c) when you can write a 2 b c - * +? Then
function calls would fit right in:

1 5 sqrt + 2 / GoldenRatio == assert;

Even constructs like if statements would be considerably simplified:

i 10 < if i++ else i--;

Things like function composition would actually make sense, as opposed
to the reversed order of writing things that mathematicians have imposed
upon us. Instead of f(g(x)) which makes no sense in terms of ordering,
we'd have x g f, which shows exactly in what order things are evaluated.

;-)


> Actually, that's one of the reasons I prefer UFCS function chaining
> over nested calls.

Fortunately for me, I got used to UFCS-style function chaining when
learning jQuery. (Yes, Javascript actually proved beneficial in that
case, shockingly enough.)


T

-- 
Prosperity breeds contempt, and poverty breeds consent. -- Suck.com


Re: What's C's biggest mistake?

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 21:04:06 +0100
"Kagamin"  wrote:

> Well, in the same vein one could argue that write(a,b) looks as 
> if the function is called first and then the arguments are computed 
> and passed, so the call should be written (a,b)write instead. The 
> language has not only syntax, but also semantics.

Actually, that's one of the reasons I prefer UFCS function chaining
over nested calls.
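
For anyone unfamiliar with the term, the difference reads like this (trivial
example using std.algorithm and std.array):

import std.algorithm : filter, map;
import std.array : array;

void main()
{
    auto xs = [1, 2, 3, 4, 5];

    // Nested calls: read inside-out, right to left.
    auto a = array(filter!(x => x > 4)(map!(x => x * 2)(xs)));

    // UFCS chaining: read left to right, in evaluation order.
    auto b = xs.map!(x => x * 2).filter!(x => x > 4).array;

    assert(a == b && a == [6, 8, 10]);
}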



Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 10:05:30 +0100
Jacob Carlborg  wrote:
> 
> I think we should only allow user defined types marked with
> @attribute, i.e.
> 
> @attribute struct foo {}
> @attribute class foo {}
> @attribute interface foo {}
> @attribute enum foo {}
> 
> And so on.
> 

I completely agree. I really hate when languages "play it loose" and
leave things up to arbitrary convention. It's like duck/structural
typing: the *one* thing I hate about D ranges is that they don't force
you to explicitly say "Yes, I *intend* this to be an InputRange" (what
are we, Go users?). I don't want to see the same unhelpful sloppiness
here.



Re: How do you remove/insert elements in a dynamic array without allocating?

2012-11-08 Thread Malte Skarupke
On Wednesday, 7 November 2012 at 09:45:48 UTC, monarch_dodra 
wrote:
On Wednesday, 7 November 2012 at 03:45:06 UTC, Malte Skarupke 
wrote:
Having no clear ownership for the array is not something I am 
willing to accept.


"Strong ownership" puts you back into C++'s boat of "bordering 
psychotic duplication on every pass by value". In a GC 
language, and in particular, D, that favors pass by value, this 
might not be the best approach.


I'll re-iterate that you may consider looking into 
std.container.Array. It behaves much like std::vector would 
(reserve, etc)... You can extract an actual range from Array, 
but there is a clear "container" - "range" distinction.


An added bonus is that it uses "implicit reference" semantics. 
This means that when you write "a = b", then afterwards, you 
have "a is b", and they are basically alias. This is a good 
thing, as it avoids payload duplication without your explicit 
consent. The "implicit" part means that it will lazily initialize if 
you haven't done so yet.


You claim you want "explicit ownership": Array gives you that, 
but not in the classic RAII sense. If you need to duplicate an 
Array, you call "dup" manually.



Also, Array uses a deterministic memory model, just like 
vector, which releases its contents as soon as it goes out of scope. 
I could have done without that, personally, but to each their 
own.
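
A small sketch of the behaviour described above (reference semantics on copy,
duplication only via dup), using std.container.Array:

import std.container : Array;

void main()
{
    auto a = Array!int(1, 2, 3);
    a.reserve(16);           // vector-like capacity control

    auto b = a;              // copies share one payload, no duplication
    b[0] = 42;
    assert(a[0] == 42);      // the change is visible through a as well

    auto c = a.dup;          // duplication only when you ask for it
    c[0] = 7;
    assert(a[0] == 42);      // a is untouched
}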


I think we have just had different experiences. In my experience 
a shared_ptr is usually not what you want. Instead I prefer a 
unique_ptr in many cases. Just because multiple things are 
referencing your data, that doesn't mean that they should all 
share ownership of it.


For me well defined ownership is just good coding practice, 
similar to the const keyword. I've seen it prevent bugs and 
therefore I use it for all cases that I can. I prefer to have 
those exceptions where I can not have clear ownership stand out, 
rather than them being the norm.


Re: Walter should start a Seattle D interest group

2012-11-08 Thread Walter Bright

On 11/8/2012 11:49 AM, Brad wrote:

I know Walter attends the NW C++ user group, but I don't see why he does not head
his own user group for the D language, especially considering there is a
successful Go programming language group in the Seattle area:
http://www.meetup.com/golang/?a=wm1&rv=wm1&ec=wm1
Such a group would have no lack of topics and speakers considering the inventor
of the language would head the group.


Actually, a bunch of us local D heads use the NWC++ user group meeting as an 
excuse to get together and go out for beers afterwards.


http://nwcpp.org/

The next meeting is Nov. 21, so see ya all there!


Re: Uri class and parser

2012-11-08 Thread Jonathan M Davis
On Friday, November 09, 2012 01:16:54 Mike van Dongen wrote:
> Then I shall make it a struct. But is the following acceptable in
> phobos?
> 
> On Thursday, 8 November 2012 at 15:10:18 UTC, Mike van Dongen
> 
> wrote:
> > I agree with Jens Mueller on the fact that URI should be a
> > struct instead of a class. But then I won't be able to return
> > null anymore so I should throw an exception when an invalid URI
> > has been passed to the constructor.

Sure. I see no problem with throwing an exception when a constructor is given 
invalid data. std.datetime does that with its types.

- Jonathan M Davis


Re: What's C's biggest mistake?

2012-11-08 Thread Tommi

On Thursday, 8 November 2012 at 18:45:35 UTC, Kagamin wrote:
Well, then read type declarations left-to-right. It's the 
strangest decision in the design of golang to reverse type 
declarations. I always read byte[] as `byte array`, not `an 
array of bytes`.


How do you read byte[5][2] from left to right? "Byte arrays of 5 
elements 2 times in an array". It's impossible. On the other 
hand, [5][2]byte reads nicely from left to right: "Array of 5 
arrays of 2 bytes". You start with the most important fact: that 
it's an array. Then you start describing what the array is made 
of.
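
For reference, what the D declaration actually denotes, whichever way you
prefer to read it aloud:

byte[5][2] x;                                // an array of 2 (arrays of 5 bytes)
static assert(x.length == 2);                // two elements...
static assert(is(typeof(x[0]) == byte[5]));  // ...each of which is byte[5]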


Re: Uri class and parser

2012-11-08 Thread Mike van Dongen
On Thursday, 8 November 2012 at 23:51:47 UTC, Jonathan M Davis 
wrote:

On Friday, November 09, 2012 00:42:42 jerro wrote:

> After trying your solution I found out I was calling
> indexOf(string, char) which apparently is different than
> indexOf(string, string) as I now no longer have that error.
> Instead, when I call parse on compile time I get the following
> at the method parse:
> Error: URI class literals cannot be returned from CTFE

It looks like you can't have class enums. This fails too:

class A{}
enum a = new A;

I don't think you can get around this


Nope. To some extent, classes can be used at compile time, but 
they can't
persist from compile time to runtime. So, if you really want a 
type which is

useable with CTFE, make it a struct, not a class.

- Jonathan M Davis


Then I shall make it a struct. But is the following acceptable in 
phobos?


On Thursday, 8 November 2012 at 15:10:18 UTC, Mike van Dongen 
wrote:
I agree with Jens Mueller on the fact that URI should be a 
struct instead of a class. But then I won't be able to return 
null anymore so I should throw an exception when an invalid URI 
has been passed to the constructor.




Re: Const ref and rvalues again...

2012-11-08 Thread martin
On Thursday, 8 November 2012 at 22:44:26 UTC, Jonathan M Davis 
wrote:
I honestly wish that in didn't exist in the language. The fact 
that it's an alias for two different attributes is confusing, and 
people keep using it without realizing what they're getting 
into.

If scope worked correctly, you'd only want it in specific 
circumstances, not in general. And since it doesn't work 
correctly aside from delegates, once it _does_ work correctly, 
it'll break code all over the place, because people keep using 
in, because they like how it corresponds with out or whatever.


I agree that it may likely be a cause for future issues. I 
wouldn't remove it though, rather relax it to an alias for const 
only (yes, because I like how it corresponds with out (input only 
vs. output only) and especially because it is very short - this 
diff of 3 characters really makes a difference in function 
signatures :D). That'd fortunately still be possible without 
breaking existing code.


So please generalize my countless mentions of 'in ref' to 
'const ref'. ;)


Re: [RFC] Fix `object.destroy` problem

2012-11-08 Thread Regan Heath
On Thu, 08 Nov 2012 21:36:52 -, Denis Shelomovskij  
 wrote:



08.11.2012 15:57, Regan Heath wrote:

On Wed, 07 Nov 2012 21:20:59 -, Denis Shelomovskij
 wrote:


IMHO we have a huge design problem with `object.destroy`.

Please, carefully read "Now the worst thing with `object.destroy`"
section of the pull 344 about it:
https://github.com/D-Programming-Language/druntime/pull/344


I think you've misunderstood the purpose of "destroy" and I agree with
the comments here:
https://github.com/D-Programming-Language/druntime/pull/344#issuecomment-10160177


R



Thanks for the reply, but it doesn't help as I think these comments are  
incorrect.


I realise that, I'm just letting you know what I think.

It doesn't help that the comments in the pull request are not clear as to  
exactly what you think is wrong with the current behaviour.


Perhaps you could start by describing the current behaviour (try to avoid  
emotive words like 'bad' etc and just concentrate on what it does).  Then,  
you could describe each change you would make and why.  It's not clear to  
me from the linked pull request just what you want to change and why.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Uri class and parser

2012-11-08 Thread Jonathan M Davis
On Friday, November 09, 2012 00:42:42 jerro wrote:
> > After trying your solution I found out I was calling
> > indexOf(string, char) which apparently is different than
> > indexOf(string, string) as I now no longer have that error.
> > Instead, when I call parse on compile time I get the following
> > at the method parse:
> > Error: URI class literals cannot be returned from CTFE
> 
> It looks like you can't have class enums. This fails too:
> 
> class A{}
> enum a = new A;
> 
> I don't think you can get around this

Nope. To some extent, classes can be used at compile time, but they can't 
persist from compile time to runtime. So, if you really want a type which is 
useable with CTFE, make it a struct, not a class.
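
A minimal sketch of the struct approach (hypothetical fields and parsing, not
Mike's actual code), showing a value surviving the compile-time-to-runtime
boundary as an enum:

import std.string : indexOf;

// Because Uri is a struct, the parsed value can be stored in an enum,
// i.e. it persists from compile time into the program.
struct Uri
{
    string scheme;
    string rest;

    static Uri parse(string s)
    {
        auto i = s.indexOf("://");
        if (i < 0)
            throw new Exception("invalid URI: " ~ s);
        return Uri(s[0 .. i], s[i + 3 .. $]);
    }
}

enum home = Uri.parse("http://dlang.org");   // runs in CTFE
static assert(home.scheme == "http");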

- Jonathan M Davis


Re: Uri class and parser

2012-11-08 Thread jerro
After trying your solution I found out I was calling 
indexOf(string, char) which apparently is different than 
indexOf(string, string) as I now no longer have that error.
Instead, when I call parse on compile time I get the following 
at the method parse:

Error: URI class literals cannot be returned from CTFE


It looks like you can't have class enums. This fails too:

class A{}
enum a = new A;

I don't think you can get around this, but you can still test if 
your URI class works at compile time by doing something like this:


auto foo()
{
   // Write code that uses the URI class here

   // return something that can be assigned to an enum
   return something;
}

enum bar = foo();


Re: Const ref and rvalues again...

2012-11-08 Thread martin

On Thursday, 8 November 2012 at 22:34:03 UTC, Timon Gehr wrote:
Ambiguous to me and all the interpretations are either wrong or 
irrelevant.


My point is that it may affect performance. If there was no 
const, the compiler would need to allocate a dedicated copy of a 
literal whenever passing it to a mutable ref parameter unless the 
optimizer worked so well it can prove it's not going to be 
modified (which I'm sure you'd expect though :D).


Maybe you should stop trying to show that 'const' is sufficient 
for resolving those issues. The point is that it is not 
_necessary_. It is too strong.


In that case it actually is - who cares if the read-only double 
rvalue the function is passed is the result of an implicit cast 
(and there's a reason for it being implicit) of the original 
argument (int rvalue)?


Anyway, I think we have moved on in this thread, so maybe you 
could contribute to trying to settle this rvalue => (const) ref 
issue once and for all by commenting my latest proposal.


Re: New language name proposal

2012-11-08 Thread Regan Heath
On Thu, 08 Nov 2012 21:13:51 -, monarch_dodra   
wrote:



On Thursday, 8 November 2012 at 20:42:40 UTC, Rob T wrote:
Seriously whatever cons there are to a name change cannot possibly  
outweigh the power of the Internet. If you cannot find it on the  
Internet, then it simply does not exist.


+1

If there's no official name change, we should define an unofficial name  
among ourselves by holding a contest and vote on a winner, then we  
start using the new name everywhere. Problem solved.


I don't think we should try to find something new, when "dlang" is  
already a great abbreviation of "The D programming language".


We also have DPL:
http://www.acronymfinder.com/DPL.html

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Uri class and parser

2012-11-08 Thread Mike van Dongen

On Thursday, 8 November 2012 at 17:02:25 UTC, jerro wrote:

Thnx. Got myself some new errors ;)
It seems that std.string.indexOf() does not work at compile 
time. Is there a solution or alternative method for this?


I guess the proper solution would be to make std.string.indexOf 
work at compile time. It looks like changing the first


if (std.ascii.isASCII(c))

line in std.string.indexOf to

if (!__ctfe && std.ascii.isASCII(c))


makes it work at compile time.


After trying your solution I found out I was calling 
indexOf(string, char) which apparently is different than 
indexOf(string, string) as I now no longer have that error.
Instead, when I call parse on compile time I get the following at 
the method parse:

Error: URI class literals cannot be returned from CTFE

The method returns an instance of class URI and works perfectly 
when called at runtime.
As far as I can see it has nothing to do with my previous 
problem. I do thank you for your answer.


Re: New language name proposal

2012-11-08 Thread 1100110
On Thu, 08 Nov 2012 14:39:31 -0600, Flamaros   
wrote:



On Thursday, 8 November 2012 at 20:23:13 UTC, anonymous wrote:

On Thursday, 8 November 2012 at 20:11:31 UTC, Flamaros wrote:

I like the actual name: D, but there are some issues with it.
D is just too small to be able to do a search on it on the internet; a lot  
of search engines just can't correctly index something so small, and I  
think that's just normal.

[...]

http://dlang.org/faq#q1_1


Sorry, I didn't see it.

But I don't really agree that it's too late; it just depends on how  
much we weigh the interest against the difficulty of doing it.
How many years will it take to be able to do a search on D that isn't  
linked to the code? Like a job search, or anything else?

Maybe it's not necessary to change it in the tools, but only on the web sites?
Why dlang.org and not d.org?

I think that a name change now would affect the speed of the community's  
growth, because the effect will be larger at the beginning.


I am French, and D very often matches "d'", which is a prefix word in French.

most likely d.org is already taken.

You might succeed with d.io or d.me or some other variation, but I doubt  
it.

Just google dlang. It's smart enough to find what you want.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Regarding opCast(bool) class method

2012-11-08 Thread bearophile
I think that now and then it's good to underline some D issues, 
even old ones.


This little program shows the asymmetry in opCast(bool) between 
struct instances and class instances:



struct FooStruct {
int x;
this(int xx) { this.x = xx; }
T opCast(T: bool)() {
return this.x != 0;
}
}

class FooClass {
int x;
this(int xx) { this.x = xx; }
T opCast(T: bool)() {
return this.x != 0;
}
}

void main() {
import std.stdio;
enum int N = 0;

auto fstruct = FooStruct(N);
if (fstruct)
writeln("fstruct true");
else
writeln("fstruct false"); //*

if (cast(bool)fstruct)
writeln("cast(bool)fstruct true");
else
writeln("cast(bool)fstruct false"); //*

auto fclass = new FooClass(N);

if (fclass)
writeln("fclass true"); //*
else
writeln("fclass false");

if (cast(bool)fclass)
writeln("cast(bool)fclass true");
else
writeln("cast(bool)fclass false"); //*
}



The output:

fstruct false
cast(bool)fstruct false
fclass true
cast(bool)fclass false


Is this situation a problem?
If in your code you convert a struct to a class or the other way around, 
and it contains an opCast(bool), you will run into trouble. And 
generally I don't like how opCast(bool) works in classes, I think 
it's a bit bug-prone. I think "if(fclass)" and 
"if(cast(bool)fclass)" should always have the same boolean value.


If this situation is a problem, then here are two of the possible 
solutions:
- Do not allow opCast(bool) in classes. How often do you 
need cast(bool) on class instances?
- Keep cast(bool) in classes, and remove the asymmetry between 
structs and classes, if possible. So "if(fclass)" on a class 
instance calls opCast(bool). Then to test the value of the 
reference you use "if(fclass is null)".



See also my issue on this:
http://d.puremagic.com/issues/show_bug.cgi?id=3926

More info on related matters in C++/C++11:
http://en.wikipedia.org/wiki/C%2B%2B11#Explicit_conversion_operators
http://www.artima.com/cppsource/safeboolP.html
http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Safe_bool

Bye,
bearophile


Re: Const ref and rvalues again...

2012-11-08 Thread Jonathan M Davis
On Thursday, November 08, 2012 21:49:58 Manu wrote:
> That's cute, but it really feels like a hack.
> All of a sudden the debugger doesn't work properly anymore, you need to
> step-in twice to enter the function, and it's particularly inefficient in
> debug builds (a point of great concern for my industry!).
> 
> Please just go with the compiler creating a temporary in the caller space.
> Restrict it to const ref, or better, in ref (scope seems particularly
> important here).

I honestly wish that in didn't exist in the language. The fact that it's an 
alias for two different attributes is confusing, and people keep using it without 
realizing what they're getting into. If scope worked correctly, you'd only 
want it in specific circumstances, not in general. And since it doesn't work 
correctly aside from delegates, once it _does_ work correctly, it'll break 
code all over the place, because people keep using in, because they like how 
it corresponds with out or whatever.
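
For reference, the aliasing in question: these two signatures currently mean 
exactly the same thing.

void f(in int[] arr) {}            // 'in' is an alias...
void g(const scope int[] arr) {}   // ...for 'const scope'

void main()
{
    int[] a = [1, 2, 3];
    f(a);
    g(a);
}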

- Jonathan M Davis


Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread Tommi

On Thursday, 8 November 2012 at 21:43:32 UTC, Max Klyga wrote:
Dynamic polymorphism isn't gone anywhere, it was just shifted 
to delegates.


But there's no restrictive type hierarchy that causes unnecessary 
coupling. Also, compared to virtual functions, there's no 
overhead from the vtable lookup. Shape doesn't need to search for 
the correct member function pointer, it already has it.


It's either that, or else I've misunderstood how virtual 
functions work.
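
For the record, a rough D sketch of the idea (hypothetical names, loosely 
mirroring the C++ std::function example from the earlier post):

import std.stdio;

struct Shape   // the "interface", as a plain struct of delegates
{
    void delegate(int w, int h) resize;
    void delegate(int x, int y) moveTo;
    void delegate() draw;
}

struct Circle
{
    int x, y, r;
    void resize(int w, int h) { r = w / 2; }
    void moveTo(int nx, int ny) { x = nx; y = ny; }
    void draw() { writefln("circle at (%s,%s), r=%s", x, y, r); }
}

Shape asShape(T)(T* t)
{
    // Each field already holds the right function pointer plus its context,
    // so a call through Shape is a plain delegate call, no vtable search.
    return Shape(&t.resize, &t.moveTo, &t.draw);
}

void main()
{
    auto c = Circle(0, 0, 1);
    auto s = asShape(&c);
    s.resize(10, 10);
    s.moveTo(3, 4);
    s.draw();       // prints: circle at (3,4), r=5
}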


Re: Const ref and rvalues again...

2012-11-08 Thread Timon Gehr

On 11/08/2012 02:45 AM, martin wrote:

On Wednesday, 7 November 2012 at 21:39:52 UTC, Timon Gehr wrote:

You can pass him an object that does not support operations you want
to preclude. He does not have to _know_, that your book is not changed
when he reads it. This is an implementation detail. In fact, you could
make the book save away his reading schedule without him noticing.


I don't see where you want to go with this. Do you suggest creating
tailored objects (book variants) for each function you're gonna pass it
to just to satisfy perfect theoretical encapsulation?


No. The point is that the language should _support_ what you call 
'perfect theoretical encapsulation'.



So foo() shouldn't
be able to change the author => change from inout author reference to
const reference? bar() should only be allowed to read the book title,
not the actual book contents => hide that string? ;) For the sake of
simplicity, by using const we have the ability to at least control if
the object can be modified or not.


It is not _just_ the object. Anyway, this is what I stated in my last post.


So although my colleague doesn't have
to _know_ that he can't modify my book in any way (or know that the book
is modifiable in the first place), using const is a primitive but
practical way for me to prevent him from doing so.



It also weakens encapsulation, which was the point.



In the context of this rvalue => (const) ref discussion, const is useful
due to a number of reasons.

1) All possible side effects of the function invocation are required to
be directly visible by the caller. Some people may find that annoying,
but I find it useful, and there's a single-line workaround (lvalue
declaration) for the (in my opinion very rare) cases where a potential
side-effect is either known not to occur or simply uninteresting
(requiring exact knowledge about the function implementation, always,
i.e., during the whole life-time of that code!).



Wrong. Not everything is a perfect value type. (and anyway, the code 
that actually will observe the change may be a few frames up the call 
stack.)



2) Say we pass a literal string (rvalue) to a const ref parameter. The
location of the string in memory can then be freely chosen by the
compiler, possibly in a static data segment of the binary (literal
optimization - only one location for multiple occurrences). If the
parameter was a mutable ref, the compiler should probably allocate a
copy on the stack before calling the function, otherwise the literal may
not be the same when accessed later on, potentially causing funny bugs.



Ambiguous to me and all the interpretations are either wrong or irrelevant.


3) Implicit type conversion isn't a problem. Say we pass an int rvalue
to a mutable double ref parameter. The parameter will then be a
reference to another rvalue (the int cast to a double) and altering it
(the hidden double rvalue) may not really be what the coder intended.
Afaik D doesn't support implicit casting for user-defined types, so that
may not be a problem (for now at least).


Maybe you should stop trying to show that 'const' is sufficient for 
resolving those issues. The point is that it is not _necessary_. It is 
too strong.





Re: std.signals2 proposal

2012-11-08 Thread eskimo

> _tab.closed.connect((sender,args)=>this.Dispose());
> 
> If the closure dies prematurely, it won't free resources at all 
> or at the right time. Although you currently keep a strong 
> reference to closures, you claim it's a bug rather than feature. 
> You fix deterministic sloppiness of memory leaks at the cost of 
> undeterministic sloppiness of prematurely dying event handlers 
> (depending on the state of the heap).

Now I see where this is coming from; you got that wrong. It is an
absolute must to have a strong ref to the closure. Otherwise it would
not work at all. But the signal should not keep "this" from your example
alive, which is obviously not possible anyway, because it would break the
closure; also, the signal has no way to find out that this.Dispose() is
eventually invoked.

The trick that solved both problems is that I pass the object to the
delegate, instead of hiding it in its context. This way I don't have a
strong ref from the delegate, which would keep the object alive and the
signal can tell the runtime to get informed when the connected object
gets deleted.

The thing I claimed is a side effect (not a bug, it really is not) is that
you can easily create a strong ref to the object, by issuing connect
with null for the object and simply containing the object in the delegate's
context. This way struct methods and other delegates can also be
connected to a signal, but with strong ref semantics.

Maybe this misunderstanding was caused by this thread unfortunately
being split up in two threads, so you might have missed half of my
explanation and examples: One is starting with "std.signals2 proposal"
and one staring with "RE: std.signals2 proposal".

Best regards, 

Robert



Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread Max Klyga

On 2012-11-08 17:27:40 +, Tommi said:
..and it got me thinking, couldn't we just get rid of dynamic 
polymorphism and classes altogether?


The compiler can do a lot of optimizations with knowledge about classes. 
It also automates a lot of things that would become boilerplate with the 
proposed manual setup of delegates for each object.



 struct Shape // Represents an interface
 {
     std::function<...> resize;
     std::function<...> moveTo;
     std::function<...> draw;
 };


Dynamic polymorphism isn't gone anywhere, it was just shifted to delegates.

This approach complicates things too much and produces template bloat 
with no real benefit.




Re: [RFC] Fix `object.destroy` problem

2012-11-08 Thread Denis Shelomovskij

08.11.2012 15:57, Regan Heath wrote:

On Wed, 07 Nov 2012 21:20:59 -, Denis Shelomovskij
 wrote:


IMHO we have a huge design problem with `object.destroy`.

Please, carefully read "Now the worst thing with `object.destroy`"
section of the pull 344 about it:
https://github.com/D-Programming-Language/druntime/pull/344


I think you've misunderstood the purpose of "destroy" and I agree with
the comments here:
https://github.com/D-Programming-Language/druntime/pull/344#issuecomment-10160177


R



Thanks for the reply, but it doesn't help as I think these comments are 
incorrect.


--
Денис В. Шеломовский
Denis V. Shelomovskij


Re: Const ref and rvalues again...

2012-11-08 Thread martin
On Thursday, 8 November 2012 at 20:15:51 UTC, Dmitry Olshansky 
wrote:
The scope. It's all about getting the correct scope, the destructor 
call and, you know, the works. Preferably it can inject it 
inside a temporary scope.


typeof(foo(...)) r = void;
{
    someRef = SomeResource(x, y, ..);
    r = foo(someRef); // should in fact construct in place, not assign
}

I suspect this is hackable to be more clean inside of the 
compiler but not in terms of a re-write.


Right, I forgot the scope for a moment. I'd illustrate the rvalue 
=> (const) ref binding to a novice language user as follows:


T   const_foo(  in ref int x);
T mutable_foo(auto ref int x);

int bar() { return 5; }

T result;

result = const_foo(bar());
/* expanded to:
{
immutable int tmp = bar(); // avoidable for literals
result = const_foo(tmp);
} // destruction of tmp
*/

result = mutable_foo(bar());
/* expanded to:
{
int tmp = bar();
result = mutable_foo(tmp);
} // destruction of tmp
*/

I'd rather restrict it to 'auto ref' thingie. Though 'in auto 
ref' sounds outright silly.
Simply put, const ref implies that the callee can save a pointer to 
it somewhere (it's an lvalue). The same risk exists with 'auto ref', 
but at least there is an explicitly written 'disclaimer' by the 
author about accepting temporary stuff.


'in ref' as opposed to 'const ref' should disallow this escaping 
issue we've already tackled in this thread, but I'm not sure if 
it is already/correctly implemented. Anyway, this issue also 
arises with (short-lived) local lvalues at the caller site:


foreach (i; 0 .. 10)
{
int scopedLvalue = i + 2;
foo(scopedLvalue); // passed by ref
} // scopedLvalue is gone

In an ideal world the name 'auto ref' would be shorter, logical, 
and more to the point, but we have what we have.


+1, but I don't have a better proposal anyway. ;)

I think that a function marked with auto ref is enough of an 
indication that the author is fine with passing mutable r-values 
to it, not seeing the changes outside, and the related blah-blah.


Agreed.

Also certain stuff can't be properly bitwise const because of 
C-calls and what not. Logical const is the correct term but in 
the D world it's simply mutable.


As you know, I'd definitely allow rvalues to be bound to const 
ref parameters as alternative (that would also be useful for a 
lot of existing code). People who generally don't use const 
(Timon Gehr? :)) are free to only use 'auto ref', I'm most likely 
only going to use 'in ref', and there will certainly be people 
using both. Sounds like a really good compromise to me.


I'd say that even for templates the speed argument is mostly 
defeated by the bloat argument. But that's probably only me.


I haven't performed any benchmarks, but I tend to agree with you, 
especially since multiple 'auto ref' parameters lead to 
exponential bloating. I could definitely do without a special 
role for templates, which would further simplify things 
considerably. If performance is really that critical, an explicit 
pass-by-value (move) overload for rvalues ought to be enough 
flexibility imo.
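
The bloat in question, sketched: every 'auto ref' parameter doubles the 
number of instantiations, since lvalue and rvalue arguments pick different 
ref-ness.

void foo(T)(auto ref T a, auto ref T b) {}

void main()
{
    int x;
    foo(x, x);   // (ref, ref)
    foo(x, 1);   // (ref, value)
    foo(1, x);   // (value, ref)
    foo(1, 1);   // (value, value): four distinct instantiations already
}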


Re: New language name proposal

2012-11-08 Thread monarch_dodra

On Thursday, 8 November 2012 at 20:42:40 UTC, Rob T wrote:
Seriously whatever cons there are to a name change cannot 
possibly outweigh the power of the Internet. If you cannot find 
it on the Internet, then it simply does not exist.


+1

If there's no official name change, we should define an 
unofficial name among ourselves by holding a contest and vote 
on a winner, then we start using the new name everywhere. 
Problem solved.


I don't think we should try to find something new, when "dlang" 
is already a great abbreviation of "The D programming language".


Re: Proposal to deprecate "retro.source"

2012-11-08 Thread monarch_dodra
On Thursday, 8 November 2012 at 18:20:31 UTC, Jonathan M Davis 
wrote:

On Thursday, November 08, 2012 10:56:38 monarch_dodra wrote:

On Thursday, 8 November 2012 at 09:18:54 UTC, Jonathan M Davis

wrote:
> In the case of retro, I think that it would be good to have 
> source

> exposed for
> std.container's use. It's easy for std.container to 
> understand

> what retro's
> supposed to do, and use it accordingly, and I think that it
> would be silly for
> it have to call retro on the retroed range to do that. I do
> agree however that
> in general, it doesn't make sense to access source.

Yes, accessing the original range is useful, but AFAIK, 
container

doesn't use retro in any way. It does it with take (which also
has a source field). For take, there is no way to extract the
source other than with a specialized function.


std.container doesn't use retro right now, but it really should 
(though that
would require externalizing retro's return type). For instance, 
what would you
do if you had to remove the last five elements from a DList? You 
can't simply
take the last 5 and pass that to remove with something like 
take(retro(list[],
5)) or retro(take(retro(list[], 5))), because the resulting 
type is

unrecognized by remove. You're forced to do something like

list.remove(popFrontN(list[], walkLength(list[]) - 5));

which is highly inefficient.


It's funny you bring this up, because I've been wrapping my head 
around this very problem for the last week. The root of the 
problem is that Bidirectional ranges (as *convenient* as they 
are) just don't have as much functionality as iterators or 
cursors. If you've used DList more than 5 minutes, you know what 
I'm talking about.


The "retro" trick you mention could indeed be a convenient 
mechanic. (Keep in mind that take is forward range, so can't be 
retro'ed though).


I'm just afraid of what it would really mean to interface with a 
retro'ed range: What exactly does that range represent for the 
original container?


What we'd *really* need (IMO) is a takeBack!Range, that would 
only implement back/popBack. No, the resulting range wouldn't 
*actually* be a range (since it wouldn't have a front), but it 
would still be incredibly useful even if *just* to interface with 
containers, eg:


list.remove(list[].takeBack(5)); //Fine, that's not a "real" 
range, but I know how to interface with that.


I'll toy around with this in my CDList implementation.
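
For concreteness, a rough sketch of what such a takeBack wrapper might look like (just an illustration, not a finished design; it assumes n never exceeds the length of the wrapped range):

import std.range;

struct TakeBack(R) if (isBidirectionalRange!R)
{
    R source;  // the wrapped range, exposed like take's source
    size_t n;  // how many elements from the end are still visible

    @property bool empty() const { return n == 0; }
    @property auto ref back() { return source.back; }
    void popBack() { source.popBack(); --n; }
}

auto takeBack(R)(R r, size_t n) if (isBidirectionalRange!R)
{
    return TakeBack!R(r, n);
}

unittest
{
    auto a = [1, 2, 3, 4, 5];
    auto tb = a.takeBack(2);
    assert(tb.back == 5);
    tb.popBack();
    assert(tb.back == 4);
    tb.popBack();
    assert(tb.empty);
}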


PS: The problem you bring up is common to all bidirectional ranges, and 
not just containers: as a general rule, there is no way to 
extract a subrange from the end of a bidirectional range...



[SNIP]*The rest*[/SNIP]

- Jonathan M Davis


Read and acknowledged.


Re: New language name proposal

2012-11-08 Thread Rob T

On Thursday, 8 November 2012 at 20:23:13 UTC, anonymous wrote:

On Thursday, 8 November 2012 at 20:11:31 UTC, Flamaros wrote:

I like the actual name, D, but there are some issues with it.
D is just too small to be able to search for on the 
internet; a lot of search engines just can't index correctly 
something that small, and I think that's normal.

[...]

http://dlang.org/faq#q1_1


alias D somethingawholelotbetter;

Seriously whatever cons there are to a name change cannot 
possibly outweigh the power of the Internet. If you cannot find 
it on the Internet, then it simply does not exist.


As an added benefit, all those old D1 websites that have been 
dead for a few years will at least stop showing up in search 
results.


If there's no official name change, we should define an 
unofficial name among ourselves by holding a contest and vote on 
a winner, then we start using the new name everywhere. Problem 
solved.


--rt



Re: New language name proposal

2012-11-08 Thread Flamaros

On Thursday, 8 November 2012 at 20:23:13 UTC, anonymous wrote:

On Thursday, 8 November 2012 at 20:11:31 UTC, Flamaros wrote:

I like the actual name, D, but there are some issues with it.
D is just too small to be able to search for on the 
internet; a lot of search engines just can't index correctly 
something that small, and I think that's normal.

[...]

http://dlang.org/faq#q1_1


Sorry, I didn't see it.

But I don't really agree that it's too late; it just depends on 
how we weigh the interest against the difficulty of doing it.
How many years will it take before a search for D returns something 
that isn't linked to the code? Like a job search, or anything else?
Maybe it's not necessary to change it in the tools, but only on the 
web sites?

Why dlang.org and not d.org?

I think that a name change now would affect the speed at which the 
community grows, because the effect would be larger at the beginning.


I am French, and D very often collides with d', which is a prefix 
word in French.


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Dmitry Olshansky

11/8/2012 11:34 PM, Nick Sabalausky wrote:

On Thu, 8 Nov 2012 14:27:14 -0500
Nick Sabalausky  wrote:


[...]Plus, I would imagine
that library-implemented features would be slower to compile (simply
because it's just that much more to be compiled).


Also, these particular sorts of things (compile time processing of
things that are in-library) essentially amount to executing interpreted
code to compile. Sure, that's very powerful, and very well worth having,
but should it really be done for very common features? For common
features, I'd imagine native non-"interpreted" support would help
compilation speed, which is one of D's major goals and benefits.
Suddenly interpreting large parts of the language might work against
that.


If we finally get to the usual byte-code interpreter, then it will be more than 
sufficiently fast to do the trivial rewrites that features like 
synchronized are all about.


Anyway, I'm not for trying to redo all (if any) of the built-in stuff. If 
it's bug-free and works, fine, let it be. We can't remove it anyway. I 
just anticipate a couple more features cropping up, if UDAs dropping in 
from nowhere are any indicator. And then another tiny most-useful thing, 
and another one, and ...






Not that I'm necessarily saying "Always stuff everything into the
language forever!" I just don't see it as quite so clear-cut.




--
Dmitry Olshansky


Re: New language name proposal

2012-11-08 Thread anonymous

On Thursday, 8 November 2012 at 20:11:31 UTC, Flamaros wrote:

I like the actual name : D, but there is some issues with it.
D is just to small to be able to do a search on it on internet, 
a lot of search engine just can't index correctly something to 
small, and I think it's just normal.

[...]

http://dlang.org/faq#q1_1


Re: Const ref and rvalues again...

2012-11-08 Thread Dmitry Olshansky

11/8/2012 11:30 PM, martin wrote:

On Thursday, 8 November 2012 at 18:28:44 UTC, Dmitry Olshansky wrote:

What's wrong with going this route:

void blah(auto ref X stuff){
...lots of code...
}

is magically expanded to:

void blah(ref X stuff){
...that code..
}

and

void blah(X stuff){
.blah(stuff); //now here stuff is L-value so use the ref version
}

Yeah, it looks _almost_ like a template now. But unlike with a
template we can assume it's 2 overloads _always_. External function
issue is then solved by treating it as exactly these 2 overloads (one
trampoline, one real). Basically it becomes one-line declaration of 2
functions.

Given that temporaries are moved anyway the speed should be fine and
there is as much bloat as you'd do by hand.

Also hopefully inliner can be counted on to do its thing in this
simple case.


That second overload for rvalues would be a shortcut to save the lvalue
declarations at each call site - and it really doesn't matter if the
compiler magically added the lvalue declarations before each call or if
it magically added the rvalue overload (assuming all calls are inlined).


The scope. It's all about getting the correct scope, destructor call and, 
you know, the works. Preferably it can inject it inside a temporary scope.


Anticipating bugs in the implementation of this feature, let me warn that 
rewriting this:

... code here ...
auto r = foo(SomeResource(x, y, ..)); //foo is auto ref
... code here ...

Should not change semantics. E.g. imagine the resource is a lock; we'd 
better unlock it sooner, that is, call the destructor right after foo 
returns. So we need {} around the call. But this doesn't work, as it 
traps 'r':


{
auto someRef = SomeResource(x, y, ..);
auto r  = foo(someRef);
}

So it's rather something like this:

typeof(foo(...)) r = void;
{
someRef = SomeResource(x, y, ..);
r = foo(someRef); // should in fact construct in place not assign
}

I suspect this is hackable to be cleaner inside the compiler, but 
not in terms of a rewrite.



But it would create a problem if there already was an explicit 'void
blah(X)' overload in addition to 'void blah(auto ref X)' (not making
much sense obviously, but this would be something the compiler needed to
handle somehow).


Aye. But even then there is an ambiguity if there is one version of the 
function with ref T / T and one with auto ref T.



What this 'auto ref' approach (both as currently implemented for
templates and proposed here for non-templated functions) lacks is the
vital distinction between const and mutable parameters.

For the much more common const ref parameters, I repeatedly tried to
explain why I'm absolutely convinced that we don't need another keyword
and that 'in/const ref' is sufficient, safe, logical and intuitive
(coupled with the overload rule that pass-by-value (moving) is preferred
for rvalues). Please prove me wrong.


I'd rather restrict it to the 'auto ref' thingie, though 'in auto ref' 
sounds outright silly.
Simply put, const ref implies that the callee can save a pointer to it 
somewhere (it's an lvalue). The same risk exists with 'auto ref' but at least 
there is an explicitly written 'disclaimer' by the author about accepting 
temporary stuff.


In an ideal world the name 'auto ref' would be shorter, more logical and more to 
the point, but we have what we have.




For the less common mutable ref parameters, I also repeatedly tried to
explain why I find it dangerous/unsafe to allow rvalues to be bound to
mutable ref parameters. But if there are enough people wanting that, I'd
have no problem with an 'auto ref' approach for it (only for mutable
parameters!). That may actually be a good compromise, what do you guys
think? :)


I think that a function marked with 'auto ref' is enough indication that 
the author is fine with mutable r-values being passed to it, with the 
changes not visible outside, and all the related blah-blah. In most (all?) 
cases it means that the parameter is too big to be passed by copy, so it 
is taken by ref instead.
Also, certain stuff can't be properly bitwise const because of C calls 
and whatnot. Logical const is the correct term, but in the D world it's 
simply mutable.




'auto ref T' for templates expands to 'ref T' (lvalues) and 'T'
(rvalues), duplicating the whole function and providing best performance
- no pointer/reference indirection for rvalues in contrast to 'auto ref
T' (proposed above) for non-templates, otherwise the concept would be
exactly the same. But it's only for mutable parameters.


I'd say that even for templates the speed argument is mostly defeated by 
the bloat argument. But that's probably only me.



Such a templated option may also be worth for const parameters though
(expanding to 'const ref T' and 'const T'), so maybe something like the
(ambiguous) 'in/const auto ref T' wouldn't actually be that bad
(assuming there are only a few use cases, and only for templates! It'd
still be 'in ref T' for non-templates).




--
Dmitry Olshansky


Re: Walter should start a Seattle D interest group

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 20:49:56 +0100
"Brad"  wrote:

> I know Walter attends the NW C++ user group but I don't see why he 
> does not head his own user group for the D language, especially 
> considering there is a successful Go programming language group 
> in the Seattle area:
> http://www.meetup.com/golang/?a=wm1&rv=wm1&ec=wm1
> Such a group would have no lack of topics and speakers 
> considering the inventor of the language would head the group.

D's got a nice BIG user group right here :)



Re: What's C's biggest mistake?

2012-11-08 Thread Kagamin
Well, in the same vein one could argue that write(a,b) looks as 
if the function is called first and the arguments are computed and 
passed afterwards, so the call should be written (a,b)write instead. 
The language has not only syntax, but also semantics.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Nathan M. Swan
On Wednesday, 7 November 2012 at 23:18:41 UTC, Walter Bright 
wrote:

Started a new thread on this.

On 11/7/2012 3:05 AM, Leandro Lucarella wrote:
> OK, that's another thing. And maybe a reason for listening to
people having
> more experience with UDAs than you.
>
> For me the analogy with Exceptions is pretty good. The issues
and conveniences
> of throwing anything or annotating a symbol with anything
instead of just
> type are pretty much the same. I only see functions making
sense to be accepted
> as annotations too (that's what Python do with annotations,
@annotation symbol
> is the same as symbol = annotation(symbol), but is quite a
different language).

There's another aspect to this.

D's UDAs are a purely compile time system, attaching arbitrary 
metadata to specific symbols. The other UDA systems I'm aware 
of appear to be runtime systems.


This implies the use cases will be different - how, I don't 
really know. But I don't know of any other compile time UDA 
system. Experience with runtime systems may not be as 
applicable.


Another interesting data point is CTFE. C++11 has CTFE, but it 
was deliberately crippled and burdened with "constexpr". From 
what I read, this was out of fear that it would turn out to be 
an overused and overabused feature. Of course, this turned out 
to be a large error.


One last thing. Sure, string attributes can (and surely would 
be) used for different purposes in different libraries. The 
presumption is that this would cause a conflict. But would it? 
There are two aspects to a UDA - the attribute itself, and the 
symbol it is attached to. In order to get the UDA for a symbol, 
one has to look up the symbol. There isn't a global repository 
of symbols in D. You'd have to say "I want to look in module X 
for symbols." Why would you look in module X for an attribute 
that you have no reason to believe applies to symbols from X? 
How would an attribute for module X's symbols leak out of X on 
their own?


It's not quite analogous to exceptions, because arbitrary 
exceptions thrown from module X can flow through your code even 
though you have no idea module X even exists.


In module sql.d:

/// For every field marked ["serialize"], add to table
void saveToDatabase(T)(DBConnection db, T model);

In module json.d:

/// For every field marked ["serialize"], add to JSON object
string jsonSerialize(T)(T obj);

In module userinfo.d:

["dbmodel"]
struct UserModel {
["serialize"] string username;
// What do you do if you want this in the database, but not the JSON?

string password;

["serialize"] Content ownedContentOrWhateverThisWebsiteIs;
}

The only solution to this question is to differentiate 
"db_serialize" and "json_serialize"; looks a lot like C, doesn't 
it?


My suggested solution: @annotation (with [] UDA syntax):

module sql;
@annotation enum model;
@annotation enum serialize;

module json;
@annotation enum serialize;

module userinfo;
import sql, json;

[sql.model]
struct UserModel {
[sql.serialize, json.serialize] string username;
[sql.serialize] string password;
[sql.serialize, json.serialize] Content content;
}

My thoughts,
NMS


Re: Const ref and rvalues again...

2012-11-08 Thread Manu
That's cute, but it really feels like a hack.
All of a sudden the debugger doesn't work properly anymore, you need to
step-in twice to enter the function, and it's particularly inefficient in
debug builds (a point of great concern for my industry!).

Please just go with the compiler creating a temporary in the caller's space.
Restrict it to const ref, or better, in ref (scope seems particularly
important here).


On 8 November 2012 20:28, Dmitry Olshansky  wrote:

> 11/7/2012 3:54 AM, Manu wrote:
>
>  If the compiler started generating 2 copies of all my ref functions, I'd
>> be rather unimpressed... bloat is already a problem in D. Perhaps this
>> may be a handy feature, but I wouldn't call this a 'solution' to this
>> issue.
>> Also, what if the function is external (likely)... auto ref can't work
>> if the function is external, an implicit temporary is required in that
>> case.
>>
>>
> What's wrong with going this route:
>
> void blah(auto ref X stuff){
> ...lots of code...
> }
>
> is magically expanded to:
>
> void blah(ref X stuff){
> ...that code..
> }
>
> and
>
> void blah(X stuff){
> .blah(stuff); //now here stuff is L-value so use the ref version
> }
>
> Yeah, it looks _almost_ like a template now. But unlike with a template we
> can assume it's 2 overloads _always_. External function issue is then
> solved by treating it as exactly these 2 overloads (one trampoline, one
> real). Basically it becomes one-line declaration of 2 functions.
>
> Given that temporaries are moved anyway the speed should be fine and there
> is as much bloat as you'd do by hand.
>
> Also hopefully inliner can be counted on to do its thing in this simple
> case.
>
>
>
> --
> Dmitry Olshansky
>


Walter should start a Seattle D interest group

2012-11-08 Thread Brad
I know Walter attends the NW C++ user group but I don't see why he 
does not head his own user group for the D language, especially 
considering there is a successful Go programming language group 
in the Seattle area:

http://www.meetup.com/golang/?a=wm1&rv=wm1&ec=wm1
Such a group would have no lack of topics and speakers 
considering the inventor of the language would head the group.


Re: Const ref and rvalues again...

2012-11-08 Thread martin
On Thursday, 8 November 2012 at 18:28:44 UTC, Dmitry Olshansky 
wrote:

What's wrong with going this route:

void blah(auto ref X stuff){
...lots of code...
}

is magically expanded to:

void blah(ref X stuff){
...that code..
}

and

void blah(X stuff){
	.blah(stuff); //now here stuff is L-value so use the ref 
version

}

Yeah, it looks _almost_ like a template now. But unlike with a 
template we can assume it's 2 overloads _always_. External  
function issue is then solved by treating it as exactly these 2 
overloads (one trampoline, one real). Basically it becomes 
one-line declaration of 2 functions.


Given that temporaries are moved anyway the speed should be 
fine and there is as much bloat as you'd do by hand.


Also hopefully inliner can be counted on to do its thing in 
this simple case.


That second overload for rvalues would be a shortcut to save the 
lvalue declarations at each call site - and it really doesn't 
matter if the compiler magically added the lvalue declarations 
before each call or if it magically added the rvalue overload 
(assuming all calls are inlined). But it would create a problem 
if there already was an explicit 'void blah(X)' overload in 
addition to 'void blah(auto ref X)' (not making much sense 
obviously, but this would be something the compiler needed to 
handle somehow).
What this 'auto ref' approach (both as currently implemented for 
templates and proposed here for non-templated functions) lacks is 
the vital distinction between const and mutable parameters.


For the much more common const ref parameters, I repeatedly tried 
to explain why I'm absolutely convinced that we don't need 
another keyword and that 'in/const ref' is sufficient, safe, 
logical and intuitive (coupled with the overload rule that 
pass-by-value (moving) is preferred for rvalues). Please prove me 
wrong.


For the less common mutable ref parameters, I also repeatedly 
tried to explain why I find it dangerous/unsafe to allow rvalues 
to be bound to mutable ref parameters. But if there are enough 
people wanting that, I'd have no problem with an 'auto ref' 
approach for it (only for mutable parameters!). That may actually 
be a good compromise, what do you guys think? :)


'auto ref T' for templates expands to 'ref T' (lvalues) and 'T' 
(rvalues), duplicating the whole function and providing best 
performance - no pointer/reference indirection for rvalues in 
contrast to 'auto ref T' (proposed above) for non-templates, 
otherwise the concept would be exactly the same. But it's only 
for mutable parameters.
Such a templated option may also be worth for const parameters 
though (expanding to 'const ref T' and 'const T'), so maybe 
something like the (ambiguous) 'in/const auto ref T' wouldn't 
actually be that bad (assuming there are only a few use cases, 
and only for templates! It'd still be 'in ref T' for 
non-templates).


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Nick Sabalausky
On Thu, 8 Nov 2012 14:27:14 -0500
Nick Sabalausky  wrote:

> [...]Plus, I would imagine
> that library-implemented features would be slower to compile (simply
> because it's just that much more to be compiled).

Also, these particular sorts of things (compile time processing of
things that are in-library) essentially amount to executing interpreted
code to compile. Sure, that's very powerful, and very well worth having,
but should it really be done for very common features? For common
features, I'd imagine native non-"interpreted" support would help
compilation speed, which is one of D's major goals and benefits.
Suddenly interpreting large parts of the language might work against
that.

> 
> Not that I'm necessarily saying "Always stuff everything into the
> language forever!" I just don't see it as quite so clear-cut.
> 




Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 21:53:11 +0400
Dmitry Olshansky  wrote:

> 11/7/2012 5:40 PM, deadalnix пишет:
> >
> > I think D has already too many feature, and that many of them can be
> > implemented as attribute + AST processing.
> 
> +1
> 

Doesn't that still amount to the same number of features though? At
least from the user's standpoint anyway. Plus, I would imagine
that library-implemented features would be slower to compile (simply
because it's just that much more to be compiled).

Not that I'm necessarily saying "Always stuff everything into the
language forever!" I just don't see it as quite so clear-cut.



Re: What's C's biggest mistake?

2012-11-08 Thread Nick Sabalausky
On Thu, 08 Nov 2012 19:45:31 +0100
"Kagamin"  wrote:

> On Thursday, 8 November 2012 at 09:38:28 UTC, renoX wrote:
> > I agree with your previous point but your type declaration 
> > syntax is still awful IMHO declaring int[Y][X] and then using 
> > [x][y]..
> > I don't like reading type declaration right-to-left and then 
> > normal code left-to-right..
> 
> Well, then read type declarations left-to-right. It's the 
> strangest decision in design of golang to reverse type 
> declarations. I always read byte[] as `byte array`, not `an array 
> of bytes`.

Doing "int[y][x] ... foo[x][y]" is an odd reversal. But Issue 9's
"[x][y]int" *also* feels very backwards to me (though perhaps I'd get
used to it?). Either way though, they still both beat the hell out of
C/C++'s seemingly random arrangement which can't be read left-to-right
*or* right-to-left. So I'm happy either way :)
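
For reference, a tiny snippet of the D ordering in question (the type reads right-to-left, the indexing left-to-right):

void main()
{
    int[3][2] grid;   // read the type right-to-left: 2 rows of int[3]
    grid[1][2] = 42;  // indexing reads left-to-right: [row][column]
    assert(grid.length == 2 && grid[0].length == 3);
}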



Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread Tommi
On Thursday, 8 November 2012 at 17:50:48 UTC, 
DypthroposTheImposter wrote:
That example would crash hard if those stack allocated shapes 
were not in scope...


Making it work safely would probably require std::shared_ptr 
usage


But the correct implementation depends on the required ownership 
semantics. I guess with Canvas and Shapes, you'd expect the 
canvas to own the shapes that are passed to it. But imagine if, 
instead of Canvas and Shape, you have Game and Player. The game 
needs to pass messages to all kinds of different types of 
players, but game doesn't *own* the players. In that case, if a 
game passes a message to a player who's not in scope anymore, 
then that's a bug in the code that *uses* game, and not in the 
implementation of game. So, if Canvas isn't supposed to own those 
Shapes, then the above implementation of Canvas is *not* buggy.




Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread simendsjo
On Thursday, 8 November 2012 at 17:20:39 UTC, Jacob Carlborg 
wrote:

On 2012-11-08 17:53, simendsjo wrote:

I guess it depends. I find it easier to see that it's an 
attribute,

especially when you annotate it. But it's harder to grep for.

Is foo an attribute or not?
  @serializable
  @xmlRoot
  @attribute
  @displayName("foo")
  struct foo {}

Is foo an attribute or not?
  @serializable
  @xmlRoot
  @displayName("foo")
  struct @foo {}



I don't know really. In that bottom example, the struct 
declaration almost disappears among all the attributes.


Yeah.. But at least you'll always know where to look.

@[serializable, xmlRoot, attribute, displayName("foo")]
struct foo {}

@[serializable, xmlRoot, displayName("foo")]
struct @foo {}

but attribute could be required as the last type, and on a line 
of its own, giving:


@[serializable, xmlRoot, displayName("foo")]
@attribute
struct foo {}



Re: What's C's biggest mistake?

2012-11-08 Thread Kagamin

On Thursday, 8 November 2012 at 09:38:28 UTC, renoX wrote:
I agree with your previous point, but your type declaration 
syntax is still awful IMHO: declaring int[Y][X] and then using 
[x][y]...
I don't like reading type declarations right-to-left and then 
normal code left-to-right...


Well, then read type declarations left-to-right. It's the 
strangest decision in the design of golang to reverse type 
declarations. I always read byte[] as `byte array`, not `an array 
of bytes`.


Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread F i L

That's essentially how Go is designed:

type Shape interface {
draw()
}

type Circle struct { ... }
type Square struct { ... }

func (c *Circle) draw() { ... }
func (s *Square) draw() { ... }

func main() {
var shape Shape
var circle Circle
var square Square

shape = circle
shape.draw() // circle.draw()

shape = square
shape.draw() // square.draw()
}


Re: Const ref and rvalues again...

2012-11-08 Thread Dmitry Olshansky

11/7/2012 9:04 PM, martin wrote:

On Wednesday, 7 November 2012 at 14:07:31 UTC, martin wrote:

C++:
void f(T& a) { // for lvalues
this->resource = a.resource;
a.resetResource();
}
void f(T&& a) { // for rvalues (moved)
this->resource = a.resource;
a.resetResource();
}

D:
void f(ref T a) { // for lvalues
this.resource = a.resource;
a.resetResource();
}
void f(T a) { // rvalue argument is not copied, but moved
this.resource = a.resource;
a.resetResource();
}


You could probably get away with a single-line overload, both in C++ and D:

C++:
void f(T& a) { // for lvalues
 // convert a to mutable rvalue reference and
 // invoke the main overload f(T&&)
 f(std::move(a));
}

D:
void f(T a) { // rvalue argument is not copied, but moved
 // the original argument is now named a (an lvalue)
 // invoke the main overload f(ref T)
 f(a);
}


Yup, and I'd like auto ref to actually do this r-value trampoline for me.

--
Dmitry Olshansky


Re: Const ref and rvalues again...

2012-11-08 Thread Dmitry Olshansky

11/7/2012 3:54 AM, Manu wrote:

If the compiler started generating 2 copies of all my ref functions, I'd
be rather unimpressed... bloat is already a problem in D. Perhaps this
may be a handy feature, but I wouldn't call this a 'solution' to this issue.
Also, what if the function is external (likely)... auto ref can't work
if the function is external, an implicit temporary is required in that case.



What's wrong with going this route:

void blah(auto ref X stuff){
...lots of code...
}

is magically expanded to:

void blah(ref X stuff){
...that code..
}

and

void blah(X stuff){
.blah(stuff); //now here stuff is L-value so use the ref version
}

Yeah, it looks _almost_ like a template now. But unlike with a template 
we can assume it's 2 overloads _always_. External function issue is 
then solved by treating it as exactly these 2 overloads (one trampoline, 
one real). Basically it becomes one-line declaration of 2 functions.


Given that temporaries are moved anyway the speed should be fine and 
there is as much bloat as you'd do by hand.


Also hopefully inliner can be counted on to do its thing in this simple 
case.




--
Dmitry Olshansky


Re: std.signals2 proposal

2012-11-08 Thread Kagamin

On Wednesday, 7 November 2012 at 23:26:46 UTC, eskimo wrote:
Well I don't think it is a common pattern to create an object, 
connect

it to some signal and drop every reference to it.


OK, an example: suppose we have a tabbed interface, and on closing a 
tab we want to free the model data displayed in the tab, and we 
already have the standard IDisposable.Dispose() method, so:


_tab.closed.connect((sender,args)=>this.Dispose());

If the closure dies prematurely, it won't free resources at all, 
or not at the right time. Although you currently keep a strong 
reference to closures, you claim it's a bug rather than a feature. 
You fix the deterministic sloppiness of memory leaks at the cost of 
the nondeterministic sloppiness of prematurely dying event handlers 
(depending on the state of the heap).


Re: Proposal to deprecate "retro.source"

2012-11-08 Thread Jonathan M Davis
On Thursday, November 08, 2012 10:56:38 monarch_dodra wrote:
> On Thursday, 8 November 2012 at 09:18:54 UTC, Jonathan M Davis
> 
> wrote:
> > In the case of retro, I think that it would be good to have source
> > exposed for
> > std.container's use. It's easy for std.container to understand
> > what retro's
> > supposed to do, and use it accordingly, and I think that it
> > would be silly for
> > it have to call retro on the retroed range to do that. I do
> > agree however that
> > in general, it doesn't make sense to access source.
> 
> Yes, accessing the original range is useful, but AFAIK, container
> doesn't use retro in any way. It does it with take (which also
> has a source field). For take, there is no way to extract the
> source other than with a specialized function.

std.container doesn't use retro right now, but it really should (though that 
would require externalizing retro's return type). For instance, what would you 
do if you had to remove the last five elements from a DList? You can't simply 
take the last 5 and pass that to remove with something like take(retro(list[], 
5)) or retro(take(retro(list[], 5))), because the resulting type is 
unrecognized by remove. You're forced to do something like

list.remove(popFrontN(list[], walkLength(list[]) - 5));

which is highly inefficient.

> regarding retro, I find it silly it has a "source" field at all,
> when the original could just be retrieved using retro again (and
> just as efficiently). I don't see any way using source over retro
> could be useful to anyone at all, except for actually
> implementing retro().retro() itself (in which case a _source
> would have done just as well).

Andrei has expressed interest in having _all_ ranges (or at least a sizeable 
number of them) expose source. That being the case, using retro to get at the 
original doesn't really make sense. That's what source is for. retro naturally 
returns the original type when you retro it again, because it avoids type 
proliferation, but that's specific to retro.

Now, how useful source will ultimately be in generic code which doesn't know 
exactly what range type it's dealing with, I don't know. It may ultimately be 
pretty much useless. But even if you have to know what the type is to use it 
appopriately, since they'd generally be exposing the original range through 
source, it makes sense that retro would do the same.

> > As for
> > 
> >> The problem though is in the way the documentation "The
> >> original range can be accessed by using the source property"
> >> and "Applying retro twice to the same range yields the
> >> original range": Looks like we forgot to notice these two
> >> sentences are contradicting.
> > 
> > I don't see anything contradictory at all. If you call retro on
> > a retroed
> > range, you get the original. If you access source, you get the
> > original.
> > Where's the contradiction?
> 
> In "If you access source, you get the original". This is only
> true if the "The original" is not itslef already a retro. This
> mind sound silly, but you may not have control of this. For
> example,

It makes perfect sense that retro would return the same type when retro is 
called on an already retroed range. And that's the _only_ case where source 
wouldn't exist. The real problem with relying on source from the type returned 
by retro is the fact that the result could be _another_ range which exposes source 
rather than retro - e.g. retro(retro(take(range, 5))). The docs aren't really 
incorrect in either case. It's just that in that one case, the result is a 
different type than the rest.

I have no problem with removing mention of source from the docs. I don't think 
that it should be used normally. I do think that it makes sense to have it, 
but it makes sense primarily as part of the general approach of giving ranges 
source members. Certainly, using source directly after a call to retro makes 
no sense. For using source to _ever_ make sense (in general, not just with 
retro), you need to know that you're dealing with a wrapper range. So, using 
source immediately after having called a function which potentially returns a 
wrapper range doesn't really make sense. It makes sense when you already know 
that you're dealing with a wrapper range (e.g. you already know that it's a 
Take!Whatever), and even then, I think that it primarily makes sense when 
you're looking for a specific type that's being wrapped (as is the case with 
std.container) rather than when dealing with a generic type which was wrapped.
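
As a small illustration of that "known wrapper" case, a toy example using take, whose source field was mentioned earlier (filter is used only to get a non-sliceable range so that take actually wraps it):

import std.algorithm : equal, filter;
import std.range : take;

void main()
{
    auto evens = filter!(x => x % 2 == 0)([1, 2, 3, 4, 5, 6, 7, 8]);
    auto t = take(evens, 2);   // a Take wrapper, since evens has no slicing
    assert(t.equal([2, 4]));
    // code that knows it has a Take can reach the wrapped range via source
    assert(t.source.equal([2, 4, 6, 8]));
}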

- Jonathan M Davis


Re: Proposal to deprecate "retro.source"

2012-11-08 Thread Mehrdad

Isn't this easy to fix?

Just make sure typeof(retro(r)) == typeof(retro(retro(retro(r))))

so that you always get back a retro'd range.
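
For what it's worth, the current unwrapping behaviour already satisfies that particular identity; a quick compile-time check, relying only on the documented rule that retro of a retro yields the original:

import std.range : retro;

void main()
{
    int[] a = [1, 2, 3];
    auto r = retro(a);
    static assert(is(typeof(retro(r)) == int[]));            // unwraps to the original type
    static assert(is(typeof(retro(retro(r))) == typeof(r))); // wrapping again gives the same Retro type
}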


Re: deprecate deprecated?

2012-11-08 Thread Dmitry Olshansky

11/8/2012 12:13 PM, Don Clugston wrote:

On 07/11/12 00:56, Walter Bright wrote:

I know there's been some long term unhappiness about the deprecated
attribute - it's all-or-nothing approach, poor messages, etc. Each
change in it changes the language and the compiler.

Perhaps it could be done with a user defined attribute instead?

Anyone want to take on the challenge?


That *cannot* fix the problem.
The problem is not with the deprecated attribute at all, it's with the
command line switches.


That and it taking a looong time to add sensible "don't use 'this' use 
'that'" messages to deprecated.


--
Dmitry Olshansky


Re: How do you remove/insert elements in a dynamic array without allocating?

2012-11-08 Thread Jonathan M Davis
On Thursday, November 08, 2012 21:39:53 Dmitry Olshansky wrote:
> The ugly truth is that in std.container even the most primitive collections
> are not tested well enough.
> 
> The stuff should have obligatory notes about it being *experimental*
> somewhere prominent so that people don't get tied strongly to its
> current behavior prematurely and don't get appalled because of the
> amount of bugs lurking inside.
> 
> That and it being quite novel in general (sealed containers, only range
> iteration/insertion/removal etc.) more than justifies the *experimental*
> status.

I agree. And it's in definite need of an overhaul. It's a good start, but in 
particular, all of the range stuff in it is completely new, and it definitely 
needs work even just from an API standpoint. I expect that in some cases, 
we're going to either have to break people's code or create std.container2 to 
sort it out, and that's not even taking the custom allocators into account 
(though depending, they may be able to be added without actually breaking any 
code).

- Jonathan M Davis


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Dmitry Olshansky

11/7/2012 5:40 PM, deadalnix wrote:
[snip]

[], because @ should be reserved for future language keywords.

Whenever people post suggested language features that require some
marking, they introduce a new @attribute, because introducing a plain
keyword breaks code. If you have @UDAs, this further limits language
expansion.

Example: let's say you want to introduce a "nogc" mark:
1. Not a nogc keyword, that could break "bool nogc;"
2. If you have @, @nogc could break an "enum nogc;" attribute.
3. Now you're stuck with __nogc or #nogc or something uglier.

There is a familiar-to-other-langauges advantage to @, but there is a
better-than-other-languages advantage to [].

My thoughts,
NMS


I think D has already too many feature, and that many of them can be
implemented as attribute + AST processing.


+1



D should work toward getting this AST stuff and stop adding new keywords
all the time.


--
Dmitry Olshansky


Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread Jakob Ovrum

On Thursday, 8 November 2012 at 17:27:42 UTC, Tommi wrote:
..and it got me thinking, couldn't we just get rid of dynamic 
polymorphism and classes altogether? Doesn't static 
polymorphism through the use of duck typing and member function 
delegates provide all that we need?


For a lot of programs (or parts of programs) that currently use 
runtime polymorphism, the answer seems to be yes, and Phobos is 
very good at helping D programmers do their polymorphism at 
compile-time.


But dynamic polymorphism is special in that it is just that - 
dynamic.


You can decide which implementation to use at runtime rather than 
having to do it at compile-time. When this runtime component is 
necessary, there is no replacement for runtime polymorphism.


As for function pointers and delegates, class-based polymorphism 
provides a couple of additional niceties: for one, vtables are 
created at compile-time. Secondly, it provides a lot of syntax 
and structure to the system that you don't have with arbitrary 
function pointers or delegates.


Emulating OOP (no, not Object *Based* Programming) with function 
pointers is a real pain. Without classes, we'd only be marginally 
better off than C in this area, thanks to delegates.
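
As a rough illustration of the distinction (toy code with made-up names, not from the thread):

import std.stdio;

interface Shape { void draw(); }
class Circle : Shape { void draw() { writeln("circle"); } }
class Square : Shape { void draw() { writeln("square"); } }

// no interface needed: any type with a draw() method will do
struct Sprite { void draw() { writeln("sprite"); } }

// compile-time (static) polymorphism via duck typing
void render(T)(T shape) { shape.draw(); }

// run-time (dynamic) polymorphism via the interface
void render(Shape shape) { shape.draw(); }

void main(string[] args)
{
    Shape s;
    if (args.length > 1) s = new Circle;
    else                 s = new Square;
    render(s);         // which draw() runs is decided at run time
    render(Sprite());  // resolved at compile time; Sprite never implements Shape
}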





Re: How do you remove/insert elements in a dynamic array without allocating?

2012-11-08 Thread Dmitry Olshansky

11/7/2012 2:44 PM, monarch_dodra wrote:

On Wednesday, 7 November 2012 at 10:18:51 UTC, Jonathan M Davis wrote:

By the way, monarch_dodra, since you've been messing around with Array
recently, I would point out that it looks like setting the length
doesn't work
properly if you set it greater than the current length, let alone
greater than
the current capacity.  _capacity is not adjusted if newLength is
greater than
it, and no calls to GC.removeRange or GC.addRange are made, so it
doesn't look
like newly allocated memory is being tracked by the GC like it should if
length is allocating it.


I kind of wanted to stay out of that part of the code, but good catch.
This creates an assertion error:

 auto a = Array!int();
 a.length = 5;
 a.insertBack(1);

Because at the point of insert back, length > capacity...

I'll correct the issues anyways. Good point about the GC.removeRange and
GC.addRange too.


The ugly truth is that in std.container even the most primitive collections 
are not tested well enough.


The stuff should have obligatory notes about it being *experimental* 
somewhere prominent so that people don't get tied strongly to its 
current behavior prematurely and don't get appalled because of the 
amount of bugs lurking inside.


That and it being quite novel in general (sealed containers, only range 
iteration/insertion/removal etc.) more than justifies the *experimental* 
status.



--
Dmitry Olshansky


Re: Slicing static arrays should be @system

2012-11-08 Thread Dmitry Olshansky

11/7/2012 5:54 AM, Jonathan M Davis wrote:

On Wednesday, November 07, 2012 12:44:26 Daniel Murphy wrote:

Slicing static arrays on the stack is equivalent to taking the address of a
local variable, which is already illegal in SafeD.


Which is what I'm arguing, but a number of people seem to really not like the
idea of making slicing static arrays on the stack @system even though it _is_
the same thing as taking the address of a local variable, which is @system.



Same here. No matter how convenient this loophole is, it just can't be safe.
I recall SafeD's motto was along the lines of
"memory safety is not negotiable; almost everything else is".


--
Dmitry Olshansky


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jacob Carlborg

On 2012-11-08 17:53, simendsjo wrote:


I guess it depends. I find it easier to see that it's an attribute,
especially when you annotate it. But it's harder to grep for.

Is foo an attribute or not?
   @serializable
   @xmlRoot
   @attribute
   @displayName("foo")
   struct foo {}

Is foo an attribute or not?
   @serializable
   @xmlRoot
   @displayName("foo")
   struct @foo {}



I don't know really. In that bottom example, the struct declaration 
almost disappears among all the attributes.


--
/Jacob Carlborg


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jacob Carlborg

On 2012-11-08 17:26, David Nadlinger wrote:


Sorry, I could not resist:
http://cdn.memegenerator.net/instances/400x/29863604.jpg


Hehe, exactly :)

--
/Jacob Carlborg


Re: Uri class and parser

2012-11-08 Thread jerro

Thnx. Got myself some new errors ;)
It seems that std.string.indexOf() does not work at compile 
time. Is there a solution or alternative method for this?


I guess the proper solution would be to make std.string.indexOf 
work at compile time. It looks like changing the first


if (std.ascii.isASCII(c))

line in std.string.indexOf to

if (!__ctfe && std.ascii.isASCII(c))


makes it work at compile time.
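
For anyone unfamiliar with __ctfe: it is an ordinary-looking boolean that is true only while a function is being interpreted at compile time, which is what makes the one-line change above work (both branches must still compile). A tiny self-contained illustration:

size_t whichPath()
{
    if (__ctfe)
        return 0;   // taken during compile-time evaluation
    else
        return 1;   // taken at run time
}

enum ct = whichPath();    // forces CTFE
static assert(ct == 0);

void main()
{
    assert(whichPath() == 1);   // ordinary run-time call
}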


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread simendsjo
On Thursday, 8 November 2012 at 13:19:29 UTC, Jacob Carlborg 
wrote:

On 2012-11-08 11:56, simendsjo wrote:


Or
struct @foo {}
interface @foo {}
enum @foo {}
etc


That syntax looks a bit backwards to me. What if I want to 
annotate the attribute?


@serializable struct @foo {}

Looks a bit confusing which is the name of the attribute and 
the which is the attached annotation.


Vs

@serializable @attribute struct foo {}

No confusion here, "foo" is the name of the attribute, the rest 
is attached annotations.


I guess it depends. I find it easier to see that it's an 
attribute, especially when you annotate it. But it's harder to 
grep for.


Is foo an attribute or not?
  @serializable
  @xmlRoot
  @attribute
  @displayName("foo")
  struct foo {}

Is foo an attribute or not?
  @serializable
  @xmlRoot
  @displayName("foo")
  struct @foo {}



Re: Uri class and parser

2012-11-08 Thread Mike van Dongen

On Thursday, 8 November 2012 at 15:32:59 UTC, jerro wrote:
Something else entirely is the CTFE compatibility of URI. At 
first I thought that because a new instance of URI can be 
created as a const, it would be evaluated at compile time.

This is part of how I test CTFE at the moment:

const URI uri36 = URI.parse("http://dlang.org/");
assert(uri36.scheme == "http");

I tried changing 'const' to 'static' but that resulted in an 
error.
(_adSort cannot be interpreted at compile time, because it has 
no available source code)


Now I'm not sure anymore how to test if my code meets the CTFE 
requirements.


To force something to be evaluated at compile time, you can 
assign it to an enum, like this:


enum uri = URI.parse("http://dlang.org/");


Thnx. Got myself some new errors ;)
It seems that std.string.indexOf() does not work at compile time. 
Is there a solution or alternative method for this?


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread David Nadlinger
On Thursday, 8 November 2012 at 13:19:29 UTC, Jacob Carlborg 
wrote:
That syntax looks a bit backwards to me. What if I want to 
annotate the attribute?


Sorry, I could not resist: 
http://cdn.memegenerator.net/instances/400x/29863604.jpg


David


Re: Uri class and parser

2012-11-08 Thread jerro
Something else entirely is the CTFE compatibility of URI. At 
first I thought that because a new instance of URI can be 
created as a const, it would be evaluated at compile time.

This is part of how I test CTFE at the moment:

const URI uri36 = URI.parse("http://dlang.org/");
assert(uri36.scheme == "http");

I tried changing 'const' to 'static' but that resulted in an 
error.
(_adSort cannot be interpreted at compile time, because it has 
no available source code)


Now I'm not sure anymore how to test if my code meets the CTFE 
requirements.


To force something to be evaluated at compile time, you can 
assign it to an enum, like this:


enum uri = URI.parse("http://dlang.org/");



Re: Uri class and parser

2012-11-08 Thread Mike van Dongen
Been thinking about this for a while now, but I can't decide 
which one I should choose.


Currently there is a class URI which has a static method 
(parse) which returns an instance of URI on success. On failure 
it will return null.


I agree with Jens Mueller on the fact that URI should be a struct 
instead of a class. But then I won't be able to return null 
anymore, so I should throw an exception when an invalid URI has 
been passed to the constructor.


I'm not sure if this is how problems are being handled in Phobos.
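
To make the question concrete, here is a rough sketch of what the struct-plus-exception version might look like (URIException and the parsing details are just placeholders, not my actual code):

import std.exception : enforce;
import std.string : indexOf;

class URIException : Exception
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
    }
}

struct URI
{
    string scheme;
    string rest;

    this(string uri)
    {
        auto i = uri.indexOf("://");
        enforce!URIException(i > 0, "invalid URI: " ~ uri);
        scheme = uri[0 .. i];
        rest   = uri[i + 3 .. $];
    }
}

unittest
{
    assert(URI("http://dlang.org/").scheme == "http");
}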


Something else entirely is the CTFE compatibility of URI. At 
first I thought that because a new instance of URI can be created 
as a const, it would be evaluated at compile time.

This is part of how I test CTFE at the moment:

const URI uri36 = URI.parse("http://dlang.org/");
assert(uri36.scheme == "http");

I tried changing 'const' to 'static' but that resulted in an 
error.
(_adSort cannot be interpreted at compile time, because it has no 
available source code)


Now I'm not sure anymore how to test if my code meets the CTFE 
requirements.



I hope someone can shed some light on my problems and help me 
make a decision.




Re: Const ref and rvalues again...

2012-11-08 Thread martin
On Thursday, 8 November 2012 at 03:07:00 UTC, Jonathan M Davis 
wrote:

Okay. Here are more links to Andrei discussing the problem:

http://forum.dlang.org/post/4f83dbe5.20...@erdani.com
http://www.mail-archive.com/digitalmars-d@puremagic.com/msg44070.html
http://www.mail-archive.com/digitalmars-d@puremagic.com/msg43769.html
http://forum.dlang.org/post/hg62rq$2c2n$1...@digitalmars.com


Thank you so much for these links, Jonathan.

So fortunately the special role of _const_ ref parameters has 
been acknowledged.


From the 2nd  link:

The problem with binding rvalues to const ref is that
once that is in place you have no way to distinguish an
rvalue from a const ref on the callee site. If you do want
to distinguish, you must rely on complicated conversion
priorities. For example, consider:

void foo(ref const Widget);
void foo(Widget);

You'd sometimes want to do that because you want to exploit
an rvalue by e.g. moving its state instead of copying it.
However, if rvalues become convertible to ref const, then
they are motivated to go either way. A rule could be put in
place that gives priority to the second declaration. However,
things quickly get complicated in the presence of other
applicable rules, multiple parameters etc. Essentially it
was impossible for C++ to go this way and that's how rvalue
references were born.

For D I want to avoid all that aggravation and have a simple
rule: rvalues don't bind to references to const. If you don't
care, use auto ref. This is a simple rule that works
promisingly well in various forwarding scenarios.


This is exactly what we propose (to be able to avoid 
pointer/reference indirection for rvalues in some absolutely 
performance-critical cases). Unlike Andrei though, I don't find 
the required overloading rules complicated at all, quite the 
contrary in fact.


From the 3rd link:

Binding rvalues to const references was probably the single
most hurtful design decisions for C++. I don't have time to
explain now, but in short I think all of the problems that
were addressed by rvalue references, and most of the
aggravation within the design of rvalue references, are owed
by that one particular conversion.


Here's where I totally disagree with Andrei. C++ rvalue 
references (T&&) aren't used to distinguish between lvalues and 
rvalues when expecting a _const_ reference (I still have to see a 
use case for 'const T&&'). They are used for _mutable_ references 
and primarily to enforce efficient move semantics in C++, i.e., 
to move _mutable_ rvalue arguments (instead of copying them) and 
to enforce 'Named Return Value Optimization' when returning 
lvalues (by using std::move; goal again is to avoid a redundant 
copy). D fortunately seems to implement move semantics 
out-of-the-box (at least now in v2.060), in both cases, see Rob 
T's posts and my replies in this thread.
Besides implementing move semantics, C++ with its rvalue refs 
also implicitly provides a way to distinguish between _mutable_ 
lvalue and rvalue references and so allows optimized 
implementations - that is something we'd also need in D, but 
that's what we've just covered with regard to the 2nd link.
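
As a small check of that out-of-the-box behaviour (a toy example, in the spirit of the posts referenced above): passing an rvalue struct by value moves it instead of copying it, so the postblit never runs for the temporary.

import std.stdio;

struct S
{
    int[] payload;
    this(this) { writeln("copied"); }   // postblit: runs only on copies
}

void sink(S s) { writeln("got ", s.payload.length, " elements"); }

S make() { return S([1, 2, 3]); }

void main()
{
    sink(make());    // rvalue: moved into the parameter, postblit is not run
    auto x = make();
    sink(x);         // lvalue: copied into the parameter, "copied" is printed
}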


So I still don't see a valid reason to preclude binding rvalues 
to const ref parameters.


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Jacob Carlborg

On 2012-11-08 11:56, simendsjo wrote:


Or
struct @foo {}
interface @foo {}
enum @foo {}
etc


That syntax looks a bit backwards to me. What if I want to annotate the 
attribute?


@serializable struct @foo {}

Looks a bit confusing which is the name of the attribute and the which 
is the attached annotation.


Vs

@serializable @attribute struct foo {}

No confusion here, "foo" is the name of the attribute, the rest is 
attached annotations.


--
/Jacob Carlborg


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Jacob Carlborg

On 2012-11-07 22:16, Timon Gehr wrote:


Text interpolation.

enum d = "c";

mixin(X!"abc@(d)ef"); // -> abccef

I use it mostly for code generation.

mixin(mixin(X!q{
 if(@(a)) @(b);
}));


This is what we need AST macros for.

--
/Jacob Carlborg


Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread Simen Kjaeraas

On 2012-11-08, 11:56, simendsjo wrote:


On Thursday, 8 November 2012 at 09:05:31 UTC, Jacob Carlborg wrote:
I think we should only allow user defined types marked with @attribute,  
i.e.


@attribute struct foo {}
@attribute class foo {}
@attribute interface foo {}
@attribute enum foo {}

And so on.


Or
struct @foo {}
interface @foo {}
enum @foo {}
etc



That's actually a very reasonable idea. votes++

--
Simen


Re: [RFC] Fix `object.destroy` problem

2012-11-08 Thread Regan Heath
On Wed, 07 Nov 2012 21:20:59 -, Denis Shelomovskij  
 wrote:



IMHO we have a huge design problem with `object.destroy`.

Please, carefully read "Now the worst thing with `object.destroy`"  
section of the pull 344 about it:

https://github.com/D-Programming-Language/druntime/pull/344


I think you've misunderstood the purpose of "destroy" and I agree with  
the comments here:

https://github.com/D-Programming-Language/druntime/pull/344#issuecomment-10160177

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Tobias Pankrath

On Thursday, 8 November 2012 at 11:01:13 UTC, Jonas Drewsen wrote:
On Thursday, 8 November 2012 at 00:03:34 UTC, Andrei 
Alexandrescu wrote:

On 11/7/12 10:24 PM, Walter Bright wrote:

On 11/7/2012 11:40 AM, Jonas Drewsen wrote:
If we were to allow for @foobar style UDA then "safe" would 
have to be

a reserved
keyword somehow. Otherwise I do not know what this would 
mean:


struct safe { }
@safe void foobar() { }


Yes, I agree this is a significant problem.



I think it's solvable. The basic approach would be to plop 
types "safe", "nothrow" etc. in object.di and then let them 
just behave like all other arguments.


The original argument that the @ in front of @safe is a way to 
prevent introducing new keywords is all gone then since "safe" 
becomes a normal symbol which is reserved in the library and in 
the compiler to let it do its magic.


Then @safe could just as well be a normal builtin storage class 
called "safe".


* Plopping types "safe","nothrow" etc. into object.di would be 
a breaking change.
* Making @safe, @nothrow into keywords called "safe", "nothrow" 
would be a breaking change.


The latter would be the cleanest cut and not have the 
semantic/parse stage problems that Walter mentioned.


Another option would be to enforce parentheses, @(safe), for UDAs, 
which would make it less nice for the eyes to look at.


/Jonas


Not safe but @safe should become a keyword.


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Jonas Drewsen
On Thursday, 8 November 2012 at 00:03:34 UTC, Andrei Alexandrescu 
wrote:

On 11/7/12 10:24 PM, Walter Bright wrote:

On 11/7/2012 11:40 AM, Jonas Drewsen wrote:
If we were to allow for @foobar style UDA then "safe" would 
have to be

a reserved
keyword somehow. Otherwise I do not know what this would mean:

struct safe { }
@safe void foobar() { }


Yes, I agree this is a significant problem.



I think it's solvable. The basic approach would be to plop 
types "safe", "nothrow" etc. in object.di and then let them 
just behave like all other arguments.


The original argument that the @ in front of @safe is a way to 
prevent introducing new keywords is all gone then since "safe" 
becomes a normal symbol which is reserved in the library and in 
the compiler to let it do its magic.


Then @safe could just as well be a normal builtin storage class 
called "safe".


* Plopping types "safe","nothrow" etc. into object.di would be a 
breaking change.
* Making @safe, @nothrow into keywords called "safe", "nothrow" 
would be a breaking change.


The latter would be the cleanest cut and not have the 
semantic/parse stage problems that Walter mentioned.


Another option would be to enforce parentheses, @(safe), for UDAs, which 
would make it less nice for the eyes to look at.


/Jonas




Re: UDAs - Restrict to User Defined Types?

2012-11-08 Thread simendsjo
On Thursday, 8 November 2012 at 09:05:31 UTC, Jacob Carlborg 
wrote:
I think we should only allow user defined types marked with 
@attribute, i.e.


@attribute struct foo {}
@attribute class foo {}
@attribute interface foo {}
@attribute enum foo {}

And so on.


Or
struct @foo {}
interface @foo {}
enum @foo {}
etc



Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Jacob Carlborg

On 2012-11-08 02:48, Walter Bright wrote:


Consider that if we do that, then someone will need to disambiguate with:

@object.safe

which is ambiguous:

@a.b .c x = 3;

or is it:

@a .b.c x = 3;

?


I would say:

@a.b .c x = 3;

I mean, we read left to right, at least with source code. But as Sönke 
said, that could require parentheses:


@(a.b) .c x = 3;


Another problem is it pushes off recognition of @safe from the parser to
the semantic analyzer. This has unknown forward reference complications.


Just make it a keyword? The current attributes are already keywords from 
a user/developer point of view.


--
/Jacob Carlborg


Re: Proposal to deprecate "retro.source"

2012-11-08 Thread monarch_dodra
On Thursday, 8 November 2012 at 09:18:54 UTC, Jonathan M Davis 
wrote:
In the case of retro, I think that it would be good to have source 
exposed for
std.container's use. It's easy for std.container to understand 
what retro's
supposed to do, and use it accordingly, and I think that it 
would be silly for
it have to call retro on the retroed range to do that. I do 
agree however that

in general, it doesn't make sense to access source.


Yes, accessing the original range is useful, but AFAIK, container 
doesn't use retro in any way. It does it with take (which also 
has a source field). For take, there is no way to extract the 
source other than with a specialized function.


Regarding retro, I find it silly that it has a "source" field at all, 
when the original could just be retrieved using retro again (and 
just as efficiently). I don't see any way using source over retro 
could be useful to anyone at all, except for actually 
implementing retro().retro() itself (in which case a _source 
would have done just as well).
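
A tiny runnable illustration of that point, assuming the current 
std.range behaviour where retro applied to a Retro simply hands back 
its source:

import std.range : retro;

void main()
{
    int[] r = [1, 2, 3, 4];
    auto rr = r.retro();

    assert(rr.retro() == [1, 2, 3, 4]); // double retro: the original again
    assert(rr.source == [1, 2, 3, 4]);  // source: the very same thing
}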



As for

The problem though is in the wording of the documentation: "The 
original range can be accessed by using the source property" 
and "Applying retro twice to the same range yields the 
original range". Looks like we forgot to notice that these two 
sentences are contradictory.


I don't see anything contradictory at all. If you call retro on 
a retroed
range, you get the original. If you access source, you get the 
original.

Where's the contradiction?


In "If you access source, you get the original". This is only 
true if the "The original" is not itslef already a retro. This 
mind sound silly, but you may not have control of this. For 
example,


For example, imagine you want a find function that searches from 
the end, and returns the result of everything before the match:


//
import std.stdio;
import std.algorithm;
import std.range;

auto findBefore(R)(R r, int n)
{
    auto a = r.retro();
    auto result = a.find(n);
    static assert(is(typeof(result) == typeof(r.retro()))); // result is a retro

    return result.source; // Error: undefined identifier 'source'
}

void main()
{
    auto a = [0, 1, 2, 3, 4, 5];
    a.findBefore(3).writeln();         // expect [1, 2, 3]
    a.retro().findBefore(3).writeln(); // expect [5, 4, 3]

    auto b = indexed([2, 1, 3, 0, 4, 5], [3, 1, 0, 2, 4, 5]);
    assert(b.equal(a)); // b is [0, 1, 2, 3, 4, 5]
    b.retro().findBefore(3).writeln(); // produces [2, 1, 3, 0, 4, 5]...
}
//
In "findBefore3", one should expect getting back the "original 
range", but that is not the case. However, using "return 
result.retro()"; works quite well.


Things get even stranger if the underlying range also has a 
"source" field itself (because of the auto return type).


Both the issues can be fixed with "return result.retro();"


I don't think there is anything to be gained using "source". I 
don't think it would be worth deprecating it either, and it is 
not a huge issue, but I think the documentation should favor the 
use of double retro over source, or not even mention source at all.


Fewer surprises are always better.


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Jacob Carlborg

On 2012-11-08 04:34, Marco Leise wrote:


Which features are those? It would likely require a major
rewrite of many routines. Who would want to go through all
that and the following wave of bugs - some of which may have
already occurred in the past.

foreach and scope(...) lowerings are essentially AST
operations. Are there other D features you would implement as
AST processing, maybe a link to an earlier post ?


"synchronized" is another that I think would be easy to do with AST 
macros. Perhaps "with". It's probably not worth replacing existing 
language features with macros just for the sake of it. But for future 
features AST macros could perhaps be used instead.
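
For readers who have not seen such a lowering spelled out, the 
textbook case is scope(exit), which is a purely syntactic rewrite 
into try/finally, exactly the kind of transformation an AST macro 
facility would need to express. The second function below is a 
hand-written approximation of the rewrite, not actual compiler 
output.

import std.stdio : writeln;

void withScopeExit()
{
    writeln("open");
    scope (exit) writeln("close");
    writeln("work");            // prints open, work, close
}

void handLowered()              // roughly what the compiler rewrites it into
{
    writeln("open");
    try
    {
        writeln("work");
    }
    finally
    {
        writeln("close");
    }
}

void main()
{
    withScopeExit();
    handLowered();
}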



How close is Rust currently to offering flexible AST
manipulation for lowerings/attributes/macros, does anyone
know ?


I have no idea. I haven't looked at Rust in this area.

--
/Jacob Carlborg


Re: [ ArgumentList ] vs. @( ArgumentList )

2012-11-08 Thread Jacob Carlborg

On 2012-11-07 20:40, Jonas Drewsen wrote:


If we were to allow for @foobar-style UDAs then "safe" would have to be a
reserved keyword somehow. Otherwise I do not know what this would mean:

struct safe { }
@safe void foobar() { }


The current attributes are keywords from a user/developer point of view.

--
/Jacob Carlborg


Re: What's C's biggest mistake?

2012-11-08 Thread renoX

On Wednesday, 30 December 2009 at 14:32:01 UTC, merlin wrote:

Walter Bright wrote:

http://www.reddit.com/r/programming/comments/ai9uc/whats_cs_biggest_mistake/


That's a big one.  I don't know if it's the biggest, there are 
so many to choose from:


*) lack of standard bool type (later fixed)
*) lack of guaranteed length integer types (later fixed)
*) lack of string type and broken standard library string 
handling (not fixed)

*) obviously wrong type declaration (int v[] not int[] v)



I agree with your previous point, but your type declaration syntax 
is still awful IMHO: declaring int[Y][X] and then indexing with [x][y]. 
I don't like reading type declarations right-to-left and then 
normal code left-to-right.
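
To make the complaint concrete (the array sizes here are arbitrary):

void main()
{
    enum X = 2, Y = 3;

    int[Y][X] m;      // the type reads right-to-left: X elements of int[Y]
    m[1][2] = 42;     // but indexing reads left-to-right as [x][y]

    assert(m[1][2] == 42);
}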




*) grammar not context free (so close, yet so far...)
*) lousy exception handling implementation


You forgot the lack of sane integer overflow behaviour: an undefined 
program on overflow isn't a good default behaviour; it should only be 
a possible optimisation.

Same with array indexing.

renoX




