Re: D future ...

2016-12-19 Thread Tommi via Digitalmars-d
I see a lot of people arguing about D and, sorry to say, at 
times it sounds like a kindergarten. Walter/Andrei are right 
that updates and changes need to be focused on the standard 
library.


Improve the standard library!

Some of the proposals sound very sensible. The library needs to 
be split. Every module needs its own Git repository. People need 
to be able to add standard modules (after approval).


Split the standard library! Forget that no other language does it!

No offense, but where is the standard database library for D? 
There is none. That is just a load of bull. Anybody who wants 
to program in a language expects it to have a standard database 
library, not one where you have to search the package registry 
for it. I have seen one-man projects with more standard library 
support than D.


Add to the standard library!

It's one of the most basic packages. How about a simple web 
server? A lot of languages offer one by default; it gets people 
going. vibe.d is not a simple web server, and it's not a 
standard package.


Add to the standard library!

If you are a low-level programmer, sure, you can write your way 
around it. Despite my PHP handicap I am writing an SQLite 
wrapper for my work. I do not use third-party packages because 
a) I do not know the code and b) I fear the work will be 
abandoned. I can excuse (a) when I know it's the standard 
library, because there will always be people willing to work on 
the standard library.


Add to the standard library!


Documentation:
--

I do not use it.


Improve documentation!


Editor support:
---
Too many people need third-party tools for things that D should 
support out of the box:


dcd - Used for auto completion
dfmt - Used for code formatting
dscanner - Used for static code linting
...

These need to be in the default installation of dmd! It makes 
no sense that they are not included.


Add to the standard distribution!

No offense guys, it's just something I see in a lot of posts: 
the hinting at people to add more to the standard library. That 
little push. But frankly, it's annoying when nothing gets done.


Don't add to the standard library!

People complain about feature X. You tell people to add it to 
the standard library or to bring their project in. But anybody 
who has ever read this forum sees that adding things to the 
language is a LONG process, and a lot of the time ideas get 
shot down very fast.


Don't add to the standard library!

For the standard library there is no process as far as I can 
tell. Experimental at best, where code seems to die a nice, 
long death.


Don't use std.experimental! It's bad!

Just needed to get this off my chest. The problems with D have 
been known for a LONG TIME. Anybody who spends some time reading 
the forums sees the exact same things.


My advice to Walter/Andrei: stop trying to add new things to 
the code.


Don't add anything anywhere!

It works. It's not going anywhere. There are no features you 
can add that people cannot already do without; maybe it takes a 
few more lines, but those are not the issues.


Don't add features!


Focus on improving the other issues like stated above.


Don't work on the compiler or standard library! Don't use your 
skills! Write documentation and learn how to do editor 
integration!


Maybe also deal with some of the speed & bloat issues. If you 
ask people to help, give them the motivation to do so.


Work on the speed & bloat! Wait, what?

Tell people what to do!

Bring more projects into D. When you have people like Webfreak 
making workspace-d to simplify the installation of those almost 
required editor extensions, it tells you that D has an issue.


Do marketing!

Ilya's proposals are too extreme and need a middle ground. But 
he is not wrong.


Seb posted a massive list of modules that could be candidates 
for the standard library, and the response was more or less to 
ignore it. People who work on standard libraries are more 
motivated; bring them into the fold. But it seems that they 
simply get ignored.


Respond to people on the forums!

Like I said, "please work on the standard library" is not the 
answer.


Don't add to the standard library!

Why is there no forum section for proposals about libraries to 
be added to Phobos? (Why is it even still called that? Call it 
the standard library and be done with it; the name only 
confuses new people.) The current forum is a pure spam forum.


Work on the standard library! But discuss in a different forum!

You need a standard-library forum, with sub-sections for 
std.xxx, std.yyy and so on. Let people talk about improvements 
there. If people want to add a new branch, let them put up a 
proposal and do NOT mothball it for months in discussions.


Add more forums!

Hell, the whole "D Programming Language - Development" forum is 
so much spam, it becomes almost useless. Its a distraction to 
post each issue there with 0 responses on 95%.


The forum "D Programming Language - Development" is spam! Even 
though it does not exist!


Sorry for the 

Re: Video of my LDC talk @ FOSDEM'14

2014-05-28 Thread Tommi via Digitalmars-d-announce

On Wednesday, 28 May 2014 at 09:14:18 UTC, Alix Pexton wrote:

On 27/05/2014 1:15 PM, Tommi wrote:

On Monday, 26 May 2014 at 05:59:35 UTC, Kai Nacke wrote:

[..]
http://video.fosdem.org/2014/K4401/Sunday/LDC_the_LLVMbased_D_compiler.webm



I can't watch this on my iPhone.


https://itunes.apple.com/gb/app/vlc-for-ios/id650377962?mt=8


Thanks. It worked. To anyone interested, you open the VLC app, 
click the cone in the upper left corner, select Open Network 
Stream and copy-paste the address of the video.


Re: Video of my LDC talk @ FOSDEM'14

2014-05-27 Thread Tommi via Digitalmars-d-announce

On Monday, 26 May 2014 at 05:59:35 UTC, Kai Nacke wrote:

[..]
http://video.fosdem.org/2014/K4401/Sunday/LDC_the_LLVMbased_D_compiler.webm


I can't watch this on my iPhone.


Re: More radical ideas about gc and reference counting

2014-05-13 Thread Tommi via Digitalmars-d

On Tuesday, 13 May 2014 at 21:16:55 UTC, Timon Gehr wrote:

On 05/13/2014 09:07 PM, Jacob Carlborg wrote:

On 2014-05-13 15:56, Dicebot wrote:

Judging by 
http://static.rust-lang.org/doc/0.6/tutorial-macros.html
those are not full-blown AST macros like the ones you have been 
proposing, more like a hygienic version of C macros.


Hmm, I haven't looked at Rust macros that much.



Again, the following is an example of Rust macros in action. A 
bf program is compiled to Rust code at compile time.


https://github.com/huonw/brainfuck_macro/blob/master/lib.rs

Compile-time computations create an AST which is then spliced. 
Seems full-blown enough to me and not at all like C macros.


I really don't know, but my understanding is that there are two 
types of macros in Rust. There's the simple hygienic kind, which 
has some documentation: 
http://static.rust-lang.org/doc/0.10/guide-macros.html. Then 
there's the other, newer kind of macros (they may be called 
procedural macros... or not), and those are basically 
undocumented at this point. My understanding is that the newer 
kind of macros can do literally anything, like order you a pizza 
at compile-time.


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Tommi via Digitalmars-d

On Monday, 12 May 2014 at 00:50:24 UTC, Walter Bright wrote:

On 5/11/2014 1:59 PM, Timon Gehr wrote:
Borrowed pointers are not even superficially similar to near*. 
They are
compatible with everything else, because they can store data 
that was borrowed

from anywhere else.


As long as those pointers don't escape. Am I right in that one 
cannot store a borrowed pointer into a global data structure?


Perhaps:

struct Test {
    n: &'static int, // [1]
    m: int
}

static val: int = 123;
static mut t: Test = Test { n: &'static val, m: 0 };

fn main() {
    unsafe { // [2]
        let p = &mut t.m;
        *p = 456;
        println!("{} {}", *t.n, t.m); // prints: 123 456
    }
}

[1]: In order to create a static instance of 'Test', the 'n' 
field (which is a borrowed pointer) must be specified as 
pointing at a static immutable (int) variable.


[2]: Any use of static mutable data requires the use of an 
'unsafe' block (similar to @trusted in D)


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Tommi via Digitalmars-d

On Monday, 12 May 2014 at 08:10:43 UTC, Tommi wrote:

Perhaps: [..]


Somewhat surprisingly to me, you can later on change the borrowed 
pointer in the mutable static 'Test' to point at a mutable static 
int:


struct Test {
    n: &'static int
}

static old: int = 111;
static mut new: int = 222;
static mut t: Test = Test { n: &'static old };

fn main() {
    unsafe {
        println!("{}", *t.n); // prints: 111
        t.n = &new; // Can point to a different static
        println!("{}", *t.n); // prints: 222
        // ...but can't point to a local, e.g:
        // let v = 123;
        // t.n = &v; // error: `v` does not live long enough
        new = 333;
        println!("{}", *t.n); // prints: 333
    }
}


Re: More radical ideas about gc and reference counting

2014-05-12 Thread Tommi via Digitalmars-d

On Sunday, 11 May 2014 at 21:43:06 UTC, sclytrack wrote:

I like this owner/unique, borrow thing.

@ is managed (currently reference counted)
~ is owner
& is borrow


I like it too. But a few notes:

1) The managed pointer @T has been deprecated and you should use 
the standard library types Gc<T> and Rc<T> instead.


2) The owned pointer ~T has been largely removed from the 
language and you should use the standard library type Box<T> 
instead.


The basic idea is that if a function needs to have ownership of 
its argument, the function should take its argument by value. And 
if the function doesn't need the ownership, it should take its 
argument either by a mutable or immutable reference (they don't 
like to call it borrowed pointer anymore, it's called simply a 
reference now). Owned types get moved by default when you pass 
them to a function that takes its argument by value. You call the 
'clone' method to make a copy of a variable of an owned type.


Re: A serious security bug... caused by no bounds checking.

2014-04-11 Thread Tommi
On Friday, 11 April 2014 at 12:00:32 UTC, Steven Schveighoffer 
wrote:
On Fri, 11 Apr 2014 00:01:17 -0400, Tommi 
tommitiss...@hotmail.com wrote:


On Friday, 11 April 2014 at 00:52:25 UTC, Steven Schveighoffer 
wrote:
If @safe is just a convention, then I don't see the point of 
having it at all. If it can't be a guarantee, then it's 
pretty much another tech buzzword with no teeth.


In order to have @safe be a guarantee of memory-safety, we 
need to prevent @safe code from calling any @trusted code.


Or manually guarantee the safety of @trusted code.

I should be able to write an application with only @safe 
functions, and trust that phobos has implemented @trusted 
functions properly.


-Steve


I was talking about @safe in the general sense, not only as it 
pertains to phobos.


Re: A serious security bug... caused by no bounds checking.

2014-04-11 Thread Tommi
On Friday, 11 April 2014 at 13:13:22 UTC, Steven Schveighoffer 
wrote:
On Fri, 11 Apr 2014 08:35:07 -0400, Daniel Murphy 
yebbliesnos...@gmail.com wrote:


Steven Schveighoffer  wrote in message 
news:op.xd3vzecweav7ka@stevens-macbook-pro.local...


No, the author of the @safe code expects bounds checking; it's 
part of the requirements. To compile his code with it off is 
like having a -compilergeneratedhash switch that overrides any 
toHash functions with a compiler-generated one. You are changing 
the agreement between the compiler and the code. When I say 
@safe, I mean I absolutely always want bounds checks.


If you have code that would ever fail a bounds check, that is 
a program error, similar to code that may fail an assertion.


And like assertions, if you would rather the code was as fast 
as possible instead of as safe as possible, you can use a 
compiler switch to disable bounds checks.


The usual switch to do stuff like this is '-release', but 
because @safe functions should still keep the 'no memory 
corruption' guarantee even in release mode, disabling those 
bounds checks was moved into another compiler switch.



If you want to eliminate bounds checks, use @trusted.


No, @trusted means "don't check my code", while @safe + 
-noboundscheck means (mostly) "only check my code at 
compile-time".


Here is the horror scenario I envision:

1. Company has 100kLOC project, which is marked as @safe (I can 
dream, can't I?)
2. They find that performance is lacking, maybe compared to a 
competitor's C++ based code.
3. They try compiling with -noboundscheck, get a large 
performance boost. It really only makes a difference in one 
function (the inner loop one).
4. They pat themselves on the back, and release with the new 
flag, destroying all bounds checks, even bounds checks in 
library template code that they didn't write or scrutinize.

5. Buffer overflow attacks abound.
6. D @safe is labeled a joke


More likely:
6. This company's programming department is labeled a joke.


There should be a way to say: "I still want all the @safety 
checks, except for this one critical array access; I have 
manually guaranteed the bounds." We don't have anything like 
that.


We have array.ptr[idx]



Re: A serious security bug... caused by no bounds checking.

2014-04-11 Thread Tommi
On Friday, 11 April 2014 at 13:44:09 UTC, Steven Schveighoffer 
wrote:
On Fri, 11 Apr 2014 09:35:12 -0400, Tommi 
tommitiss...@hotmail.com wrote:


On Friday, 11 April 2014 at 13:13:22 UTC, Steven Schveighoffer 
wrote:

[..]
6. D @safe is labeled a joke


More likely:
6. This company's programming department is labeled a joke.


Perhaps, but it doesn't change the idea that @safe code had 
memory bugs. What we are saying with @safe is that you CAN'T 
have memory bugs, no matter how incompetent your programmers 
are.


You can't guarantee @safe to be memory-safe in the general case 
without disallowing calls to @trusted, because those incompetent 
programmers can write buggy @trusted functions and call them 
from @safe code.
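
For instance, a minimal sketch of that point (hypothetical 
names, not code from the thread): the compiler checks neither 
the body of the @trusted function nor the promise behind it.

@trusted ref int trustMe(int[] a, size_t i)
{
    return a.ptr[i];      // no bounds check; a wrong 'i' corrupts memory
}

@safe void caller()
{
    int[2] a;
    trustMe(a[], 5) = 42; // compiles as @safe, yet writes out of bounds
}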



There should be a way to say: "I still want all the @safety 
checks, except for this one critical array access; I have 
manually guaranteed the bounds." We don't have anything like 
that.


We have array.ptr[idx]


Not allowed in @safe code.



@trusted ref T unsafeIndex(T)(T[] array, ulong idx)
{
    return array.ptr[idx];
}

There you go.
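
A hedged usage sketch of that helper: the rest of the function 
stays @safe and bounds-checked, and only the audited access goes 
through the @trusted wrapper.

@safe int sum(int[] a)
{
    int total;
    foreach (i; 0 .. a.length)
        total += a.unsafeIndex(i); // unchecked access, vouched for by the loop bounds above
    return total;
}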


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi

On Thursday, 10 April 2014 at 14:13:30 UTC, Steven Schveighoffer
wrote:
On Wed, 09 Apr 2014 13:35:43 -0400, David Nadlinger 
c...@klickverbot.at wrote:


On Tuesday, 8 April 2014 at 20:50:35 UTC, Steven Schveighoffer 
wrote:
This does not sound correct. In NO case should you be able to 
remove bounds checking in @safe code.


It is. In fact, that's the very reason why DMD has 
-noboundscheck in addition to -release.


I meant correct as in not wrong, not correct as in the current 
state of the compiler :)


Otherwise, @safe is just another meaningless convention. Walter?

-Steve


It's funny because just the other day I tried to argue on the 
Rust mailing list that a -noboundscheck flag should be added to 
the Rust compiler. My argument didn't go down very well. But my 
point was that someone at some point might have a genuine need 
for that flag, and that having the option to compile the code 
into an unsafe program doesn't make the language itself any 
less safe.

@safe guarantees memory-safety given that any @trusted code used
doesn't break its promise and that you don't use the
-noboundscheck flag. That doesn't sound like a convention to me.



Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Thursday, 10 April 2014 at 15:00:34 UTC, Steven Schveighoffer 
wrote:
On Thu, 10 Apr 2014 10:55:26 -0400, Tommi 
tommitiss...@hotmail.com wrote:


On Thursday, 10 April 2014 at 14:13:30 UTC, Steven 
Schveighoffer wrote:
On Wed, 09 Apr 2014 13:35:43 -0400, David Nadlinger 
c...@klickverbot.at wrote:


On Tuesday, 8 April 2014 at 20:50:35 UTC, Steven 
Schveighoffer wrote:
This does not sound correct. In NO case should you be able 
to remove bounds checking in @safe code.


It is. In fact, that's the very reason why DMD has 
-noboundscheck in addition to -release.


I meant correct as in not wrong, not correct as in the 
current state of the compiler :)


Otherwise, @safe is just another meaningless convention. 
Walter?


-Steve


It's funny because just the other day I tried to argue on the 
Rust mailing list that a -noboundscheck flag should be added to 
the Rust compiler. My argument didn't go down very well. But my 
point was that someone at some point might have a genuine need 
for that flag, and that having the option to compile the code 
into an unsafe program doesn't make the language itself any 
less safe.


@safe guarantees memory-safety given that any @trusted code 
used doesn't break its promise and that you don't use the 
-noboundscheck flag. That doesn't sound like a convention to 
me.


No, the author of the @safe code expects bounds checking, it's 
part of the requirements. To compile his code with it off is 
like having a -compilergeneratedhash switch that overrides any 
toHash functions with a compiler generated one. You are 
changing the agreement between the compiler and the code.


Obviously, if such compiler flags (or any others) exist, their 
existence and behaviour have been specified in the binding 
agreement between the compiler and the source code, and thus no 
breach of contract has happened if such compiler flags were 
used.


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
A compiler flag is a blunt instrument. It affects all code the 
compiler touches, which may or may not affect code that you are 
intending to change.


Yes, such a compiler flag is a blunt and dangerous instrument 
and everybody should stay away from it. But everybody agrees on 
those points already. That's _not_ what you need to prove to 
show that such a flag shouldn't exist. What you need to show is 
that no-one will ever find themselves in a situation where such 
a blunt instrument would be useful.


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Thursday, 10 April 2014 at 17:37:53 UTC, Steven Schveighoffer 
wrote:
On Thu, 10 Apr 2014 13:25:25 -0400, bearophile 
bearophileh...@lycos.com wrote:



Steven Schveighoffer:

No, the author of the @safe code expects bounds checking, 
it's part of the requirements.


Take a look at the Ada language. It has bounds checking, and 
its compilers have a switch to disable those checks. If you want 
the bounds checking, don't use the switch that disables it. 
Safety doesn't mean having no way to work around safety locks; 
it means having nice, handy locks that are active by default. In 
a systems language, total safety is an illusion. Better to focus 
on real-world safety than on an illusion of theoretical safety.


That's why we have @trusted.


No. @trusted is for code that cannot be guaranteed to be 
memory-safe by the compiler (either at runtime or at 
compile-time), but where the programmer still wants to promise 
that the code is memory-safe. Array bounds checking doesn't fall 
under that heading; it can be checked by the compiler.


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Thursday, 10 April 2014 at 17:43:42 UTC, Steven Schveighoffer 
wrote:

[..]
Note that I could also find it useful to disable const checks, 
or override checks, or dynamic casts. That doesn't mean I should 
get a compiler switch for it.


There's no point in turning off any safety / correctness checks 
that can be performed at compile-time. But I could see an 
argument made for being able to disable any given category of 
runtime safety checks.


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Thursday, 10 April 2014 at 17:56:04 UTC, Steven Schveighoffer 
wrote:
@safe code can be marked as @trusted instead, and nothing 
changes, except @trusted code can have bounds checks removed. 
How does this not work as a solution?


A compiler flag for disabling bounds checking is a blunt 
instrument. But using search & replace to change each @safe to 
@trusted is a blunt _and_ inconvenient instrument.


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Thursday, 10 April 2014 at 18:13:30 UTC, Steven Schveighoffer 
wrote:
On Thu, 10 Apr 2014 14:08:48 -0400, Tommi 
tommitiss...@hotmail.com wrote:


On Thursday, 10 April 2014 at 17:56:04 UTC, Steven 
Schveighoffer wrote:
@safe code can be marked as @trusted instead, and nothing 
changes, except @trusted code can have bounds checks removed. 
How does this not work as a solution?


A compiler flag for disabling bounds checking is a blunt 
instrument. But using search & replace to change each @safe to 
@trusted is a blunt _and_ inconvenient instrument.


So don't use it bluntly. For example, disabling bounds checks 
on the args array in main will not help your performance.


Sometimes you need that blunt instrument. I wasn't complaining 
about that.



As a general rule, first profile, then optimize.


Exactly. I profile the difference between running with and 
without bounds checking. If the difference is deemed negligible 
for our purposes, we don't spend time and money carefully 
optimizing away bounds checks that have been analyzed to be 
reasonably safe to remove. You need the compiler flag to 
potentially save you all that trouble.
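
For what it's worth, a rough sketch of the kind of measurement I 
mean (illustrative only; it uses the std.datetime.StopWatch API 
as it exists at the time of writing):

import std.datetime : StopWatch;
import std.stdio : writeln;

void main()
{
    auto a = new int[](10_000_000);
    StopWatch sw;

    sw.start();
    foreach (i; 0 .. a.length) a[i] += 1;     // bounds-checked indexing
    sw.stop();
    writeln("checked:   ", sw.peek().msecs, " ms");

    sw.reset();
    sw.start();
    foreach (i; 0 .. a.length) a.ptr[i] += 1; // unchecked indexing via .ptr
    sw.stop();
    writeln("unchecked: ", sw.peek().msecs, " ms");
}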


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Thursday, 10 April 2014 at 19:48:16 UTC, Steven Schveighoffer 
wrote:
On Thu, 10 Apr 2014 15:38:37 -0400, Tommi 
tommitiss...@hotmail.com wrote:


On Thursday, 10 April 2014 at 18:13:30 UTC, Steven 
Schveighoffer wrote:

As a general rule, first profile, then optimize.


Exactly. I profile the difference between running with and 
without bounds checking. If the difference is deemed negligible 
for our purposes, we don't spend time and money carefully 
optimizing away bounds checks that have been analyzed to be 
reasonably safe to remove. You need the compiler flag to 
potentially save you all that trouble.


This is a weak argument. If you need to optimize, do it. Bounds 
checking is one of a thousand different possible explanations 
for slow code. You have to weigh that remote possibility with 
the threat of accidentally/inadvertently neutering @safe.


You also exaggerate the cost of changing a few @safe to 
@trusted. The cost of adding the -noboundscheck flag to the 
build system in the right places may be just as significant.


-Steve


Okay, I give up. You win this debate. Let's prevent D programmers 
from removing bounds checks from their programs with a compiler 
flag.


Re: A serious security bug... caused by no bounds checking.

2014-04-10 Thread Tommi
On Friday, 11 April 2014 at 00:52:25 UTC, Steven Schveighoffer 
wrote:
If @safe is just a convention, then I don't see the point of 
having it at all. If it can't be a guarantee, then it's pretty 
much another tech buzzword with no teeth.


In order to have @safe be a guarantee of memory-safety, we need 
to prevent @safe code from calling any @trusted code.


Re: protocol for using InputRanges

2014-03-23 Thread Tommi

On Sunday, 23 March 2014 at 00:50:34 UTC, Walter Bright wrote:
It's become clear to me that we've underspecified what an 
InputRange is. The normal way to use it is:


while (!r.empty) {
    auto e = r.front;
    ... do something with e ...
    r.popFront();
}

no argument there. But there are two issues:

1. If you know the range is not empty, is it allowed to call 
r.front without calling r.empty first?


If this is true, extra logic will need to be added to r.front 
in many cases.


2. Can r.front be called n times in a row? I.e. is calling 
front() destructive?


If true, this means that r.front will have to cache a copy in 
many cases.


Is InputRange supposed to be a one-pass range, and am I supposed 
to be able to use InputRange as a lightweight wrapper for a 
stream? If the answer is yes, then I think the fundamental issue 
is that the empty-front-popFront interface is not optimal for 
something like InputRange. But given that's the interface we 
have, I think that an InputRange's front must be allowed to be 
destructive, because the stream it could potentially be wrapping 
is destructive.
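
As a sketch of the kind of extra logic that implies, a wrapper 
over a destructive read (the readByte delegate here is 
hypothetical) has to cache the current element so that front can 
be called repeatedly:

struct ByteStreamRange
{
    int delegate() readByte; // destructive read; returns -1 at end of stream

    private int cached;
    private bool primed;

    private void prime()
    {
        if (!primed) { cached = readByte(); primed = true; }
    }

    @property bool empty() { prime(); return cached == -1; }
    @property int front()  { prime(); return cached; }
    void popFront()        { prime(); primed = false; }
}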


Re: protocol for using InputRanges

2014-03-23 Thread Tommi

On Sunday, 23 March 2014 at 07:54:12 UTC, Jonathan M Davis wrote:


If calling front were destructive, that would break a lot of 
code.


I thought that breaking existing code meant either causing 
existing code to do something it wasn't supposed to do or 
causing existing code not to compile, but you're using it in the 
sense of making existing code no longer conform to the language 
specification. Are you sure that's the correct use of the 
expression?


Re: enum behaivor. another rehash of the topic

2013-12-15 Thread Tommi

On Sunday, 15 December 2013 at 12:40:53 UTC, monarch_dodra wrote:

On Sunday, 15 December 2013 at 09:38:28 UTC, deadalnix wrote:
Resulting in people giving names like TestT1, TestT2 as enum 
values in C++. As a result, you end up with the same verbosity 
as in D, without the possibility of using 'with'.


I've usually seen the namespace or struct approach, eg:

namespace CheckerBoardColor
// or struct CheckerBoardColor
{
    enum Enumeration
    {
        Red,
        Black,
    };
};

This allows using CheckerBoardColor::Red, which (IMO) is nice 
and verbose. You can use "using namespace CheckerBoardColor" 
for the equivalent of 'with' (namespace only).


Unfortunately, the actual enum type is 
CheckerBoardColor::Enumeration, which is strangely verbose.


I'd rather do this:

namespace CheckerBoardColorNamespace
{
    enum CheckerBoardColor { Red, Black };
}
using CheckerBoardColorNamespace::CheckerBoardColor;

auto v = CheckerBoardColor::Red;

int main()
{
    using namespace CheckerBoardColorNamespace;
    auto v = Red;
}

...and you get to have a nice name for the enum type.
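
For comparison, the D side of this (the 'with' mentioned above), 
as I understand it:

enum CheckerBoardColor { red, black }

void main()
{
    with (CheckerBoardColor)
    {
        auto c = red; // no CheckerBoardColor. prefix needed inside the with block
        assert(c == CheckerBoardColor.red);
    }
}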


Re: enum behaivor. another rehash of the topic

2013-12-15 Thread Tommi

On Sunday, 15 December 2013 at 11:27:51 UTC, bearophile wrote:
And some people have criticized the verbosity in special 
situations, like this:


enum Foo { good, bad }
void bar(Foo f) {}
void main() {
    // bar(bad); // Not enough
    bar(Foo.bad);
}


A related discussion:
http://forum.dlang.org/thread/lssmuukdltmooehln...@forum.dlang.org


Re: Some problems with std.typecons.Nullable. It automaticaly calls get and fails (when null)

2013-12-09 Thread Tommi

Perhaps it's a result of a compiler-bug I reported over here:
https://d.puremagic.com/issues/show_bug.cgi?id=11499


Re: Feature request: Bringing mixed-in operators and constructors to the overload set

2013-11-19 Thread Tommi

On Tuesday, 12 November 2013 at 01:17:20 UTC, deadalnix wrote:

On Monday, 11 November 2013 at 21:58:44 UTC, Tommi wrote:

Filed an enhancement request:
https://d.puremagic.com/issues/show_bug.cgi?id=11500


Everything should work the same way.


Do you mean that what I'm reporting is a compiler-bug rather than 
a feature request?


Re: Feature request: Bringing mixed-in operators and constructors to the overload set

2013-11-19 Thread Tommi

On Tuesday, 19 November 2013 at 22:51:10 UTC, Timon Gehr wrote:

On 11/19/2013 11:13 AM, Tommi wrote:

On Tuesday, 12 November 2013 at 01:17:20 UTC, deadalnix wrote:

On Monday, 11 November 2013 at 21:58:44 UTC, Tommi wrote:

Filed an enhancement request:
https://d.puremagic.com/issues/show_bug.cgi?id=11500


Everything should work the same way.


Do you mean that what I'm reporting is a compiler-bug rather 
than a

feature request?


I'd argue yes, but similar issues have surprisingly attracted 
some controversy in the past. (The constructor aliasing is an 
enhancement though as it extends the language grammar.)


Hmmm... decisions decisions. I changed it from enhancement to bug 
(normal priority) anyway.


Re: How is std.traits.isInstanceOf supposed to work?

2013-11-11 Thread Tommi

Filed a bug report:
https://d.puremagic.com/issues/show_bug.cgi?id=11499


Re: Feature request: Bringing mixed-in operators and constructors to the overload set

2013-11-11 Thread Tommi

Filed an enhancement request:
https://d.puremagic.com/issues/show_bug.cgi?id=11500


Feature request: Bringing mixed-in operators and constructors to the overload set

2013-11-10 Thread Tommi
We can bring mixed-in methods to the desired overload set, but 
not operators or constructors. Here's what I mean:


mixin template methodMix()
{
    void foo(int n) { }
}

mixin template operatorMix()
{
    void opBinary(string op)(int n) { }
}

mixin template ctorMix()
{
    this(int n) { }
}

struct MethodTest
{
    mixin methodMix mix;

    alias foo = mix.foo;

    void foo(string s) { }
}

struct OperatorTest
{
    mixin operatorMix mix;

    alias opBinary = mix.opBinary;

    void opBinary(string op)(string s) { } // [1]
}

struct CtorTest
{
    mixin ctorMix mix;

    // If only I could do the following to bring the
    // mixed-in constructor to the overload set:
    //alias this = mix.this;

    this(string s) { }
}

void main()
{
    MethodTest mt;
    mt.foo(3);

    OperatorTest ot;
    ot + 3;

    auto ct = CtorTest(3); // [2]
}
-
1. Error: template test.OperatorTest.opBinary(string op)(string 
s) conflicts with alias test.OperatorTest.opBinary
2. Error: constructor test.CtorTest.this (string s) is not 
callable using argument types (int)
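
One workaround sketch for the operator case (not the feature I'm 
asking for, and the names here are made up): give the mixin a 
differently named implementation and forward to it from a 
hand-written operator, so both overloads end up in the same 
overload set:

mixin template operatorImpl()
{
    void opBinaryImpl(string op)(int n) { }
}

struct OperatorTest2
{
    mixin operatorImpl;

    void opBinary(string op)(int n) { opBinaryImpl!op(n); }
    void opBinary(string op)(string s) { }
}

void main()
{
    OperatorTest2 ot;
    ot + 3;       // int overload, via the forwarder
    ot + "three"; // string overload
}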


How is std.traits.isInstanceOf supposed to work?

2013-11-10 Thread Tommi

The documentation says:

template isInstanceOf(alias S, T)
---
Returns true if T is an instance of the template S.

But is isInstanceOf supposed to return true if T is a subtype of 
an instance of the template S (but is not itself an instance of 
the template S)? Currently it does return true if T is a struct, 
and doesn't if it's a class:


import std.traits;

struct SuperSt(T, int size)
{}

struct SubSt
{
    SuperSt!(short, 4) _super;
    alias _super this;
}

class SuperCl(T, U)
{}

class SubCl : SuperCl!(int, char)
{}

void main()
{
    static assert(isInstanceOf!(SuperSt, SubSt));
    static assert(!isInstanceOf!(SuperCl, SubCl));
}


Re: How is std.traits.isInstanceOf supposed to work?

2013-11-10 Thread Tommi
On Sunday, 10 November 2013 at 23:43:21 UTC, TheFlyingFiddle 
wrote:
The template isInstanceOf checks to see if the second parameter 
is a template instantiation of the first parameter.


But in this example of mine:
isInstanceOf!(SuperSt, SubSt)
...the second parameter is not a template instantiation of the 
first parameter, yet isInstanceOf evaluates to true.



I think what this all boils down to is that this is a 
compiler-bug:


struct A(T)
{}

struct B
{
    A!int _a;
    alias _a this;
}

void main()
{
    static assert(!is(B == A!int)); // OK
    static assert(is(B == A!T, T)); // BUG (shouldn't compile)
}


Re: How is std.traits.isInstanceOf supposed to work?

2013-11-10 Thread Tommi

On Sunday, 10 November 2013 at 23:34:08 UTC, Dicebot wrote:


I'd say it works as expected. 'alias this' is not equivalent to 
inheritance; it allows a type to completely act as another.


According to TDPL, 'alias this' makes the aliasing type a 
'subtype' of the aliased type. If type X is a subtype of type Y, 
it doesn't mean that X can completely act as Y. It means that 
objects of type X can completely act as objects of type Y. Which 
is not the same thing.
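
A small sketch of that distinction as I see it (made-up names): 
the object forwards to the aliased member, but the type itself 
does not:

struct Y
{
    static int f() { return 1; }
}

struct X
{
    Y y;
    alias y this;
}

void main()
{
    X x;
    assert(x.f() == 1); // an object of X acts as an object of Y
    // auto n = X.f();  // error: the type X does not act as the type Y
}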


Re: s/type tuple/template pack/g please

2013-08-22 Thread Tommi
On Wednesday, 21 August 2013 at 17:53:21 UTC, Andrei Alexandrescu 
wrote:
So: shall we use template pack going forward exclusively 
whenever we refer to that stuff?


I think it should be called either a "compile-time list" or a 
"static list", because it's a list of things which can be 
referred to at compile-time.


import std.typetuple;

void main()
{
    int n;
    alias T = TypeTuple!(int, 4, n);

    static assert(T[1] == 4);
    T[2]++;
    assert(n == 1);
}


Re: s/type tuple/template pack/g please

2013-08-22 Thread Tommi

On Thursday, 22 August 2013 at 11:37:53 UTC, Tommi wrote:

I think it should be called either a compile-time list ...


And if we go with compile-time list, then we can follow the 
example of std.regex.ctRegex and name the thing ctList.


Re: s/type tuple/template pack/g please

2013-08-22 Thread Tommi

On Thursday, 22 August 2013 at 12:06:11 UTC, Michel Fortin wrote:
On 2013-08-22 11:55:55 +, Tommi 
tommitiss...@hotmail.com said:



On Thursday, 22 August 2013 at 11:37:53 UTC, Tommi wrote:

I think it should be called either a compile-time list ...


And if we go with compile-time list, then we can follow the 
example of std.regex.ctRegex and name the thing ctList.


Will .tupleof become .ctlistof?


I think it could be .fieldsof
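
For anyone following along, a quick sketch of what .tupleof 
currently gives you (whatever it ends up being called):

struct S { int a; double b; }

void main()
{
    S s;
    s.tupleof[0] = 1; // the fields of s, as a compile-time list of lvalues
    static assert(S.tupleof.length == 2);
    assert(s.a == 1);
}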


Re: A possible suggestion for the Foreach loop

2013-08-21 Thread Tommi

On Wednesday, 21 August 2013 at 02:46:06 UTC, Dylan Knutson wrote:

[..]
--
T foo(T)(ref T thing)
{
    thing++; return thing * 2;
}

foreach(Type; TupleType!(int, long, uint))
{
    unittest
    {
        Type tmp = 5;
        assert(foo(tmp) == 12);
    }

    unittest
    {
        Type tmp = 0;
        foo(tmp);
        assert(tmp == 1);
    }
}
--
[..]


Why not just do this:

import std.typetuple;

T foo(T)(ref T thing)
{
    thing++; return thing * 2;
}

unittest
{
    foreach(Type; TypeTuple!(int, long, uint))
    {
        {
            Type tmp = 5;
            assert(foo(tmp) == 12);
        }

        {
            Type tmp = 0;
            foo(tmp);
            assert(tmp == 1);
        }
    }
}


Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi
On Thursday, 15 August 2013 at 02:30:54 UTC, Jonathan M Davis 
wrote:

On Thursday, August 15, 2013 02:57:25 Tommi wrote:

Thus, my point is this:
Shouldn't this magical property of static arrays (implicitly
converting to dynamic array during type deduction) extend to 
all

type deduction situations?


No. If anything, we should get rid of the implicit conversion 
from a static
array to a dynamic one. It's actually unsafe to do so. It's 
just like if you
implicitly converted a local variable to the address of that 
local variable
when you passed it to a function that accepted a pointer. IMHO, 
it never
should have been allowed in the first place. At least with 
taking the address
of a local variable, the compiler treats it as @system. The 
compiler doesn't

currently do that with taking the slice of a static array.

http://d.puremagic.com/issues/show_bug.cgi?id=8838

If you want a dynamic array from a static one, then slice it. 
That works just
fine with templated functions, and makes what you want 
explicit. Also, making
it so that IFTI instantiated templates with the dynamic array 
type when given
a static array would make it so that you would have to 
explicitly instantiate
any templates where you actually wanted to pass a static array, 
which is a

definite negative IMHO.

At this point, I don't really expect that it'll be changed so 
that static
arrays do not implicitly convert to dynamic ones (much as it 
would be ideal to
make that change), but I really don't think that we should do 
anything to make

the problem worse by adding yet more such implicit conversions.

- Jonathan M Davis


Now I feel like these three different notions are getting 
somewhat conflated:

1) memory safety
2) implicit conversion
3) implicit conversion during type deduction

Memory safety
-
If the following code is treated as @system code (which it is):

void getLocal(int* p) { }

void main()
{
    int n;
    getLocal(&n);
}

...then the following code should be treated as @system code as 
well (which it currently isn't):


void getLocal(int[] da) { }

void main()
{
    int[5] sa;
    getLocal(sa[]); // Notice the _explicit_ conversion
}

...There's no two ways about it. But implicit conversion has 
nothing to do with this. If you want to disallow implicit 
conversion because it's a bug prone programming construct, then 
you have a valid reason to do so (and it would be fine by me). 
But if you want to disallow implicit conversion from static array 
to a slice because it's not _always_ memory safe, then you're 
doing it for the wrong reason: e.g. the following code is memory 
safe even though we're using the implicit conversion:


void getGlobal(int[] da) { }
int[5] gsa;

void main()
{
    getGlobal(gsa); // implicit conversion
}

Like I said: implicit conversion doesn't really have anything to 
do with the potential memory safety issues.


Implicit conversion VS Implicit conversion during type deduction

I don't think that you failed to see the distinction between 
these two things, but because someone might, I'll talk about this 
a bit more.


This is regular implicit conversion (nothing weird or magical 
about it):


void foo(int[] da) { }
int[3] sa;
foo(sa); // implicit conversion

This is implicit conversion during type deduction (totally weird 
and magical):


void foo(T)(T[] da) { }
int[3] sa;
foo(sa); // implicit conversion during type deduction

That above example about implicit conversion during type 
deduction is like writing the following C++ code and having it 
actually compile:


template <typename T>
struct DynamicArray { };

template <typename T>
struct StaticArray {
    operator DynamicArray<T>() const
    {
        return DynamicArray<T>{};
    }
};

template <typename T>
void foo(DynamicArray<T> da) { }

void bar(DynamicArray<int> da) { }

int main(int argc, char *argv[])
{
    StaticArray<int> sa;
    foo((DynamicArray<int>)sa); // OK: explicit conversion
    bar(sa); // OK: regular implicit conversion
    foo(sa); // Error: No matching function call to 'foo'
    return 0;
}

It is this implicit conversion during type deduction that makes 
D's static arrays' behavior illogical and inconsistent (because 
it doesn't happen in all type deduction contexts). Whereas 
regular implicit conversion may be a dangerous or an unsafe 
programming tool, there's nothing illogical about it.


I don't want to increase the number of implicit conversions. All 
I'm trying to do is make D's static arrays behave more logically. 
And there are two ways to do this:
1) make static arrays free to implicitly convert to dynamic array 
in all type deduction situations
2) disallow static arrays from implicitly converting during type 
deduction


Notice that in order to make static arrays' behavior logical, 
there's no need to disallow them from implicitly converting to 
dynamic arrays in the general sense (just during type deduction).


Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi

On Thursday, 15 August 2013 at 07:16:01 UTC, Maxim Fomin wrote:

On Thursday, 15 August 2013 at 00:57:26 UTC, Tommi wrote:


Static array types are somewhat magical in D; they can do 
something that no other non-subtype type can do: static arrays 
can implicitly convert to another type during type deduction. 
More specifically, e.g. int[3] can implicitly convert to int[] 
during type deduction.


I don't understand why you are surprised that int[3] can be 
implicitly converted to int[] during type deduction. There is 
nothing special about it, since it can be converted in other 
contexts too.


I don't expect int[3] to implicitly convert to int[] during type 
deduction for the same reason that I don't expect int to 
implicitly convert to long during type deduction (even though I 
know that int implicitly converts to long in all kinds of other 
contexts):


void getLong(T)(T arg)
if (is(T == long))
{
}

void main()
{
    int n;
    getLong(n); // int doesn't implicitly convert to long
}

On Thursday, 15 August 2013 at 07:16:01 UTC, Maxim Fomin wrote:

On Thursday, 15 August 2013 at 00:57:26 UTC, Tommi wrote:

[..]
Here's an example of this:

module test;

enum Ret { static_array, dynamic_array, input_range }

Ret foo(T)(T[3] sa) // [1]
if (is(T == uint))
{
   return Ret.static_array;
}

Ret foo(T)(T[] da) // [2]
{
   return Ret.dynamic_array;
}

enum uint[3] saUints = [0, 0, 0];
enum char[3] saChars = [0, 0, 0];
static assert(foo(saUints) == Ret.static_array);  // [3]
static assert(foo(saChars) == Ret.dynamic_array); // [4]

---
[3]: A perfect match is found in the first overload of 'foo' 
[1] when its template type parameter 'T' is deduced to be an 
int. Then, the type of the function parameter 'sa' is exactly 
the same as the type of the argument 'saUints'.


[4]: No perfect match is found. But, because 'saChars' is a 
static array, the compiler gives all 'foo' overloads the 
benefit of the doubt, and also checks if a perfect match could 
be found if 'saChars' were first converted to a dynamic array, 
int[] that is. Then, a perfect match is found for int[] in the 
second overload of 'foo' [2] when 'T' is deduced to be an int.


The compiler is an integral entity; there is no "giving the 
benefit of the doubt". What actually happens is a check in 
TypeSArray::deduceType that the base type of T[N] is the base 
type of T[]. This check gives true since type 'int' is equal to 
'int'. And the resulting match is not a perfect match, but a 
conversion match.


Agreed.

Also, the actual conversion happens later, in a compiler part 
completely unrelated to type deduction.


I know the implicit conversion happens at a later stage (not 
during type deduction). But I don't have any other words to 
describe it than "implicit conversion during type deduction". 
If you can provide me some more exact language, please do.


On Thursday, 15 August 2013 at 07:16:01 UTC, Maxim Fomin wrote:

On Thursday, 15 August 2013 at 00:57:26 UTC, Tommi wrote:

[..]
Now, let's add to the previous example:

import std.range;

Ret bar(T)(T[3] sa) // [5]
if (is(T == uint))
{
   return Ret.static_array;
}

Ret bar(R)(R r) // [6]
if (std.range.isInputRange!R)
{
   return Ret.input_range;
}

static assert(bar(saUints) == Ret.static_array); // [7]
static assert(bar(saChars) == Ret.input_range);  // [8]

---
[7]: This is effectively the same as [3]
[8]: This line throws a compile-time error: template test.bar 
does not match any function template declaration. Compare 
this to [4]: for some reason, the compiler doesn't give the 
second overload of 'bar' [6] the benefit of the doubt and 
check to see if the call to 'bar' could be made if 'saChars' 
were first converted to int[]. Were the compiler to consider 
this implicit conversion as it does in [4], then it would see 
that the second overload of 'bar' [6] could in fact be called, 
and the code would compile.


The compiler doesn't give the benefit of the doubt, as there 
are no hints here about a dynamic array.


Why would the compiler need a hint? The compiler knows that the 
static array can implicitly convert to dynamic array, so it 
should be able to check if the function could be called with the 
argument first implicitly converted to a dynamic array.


Taking into account the general case, when A -> B and B -> C 
you are asking for A -> C. And if B and C can convert to other 
types as well, then you could end up with exponentially 
increasing trees of what R may be, or in situations where int[3] 
is converted to some ranged struct S { void popFront(){ } ... }. 
Put another way, int[3] cannot be directly converted to R; what 
holds is int[3] -> int[] and int[] -> R. Also, as Jonathan said, 
there is a safety aspect to this issue.


No, I'm not asking for A -> C, I'm just asking that int[3] 
convert to int[]. I don't understand where you get this 
exponential increase. Since they are both built-in types, there 
can't be any increase: int[3] can never implicitly convert to 
some user-defined struct S, and int[] can never implicitly 
convert

Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi

On Thursday, 15 August 2013 at 13:25:58 UTC, Maxim Fomin wrote:

On Thursday, 15 August 2013 at 12:12:59 UTC, Tommi wrote:


Implicit conversion VS Implicit conversion during type 
deduction


I don't think that you failed to see the distinction between 
these two things, but because someone might, I'll talk about 
this a bit more.


This is regular implicit conversion (nothing weird or magical 
about it):


void foo(int[] da) { }
int[3] sa;
foo(sa); // implicit conversion

This is implicit conversion during type deduction (totally 
weird and magical):


void foo(T)(T[] da) { }
int[3] sa;
foo(sa); // implicit conversion during type deduction



Please stop spreading this misconception. There is *no* 
conversion during type deduction. There is *a check* of whether 
the base type of the static array is the same as the base type 
of the dynamic array, which happens to be true in this case. The 
*conversion* happens in a completely unrelated compiler part.


I've been saying "implicit conversion during type deduction", 
but I seriously doubt that anyone who knows what type deduction 
is would think that actual implicit conversion somehow happens 
during it.


(Type deduction is simply figuring out what the actual type of a 
templated type is in a given context.)


But, from now on, I'll use the phrase "checking for implicit 
conversions during type deduction" instead of "implicit 
conversion during type deduction".


On Thursday, 15 August 2013 at 13:25:58 UTC, Maxim Fomin wrote:

And there is nothing special about it during template type 
deduction. This (meaning that the base type of an int[N] array 
is the same as the base type of an int[] array) happens or could 
happen at any stage of the compiling process. How C++ does it is 
irrelevant to what D does.


I agree that what C++ does is irrelevant to what D does (I never 
said it was). I used an example of C++ code to make a point 
which _is_ relevant to D. Since you seemed to miss the point, 
I'm going to translate that previous C++ example into D code now 
to get my point across:


struct DynamicArray(T) { }

struct StaticArray(T, size_t n)
{
    DynamicArray!T opImplicitCast() const
    {
        return DynamicArray!T();
    }
}

void foo(T)(DynamicArray!T da) { }

void bar(DynamicArray!int da) { }

void main()
{
    StaticArray!(int, 5) sa;
    bar(sa); // OK: implicit conversion
    foo(cast(DynamicArray!int) sa); // OK: explicit conversion
    foo(sa); // Error: No matching function call to 'foo'
}

Don't try to compile that; it won't work, because I used the 
opImplicitCast operator, which is a future feature of D (it 
provides an implicit conversion for user-defined types). How do 
I know it's a future feature of D? Don't ask... okay, you got 
me... I'm a time traveller.


Now, to fulfill my promise of being excruciatingly explicit: 
what I said before about being a time traveller was a joke. My 
point is that D could have an implicit cast operator in the 
future. And if it did have it, then it would become obvious to 
everybody why it is that a static array, such as:


int[5]

...is magical, that is, it is fundamentally different from:

StaticArray!(int, 5)

...in that whereas the above code example would _not_ compile, 
this next example would (and it does) compile:


void foo(T)(T[] da) { }

void bar(int[] da) { }

void main()
{
    int[5] sa;
    bar(sa); // OK: implicit conversion
    foo(cast(int[]) sa); // OK: explicit conversion
    foo(sa); // OK !!! (compare to the previous example)
}


Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi

On Thursday, 15 August 2013 at 13:50:45 UTC, Maxim Fomin wrote:

On Thursday, 15 August 2013 at 12:44:09 UTC, Tommi wrote:


I don't expect int[3] to implicitly convert to int[] during 
type deduction for the same reason that I don't expect int to 
implicitly convert to long during type deduction (even though 
I know that int implicitly converts to long in all kinds of 
other contexts):


void getLong(T)(T arg)
if (is(T == long))
{
}

void main()
{
   int n;
   getLong(n); // int doesn't implicitly convert to long
}



OK,

void getLong(T)(T arg)
if (is(T : long))
{

}

void main()
{
    int n;
    getLong(n); // int is implicitly convertible to long
}

now you have implicit conversion from int to long during type 
deduction.


No it's not:

void getLong(T)(T arg)
if (is(T : long))
{
    static assert(is(typeof(arg) == int));
}

void main()
{
    int n;
    getLong(n); // int is _not_ implicitly converted to long
}


Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi

On Thursday, 15 August 2013 at 13:53:17 UTC, Artur Skawina wrote:

On 08/15/13 14:44, Tommi wrote:
[...]
No, I'm not asking A - C, I'm just asking that int[3] convert 
to int[].



From your earlier post:


  Ret bar(R)(R r) // [6]
  if (std.range.isInputRange!R)
  {
  return Ret.input_range;
  }

You'd like to be able to call 'bar' with a static array. 
Currently you can't, because 'R' becomes a /static array/, hence 
not an input range.


Note that

  Ret baz(R)(R[] r) // [6]
  if (std.range.isInputRange!(R[]))
  {
  return Ret.input_range;
  }

[..]


To be exact, I want either of the following options (but not both 
of them):


1) I want to be able to call 'bar' with a static array

OR

2) I want to _not_ be able to call 'baz' with a static array

Either one of those options is fine by me. All I want is make D's 
static arrays behave logically (and either one of those options 
would do it).


Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi

On Thursday, 15 August 2013 at 14:53:08 UTC, Maxim Fomin wrote:

On Thursday, 15 August 2013 at 14:32:29 UTC, Tommi wrote:


No it's not:

void getLong(T)(T arg)
if (is(T : long))
{
   static assert(is(typeof(arg) == int));
}

void main()
{
   int n;
   getLong(n); // int is _not_ implicitly converted to long
}


Yes, because during type deduction there was a constraint test 
that checked whether int is convertible to long, which is the 
same story as in the static/dynamic array case.


The code snippet above is not the same thing as what happens in 
the static-dynamic array case. In the above code snippet the type 
deduction for 'T' during the instantiation of 'getLong' has 
already finished by the time the template constraint is evaluated 
(how could the template constraint be evaluated if the actual 
type for 'T' hadn't already been deduced?). So, the compiler 
doesn't consider the implicit conversion from int to long 
_during_ type deduction, but after it.


The only time when the compiler is willing to consider the 
possible implicit conversions during type deduction is with 
static arrays: hence... magic.


Re: Request: a more logical static array behavior

2013-08-15 Thread Tommi

On Thursday, 15 August 2013 at 15:07:23 UTC, Tommi wrote:
The only time when the compiler is willing to consider the 
possible implicit conversions during type deduction is with 
static arrays: hence... magic.


Barring this special case which I mentioned in my original post:

On Thursday, 15 August 2013 at 00:57:26 UTC, Tommi wrote:

NOTE:
Subtypes can implicitly convert to their supertype during type 
deduction, there's nothing magical about that.


Here's an example of that:

class SuperClass(T) { }

class SubClass(T) : SuperClass!T { }

struct Supertype(T) { }

struct Subtype(T)
{
    Supertype!T s;
    alias s this;
}

void foo(T)(SuperClass!T superc) { }

void bar(T)(Supertype!T supert) { }

void main()
{
    SubClass!int subc;
    foo(subc); // OK

    Subtype!int subt;
    bar(subt); // OK
}

Although, it might be a bit misleading to call that implicit 
conversion, because nothing is actually converted: it's just that 
an entity is interpreted as something else, which it also is.


Re: Parameter-less templates?

2013-08-14 Thread Tommi

On Tuesday, 13 August 2013 at 09:41:45 UTC, monarch_dodra wrote:


In regards to template (I mean the actual template), I guess 
I wish we could either:

1. Allow non-parameterized templates (eg template foo {...})
2. Allow invoking a template without parameters (provided the 
template has 0 parameters, or default parameters), without 
doing !()


I think your proposal no. 1 is good, but there's a problem with 
your proposal no. 2. This is current D code:


template get(int n = 1)
{
    static if (n == 1) {
        alias get = one;
    }
    else static if (n == 0) {
        enum get = 0;
    }
}

template one(int n)
{
    enum one = 1;
}

template wrap(alias a)
{
    enum wrap = a!0;
}

void main()
{
    static assert(wrap!get == 0);
    static assert(wrap!(get!()) == 1);
}

So, if we go with your proposal no. 2 (allow invoking a template 
without a parameter list), then the meaning of 'get' in:

wrap!get
...becomes ambiguous. We could be either passing an alias to 
'get', or an alias to a parameterless instantiation of template 
'get', i.e. get!().


Request: a more logical static array behavior

2013-08-14 Thread Tommi
Disclaimer: I'm going to be excruciatingly explicit here just so 
that everybody can follow my train of thought.


References:
1) Subtype/Supertype: http://en.wikipedia.org/wiki/Subtyping

2) Type deduction: 
http://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Core-C-/Stephan-T-Lavavej-Core-C-2-of-n


3) Range: http://dlang.org/phobos/std_range.html#isInputRange

---
e.g. = for example

Static array, e.g. int[3], is not a range.
Dynamic array, e.g. int[], is a range.

int[] is not a supertype of int[3].

Non-subtype type: type that is not a subtype of any other type

Static array types are somewhat magical in D; they can do 
something that no other non-subtype type can do: static arrays 
can implicitly convert to another type during type deduction. 
More specifically, e.g. int[3] can implicitly convert to int[] 
during type deduction.


Here's an example of this:

module test;

enum Ret { static_array, dynamic_array, input_range }

Ret foo(T)(T[3] sa) // [1]
if (is(T == uint))
{
    return Ret.static_array;
}

Ret foo(T)(T[] da) // [2]
{
    return Ret.dynamic_array;
}

enum uint[3] saUints = [0, 0, 0];
enum char[3] saChars = [0, 0, 0];
static assert(foo(saUints) == Ret.static_array);  // [3]
static assert(foo(saChars) == Ret.dynamic_array); // [4]

---
[3]: A perfect match is found in the first overload of 'foo' [1] 
when its template type parameter 'T' is deduced to be an int. 
Then, the type of the function parameter 'sa' is exactly the same 
as the type of the argument 'saUints'.


[4]: No perfect match is found. But, because 'saChars' is a 
static array, the compiler gives all 'foo' overloads the benefit 
of the doubt, and also checks if a perfect match could be found 
if 'saChars' were first converted to a dynamic array, int[] that 
is. Then, a perfect match is found for int[] in the second 
overload of 'foo' [2] when 'T' is deduced to be an int.


Now, let's add to the previous example:

import std.range;

Ret bar(T)(T[3] sa) // [5]
if (is(T == uint))
{
    return Ret.static_array;
}

Ret bar(R)(R r) // [6]
if (std.range.isInputRange!R)
{
    return Ret.input_range;
}

static assert(bar(saUints) == Ret.static_array); // [7]
static assert(bar(saChars) == Ret.input_range);  // [8]

---
[7]: This is effectively the same as [3]
[8]: This line throws a compile-time error: template test.bar 
does not match any function template declaration. Compare this 
to [4]: for some reason, the compiler doesn't give the second 
overload of 'bar' [6] the benefit of the doubt and check to see 
if the call to 'bar' could be made if 'saChars' were first 
converted to int[]. Were the compiler to consider this implicit 
conversion as it does in [4], then it would see that the second 
overload of 'bar' [6] could in fact be called, and the code would 
compile.


Thus, my point is this:
Shouldn't this magical property of static arrays (implicitly 
converting to dynamic array during type deduction) extend to all 
type deduction situations?


I think this would be a breaking change.


NOTE:
Subtypes can implicitly convert to their supertype during type 
deduction, there's nothing magical about that.


Re: Request: a more logical static array behavior

2013-08-14 Thread Tommi

On Thursday, 15 August 2013 at 00:57:26 UTC, Tommi wrote:

...


And half the time I say converted to int[] I mean converted to 
uint[] and the other half of the time I mean converted to 
char[]. Sorry about that.


Re: Request: a more logical static array behavior

2013-08-14 Thread Tommi

On Thursday, 15 August 2013 at 01:05:54 UTC, Tommi wrote:

On Thursday, 15 August 2013 at 00:57:26 UTC, Tommi wrote:

...


And half the time I say converted to int[] I mean converted 
to uint[] and the other half of the time I mean converted to 
char[]. Sorry about that.


Actually that's not true either. Whenever I say 'int' after the 
examples start, just think 'char'.


Re: UFCS for templates

2013-08-13 Thread Tommi

On Thursday, 8 August 2013 at 17:35:02 UTC, JS wrote:

Can we have UFCS for templates?

e.g.,

T New(T, A...)(A args) { }


T t = T.New(args);


Note, in this case, the type parameter is substituted.


A while back I made this enhancement request:
http://d.puremagic.com/issues/show_bug.cgi?id=8381

It might be similar to what you're proposing (although it's not 
clear what it is that you're proposing).


I'd be interested in hearing some actual use cases for this 
feature though (I can't think of any even though I made that 
feature request).


Re: Finding the path to a executable?

2013-08-07 Thread Tommi

On Wednesday, 7 August 2013 at 05:31:24 UTC, Alan wrote:
Hello!  This may seem like a simple question, maybe an 
embarassing question but how would I fetch the absolute path to 
the directory where my executable is located?  My wording is 
known to be confusing so I will give an example:


cd ~/projects/program
dmd Program.d -ofProgram

That executable would be located in /home/alan/projects/program 
for example.  SO my question is how would I fetch the absolute 
path to that directory?


Thanks for any help!


This should work on at least a couple of systems, but I haven't 
tested it on anything other than a Mac. Also, it's not completely 
robust, because paths can be longer than 4096 chars.


version(Windows) {
    import std.c.windows.windows;
}
else version(OSX) {
    private extern(C) int _NSGetExecutablePath(char* buf, uint* bufsize);
}
else version(linux) {
    import std.c.linux.linux;

    // Assumption: the original post defined this constant somewhere;
    // /proc/self/exe is the standard Linux symlink to the running executable.
    private enum string selfExeLink = "/proc/self/exe";
}
else {
    static assert(0);
}

import std.conv;
import std.string : toStringz;

// Returns the full path to the currently running executable
string executablePath()
{
    static string cachedExecutablePath;

    if (!cachedExecutablePath) {
        char[4096] buf;
        uint filePathLength;

        version(Windows) {
            filePathLength = GetModuleFileNameA(null, buf.ptr,
                                                cast(uint) buf.length - 1);
            assert(filePathLength != 0);
        }
        else version(OSX) {
            filePathLength = cast(uint) (buf.length - 1);
            int res = _NSGetExecutablePath(buf.ptr, &filePathLength);
            assert(res == 0);

            // _NSGetExecutablePath NUL-terminates the path but doesn't
            // report its length, so measure it ourselves.
            filePathLength = 0;
            while (buf[filePathLength] != '\0')
                ++filePathLength;
        }
        else version(linux) {
            filePathLength = cast(uint) readlink(toStringz(selfExeLink),
                                                 buf.ptr, buf.length - 1);
        }
        else {
            static assert(0);
        }
        cachedExecutablePath = to!string(buf[0 .. filePathLength]);
    }
    return cachedExecutablePath;
}

// Returns the file name of the currently running executable
string executableName()
{
    return executablePath().baseName();
}

// Returns the path to the directory of the currently running executable
string executableDirPath()
{
    return executablePath().dirName();
}


Re: Finding the path to a executable?

2013-08-07 Thread Tommi

...sorry, add import std.path; if you need the last two functions.


Re: alias this doesn't work for properties.

2013-08-02 Thread Tommi

On Friday, 2 August 2013 at 01:55:35 UTC, JS wrote:

What is the difference between

class A
{
int x;
}

class A
{
private int _x;
@property int x() { }
@property int x(int v) { }
}

? From the outside world there is supposed to be no difference.


In the first case other modules can create a mutable pointer to 
the field x. In the second case other modules cannot create any 
pointer to field _x.
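
A minimal sketch of that difference (the class and variable names 
here are made up):

class A1
{
    int x;                                     // plain field
}

class A2
{
    private int _x;
    @property int x() { return _x; }           // getter
    @property int x(int v) { return _x = v; }  // setter
}

void main()
{
    auto a1 = new A1;
    int* p = &a1.x;    // fine: x is real storage, so a mutable pointer exists
    *p = 42;
    assert(a1.x == 42);

    auto a2 = new A2;
    a2.x = 42;         // goes through the setter
    assert(a2.x == 42);
    // int* q = &a2.x; // no such pointer: x is a property function, not a
                       // field, and the private _x can't even be named from
                       // another module
}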


Re: Template functions, can we make it more simple?

2013-08-02 Thread Tommi

On Friday, 2 August 2013 at 20:50:00 UTC, SteveGuo wrote:
auto Add(A, B)(A a, B b); // Yes, this is the template function 
declaration of D manner


But I mean if the syntax can be more simple?

auto Add(A, B)(A a, B b); // type A, B appears twice

// Can we just make it more simple?
auto Add(a, b)
{
}


I think it could get pretty confusing. For example:

module foo;

struct a { }
struct b { }


module bar;

import foo;

auto Add(a, b) { } // [1]


[1] Here the unsuspecting programmer thinks he's writing a 
template function, not knowing that one module he's importing 
actually specifies 'a' and 'b' as types, which makes his Add a 
regular function taking unnamed variables of types 'a' and 'b'.


Re: Module visibility feature?

2013-07-18 Thread Tommi

On Thursday, 18 July 2013 at 22:22:15 UTC, Tommi wrote:
All modules can import foo.barstuff, but only other modules in 
foo could import a module declared as 'protected':


protected module foo.detail;

// declarations

This makes a clearer distinction between implementation detail 
modules vs normal modules.


'package' rather than 'protected'


Re: Module visibility feature?

2013-07-18 Thread Tommi

On Thursday, 18 July 2013 at 10:55:39 UTC, Dicebot wrote:

On Wednesday, 17 July 2013 at 11:33:39 UTC, Jeremy DeHaan wrote:
So I was reading this: 
http://wiki.dlang.org/Access_specifiers_and_visibility

...


How will it be any different from

module foo.barstuff;

package:

// declarations


All modules can import foo.barstuff, but only other modules in 
foo could import a module declared as 'protected':


protected module foo.detail;

// declarations

This makes a clearer distinction between implementation detail 
modules vs normal modules.


Re: Module visibility feature?

2013-07-17 Thread Tommi

On Wednesday, 17 July 2013 at 11:33:39 UTC, Jeremy DeHaan wrote:
Has anything like this been suggested/thought of before? I 
looked through the DIP's but didn't see anything.


I don't know, but I think this is a good suggestion; stronger 
language support for modules that are an implementation detail of 
a package. The 'package' keyword you suggested is definitely more 
logical than 'private'.


Re: Feature request: Path append operators for strings

2013-07-09 Thread Tommi

On Monday, 8 July 2013 at 21:46:24 UTC, Walter Bright wrote:


I'm sure you're self-aware, as I'm sure Siri and Watson are not.


But there is no way for you to prove to me that you are 
self-aware. It could be that you are simply programmed to appear 
to be self-aware; think of an infinite loop containing a massive 
switch statement, where each case represents a different 
situation in life and the function to execute in each case 
represents what you did in that situation in life. As long as we 
can't test whether an entity is self-aware or not, for our 
purposes, it kind of doesn't matter whether it is or not.


If we ever are able to define what consciousness is (and I'm 
quite sure we will), I suspect it's going to be some kind of a 
continuous feedback loop from sensory data to brain, and from 
brain to muscles and through them back to sensory data again. 
Consciousness would be kind of your ability to predict what kind 
of sensory data would be likely to be produced if you sent a 
certain set of signals to your muscles.


I like this guy's take on consciousness:
http://www.youtube.com/watch?v=3jBUtKYRxnA


Re: Feature request: Path append operators for strings

2013-07-09 Thread Tommi

On Tuesday, 9 July 2013 at 06:07:12 UTC, Tommi wrote:
Consciousness would be kind of your ability to predict what 
kind of sensory data would be likely to be produced if you sent 
a certain set of signals to your muscles.


...and the better you are at predicting those very-near-future 
sensory signals, the more you feel that you're conscious.


Re: Feature request: Path append operators for strings

2013-07-08 Thread Tommi

On Sunday, 7 July 2013 at 20:35:49 UTC, Walter Bright wrote:

On 7/7/2013 8:38 AM, Andrei Alexandrescu wrote:
All Siri does is recognize a set of stock patterns, just like 
Eliza. Step out of that, even slightly, and it reverts to a 
default, again, just like Eliza.


Of course, Siri had a much larger set of patterns it 
recognized, but with a bit of experimentation you

quickly figure out what those stock patterns are.
There's nothing resembling human understanding there.


But that applies to humans, too - they just have a much larger 
set of patterns they recognize.


I don't buy that. Humans don't process data like computers do.


Humans don't and _can't_ process data like computers do, but 
computers _can_ process data like humans do.


The human brain does its computation in a highly parallel manner, 
but signals run much slower than they do in computers. What the 
human brain does is a very specific process, optimized for 
survival on planet Earth.


But computers are generic computation devices. They can model any 
computational processes, including the ones that human brain uses 
(at least once we get some more cores in our computers).


Disclaimer: I'm basically just paraphrasing stuff I read from 
The Singularity Is Near and How to Create a Mind.


Re: Feature request: Path append operators for strings

2013-07-08 Thread Tommi

On Monday, 8 July 2013 at 10:48:05 UTC, John Colvin wrote:
For me, the most interesting question in all of this is "What 
is intelligence?". While that might seem the preserve of 
philosophers, I believe that computers have the ability to (and 
already do) demonstrate new and diverse types of intelligence, 
entirely unlike human intelligence but nonetheless highly 
effective.


A quite fitting quote from How to Create a Mind, I think:

American philosopher John Searle (born in 1932) argued recently 
that Watson is not capable of thinking. Citing his “Chinese room” 
thought experiment (which I will discuss further in chapter 11), 
he states that Watson is only manipulating symbols and does not 
understand the meaning of those symbols.


Actually, Searle is not describing Watson accurately, since its 
understanding of language is based on hierarchical statistical 
processes—not the manipulation of symbols. The only way that 
Searle’s characterization would be accurate is if we considered 
every step in Watson’s self-organizing processes to be “the 
manipulation of symbols.” But if that were the case, then the 
human brain would not be judged capable of thinking either.
It is amusing and ironic when observers criticize Watson for just 
doing statistical analysis of language as opposed to possessing 
the “true” understanding of language that humans have. 
Hierarchical statistical analysis is exactly what the human brain 
is doing when it is resolving multiple hypotheses based on 
statistical
inference (and indeed at every level of the neocortical 
hierarchy). Both Watson and the human brain learn and respond 
based on a similar approach to hierarchical understanding. In 
many respects Watson’s knowledge is far more extensive than a 
human’s; no human can claim to have mastered all of Wikipedia, 
which is only part of Watson’s knowledge base. Conversely, a 
human can today master more conceptual levels than Watson, but 
that is certainly not a permanent gap.


Re: Feature request: Path append operators for strings

2013-07-08 Thread Tommi

On Monday, 8 July 2013 at 12:04:14 UTC, Walter Bright wrote:

On 7/8/2013 2:02 AM, Tommi wrote:

I don't buy that. Humans don't process data like computers do.


Humans don't and _can't_ process data like computers do, but 
computers _can_

process data like humans do.

Human brain does it's computation in a highly parallel manner, 
but signals run
much slower than they do in computers. What human brain does 
is a very specific

process, optimized for survival on planet Earth.

But computers are generic computation devices. They can model 
any computational
processes, including the ones that human brain uses (at least 
once we get some

more cores in our computers).


Except that we have no idea how brains actually work.


How to Create a Mind makes a pretty convincing argument to the 
contrary. It's true that we don't have the full picture of how 
brains work, but both the temporal and the spatial resolution of 
that picture are increasing rapidly with better brain scanners.


Are fruit flies self-aware? Probably not. Are dogs? Definitely. 
So at what point between fruit flies and dogs does 
self-awareness start?


We have no idea. None at all.


How to Create a Mind talks plenty about consciousness as well. My 
personal guess is that consciousness is not a binary property.


I feel I should get some royalties for plugging that book like 
this.


Feature request: assert expressions should live inside version(assert)

2013-07-07 Thread Tommi
Sometimes you need to have some extra data to check against in 
the assert expression. That data isn't needed in release mode 
when assertions are ignored. Therefore, you put that extra data 
inside a version(assert). But then those assertions fail to 
compile in release mode because the symbol lookup for that extra 
data fails. For this reason, assert statements should live inside 
version(assert) blocks by default.


Example:

version (assert)
{
const int[1000] maximums = 123;
}

void foo(int value, int index)
{
assert(value < maximums[index]); // [1]
}

void main()
{
foo(11, 22);
}

[1] (In release mode) Error: undefined identifier maximums

...so you need to introduce a redundant version(assert):

void foo(int value, int index)
{
version (assert)
{
assert(value < maximums[index]);
}
}


Re: Fun with templates

2013-07-07 Thread Tommi

On Sunday, 7 July 2013 at 11:59:36 UTC, Marco Leise wrote:

Am Sun, 07 Jul 2013 13:17:23 +0200
schrieb TommiT tommitiss...@hotmail.com:


const would have the same effect:

void f(T)(const T var) { ... }

...but then you can't mutate var.


That doesn't handle shared.


That seems like a compiler bug to me.


Re: Feature request: assert expressions should live inside version(assert)

2013-07-07 Thread Tommi

On Sunday, 7 July 2013 at 12:30:28 UTC, Andrej Mitrovic wrote:

On 7/7/13, Tommi tommitiss...@hotmail.com wrote:

Sometimes you need to have some extra data to check against in
the assert expression. That data isn't needed in release mode
when assertions are ignored. Therefore, you put that extra data
inside a version(assert). But then those assertions fail to
compile in release mode because the symbol lookup for that 
extra
data fails. For this reason, assert statements should live 
inside

version(assert) blocks by default.


I've ran into an issue when implementing this feature back in 
February

(see the pull request):

http://d.puremagic.com/issues/show_bug.cgi?id=9450
https://github.com/D-Programming-Language/dmd/pull/1614


Oh, I should have searched Bugzilla before posting this.

But would it be possible to implement it something like this: 
during a release build, even though version(assert) blocks are 
compiled out of existence, the compiler would keep a separate 
list of the symbols (and other info) declared inside those 
version(assert) blocks. Then, when parsing assert expressions, if 
an undefined symbol is found, the compiler would check that 
separate list of symbols it has been keeping, and if the symbol 
is found there and its use is syntactically correct, the compiler 
would just keep on going instead of spewing an "undefined 
identifier" error.


That way we'd make sure that things like:

assert(this_identifier_doesnt_exist < 12);

...wouldn't compile.


Re: Feature request: assert expressions should live inside version(assert)

2013-07-07 Thread Tommi

On Sunday, 7 July 2013 at 13:12:06 UTC, Tommi wrote:
[..] Then, when parsing assert expressions, if an undefined 
symbol is found, the compiler would check that separate list of 
symbols that it has been keeping, and if the symbol is found 
there and use of the symbol is syntactically correct, the 
compiler would just keep on going instead of spewing an 
unidentified identifier error.


One thing that might also help there is the fact that once an 
undefined identifier is encountered in an assert expression, and 
that same identifier is found on that separate list of things 
we've been keeping, we know that the current assert expression 
can only end up either being an "undefined identifier" error or 
being compiled out of existence; it can't end up being turned 
into an assert(0).


Re: Feature request: assert expressions should live inside version(assert)

2013-07-07 Thread Tommi

On Sunday, 7 July 2013 at 13:12:06 UTC, Tommi wrote:

But would it be possible to implement it something like:
[..]


Although, I don't know if the best possible behaviour is to 
silently compile the following assert out of existence in release 
mode:


version (assert)
{
enum cond = false;
}

void main()
{
assert(cond);
}

A better behaviour probably would be to spew a warning message 
asking the user if he's sure he knows what he's doing.


Then, yet another solution (a code breaking one) would be to make 
it so that only literally saying:

assert(0);
or
assert(false);
or
assert(null);

...would exhibit that special assert behaviour.

Anything else would be semantically runtime-evaluated and no 
other forms of assert would remain in release builds. For 
example, this kind of an assert would be compiled out of 
existence in release mode:


enum bool cond = false;
assert(cond);


Re: Feature request: assert expressions should live inside version(assert)

2013-07-07 Thread Tommi

On Sunday, 7 July 2013 at 13:42:52 UTC, Tommi wrote:
Then, yet another solution (a code breaking one) would be to 
make it so that only literally saying:

assert(0);
or
assert(false);
or
assert(null);

...would exhibit that special assert behaviour.

Anything else would be semantically runtime-evaluated and no 
other forms of assert would remain in release builds. For 
example, this kind of an assert would be compiled out of 
existence in release mode:


enum bool cond = false;
assert(cond);


The programmer can always work around this new behaviour.

Old code:

enum bool shouldHalt = someExpression();
assert(shouldHalt);

New code:

enum bool shouldHalt = someExpression();
static if (shouldHalt) assert(0);


Re: Fun with templates

2013-07-07 Thread Tommi

On Sunday, 7 July 2013 at 17:48:17 UTC, Dicebot wrote:

On Saturday, 6 July 2013 at 18:54:16 UTC, TommiT wrote:
He's talking about changing the semantics only on POD types, 
like int, struct of ints, static array of ints... only types 
that can implicitly convert from immutable to mutable.


Than it does not really solve anything. Have you measured how 
much template bloat comes from common meta-programming tools 
like std.algorithm and how much - from extra instances for 
qualified POD types? It is better to solve broad problem 
instead and get this special case for free, not add more 
special cases in a desperate attempts to contain it.


Basically all I know about the performance issue being discussed 
here is that: Code bloat is bad, m'kay. But note that Artur's 
suggestion would also change language semantics to a more 
sensible and convenient default. For example:


void foo(T)(T value)
if (isIntegral!T)
{
   value = 42;
}

...inside 'foo' I can feel free to mutate 'value' without having 
to worry about somebody breaking my function by calling it with 
an argument of a type like const(int), immutable(short), or 
shared(long).
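
A small sketch of the kind of breakage I mean, dropping the 
isIntegral constraint and assuming the deduction behaviour this 
thread is complaining about (extra instances for qualified POD 
types):

void foo(T)(T value)
{
    value = 42;  // mutating our own by-value copy
}

void main()
{
    int m = 1;
    foo(m);      // fine: T is deduced as int

    immutable int i = 1;
    foo(i);      // T is deduced as immutable(int), so 'value = 42' fails
                 // to compile, even though 'value' is just a copy
}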


Re: Make dur a property?

2013-01-24 Thread Tommi

I've always secretly hated the ambiguity in D's syntax. E.g:

foo.bar

What could foo and bar be? D has many more answers than C++:

                     D          C++
                  foo  bar   foo  bar
Module/Namespace   x          x
Type               x          x
Variable           x    x     x    x
Method                  x
Free function      x    x

For this reason I initially agreed with Nick Sabalausky on 
disallowing calling non-property functions without parentheses.


...but now I'm thinking that this ambiguity stops being an issue 
once we have an IDE that can render different things in different 
colors (or in different fonts, or with other visual cues if 
you're color-blind). Color is a much stronger and faster visual 
cue than a pair of parentheses at the end of a name. For this 
reason, I think it's fine to allow non-property functions to be 
called without parentheses. But I still think that property 
functions should not be allowed to be called with parentheses.
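
A small sketch of the ambiguity in question (all names made up); 
the three calls below look identical at the call site even though 
they hit three different rows of the table above:

import std.stdio;

struct S
{
    int field = 1;
    int method() { return 2; }   // ordinary function, not a @property
}

int helper(S s) { return 3; }    // free function, reachable via UFCS

void main()
{
    S s;
    writeln(s.field);   // a variable
    writeln(s.method);  // a non-property method, called without parentheses
    writeln(s.helper);  // a free function via UFCS, also without parentheses
}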


Re: ref is unsafe

2013-01-10 Thread Tommi
On Thursday, 3 January 2013 at 21:56:22 UTC, David Nadlinger 
wrote:
I must admit that I haven't read the rest of the thread yet, 
but I think the obvious and correct solution is to disallow 
passing locals (including non-ref parameters, which are 
effectively locals in D) as non-scope ref arguments.


The scope attribute, once properly implemented, would make sure 
that the reference is not escaped. For now, we could just make 
it behave overly conservative in @safe code.


David


If you disallow passing local variables as non-scope ref 
arguments, then you effectively disallow all method calls on 
local variables. My reasoning is as follows:


struct T
{
int get(int v) const;
void set(int v);
}

Those methods of T can be thought of as free functions with these 
signatures:


int get(ref const T obj, int v);
void set(ref T obj, int v);

And these kinds of method calls:

T obj;
int n = obj.get(v);
obj.set(n);

...can be thought of as being converted to these free function 
calls:


T obj;
int n = .get(obj, v);
.set(obj, n);

I don't know what the compiler does or doesn't do, but it is 
*as_if* the compiler did this conversion from method calls to 
free functions.


Now it's obvious, given those free function signatures, that if 
you disallow passing function-local variables as non-scope 
references, you also disallow this code:


void func()
{
T obj;
obj.set(123);
}

Because that would effectively be the same as:

void func()
{
T obj;  // obj is a local variable
.set(obj, 123); // obj is passed as non-scope ref
}

Then, you might ask, why don't those methods of T correspond to 
these free function signatures:


int get(scope ref const T obj, int v);
void set(scope ref T obj, int v);

And the answer is obviously that it would prevent these kinds of 
methods:


struct T
{
int v;

ref T increment()
{
v++;
return this;
}
}

...because that would then convert to this free function 
signature:


ref T increment(scope ref T obj)
{
obj.v++;
return obj; // Can't return a reference to a scope argument
}


Re: ref is unsafe

2013-01-10 Thread Tommi
...Although, I should add that my analogy between methods and 
free functions seems to break when the object is an rvalue. Like 
in:


struct T
{
int v;

this(int a)
{
v = a;
}

int get()
{
return v;
}
}

int v = T(4).get();

Given my analogy, the method get() should be able to be thought 
of as a free function:


int gget(ref T obj)
{
return obj.v;
}

But then the above method call should be able to thought of as:

int v = gget(T(4));

...which won't compile because T(4) is an rvalue, and according 
to D, rvalues can't be passed as ref (nor const ref). I don't 
know which one is flawed, my analogy, or the logic of how D is 
designed.




Re: ref is unsafe

2013-01-10 Thread Tommi

On Thursday, 10 January 2013 at 16:42:09 UTC, Tommi wrote:
...which won't compile because T(4) is an rvalue, and according 
to D, rvalues can't be passed as ref (nor const ref). I don't 
know which one is flawed, my analogy, or the logic of how D is 
designed.


My analogy is a bit broken in the sense that methods actually see 
their designated object as a reference to lvalue even if it is an 
rvalue. But I don't think that affects the logic of my main 
argument about scope arguments.


A more strict language logic would be inconvenient.

But, this logic does introduce a discrepancy between non-member 
operators and member operators in C++ (which D actually 
side-steps by disallowing non-member operators... and then 
re-introduces by providing UFCS):


// C++:

struct T
{
int val = 10;

T operator--()
{
--val;
return *this;
}
};

T operator++(T& t)
{
++t.val;
return t;
}

int main()
{
_cprintf("%d\n", (--T()).val); // Prints: 9
_cprintf("%d\n", (++T()).val); // Error: no known conversion
                               //        from 'T' to 'T&'
return 0;
}

// D:

import std.stdio;

struct T
{
int val;

int get()
{
++val;
return val;
}
}

int het(ref T t)
{
++t.val;
return t.val;
}

void main()
{
writeln(T().get());
writeln(T().het()); // Error: T(0) is not an lvalue
}


Re: ref is unsafe

2013-01-09 Thread Tommi
On Wednesday, 9 January 2013 at 04:33:21 UTC, Zach the Mystic 
wrote:
I felt confident enough about my proposal to submit it as 
enhancement request:


http://d.puremagic.com/issues/show_bug.cgi?id=9283


I like it. One issue though, like you also indicated by putting 
question marks on it:


ref T get(T)()
{
  T local;
  return cast(out) local; // This shouldn't compile
}

Because, wouldn't returning a local variable as a reference be a 
dangling reference in all cases? No matter if the programmer 
claims it's correct by saying cast(out)... it just can't be 
correct.


And T can be a type that has reference semantics or value 
semantics, it doesn't matter. That function would always return a 
dangling reference, were it allowed to compile.


Re: ref is unsafe

2013-01-09 Thread Tommi
On Wednesday, 9 January 2013 at 04:33:21 UTC, Zach the Mystic 
wrote:
I felt confident enough about my proposal to submit it as 
enhancement request:


http://d.puremagic.com/issues/show_bug.cgi?id=9283


By the way, what do you propose is the correct placement of this 
new out keyword:


#1: out ref int get(ref int a);
#2: ref out int get(ref int a);
#3: ref int get(ref int a) out;

I wouldn't allow #3.


Re: ref is unsafe

2013-01-04 Thread Tommi

On Friday, 4 January 2013 at 06:30:55 UTC, Sönke Ludwig wrote:
In other words, references returned by a function call that 
took any references to locals would be
tainted as possibly local (in the function local data flow) and 
thus are not allowed to escape the
scope. References derived from non-local refs could still be 
returned and returning references to

fields from a struct method also works.

---
@safe ref int test(ref int v) {
return v; // fine
}

@safe ref int test2() {
int local;
return test(local); // error: (possibly) returning ref to local
}

@safe ref int test3() {
int local;
int* ptr = &test(local); // fine, ptr is tainted 'local'
return *ptr; // error: (possibly) returning ref to local
}

@safe ref int test4(ref int val) {
return test(val); // fine, can only be a ref to the external 'val' or to a global
}
---


Trying to say that formally:

Definitions:
'Tainter function':
A function that:
1. takes at least one of its parameters by reference
and
2. returns by reference

'Tainting function call':
A call to a 'tainter function' where at least one of the
arguments passed by reference is ref to a local variable

Then the rules become:
Function may not return a reference to:
Rule 1: a function-local variable
Rule 2. a value returned by a 'tainting function call'

@safe:

ref int tfun(ref int v) { // tfun tagged 'tainter function'
...
}

ref int test1() {
int local;
return local; // error by Rule 1
}

ref int test2() {
int local;
return tfun(local); // error by Rule 2
}

ref int test3() {
int local;
int* ptr = &tfun(local); // ptr tagged 'local'
return *ptr; // error by Rule 2
}

ref int test4(ref int val) {
return tfun(val); // fine
}

int global;

ref int test5() {
int local;
int* ptr = &tfun(local); // ptr tagged 'local'
ptr = &global; // ptr's 'local' tag removed
return *ptr; // fine
}


Re: ref is unsafe

2013-01-04 Thread Tommi

On Friday, 4 January 2013 at 14:15:01 UTC, Tommi wrote:

'Tainting function call':
A call to a 'tainter function' where at least one of the
arguments passed by reference is ref to a local variable


I forgot to point out that the return value of a 'tainting 
function call' is considered to be a reference to a 
function-local variable (even if it's not in reality).


Re: Bret Victor - Inventing on Principle

2012-12-03 Thread Tommi
I found this article (Learnable Programming) by Bret Victor 
interesting:


http://worrydream.com/#!/LearnableProgramming


Re: Getting rid of dynamic polymorphism and classes

2012-11-14 Thread Tommi
Here's the duck-typing design pattern they introduce in the video 
(in C++ again, sorry):


#include <conio.h>
#include <memory>
#include <type_traits>
#include <utility>
#include <vector>

class SomeShape
{
public:
    SomeShape() = default;
    explicit SomeShape(int volume) {}

    void draw() const
    {
        _cprintf("Drawing SomeShape\n");
    }
};

class OtherShape
{
public:
    OtherShape() = default;
    explicit OtherShape(int x, int y) {}

    void draw() const
    {
        _cprintf("Drawing OtherShape\n");
    }
};

class ShapeInterface
{
public:
    virtual void draw() const = 0;

    virtual ~ShapeInterface() {}
};

template <typename T>
class PolymorphicShape : public ShapeInterface
{
public:
    template <typename ...Args>
    PolymorphicShape(Args&&... args)
        : _shape (std::forward<Args>(args)...)
    {}

    void draw() const override
    {
        _shape.draw();
    }

private:
    T _shape;
};

template <typename T, typename _ = void>
struct is_shape
    : std::false_type
{};

template <typename T>
struct is_shape<T, typename std::enable_if<
        std::is_same<decltype(T().draw()), void>::value
    >::type>
    : std::true_type
{};

template <typename T>
auto drawAgain(const T& shape) -> typename std::enable_if<
        is_shape<T>::value
    >::type
{
    shape.draw();
}

int main()
{
    std::vector<std::unique_ptr<ShapeInterface>> shapes;

    shapes.emplace_back(new PolymorphicShape<SomeShape>(42));
    shapes.emplace_back(new PolymorphicShape<OtherShape>(1, 2));

    // Dynamic polymorphism:
    shapes[0]->draw(); // Prints: Drawing SomeShape
    shapes[1]->draw(); // Prints: Drawing OtherShape

    // Static polymorphism:
    drawAgain(SomeShape(123));   // Prints: Drawing SomeShape
    drawAgain(OtherShape(2, 4)); // Prints: Drawing OtherShape

    _getch();
    return 0;
}




Re: Transience of .front in input vs. forward ranges

2012-11-12 Thread Tommi
Now I finally see that deepDup/deepCopy/clone is not a (good) 
solution, because it would be inefficient in a lot of situations. 
This whole mess makes me wish that D was designed so that all 
types had value semantics (by convention, since it's probably not 
possible to enforce by the language).


That would mean:
1) No classes. Just duck-typing based polymorphism à la the Go 
language.
2) Dynamic arrays of mutable types would have had to be 
implemented as copy-on-write types à la Qt containers.


I don't know about the performance implications of COW, though.
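
For what it's worth, here's a rough sketch of what a copy-on-write 
array wrapper could look like in D. This is illustration only 
(value-type elements, not thread-safe, and not a proposal for the 
built-in arrays):

struct CowArray(T)
{
    private static struct Payload
    {
        size_t refs;
        T[] data;
    }

    private Payload* p;

    this(T[] values)
    {
        p = new Payload(1, values.dup);
    }

    this(this)                    // copying only bumps the reference count
    {
        if (p) ++p.refs;
    }

    ~this()
    {
        if (p) --p.refs;          // no manual freeing; the GC owns the memory
    }

    private void detach()         // the copy-on-write step
    {
        if (p && p.refs > 1)
        {
            --p.refs;
            p = new Payload(1, p.data.dup);
        }
    }

    T opIndex(size_t i) const { return p.data[i]; }

    void opIndexAssign(T v, size_t i)
    {
        detach();                 // writers get their own copy first
        p.data[i] = v;
    }

    size_t length() const { return p ? p.data.length : 0; }
}

unittest
{
    auto a = CowArray!int([1, 2, 3]);
    auto b = a;       // cheap: both share the same storage
    b[0] = 99;        // b detaches before writing
    assert(a[0] == 1 && b[0] == 99);
}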


Re: Transience of .front in input vs. forward ranges

2012-11-12 Thread Tommi

On Monday, 12 November 2012 at 08:37:20 UTC, Tommi wrote:
This whole mess makes me wish that D was designed so that all 
types had value semantics (by convention, since it's probably 
not possible to enforce by the language).


That would mean:
1) No classes. Just duck-typing based polymorphism à la go 
language.
2) Dynamic arrays of mutable types would have had to been 
implemented as copy-on-write types à la Qt containers.


...forgot to add how this relates to this issue:

Then you'd solve this issue by specifying range concept so that 
front should return by value if it's transient, and front should 
return by reference or const reference if it's persistent. Then 
you'd capture front by const reference à la C++. If front 
returns reference or const reference, then const reference simply 
references that persistent data. If front returns by value 
(that's an rvalue from caller's point of view), then this C++ 
style const reference that we use for capture would extend the 
lifetime of this temporary rvalue to the const reference 
variable's scope. And, just to have some code in a post:


ref const auto saved = range.front;




Re: Transience of .front in input vs. forward ranges

2012-11-12 Thread Tommi
On Monday, 12 November 2012 at 20:52:07 UTC, Jonathan M Davis 
wrote:
That wouldn't work. It's the complete opposite of what a 
generative range would require if it generates the return value 
in front. Narrow strings in particularly would be screwed by 
it, because their front is calculated, and

since it's a free function, as is popFront, there's no way
to save the return value of front in popFront. It has to
be calculated in front, and it's not at all transient.

It also screws with the ability to have sealed containers, 
since in that case, something like front would never

return by ref.

The refness of front's type really shouldn't have anything
to do with its transience. They're completely unrelated.


I didn't understand any of those arguments, so perhaps I still 
don't fully comprehend this issue.


Persistence
---
I thought "persistent X" means that there's a dedicated storage 
space for X, where it will exist after front returns, and that 
X's value will not be tampered with by calling popFront. That 
notion of persistence led me into thinking that a persistent 
front could (and really should) return either a reference to X, 
or a const reference if the range doesn't want to allow mutation 
through front.


Transience
--
The other side of the coin in my thinking was that transient 
means that X doesn't have a dedicated storage space. So transient 
X could be a variable that's local to function front, or it could 
be that X is stored in a buffer and will exist there after front 
returns, but the buffer would be overwritten by a call to 
popFront. And that notion of transience led me into thinking that 
transient front could never return a reference.


NOTE: All of the above assumes that D were designed so that all 
types had value semantics.
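
A quick sketch of the two notions as I defined them above (both 
range types here are made up):

struct PersistentFront
{
    int[] data;
    ref int front() { return data[0]; }      // X lives in dedicated storage,
                                             // so handing out a reference is fine
    void popFront() { data = data[1 .. $]; }
    bool empty() const { return data.length == 0; }
}

struct TransientFront
{
    int i;
    int front() { return i * i; }  // X is computed on the spot, so it has to
                                   // be returned by value
    void popFront() { ++i; }
    bool empty() const { return i >= 3; }
}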


Re: Transience of .front in input vs. forward ranges

2012-11-12 Thread Tommi
On Monday, 12 November 2012 at 22:44:22 UTC, Jonathan M Davis 
wrote:

On Monday, November 12, 2012 23:01:22 Tommi wrote:

NOTE: All of the above assumes that D were designed so that all
types had value semantics.


Which will never happen. So, any discussion based on that 
premise is pretty pointless. And this discussion would
never have happened in the first place if D had no reference 
types, because you'll never have a transient front with a value 
type.


Oh, it was just a matter of miscommunication then. I bet you 
missed the following post of mine, which made it clear that I 
wasn't suggesting a solution, but simply dreaming of a better 
language design (like I usually am):


On Monday, 12 November 2012 at 08:37:20 UTC, Tommi wrote:
Now I finally see that deepDup/deepCopy/clone is not a (good) 
solution, because it would be inefficient in a lot of 
situations. This whole mess makes me wish that D was designed 
so that all types had value semantics (by convention, since 
it's probably not possible to enforce by the language).


That would mean:
1) No classes. Just duck-typing based polymorphism à la go 
language.
2) Dynamic arrays of mutable types would have had to been 
implemented as copy-on-write types à la Qt containers.


Don't know about the performance implications of COW though.


I may throw these wild ideas around, but I don't do it in order 
to have them implemented by D, but so that some-one could crush 
those ideas with counter-arguments. But yeah, this would be a 
wrong thread to have that discussion anyway.


Re: Transience of .front in input vs. forward ranges

2012-11-12 Thread Tommi

On Monday, 12 November 2012 at 08:37:20 UTC, Tommi wrote:
This whole mess makes me wish that D was designed so that all 
types had value semantics (by convention, since it's probably 
not possible to enforce by the language).


"...so that all types had value semantics". That's a bit too 
harsh. Rather, there would need to be two kinds of types:


1) struct:  owns its data, value semantics
2) referer: doesn't own its data, reference semantics

Dynamic arrays would fall into the first category; they own their 
data and have value semantics.


Slices of dynamic arrays would be a separate type, falling into 
the second category; don't own the data, reference semantics.


Range could be of either kind of type.

You'd need two kinds of pointers too: the kind that owns its 
data, and the kind that references data that someone else owns.


And you'd be able to differentiate between these two kinds of 
types at compile-time.


Disclaimer: I promise not to spam this thread with this idea any 
further.


Re: Getting rid of dynamic polymorphism and classes

2012-11-11 Thread Tommi

On Sunday, 11 November 2012 at 13:24:07 UTC, gnzlbg wrote:

How would you create a vector of canvas and iterate over them?


I assume you mean to ask "How'd you create a vector of shapes?".
And I assume you mean "...by using the pointers-to-member-functions
design pattern introduced in the first post". Notice that in the
video I posted, they introduce a
similar but cleaner design pattern, that uses virtual functions
to accomplish the same goal. I think it's better to use that,
given that virtual function calls aren't noticeably slower than
calling through those member function pointers.

But anyway, here's how you'd do it using the
pointers-to-member-functions idiom:

#include <conio.h>
#include <functional>
#include <memory>
#include <type_traits>
#include <utility>
#include <vector>

template <int typeId>
class MyShape
{
public:
    void refresh()
    {
        _cprintf("MyShape%d refreshed\n", typeId);
    }
};

struct Shape
{
    std::function<void ()> refresh;

    template <typename S, typename _ = typename std::enable_if<
        std::is_same<decltype(S().refresh()), void>::value
    >::type>
    Shape(S s)
        : _s (new S(s))
    {
        // Capture the concrete pointer by value so the callable stays
        // valid even if this Shape object gets copied around.
        S* p = static_cast<S*>(_s.get());
        refresh = [p]() { return p->refresh(); };
    }

private:
    std::shared_ptr<void> _s; // shared_ptr<void> remembers the real deleter;
                              // unique_ptr<void> would need a custom one
};

int main()
{
    std::vector<Shape> shapes;
    shapes.emplace_back(MyShape<2>());
    shapes.emplace_back(MyShape<4>());

    for (auto& shape : shapes)
    {
        shape.refresh();
    }

    _getch();
    return 0;
}

// Prints:
// MyShape2 refreshed
// MyShape4 refreshed


Re: Getting rid of dynamic polymorphism and classes

2012-11-11 Thread Tommi

I should point out, just so that no-one is missing the obvious
here, that you'd be using static polymorphism through the use of
template functions whenever it's possible, i.e. when you don't
need to store polymorphic types.

So, you'd do this:

template <typename S>
auto func(const S& shape)
    -> typename std::enable_if<
        is_shape<S>::value
    >::type
{
    // ...
}

You would NOT do this:

void func(const Shape& shape)
{
    // ...
}

The big question is then: is the code bloat introduced by the
massive use of template functions worth the speed gained through
static polymorphism (vs. dynamic). Whether it is or not, the main
gain for me is in de-coupling of types, which helps in writing
truly generic code.


Re: Getting rid of dynamic polymorphism and classes

2012-11-11 Thread Tommi

On Monday, 12 November 2012 at 05:49:55 UTC, Tommi wrote:

Notice that in the video I posted, they introduce a
similar but cleaner design pattern, that uses virtual functions
to accomplish the same goal.


I didn't mean to say 'similar', it's rather different. It
accomplishes the same goal though, but using less memory and
through the use of virtual functions.


Re: Getting rid of dynamic polymorphism and classes

2012-11-10 Thread Tommi

On Saturday, 10 November 2012 at 09:23:40 UTC, Marco Leise wrote:

They work like this: Each object has as a pointer to a table
of method pointers. When you extend a class, the new method
pointers are appended to the list and existing entries are
replaced with overrides where you have them.
So a virtual method 'draw()' may get slot 3 in that table and
at runtime it is not much more than:

obj.vftable[3]();


Is vftable essentially an array? So, it's just a matter of 
offsetting a pointer to get access to any particular slot in the 
table?


If virtual method calls are really that fast, then I think the 
idiom in the code snippet of my first post is useless, and the 
idiom they present in that video I linked to is actually pretty 
great.
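
Something like this hand-rolled sketch is how I picture a slot 
call (illustration only, not what the compiler literally emits; 
all names made up):

alias Slot = void function(Obj*);

struct Obj
{
    Slot* vtbl;     // the hidden per-object pointer into the table
    int payload;
}

void drawImpl(Obj* self) { /* draw using self.payload */ }

Slot[1] objVTable;  // one table per "class"

void main()
{
    objVTable[0] = &drawImpl;        // normally filled in ahead of time
    auto o = Obj(objVTable.ptr, 42);
    o.vtbl[0](&o);                   // offset into the table, then an
                                     // indirect call; that's all
}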


Note: In order to make that video's sound bearable, you have to 
cut out the highest frequencies of the sound and lower some of 
the middle ones. I happened to have this Realtek HD Audio 
Manager which made it simple. Using the city filter helped a 
bit too. Don't know what it did.


Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread Tommi
On Thursday, 8 November 2012 at 17:50:48 UTC, 
DypthroposTheImposter wrote:
That example would crash hard if those stack allocated shapes 
were not in scope...


Making it work safely would probably require std::shared_ptr 
usage


But the correct implementation depends on the required ownership 
semantics. I guess with Canvas and Shapes, you'd expect the 
canvas to own the shapes that are passed to it. But imagine if, 
instead of Canvas and Shape, you have Game and Player. The game 
needs to pass messages to all kinds of different types of 
players, but game doesn't *own* the players. In that case, if a 
game passes a message to a player who's not in scope anymore, 
then that's a bug in the code that *uses* game, and not in the 
implementation of game. So, if Canvas isn't supposed to own those 
Shapes, then the above implementation of Canvas is *not* buggy.




Re: Getting rid of dynamic polymorphism and classes

2012-11-08 Thread Tommi

On Thursday, 8 November 2012 at 21:43:32 UTC, Max Klyga wrote:
Dinamic polimorphism isn't gone anywhere, it was just shifted 
to delegates.


But there's no restrictive type hierarchy that causes unnecessary 
coupling. Also, compared to virtual functions, there's no 
overhead from the vtable lookup. Shape doesn't need to search for 
the correct member function pointer, it already has it.


It's either that, or else I've misunderstood how virtual 
functions work.


Re: What's C's biggest mistake?

2012-11-08 Thread Tommi

On Thursday, 8 November 2012 at 18:45:35 UTC, Kagamin wrote:
Well, then read type declarations left-to-right. It's the 
strangest decision in design of golang to reverse type 
declarations. I always read byte[] as `byte array`, not `an 
array of bytes`.


How do you read byte[5][2] from left to right? "Byte arrays of 5 
elements, 2 times, in an array"? It's impossible. On the other 
hand, [5][2]byte reads nicely from left to right: "Array of 5 
arrays of 2 bytes". You start with the most important fact: that 
it's an array. Then you start describing what the array is made 
of.
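
For reference, this is what the D declaration actually means; you 
have to read it inside-out / right-to-left:

void main()
{
    byte[5][2] a;              // "2 arrays of (5 bytes)"
    assert(a.length == 2);     // the outermost dimension comes last in the type
    assert(a[0].length == 5);  // each element is a byte[5]
}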


Re: Transience of .front in input vs. forward ranges

2012-11-05 Thread Tommi
I don't think this whole issue has anything to do with ranges. I 
think this is an issue of assuming that the symbol = means "copy 
what's on the right to what's on the left", when in reality = 
could mean (if what's on the right has reference semantics): 
"make what's on the left reference the same thing that the thing 
on the right references".


I think all range types should be allowed to return whatever they 
want from their front property. It's the responsibility of the 
user of the range to *actually* copy what front returns (if 
that's what he intends), instead of assuming that = means copy.


In the code below, X marks the bug:

module main;

import std.stdio;
import std.range;

class MyData
{
int _value;
}

struct MyFwdRange
{
MyData _myData;

this(MyData md)
{
_myData = md;
}

@property MyData front()
{
return _myData;
}

@property bool empty() const
{
return _myData._value > 10;
}

void popFront()
{
_myData._value++;
}

@property MyFwdRange save() const
{
auto copy = MyFwdRange();
copy._myData = new MyData();
copy._myData._value = _myData._value;
return copy;
}
}

void printFirstAndSecond(R)(R range)
if (isForwardRange!R)
{
// X
auto first = range.front; // That is *not* copying

range.popFront();
writeln(first._value, " ", range.front._value);
}

void main()
{
auto mfr = MyFwdRange(new MyData());

printFirstAndSecond(mfr); // prints: 1 1 (instead of 0 1)
}


Re: Transience of .front in input vs. forward ranges

2012-11-05 Thread Tommi

On Tuesday, 6 November 2012 at 04:31:56 UTC, H. S. Teoh wrote:
The problem is that you can't do this in generic code, because 
generic code by definition doesn't know how to copy an

arbitrary type.


I'm not familiar with that definition of generic code. But I do 
feel that there's a pretty big problem with a language design if 
the language doesn't provide a generic way to make a copy of a 
variable. To be fair, e.g. C++ doesn't provide that either.
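
A tiny illustration of the gap I mean (the class here is made up):

class C { int v; }

void main()
{
    auto a = new C;
    a.v = 1;

    auto b = a;        // for a class, '=' copies the reference, not the object
    b.v = 2;
    assert(a.v == 2);  // 'a' was modified through 'b'

    int x = 1;
    int y = x;         // for an int, '=' really does copy the value
    y = 2;
    assert(x == 1);
}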




Generic and fundamental language design issue

2012-11-04 Thread Tommi
I have a fundamental language design talking point for you. It's 
not specific to D. I claim that, most of the time, a programmer 
cannot, and shouldn't have to, make the decision of whether to 
allocate on stack or heap. For example:


void func(T)()
{
auto t = ...; // allocate T on the heap or on the stack?
...
}

The question of whether variable t should lay in heap or stack, 
depends not only on the sizeof(T), but also on the context in 
which func is called at. If func is called at a context in which 
allocating T on stack would cause a stack overflow, then t should 
be allocated from the heap. On the other hand, if func is called 
at a context where T still fits nicely on the stack, it would 
probably be better and faster to allocate t on stack.


So, the question of whether to allocate that variable t on the 
stack or the heap is something that only the compiler (or the 
runtime) can answer. Is there any language where you have the 
ability to say "allocate T from wherever it's best"?


I wonder if it would be possible in D to let the compiler 
allocate dynamic arrays on stack when it can statically guarantee 
that it's safe to do so (no dangling references, never increasing 
the size of the array, etc).


Re: Generic and fundamental language design issue

2012-11-04 Thread Tommi
On Sunday, 4 November 2012 at 17:41:17 UTC, Andrei Alexandrescu 
wrote:


I don't think that claim is valid. As a simple example, 
polymorphism requires indirection (due to variations in size of 
the dynamic type compared to the static type) and indirection 
is strongly correlated with dynamic allocation.


Sure, there are situations where heap allocation is always 
needed. But couldn't the question of whether to heap- or 
stack-allocate be considered just an implementation detail of the 
language, one that is hidden so that the programmer doesn't even 
need to know what the words "heap" and "stack" mean?


I mean, wouldn't it be theoretically possible to sometimes even 
let the compiler allocate class objects in D from the stack, if 
the compiler can see that it's safe to do so (if that variable's 
exact type is known at compile time and never changes).
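
Manually, something in this direction already exists: 
std.typecons.scoped places a class instance on the stack, but the 
programmer still has to make the choice himself and guarantee 
that the object doesn't escape. A small sketch (the class is made 
up):

import std.typecons : scoped;

class Widget
{
    int size;
    this(int s) { size = s; }
}

void main()
{
    auto w = scoped!Widget(3);  // lives in this stack frame, no GC allocation
    assert(w.size == 3);
}                               // destroyed here; it must not escape this scope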

