Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/05/2010 12:43 PM, Gary Whatmore wrote:

bearophile Wrote:

- A way to list what attributes are modified in a method (similar to my @outer).


The compiler should do this itself.


Doesn't make sense.


My reference issue:
http://d.puremagic.com/issues/show_bug.cgi?id=4571


Walter, please close this as wontfix. We don't need those. These extra runtime 
checks will slow down my code. I know myself when my pointer is null.

  - G.W.


How, exactly, do you know when your references are null? Without 
runtime checks, of course.


Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/05/2010 02:39 PM, Kagamin wrote:

bearophile Wrote:


Spec# adds only few things to C# 2.0:
- Non-nullable types;


It's hard to tell whether they fix anything. When you cast nullable to 
non-nullable, you get your runtime exception as usual; if you "if" out access to 
a nullable (e.g. in a delayed method), you get your runtime exception again, or 
rather a logic bug.


Getting the error early is actually a lot better than getting the error 
late.


Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/05/2010 02:48 PM, Gary Whatmore wrote:

Pelle Månsson Wrote:


On 11/05/2010 12:43 PM, Gary Whatmore wrote:

bearophile Wrote:

- A way to list what attributes are modified in a method (similar to my @outer).


The compiler should do this itself.


Doesn't make sense.


My reference issue:
http://d.puremagic.com/issues/show_bug.cgi?id=4571


Walter, please close this as wontfix. We don't need those. These extra runtime 
checks will slow down my code. I know myself when my pointer is null.

   - G.W.


How, exactly, do you know when your references are null? Without
runtime checks, of course.


Good code design makes the null pointer exceptions go away. I carefully ponder 
what code goes where. Simple as that. That language just introduces a boatload 
of unnecessary cruft in the form of runtime null checks. I don't need to know the 
exact temporal location of nulls, it's enough if the code takes care of 
handling it at run time.


Say you write a library, with a class and a function. Something like this:

class C {
    /* stuff */
}

void foo(C[] cs) {
    foreach (c; cs) {
        // do stuff with c
    }
}

How do you handle null, in this case?
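For illustration, here is a minimal sketch (names like fooChecked are made up) of 
what the defensive version looks like when the type system gives no non-null 
guarantee; it reuses the class C from the snippet above:

import std.exception : enforce;

// Hypothetical defensive variant: without non-nullable types, both the
// array itself and every element have to be re-checked at run time.
void fooChecked(C[] cs) {
    enforce(cs !is null, "cs must not be null");
    foreach (c; cs) {
        enforce(c !is null, "array element must not be null");
        // do stuff with c
    }
}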


Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/05/2010 03:04 PM, Kagamin wrote:

Pelle Månsson Wrote:


Getting the error early is actually a lot better than getting the error
late.


OK, but it doesn't reduce the number of bugs. You had an error with nullables 
and you still have an error with non-nullables.


But in the non-nullable version you actually know where the bug is, 
namely where you assign the null to the thing that shouldn't be null. 
The segfault can come from any unrelated part of the program whereto 
your null has slipped, at any later point in time.


Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/05/2010 02:56 PM, Gary Whatmore wrote:

Pelle Månsson Wrote:


On 11/05/2010 02:39 PM, Kagamin wrote:

bearophile Wrote:


Spec# adds only few things to C# 2.0:
- Non-nullable types;


It's hard to tell whether they fix anything. When you cast nullable to 
non-nullable, you get your runtime exception as usual; if you "if" out access to 
a nullable (e.g. in a delayed method), you get your runtime exception again, or 
rather a logic bug.


Getting the error early is actually a lot better than getting the error
late.


Getting the error early means that less code compiles, and that makes rapid 
development fail and turns it into waterfall misery. It's important to make 
your tests run quickly in the background. One reason I prefer Python is that it 
lets me run even (semantically) buggy code, because syntactic correctness is 
enough. It really improves productivity.


Yes, let's turn off compiler errors entirely!


Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/05/2010 08:30 PM, bearophile wrote:

Denis Koroskin:


Is anyone FORCING you to use non-nullable references?
What's the big deal?


If non-nullables are introduced in D3, then Phobos will start to use them. So 
probably you can't avoid using some of them.

Bye,
bearophile


If we're still following the 'calls to phobos considered external 
input'-thing, the nulls still have to be checked. So, no loss there, 
performance wise.


Re: Spec#, nullables and more

2010-11-05 Thread Pelle Månsson

On 11/06/2010 12:41 AM, Walter Bright wrote:

Denis Koroskin wrote:

On Fri, 05 Nov 2010 23:44:58 +0300, Walter Bright
 wrote:


To eliminate null pointers is the same as shooting the canary in your
coal mine because its twitter annoys you.


I'm tired of pointing out that NO ONE is talking about eliminating
null pointers, but rather extending an existing type system to support
non-nulls. Your hate towards non-nullables comes from misunderstanding
of the concept.


Consider non-nullable type T:

T[] a = new T[4];
... time goes by ...
a[1] = foo;
a[3] = bar;
... more time goes by ...
bar(a[2]);

In other words, I create an array that I mean to fill in later, because
I don't have meaningful data for it in advance. What do I use to default
initialize it with non-nullable data? And once I do that, should
bar(a[2]) be an error? How would I detect the error?

In general, for a non-nullable type, how would I mark an instance as not
having meaningful data?

For example, an int is a non-nullable type. But there's no int value
that means "no meaningful value", and this can hide an awful lot of bugs.

I'm not sure at all that non-nullable types do more than make easy to
find bugs much, much harder to find.


I tried to find a good analogy but failed, so I'll just say that in the 
case you presented you would obviously not use a non-nullable type. As, 
you know, you wanted nulls in the array.


We're using signalling nans to get rid of the null-ish thing for floats, 
as well.
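For the float case, a tiny sketch of what that buys you (isNaN and the NaN 
default initializer are standard; the rest of the example is made up):

import std.math : isNaN;

void main() {
    double[] a = new double[4];   // every slot starts as double.init, which is a NaN
    a[1] = 1.5;
    a[3] = 2.5;

    assert(isNaN(a[2]));          // the slot that was never filled in is detectable
    assert(!isNaN(a[1]));
}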


Re: The D Scripting Language

2010-11-09 Thread Pelle Månsson

On 11/09/2010 06:12 PM, Andrei Alexandrescu wrote:

On 11/7/10 9:12 PM, Eric Poggel wrote:

On 11/7/2010 8:49 PM, Andrei Alexandrescu wrote:

On 11/7/10 5:34 PM, Jesse Phillips wrote:

Tomek Sowiñski Wrote:


This wraps up a thread from a few days ago. Pascal featured my D
examples
on his Scriptometer site.

http://rigaux.org/language-study/scripting-language/

D comes 17th out of 28, so it's so-so for scripting.

--
Tomek


When I looked over his scoring from the original post, it seemed > 100
was a great choice for a scripting language and everything below
wasn't. D hit where I expected, just good enough to use for scripting.


Perhaps a module std.scripting could help quite a lot, too.

Andrei


I'm having trouble thinking of something that would go in this module
that wouldn't be a better fit somewhere else. What do you envision?


I thought of it for a bit, but couldn't come up with anything :o). I
think you're right!

Someone proposed to add something like
http://docs.python.org/library/fileinput.html to Phobos. I think it's a
good idea. We have all mechanics in place (byLine/byChunk, chain). So it
should be easy to define byLine to accept an array of filenames:

import std.stdio;
import std.getopt;

void main(string[] args) {
    getopt(args, ...);
    foreach (line; File.byLine(args[1 .. $])) {
        ...
    }
}

I hypothetically made byLine a static method inside File to avoid
confusing beginners (one might think on first read that byLine goes line
by line through an array of strings).


Andrei


module std.script;

public import std.stdio, std.file, std.process, std.algorithm, ... etc

I use at least some of these for most of my programs/scripts. And 
std.all is probably a bit too heavy.


std.script could basically fetch us enough stuff to be on par with 
importless python.
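To make the idea concrete, a sketch of the kind of throwaway script this is 
aimed at; std.script does not exist, so both the module and the program below 
are hypothetical:

// hypothetical: one import pulls in the usual scripting toolbox
import std.script;

void main(string[] args) {
    // count lines across all files named on the command line
    size_t lines;
    foreach (name; args[1 .. $])
        lines += count(File(name).byLine());
    writeln(lines);
}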


[OT] Re: Passing dynamic arrays -- example *reference* array type

2010-11-09 Thread Pelle Månsson

On 11/09/2010 09:36 AM, spir wrote:

On Mon, 8 Nov 2010 17:08:32 -0800
Jonathan M Davis  wrote:


As Jesse says, they _are_ passed by reference. The struct itself _is_ the
reference.


(Well, that is a sensible redefinition of "reference"; but it is simply _not_ 
what the word means in any other context.)

It is true that the inner, hidden, memory area (static array) containing the elements is 
indeed referenced, actually "pointed", from the dynamic array struct:

struct ValueArray(Element) {
 Element* elements;
 uint length;
}
(Well, actually, this may not be a struct, but it's easier to imagine it so.)

But: the dyn array itself, meaning the struct, is not referenced: "a2 = a1" 
copies it, as well as parameter passing. And the fact that the internal memory is 
referenced is an implementation detail that should *not* affect semantics. The inner 
pointer is there because we need some kind of indirection to implement variable-size 
thingies, and the means for this is pointers.
This is precisely where & why people get bitten: implementation leaks out into 
semantics.
Actually, one could conceptually replace the (pointer,length) pair by a single 
field of type MemoryArea -- which would be a plain value. Then, there would be 
no more (visible) pointer in the dyn array, right? (Actually, it would just be 
hidden deeper inside the MemoryArea field... but that is again implementation 
detail!)

We should not mess up pointers used for implementation mechanics, like in the 
case of dyn arrays, or more generally variable size data structure, with 
pointers used as true references carrying semantics, like in the case of the 
difference between struct and class.

And precisely, replacing the array struct by a class, or explicitly referencing 
the struct, would make a *reference* dyn array type. See below an example of a 
primitive sort of such an array type (you can only put new elements in it ;-), 
implemented as class.
After "a2 = a1", every change to one of the vars affects the other var; whether 
the change requires reallocation is irrelevant; this detail belongs to implementation, 
not to semantics.
Now, replace class with struct and you have a type for *value* dyn arrays. 
Which works exactly like D ones.
The assertion will fail; and output should be interesting ;-)

Hope it's clear, because I cannot do better.
I do not mean that D arrays are bad in any way. They work perfectly and are very efficient. 
Enforcing a true interface between implementation and semantics would certainly have a 
relevant cost in terms of space & time. But please, stop stating D arrays are 
referenced if you want newcomers to have a chance to understand the actual behaviour, to 
use them without being constantly bitten, and to stop complaining.


Denis

class RefArray(Element) {
    Element* elements;
    uint length;
    private uint capacity;
    this () {
        this.elements = cast(Element*) malloc(Element.sizeof);
        this.capacity = 1;
        this.length = 0;
    }
    void reAlloc() {
        writeln("realloc");
        this.capacity *= 2;
        size_t memSize = this.capacity * Element.sizeof;
        this.elements = cast(Element*) realloc(this.elements, memSize);
    }
    void put(Element element) {
        if (this.length >= this.capacity)
            this.reAlloc();
        this.elements[this.length] = element;
        ++ this.length;
    }
    void opBinary(string op) (Element element)
    if (op == "+") {


...wait!

Did you just overload binary operator + to mean append?


Re: Function, signatures and tuples

2010-11-16 Thread Pelle Månsson

On 11/13/2010 11:43 AM, Russel Winder wrote:

On Sat, 2010-11-13 at 08:18 +, Iain Buclaw wrote:
[ . . . ]

import std.typecons; ?


Hummm... I thought I had put that in but clearly I had not :-((  OK so
that explains the bulk of the problems on this code, I knew it was
something stupid on my part, thanks for spotting it.

However, now we may be getting to something more serious.  The line:

  foreach ( i ; 0 .. numberOfTasks ) { inputData[i] = tuple ( 1 + i * 
sliceSize , ( i + 1 ) * sliceSize , delta ) ; }

now results in the error:

 
/home/users/russel/lib.Linux.x86_64/DMD2/bin/../../src/phobos/std/typecons.d(662):
 Error: can only initialize const member _field_field_2 inside constructor
 
/home/users/russel/lib.Linux.x86_64/DMD2/bin/../../src/phobos/std/typecons.d(26):
 Error: template instance std.typecons.tuple!(long,long,immutable(double)) 
error instantiating

Which at first sight seems to indicate an error in the typecons package
of Phobos.  On the other hand, it is probably more reasonable to assume
I still have something stupid wrong in my code.


It's not your code, you can work around it with cast(double)delta, or 
using tuple!(long,long,double) explicitly, I think.


Tuple doesn't handle immutable or const really well, yet.
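A sketch of the suggested workaround; the variable names and types are guesses 
at Russel's code, so treat it as illustration only:

import std.typecons : tuple, Tuple;

// cast(double) drops the immutability, so Tuple is instantiated with a
// plain double field instead of immutable(double).
Tuple!(long, long, double) makeEntry(long i, long sliceSize, immutable double delta) {
    return tuple(1 + i * sliceSize, (i + 1) * sliceSize, cast(double) delta);
}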


Re: Principled method of lookup-or-insert in associative arrays?

2010-11-20 Thread Pelle Månsson

On 11/20/2010 05:09 PM, spir wrote:

???
backdoor and s do not denote the same element. One is a mutable array, the other is 
immutable. Why should changing backdoor affect s? Whether backdoor and chars denote the 
same array depends on whether "=" copies dyn arrays or not. But from immutable 
string to mutable array, there must be a copy (read: dup).


Must also be a copy the other way. Secret heap allocations are not fun.


Anyway, the most annoying issue is not about assignments inside a given scope, 
but parameter passing (including implicit ones like in Andrei's example of 
foreach).
void f (char[] chars) {}
void g (string str) {}
...
string str = "abc";
char[] chars = "abc".dup;
f(str);
g(chars);
__trials__.d(30): Error: function __trials__.f (char[] chars) is not callable 
using argument types (string)
__trials__.d(31): Error: function __trials__.g (string str) is not callable 
using argument types (char[])
...
f(str.dup); // ok
g(chars.idup);  // ditto



I do not understand the alternative.



By the way, why isn't the definition of string immutable(char[]), instead of 
immutable(char)[]?



string s = "abc";
s = "bde"; // fails with immutable(char[]), rightly so.


Re: DIP9 -- Redo toString API

2010-11-21 Thread Pelle Månsson

On 11/21/2010 11:10 AM, spir wrote:

What I do not want is toString to be deprecated in any case. The proposal would be 
OK if it introduced an _alternative_ for the cases (?) where string output 
efficiency is relevant. The language could/should default to writeTo if 
toString is not defined, *and* conversely default to toString if writeTo is not 
defined. But in no way let down toString. Below is a quote of the DIP's relevant 
part.
It is also, certainly, a very good idea to allow passing formatting data to 
default textual expression routines; but I fail to see why deprecating toString 
is necessary for that. Instead, it would certainly be a highly useful toString 
parameter in numerous cases.


Instead of

return "something";

write

sink("something");

You lose nothing. You do, however, gain the ability to output an object 
without concatenating a string over and over.



I consider time & space efficiency for string output to be irrelevant, not even 
a theoretical question. Maybe I simply have never reached points where it would matter? Have 
you ever stepped on an app not running correctly because toString allocates on the 
heap? Or is it random thoughts?
First, as the proposal states, "Debug output is a common need when testing code or 
logging data." Most uses of such tools are for programmer own feedback -- user 
interface requires far more sophisticated, and in most case custom, tools. Who cares how 
much memory or time is required? Memory is eventually freed anyway, and output speed is 
not limited on the program side, but well by rendering computations and/or physical 
limits (try to write to buffer vs file vs terminal).
Moreover, string output tasks often come last in a process chain -- that's what 
a programmer waits for to get useful information on program behaviour and be 
able to control, diagnose, compare...

But the key point is that language features like D's toString are far from being 
used only for _direct_ string output. They are extremely useful for numerous 
tasks of string manipulation and processing, most of which are again for the 
programmer's own use. Sometimes, at the end of the process chain comes string 
output -- but indirectly.


With this, it doesn't need to be. Instead of writing another function 
for non-debuglike string output, just use writeTo.
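A minimal sketch of what a type's writeTo might look like under the proposal 
(the exact sink signature is my assumption of DIP9's shape, not a final spec), 
plus the string-building wrapper described above:

import std.conv : to;

struct Point {
    int x, y;

    // Proposed style: stream pieces to a sink instead of returning a string.
    void writeTo(scope void delegate(const(char)[]) sink) const {
        sink("Point(");
        sink(to!string(x));
        sink(", ");
        sink(to!string(y));
        sink(")");
    }
}

// Getting a string back when you actually want one:
string asString(Point p) {
    string s;
    p.writeTo((const(char)[] data) { s ~= data; });
    return s;
}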


Re: DIP9 -- Redo toString API

2010-11-21 Thread Pelle Månsson

On 11/21/2010 03:17 PM, spir wrote:

No, just use toString. As said above, I don't want to writeTo, I want the 
string; and be free to do whatever I want to with it. Being only able to write 
is... (rather censure).


I... don't think you understand what writeTo is supposed to do.

Inside to!string, it would be something like this:

string s;
arg.writeTo((const(char)[] data) { s ~= data; });
return s;

There, you got the string. It will even be in a function for you, so 
you'll never have to write that piece of code. If you ever need the 
string of an object, you just write to!string(obj). No functionality 
lost, ever, at all.


However, writeln can do this:

foreach (arg; args) {
    arg.writeTo((const(char)[] data) { outputbuffer.put(data); });
}

thereby removing the need to store the string, and the extra allocations.

This design is much cleaner than the current strategy, and also more 
flexible.


Re: DIP9 -- Redo toString API

2010-11-21 Thread Pelle Månsson

On 11/21/2010 09:49 PM, spir wrote:

(Sorry for the irony.) "Make simple things easy." Have to write a delegate to 
get feedback... to print a bit of text.
(What is "hello, world!" in D?)



Missing the point, are we? Hello world is unchanged.


Re: DIP9 -- Redo toString API

2010-11-21 Thread Pelle Månsson

On 11/21/2010 09:37 PM, Jacob Carlborg wrote:

Inside to!string, it would be something like this:

string s;
arg.writeTo((const(char)[] data) { s ~= data; });
return s;


Why can't toString do the same ?


Because then you'd have to write it? I'm afraid I don't understand.


Re: Passing dynamic arrays

2010-11-26 Thread Pelle Månsson

On 11/26/2010 07:22 PM, Bruno Medeiros wrote:

But more importantly, there is a simple solution: don't write such code,
don't use arrays as if they were lists, preallocate instead and then
fill the array. So with this alternative behavior, you can still write
efficient code, and nearly as easily.


What about when you don't know the length before, or working with 
immutable elements?



The only advantage of the current behavior is that it is more noob
friendly, which is an advantage of debatable value.


I believe you will find to have that exactly backwards.


Re: String compare performance

2010-11-28 Thread Pelle Månsson

On 11/28/2010 12:44 PM, bearophile wrote:

Robert Jacques:


I've spent some time having fun this afternoon optimizing array-equals
using vectorization techniques. I found that vectorizing using ulongs
worked best on my system except with the shortest strings, where a simple
Duff's device edged it out. If you'd like to try it out on your data set:


Thank you for your work :-)
A version with your function, D version #8:


// D version #8
import std.file: read;
import std.c.stdio: printf;
import std.exception: assumeUnique;

bool arrayComp(bool useBitCompare=true, T)
   (const T[] a, const T[] b) pure nothrow {
 if (a.length != b.length)
 return false;

 static if (useBitCompare) {
 auto pab = cast(ubyte*)a.ptr;
 auto pbb = cast(ubyte*)b.ptr;
 if (pab is pbb)
 return true;

 auto byte_length = a.length * T.sizeof;
 auto pa_end = cast(ulong*)(pab + byte_length);

 final switch (byte_length % ulong.sizeof) {
 case 7: if (*pab++ != *pbb++) return false;
 case 6: if (*pab++ != *pbb++) return false;
 case 5: if (*pab++ != *pbb++) return false;
 case 4: if (*pab++ != *pbb++) return false;
 case 3: if (*pab++ != *pbb++) return false;
 case 2: if (*pab++ != *pbb++) return false;
 case 1: if (*pab++ != *pbb++) return false;
 case 0:
 }

 auto pa = cast(ulong*)pab;
 auto pb = cast(ulong*)pbb;

 while (pa < pa_end) {
 if (*pa++ != *pb++)
 return false;
 }
 } else { // default to a short duff's device
 auto pa = a.ptr;
 auto pb = b.ptr;
 if (pa == pb)
 return true;
 auto n  = (a.length + 3) / 4;

 final switch (a.length % 4) {
 case 0: do { if (*pa++ != *pb++) return false;
 case 3:  if (*pa++ != *pb++) return false;
 case 2:  if (*pa++ != *pb++) return false;
 case 1:  if (*pa++ != *pb++) return false;
} while (--n > 0);
 }
 }

 return true;
}

int test(string data) {
 int count;
 foreach (i; 0 ..  data.length - 3) {
 auto codon = data[i .. i + 3];
 if (arrayComp(codon, "TAG") || arrayComp(codon, "TGA") || arrayComp(codon, 
"TAA"))
 count++;
 }
 return count;
}

void main() {
 char[] data0 = cast(char[])read("data.txt");
 int n = 300;
 char[] data = new char[data0.length * n];
 for (size_t pos; pos < data.length; pos += data0.length)
 data[pos .. pos+data0.length] = data0;
 string sdata = assumeUnique(data);

 printf("%d\n", test(sdata));
}


Timings, dmd compiler, best of 4, seconds:
   D #1: 5.72
   D #4: 1.84
   D #5: 1.73
   Psy:  1.59
   D #8: 1.51
   D #7: 0.56 (like #6 without length comparisons)
   D #2: 0.55
   D #6: 0.47
   D #3: 0.34


Your function can't be inlined because it's big, so this code isn't faster than 
inlined code like this generated by the compiler:
(codon.length == 3 && codon[0] == 'T' && codon[1] == 'A' && codon[2] == 'G')

Bye,
bearophile


I don't have your data set, but for me using random data this was within 
a factor 2 of your #3, without any fiddly code.


import std.algorithm : find;
import std.array : empty, popFront;

int test(string data) {
    int count;
    while (true) {
        data = data.find("TAG", "TGA", "TAA")[0];

        if (data.empty) return count;

        count += 1;
        data.popFront;
    }
}

Also, this one is far easier to generalize for strings of different 
lengths, and such :-)


Re: String compare performance

2010-11-28 Thread Pelle Månsson

On 11/28/2010 04:48 PM, bearophile wrote:

Pelle Månsson:


I don't have your data set,


In my first post I have explained how to create the data set, using a little 
Python script that I have listed there. You just need a Python2 interpreter.



but for me using random data this was within a factor 2 of your #3, without any 
fiddly code.


Thank you for your code. I have added your version, plus a modified version 
that works on ubytes, because it's faster, #9 and #10:

-

// D version #9
import std.file: read;
import std.c.stdio: printf;
import std.algorithm: find;
import std.array: empty, popFront;

int test(char[] data) {
 int count;
 while (true) {
 data = data.find("TAG", "TGA", "TAA")[0];
 if (data.empty)
 return count;
  count++;
  data.popFront();
  }
}

void main() {
 char[] data0 = cast(char[])read("data.txt");
 int n = 300;
 char[] data = new char[data0.length * n];
 for (size_t pos; pos < data.length; pos += data0.length)
 data[pos .. pos + data0.length] = data0;

 printf("%d\n", test(data));
}

-

// D version #10
import std.file: read;
import std.c.stdio: printf;
import std.algorithm: find;
import std.array: empty, popFront;

int test(ubyte[] data) {
 int count;
 while (true) {
 data = data.find("TAG", "TGA", "TAA")[0];
 if (data.empty)
 return count;
  count++;
  data.popFront();
  }
}

void main() {
 ubyte[] data0 = cast(ubyte[])read("data.txt");
 int n = 300;
 ubyte[] data = new ubyte[data0.length * n];
 for (size_t pos; pos < data.length; pos += data0.length)
 data[pos .. pos + data0.length] = data0;

 printf("%d\n", test(data));
}

-

Timings, dmd compiler, best of 4, seconds:
   D #1:  5.72
   D #9:  5.04
   D #10: 3.31
   D #4:  1.84
   D #5:  1.73
   Psy:   1.59
   D #8:  1.51
   D #7:  0.56
   D #2:  0.55
   D #6:  0.47
   D #3:  0.34


So on this PC it's not much faster. And generally I don't like to use find, empty 
and popFront to solve this problem, it's quite unnatural. Here I have explained 
what I think is a good enough solution to this performance problem:
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=123044

Bye,
bearophile


Measuring incorrectly in a performance test is a silly mistake, I was 
indeed wrong about the factor two thing!


I thought find was specialized for chars so it wouldn't require casting 
to ubyte. Maybe I was wrong again.


Thank you for your engaging performance oriented posts. :-)


Re: New syntax for string mixins

2010-12-16 Thread Pelle Månsson

On 12/15/2010 11:00 PM, Nick Sabalausky wrote:

"Jonathan M Davis"  wrote in message
news:mailman.1035.1292441722.21107.digitalmar...@puremagic.com...

On Wednesday, December 15, 2010 11:27:47 Jacob Carlborg wrote:


That was my idea as well, that

@get_set("int", "bar");

could be translated into

mixin(get_set("int", "bar"));

just like scope statements are translated into try/catch/finally.


Honestly, I don't see much gain in using @ rather than mixin(). It's a
little
less typing, but that's it.


It does seem like a small difference, just replacing "mixin" with "@" and
removing one layer of parens. But I think that extra layer of parens, minor
as it seems, makes a big difference in the readability (and "typeability")
of mixin invocations. Those extra parens do get to be a real bother, major
visual noise at least to my eyes.



I agree with this. Actually, just removing the parentheses would be a 
huge gain for me.



And it precludes stuff like mixin("lhs " ~ op ~ " rhs") as happens all the time in overloaded operator functions.



I don't see why these shouldn't work:

@"int foo;";
return @("lhs " ~ op ~ " rhs");

At least with just the "@" part of the proposal. Maybe the delegate thing
might make it tricker, I dunno.



This could work, but I don't think anyone is suggesting completely 
replacing the mixin. I think @ could be a function/template thing, and 
have strings in a more explicit mixin"";


Then again, inconsistency sucks.


Re: emscripten

2010-12-16 Thread Pelle Månsson

On 12/15/2010 03:17 PM, Michael Stover wrote:

And that's the problem - we're talking about applications that happen to
be distributed via the web, not a "website".  Everyone's demands that it
work in lynx, FF2, with javascript turned off, etc are ludicrous.


I disagree.


You don't get to make such demands of applications.


Yes I do!


Some applications are Windows only.


Not running them.


Some don't follow platform standards.


They suck!


Some require 1GB to work effectively.


Probably not running them.


These expectations are invalid.


They are not.

The idea that you shouldn't expect things to be good is backwards. 
Web-as-a-platform isn't good. Maybe it can be in the future. It's not now.


Re: Binary heap method to update an entry.

2010-12-17 Thread Pelle Månsson

On 12/16/2010 04:53 PM, Andrei Alexandrescu wrote:

On 12/16/10 7:55 AM, Matthias Walter wrote:

On 12/16/2010 04:17 AM, Andrei Alexandrescu wrote:

On 12/15/10 10:21 PM, Matthias Walter wrote:

Hi all,

I uploaded [1] a patch for std.container to use BinaryHeap as a
priority
queue. For the latter one it is often necessary to change a value
(often
called decreaseKey in a MinHeap). For example, Dijkstra's shortest path
algorithm would need such a method. My implementation expects that the
user calls the "update" method after changing the entry in the
underlying store.

My method works for value-decrease and -increase, but one might want to
split this functionality into two methods for efficiency reasons. But I
thought it'll be better, because one can change the MaxHeap to be a
MinHeap by changing the template alias parameter, but this wouldn't
change the method names :-)

The patch is against current svn trunk.

[1]
http://xammy.xammy.homelinux.net/files/BinaryHeap-PriorityQueue.patch


A better primitive is to define update to take an index and a new
value, such that user code does not need to deal simultaneously with
the underlying array and the heap. No?

Well, I thought of the case where you have an array of structs and use a
custom less function for ordering. There you might not have a new value,
i.e. a replaced struct, but just a minor change internally. But I see
your idea, in most cases you would just call update after replacing your
array entry... Could we provide both, maybe?


Good point. Here's what I suggest:

/**
Applies unary function fun to the element at position index, after which
moves that element to preserve the heap property. (It is assumed that
fun changes the element.) Returns the new position of the element in the
heap.

Example:


int[] a = [ 4, 1, 3, 2, 16, 9, 10, 14, 8, 7 ];
auto h = heapify(a);
assert(equal(a, [ 16, 14, 10, 9, 8, 7, 4, 3, 2, 1 ]));
h.update!"a -= 5"(1);
assert(equal(a, [ 16, 10, 9, 9, 8, 7, 4, 3, 2, 1 ]));

*/
size_t update(alias fun)(size_t index);

Let me know of what you think, and thanks for contributing. When using
unaryFun inside update, don't forget to pass true as the second argument
to unaryFun to make sure you enact pass by reference.

Obviously, if you have already changed the element, you may always call
update with an empty lambda.


Andrei


Isn't passing the index slightly weird? Shouldn't it use a predicate, or 
something?


Looks to me like I'd be doing something like this:

auto arr = myheap.release();
auto i = indexOf!pred(arr);
myheap.assume(arr);
myheap.update!"a.fiddle()"(i);

Would I be doing it wrong?


Re: Threads and static initialization.

2010-12-18 Thread Pelle Månsson

On 12/18/2010 07:53 AM, Jonathan M Davis wrote:

On Friday 17 December 2010 19:52:19 Vladimir Panteleev wrote:

On Sat, 18 Dec 2010 03:06:26 +0200, Jonathan M Davis

wrote:

And how about every other variable?


I'm sorry, I'm not following you. What other variables?

* Globals and class/struct/function statics are in TLS
* Explicitly shared vars are in the data segment
* Locals are in the stack or registers (no problem here)
* Everything else (as referenced by the above three) is in the heap


Value types which are on the stack are going to be okay. That's true. You're right.
They wouldn't be in TLS.

However, anything that involves the heap wouldn't be okay, and that's a _lot_ of
variables. Any and all references and pointers - including dynamic arrays - would
be in TLS unless you marked them as shared. So, you'd have to use shared all
over the place except in very simple cases or cases where you went out of your
way to avoid using the heap.

D is designed to avoid using shared memory except in cases where data is
immutable. So, if you try to set up your program so that it uses shared memory
primarily, then you're going to have problems. And not calling the static
constructors on thread creation would mean using shared memory for everything
which uses the heap. You couldn't even create local variables which are class
objects using TLS in such a case, because they might have a static constructor
which then would never have been called.

Really, I don't think that trying to avoid calling static constructors is going
to work very well. It may very well be a good reason to minimize what's done in
static constructors, but skipping them entirely would be very difficult to pull 
off
safely.

- Jonathan M Davis


The heap is the heap is the heap. You can have local variables on the 
heap which are not shared. I think you are overstating the need for 
shared, probably some misunderstanding.


You could not have classes/structs with static members, or call 
functions with static variables. Everything else should work, probably.


The spawned thread could use the parent thread's immutable globals, to 
avoid the need to construct them in the spawned tls. I don't know if 
this is actually possible :-)


Re: Threads and static initialization.

2010-12-18 Thread Pelle Månsson

On 12/18/2010 10:00 AM, Jonathan M Davis wrote:

The problem is that the OP wants the static constructors to be skipped. If
they're skipped, anything and everything which could be affected by that can't 
be
used. That pretty much means not using TLS, since the compiler isn't going to be
able to track down which variables in TLS will or won't be affected by it. So,
you're stuck using shared memory only. _That_ is where the problem comes in.


Exactly, not using TLS. You can still use the heap, as it is not thread 
local. Meaning you can create non-shared anything all you like, as long 
as you're not using TLS.
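A small sketch of the distinction (the variable and function names are made up): 
what sits in TLS is the named global or static slot, not the heap memory a stack 
local points to.

int tlsCounter;               // one copy per thread: D's default (TLS) storage
__gshared int globalCounter;  // explicitly process-global, not in TLS

void worker() {
    // Heap allocation reached only through a stack local:
    // no TLS slot and no static constructor involved.
    auto buffer = new int[](1024);
    buffer[0] = 42;
}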


Re: Why is D slower than LuaJIT?

2010-12-23 Thread Pelle Månsson

On 12/22/2010 11:04 PM, Andreas Mayer wrote:

To see what performance advantage D would give me over using a scripting 
language, I made a small benchmark. It consists of this code:


auto L = iota(0.0, 1000.0);
auto L2 = map!"a / 2"(L);
auto L3 = map!"a + 2"(L2);
auto V = reduce!"a + b"(L3);


It runs in 281 ms on my computer.

The same code in Lua (using LuaJIT) runs in 23 ms.

That's about 10 times faster. I would have expected D to be faster. Did I do 
something wrong?

The first Lua version uses a simplified design. I thought maybe that is unfair 
to ranges, which are more complicated. You could argue ranges have more 
features and do more work. To make it fair, I made a second Lua version of the 
above benchmark that emulates ranges. It is still 29 ms fast.

The full D version is here: http://pastebin.com/R5AGHyPx
The Lua version: http://pastebin.com/Sa7rp6uz
Lua version that emulates ranges: http://pastebin.com/eAKMSWyr

Could someone help me solving this mystery?

Or is D, unlike I thought, not suitable for high performance computing? What 
should I do?



I changed the code to this:

auto L = iota(0, 1000);
auto L2 = map!"a / 2.0"(L);
auto L3 = map!"a + 2"(L2);
auto V = reduce!"a + b"(L3);

and ripped the caching out of std.algorithm.map. :-)

This made it go from about 1.4 seconds to about 0.4 seconds on my 
machine. Note that I did no rigorous or scientific testing.


Also, if you really really need the performance you can change it all to 
lower level code, should you want to.
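For example, the whole pipeline above boils down to a plain loop; a sketch (not 
benchmarked) of what "lower level" means here:

double sumLowLevel() {
    double total = 0;
    foreach (i; 0 .. 1000)
        total += i / 2.0 + 2;   // same arithmetic as the map/map/reduce chain
    return total;
}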


Re: To help LDC/GDC

2013-04-09 Thread Pelle Månsson

On Tuesday, 9 April 2013 at 13:49:12 UTC, Dicebot wrote:
On Tuesday, 9 April 2013 at 12:56:04 UTC, Andrei Alexandrescu 
wrote:

It is valid code. It is "weak pure". The "pure" keyword means either
"strong pure" or "weak pure" depending on the function body. Crap.


s/body/signature/
s/Crap/Awesome/


Not gonna argue latter but former is just wrong.

struct Test
{
    int a;

    pure int foo1() // strong pure
    {
        return 42;
    }

    pure int foo2() // weak pure
    {
        return a++;
    }
}

Signature is the same for both functions.


Think of all your member functions as non-member functions taking 
the object as a ref parameter.


pure int foo1(ref Test this) {
    return 42;
}

shouldn't be strongly pure (as it can access mutable non local 
state).
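The same distinction with free functions, as a sketch (the names are mine):

// Weakly pure: it may read and write through its mutable ref parameter.
pure int bump(ref int counter) {
    return counter++;
}

// Strongly pure: only value parameters, so equal arguments give equal results.
pure int twice(int x) {
    return 2 * x;
}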


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Max Samukha wrote:

On Tue, 20 Oct 2009 18:12:39 +0800, Lionello Lunesu
 wrote:


On 20-10-2009 6:38, Andrei Alexandrescu wrote:

I hereby suggest we get rid of new for class object creation. What do
you guys think?

I don't agree with this one.

There's extra cost involved, and the added keyword makes that clear. 
Also, somebody mentioned using 'new' to allocate structs on the heap; 
I've never actually done that, but it sounds like using 'new' would 
be the perfect way to do just that.


L.


I don't think the extra cost should be emphasized with 'new' every
time you instantiate a class. For example, in C#, they use 'new' for
creating structs on stack (apparently to make them consistent with
classes, in a silly way).

I think the rarer cases when a class instance is allocated in-place (a
struct on heap) can be handled by the library.

BTW, why "in-situ" is better in this context than the more common
"in-place"? Would be nice to know.


The term originated with this:

class A {
InSitu!B b;
...
}

meaning that B is embedded inside A. But I guess InPlace is just as good.


Andrei


I actually do not understand what InSitu is supposed to mean.

I like the name Scope, but InPlace works for me.


Re: Proposed D2 Feature: => for anonymous delegates

2009-10-20 Thread Pelle Månsson

Jason House wrote:

Andrei Alexandrescu Wrote:


Jason House wrote:

Am I the only one that has trouble remembering how to write an inline
anonymous delegate when calling a function? At a minimum, both Scala
and C# use (args) => { body; } syntax. Can we please sneak it into
D2?

We have (args) { body; }

Andrei


Somehow, I missed that. What kind of type inference, if any, is allowed? Scala and C# 
allow omitting the type. Lately I'm doing a lot of (x) => { return x.foo(7); } in C# 
and it's nice to omit the amazingly long type for x. The IDE even knows the type of x 
for intellisense... I think scala would allow x => foo(7), or maybe even => 
_.foo(7) or even _.foo(7). I haven't written much scala, so I may be way off...


Recent experiments by myself indicate you cannot omit the type and you 
cannot use auto for the type, so you actually need to type your 
VeryLongClassName!(With, Templates) if you need it.


I sort of miss automatic type deduction.


Re: Proposed D2 Feature: => for anonymous delegates

2009-10-21 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Jason House wrote:

Andrei Alexandrescu Wrote:


Jason House wrote:

Am I the only one that has trouble remembering how to write an inline
anonymous delegate when calling a function? At a minimum, both Scala
and C# use (args) => { body; } syntax. Can we please sneak it into
D2?

We have (args) { body; }

Andrei


Somehow, I missed that. What kind of type inference, if any, is 
allowed? Scala and C# allow omitting the type. Lately I'm doing a lot 
of (x) => { return x.foo(7); } in C# and it's nice to omit the 
amazingly long type for x. The IDE even knows the type of x for 
intellisense... I think scala would allow x => foo(7), or maybe even 
=> _.foo(7) or even _.foo(7). I haven't written much scala, so I may 
be way off...


Recent experiments by myself indicate you cannot omit the type and you 
cannot use auto for the type, so you actually need to type your 
VeryLongClassName!(With, Templates) if you need it.


I sort of miss automatic type deduction.


Actually, full type deduction should be in vigor, but it is known that 
the feature has more than a few bugs. Feel free to report any instance 
in which type deduction does not work in bugzilla.


Andrei


int f(int delegate(int) g) {
    return g(13);
}
void main() {
    f((auto x) { return x+13; });
}

This does not compile in D v2.034. Am I missing something?


Re: Revamping associative arrays

2009-10-21 Thread Pelle Månsson

Piotrek wrote:

Bill Baxter pisze:

On Sun, Oct 18, 2009 at 1:12 PM, Piotrek  wrote:

Bill Baxter pisze:

I think the default should be to iterate over whatever 'in' looks at.

I was almost convinced, because that rule makes sense. But treating normal 
arrays and associative arrays alike makes more sense to me.


void fun (SomeObject object) {
    foreach (element; object.arr1) { // normal, but how do I know at first look
        // just do something with element
    }

    foreach (element; object.arr2) { // assoc, but how do I know at first look
        // just do something with element, hopefully not index
    }
}


That sounds like an argument that there should be no default, because
either way it's not clear whether you're iterating over keys or
values. 



Really?! That wasn't my intention :) In both cases I wish it were values ;)

 > Just get rid of the one-argument foreach over AAs altogether and force the user to be
 > explicit about it.

I wouldn't do so. Would anybody make an error by thinking that foreach 
(elem; table) should iterate over keys?


Maybe I'm not thinking correctly but for me an assoc array is just an 
array with additional key (index) features thanks to which I save space 
and/or have more indexing methods than only integers.



e.g.

Normal array

No.    Item
0      George
1      Fred
2      Dany
3      Lil

Index/key is inferred from position (offset)


Now Assoc array:

No.    Item
10     Lindsey
21     Romeo
1001   C-Jay

Or
No.       Item
first     Europe
second    South America
third     Australia

Or
Names occurrence frequency:

No.    Item
Andy   21
John   23
Kate   12

And the only difference is the need for using a hash function for value 
lookup (calculate position) which should not bother a user when he 
doesn't care.


Then when you ask somebody to iterate over the tables, what he will do 
almost for certain? If it would be me, you know... values all the time. 
Even for last example most important values are those numbers (despite 
in this case they're meaningless without keys).


Cheers
Piotrek




Put it this way:
 Is there any time you are interested in the values without the keys?
 Is there any time you are interested in the keys without the values?

If you're not interested in the keys, the real question would be why you 
are using an associative array instead of just an array.


I can think of at least one example of when you want key iteration, 
which would be when using a bool[T] as a set.
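A small sketch of that use (byKey is the current spelling; older runtimes can use 
.keys at the cost of an array allocation):

void main() {
    bool[string] seen;          // associative array used as a set
    seen["apple"] = true;
    seen["pear"]  = true;

    // Only the keys matter here; the bool values are just placeholders.
    foreach (name; seen.byKey) {
        // use name
    }
}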


Re: Proposed D2 Feature: => for anonymous delegates

2009-10-21 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Jason House wrote:

Andrei Alexandrescu Wrote:


Jason House wrote:
Am I the only one that has trouble remembering how to write an 
inline

anonymous delegate when calling a function? At a minimum, both Scala
and C# use (args) => { body; } syntax. Can we please sneak it into
D2?

We have (args) { body; }

Andrei


Somehow, I missed that. What kind of type inference, if any, is 
allowed? Scala and C# allow omitting the type. Lately I'm doing a 
lot of (x) => { return x.foo(7); } in C# and it's nice to omit the 
amazingly long type for x. The IDE even knows the type of x for 
intellisense... I think scala would allow x => foo(7), or maybe 
even => _.foo(7) or even _.foo(7). I haven't written much scala, so 
I may be way off...


Recent experiments by myself indicate you cannot omit the type and 
you cannot use auto for the type, so you actually need to type your 
VeryLongClassName!(With, Templates) if you need it.


I sort of miss automatic type deduction.


Actually, full type deduction should be in vigor, but it is known 
that the feature has more than a few bugs. Feel free to report any 
instance in which type deduction does not work in bugzilla.


Andrei


int f(int delegate(int) g) {
    return g(13);
}
void main() {
    f((auto x) { return x+13; });
}

This does not compile in D v2.034. Am I missing something?


Dropping the "auto" should yield a compilable program. Please report 
that to bugzilla (http://d.puremagic.com/issues/enter_bug.cgi) or let me 
know and I'll do so.


Thanks!

Andrei


I'm afraid I do not understand, simply omitting the auto does not 
compile either. Which one is the bug?


I'm putting this on the bugzilla now.


Re: Array, AA Implementations

2009-10-22 Thread Pelle Månsson

Don wrote:

Andrei Alexandrescu wrote:

Bill Baxter wrote:

On Wed, Oct 21, 2009 at 6:35 PM, Andrei Alexandrescu
 wrote:


3. Remove some element from the container and give it to me

E removeAny();

4. Add an element to the container is possible

bool add(E);


I think any container must support these primitives in O(1), and I 
find it

difficult to think of a significant category of containers that can't
support them (but then there may well be, so please join me in 
thinking

of that). A lot of stuff can be done with only these few methods.


I think balanced trees generally take O(lg N) to add and remove 
elements.


Good point, thanks. Logarithmic or better.

Andrei

Can it be amortized O(lg N) ?


Amortized logarithmic is equivalent to logarithmic.


Re: Targeting C

2009-10-22 Thread Pelle Månsson

bearophile wrote:

Tim Matthews:


OOC. I quite like this one myself, personally. http://ooc-lang.org/about


Type of arguments can be stated once:

Vector3f: class {
  x, y, z : Float
  init: func(x, y, z : Float) {
this x = x // 'this' is called 'self' in some other languages
this y = y
this z = z
  }
}


It doesn't need the is() when you test for type equality:

print: func <T> (arg: T) {
  if(T == Int) {
printf("%d\n", arg as Int) // 'as' allow casting
  } else if(T == String) {
printf("%s\n", arg as String)
  }
}

Uses a syntax better than the D foreach:

list := ArrayList new()
for(i in 0..10) list add(i) // oh yeah no needs for brackets
for(i in list) printf("%d\n", i)

And I have omitted some other handy features.
There's something to learn for D too :-)

Bye,
bearophile


Personally, I like this:

foreach (i; 0..10) list ~= i;

more. :)


Re: Semicolons: mostly unnecessary?

2009-10-22 Thread Pelle Månsson

KennyTM~ wrote:

On Oct 22, 09 13:57, AJ wrote:

"KennyTM~"  wrote in message
news:hbopns$125...@digitalmars.com...

On Oct 22, 09 12:29, AJ wrote:

"Adam D. Ruppe"   wrote in message
news:mailman.228.1256181155.20261.digitalmar...@puremagic.com...

On Wed, Oct 21, 2009 at 09:25:34PM -0500, AJ wrote:
That's not D source code. Why do you keep trying to use English 
text as

an
example?


The logic behind all the arguments you make,


That would be "all fine and dandy", but I'm not arguing about anything.
(So
you must be arguing? About what?).


except for one, should apply
equally well to English as it does to D.


That's silly. There is no need to use the text of Shakespeare's 
tragedies

to
allude to source code. There is no need and it is entirely 
inappropriate

to
expand the context of the issue to other realms. The context is
(currently):
semicolons as statement terminators for single-statement lines.



Cons:

1. Makes source code less comprehensible.

Based on what? Because you say so?


It's more to digest when it's not necessary. It's easier to identify
something when it's in less intricate (read, plain) surroundings.





2. Is redundant with the newline designator.





is obviously false,


If I put it on the list, it wasn't obvious at all, even if it is
incorrect
(though I think it is correct).


unless
you specifically require a line continuation character:

a = b +
c


Without semicolon requirement:

a=b+ // this is an error
c // this is OK

With semicolon requirement:

a=b+; // this is an error
c; // this is OK

What's the diff?


A newline and a semicolon are not redundant unless you specifically
define
a statement as being one and only one line.


A semicolon is redundant with newline for single-statement lines. 
Oh, you
say that a lot of constructs are inherently single statements but 
written

on
multiple lines? Well, that may be just the kind of examination I was
looking
for (ironic that _I_ had to bring it up, huh):

if(true)
  dothis()

That situation has to be evaluated: is parsing at the construct 
level too

much effort or is it desirable? (ParseIfStatement()). Statement-level
parsing better/worse than line-level parsing?


Back to the magic of above though. What if you rewrote it:
a = b
 +c


Without semicolon requirement:

a=b // OK
   +c // error

With semicolon requirement:

a=b; // OK
   +c; // error

What's the diff?


a=b
   +c(d)  // no error


Why not?


Good question. Because the compiler accepts a=b;+c(d);.

Whether c is declared as a variable or a function, it still looks

wrong to me. A statement can't begin with a +.


OK.

struct S { int a }
int a

void main () {
  S s
  auto t = s
  .a = 1   // ambiguity: Note that .sth means global scope.
}

That clearly means the global int a is set to one, and the local t has 
type S.


Re: Semicolons: mostly unnecessary?

2009-10-22 Thread Pelle Månsson

KennyTM~ wrote:

On Oct 22, 09 19:03, Pelle Månsson wrote:

KennyTM~ wrote:

On Oct 22, 09 13:57, AJ wrote:

"KennyTM~" wrote in message
news:hbopns$125...@digitalmars.com...

On Oct 22, 09 12:29, AJ wrote:

"Adam D. Ruppe" wrote in message
news:mailman.228.1256181155.20261.digitalmar...@puremagic.com...

On Wed, Oct 21, 2009 at 09:25:34PM -0500, AJ wrote:

That's not D source code. Why do you keep trying to use English
text as
an
example?


The logic behind all the arguments you make,


That would be "all fine and dandy", but I'm not arguing about
anything.
(So
you must be arguing? About what?).


except for one, should apply
equally well to English as it does to D.


That's silly. There is no need to use the text of Shakespeare's
tragedies
to
allude to source code. There is no need and it is entirely
inappropriate
to
expand the context of the issue to other realms. The context is
(currently):
semicolons as statement terminators for single-statement lines.



Cons:

1. Makes source code less comprehensible.

Based on what? Because you say so?


It's more to digest when it's not necessary. It's easier to identify
something when it's in less intricate (read, plain) surroundings.





2. Is redundant with the newline designator.





is obviously false,


If I put it on the list, it wasn't obvious at all, even if it is
incorrect
(though I think it is correct).


unless
you specifically require a line continuation character:

a = b +
c


Without semicolon requirement:

a=b+ // this is an error
c // this is OK

With semicolon requirement:

a=b+; // this is an error
c; // this is OK

What's the diff?


A newline and a semicolon are not redundant unless you specifically
define
a statement as being one and only one line.


A semicolon is redundant with newline for single-statement lines.
Oh, you
say that a lot of constructs are inherently single statements but
written
on
multiple lines? Well, that may be just the kind of examination I was
looking
for (ironic that _I_ had to bring it up, huh):

if(true)
dothis()

That situation has to be evaluated: is parsing at the construct
level too
much effort or is it desirable? (ParseIfStatement()). 
Statement-level

parsing better/worse than line-level parsing?


Back to the magic of above though. What if you rewrote it:
a = b
+c


Without semicolon requirement:

a=b // OK
+c // error

With semicolon requirement:

a=b; // OK
+c; // error

What's the diff?


a=b
+c(d) // no error


Why not?


Good question. Because the compiler accepts a=b;+c(d);.

Whether c is declared as a variable or a function, it still looks

wrong to me. A statement can't begin with a +.


OK.

struct S { int a }
int a

void main () {
S s
auto t = s
.a = 1 // ambiguity: Note that .sth means global scope.
}


That clearly means the global int a is set to one, and the local t has
type S.


No, s(lots of whitespace).a is a valid expression. You shouldn't insert 
a statement break there.

Try an editor that shows you where the line breaks are.


Re: Semicolons: mostly unnecessary?

2009-10-22 Thread Pelle Månsson

KennyTM~ wrote:

On Oct 22, 09 21:17, Pelle Månsson wrote:

KennyTM~ wrote:

On Oct 22, 09 19:03, Pelle Månsson wrote:

KennyTM~ wrote:

On Oct 22, 09 13:57, AJ wrote:

"KennyTM~" wrote in message
news:hbopns$125...@digitalmars.com...

On Oct 22, 09 12:29, AJ wrote:

"Adam D. Ruppe" wrote in message
news:mailman.228.1256181155.20261.digitalmar...@puremagic.com...

On Wed, Oct 21, 2009 at 09:25:34PM -0500, AJ wrote:

That's not D source code. Why do you keep trying to use English
text as
an
example?


The logic behind all the arguments you make,


That would be "all fine and dandy", but I'm not arguing about
anything.
(So
you must be arguing? About what?).


except for one, should apply
equally well to English as it does to D.


That's silly. There is no need to use the text of Shakespeare's
tragedies
to
allude to source code. There is no need and it is entirely
inappropriate
to
expand the context of the issue to other realms. The context is
(currently):
semicolons as statement terminators for single-statement lines.



Cons:

1. Makes source code less comprehensible.

Based on what? Because you say so?


It's more to digest when it's not necessary. It's easier to 
identify

something when it's in less intricate (read, plain) surroundings.





2. Is redundant with the newline designator.





is obviously false,


If I put it on the list, it wasn't obvious at all, even if it is
incorrect
(though I think it is correct).


unless
you specifically require a line continuation character:

a = b +
c


Without semicolon requirement:

a=b+ // this is an error
c // this is OK

With semicolon requirement:

a=b+; // this is an error
c; // this is OK

What's the diff?

A newline and a semicolon are not redundant unless you 
specifically

define
a statement as being one and only one line.


A semicolon is redundant with newline for single-statement lines.
Oh, you
say that a lot of constructs are inherently single statements but
written
on
multiple lines? Well, that may be just the kind of examination I 
was

looking
for (ironic that _I_ had to bring it up, huh):

if(true)
dothis()

That situation has to be evaluated: is parsing at the construct
level too
much effort or is it desirable? (ParseIfStatement()).
Statement-level
parsing better/worse than line-level parsing?


Back to the magic of above though. What if you rewrote it:
a = b
+c


Without semicolon requirement:

a=b // OK
+c // error

With semicolon requirement:

a=b; // OK
+c; // error

What's the diff?


a=b
+c(d) // no error


Why not?


Good question. Because the compiler accepts a=b;+c(d);.

Whether c is declared as a variable or a function, it still looks

wrong to me. A statement can't begin with a +.


OK.

struct S { int a }
int a

void main () {
S s
auto t = s
.a = 1 // ambiguity: Note that .sth means global scope.
}


That clearly means the global int a is set to one, and the local t has
type S.


No, s(lots of whitespace).a is a valid expression. You shouldn't
insert a statement break there.

Try an editor that shows you where the line breaks are.


By whitespace I mean spaces (U+0020), tabs (U+0009), newlines (U+000A) 
and other space characters (U+000B, U+000C, U+000D).
You can obviously not treat newlines as plain whitespace and also remove semicolons. 
That would, as you have demonstrated, not work. I think we all knew that.


Re: Semicolons: mostly unnecessary?

2009-10-22 Thread Pelle Månsson

KennyTM~ wrote:

On Oct 22, 09 21:12, Ary Borenszweig wrote:

KennyTM~ wrote:

On Oct 22, 09 19:03, Pelle Månsson wrote:

KennyTM~ wrote:

On Oct 22, 09 13:57, AJ wrote:

"KennyTM~" wrote in message
news:hbopns$125...@digitalmars.com...

On Oct 22, 09 12:29, AJ wrote:

"Adam D. Ruppe" wrote in message
news:mailman.228.1256181155.20261.digitalmar...@puremagic.com...

On Wed, Oct 21, 2009 at 09:25:34PM -0500, AJ wrote:

That's not D source code. Why do you keep trying to use English
text as
an
example?


The logic behind all the arguments you make,


That would be "all fine and dandy", but I'm not arguing about
anything.
(So
you must be arguing? About what?).


except for one, should apply
equally well to English as it does to D.


That's silly. There is no need to use the text of Shakespeare's
tragedies
to
allude to source code. There is no need and it is entirely
inappropriate
to
expand the context of the issue to other realms. The context is
(currently):
semicolons as statement terminators for single-statement lines.



Cons:

1. Makes source code less comprehensible.

Based on what? Because you say so?


It's more to digest when it's not necessary. It's easier to 
identify

something when it's in less intricate (read, plain) surroundings.





2. Is redundant with the newline designator.





is obviously false,


If I put it on the list, it wasn't obvious at all, even if it is
incorrect
(though I think it is correct).


unless
you specifically require a line continuation character:

a = b +
c


Without semicolon requirement:

a=b+ // this is an error
c // this is OK

With semicolon requirement:

a=b+; // this is an error
c; // this is OK

What's the diff?

A newline and a semicolon are not redundant unless you 
specifically

define
a statement as being one and only one line.


A semicolon is redundant with newline for single-statement lines.
Oh, you
say that a lot of constructs are inherently single statements but
written
on
multiple lines? Well, that may be just the kind of examination I 
was

looking
for (ironic that _I_ had to bring it up, huh):

if(true)
dothis()

That situation has to be evaluated: is parsing at the construct
level too
much effort or is it desirable? (ParseIfStatement()).
Statement-level
parsing better/worse than line-level parsing?


Back to the magic of above though. What if you rewrote it:
a = b
+c


Without semicolon requirement:

a=b // OK
+c // error

With semicolon requirement:

a=b; // OK
+c; // error

What's the diff?


a=b
+c(d) // no error


Why not?


Good question. Because the compiler accepts a=b;+c(d);.

Whether c is declared as a variable or a function, it still looks

wrong to me. A statement can't begin with a +.


OK.

struct S { int a }
int a

void main () {
S s
auto t = s
.a = 1 // ambiguity: Note that .sth means global scope.
}


That clearly means the global int a is set to one, and the local t has
type S.


No, s(lots of whitespace).a is a valid expression. You shouldn't
insert a statement break there.


But without semicolons the line break becomes the new semicolon. That's
what most people here don't understand. There's no ambiguity: if you
have a line break and a semicolon would have been good in that place,
then that line break becomes the semicolon.


So

auto t = s.
a = 1

would now become a syntax error?

Yes. Do you use this particular style of coding often?


Re: Semicolons: mostly unnecessary?

2009-10-22 Thread Pelle Månsson

KennyTM~ wrote:

On Oct 22, 09 21:36, Pelle Månsson wrote:

KennyTM~ wrote:

On Oct 22, 09 21:12, Ary Borenszweig wrote:

KennyTM~ wrote:

On Oct 22, 09 19:03, Pelle Månsson wrote:

KennyTM~ wrote:

On Oct 22, 09 13:57, AJ wrote:

"KennyTM~" wrote in message
news:hbopns$125...@digitalmars.com...

On Oct 22, 09 12:29, AJ wrote:

"Adam D. Ruppe" wrote in message
news:mailman.228.1256181155.20261.digitalmar...@puremagic.com...

On Wed, Oct 21, 2009 at 09:25:34PM -0500, AJ wrote:

That's not D source code. Why do you keep trying to use English
text as
an
example?


The logic behind all the arguments you make,


That would be "all fine and dandy", but I'm not arguing about
anything.
(So
you must be arguing? About what?).


except for one, should apply
equally well to English as it does to D.


That's silly. There is no need to use the text of Shakespeare's
tragedies
to
allude to source code. There is no need and it is entirely
inappropriate
to
expand the context of the issue to other realms. The context is
(currently):
semicolons as statement terminators for single-statement lines.



Cons:

1. Makes source code less comprehensible.

Based on what? Because you say so?


It's more to digest when it's not necessary. It's easier to
identify
something when it's in less intricate (read, plain) surroundings.





2. Is redundant with the newline designator.





is obviously false,


If I put it on the list, it wasn't obvious at all, even if it is
incorrect
(though I think it is correct).


unless
you specifically require a line continuation character:

a = b +
c


Without semicolon requirement:

a=b+ // this is an error
c // this is OK

With semicolon requirement:

a=b+; // this is an error
c; // this is OK

What's the diff?


A newline and a semicolon are not redundant unless you
specifically
define
a statement as being one and only one line.


A semicolon is redundant with newline for single-statement lines. Oh, you
say that a lot of constructs are inherently single statements but written
on multiple lines? Well, that may be just the kind of examination I was
looking for (ironic that _I_ had to bring it up, huh):

if(true)
dothis()

That situation has to be evaluated: is parsing at the construct level too
much effort or is it desirable? (ParseIfStatement()). Statement-level
parsing better/worse than line-level parsing?


Back to the magic of above though. What if you rewrote it:
a = b
+c


Without semicolon requirement:

a=b // OK
+c // error

With semicolon requirement:

a=b; // OK
+c; // error

What's the diff?


a=b
+c(d) // no error


Why not?


Good question. Because the compiler accepts a=b;+c(d);.

Whether c is declared as a variable or a function, it still looks

wrong to me. A statement can't begin with a +.


OK.

struct S { int a }
int a

void main () {
S s
auto t = s
.a = 1 // ambiguity: Note that .sth means global scope.
}

That clearly means the global int a is set to one, and the local t has
type S.


No, s(lots of whitespace).a is a valid expression. You shouldn't
insert a statement break there.


But without semicolons the line break becomes the new semicolon. That's
what most people here don't understand. There's no ambiguity: if you
have a line break and a semicolon would have been good in that place,
then that line break becomes the semicolon.


So

auto t = s.
a = 1

would now become a syntax error?

Yes. Do you use this particular style of coding often?


auto s = "No, but you've just broken " ~
 "some other perfectly working " ~
 "D codes."
writeln(s)

Maybe true, but not ambiguous.


Re: Targeting C

2009-10-23 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Yigal Chripun wrote:

On 23/10/2009 13:02, bearophile wrote:

Chris Nicholson-Sauls:


I prefer this (Scala):
list = list ++ (0 to 10)


That's quite less readable. Scala sometimes has some unreadable 
syntax. Python has taught me how much useful a readable syntax is :-)
Designing languages requires to find a balance between several 
different and opposed needs.


Bye,
bearophile


how about this hypothetical syntax:

list ~= [0..10];


I'm not sure what the type of "list" is supposed to be, but this works 
today for arrays:


list ~= array(iota(0, 10));


Andrei

What does iota mean?


Re: Targeting C

2009-10-23 Thread Pelle Månsson

bearophile wrote:

Yigal Chripun:


Hell no. This is why I hate certain programming languages.
if you are trying to obfuscate the language than why not just define:
rtqfrdsg and fdkjtkf as the function names?


Don't be silly. In my dlibs "xsomething" are the lazy functions, and 
"something" are the strict ones. That's not obfuscated, you need seconds to learn a 
single easy rule.

Bye,
bearophile
I think the complaint was not as much about the x as the iota. 
Seriously, iota?


However, I like the array(range(0,10)) where range is always lazy, and 
array forces eagerness, better than separate xrange and range functions.


Re: Targeting C

2009-10-23 Thread Pelle Månsson

Leandro Lucarella wrote:

Andrei Alexandrescu, el 23 de octubre a las 11:09 me escribiste:

Bill Baxter wrote:

On Fri, Oct 23, 2009 at 5:13 AM, Andrei Alexandrescu
 wrote:

Yigal Chripun wrote:

On 23/10/2009 13:02, bearophile wrote:

Chris Nicholson-Sauls:


I prefer this (Scala):
list = list ++ (0 to 10)

That's quite less readable. Scala sometimes has some unreadable syntax.
Python has taught me how much useful a readable syntax is :-)
Designing languages requires to find a balance between several different
and opposed needs.

Bye,
bearophile

how about this hypothetical syntax:

list ~= [0..10];

I'm not sure what the type of "list" is supposed to be, but this works today
for arrays:

list ~= array(iota(0, 10));

While we're not on the subject
"Iota" is right up there with "inSitu".
I know it has a precedent elsewhere, but it sounds about as user
friendly as monads.  It just sounds like the language it trying to be
snooty.  Like "if you don't even know what iota is, you're clearly not
qualified to join our little D club. Maybe you should try Java... or
Logo".   Compare that to Python where it's called "range", something
every Joe the Programmer can certainly grok without having to get a
Greek to English dictionary.

Given that "range" is already taken, what name do you think would work best?

(I sometimes deliberately prefer less-used names because the more
used ones often come with baggage and ambiguities (as is the case
with "range"). Case in point, "in-situ" is more informative than
"in-place" because the former suggests emplacement of a substructure
within a larger structure. So to me an "in-situ" class member inside
a class has a clear meaning that the member sits right there within
the class. But anyhow I will use in-place from now on.)


I don't see "range" taken inside the range module. I think it even makes
sense, iota() is the more primitive range ever, so why don't just call it
range()? :)


This was my thought as well.

I don't know if it fares well in the ambiguity department, though.


Re: Semicolons: mostly unnecessary?

2009-10-23 Thread Pelle Månsson

Walter Bright wrote:

Max Samukha wrote:

On Thu, 22 Oct 2009 23:21:25 +0200, bambo  wrote:


Walter Bright schrieb:

Adam D. Ruppe wrote:

mostcertainly
doesNOTmeanalanguageisnecessarilyeasiertoparseSymbolsgiveus
aparsinganchorperiodsinasentencearentstrictlynecessarywecould
putoneperlineorjustfigureoutwheretheybelongbyparsingthecontext
Butthatsfairlyobviouslymuchharderthanusingperiodstofollowwhere
youareSemicolonsarethesamething

(Fixed that for you!)

Walter, what a remarkable proove the semicolon helps us all a lot!
You are so BRIGHT! You are so creative and intelligent!

I LOVE YOU!


This is one of Walter's proofs that don't prove anything. Spaces
between words are *not redundant*.


Armed with a dictionary, there's really only one parse of the above text 
that works. Consider also that when text is encrypted using pre-computer 
methods, the first thing done is all spaces are removed and it is put in 
monocase (because that makes it harder for cryptanalysis). Human 
decryptors put them back in.


Consider the fragment:

Ift hepo inti sn tplainobviousfromthe abovefewersymbols


SYNTAX ERROR!


Re: Thread-local storage and Performance

2009-10-26 Thread Pelle Månsson

dsimcha wrote:

Has D's builtin TLS been optimized in the past 6 months to year?  I had
benchmarked it awhile back when optimizing some code that I wrote and
discovered it was significantly slower than regular globals (the kind that are
now __gshared).  Now, at least on Windows, it seems that there is no
discernible difference and if anything, TLS is slightly faster than __gshared.
 What's changed?


I was under the impression that TLS should be faster due to absence of 
synchronization.
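
For reference, a minimal sketch of the two storage classes that benchmark compares; D2 semantics, variable names made up:

int counterTls;               // module-level variables are thread-local by default in D2
__gshared int counterShared;  // __gshared opts out: one copy shared by the whole process

void main()
{
    ++counterTls;             // compiles down to a TLS access, no locking needed
    ++counterShared;          // plain global access; safe sharing is up to you
}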


Re: TDPL reaches Thermopylae level

2009-10-27 Thread Pelle Månsson

Bill Baxter wrote:

On Tue, Oct 27, 2009 at 6:56 AM, Michel Fortin
 wrote:

On 2009-10-27 09:07:06 -0400, Andrei Alexandrescu
 said:


My current thought is to ascribe lhs ~ rhs the same type as lhs (thereby
making ~ consistent with ~= by making lhs ~= rhs same as lhs = lhs ~ rhs) in
case lhs is a string type. If lhs is a character type, the result type is
obviously the same as rhs.

Seems the most intuitive option to me. Also, it makes "a ~= b" equivalent to
"a = a ~ b" which is always nice.


And that kind of suggests to me that even  a = b  should work.
It has many of the same characteristics as ~=.  It's pretty
unambiguous what you'd expect to happen if not an error.


--bb

int a;
float b = 2.1;
a = b;
also unambiguous?


Re: TDPL reaches Thermopylae level

2009-10-27 Thread Pelle Månsson

Bill Baxter wrote:

On Tue, Oct 27, 2009 at 12:48 PM, Pelle Månsson  wrote:

Bill Baxter wrote:

On Tue, Oct 27, 2009 at 6:56 AM, Michel Fortin
 wrote:

On 2009-10-27 09:07:06 -0400, Andrei Alexandrescu
 said:


My current thought is to ascribe lhs ~ rhs the same type as lhs (thereby
making ~ consistent with ~= by making lhs ~= rhs same as lhs = lhs ~
rhs) in
case lhs is a string type. If lhs is a character type, the result type
is
obviously the same as rhs.

Seems the most intuitive option to me. Also, it makes "a ~= b" equivalent
to
"a = a ~ b" which is always nice.

And that kind of suggests to me that even  a = b  should work.
It has many of the same characteristics as ~=.  It's pretty
unambiguous what you'd expect to happen if not an error.


--bb

int a;
float b = 2.1;
a = b;
also unambiguous?


I'm not sure what point you're trying to make, but wstring <-> string
<-> dstring are all lossless conversions.  That isn't the case with
int and float.

--bb

They are?

...Then what is the point of wstring, dstring?


Re: TDPL reaches Thermopylae level

2009-10-27 Thread Pelle Månsson

Bill Baxter wrote:

On Tue, Oct 27, 2009 at 1:06 PM, Pelle Månsson  wrote:

Bill Baxter wrote:

On Tue, Oct 27, 2009 at 12:48 PM, Pelle Månsson 
wrote:

Bill Baxter wrote:

On Tue, Oct 27, 2009 at 6:56 AM, Michel Fortin
 wrote:

On 2009-10-27 09:07:06 -0400, Andrei Alexandrescu
 said:


My current thought is to ascribe lhs ~ rhs the same type as lhs
(thereby
making ~ consistent with ~= by making lhs ~= rhs same as lhs = lhs ~
rhs) in
case lhs is a string type. If lhs is a character type, the result type
is
obviously the same as rhs.

Seems the most intuitive option to me. Also, it makes "a ~= b"
equivalent
to
"a = a ~ b" which is always nice.

And that kind of suggests to me that even  a = b  should work.
It has many of the same characteristics as ~=.  It's pretty
unambiguous what you'd expect to happen if not an error.


--bb

int a;
float b = 2.1;
a = b;
also unambiguous?

I'm not sure what point you're trying to make, but wstring <-> string
<-> dstring are all lossless conversions.  That isn't the case with
int and float.

--bb

They are?

...Then what is the point of wstring, dstring?


They are all just different representations of Unicode.

string, which is unicode in UTF-8, is good because it's the least
wasteful for mostly ASCII text.  And has a nice ASCII backwards
compatibility story.

dstring, which is unicode in UTF-32, is good because you have one
element = one character.  So it's good for doing substring and other
text manipulations.

wstring, which is UTF-16, is good because it lets you call Windows
Unicode functions.

Here's Daniel Keep's nice explanation:
http://docs.google.com/View?docid=dtqh79k_1rbxfmb

--bb

Thank you, that cleared things up for me :)
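
A small sketch of the lossless round trip described above, using std.conv; the strings in main are made up:

import std.conv : to;

void main()
{
    string  s = "Hallå, värld!";   // UTF-8
    wstring w = to!wstring(s);     // UTF-16
    dstring d = to!dstring(w);     // UTF-32
    assert(to!string(d) == s);     // nothing is lost in either direction
    assert(d.length <= s.length);  // code points vs. UTF-8 code units
}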


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Andrei Alexandrescu wrote:
Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not considered 
very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be a "iterate keys" and "iterate values" properties 
or functions. How should they be called?



Thanks,

Andrei

aa.each, aa.keys and aa.values seem good names?

Also, foreach with a single variable should default to keys, in my opinion.
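
For reference, the iteration forms the thread is weighing, as plain D2 code today; the aa contents are made up:

void main()
{
    int[string] aa = ["a" : 1, "b" : 2];
    foreach (key, value; aa) { }  // key and value together
    foreach (value; aa) { }       // single variable: currently iterates values
    foreach (key; aa.keys) { }    // keys, via the eager .keys array property
}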


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:
Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those 
cases iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of 
making opSlice() a universal method of getting the "all" iterator, a 
default that every container must implement.


For AAs, there would be a "iterate keys" and "iterate values" 
properties or functions. How should they be called?



Thanks,

Andrei

aa.each, aa.keys and aa.values seem good names?


The latter two would break existing definitions of keys and values.

Is this bad? If you want an array from them you could just construct it 
from the iterator.


Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different things for 
e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or the 
keys and values together, rarely just the values. Maybe that's just me.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Denis Koroskin wrote:
On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu 
 wrote:


Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact on 
compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be a "iterate keys" and "iterate values" 
properties or functions. How should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and it's 
a default iteration scheme), why should AA provide 2 other iteration 
schemes? Can't they be implemented externally (using adaptor ranges) 
with the same efficiency?


foreach (e; keys(aa)) {
writefln("key: %s", e);
}

foreach (e; values(aa)) {
writefln("value: %s", e);
}


Why would you prefer keys(aa) over aa.keys?

Last, I believe foreach loop should automatically call opSlice() on 
iteratee. There is currently an inconsistency with built-in types - you 
don't have to call [] on them, yet you must call it on all the other types:


Try implementing the range interface (front, popFront and empty), and 
they are ranges. Magic! opApply is worth mentioning here, as well.
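
A minimal sketch of that range interface, just to make the magic concrete; Counter is an invented type:

import std.stdio : writeln;

struct Counter
{
    int current, limit;
    @property bool empty() const { return current >= limit; }
    @property int front() const { return current; }
    void popFront() { ++current; }
}

void main()
{
    foreach (i; Counter(0, 3))  // front/popFront/empty is all foreach needs
        writeln(i);             // prints 0, 1, 2
}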


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Robert Jacques wrote:
On Wed, 28 Oct 2009 15:06:34 -0400, Denis Koroskin <2kor...@gmail.com> 
wrote:


On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu 
 wrote:


Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact 
on compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those 
cases iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of 
makign opSlice() a universal method of getting the "all" iterator, a 
default that every container must implement.


For AAs, there would be a "iterate keys" and "iterate values" 
properties or functions. How should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and 
it's a default iteration scheme), why should AA provide 2 other 
iteration schemes? Can't they be implemented externally (using adaptor 
ranges) with the same efficiency?


foreach (e; keys(aa)) {
 writefln("key: %s", e);
}

foreach (e; values(aa)) {
 writefln("value: %s", e);
}

I'd also like you to add a few things in an AA interface.

First, opIn should not return a pointer to Value, but a pointer to a 
pair of Key and Value, if possible (i.e. if this change won't 
sacrifice performance).
Second, AA.remove method should accept result of opIn operation to 
avoid an additional lookup for removal:


if (auto value = key in aa) {
 aa.remove(key); // an unnecessary lookup
}

Something like this would be perfect:

struct Element(K,V)
{
 const K key;
 V value;
}

struct AA(K,V)
{
 //...
 ref Element opIn(K key) { /* throws an exception if element is 
not found */ }


Not finding an element is a common use case, not an exception. Using 
exceptions to pass information is bad style, slow and prevents the use 
of AAs in pure/nothrow functions. Returning a pointer to an element 
would allow both key and value to be accessed and could be null if no 
element is found.


Also, if opIn throws an exception, it kind of defeats the point of opIn, 
and turns it to opIndex.
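
For comparison, the pointer-based lookup that the built-in `in` already provides, which is the style being argued for here; the toy aa is made up:

void main()
{
    int[string] aa = ["one" : 1];
    if (auto p = "one" in aa)       // p is int*, null when the key is absent
        *p += 10;                   // both read and write through the pointer
    assert(aa["one"] == 11);
    assert(("two" in aa) is null);  // a miss is an ordinary value, not an exception
}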


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:
Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different things 
for e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or the 
keys and values together, rarely just the values. Maybe that's just me.



I've used iteration over values more often than iteration over keys.

Besides, I think consistency is important. Since the default for an 
ordinary array is to iterate over the values, it should be the same for 
associative arrays.


-Lars
I don't understand this; when do you want the values without the keys? 
If you do, shouldn't you be using a regular array?


Actually, it doesn't matter all that much, as long as we get .keys and 
.values as alternatives.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:
Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different 
things for e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or 
the keys and values together, rarely just the values. Maybe that's 
just me.



I've used iteration over values more often than iteration over keys.

Besides, I think consistency is important. Since the default for an 
ordinary array is to iterate over the values, it should be the same 
for associative arrays.


-Lars
I don't understand this; when do you want the values without the keys? 
If you do, shouldn't you be using a regular array?


Here's an example:

   class SomeObject { ... }
   void doStuffWith(SomeObject s) { ... }
   void doOtherStuffWith(SomeObject s) { ... }

   // Make a collection of objects indexed by ID strings.
   SomeObject[string] myObjects;
   ...

   // First I just want to do something with one of the
   // objects, namely the one called "foo".
   doStuffWith(myObjects["foo"]);

   // Then, I want to do something with all the objects.
   foreach (obj; myObjects)  doOtherStuffWith(obj);

Of course, if iteration was over keys instead of values, I'd just write

   foreach (id, obj; myObjects)  doOtherStuffWith(obj);

But then again, right now, when iteration is over values and I want the 
keys I can just write the same thing. It all comes down to preference, 
and I prefer things the way they are now. :)



Actually, it doesn't matter all that much, as long as we get .keys and 
.values as alternatives.


I still think the default for foreach should be consistent with normal 
arrays.


-Lars

I think foreach should be consistent with opIn, that is,
if (foo in aa) { //it is in the aa.
  foreach (f; aa) { // loop over each item in the aa
//I expect foo to show up in here, since it is "in" the aa.
  }
}

I use key iteration more than I use value iteration, and it is what I am 
used to. It is, as you say, a matter of preference.


Re: The Thermopylae excerpt of TDPL available online

2009-10-29 Thread Pelle Månsson

Andrei Alexandrescu wrote:
It's a rough rough draft, but one for the full chapter on arrays, 
associative arrays, and strings.


http://erdani.com/d/thermopylae.pdf

Any feedback is welcome. Thanks!


Andrei


Your "Hallå Värd!" should be "Hallå Värld!", to be Swedish. D:

Also, I am wondering, why is the undefined behavior of opCatAssign kept? 
Couldn't every T[] know if it is the owner of the memory in question? 
Sorry if I bring outdated discussions up unnecessarily.


Re: associative arrays: iteration is finally here

2009-10-29 Thread Pelle Månsson

bearophile wrote:

Andrei Alexandrescu:

I'll make aa.remove(key) always work and return a bool that tells you 
whether there was a mapping or not.


I think that's a small design mistake. In a high level language you want things 
to not fail silently. You want them to fail in an explicit way because 
programmers often forget to read and use return values.

So AAs may have two methods, "remove" and "drop" (Python sets use "remove" and "discard" for this). 
The "remove" can be the safer one and used by default in D programs (especially in SafeD modules, safety is in things like this 
too), that raises an exception when you try to remove a missing key. "drop/discard" is faster and silent, it removes the key if 
it's present, as you want.

Bye,
bearophile


I agree with this. I usually want exceptions.
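
A rough sketch of the remove/discard split being proposed; removeChecked and drop are invented names, not existing druntime or Phobos API:

void removeChecked(K, V)(ref V[K] aa, K key)
{
    if (!(key in aa))
        throw new Exception("key not present");  // loud failure, the safer default
    aa.remove(key);
}

bool drop(K, V)(ref V[K] aa, K key)
{
    if (!(key in aa))
        return false;  // silent variant: just report whether anything was removed
    aa.remove(key);
    return true;
}

void main()
{
    int[string] aa = ["a" : 1];
    assert(!drop(aa, "b"));  // missing key, no exception
    removeChecked(aa, "a");  // would throw if "a" were absent
}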


Re: The Thermopylae excerpt of TDPL available online

2009-10-29 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:
It's a rough rough draft, but one for the full chapter on arrays, 
associative arrays, and strings.


http://erdani.com/d/thermopylae.pdf

Any feedback is welcome. Thanks!


Andrei


Your "Hallå Värd!" should be "Hallå Värld!", to be Swedish. D:


Thanks! Ouch, two mistakes in one word. Also, should I put a comma in 
between the words?


Actually, I see now that you had it as "Hallå, värd!", and it should be 
"Hallå, värld!", no need to capitalize the second word.


Unless what you want to say is "Hello, host!", in which case your värd 
is correct.


Also, I am wondering, why is the undefined behavior of opCatAssign 
kept? Couldn't every T[] know if it is the owner of the memory in 
question? Sorry if I bring outdated discussions up unnecessarily.


We don't know how to do that cheaply.


Andrei


How about doing it expensively? Maybe storing a boolean in each T[]? I 
think undefined behavior is bad.


Re: associative arrays: iteration is finally here

2009-10-30 Thread Pelle Månsson

Nick Sabalausky wrote:
"Pelle Månsson"  wrote in message 
news:hcaaro$15e...@digitalmars.com...

I think foreach should be consistent with opIn, that is,
if (foo in aa) { //it is in the aa.
  foreach (f; aa) { // loop over each item in the aa
//I expect foo to show up in here, since it is "in" the aa.
  }
}

I use key iteration more than I use value iteration, and it is what I am 
used to. It is, as you say, a matter of preference.


I've thought for a long while that "in" should be value-based (so you can do 
things like "if(foo in [1,2,7,9])" instead of the not-as-nice 
"if([1,2,7,9].contains(foo))"), and that there should be some other way to 
check for the existence of a key (like "aa.hasKey(key)" or "key in aa.keys", 
or something like that). I need to check for values in an array much more 
often than I need to check for keys in an aa. 


I, too, want opIn to work on arrays. On values. As a linear search. I do 
not see why you would want to remove it on AA keys, though.


Re: opPow, opDollar

2009-11-07 Thread Pelle Månsson

dsimcha wrote:

== Quote from Robert Jacques (sandf...@jhu.edu)'s article

On Sat, 07 Nov 2009 10:48:11 -0500, KennyTM~  wrote:

On Nov 7, 09 18:43, Don wrote:

Walter Bright wrote:

Don wrote:

A little while ago I said I'd create a patch for ^^ as an
exponentiation. A couple of people had requested that I make a post
to the ng so they'd know when it happens. Here it is.

This is opPow(), x ^^ y

http://d.puremagic.com/issues/show_bug.cgi?id=3481

I don't understand the rationale for an exponentiation operator. It
isn't optimization, because pow() could become an intrinsic that the
compiler knows about. pow() is well known, ^^ isn't. (Fortran uses **)

It's primarily about syntax sugar: pow() is so ugly. In practice, the
most important case is squaring, which is an extremely common operation.
pow(xxx,2) is horribly ugly for something so fundamental. It's so ugly
that noone uses it: you always change it to xxx * xxx. But then, xxx
gets evaluated twice.


Nice. Meanwhile, I'd like an opSum() operator (∑ range) as well. It's
primarily about syntax sugar: reduce!("a+b")(range) is so ugly. In
practice, the most important case is the sum from 1 to n, which is an
extremely common operation. reduce!("a+b")(iota(1,n+1)) is horribly ugly
for something so fundamental. It's so ugly that noone uses it: you
always change it to n*(n+1)/2. But then, n gets evaluated twice.


Yes, ^^ hasn't been used for exponentiation before. Fortran used **
because it had such a limited character set, but it's not really a
natural choice; the more mathematically-oriented languages use ^.
Obviously C-family languages don't have that possibility.


Well, since D supports unicode, you can always define: alias
reduce!("a+b") ∑;


On a more serious note, I'm starting to think that Phobos needs a specific
convenience function for sum, not because reduce!"a + b" is ugly (it isn't) or 
too
much typing (it isn't) but because reduce!"a + b" doesn't work on zero-length
ranges.  Obviously, the sum of a zero-length range is zero, but reduce is too
general to know this.  This bit me a few times in some code I was debugging last
week.  Rather than inserting extra checks or passing an explicit start value in
(which requires you to remember the element type of your range; is it an int or 
a
float?), I simply handwrote a sum function and replaced all my reduce!"a + b" 
with
sum.
I am all in favor of adding convenience functions sum and product to 
phobos. I use them both often enough.


Re: typedef: what's it good for?

2009-11-11 Thread Pelle Månsson

Walter Bright wrote:
When I originally worked out ideas for D, there were many requests from 
the C and C++ community for a 'strong' typedef, and so I put one in D. I 
didn't think about it too much, just assumed that it was a good idea.


Now I'm not so sure. Maybe it should be removed for D2.

Does anyone use typedef's?

What do you use them for?

Do you need them?

There are typedefs in D?


Re: Short list with things to finish for D2

2009-11-20 Thread Pelle Månsson

Rainer Deyke wrote:

Andrei Alexandrescu wrote:

I am thinking that representing operators by their exact token
representation is a principled approach because it allows for
unambiguous mapping, testing with if and static if, and also allows
saving source code by using only one string mixin. It would take more
than just a statement that it's hackish to convince me it's hackish. I
currently don't see the hackishness of the approach, and I consider it a
vast improvement over the current state of affairs.


Isn't opBinary just a reduced-functionality version of opUnknownMethod
(or whatever that is/was going to be called)?

T opBinary(string op)(T rhs) {
static if (op == "+") return data + rhs.data;
else static if (op == "-") return data - rhs.data;
...
else static assert(0, "Operator "~op~" not implemented");
}

T opUnknownMethod(string op)(T rhs) {
static if (op == "opAdd") return data + rhs.data;
else static if (op == "opSub") return data - rhs.data;
...
else static assert(0, "Method "~op~" not implemented");
}

I'd much rather have opUnknownMethod than opBinary.  If if I have
opUnknownMethod, then opBinary becomes redundant.


Shouldn't you use opUnknownMethod for, you know, unknown methods?
Implementing binary operators through an unknown-method hook seems unclean.


Re: Short list with things to finish for D2

2009-11-20 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Bill Baxter wrote:

On Thu, Nov 19, 2009 at 8:46 AM, Andrei Alexandrescu
 wrote:

grauzone wrote:

What's with opSomethingAssign (or "expr1[expr2] @= expr3" in general)?
opBinary doesn't seem to solve any of those.
opBinary does solve opIndex* morass because it only adds one function 
per

category, not one function per operator. For example:

struct T {
   // op can be "=", "+=", "-=" etc.
   E opAssign(string op)(E rhs) { ... }
   // op can be "=", "+=", "-=" etc.
   E opIndexAssign(string op)(size_t i, E rhs) { ... }
}


Rewrite
a.prop = x;   =>a.opPropertyAssign!("prop", "=")(x);

to that and we're really getting somewhere!

--bb


I swear I was thinking of that.

Andrei

Is this doable without a performance drop?


Re: Short list with things to finish for D2

2009-11-20 Thread Pelle Månsson

Andrei Alexandrescu wrote:

dsimcha wrote:
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s 
article

dsimcha wrote:
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s 
article

dsimcha wrote:
== Quote from Andrei Alexandrescu 
(seewebsiteforem...@erdani.org)'s article

3. It was mentioned in this group that if getopt() does not work in
SafeD, then SafeD may as well pack and go home. I agree. We need 
to make

it work. Three ideas discussed with Walter:
* Allow taking addresses of locals, but in that case switch 
allocation

from stack to heap, just like with delegates. If we only do that in
SafeD, behavior will be different than with regular D. In any 
case, it's

an inefficient proposition, particularly for getopt() which actually
does not need to escape the addresses - just fills them up.
IMHO this is a terrible solution.  SafeD should not cause major 
ripple

effects for
pieces of code that don't want to use it.  I'm all for safe 
defaults even if
they're less efficient or less flexible, but if D starts 
sacrificing performance
or flexibility for safety **even when the programmer explicitly 
asks it not

to**,
then it will officially have become a bondage and discipline 
language.


Furthermore, as you point out, having the semantics of something 
vary in subtle

ways between SafeD and unsafe D is probably a recipe for confusion.


* Allow @trusted (and maybe even @safe) functions to receive 
addresses

of locals. Statically check that they never escape an address of a
parameter. I think this is very interesting because it enlarges the
common ground of D and SafeD.
This is a great idea if it can be implemented.  Isn't escape 
analysis a pretty
hard thing to get right, though, especially when you might not 
have the source

code to the function being called?

Escape analysis is difficult when you don't have information about the
functions you're passing the pointer to. For example:
void fun(int* p) {
 if (condition) gun(p);
}
Now the problem is that fun's escape-or-not behavior depends on flow
(i.e. condition) and on gun's escaping behavior.
If we use @safe and @trusted to indicate unequivocally "no escape", 
then

there is no analysis to be done - the hard part of the analysis has
already been done manually by the user.
But then the @safe or @trusted function wouldn't be able to escape 
pointers to
heap or static data segment memory either, if I understand this 
proposal

correctly.

Yah. The question is to what extent is that necessary.
Andrei


Too kludgey for me.  I'd rather just see ref parameters get fixed and 
just don't
allow taking the address of locals in @safe functions.  I'd say that, 
except in
low-level systems programming that would probably not be @safe for 
other reasons
anyhow, there would be very few good if any good reasons to take the 
address of a

local if reference tuples just worked.


Unfortunately it's more complicated than that. getopt takes pairs of 
strings and pointers. The strings don't necessarily have to be lvalues, 
so constraining getopt to only take references is not the right solution.


Andrei

How about allowing const references to rvalues?
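
For context, the getopt call shape being discussed, roughly as std.getopt looks in D2; the option names are made up:

import std.getopt;

void main(string[] args)
{
    int level;
    bool verbose;
    // Pairs of option strings and addresses of locals; getopt only fills
    // the pointed-to variables in, it has no reason to escape the pointers.
    getopt(args, "level", &level, "verbose", &verbose);
}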


Re: Short list with things to finish for D2

2009-11-20 Thread Pelle Månsson

Andrei Alexandrescu wrote:
* Encode operators by compile-time strings. For example, instead of the 
plethora of opAdd, opMul, ..., we'd have this:


T opBinary(string op)(T rhs) { ... }

The string is "+", "*", etc. We need to design what happens with 
read-modify-write operators like "+=" (should they be dispatch to a 
different function? etc.) and also what happens with index-and-modify 
operators like "[]=", "[]+=" etc. Should we go with proxies? Absorb them 
in opBinary? Define another dedicated method? etc.


Andrei


What about pure, what about const?

Will we need to

pure T opBinary(string op)(T rhs) if (op == "+" || op == "-") {
  static if (op == "+") { /* ... */ }
  else static if (op == "-") { /* ... */ }
}
T opBinary(string op)(T rhs) if (op == "+=" || ...) {
  // more static if's
}

thereby duplicating every case?


Re: removal of cruft from D

2009-11-21 Thread Pelle Månsson

Walter Bright wrote:

Yigal Chripun wrote:
in the long term, I'd like to see a more general syntax that allows to 
write numbers in any base.

something like:
[base]n[number] - e.g. 16nA0FF, 2n0101, 18nGH129, etc.
also define syntax to write a list of digits:
1024n[1005, 452, 645, 16nFFF] // each digit can also be defined in 
arbitrary base



Is there any language that does this?
"Integers can be specified in any base supported by Integer.parseInt(), 
that is any radix from 2 to 36; for example 2r101010, 8r52, 36r16, and 
42 are all the same Integer."


http://clojure.org/reader
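
A small sketch of the library-level equivalent in D, assuming a Phobos build whose std.conv.to accepts a radix argument:

import std.conv : to;

void main()
{
    assert(to!int("101010", 2) == 42);     // binary
    assert(to!int("52", 8) == 42);         // octal
    assert(to!int("16", 36) == 42);        // radix 36, like the Clojure example
    assert(to!int("A0FF", 16) == 0xA0FF);  // hexadecimal
}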


Re: removal of cruft from D

2009-11-23 Thread Pelle Månsson

dsimcha wrote:

== Quote from retard (r...@tard.com.invalid)'s article

Mon, 23 Nov 2009 17:14:54 +, dsimcha wrote:
[snip]

as opposed to the
Java way of having to use 5 different classes just to read in a file
line by line in the default character encoding.

That's a library issue. Has nothing to do with the language.


I agree completely, but for all practical purposes basic parts of the standard
library that are used by almost everyone are part of the language.  Heck, in 
many
languages (D being one of them) you can't even write a canonical hello world
program w/o the standard lib.

Sure you can!

extern (C) int puts(char *);
void main() {
puts("Hello world!\0".dup.ptr);
}

Pretty!


Re: removal of cruft from D

2009-11-23 Thread Pelle Månsson

Bill Baxter wrote:

On Mon, Nov 23, 2009 at 12:04 PM, Pelle Månsson  wrote:

dsimcha wrote:

== Quote from retard (r...@tard.com.invalid)'s article

Mon, 23 Nov 2009 17:14:54 +, dsimcha wrote:
[snip]

as opposed to the
Java way of having to use 5 different classes just to read in a file
line by line in the default character encoding.

That's a library issue. Has nothing to do with the language.

I agree completely, but for all practical purposes basic parts of the
standard
library that are used by almost everyone are part of the language. Heck,
in many
languages (D being one of them) you can't even write a canonical hello
world
program w/o the standard lib.

Sure you can!

extern (C) int puts(char *);
void main() {
    puts("Hello world!\0".dup.ptr);
}


I think he means that the GC from the standard lib will still be there
to perform that .dup for you.
(You don't need the dup though, btw, string literals are null
terminated and can be passed to C funcs as-is).

Even without that, the GC doesn't get eliminated from executables just
because you don't use it.
There's still some hidden calls to gc init routines that go into any D exe.

--bb
Fair enough. :) I do think I need the dup, though, since the literal is 
immutable otherwise.


I lean more towards treating the standard libs as a core part of the 
language anyway, and the mere possibility of writing your own simplifications 
doesn't add to the usefulness of the language.
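
On the .dup question: the usual way around it is to give the C prototype a const parameter, since D string literals are null-terminated and (as a literal-only special case) convert implicitly to const(char)*. A minimal sketch, assuming a D2 compiler:

extern (C) int puts(const char*);

void main()
{
    puts("Hello world!");  // the literal passes directly; no .dup, no explicit \0
}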


Re: Short list with things to finish for D2

2009-11-25 Thread Pelle Månsson

Denis Koroskin wrote:
On Wed, 25 Nov 2009 21:11:48 +0300, Ellery Newcomer 
 wrote:



On 11/25/2009 10:46 AM, Don wrote:

Denis Koroskin wrote:

I recall that Visual Basic has UBound function that returns upper
bound of a multi-dimensional array:
Dim a(100, 5, 4) As Byte
UBound(a, 1) -> 100
UBound(a, 2) -> 5
UBound(a, 3) -> 4
Works for single-dimensional arrays, too:
Dim b(8) As Byte
UBound(b) -> 8



I brought a point that VB has a UBound function that does exactly what
opDollar is supposed to do, so something like opUpperBound() might fit.


Finally, a viable alternative to opDollar! I could live with
opUpperBound.




VB's ubound doesn't do exactly the same thing as $; in your code snippet

b(0)
b(8)

are both valid elements.

Does opUpperBound imply an opLowerBound?

In VB you can declare things like

dim a(20 to 100, 5, 1 to 4) as Byte

LBound(a,1) -> 20

Yep. Visual Basic. Awesome language. *Cough*


Lower bound is always 0 in D, unlike VB where it can take an arbitrary 
value. As such, there is no need for opLowerBound in D.
Why does it make any sense that the lower bound of any arbitrary class 
needs to be 0?


I'd say opUpperBound is as wrong as opEnd.
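
For reference, a hedged sketch of what the hook under discussion (whatever it ends up being named) would let a user type do, assuming a compiler that supports opDollar; Vec is an invented type:

struct Vec
{
    int[] data;
    @property size_t opDollar() { return data.length; }  // $ inside [] maps here
    int opIndex(size_t i) { return data[i]; }
}

void main()
{
    auto v = Vec([10, 20, 30]);
    assert(v[$ - 1] == 30);  // lowered to v.opIndex(v.opDollar() - 1)
}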


Re: dmd optimizer bug under linux

2009-11-29 Thread Pelle Månsson

Janzert wrote:

Hi,

I've been chasing a bug for a few days and finally have it narrowed down
to the following example. Basically a branch that should always be false
is taken anyway.

The printf line below should never be executed, but it is if compiled
under linux with "dmd -release -O -inline badbranch.d".

I first started chasing this while using 1.043 but have since upgraded
to 1.052 and still see it. Not surprisingly I also see it with phobos or
tango. Unfortunately my assembly reading skills are poor enough that I
can't quite tell what is going on by looking at obj2asm output once the
optimizer is done with it.

Janzert

badbranch.d:
extern(C) int printf(char*, ...);

struct Container
{
ulong[2] bits = [0UL, 1];
}

int called(ulong value)
{
value = value * 3;
return value;
}

int test(Container* c, int[] shift)
{
int count = 0;
if (c.bits[0])
count = 1;
count |= called(c.bits[1]) << shift[0];
// This is always false, but is taken anyway.
if (c.bits[0])
printf("Impossible output %lld\n", c.bits[0]);

return count;
}

int main(char[][] args)
{
int[] shift = [0];
Container c;
return test(&c, shift);
}

I just tried it (in 2.034), and indeed, you should bugzilla it!


Re: Phobos packages a bit confusing

2009-11-30 Thread Pelle Månsson

Ary Borenszweig wrote:

KennyTM~ wrote:

By
far the two most important pieces of I/O functionality I need are:

1.  Read a text file line-by-line.


foreach (line; new Lines!(char) (new File ("foobar.txt")))
   Cout (line).newline;
}



yuck.


Yuck?? I find that code very elegant. How would you like it to be?

foreach (line; open("foobar.txt")) {
  writeln(line);
}

I find the .newline idea rather hackish.
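
For comparison, roughly how this reads with std.stdio.File in D2; the file name is made up, and byLine reuses its buffer, so .idup any line you keep around:

import std.stdio : File, writeln;

void main()
{
    foreach (line; File("foobar.txt").byLine())
        writeln(line);
}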


Re: dynamic classes and duck typing

2009-11-30 Thread Pelle Månsson

Walter Bright wrote:

Bill Baxter wrote:

On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright

   void opDispatch(string name, T...)(T values...)
  ^^^


You didn't use to have to do that with variadic templates.  Is that
also a new change in SVN?


I believe it was always like that.


What do you mean? Not the D I played with?

void test1(T...)(T ts) {
writeln(ts); //works as expected
}
void test2(string s, T...)(T ts) {
writeln(s);  // requires manual specifying of each type
writeln(ts); // e.g. test2!("foo", int, int)(1,2)
}
void test3(string s, T...)(T ts...) {
writeln(s);  // silently dies when called with
writeln(ts); // test3!("foo")(1,2,3,4) in v2.034
}


Re: dynamic classes and duck typing

2009-11-30 Thread Pelle Månsson

Simen kjaeraas wrote:
On Mon, 30 Nov 2009 23:13:23 +0100, Pelle Månsson 
 wrote:



Walter Bright wrote:

Bill Baxter wrote:

On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright

   void opDispatch(string name, T...)(T values...)
  ^^^


You didn't use to have to do that with variadic templates.  Is that
also a new change in SVN?

 I believe it was always like that.


What do you mean? Not the D I played with?

void test1(T...)(T ts) {
 writeln(ts); //works as expected
}
void test2(string s, T...)(T ts) {
 writeln(s);  // requires manual specifying of each type
 writeln(ts); // e.g. test2!("foo", int, int)(1,2)
}
void test3(string s, T...)(T ts...) {
 writeln(s);  // silently dies when called with
 writeln(ts); // test3!("foo")(1,2,3,4) in v2.034
}


It would seem Walter is right, but only for opDispatch. This compiles
fine. If you want compile errors, move the ellipsis around:


struct foo {
void opDispatch( string name, T... )( T value... ) {
}

void bar( T... )( T args ) {

}
}

void main( ) {
foo f;
f.bar( 3 );
f.baz( 3.14 );
}


So, why have this special case for opDispatch? Maybe I am missing something.


Re: dynamic classes and duck typing

2009-12-01 Thread Pelle Månsson

Walter Bright wrote:

retard wrote:
Overall these simplifications don't remove any crucial high level 
language features, in fact they make the code simpler and shorter. For 
instance there isn't high level code that can only be written with 
8-bit byte primitives, static methods or closures, but not with 32-bit 
generic ints, singletons, and generic higher order functions. The only 
thing you lose is some type safety and efficiency.


I'm no expert on Python, but there are some things one gives up with it:

1. the ability to do functional style programming. The lack of 
immutability makes for very hard multithreaded programming.


2. as you mentioned, there's the performance problem. It's fine if you 
don't need performance, but once you do, the complexity abruptly goes 
way up.


3. no contract programming (it's very hard to emulate contract inheritance)

4. no metaprogramming

5. simple interfacing to C

6. scope guard (transactional processing); Python has the miserable 
try-catch-finally paradigm


7. static verification

8. RAII

9. versioning

10. ability to manage resources directly

11. inline assembler

12. constants


I mostly agree, but Python actually has a rather elegant version of RAII.
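
For the comparison's sake, a minimal sketch of the scope guard point (item 6 in the list above) on the D side; the file name is made up:

import std.stdio : File, writeln;

void process()
{
    auto f = File("data.txt", "r");     // RAII: closed when f goes out of scope
    scope(failure) writeln("aborted");  // runs only if an exception escapes
    scope(exit) writeln("done");        // runs on every exit path
    foreach (line; f.byLine())
        writeln(line);
}

void main() { process(); }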


Re: Phobos packages a bit confusing

2009-12-01 Thread Pelle Månsson

bearophile wrote:

Andrei Alexandrescu:
Why not just reuse the same buffer as the previous line? That approach 
is inherently adaptive.


That approach is unsafe. xfile yields byte strings, in D1. When I write 10 
lines long scripts I usually don't need every bit of optimization, I need the 
less bug-prone code as possible, because the thing I have to optimize is my 
coding time. In D1 strings are mutable, so if you put them in an AA as keys you 
must dup them to avoid bugs if you reuse the same buffer.



You'll have to .dup them if you want to use them as non-views always. I 
for one like that approach more.


Why call it xfile and not just open?


And why is there a need for xstdin vs. xfile? Stdin _is_ a file.<


I use it like this:
foreach (line; xstdin) { ... }
line is a string with newline at the end.
I know this isn't the best design, but it's the most handy for my purposes. I 
need to do a limited number of things in those scripts and iterating over the 
lines of a file and over the lines of the stdin are the only two that matter.

Bye,
bearophile


Re: Phobos packages a bit confusing

2009-12-01 Thread Pelle Månsson

Denis Koroskin wrote:
On Tue, 01 Dec 2009 15:22:23 +0300, Pelle Månsson 
 wrote:



bearophile wrote:

Andrei Alexandrescu:
Why not just reuse the same buffer as the previous line? That 
approach is inherently adaptive.
 That approach is unsafe. xfile yields byte strings, in D1. When I 
write 10 lines long scripts I usually don't need every bit of 
optimization, I need the less bug-prone code as possible, because the 
thing I have to optimize is my coding time. In D1 strings are 
mutable, so if you put them in an AA as keys you must dup them to 
avoid bugs if you reuse the same buffer.




You'll have to .dup them if you want to use them as non-views always. 
I for one like that approach more.


Why call it xfile and not just open?


And why is there a need for xstdin vs. xfile? Stdin _is_ a file.<

 I use it like this:
foreach (line; xstdin) { ... }
line is a string with newline at the end.
I know this isn't the best design, but it's the most handy for my 
purposes. I need to do a limited number of things in those scripts 
and iterating over the lines of a file and over the lines of the 
stdin are the only two that matter.

 Bye,
bearophile


In his notation, xfoo is a lazy version of foo (i.e. it reads file in 
chunks as opposed to reading the whole file at once).


So you are essentially asking, "why file instead of open?". What's the 
difference? It's a bikeshed discussion, but I believe file("filename") 
is more clear than open("filename"). Besides, I'm used to "close" 
everything I "open", which is not suitable here/.


File looks like a constructor. You are not constructing a file you open 
for reading.


Also, saying that you close everything you open, are you deallocating 
everything you allocate as well? I feel we have moved past such symmetry.


Re: dynamic classes and duck typing

2009-12-01 Thread Pelle Månsson

Bill Baxter wrote:

2009/12/1 Ary Borenszweig :

Denis Koroskin wrote:

On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig 
wrote:


Denis Koroskin wrote:

On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig
 wrote:


Ary Borenszweig wrote:

retard wrote:

Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:


Ary Borenszweig wrote:

Can you show examples of points 2, 3 and 4?

Have opDispatch look up the string in an associative array that
returns
an associated delegate, then call the delegate.

The dynamic part will be loading up the associative array at run
time.

This is not exactly what everyone of us expected. I'd like to have
something like

void foo(Object o) {
 o.duckMethod();
}

foo(new Object() { void duckMethod() {} });

The feature isn't very dynamic since the dispatch rules are defined
statically. The only thing you can do is rewire the associative array when
forwarding statically precalculated dispatching.

Exactly! That's the kind of example I was looking for, thanks.

Actuall, just the first part of the example:

void foo(Object o) {
   o.duckMethod();
}

Can't do that because even if the real instance of Object has an
opDispatch method, it'll give a compile-time error because Object does not
defines duckMethod.

That's why this is something useful in scripting languages (or ruby,
python, etc.): if the method is not defined at runtime it's an error unless
you define the magic function that catches all. Can't do that in D because
the lookup is done at runtime.

Basically:

Dynanic d = ...;
d.something(1, 2, 3);

is just a shortcut for doing

d.opDispatch!("something")(1, 2, 3);

(and it's actually what the compiler does) but it's a standarized way
of doing that. What's the fun in that?

The fun is that you can call d.foo and d.bar() even though there is no
such method/property.
In ActionScript (and JavaScript, too, I assume), foo.bar is
auto-magically rewritten as foo["bar"]. What's fun in that?

The fun is that in Javascript I can do:

---
function yourMagicFunction(d) {
  d.foo();
}

var something = fromSomewhere();
yourMagicFunction(something);
---

and it'll work in Javascript because there's no type-checking at
compile-time (well, because there's no compile-time :P)

Let's translate this to D:

---
void yourMagicFunction(WhatTypeToPutHere d) {
  d.foo();
}

auto something = fromSomewhere();
yourMagicFunction(something);
---


I believe there will soon be a library type that would allow that.

It's called a template:

void yourMagicFunction(T)(T d) {
 d.foo();
}

I can write that and I can always compile my code. I can use that function
with any kind of symbol as long as it defines foo, whether it's by
definining it explicitly, in it's hierarchy, in an aliased this symbol or in
an opDispatch. That's the same concept as any function in Javascript (except
that in Javascript if the argument doesn't define foo it's a runtime error
and in D it'll be a compile-time error).


If you define a catch-all opDispatch that forwards to a method that
does dynamic lookup, then the error will be a runtime error.

--bb

Which is correct, awesome, great, etc. Wouldn't want it any other way!


Re: dynamic classes and duck typing

2009-12-01 Thread Pelle Månsson

Steven Schveighoffer wrote:
On Tue, 01 Dec 2009 13:50:38 -0500, Andrei Alexandrescu 
 wrote:



Steven Schveighoffer wrote:
On Sat, 28 Nov 2009 18:36:07 -0500, Walter Bright 
 wrote:



And here it is (called opDispatch, Michel Fortin's suggestion):

http://www.dsource.org/projects/dmd/changeset?new=trunk%2f...@268&old=trunk%2f...@267 


 I have a few questions:
 1. How should the compiler restrict opDispatch's string argument?  
i.e. if I implement opDispatch, I'm normally expecting the string to 
be a symbol, but one can directly call opDispatch with any string (I 
can see clever usages which compile but for instance circumvent const 
or something), forcing me to always constrain the string argument, 
i.e. always have isValidSymbol(s) in my constraints.  Should the 
compiler restrict the string to always being a valid symbol name (or 
operator, see question 2)?


Where in doubt, acquire more power :o). I'd say no checks; let user 
code do that or deal with those cases.


It is unlikely that anything other than symbols are expected for 
opDispatch, I can't think of an example that would not want to put the 
isValidSymbol constraint on the method.


An example of abuse:

struct caseInsensitiveWrapper(T)
{
   T _t;
   auto opDispatch(string fname, A...) (A args)
   {
  mixin("return _t." ~ toLower(fname) ~ "(args);");
   }
}

class C { int x; void foo(); }

caseInsensitiveWrapper!(C) ciw;
ciw._t = new C;
ciw.opDispatch!("x = 5, delete _t, _t.foo")();

I don't know if this is anything to worry about, but my preference as an 
author for caseInsensitiveWrapper is that this last line should never 
compile without any special requirements from me.




2. Can we cover templated operators with opDispatch?  I can envision 
something like this:

 opDispatch(string s)(int rhs) if(s == "+") {...}


How do you mean that?


Isn't opBinary almost identical to opDispatch?  The only difference I 
see is that opBinary works with operators as the 'symbol' and dispatch 
works with valid symbols.  Is it important to distinguish between 
operators and custom dispatch?


-Steve
opBinary is a binary operator, opDispatch can be anything. I think they 
should be kept separate.


Re: shortcut for dynamic dispatch and operators

2009-12-01 Thread Pelle Månsson

Andrei Alexandrescu wrote:

KennyTM~ wrote:

On Dec 1, 09 22:30, Steven Schveighoffer wrote:

An idea I just had when thinking about how ugly opDispatch and opBinary
operators will be if we get those was, wouldn't it be cool if the
compiler could translate:

myTemplateMethod("abc" || "def")() if(condition) {}

to

myTemplateMethod(string __x)() if((__x == "abc" || __x == "def") &&
condition) {}

It makes dispatch based on compile-time strings much more palatable, for
example:

opDispatch("foo" || "bar")() {...}
opBinary("+" || "-" || "*")(int rhs) {...}

instead of:

opDispatch(string fn)() if(fn == "foo" || fn == "bar") {...}
opBinary(string op)() if(op == "+" || op == "-" || op == "*")(int rhs)
{...}

In fact, it can be generalized to any type which has literals:

factorial(int x)(){ return factorial!(x-1)() * x;}
factorial(1)() { return 1;}

What I don't know is if the || works in all cases -- because something
like true || false is a valid expression. Maybe someone can come up with
a better way.

-Steve


Alternative suggestion:

Make "x in y" returns a bool and works for arrays. Then you can write

int opBinary(string s)(int rhs) if (s in ["+", "-", "*", "/", "^", 
"|", "&"]) { ... }




It's a bit difficult to see a very thin operator mask a linear 
operation, but I'm thinking maybe "x in y" could be defined if y is a 
compile-time array. In that case, the compiler knows the operation and 
the operand so it may decide to change representation as it finds fit.


Andrei
What do you suggest using when you need to find out if an object is in 
an array? Arrays lacking opIn bothers me.


Re: shortcut for dynamic dispatch and operators

2009-12-01 Thread Pelle Månsson

bearophile wrote:

KennyTM~:

Make "x in y" returns a bool and works for arrays.


That's something more useful than the sum of usefulness of opDispatch, opPow 
and opLength. You use it all the time in code, and in D it's even more useful 
than in Python because in D a small linear scan can be very fast. To do that in 
my dlibs I use the function isIn(item, items), where items can be an AA too of 
course.

Bye,
bearophile
I somewhat agree. For small arrays I find it very useful, I use it all 
the time.


compare:

if (x in [1, 2, 3]) { }

if (x == 1 || x == 2 || x == 3) { }


I find the first one prettier. :)


Re: shortcut for dynamic dispatch and operators

2009-12-01 Thread Pelle Månsson

Bill Baxter wrote:

On Tue, Dec 1, 2009 at 12:23 PM, Bill Baxter  wrote:

On Tue, Dec 1, 2009 at 12:15 PM, Pelle Månsson  wrote:

It's a bit difficult to see a very thin operator mask a linear operation,
but I'm thinking maybe "x in y" could be defined if y is a compile-time
array. In that case, the compiler knows the operation and the operand so it
may decide to change representation as it finds fit.

Andrei

What do you suggest using when you need to find out if an object is in an
array? Arrays lacking opIn bothers me.

I'm guessing Andrei would recommend std.range.find.


er... std.algorithm.find, I mean.

--bb


I find

if (x in [1, 2, 3]) { }

more clear than

if ([1, 2, 3].find(x).length != 0) { }
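
For what it's worth, a small helper in the spirit of bearophile's isIn 
mentioned above (a sketch, not a Phobos function) gets most of that 
readability back without any language change:

import std.algorithm : find;

// Wraps the find-and-check-length dance so call sites read almost
// like `x in [1, 2, 3]`.
bool isIn(T)(T needle, T[] haystack)
{
    return find(haystack, needle).length != 0;
}

void main()
{
    int x = 2;
    assert(isIn(x, [1, 2, 3]));
    assert(!isIn(5, [1, 2, 3]));
}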


Re: Phobos packages a bit confusing

2009-12-01 Thread Pelle Månsson

retard wrote:

Tue, 01 Dec 2009 18:58:25 -0500, bearophile wrote:


Rainer Deyke:

"open" by itself is ambiguous.  What are you opening?  A window?  A
network port?  I think the word "file" needs to be in there somewhere
to disambiguate.

When you program in Python you remember that open is a built-in function
to open files :-) When you want to open other things you import other
names from some module. So this ambiguity usually doesn't introduce
bugs. It's a well-known convention. A few well-chosen conventions (sensible
defaults) save you from a lot of useless coding.


These default values are sometimes very annoying. For instance almost in 
every game you have a game object hierarchy and the super class of game 
objects usually conflicts with built-in 'Object'. If I write an adventure 
game and some event opens a dungeon door, open() suddenly deals with 
files. Also, IIRC Python has a built-in print() command. What if I want to 
redefine it to mean printing to a graphical Quake-like game console?


Namespaces in general seem rather useful. I hate the PHP-like 'there's a 
flat global scope and everything is a free function' approach. It annoys 
me every time I use Phobos.


door.open()? In Python, you can just override what open does if you 
need open(door).


Re: Phobos packages a bit confusing

2009-12-01 Thread Pelle Månsson

Rainer Deyke wrote:

Pelle Månsson wrote:

File looks like a constructor. You are not constructing a file you open
for reading.


"open" by itself is ambiguous.  What are you opening?  A window?  A
network port?  I think the word "file" needs to be in there somewhere to
disambiguate.


Something like new BufferedReader(new FileReader("foo.txt"))? It's quite 
unambiguous.


I'd rather have open as a file-opening function.


Re: dynamic classes and duck typing

2009-12-01 Thread Pelle Månsson

retard wrote:

Tue, 01 Dec 2009 14:22:10 -0800, Walter Bright wrote:


bearophile wrote:

Right. But what people care in the end is programs that get the work
done. If a mix of Python plus C/C++ libs are good enough and handy
enough then they get used. For example I am able to use the PIL Python
lib with Python to load, save and process jpeg images at high-speed
with few lines of handy code. So I don't care if PIL is written in C++:
http://www.pythonware.com/products/pil/

Sure, but that's not about the language. It's about the richness of the
ecosystem that supports the language, and Python certainly has a rich
one.


I thought D was supposed to be a practical language for real world 
problems. This 'D is good because everything can and must be written in 
D' is beginning to sound like a religion. To me it seems the Python way 
is more practical in all ways. Even novice programmers can produce 
efficient programs with it by using a mixture of low-level C/C++ libs and 
high-level Python scripts.


I agree that Python isn't as fast as D and that it lacks type-safety 
features and so on, but at the end of the day the Python coder gets the 
job done while the D coder still fights with inline assembler, compiler 
bugs, porting the app, and fighting the type system (mostly 
purity/constness issues). Python has more libs available, you need to 
write less code to implement the same functionality, and it's all less 
troublesome because of the lack of type annotations. So it's really 
understandable why a greater number of people favor Python.
You don't actually have to use pure, const, inline assembler, etc. D is 
a wonderful language to just do string-and-hashtable code in. All the 
other features are there to help bigger projects (contracts, yay!) or 
projects with special needs (I for one have never needed inline ASM).
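
As an illustration of the kind of string-and-hashtable code meant here (a 
sketch; input source and output format are arbitrary), a word count needs 
none of those advanced features:

import std.stdio;
import std.string : split;

void main()
{
    // Count word occurrences with a built-in associative array;
    // no pure, const or inline asm anywhere in sight.
    int[string] counts;
    foreach (line; stdin.byLine())
    {
        foreach (word; split(line.idup))
        {
            if (auto p = word in counts)
                ++*p;
            else
                counts[word] = 1;
        }
    }
    foreach (word, n; counts)
        writefln("%s: %s", word, n);
}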


Re: dynamic classes and duck typing

2009-12-01 Thread Pelle Månsson

Walter Bright wrote:

But you can do that with the 'with' statement!


The with goes at the use end, not the object declaration end. Or I read 
the spec wrong.
So does the scope guard. I think scope guard solves the same problem as 
the with-statement, only it does it in a more flexible and arguably 
sexier way.
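
A minimal sketch of what that looks like in practice (the resource is just 
simulated with writeln here):

import std.stdio;

void useResource()
{
    writeln("acquire");
    scope(exit) writeln("release");          // runs when the scope ends, thrown or not
    scope(failure) writeln("clean up after an error"); // runs only if an exception escapes

    writeln("work");
    // If the work above threw, both guards would still fire,
    // with no try/finally nesting at the call site.
}

void main()
{
    useResource();
}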


Re: dynamic classes and duck typing

2009-12-02 Thread Pelle Månsson

Walter Bright wrote:

Leandro Lucarella wrote:

I guess D can greatly benefit from a compiler that can compile and run
a multiple-files program with one command


dmd a b c -run args...


Can we have

dmd -resolve-deps-and-run main.d

I use rdmd when I can, but it doesn't manage to link C-libs in properly.


Re: Phobos packages a bit confusing

2009-12-02 Thread Pelle Månsson

retard wrote:

Wed, 02 Dec 2009 08:38:29 +0100, Pelle Månsson wrote:


retard wrote:

Tue, 01 Dec 2009 18:58:25 -0500, bearophile wrote:


Rainer Deyke:

"open" by itself is ambiguous.  What are you opening?  A window?  A
network port?  I think the word "file" needs to be in there somewhere
to disambiguate.

When you program in Python you remember that open is a built-in
function to open files :-) When you want to open other things you
import other names from some module. So this ambiguity usually doesn't
introduce bugs. It's a well-known convention. A few well-chosen
conventions (sensible defaults) save you from a lot of useless coding.

These default values are sometimes very annoying. For instance almost
in every game you have a game object hierarchy and the super class of
game objects usually conflicts with built-in 'Object'. If I write an
adventure game and some event opens a dungeon door, open() suddenly
deals with files. Also, IIRC Python has a built-in print() command. What
if I want to redefine it to mean printing to a graphical Quake-like
game console?

Namespaces in general seem rather useful. I hate the PHP-like 'there's
a flat global scope and everything is a free function' approach. It
annoys me every time I use Phobos.

door.open() ? In python, you can just override what open does if you
need open(door).


In internal class methods the door.open can be written as this.open() or 
just open(). In that case you need to worry about other symbols, if they 
are globally available built-ins.
Not in Python, you can't. Also, in D, this wouldn't be a problem: if it 
is ambiguous, the compiler will tell you so.


Re: dynamic classes and duck typing

2009-12-02 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Walter Bright wrote:

Leandro Lucarella wrote:

I guess D can greatly benefit from a compiler that can compile and run
a multiple-files program with one command


dmd a b c -run args...


Can we have

dmd -resolve-deps-and-run main.d

I use rdmd when I can, but it doesn't manage to link C-libs in properly.


Could you please submit a sample to bugzilla?

Andrei


http://d.puremagic.com/issues/show_bug.cgi?id=3564

Thank you.


Re: should postconditions be evaluated even if Exception is thrown?

2009-12-03 Thread Pelle Månsson

Andrei Alexandrescu wrote:
If a function throws a class inheriting Error but not Exception (i.e. an 
unrecoverable error), then the postcondition doesn't need to be satisfied.


I just realized that postconditions, however, must be satisfied if the 
function throws an Exception-derived object. There is no more return 
value, but the function must leave everything in a consistent state. For 
example, a function reading text from a file may have the postcondition 
that it closes the file, even though it may throw a malformed file 
exception.


This may sound crazy, but if you just follow the facts that distinguish 
regular error handling from program correctness, you must live with the 
consequences. And the consequence is - a function's postcondition must 
be designed to take into account exceptional paths. Only in case of 
unrecoverable errors is the function relieved of its duty.



Andrei
Isn't the post-condition mainly to assert the correctness of the return 
value? Or at least partially? The output cannot be correct if an 
exception is thrown, so any assertion in the post condition concerning 
the output would fail by definition, right?


I would say the invariant() is the correct part to run.
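
For reference, a small sketch in the D2 contract syntax of the time 
(isqrt is a hypothetical function, not from the thread) of the usual 
situation: the out-block talks about the return value, so if the body 
throws there is no result for it to check.

int isqrt(int x)
in
{
    assert(x >= 0);
}
out(result)
{
    // The kind of postcondition that only makes sense when a value
    // is actually returned.
    assert(result * result <= x && (result + 1) * (result + 1) > x);
}
body
{
    int r = 0;
    while ((r + 1) * (r + 1) <= x)
        ++r;
    return r;
}

void main()
{
    assert(isqrt(10) == 3);
}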


Re: should postconditions be evaluated even if Exception is thrown?

2009-12-03 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:
If a function throws a class inheriting Error but not Exception (i.e. 
an unrecoverable error), then the postcondition doesn't need to be 
satisfied.


I just realized that postconditions, however, must be satisfied if 
the function throws an Exception-derived object. There is no more 
return value, but the function must leave everything in a consistent 
state. For example, a function reading text from a file may have the 
postcondition that it closes the file, even though it may throw a 
malformed file exception.


This may sound crazy, but if you just follow the facts that 
distinguish regular error handling from program correctness, you must 
live with the consequences. And the consequence is - a function's 
postcondition must be designed to take into account exceptional 
paths. Only in case of unrecoverable errors is the function relieved 
of its duty.



Andrei
Isn't the post-condition mainly to assert the correctness of the 
return value? Or at least partially? The output cannot be correct if 
an exception is thrown, so any assertion in the post condition 
concerning the output would fail by definition, right?


I would say the invariant() is the correct part to run.


As others have mentioned, you may have different postconditions 
depending on whether an exception was thrown or not.


I think a major failure of exceptions as a language mechanism is that 
they gave the illusion that functions need not worry about what happens 
when an exception traverses them, and only need to focus on the success 
case.



Andrei
In the case of special postconditions for exceptions, I agree it should 
be there. Something to replace the finally.


Re: lazy redux

2009-12-06 Thread Pelle Månsson

Andrei Alexandrescu wrote:
Should we sack lazy? I'd like it to have a reasonable replacement. Ideas 
are welcome!


Andrei
I think they are broken as they are not really lazy, but just convenient 
syntax for passing delegates.


In my mind, a lazy parameter should evaluate just once, and save that 
value. In case of further usage, it should use the saved value instead.


This is actually how I thought they worked until I saw Walter's example 
with writef(x++).
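
That behaviour is easy to demonstrate (a sketch; the point is that the 
argument expression re-runs on every use rather than being cached):

import std.stdio;

void twice(lazy int x)
{
    // lazy is sugar for an implicit delegate: each use of x re-evaluates
    // the argument expression instead of reusing a saved value.
    writeln(x);
    writeln(x);
}

void main()
{
    int i = 0;
    twice(i++);   // prints 0, then 1
    writeln(i);   // prints 2: the expression ran twice
}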


Re: lazy redux

2009-12-07 Thread Pelle Månsson

retard wrote:

Mon, 07 Dec 2009 13:17:10 +, Michal Minich wrote:


Hello bearophile,


Michal Minich:


But introducing "{ exp }" as a delegate/function literal for functions
with no arguments, which implicitly returns the result of the expression,
seems to me like a good idea.


It's a special case, and special cases help to kill languages. It's not
important enough.
But a general shorter syntax for lambdas is possible, like the C# one.
Evaluating lazy arguments only 0 or 1 times sounds like a nice idea.
Bye,
bearophile

Yes, it works well in C#, and it is one of the best extensions of this
language (only adding generics was better).

Consider how it works in C#, and how it could in D

// 1. lambda with no parameter
int a;
var t = new Thread (  () => a=42  );

// 2. lambda with one parameter
string[] arr;
Array.FindAll (arr, item => item.Contains ("abc"));
  
// 3. lambda with more parameters

Foo (  (a, b) => a + b );


You surely understand that Walter doesn't have enough time to change this 
before Andrei's book is out. So D2 won't be getting this. Besides, he 
hasn't even said that he likes the syntax. And D can't infer the types 
that way; you would need



Foo (  (auto a, auto b) => a + b );


or


Foo (  [T,S](T a, S b) => a + b );



// 4. lambda with statement (previous examples were expressions)
Array.FindAll (arr, item =>  { return item.Contains ("abc"); } ); //
curly braces, semicolon and return are required when statement is used.

D could use:

1. auto t = new Thread ( { a=42 } );
or auto t = new Thread ( () { a=42 } );

2. array.findAll (arr, (item) { item.contains ("abc") } );


Andrei invented the string template parameter hack to avoid this. This 
would work too slowly since the dmd backend from the 1960s cannot inline 
anonymous functions. It can only inline named functions.


  
3. foo ( (a, b) { a + b } );


4. array.findAll (arr, (item) { return item.contains ("abc"); } );

I'm not proposing this syntax (maybe I should, but I have a feeling I
would not be the first). It may not even be possible to parse it, but it
seems to me more similar to how functions are currently written. In this
setting {exp} or {stm} is not a *special* case.



Actually, it can, and will, infer the types for (a, b) { ... }

The fact that it doesn't do so right now is in Bugzilla.
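
For comparison, a sketch of the explicitly typed literal that works today, 
with the shorter inferred form in a comment (combine is just a hypothetical 
stand-in higher-order function):

import std.stdio;

// Templated so it accepts either a function pointer or a delegate literal.
int combine(F)(F f)
{
    return f(2, 3);
}

void main()
{
    // Explicit parameter types: accepted today.
    writeln(combine((int a, int b) { return a + b; }));   // prints 5

    // The shorter form under discussion, with parameter types inferred:
    // writeln(combine((a, b) { return a + b; }));
}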


Re: auto ref

2009-12-16 Thread Pelle Månsson

On 12/17/2009 01:05 AM, Michel Fortin wrote:

Object? func(Object? o) {
writeln(o.toString());
return o;
}

MyObject o = func(new MyObject);

Here, "Object?" means Object or a derived type.

You know, just Object means Object or a derived type. That's what 
inheritance is.


Re: This seems to be the Haskell equivalent

2009-12-21 Thread Pelle Månsson

On 12/22/2009 01:16 AM, Nick Sabalausky wrote:

It's already been pointed out in a number of places that that's not
quicksort. Quicksort is in-place and doesn't use the head as the pivot.
Besides "It probably performs like a bitch" defeats the whole point of
quicksort, doesn't it? And, going along with what Andrei pointed out in
another thread, it's hard call a piece of code "beautiful" if it's either
flat-out wrong or otherwise defeats its own point.


It does, however, display the idea behind quicksort quite nicely: pick a 
pivot, then sort the smaller and larger portions of the array separately. 
The rest is just optimization.
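
The same idea, spelled out as a deliberately naive, allocation-happy D 
sketch rather than Haskell:

import std.stdio;

int[] qs(int[] xs)
{
    if (xs.length <= 1)
        return xs.dup;
    int pivot = xs[0];
    int[] smaller, larger;
    foreach (x; xs[1 .. $])
    {
        if (x < pivot)
            smaller ~= x;
        else
            larger ~= x;
    }
    // Pivot, then sort the smaller and larger portions separately.
    return qs(smaller) ~ pivot ~ qs(larger);
}

void main()
{
    writeln(qs([3, 1, 4, 1, 5, 9, 2, 6]));
}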


Re: dmd-x64

2009-12-23 Thread Pelle Månsson

On 12/23/2009 10:40 PM, retard wrote:

Wed, 23 Dec 2009 12:02:53 -0500, bearophile wrote:


Leandro Lucarella:


bearophile, on December 23 at 00:13 you wrote:

Compared to GCC LLVM lacks vectorization (this can be important for
certain heavy numerical computing code), profile-guided optimization
(this is usually less important, it's uncommon that it gives more
than 5-25% performance improvement)


I don't know if that are accurate numbers, but 5-25% looks like a *lot*
to me.


Vectorization can improve 2X or 3X+ the performance of certain code
(typical example: matrix multiplication done right).

Performance differences start to matter in practice when they are 2X or
more. In most situations users aren't able to appreciate a 20%
performance improvement of an application. (But small improvements are
important for the compiler devs because they are cumulative, so many
small improvements may eventually lead to a significant difference).


Aren't able to appreciate? Where are those numbers pulled from?
Autovectorization mostly deals with expression optimizations in loops.
You can easily calculate how much faster some code runs when it uses e.g.
SSE2 instructions instead of plain old x86 instructions.


I think you missed the point; he said vectorization was a big deal. The 
numbers on profile-guided optimization seem a bit odd, though.



LLVM devs are also very nice people, they help me when I have a problem,
and they even implement large changes I ask them, often in a short
enough time. Helping them is fun. This means that probably the compiler
will keep improving for some more time, because in open source projects
the quality of the community is important.


And GCC devs aren't nice people? They won't help you if you have a
problem? Helping them isn't fun? GCC won't keep improving because it's
open source? You make no sense. How much do the LLVM devs pay you for
advertising them?


LLVM is way younger than GCC. In my experiments, I get mostly better 
performance out of clang than out of gcc. Working with LLVM seems like 
more fun to me.


Re: dmd-x64

2009-12-24 Thread Pelle Månsson

On 12/24/2009 11:44 AM, alkor wrote:

D already has TLS. What exactly do you need?

Hmm ... I don't think so.

I've worked out the following info:
http://www.digitalmars.com/d/2.0/cpp0x.html#local-classes
http://www.digitalmars.com/d/2.0/migrate-to-shared.html

but "shared data" is not TLS, or I misunderstand something.

Could you give a TLS example?


int i;
void main() { }

compile with -vtls. :)
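
To spell that out a little (a sketch; the variable names are arbitrary): 
module-level variables in D2 are thread-local by default, and __gshared 
opts out of that, which is what -vtls reports on.

// Thread-local by default in D2: each thread gets its own copy.
int counter;

// Process-global, C-style: explicitly opts out of TLS.
__gshared int globalCounter;

void main()
{
    counter = 1;        // touches this thread's copy only
    globalCounter = 1;  // one instance for the whole process
}

// dmd -vtls example.d reports `counter` as a TLS variable; globalCounter is not.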


Re: What's wrong with D's templates?

2009-12-25 Thread Pelle Månsson

On 12/25/2009 08:17 PM, Walter Bright wrote:

I believe there's plenty that can be achieved with it first. D has a
fairly simple GC implementation in it right now, probably early 90's
technology. It could be pushed an awful lot further.

If you want to help out with it, you're welcome to.


How about a simple way to allocate in TLS? Could be a garbage collected 
heap, which stops only the current thread when collecting.


It doesn't need to be an all-out solution, and you could just advise 
against casting the pointers away from TLS, just as is done with 
immutability.


Then again, I have not measured the performance differences involved in 
the current solution, maybe this is a non-problem.


Re: Is this a bug or a feature?

2010-01-07 Thread Pelle Månsson

On 01/07/2010 02:04 PM, Daniel Murphy wrote:

The following code compiles under dmd v2.037, no closing brace required.

---
import std.stdio;

void main()
{
while(true)
writeln("Bug!");
---

Bug, or feature for the extremely lazy?
I bumped into this myself the other day; I find it to be quite a 
feature. If nothing else, it's kinda cool!


Re: D's auto keyword

2010-01-13 Thread Pelle Månsson

On 01/13/2010 06:19 PM, dsimcha wrote:

== Quote from Justin Johansson (n...@spam.com)'s article

Happy New Year 2010 Everybody.

Having resumed C++ nationality for the last few months, I kind of miss D's 
auto keyword.

I am wondering, though, from an OO/polymorphism perspective, and a UML and 
sound software engineering perspective as well, what does D's auto keyword 
buy you except (perhaps) laziness (in keystrokes)?

Sure, the auto variable decl allows the declared var to take on the static 
type (i.e. as inferred by the compiler), but the programmer still has to 
know (in subsequent method invocations applied to the auto var) just what 
methods are valid for the statically inferred var type being the subject of 
the auto decl.

In some ways, as I said above, I miss "D auto" in C++; but then again, when 
I explicitly write the exact same type as the function return signature 
says, I feel more in control of my software design.

In an ideal world, which of course does not really exist, a pure OO model 
may well be that of single inheritance, and therefore all methods would, or 
could, be forced into a base class and hence, for object/polymorphic types, 
D's auto keyword would not prove much advantage.

(Pray, let's not get into fragile base class discussions.)

At the end of the day, I'm not sure if D's auto keyword really helps to make 
my code more readable to others (a la programming-in-the-large) or if it 
just helps me with typing shortcuts (a la programming-in-the-small).

btw. 20 years ago I thought the Forth language was fantastic. Then later I 
learned the difference between programming-in-the-small and 
programming-in-the-large.

Of course, Forth still holds fond memories for me .. but today I'd still 
rather stick to C++.

In writing this NG post, I was wondering about a subject line like "what's 
the best thing about D", but then my love/hate relationship with D's auto 
keyword really got me.

btw. Do any other languages have an "auto" var idiom? I don't remember 
Scala having such (and it's really modern), though perhaps my memory lapses.

Cheers again,
Justin Johansson


One underappreciated thing auto gives is DRY for types. It makes it easier 
to change the type of some object in the place where it's initially decided, 
because those changes will automagically be propagated to everything that 
uses that object, as long as the new type supports the same compile-time 
interface as the old type.

This!

Makes some refactoring tasks ridiculously smooth.
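
A tiny sketch of the DRY-for-types point (makeContainer is hypothetical): 
if its return type later changes to anything with the same compile-time 
interface, the auto call sites need no edits.

int[] makeContainer()
{
    return [1, 2, 3];
}

void main()
{
    auto c = makeContainer();  // only the declaration site knows the concrete type
    c ~= 4;                    // keeps compiling as long as the new type supports ~=
    assert(c.length == 4);
}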


Re: Class Instance allocations

2010-01-13 Thread Pelle Månsson

On 01/13/2010 06:18 PM, bearophile wrote:

bearophile:

And currently the D GC seems to always return aligned to 16 bytes (even chunks of 
memory smaller than 16 bytes).<


I hope to be wrong :-)
I think the GC has a 16-byte minimum allocation block; I believe I read 
that somewhere around here.


Re: @disable

2010-01-14 Thread Pelle Månsson

On 01/14/2010 03:55 PM, Leandro Lucarella wrote:

What is @disable supposed to be for?
http://www.dsource.org/projects/dmd/changeset/336

Thanks.

#define STCdisable   0x20LL // for functions that are not 
callable


Re: @disable

2010-01-14 Thread Pelle Månsson

On 01/15/2010 07:35 AM, john foo wrote:

Walter Bright Wrote:


Leandro Lucarella wrote:

Exactly, it seems to me that the generalization in this case is
counterproductive.


It's similar to the motivation for the "= delete" capability proposed
for C++0x. Lawrence Crowl makes a good case for it:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2326.html#delete

Lawrence mentions several uses for it.


So you're copying yet another C++0x feature and renaming it to attract more 
positive publicity..


How is this not a good thing?


Re: @disable

2010-01-16 Thread Pelle Månsson

On 01/16/2010 01:46 AM, Leandro Lucarella wrote:

Ali Çehreli, on January 15 at 16:01 you wrote:

http://www.digitalmars.com/d/2.0/declaration.html#AutoDeclaration


It is news to me that the following works without 'auto':

struct S
{
 int i;

 this(int i)
 {
 this.i = i;
 }
}

void main()
{
 const c = S(42);   //<-- no 'auto' needed
}

Now I realize that 'auto' is for when we want type inference for
mutable variables because this doesn't work:

 c = S(42);   //<-- compiler ERROR

So we have to use a keyword:

 auto c = S(42);
 ++c.i;

If I understand it correctly, 'auto' serves as the nonexistent
'mutable' keyword in this case.

I think to be consistent, I will continue using 'auto' even for when
a storage class is specified:

 const auto c = S(42);  // works too

For me, that gives 'auto' a single meaning: "the inferred type".

Do I get it right? :)


I don't think so. auto means in D the same as in C/C++; the difference
is that D does type inference when a *storage class* is given. const,
static, immutable, shared are other storage classes, so when you use
them, you can infer the type too (if no type is given).

You can do const auto c = 1; (I think), but I can't do static auto c = 1;
(I think too). You can omit auto when declaring automatic variables if you
specify the type (seen the other way :), because it defaults to auto. And
you can omit the type if you use a storage class, because it defaults
to the inferred type.


Makes sense, but static auto totally works.

I think auto just means inferred type.
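
A quick sketch of the behaviour being described, as of D2 at the time (so a 
claim worth re-checking against your own compiler):

void main()
{
    auto a = 1;          // plain type inference
    const b = 2.0;       // const alone triggers inference: b is const(double)
    immutable c = "hi";  // immutable alone works too
    static auto d = 3;   // static combined with auto also compiles
    ++a;                 // a is mutable; b and c are not
}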


Re: Tidy auto [Was: Re: @disable]

2010-01-17 Thread Pelle Månsson

On 01/17/2010 04:20 PM, bearophile wrote:

dsimcha:

What would this accomplish?  Everyone who's been using D for a while knows that,


It will help people who haven't been using D for long yet, as I have said.



If doing this were more verbose, i.e. if I couldn't just write:
immutable y = 2 * x + 1;
I might be less inclined to do this.


Is this too long?
auto immutable y = 2 * x + 1;

If it's too long, there are other ways to shorten it (they require a syntax 
change):
var immutable y = 2 * x + 1;
Or:
immutable y := 2 * x + 1;

Bye,
bearophile


Do you find that the extra : adds a lot of otherwise missing clarity?

I think the way it is now is great. Maybe if the := syntax added 
automatic immutability too, it would be useful.

