Re: Implicit enum conversions are a stupid PITA

2010-03-28 Thread Yigal Chripun
KennyTM~ Wrote:

 On Mar 26, 10 18:52, yigal chripun wrote:
  KennyTM~ Wrote:
 
  On Mar 26, 10 05:46, yigal chripun wrote:
 
while it's true that '?' has one unicode value for it, it's not true for 
all sorts of diacritics and combining code-points. So your approach is to 
pass the responsibility for that to the end user, which in 99% of cases will 
not handle this correctly.
 
 
Non-issue. Since when can a character literal store > 1 code-point?
 
  character != code-point
 
  D chars are really as you say code-points and not always complete 
  characters.
 
  here's a use case for you:
  you want to write a fully unicode aware search engine.
If you just try to match the given sequence of code-points in the search 
term, you will miss valid matches since, for instance, you do not take into 
account permutations of the order of combining marks.
  you can't just assume that the code-point value identifies the character.
 
Stop being off-topic. '?' is of type char, not string. A char always 
holds an octet of a UTF-8 encoded sequence. The numerical content is 
unique and well-defined*. Therefore adding 4 to '?' also has a meaning.
 
 * If you're paranoid you may request the spec to ensure the character is 
 in NFC form.

Huh? You jump into the middle of a conversation and I'm off-topic?

Now, to get back to the topic at hand:

D's current design is:
char/dchar/wchar are integral types that can contain any value/encoding, even 
though D prefers Unicode. This is not enforced; 
e.g. you can have a valid wchar which you increment by 1 and get an invalid 
wchar. 

Instead, let's have proper, well-defined semantics in D:

Design A: 
char/wchar/dchar are defined to be Unicode code-points for the respective 
encodings. This is enforced by the language, so if you want to define a 
different encoding you must use something like bits!8.
Arithmetic on code-points is defined according to the Unicode standard. 

Design B: 
char represents a (perhaps multi-byte) character. 
Arithmetic on this type is *not* defined.

In either case these types should not be treated as plain integral types.
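To make Design B concrete, here is a minimal sketch in Java (Java is used for illustration since the thread repeatedly points at its type design; the TextChar name and API are invented, not part of any proposal):

```java
// Hypothetical "Design B" character type: equality is defined, arithmetic is not.
public final class TextChar {
    private final String grapheme; // may span several code points, e.g. "e" + combining acute

    public TextChar(String grapheme) {
        this.grapheme = grapheme;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof TextChar && ((TextChar) o).grapheme.equals(this.grapheme);
    }

    @Override
    public int hashCode() {
        return grapheme.hashCode();
    }

    @Override
    public String toString() {
        return grapheme;
    }
    // Deliberately no +, -, or ++: arithmetic on a text character is left undefined.
}
```

Since the type exposes no numeric view at all, an expression like `c + 4` simply fails to compile, which is the point of Design B.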


Re: Implicit enum conversions are a stupid PITA

2010-03-26 Thread yigal chripun
KennyTM~ Wrote:

 On Mar 26, 10 05:46, yigal chripun wrote:
 
while it's true that '?' has one unicode value for it, it's not true for 
all sorts of diacritics and combining code-points. So your approach is to 
pass the responsibility for that to the end user, which in 99% of cases will 
not handle this correctly.
 
 
Non-issue. Since when can a character literal store > 1 code-point?

character != code-point 

D chars are really as you say code-points and not always complete characters. 

here's a use case for you:
you want to write a fully unicode aware search engine. 
If you just try to match the given sequence of code-points in the search term, 
you will miss valid matches since, for instance, you do not take into account 
permutations of the order of combining marks.
you can't just assume that the code-point value identifies the character.
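This failure mode is easy to reproduce with Java's java.text.Normalizer (a sketch for illustration; the post names no particular library): the same character, encoded two different ways, does not match until both sides are normalized.

```java
import java.text.Normalizer;

public class CombiningMarks {
    public static void main(String[] args) {
        String composed   = "\u00e9";   // 'é' as a single precomposed code point
        String decomposed = "e\u0301";  // 'e' followed by a combining acute accent

        // Naive code-point comparison misses the match:
        System.out.println(composed.equals(decomposed)); // false

        // Normalizing both sides to the same form (NFC here) fixes it:
        String a = Normalizer.normalize(composed,   Normalizer.Form.NFC);
        String b = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
        System.out.println(a.equals(b)); // true
    }
}
```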


Re: Implicit enum conversions are a stupid PITA

2010-03-26 Thread yigal chripun
Walter Bright Wrote:

 
 That's true, '?' can have different encodings, such as for EBCDIC and 
 RADIX50. 
 Those formats are dead, however, and ASCII has won. D is specifically a 
 Unicode 
 language (a superset of ASCII) and '?' has a single defined value for it.
 
 Yes, Unicode has some oddities about it, and the poor programmer using those 
 characters will have to deal with it, but that does not change that quoted 
 character literals are always the same numerical value. '?' is not going to 
 change to another one tomorrow or in any conceivable future incarnation of 
 Unicode.
 

another point regarding encodings -
While it's true that for English there's a clear winner - ASCII, and Unicode as 
a superset of it - this doesn't (yet) apply to other languages. For example, it is 
still preferred for Russian to use another pre-existing encoding over Unicode. 


Re: Implicit enum conversions are a stupid PITA

2010-03-25 Thread yigal chripun
Walter Bright Wrote:

 Nick Sabalausky wrote:
  To put it simply, I agree with this even on mere principle. I'm convinced 
  that the current D behavior is a blatant violation of strong-typing and 
  smacks way too much of C's so-called type system.
 
 You're certainly not the first to feel this way about implicit conversions. 
 Niklaus Wirth did the same, and designed Pascal with no implicit conversions. 
 You had to do an explicit cast each time.
 
 Man, what a royal pain in the ass that makes coding in Pascal. 
 Straightforward 
 coding, like converting a string of digits to an integer, becomes a mess of 
 casts. Even worse, casts are a blunt instrument that *destroys* type checking 
 (that wasn't so much of a problem with Pascal with its stone age abstract 
 types, 
 but it would be killer for D).
 
 Implicit integral conversions are not without problems, but when I found C I 
 threw Pascal under the nearest bus and never wrote a line in it again. The 
 taste 
 was so bad, I refused to even look at Modula II and its failed successors.
 
 D has 12 integral types. Disabling implicit integral conversions would make 
 it 
 unbearable to use.

here's a simple version without casts:
int toString(dchar[] arr) {
	int temp = 0;
	for (int i = 0; i < arr.length; i++) {
		int digit = arr[i].valueOf - 0x30; // *
		if (digit < 0 || digit > 9) break;
		temp += 10^^i * digit;
	}
	return temp;
}

[*] Assume that dchar has a valueOf property that returns the value.

where's that mess of casts you mention?
Pascal is hardly the only language without explicit casts. ML is also properly 
strongly typed and is an awesome language to use.

The fact that D has 12 integral types is a bad design. Why do we need so many 
built-in types? To me this clearly shows a need to refactor this aspect of D. 
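For contrast, the same parse written in Java leans on exactly the implicit char-to-int conversion under debate (a sketch, not from the thread; it accumulates digits left to right):

```java
public class DigitParse {
    // char participates in integer arithmetic implicitly:
    // s.charAt(i) - '0' silently widens both chars to int.
    public static int parseDigits(String s) {
        int temp = 0;
        for (int i = 0; i < s.length(); i++) {
            int digit = s.charAt(i) - '0';
            if (digit < 0 || digit > 9) break; // stop at the first non-digit
            temp = temp * 10 + digit;
        }
        return temp;
    }

    public static void main(String[] args) {
        System.out.println(parseDigits("123"));   // 123
        System.out.println(parseDigits("42abc")); // 42
    }
}
```

In Pascal this would be written with ORD(s[i]) - ORD('0'); whether that explicitness is a safety feature or a nuisance is precisely the disagreement in this thread.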

  


Re: Implicit enum conversions are a stupid PITA

2010-03-25 Thread Yigal Chripun
Walter Bright Wrote:

 yigal chripun wrote:
here's a simple version without casts: int toString(dchar[] arr) { int temp =
0; for (int i = 0; i < arr.length; i++) { int digit = arr[i].valueOf - 0x30; //
* if (digit < 0 || digit > 9) break; temp += 10^^i * digit; } return temp; }
  
  [*] Assume that dchar has a valueOf property that returns the value.
  
  where's that mess of casts you mention?
 
 In Pascal, you'd have type errors all over the place. First off, you cannot do
 arithmetic on characters. You have to cast them to integers (with the ORD(c)
 construction).
 

Pascal is hardly the only language without explicit casts.
 
 Pascal has explicit casts. The integer to character one is CHR(i), the 
 character 
 to integer is ORD(c).
 

I meant implicit, sorry about that. The Pascal way is definitely the correct 
way. What's the semantics, in your opinion, of ('f' + 3)? What about ('?' + 4)? 
Making such arithmetic valid is wrong.
I'm sure that the first Pascal versions had problems which caused you to ditch 
that language (they were fixed later). I doubt, though, that this had a large 
impact on Pascal's problems. 

 
  ML is also
  properly strongly typed and is an awesome language to use.
 
 I don't know enough about ML to comment intelligently on it.
 
 
  The fact that D has 12 integral types is a bad design, why do we need so 
  many
  built in types? to me this clearly shows a need to refactor this aspect of 
  D.
 
 Which would you get rid of? (13, I forgot bool!)
 
 bool
 byte
 ubyte
 short
 ushort
 int
 uint
 long
 ulong
 char
 wchar
 dchar
 enum

You forgot the cent and ucent types, and what about 256-bit types? 

Here's how I'd want it designed:
First off, a Boolean type should not belong to this list at all and shouldn't be 
treated as a numeric type.
Second, there are really only a few use-cases that are relevant:

signed types for representing numbers:
1) unlimited integral type - int
2) limited integral type - int!(bits), e.g. int!16, int!8, etc..
3) user defined range: e.g. [0, infinity) for positive numbers, etc..

unsigned bit-packs:
4) bits!(size), e.g. bits!8, bits!32, etc.. 

of course you can define useful aliases, e.g.
alias bits!8 Byte;
alias bits!16 Word; 
.. 
or you can define the aliases per architecture, so that Word above will be 
defined for the current arch (I don't know what the native word size is on, 
say, ARM and other platforms).

char and relatives should be for Unicode text only (perhaps a better name 
is code-point). For other encodings, use the above bit packs, e.g.
alias bits!7 Ascii;
alias bits!8 ExtendedAscii;
etc..

enum should be an enumeration type. You can find an excellent strongly-typed  
design in Java 5.0


Re: Implicit enum conversions are a stupid PITA

2010-03-25 Thread Yigal Chripun
Regan Heath Wrote:

 yigal chripun wrote:
  Here's a Java 5 version with D-like syntax: 
  
  enum Flag {
  READ  (0x1), WRITE (0x2), OTHER(0x4)
  
  const int value;
  private this (int value) {
  this.value = value;
   }
  }
  
   int main(string[] args) {
foo(Flag.READ.value);
foo(Flag.READ.value | Flag.WRITE.value);
  return 0;
  }
  
  No conversions required. 
 
 Cool.  I wasn't aware of that Java feature/syntax - shows how much Java 
 I do :p
 
 But.. what is the definition of 'foo' in the above, specifically does it 
 take an argument of type Flag? or int?
 

foo's signature in this case would be something like:
void foo(int);


 If the latter, then all you're doing is shifting the conversion.  In my 
 example it was a cast, in the above it's a property called 'value' which 
 converts the enum to 'int'.
 

It might do something very similar, but it is not the same semantically. 
By casting the enum member to an int you say something about its identity, 
whereas a value property is just a property.
For example, I can define a Color enum that has two properties: an ordinal 
value and a hex RGB value.



 Interestingly you can do something similar in D...
 
 import std.stdio;
 
 struct Enum { this(int v) { value = v; } int value; }
 
 struct Flag
 {
Enum READ  = Enum(1);
Enum WRITE = Enum(2);
Enum OTHER = Enum(4);
 }
 
 static Flag FLAG;
 
 void foo(int flag)
 {
writefln("flag = %d", flag);
 }
 
 void main()
 {
foo(FLAG.READ.value);
foo(FLAG.READ.value|FLAG.WRITE.value);
 }
 
 What I really want is something more like...
 
 import std.stdio;
 import std.string;
 
 struct Enum
 {
int value;
 
this(int v)
{
  value = v;
}
 
Enum opBinary(string s : "|")(Enum rhs)
{
  return Enum(value|rhs.value);
}
 
const string toString()
{
return format("%d", value);
}
 }
 
 struct Flag
 {
Enum READ  = Enum(1);
Enum WRITE = Enum(2);
Enum OTHER = Enum(4);
 }
 
 static Flag FLAG;
 
 void foo(Enum e)
 {
writefln("e = %s", e);
 }
 
 void main()
 {
foo(FLAG.READ);
foo(FLAG.READ|FLAG.WRITE);
 }
 
 This is only a partial implementation, to complete it I would have to 
 manually define all the numeric and logical operators in my Enum struct.
 
 What I want is for D to do all this with some syntactical sugar, eg.
 
 enum FLAG : numeric
 {
READ = 1, WRITE = 2, OTHER = 4
 }
 
 R

That's not how it's implemented. The enum members are actually singleton 
instances of anonymous inner classes. Each member can have its own methods as 
well as methods defined for the enum type itself. 
I can have:
enum SolarSystem { Earth(mass, distance_from_sun), ...}
SolarSystem.Earth.rotate();
etc...

You could implement this in D with structs/classes but it'll take a lot of 
code. Java does this for you.
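A sketch of that pattern in actual Java 5 syntax (the planet data and the surfaceGravity method are illustrative stand-ins for the rotate() example in the post):

```java
// Each enum member is a singleton carrying its own data; methods work per member.
enum Planet {
    EARTH(5.976e24, 6.378e6),
    MARS (6.421e23, 3.397e6);

    final double mass;   // kilograms
    final double radius; // meters

    Planet(double mass, double radius) {
        this.mass = mass;
        this.radius = radius;
    }

    static final double G = 6.67e-11; // gravitational constant

    // Behavior available on every member, computed from per-member data.
    double surfaceGravity() {
        return G * mass / (radius * radius); // m/s^2
    }
}
```

Usage is then `Planet.EARTH.surfaceGravity()`, with no numeric identity involved anywhere.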




Re: Implicit enum conversions are a stupid PITA

2010-03-25 Thread yigal chripun
Walter Bright Wrote:

 Yigal Chripun wrote:
  Walter Bright Wrote:
  Pascal has explicit casts. The integer to character one is CHR(i), the
  character to integer is ORD(c).
  I meant implicit, sorry about that. The pascal way is definitely the correct
  way. what's the semantics in your opinion of ('f' + 3) ? what about ('?' +
  4)? making such arithmetic valid is wrong.
 
 Yes, that is exactly the opinion of Pascal. As I said, I've programmed in 
 Pascal, suffered as it blasted my kingdom, and I don't wish to do that again. 
 I 
 see no use in pretending '?' does not have a numerical value that is very 
 useful 
 to manipulate.
 

'?' indeed does *not* have a single numerical value that identifies it in a 
unique manner. You can map it to different numeric values based on encoding, and 
even within the same encoding this doesn't always hold. See normalization in 
Unicode for different encodings of the same character.

  I'm sure that the first Pascal
  versions had problems which caused you to ditch that language (they where
  fixed later).
 
 They weren't compiler bugs I was wrestling with. They were fundamental design 
 decisions of the language.
 
  I doubt it though that this had a large impact on Pascal's
  problems.
 
 I don't agree. Pascal was a useless language as designed. This meant that 
 every 
 vendor added many incompatible extensions. Anyone who used Pascal got locked 
 into a particular vendor. That killed it.
 
 
  The fact that D has 12 integral types is a bad design, why do we need so
  many built in types? to me this clearly shows a need to refactor this
  aspect of D.
  Which would you get rid of? (13, I forgot bool!)
  
  bool byte ubyte short ushort int uint long ulong char wchar dchar enum
  
  you forgot the cent and ucent types and what about 256bit types?
 
 They are reserved, not implemented, so I left them out. In or out, they don't 
 change the point.
 
 
  Here's How I'd want it designed: First of, a Boolean type should not belong
  to this list at all and shouldn't be treated as a numeric type. Second, 
  there
  really only few use-cases that are relevant
  
  signed types for representing numbers: 1) unlimited integral type - int 2)
  limited integral type  - int!(bits), e.g. int!16, int!8, etc.. 3) user
  defined range: e.g. [0, infinity) for positive numbers, etc..
  
  unsigned bit-packs: 4) bits!(size), e.g. bits!8, bits!32, etc..
  
  of course you can define useful aliases, e.g. alias bits!8 Byte; alias
  bits!16 Word; .. or you can define the aliases per the architecture, so that
  Word above will be defined for the current arch (I don't know what's the
  native word size on say ARM and other platforms)
 
 People are going to quickly tire of writing:
 
 bits!8 b;
 bits!16 s;
 
 and are going to use aliases:
 
 alias bits!8 ubyte;
 alias bits!16 ushort;
 
 Naturally, either everyone invents their own aliases (like they do in C with 
 its 
 indeterminate int sizes), or they are standardized, in which case we're back 
 to 
 pretty much exactly the same point we are at now. I don't see where anything 
 was 
 accomplished.
 
Not true. Say I'm using my own proprietary hardware and I want to have bits!24. 
How would I do that in current D? 
What if new hardware adds support for larger vector ops and 512-bit registers? 
Will we now need to extend the language with another type?

On the flip side of this, programmers almost always will need just an int, since 
they need the mathematical notion of an integral type. 
It's pretty rare that programmers want something other than int, and in those 
cases they'll define their own types anyway, since they know what their 
requirements are. 
 
 
  char and relatives should be for text only per Unicode, (perhaps a better
  name is code-point).
 
 There have been many proposals to try and hide the fact that UTF-8 is really 
 a 
 multibyte encoding, but that makes for some pretty inefficient code in too 
 many 
 cases.

I'm not saying we should hide that; on the contrary, the compiler should 
enforce Unicode, and other encodings should use a bits type instead. A [w|d]char 
must always contain a valid Unicode value.
Calling char[] a string is wrong, since it is actually an array of code units, 
which is not always a valid encoding. A dchar[] is, however, a valid string, 
since each individual dchar contains a full code-point. 
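The hazard is demonstrable with any strict UTF-8 decoder; here is a Java sketch (illustrative, not from the thread): slicing a byte array between the two bytes of 'é' produces a sequence that a validating decoder rejects.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Slice {
    public static void main(String[] args) {
        byte[] utf8 = "h\u00e9llo".getBytes(StandardCharsets.UTF_8); // 'é' occupies 2 bytes

        // A strict decoder that reports malformed input instead of replacing it.
        CharsetDecoder strict = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT);

        try {
            // The slice ends between the two bytes of 'é': not valid UTF-8.
            strict.decode(ByteBuffer.wrap(utf8, 0, 2));
            System.out.println("valid");
        } catch (CharacterCodingException e) {
            System.out.println("invalid slice"); // the strict decoder rejects it
        }
    }
}
```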

 
  for other encodings use the above bit packs, e.g. alias
  bits!7 Ascii; alias bits!8 ExtendedAscii; etc..
  
  enum should be an enumeration type. You can find an excellent strongly-typed
  design in Java 5.0
 
 Those enums are far more heavyweight - they are a syntactic sugar around a 
 class 
 type complete with methods, interfaces, constructors, etc. They aren't even 
 compile time constants! If you need those in D, it wouldn't be hard at all to 
 make a library class template that does the same thing.
 

They aren't that heavyweight. Instead of assigning an int to each symbol, you 
assign a pointer address.

Re: Implicit enum conversions are a stupid PITA

2010-03-25 Thread yigal chripun
It seems that on a conceptual level we are in complete agreement. 
The difference seems to be that you want to push some things onto the user 
which I think the language should provide.

Walter Bright Wrote:

 yigal chripun wrote:
  Walter Bright Wrote:
  
  Yigal Chripun wrote:
  Walter Bright Wrote:
  Pascal has explicit casts. The integer to character one is CHR(i), the 
  character to integer is ORD(c).
  I meant implicit, sorry about that. The pascal way is definitely the
  correct way. what's the semantics in your opinion of ('f' + 3) ? what
  about ('?' + 4)? making such arithmetic valid is wrong.
  Yes, that is exactly the opinion of Pascal. As I said, I've programmed in 
  Pascal, suffered as it blasted my kingdom, and I don't wish to do that
  again. I see no use in pretending '?' does not have a numerical value that
  is very useful to manipulate.
  
  
'?' indeed does *not* have a single numerical value that identifies it in a
unique manner. You can map it to different numeric values based on encoding
and even within the same encoding this doesn't always hold. See normalization
in Unicode for different encodings of the same character.
 
 
 That's true, '?' can have different encodings, such as for EBCDIC and 
 RADIX50. 
 Those formats are dead, however, and ASCII has won. D is specifically a 
 Unicode 
 language (a superset of ASCII) and '?' has a single defined value for it.
 
 Yes, Unicode has some oddities about it, and the poor programmer using those 
 characters will have to deal with it, but that does not change that quoted 
 character literals are always the same numerical value. '?' is not going to 
 change to another one tomorrow or in any conceivable future incarnation of 
 Unicode.
 

while it's true that '?' has one unicode value for it, it's not true for all 
sorts of diacritics and combining code-points. So your approach is to pass the 
responsibility for that to the end user, which in 99% of cases will not handle 
this correctly. 

  Naturally, either everyone invents their own aliases (like they do in C
  with its indeterminate int sizes), or they are standardized, in which case
  we're back to pretty much exactly the same point we are at now. I don't see
  where anything was accomplished.
  
  Not true. say I'm using my own proprietary hardware and I want to have
  bits!24. How would I do that in current D?
 
 You'd be on your own with that. I had a discussion recently with a person who 
 defended C's notion of compiler defined integer sizes, pointing out that this 
 enabled compliant C compilers to be written for DSLs with 32 bit bytes. That 
 is 
 pedantically correct, compliant C compilers were written for it. 
 Unfortunately, 
 practically no C applications could be ported to it without extensive 
 modification!
 
 For your 24 bit machine, you will be forced to write all your own custom 
 software, even if the D specification supported it.
 
I completely agree with you that the C notion isn't good for integral types. It 
would only make sense for a bits kind of type, where you'd see size_t in 
D (not common in user code).
Of course any software that depends on a specific size, e.g. bits!32, will need 
to be extensively modified if it's ported to an arch which requires a different 
size. But I'm talking about the need to define bits!(T) myself instead of 
having it in the standard library.

 
  what if new hardware adds support
  for larger vector ops and 512bit registers, will we now need to extend the
  language with another type?
 
 D will do something to accommodate it, obviously we don't know what that will 
 be 
 until we see what those types are and what they do. What I don't see is using 
 512 bit ints for normal use.
 

There's another issue here, and that's that all those types are special cases in 
the compiler, handled separately from library types. 
Had the stdlib provided these templated types, it would have allowed using them 
in more generic ways instead of special-casing them everywhere. 

 
 
  char and relatives should be for text only per Unicode, (perhaps a better
   name is code-point).
  There have been many proposals to try and hide the fact that UTF-8 is
  really a multibyte encoding, but that makes for some pretty inefficient
  code in too many cases.
  
  I'm not saying we should hide that, on the contrary, the compiler should
  enforce unicode and other encodings should use a bits type instead. a
  [w|d]char must always contain a valid unicode value. calling char[] a string
is wrong since it is actually an array of code units which is not always a
valid encoding. a dchar[] is however a valid string since each individual
dchar contains a full code-point.
 
 Conceptually, I agree, it's wrong, but it's not practical to force the issue.

 
 
  enum should be an enumeration type. You can find an excellent
  strongly-typed design in Java 5.0
  Those enums are far more heavyweight - they are a syntactic sugar around a
  class type complete

Re: Implicit enum conversions are a stupid PITA

2010-03-24 Thread yigal chripun
Nick Sabalausky Wrote:

 yigal chripun yigal...@gmail.com wrote in message 
 news:hobg4b$12e...@digitalmars.com...
 
This also interacts with the crude hack of "this enum is actually a 
constant".
If you remove the implicit casts, then how would you be able to do:
  void foo(int p);
  enum { bar = 4 }; // don't remember the exact syntax here
  foo(bar); // compile-error?!
 
 
AIUI, that style of enum is already considered different by the compiler 
anyway. Specifically, it doesn't create any new type, whereas the other 
type of enum creates a new semi-weak type. I don't think it would be too big 
of a step to go one step further and change "this kind of enum creates a new 
semi-weak type" to "this kind of enum creates a new strong type". But yea, I 
absolutely agree that calling a manifest constant an "enum" is absurd. It 
 still bugs the hell out of me even today, but I've largely shut up about it 
 since Walter hasn't wanted to change it even though he seems to be the only 
 one who doesn't feel it's a bad idea (and it's not like it causes practical 
 problems when actually using the language...although I'm sure it must be a 
 big WTF for new and prospective D users).
 
 
I feel that enum needs to be re-designed. I think that C-style "enums are 
numbers" designs are *bad*, *wrong* designs that expose internal implementation, 
and the only valid design is that of Java 5.
 
  e.g.
  enum Color {blue, green}
  Color c = Color.blue;
  c++; // WTF?  should NOT compile
 
  A C style enum with values assigned is *not* an enumeration but rather a 
  set of meaningful integral values and should be represented as such.
 
  This was brought up many many times in the NG before and based on past 
  occurences will most likely never change.
 
 I would hate to see enums lose the concept of *having* a base type and base 
 values because I do find that to be extremely useful (Haxe's enums don't 
 have a base type and, from direct experience with them, I've found that to 
 be a PITA too). But I feel very strongly that conversions both to and from 
 the base type need to be explicit. In fact, that was one of the things that 
 was bugging me about C/C++ even before I came across D. D improves the 
 situation of course, but it's still only half-way.
 
 
 

Regarding the base type notion, I re-phrased my inaccurate statement above in a 
reply to my post. I don't agree that enums should have a base type; enums 
should be distinct strong types.
The numeric value should be a *property* of an enum member and should not define 
its identity, which is how it works in Java 5, where each member is a singleton 
class. 

you should never do:
void foo(int); 
foo(MyEnum.Bar); // this is bad design
instead do:
foo(MyEnum.Bar.value); // value is a regular property.

This is also more flexible, since you could do things like:
// assume I defined a Color enum
foo(Color.Red.ordinal);
bar(Color.Red.rgb);

where foo belongs to an API that defines a list of colors (red is 5) and bar 
belongs to a different API that uses the RGB value (red is 0xff).

how would you do that with a C style enum?
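In Java 5 the Color example might look like this (the member names and numeric values are made up to match the prose above):

```java
// Two independent numeric properties per member; neither one *is* the member.
enum Color {
    RED  (5, 0xFF0000),
    GREEN(6, 0x00FF00);

    final int apiId; // what the first API expects ("red is 5")
    final int rgb;   // what the second API expects

    Color(int apiId, int rgb) {
        this.apiId = apiId;
        this.rgb = rgb;
    }
}
```

A C-style enum forces you to pick one of those numbers as the member's single identity; here both are just properties, read as `Color.RED.apiId` or `Color.RED.rgb`.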



Re: Implicit enum conversions are a stupid PITA

2010-03-24 Thread yigal chripun
Regan Heath Wrote:

 
 One thing being able to convert enum to it's base type does allow is this:
 
 import std.stdio;
 
 enum FLAG
 {
READ  = 0x1,
WRITE = 0x2,
OTHER = 0x4
 }
 
 void foo(FLAG flags)
 {
writeln("Flags = ", flags);
 }
 
 int main(string[] args)
 {
foo(FLAG.READ);
foo(FLAG.READ|FLAG.WRITE);
return 0;
 }
 
 
snip

Here's a Java 5 version with D-like syntax: 

enum Flag {
READ  (0x1), WRITE (0x2), OTHER(0x4)

const int value;
private this (int value) {
this.value = value;
 }
}

 int main(string[] args) {
foo(Flag.READ.value);
foo(Flag.READ.value | Flag.WRITE.value);
return 0;
}

No conversions required. 


Re: Implicit enum conversions are a stupid PITA

2010-03-23 Thread yigal chripun
Nick Sabalausky Wrote:

I'm bringing this over here from a couple separate threads over on D.learn 
(my "D1: Overloading across modules" and bearophile's "Enum equality test").
 
 Background summary:
 
 bearophile:
  I'm looking for D2 rough edges. I've found that this D2 code
  compiles and doesn't assert at runtime:
 
  enum Foo { V1 = 10 }
  void main() {
   assert(Foo.V1 == 10);
  }
 
  But I think enums and integers are not the same type,
  and I don't want to see D code that hard-codes comparisons
  between enum instances and number literals, so I think an
  equal between an enum and an int has to require a cast:
 
  assert(cast(int)(Foo.V1) == 10); // OK
 
 He goes on to mention C++0x's enum class that, smartly, gets rid of that 
 implicit conversion nonsense.
 
 To put it simply, I agree with this even on mere principle. I'm convinced 
 that the current D behavior is a blatant violation of strong-typing and 
 smacks way too much of C's so-called type system.
 
 But here's another reason to get rid it that I, quite coincidentally, 
 stumbled upon right about the same time:
 
 Me:
  In D1, is there any reason I should be getting an error on this?:
 
  // module A:
  enum FooA { fooA };
  void bar(FooA x) {}
 
  // module B:
  import A;
  enum FooB { fooB };
  void bar(FooB x) {}
 
  bar(FooB.fooB); // Error: A.bar conflicts with B.bar (WTF?)
 
 In the resulting discussion (which included a really hackish workaround), it 
was said that this is because of a rule (that I assume exists in D2 as well) 
that basically goes "two functions from different modules are in conflict if 
they have the same name". I assume (and very much hope) that the rule also 
has a qualification "...but only if implicit conversion rules make it 
possible for one to hijack the other".
 
 It was said that this is to prevent a function call from getting hijacked by 
 merely importing a module (or making a change in an imported module). That I 
 can completely agree with. But I couldn't understand why this would cause 
 conflicts involving enums until I thought about implicit enum-to-base-type 
 conversion and came up with this scenario:
 
 // Module Foo:
 enum Foo { foo }
 
 // module A:
 import Foo;
 void bar(Foo x){}
 
 // module B version 1:
 import Foo; // Note: A is not imported yet
 void bar(int x){}
 bar(Foo.foo); // Stupid crap that should never be allowed in the first place
 
 // module B version 2:
 import Foo;
 import A; // - This line added
 void bar(int x){}
 bar(Foo.foo); // Now that conflict error *cough* helps.
 
 So thanks to the useless and dangerous ability to implicitly convert an enum 
 to its base type, we can't have certain perfectly sensible cross-module 
 overloads.
 
 Although, frankly, I *still* don't see why bar(SomeEnum) and 
 bar(SomeOtherEnum) should ever be in conflict (unless that's only D1, or 
 if implicit base-type-to-enum conversions are allowed (which would make 
 things even worse)).
 
 

This also interacts with the crude hack of "this enum is actually a constant". 
If you remove the implicit casts, then how would you be able to do:
void foo(int p); 
enum { bar = 4 }; // don't remember the exact syntax here
foo(bar); // compile-error?!

I feel that enum needs to be re-designed. I think that C-style "enums are 
numbers" designs are *bad*, *wrong* designs that expose internal implementation, 
and the only valid design is that of Java 5.

e.g.
enum Color {blue, green}
Color c = Color.blue;
c++; // WTF?  should NOT compile

A C style enum with values assigned is *not* an enumeration but rather a set of 
meaningful integral values and should be represented as such.

This was brought up many many times in the NG before and, based on past 
occurrences, will most likely never change.


Re: Implicit enum conversions are a stupid PITA

2010-03-23 Thread yigal chripun
yigal chripun Wrote:

 A C style enum with values assigned is *not* an enumeration but rather a set 
 of meaningful integral values and should be represented as such.
 

The above isn't accurate. I'll re-phrase:
The values assigned to the members of the enums are just properties of the 
members; they do not define their identity. 
void bar(int);
bar(Color.Red.rgb); // no-problem
bar(Color.Red); // compile-error


Re: Implicit enum conversions are a stupid PITA

2010-03-23 Thread yigal chripun
bearophile Wrote:

 yigal chripun:
  This was brought up many many times in the NG before and based on past 
occurrences will most likely never change.
 
 If I see some semantic holes I'd like to see them filled/fixed, when 
 possible. Keeping the muzzle doesn't improve the situation :-)
 
 Bye,
 bearophile

I agree with you about the gaping semantic hole. All I'm saying is that, after 
bringing this up for discussion so many times before, I've lost hope that this 
design choice will ever be re-considered. 


Re: Static attributes immutability, static attributes seen from instances

2010-03-08 Thread yigal chripun
Nick Sabalausky Wrote:

 Nick Sabalausky a...@a.a wrote in message 
 news:hmsqdk$9u...@digitalmars.com...
 
  bearophile bearophileh...@lycos.com wrote in message 
  news:hmrtbk$1ao...@digitalmars.com...
 
  A bit later in the discussion div0 and Pelle M. have said/suggested that 
  accessing static vars through an instance can be a bad thing, and it's 
  better to allow the programmer to access them only through the 
  class/struct name.
 
  Bye,
  bearophile
 
  I've always felt that the ability to access static members through an 
  instance was just a bad idea in general, and this seems to add another 
  reason not to allow it.
 
 
 The one possible exception I can think of (and I'm not even sure if it's 
 applicable to D or not) is if you're passed an instance of something and 
 want to call a static member of it polymorphically. Without polymorphism you 
 can just do typeof(instance).staticFunc(), but I'm not sure offhand 
 whether or not there's a way to do that polymorphically (or if static 
 members can even be polymorphic).
 
 However, I think that if people are calling static members through instances 
 instead of types just to make the call polymorphic, then I'd consider that 
less of a reason to allow instance.staticMember and more of a reason to 
have some sort of polymorphic runtime equivalent to typeof() (Do we 
currently have such a thing that can suffice?). 
 
 

In dynamic languages like Ruby, instances carry a pointer to their class 
instance which contains the static variables.

say we have: 

class Foo { 
    static int bar = 42;
    // ...
}
foo = new Foo();

foo.bar is resolved at runtime as e.g. foo.Class.bar, where Class is the 
singleton instance that represents the Foo class itself.
If foo is const, then D-style transitivity would mean that bar must be const as 
well. Regarding immutability, this is impossible, since the Foo singleton is 
shared between all (mutable and immutable) instances. In this case foo mustn't 
contain a Class member at all, and must have no access to its data, in order to 
preserve transitivity.

D has a different implementation, but I think the above semantics are what 
people expect, and the implementation differences shouldn't affect the 
semantics; they should be encapsulated. 

IMO, the entire const design is backwards and ties together two completely 
separate concerns (immutability for concurrency and const for interface 
definitions), but there's zero chance any of it will ever be changed. 



Re: Holes in structs and opEquals

2010-03-08 Thread yigal chripun
Walter Bright Wrote:

 yigal chripun wrote:
  The compiler knows at compile-time what variables are initialized with 
  =void. The compiler than can add a compilation error when such a struct 
  variable is used in an equals expression. 
  this doesn't cover use of malloc() which must be the user's responsebility. 
  
  e.g.
   S s1=void,s2=void;
   s1.s=0; s1.d=0;
   s2.s=0; s2.d=0;
   assert(s1 == s2); // <- this line should not compile
  
 
 It can, but I don't agree that it should. For an =void initialization, 
 it's the user's responsibility to initialize it properly. The use of 
 =void implies the user knows what he's doing with it, and will take care 
 to initialize the 'holes' as necessary.
 
 Trying to disable == for such structs is a losing battle, anyway, as the 
 compiler could only detect the most obvious cases. Pass a reference to 
 it to a function, store it in a data structure, etc., and all that goes 
 away.

Ok, that sounds reasonable if you want to keep the compiler simple and fast.
How about safe mode though? Maybe it's worth adding this check only as part of 
safe mode, where there are fewer cases anyway, since malloc() isn't safe and 
there are no pointers. Is it allowed to use =void in safe mode at all?

Another question I have: how would a user initialize the holes, and doesn't 
that negate the benefits of =void as an optimization? 



Re: Holes in structs and opEquals

2010-03-07 Thread yigal chripun
Walter Bright Wrote:

 Fawzi Mohamed wrote:
  one could argue that the unsafe operation is memset.
 
 That's right.
 
 
  The compiler always initializes a struct, so that what you describe 
  should never happen in a safe program.
 
 Right.
 
 
  Still as you say the following example that might indeed considered a bug:
  
  S s1=void,s2=void;
  s1.s=0; s1.d=0;
  s2.s=0; s2.d=0;
  assert(s1 == s2);
  
  here the assert might fail depending on the previous content of the memory.
  I am not sure of what is the best solution, I am not sure that defining 
  a special comparison operation by default for each struct is the correct 
  solution (it can be quite some bloat), please note that a user defined 
  comparison function will not have these problems.
 
 No, it's not a bug in the language, it's a bug in the user code. Using 
 =void is an advanced feature, and it requires the user to know what he's 
 doing with it. That's why it isn't allowed in safe mode.
 
 
  Still I agree that traking down a bug due to this might be very ugly...
 
 The idea with =void initializations is that they are findable using 
 grep, and so can be singled out for special attention when there is a 
 problem.

The compiler knows at compile time which variables are initialized with =void. 
The compiler can then emit a compilation error when such a struct variable is 
used in an equality expression. 
This doesn't cover the use of malloc(), which must remain the user's responsibility. 

e.g.
 S s1 = void, s2 = void;
 s1.s = 0; s1.d = 0;
 s2.s = 0; s2.d = 0;
 assert(s1 == s2); // <- this line should not compile



Re: Whither Tango?

2010-02-20 Thread Yigal Chripun

On 20/02/2010 05:03, Justin Johansson wrote:

Nick Sabalausky wrote:

dave eveloper ta...@land.net wrote in message
news:hlm402$1mr...@digitalmars.com...

Ezneh Wrote:


So, it is not better to find a compromise between these libraries ?
Why they have to be two libraries rather than one which was
designed by larsivi, Walter Bright and Andrei Alexandrescu ?

I haven't seen larsivi around lately. Is it possible that there's a
communication problem? Perhaps a personality mismatch?

Because of silly symbol names like 'retro' I think there's more
reason for someone to not like Phobos. Bearophile also always reminds
us that a proper closure inlining support would make collection
algorithms as fast as the ugly string template hack Phobos. That way
you wouldn't have hard coded parameter symbols like a and b.



Dictionary.com Unabridged, Based on the Random House Dictionary:

retro-

a prefix occurring in loanwords from Latin meaning “backward”
(retrogress); on this model, used in the formation of compound words
(retrorocket).

So can we stop this "retro is a bad name" nonsense now?


Sure, just include a copy of, or link to, an English dictionary
alongside D documentation, together with appropriate annotations.
That's tantamount to what you are saying. imho, use of silly
words like this in the language are a retrograde step.

Cheers

Justin Johansson





Hum, didn't you mean a link to a *Latin* dictionary? ;)


Re: Whither Tango?

2010-02-20 Thread yigal chripun
Andrei Alexandrescu Wrote:

 Michel Fortin wrote:
  On 2010-02-19 09:11:11 -0500, Andrei Alexandrescu 
  seewebsiteforem...@erdani.org said:
  
  If you could provide a list of silly named symbols that could be a 
  dealbreaker for a prospective D user, please let me know. Thanks.
  
  I don't think there are really any 'silly' names (except perhaps iota), 
  it's just that many don't do exactly what you think they do on first 
  reading.
 
 The lengthy post below talks about _one_. If there are many, you 
 should be able to effortlessly list a handful.
 
  For instance, the other day I was working with input ranges and 
  needed a way to take N elements from the range and put them aside to 
  work on them later. So I defined my own function take for that.
  
  To me, take means you take one or more element and they aren't in the 
  range anymore afterwards.
  
  If you look at std.range, you'll find the same function 'take', but it 
  does its job lazily. That's great, except when you don't expect it.
 
 I'd contend that _you_ don't expect it. Others may as well think it's 
 not copying, otherwise it'd be called takeCopy or setAside.
 
 At the end of the day you can't expect to fly blind through an entire 
 library. take could be defined with a number of arguably good 
 semantics. Phobos picked one. (By the way per popular request I reversed 
 the order of arguments such that method notation works.)
 
 
 Andrei

I think Michel raised a good point, and Kenny did provide a list in a different 
post. 

I think that having lists of function names that people here don't like is 
silly and not productive. IMO, the problem is not in the names themselves; 
rather, it's the bigger-picture issue of the lack of a consistent, strict 
naming scheme for Phobos. 

Let's take a look at popular languages that are widely used: they all have 
detailed naming schemes, which make it easier to start using the language 
right away, and that in turn makes the language easier to adopt. 
You _can_ look at, e.g., Ruby code and have a general understanding of what the 
code does without reading the docs. 
The fact of the matter is that programmers do expect to get as much 
information as possible from the code itself before going to the docs. RTFM 
simply won't do. 

You think managers care about the code? You think they want to spend time on 
their programmers reading TDPL? Of course not. They want the job done as 
quickly as possible. D is counterproductive for that ATM. 


Re: Whither Tango?

2010-02-20 Thread yigal chripun
Andrei Alexandrescu Wrote:

 yigal chripun wrote:
  Nick Sabalausky Wrote:
  
  Justin Johansson n...@spam.com wrote in message 
  news:hlop1u$o1...@digitalmars.com...
  Nick Sabalausky wrote:
  Right, that's what I meant. Use a word starting with retro-
  when talking to a english-speaking person, and even if they're
  uneducated, they'll most likely have a good idea what is meant
  by that prefix.
  What about persons with English not as a first language?
  
  I do realize that different native languages can be an issue, but
  at some point a library has to use *some* language, and the
  established standard for phobos just happens to be english. If we
  start banning terms from use in a language or a library on the
  basis of whether a non-native english speaker is likely to know it,
  then I suspect (though I admit that I don't know for certain) you'd
  have to eliminate most of the given language/library because 
  there's no guarantee non-native speakers would know any of it.
  
  For instance, if there were a russian-langauge library, and I tried
  to use it, I wouldn't understand any of the words except nyet and
  da (and I'm not even sure of the correct spellings of those - in
  either roman or cyrillic). And I would be well aware that I
  wouldn't be able to assume I knew what something did without a
  little digging. Of course, I certainly sympathize that this can be
  a pain for non-native-english-speaking programmers, and that it's
  an issue native english speaking programmers like me will probably
   never be able to truly understand, but until we get to some
  hypothetcal point in the future where everyone speaks the same
  language, then, again, at some point there really is no choice but
  to just assume at least some particular language.
  
  Besides, computer terminology is already, at best, just a bunch of
  vague meaphors anyway. When I started programing, it took me all of
  about a minute to learn that string had nothing to do with the
  stuff cloth is made of and stitched together with. And SCSI
  doesn't mean a damn thing at all, even to an english speaker, but I
  still learned it quickly enough. So even if I wasn't familiar with
  retro as anything other than old style, I'm sure I still could
  have gotten used to it very quickly, especially considering that in
  99.99% of contexts it's going to be pretty damn clear that it's not
  being used to refer to bell-bottoms, chome appliances, and
  flock-of-seagulls haircuts.
  
  
  
  This is being silly (and needlessly long). There's no need to collect
  statistics on the level of English of non-native D programmers
  worldwide to decide what name to use for a function.
  
  It's very simple actually: you want to name a function that reverses
  your range and you have several valid names for it, please choose the
  most common word (the first that comes to mind) which in this case is
  (surprise!) - reverse. (or any variation that makes sense in this
  particular case, like reversed)
  
  simple logic, don't you agree? Any human language has more than one
  way to express oneself. The best way to reach a wide (and
  international) audience is to use the most common phrases - don't go
  all academic on me with Latin or Shakespearean words and don't go
  getho on me with misspelled slang. Is that so much to ask for?
 
 There's no reason to get agitated as nobody is trying to push 
 incomprehensible crap on anyone. The problem I was confronted with was:
 
 (a) reverse was already taken;
 
 (b) I found reversed too subtly different from reverse. Besides, it 
 wasn't clear to me that it was descriptive enough - e.g. some people 
 might believe that reversed returns a reversed copy of the range;
 
 (c) I was looking for a short name because I presume the function will 
 be used often;
 
 (d) In my experience names that are slightly odd but evocative tend to 
 stick to memory.
 
 So I chose retro. What exactly seems to be the problem? If half of 
 Phobos' names were weird, I'd say fine, but this discussion latched on 
 poor retro and iota as if posters' lives depended on it. Again: how 
 exactly are these two names preventing you from getting work done?
 
 
 Andrei

You just refuse to get it. It's not the specific retro function that is so 
frustrating to me. 
Any public API (and especially the standard library's) *must* have a 
consistent naming scheme. It *must* prefer clarity over shortness, and it *must* 
be designed such that it leads to easier understanding of code that uses it 
_without_ referring to the manual for every function call. 
Phobos fails on all of the above. 

It's easy to read Java/Python/Ruby/D-with-Tango code and understand most of it 
on first read. This is VERY important, since 95% of the time is spent on 
maintenance of code that was most likely written by someone else. 

Easy maintenance requires clarity, not creativity. 


Re: foreach_reverse is better than ever

2010-02-16 Thread Yigal Chripun

On 14/02/2010 19:18, Andrei Alexandrescu wrote:

Leandro Lucarella wrote:

Michel Fortin, el 14 de febrero a las 07:48 me escribiste:

On 2010-02-14 05:12:41 -0500, Jacob Carlborg d...@me.com said:


It iterates backwards, all the way back to the 50s. I think
reverse is a much better word.

Agree.

My dictionary says: retro: imitative of a style, fashion, or
design from the recent past.

It's an amusing name in the way Andrei likes it, but the meaning
isn't very clear. reverse would be a better name.


This is a pattern in Andrei, which I think it really hurts the language
(the names are very clever and funny, but that shouldn't be the point of
a name, a name should be clear).


At least in this case being funny was not the point. I needed a name
that was (a) short, (b) different from reverse, (c) memorable. It is
understood that other paint colors are available, but please don't
forget to give a little love to the painter. :o) It would be of course
best if names that arguably hurt the language were changed, so please
compile a list.

Andrei


As I've said many times before, if you want D to be commercially 
successful, you need to change your priorities.
The *most* important thing is to have names that are _clear_ and 
_understandable_ to a wide international audience. The *least* important thing 
is shortness.


You might argue as long as you want that C++/D is technically superior 
to Java, but the fact remains that Java is the favorite language in the 
enterprise. One huge factor in this is, of course, the clear naming 
scheme of its stdlib.







Re: foreach_reverse is better than ever

2010-02-16 Thread Yigal Chripun

On 14/02/2010 20:07, Andrei Alexandrescu wrote:

Mike James wrote:

Andrei Alexandrescu Wrote:


Leandro Lucarella wrote:

Michel Fortin, el 14 de febrero a las 07:48 me escribiste:

On 2010-02-14 05:12:41 -0500, Jacob Carlborg d...@me.com said:


It iterates backwards, all the way back to the 50s. I think
reverse is a much better word.

Agree.

My dictionary says: retro: imitative of a style, fashion, or
design from the recent past.

It's an amusing name in the way Andrei likes it, but the meaning
isn't very clear. reverse would be a better name.

This is a pattern in Andrei, which I think it really hurts the language
(the names are very clever and funny, but that shouldn't be the
point of
a name, a name should be clear).

At least in this case being funny was not the point. I needed a name
that was (a) short, (b) different from reverse, (c) memorable. It
is understood that other paint colors are available, but please don't
forget to give a little love to the painter. :o) It would be of
course best if names that arguably hurt the language were changed, so
please compile a list.

Andrei


1. Contrawise
2. Rearward
3. AssBackwards
4. Reorientated
5. Turnedabout
6. Turnedaround
7. Inversified
8. Flipped
9. Refluxed
10. VolteFace

or how about Reverse...

-=mike=-


I meant a list with other cases (aside from this particular one) in
which choices of names were unfortunate.

I thought the following is clear but let me state it: in this particular
case, using reverse is not desirable because the name already exists
as an array property. If we drop the existing feature and choose
reverse for the new feature, code will silently change semantics.


Andrei


What's the change in semantics that you're worried about?
Don't D's built-in arrays conform to the range interface?
I'd expect array.reverse to be the same as retro(array).




Re: foreach_reverse is better than ever

2010-02-16 Thread Yigal Chripun

On 15/02/2010 15:00, Jacob Carlborg wrote:

On 2/14/10 18:18, Andrei Alexandrescu wrote:

Leandro Lucarella wrote:

Michel Fortin, el 14 de febrero a las 07:48 me escribiste:

On 2010-02-14 05:12:41 -0500, Jacob Carlborg d...@me.com said:


It iterates backwards, all the way back to the 50s. I think
reverse is a much better word.

Agree.

My dictionary says: retro: imitative of a style, fashion, or
design from the recent past.

It's an amusing name in the way Andrei likes it, but the meaning
isn't very clear. reverse would be a better name.


This is a pattern in Andrei, which I think it really hurts the language
(the names are very clever and funny, but that shouldn't be the point of
a name, a name should be clear).


At least in this case being funny was not the point. I needed a name
that was (a) short, (b) different from reverse, (c) memorable. It is
understood that other paint colors are available, but please don't
forget to give a little love to the painter. :o) It would be of course
best if names that arguably hurt the language were changed, so please
compile a list.

Andrei


I never understood the reason for that the names need to be short. I
think the most important thing is that the names are clear. Just look at
the C standard library, it's horrible, almost every name is an
abbreviation of some kind.


C was designed in the dark ages, when programmers actually tried to save 
bytes on their extremely limited hardware. That same age brought us the 
tab character, the Y2K bug and FORTRAN.


Re: Tango 0.99.9 Kai released

2010-02-13 Thread Yigal Chripun

On 12/02/2010 22:20, Nick Sabalausky wrote:

Yigal Chripunyigal...@gmail.com  wrote in message
news:hl3j9e$nl...@digitalmars.com...

On 12/02/2010 11:10, Nick Sabalausky wrote:

My 4 y/o laptop that I already upgraded runs faster with Win7 compared to
XP tablet edition it had before.



Really? You know, I've heard a *lot* about Win7 being better than Vista,
with one of those improvements being speed, but this is the first I've seen
*any* direct comparison of Win7 to XP. And I have to say I'm very surprised
to hear that it runs faster...Although...What kind of hardware do you have
in that laptop? Probably 64-bit multi-core, I'm guessing, right? I wouldn't
be totally surprised if something like that does runs faster on Win7, but
with hardware like that it still would have been super-fast anyway - like
getting an extra 10 horsepower out of a porche (And if all the car
dealerships stop selling everything except porches...well, they'd still be
porches, period). And I think I heard somewhere that Win7 required a minimum
of 4GB ram (or was that just Vista?). If so, anythng less than that would
certainly make Win7 run vastly slower than XP, if even at all.




I have no idea what you're talking about.
I have a ThinkPad X41 Tablet (almost 4 years old) which came with Windows XP 
Tablet Edition, which I hated to reboot since it took 10 minutes or so; I 
kept putting it into hibernate instead. I wouldn't even consider putting 
Vista on it because it wouldn't boot at all with that.


Since I installed a fresh Win7 copy on it, it runs much better and boots 
almost immediately. I also read similar reports online from other owners 
of the X41 Tablet.


You can find the spec online, but in short it's a Pentium M with 1.5 GB 
of RAM (I added 1 GB a long time ago to make XP work better). 32-bit, no 
multi-core.


I don't like MS software in general (they do make wicked hardware though 
- best keyboards and mice), but this time they managed to do a decent 
job. Of course Ubuntu would run 5 times faster on similar hardware with 
only 256 MB of RAM. Unfortunately that's not really the best option for a 
tablet PC.


Re: Tango 0.99.9 Kai released

2010-02-12 Thread Yigal Chripun

On 12/02/2010 03:36, Daniel Keep wrote:



Nick Sabalausky wrote:

Yigal Chripunyigal...@gmail.com  wrote in message
news:hl204m$m8...@digitalmars.com...

Starting with Vista, MS exposed the ability to have symlinks and hardlinks
on windows, just run help mklink in a cmd.exe.

In reality NTFS supported this for a long time now (IIRC, since circa
2000) but the problem is that the windows shell/cmd.exe is always late at
providing access to new NTFS features - they're always late by at least
one version of windows so this is why you can't do that on XP even though
the NTFS version that comes with XP does support it.


Oh, so at least in theory, symlinks should still be possible on 2k/XP given
a third-party tool to manage them and avoidance of using them on the
command-line and in batch files?


Given that SysInternals had a tool for doing hard links on 2000+, but no
tool for doing symlinks, I doubt it.

I recall reading something about how symlinks were new to Vista
specifically; not simply a tool to make them, but something changed in
NTFS or the system's support for it.


http://homepage1.nifty.com/emk/symlink.html

I think this provides the ability to have symlinks on Windows XP. I'm 
not 100% sure since it's in Japanese.


Re: Tango 0.99.9 Kai released

2010-02-12 Thread Yigal Chripun

On 12/02/2010 11:10, Nick Sabalausky wrote:

Yigal Chripunyigal...@gmail.com  wrote in message
news:hl33en$2p6...@digitalmars.com...

On 12/02/2010 03:36, Daniel Keep wrote:



Nick Sabalausky wrote:

Yigal Chripunyigal...@gmail.com   wrote in message
news:hl204m$m8...@digitalmars.com...

Starting with Vista, MS exposed the ability to have symlinks and
hardlinks
on windows, just run help mklink in a cmd.exe.

In reality NTFS supported this for a long time now (IIRC, since circa
2000) but the problem is that the windows shell/cmd.exe is always late
at
providing access to new NTFS features - they're always late by at least
one version of windows so this is why you can't do that on XP even
though
the NTFS version that comes with XP does support it.


Oh, so at least in theory, symlinks should still be possible on 2k/XP
given
a third-party tool to manage them and avoidance of using them on the
command-line and in batch files?


Given that SysInternals had a tool for doing hard links on 2000+, but no
tool for doing symlinks, I doubt it.

I recall reading something about how symlinks were new to Vista
specifically; not simply a tool to make them, but something changed in
NTFS or the system's support for it.


http://homepage1.nifty.com/emk/symlink.html

I think this provides the ability to have symlinks on windows XP. I'm not
100% sure since it's in Japanese.


If you run it through google translater, and (attempt to) read through the
Symbolic misconception that Windows NT/2000/XP is available in section, it
sounds like he saying that pre-vista could only do hardlinks and junctions
but that some people (maybe the author?) had been inaccurately calling them
symlinks anyway, thus causing confusion. But of course, that's assuming
that the translation is accurate and that I'm actually interpreting the
translation correctly.


I can't say that I fully understand what that page says, but it seems 
that this utility does provide some sort of symlink support for files.


Anyway, I'm not that interested in support for a decade-old, deprecated 
OS - I upgraded a long time ago and currently use both Vista and Win7. 
I'll be upgrading my Vista machine to Win7 as soon as I get some free time.
My 4-year-old laptop that I already upgraded runs faster with Win7 than 
with the XP Tablet Edition it had before.




Re: Proposal: Dedicated-string-mixin templates/functions

2010-02-06 Thread Yigal Chripun

On 05/02/2010 23:24, Trass3r wrote:

Proposed:
---
mixin template foo1 {
const char[] foo1 = "int a;";
}
mixin char[] foo2() {
return "int b;";
}
foo1!();
foo2();
---



Well, it's a little bit indistinctive, hard to tell if it's a normal
function call or a mixin without e.g. using a mixin prefix for the
function name (which is nothing better than it is now)
But an advantage would be that these functions could be omitted in the
final executable since they are only used at compile-time.


IMO, this is a bad idea.
The most important thing we should take from Nemerle regarding this is 
the much better compilation model, not just the syntax. The syntax 
idea itself is iffy at best, especially in the D version.


To contrast with the Nemerle solution:
the function foo2 above would be put in a separate file and would be 
compiled *once* into a lib.
Then, in a separate phase, this lib can be loaded by the compiler and 
used in the client code.
Also, in Nemerle, foo2 is a regular function, which means that, unlike 
in D, it isn't restricted compared to other functions and can, for 
example, call stdlib functions like the equivalent of writef (no need 
for special pragma(msg, ...) constructs).




Re: TDPL a bad idea?

2010-02-06 Thread Yigal Chripun

On 06/02/2010 05:11, Walter Bright wrote:

BCS wrote:

If D were to quit providing a NNTP interface, I'd loose interest in
participating in these discussions. Heck, (HINT, HINT, HINT) the fact
that Tango has a forum rather than a news group is half or more of the
reason I don't use it.


I love the news interface, too, and see no reason to give it up. But the
web forums have their advantages, too. That's why I'd like to have a
system that is accessible from both. Post on the web forum, and it is
also posted to NNTP, and vice versa.


Walter, please take a look at FUDForum. It's a web forum with NNTP 
support built in.
All you need to do is add a cron job and it'll keep the forum 
synchronized with NNTP: the cron job imports from NNTP into the forum, 
and the forum posts to NNTP on behalf of the forum users.


I've tried it on my PC and it works great. I imported everything from 
news.digitalmars.com, and when I posted to my local forum, my message 
immediately showed up on the NG with my e-mail.


Also, I've found a simple NNTP server written in Python that has modular 
back-end support, so it can be set up to provide a bi-directional NNTP 
interface for various web forums.





Re: TDPL a bad idea?

2010-02-06 Thread Yigal Chripun

On 06/02/2010 15:23, Lutger wrote:

On 02/06/2010 01:58 PM, Yigal Chripun wrote:
...

Also, I've found a simple NNTP server written in python that has modular
back-end support so it can be set-up to provide a bi-directional NNTP
interface for various web forums.




What is the name / link?

Thanks


Papercut -  http://pessoal.org/papercut/




Re: TDPL a bad idea?

2010-02-06 Thread Yigal Chripun

On 06/02/2010 23:42, Walter Bright wrote:

Yigal Chripun wrote:

Walter, Please take a look at FUDForum.


I did, thanks for the reference. I think reddit blows it away for user
interface. Fudforum has the usual problem with web forums of using too
much vertical space, meaning you have a hard time keeping track of where
you are in a thread.


Did you try switching to the tree view? It looks almost like reddit, IMO.
Also, you can't really evaluate a web forum just by looking at one public 
installation. Did you try installing a local copy and experimenting with it?
All modern web forums are completely customizable via template systems 
and themes, but you need to be an admin to see that. Basically, you can 
change the entire UI by changing the template.




Re: TDPL a bad idea?

2010-02-03 Thread Yigal Chripun

On 03/02/2010 09:19, Lutger wrote:

On 02/03/2010 02:42 AM, Walter Bright wrote:

Yigal Chripun wrote:

I've thought about building such a system for these forums many times.
Registration would not be required to post, but registering would
enable
features like voting on posts, establishing a profile, preferences,
etc.


That sounds awesome. Another useful feature would be storing session
info in the profile such that if I read a post at work the post will
be marked as such when I use a different computer/news-reader like my
home PC.


Yup. What I hate about reddit/slashdot/ycombinator is there's no way to
mark ones I've read as read. On a long thread, it's really hard to see
if there's anything new.



wouldn't it be easier to just use web forums (there are many existing
system with all the bells and whistles) and write a news-gateway for
it than to implement all the features for the current news-server?
it'll also fix the currently broken web interface for the NG.


They all suck. Sorry.

Most use far too much vertical space, spreading the thread out over
multiple pages, or don't indent a threaded view. And *none* of them have
the ability to mark what you've read.


I know that at least vBulletin and phpBB can do mark-as-read, it's just
that not everybody uses it. Most of the more 'advanced' forum software
like this however, is stacked with community features and geared towards
markup heavy posting. They would require extensive hacking to adapt to a
more efficient system.

vBulletin also has a (sucky) threaded view btw.




That's the beauty of the proposal: you don't have to use the web forum 
interface. You'll continue using your favorite news reader, which will 
use the forum back-end to store the messages. All the disadvantages 
both you and Walter mention are in the web forums' UI, not their back-ends.


Also, I found http://pessoal.org/papercut/ which implements an NNTP 
server gateway to phpBB and other back-ends. It's written in Python and 
would be easy to enhance.


Also, I want to emphasize that you can also post through that gateway; 
it is not read-only.




Re: TDPL a bad idea?

2010-02-02 Thread Yigal Chripun

On 02/02/2010 21:09, dsimcha wrote:

== Quote from BCS (n...@anon.com)'s article

Hello Rainer,

BCS wrote:


Anything a group does to it's self is not censorship. Censorship is
where someone from the outside imposes controls.


By that definition, there is no censorship in China, because it's
something the group (i.e. China) does to itself.


Group = citizens of china
controller = government of china
for the case in question (this NG)
group = people posting on NG
controller = people in NG wanting someone banned.
I see a difference


By that logic censorship would be ok in a democracy then.


Absolutely true.
A case in point is my country, Israel: we have military censorship, where 
any security-related info does in fact need to be approved by the army 
before it is published. Unlike China, it is OK since it is accepted by 
the nation (we are a democracy after all), and we all realize that for a 
country surrounded by enemies this is an acceptable approach to 
protecting ourselves.
In fact, the army censor has very little work to do, mainly with regard 
to the media. We all serve in the army and understand the importance of 
this issue, so there's little need for an outside censorship police.


Based on my personal experience, this self-imposed censorship is 
misunderstood by other people, which shows in the way the foreign media 
misjudge us on many occasions.


To conclude, it is perfectly fine to have self-imposed censorship.


Re: TDPL a bad idea?

2010-02-02 Thread Yigal Chripun

On 02/02/2010 21:47, retard wrote:

Tue, 02 Feb 2010 06:20:19 -0500, Bane wrote:



Except that you could argue that the government is censoring it for the
people, thereby making it an outside force imposing control on the
inside. Merriam-Webster's online definition would tend to go with the
whole outside force idea:
http://www.merriam-webster.com/dictionary/censor . Generally speaking,
censorship refers to one group cutting out or blocking material from
coming into contact with another group, but you might be able to argue
that it doesn't _have_ to be an outside force. Still, in any kind of
normal use, it would be.

- Jonathan M Davis


Legal/moral mumbo jumbo. There are group with resources to provide/deny
something to other groups, and there are those without that power.
Reason for first to do it at first place? Same why dog licks his ass -
because he can.

So if admin of his mailing list can exercise his power to make it more
useful to majority of readers on expense of few (troublesome)
individuals, the better. Its not like anyone is going to gulag if placed
on ban list, for fucks sake.


At least in this newsgroup it's easy to get into peoples' killfile. Just
disagree with your beloved deities, Andrei and W. A good way to piss them
off is to mention dmd's broken support for tuples or .stringof, criticize
the featuritis and language inconsistency, support Tango, or know
something about functional languages.


Walter is a very reasonable person to talk with whether you agree or 
disagree with his point of view. He never gets angry at anyone and he'll 
have a discussion even with the worst troll if he's got a tiny potential 
of an interesting point to make in the discussion.

That's my experience at least.
He wouldn't even ban superdan, which frankly I would have, had I been in 
Walter's shoes.


Re: TDPL a bad idea?

2010-02-02 Thread Yigal Chripun

On 02/02/2010 23:05, Jeff Nowakowski wrote:

BCS wrote:


Group = citizens of china
controller = government of china

for the case in question (this NG)

group = people posting on NG
controller = people in NG wanting someone banned.

I see a difference


The government of China are Chinese people. I see no difference. Once
you create a controller class in the newsgroup, they become the
government.


As others tried to explain regarding China - that's not the same thing.
Regarding this newsgroup and online communities in general - have you 
ever heard of the Debian project? Nothing prevents us from establishing 
similar mechanisms such that all active and registered posters in this 
group will have a say regarding policies and the policing of those 
policies. And since this NG is on Walter's servers and belongs to him, 
nothing prevents doing the above on different community servers if 
Walter disagrees.


I'm not saying we should do this, btw. I'm perfectly fine with the 
current scheme of things where I need to filter superdan in my own 
news-reader instead of having him banned from the NG. other people might 
find his posts amusing and they have the right to read them.


IMO, we should have a registration system for regular people, not for 
censoring purposes but for keeping track.
There are many posts by different people who use the same name, and it 
seems confusing and unproductive to me. People don't have to register 
with their real names if they don't want to, but at least I could tell 
the difference between "john" and "john1" when I'm replying to john.
Also, regular folks can ensure by registering that no one else can reply 
to posts in their names.




Re: TDPL a bad idea?

2010-02-02 Thread Yigal Chripun

On 03/02/2010 00:41, Walter Bright wrote:

Yigal Chripun wrote:

He wouldn't even ban superdan, which frankly I would had i been in
Walter's shoes.


superdan was harmless. I enjoyed his rants, and underneath it he did
know what he was talking about.


As I said before, you must be a much more tolerant person than I am :)

What bothered me the most about his language was not the fact that it 
was insulting but rather that it reduced readability, which is even more 
of a problem for non-native speakers (which I am).
I mean, it isn't that hard to use the correct spelling (even in curse 
words) so that others can understand you. Using bad spelling 
intentionally like that is plain inconsiderate.


This especially irks me every time I see a post that boils down to 
demeaning a non-native English speaker for using "your" instead of 
"you're", or nit-picking on tiny differences in the meaning of a word.


Re: TDPL a bad idea?

2010-02-02 Thread Yigal Chripun

On 03/02/2010 00:44, Walter Bright wrote:

Yigal Chripun wrote:

IMO, we should have a registration system for regular people, not for
censoring purposes but for keeping track.
there are many posts by different people that call themselves with the
same name and it seems confusing and unproductive to me. people don't
have to register with their real names if they don't want to but at
least I could tell the difference between john and john1 when I'm
answering to john.
Also, regular folks can ensure by registering that no-one else can
reply to posts in their names.



I've thought about building such a system for these forums many times.
Registration would not be required to post, but registering would enable
features like voting on posts, establishing a profile, preferences, etc.


That sounds awesome. Another useful feature would be storing session 
info in the profile such that if I read a post at work the post will be 
marked as such when I use a different computer/news-reader like my home PC.


wouldn't it be easier to just use web forums (there are many existing 
systems with all the bells and whistles) and write a news gateway for 
them than to implement all those features for the current news server? 
It would also fix the currently broken web interface for the NG.


Re: TDPL a bad idea?

2010-02-01 Thread Yigal Chripun

On 01/02/2010 01:56, BCS wrote:

Hello Bane,


Lars T. Kyllingstad Wrote:


When TDPL is published D2 will be frozen. That's the whole point.

-Lars


Aha! What about... D3 ? :)



TDPL 2e

And FWIW, I'm in the lets kill trees camp.

p.s. Why doesn't anyone ever bring up the power requirements for reading
digital docs? Making a book is a one-time investment; reading a file
requires continuous power.

--

IXOYE




Don't go the power requirements route. This will just bring endless 
discussion:

1) what about green power - like using solar energy?
2) what about using recycled paper for books?
3) what about the pollution caused by manufacturing the PC and batteries 
if it's a laptop?

4) what about the pollution caused by manufacturing books?
...

Personally, I prefer paper for stuff that's meant for long-term use and 
digital for one-offs. Newspapers are a prime example of what not to do - 
either you pollute by printing daily on new paper or you provide a 
crappy experience with recycled paper. This is IMO a prime example where 
digital is better. YMMV


Re: Google's Go Exceptions

2010-01-31 Thread Yigal Chripun

On 27/01/2010 02:57, Justin Johansson wrote:

Ary Borenszweig wrote:

Walter Bright wrote:

Justin Johansson wrote:

(1) For some reason (possibly valid only in an historic context), I
have this great aversion to throwing exceptions from inside C++
constructors. From memory, I once threw an exception from inside a
constructor
with an early C++ compiler and wound up with stack corruption or
something like that, and consequently I developed the practice of
forever more avoiding throwing from inside a C++ constructor.


I'm a believer in the methodology that a constructor should be
trivial in that it cannot fail (i.e. cannot throw). I'm probably in
the minority with that view, but you shouldn't feel like you're doing
the wrong thing when you stick to such a style.


auto x = new BigInt(someString);

How do you implement BigInt's constructor without being able to throw
an exception? Or would you do it like:

auto x = BigInt.fromString(someString);

to be able to throw? (just to obey the no throw in constructors...
but that's not as trivial as the previous form)


A factory method is the way to go. Different languages give you
different means for achieving this design pattern but nevertheless
all such means make for the better factoring of code.

In C++ there are three means :-

(1) Use of static class member, so your example would look like this:

BigInt x = BigInt::fromString( someString);

(2) Use of () function call operator overload on a factory class
so your example would now look like this

BigIntFactory bigIntFactory; // may be statically declared
BigInt x = bigIntFactory( someString);

(3) Global function, which I won't discuss any futher for obvious reasons.

In D, similar to C++, though function call () operator overload is
effected in much cleaner fashion with D's absolutely wonderful
static opCall. So your example would look something like this
(as said earlier I haven't done D for 6 months so pls forgive
any error in detail) :

class BigInt
{
    static BigInt opCall(string someString)
    {
        if (!validate(someString))
            throw someError;

        // extract data for the BigInt instance somehow
        // from the string .. maybe tied into the validate function

        byte[] bigIntData = ...

        return new BigInt(bigIntData);
    }

    this(byte[] bigIntData)
    {
        this.bigIntData = bigIntData;
    }

    private byte[] bigIntData;

    // other BigInt methods ...
}

Now in D,

BigInt x = BigInt( someString);



In Java, well, let's not discuss that here. :-)

In Scala you have companion classes that go hand-in-hand with
the regular class. Scala uses companion classes to reduce the
noise that the static class members introduce in other languages.

(Example anybody?)


Summary for D:

It really isn't that much work to use D's static opCall() to
good effect and, IMHO, complex designs do end up a lot cleaner.
As they say, necessity is the mother of invention. It seems to
me that both Scala and D have been driven by necessity in the
design of companion classes and static opCall respectively.

Cheers
Justin Johansson


Factories are a hack to overcome limitations of the language, mainly the 
fact that constructors aren't virtual.
The above solution(s) have two main drawbacks: testability and 
multi-threading will both be affected.
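To make the "constructors aren't virtual" point concrete, here is a hedged sketch (all names hypothetical) of why creation has to be routed through a factory when the concrete type is only known at runtime:

```d
interface Shape { double area(); }

class Circle : Shape
{
    double r;
    this(double r) { this.r = r; }
    double area() { return 3.14159 * r * r; }
}

class Square : Shape
{
    double s;
    this(double s) { this.s = s; }
    double area() { return s * s; }
}

// `new` cannot dispatch on a value known only at runtime, so creation
// is routed through a factory function instead of a "virtual constructor".
Shape makeShape(string kind, double size)
{
    switch (kind)
    {
        case "circle": return new Circle(size);
        case "square": return new Square(size);
        default:       throw new Exception("unknown shape: " ~ kind);
    }
}

void main()
{
    auto s = makeShape("square", 3.0);
    assert(s.area() == 9.0);
}
```

The factory is the one place that knows the mapping from runtime value to concrete class, which is exactly the indirection a virtual constructor would make unnecessary.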







Re: dmd warning request: warn for bitwise OR in conditional

2010-01-30 Thread Yigal Chripun

On 23/01/2010 20:10, Nick Sabalausky wrote:

Yigal Chripunyigal...@gmail.com  wrote in message
news:hjek8e$4j...@digitalmars.com...


uint a, b; // init to whatever
bool c, d; // ditto

auto r1 = a AND b; // a & b
auto r2 = c AND d; // c && d
...
AND stands for whatever *single* syntax is chosen for this.



Yuck, that amounts to language-enforced operator overloading abuse, just
like the common mis-design of overloading '+' to mean both 'add' and
'concat'.




No operator was abused during the making of this post...

unlike the string concat case, both the bit ops and the bool ops have 
the exact same semantics (OR, AND, NOT) and the only difference is the 
scale. This is already represented by the type system and there is no 
need to repeat yourself a-la Java:

Foo foo = new Foo(); // is this really a Foo?

in the same spirit of things, no-one argues for a different addition op 
for each integral type:

int a = 2 + 4;
long b = 200 ++ 4000; // LONG addition

it ain't assembly language.

Also, it prevents common bugs and makes for more readable code. In the 
same vein, I'd be willing to remove other shortcuts that are common 
causes of bugs, like allowing assignment inside an if condition and not 
requiring an explicit check in the if condition:

if (foo is null) instead of if (foo).
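The bug class being discussed is easy to reproduce in a few lines (the flag values here are made up for illustration):

```d
import std.stdio;

void main()
{
    uint flag = 0x1;
    uint flags = 0x2;

    // Intended: test whether the flag bit is set.
    if (flags & flag)
        writeln("flag set");   // not reached: 0x2 & 0x1 == 0

    // Typo: bitwise OR instead of AND. 0x2 | 0x1 == 0x3, which is
    // non-zero, so this branch is taken no matter what `flags` holds.
    if (flags | flag)
        writeln("oops");
}
```

With a single type-driven operator (or a compiler warning), the always-true `|` condition could be rejected at compile time.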

Last thing: stop with the moronic "oh my god, I need to type a few more 
characters" attitude. (Yes, bearophile, that's you.)
FACT - code is read 1000 times more often than it's written. Readability 
IS important. No, xfoo is NOT a legit name for a function; call it 
lazyFoo if you want to emphasize its laziness. Are you still trying to 
save 3 bytes in the age of cheap terabyte HDDs?
In the same spirit, stop removing f*cking vowels from words. You ain't 
coding in Hebrew.


Re: dmd warning request: warn for bitwise OR in conditional

2010-01-23 Thread Yigal Chripun

On 22/01/2010 09:59, bearophile wrote:

Ali:

We've been bitten by the following bug recently in C code:

uint flag = 0x1;
uint flags;

if (flags | flag) { dout.writefln("oops"); }

The programmer intended &. It is (almost?) always an error to use |
in a conditional.
in a conditional.


Why do you think it's almost always an error?

I have seen more than one time a related bug in C code (once written
by me and other times written by other people): if (foo & bar) {...

instead of: if (foo && bar) {...

To avoid this kind of bug you can disallow integers in conditionals
(requiring something like a ! or == 0 to turn an integral value into a
boolean) as Java does (and partially Pascal), or you can remove the && 
and || from the language and replace them with and and or, so it becomes
easy to tell them apart from the bitwise operators. I like the second
way.

Bye, bearophile


Instead of renaming the boolean ops they should simply be removed. The 
type system gives you all the required information to know what to do 
without needlessly duplicating the syntax:


uint a, b; // init to whatever
bool c, d; // ditto

auto r1 = a AND b; // a & b
auto r2 = c AND d; // c && d
...
AND stands for whatever *single* syntax is chosen for this.

the compiler will implement the boolean version with lazy evaluation and 
the unsigned integral versions (uint, ulong, ...) with eager evaluation.


If someone really wants to use Boolean ops on numbers [s]he could always 
do that explicitly:

cast(bool)myNum AND whatever
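For what it's worth, the type-driven dispatch can already be sketched in library code with today's D overloading; the unified operator syntax itself would of course need language support, and the function name `and` is mine:

```d
import std.traits : isIntegral;

// Eager bitwise AND for integral operands (uint, ulong, ...).
T and(T)(T a, T b) if (isIntegral!T)
{
    return a & b;
}

// Lazy logical AND for booleans: `b` is only evaluated when `a` is true,
// matching the short-circuit behavior of &&.
bool and(lazy bool a, lazy bool b)
{
    return a ? b : false;
}

void main()
{
    uint x = 6, y = 3;
    assert(and(x, y) == 2);        // 0b110 & 0b011 == 0b010

    bool touched = false;
    bool side() { touched = true; return true; }

    assert(!and(false, side()));   // short-circuits
    assert(!touched);              // right operand never evaluated
}
```

Note that `isIntegral` excludes `bool`, so the overload set is unambiguous: the compiler picks the eager or lazy version purely from the static types, which is the whole argument above.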


Re: Does functional programming work?

2010-01-03 Thread yigal chripun
Walter Bright Wrote:

 yigal chripun wrote:
  Have you ever actually used Smalltalk?? I have used it and it's the
  easiest language to use by far, having conditionals as methods of
  Boolean is much better, easier to read and more flexiable.
  
  The beauty of smalltalk is that you can easily add new language
  features in the library with little effort and they do not look
  foreign to the language. in fact, almost all of smalltalk is
  implemented in the library and it only has 5 actual keywords.
 
 
 What's your opinion, then, about why Smalltalk has failed to catch on?

That's a completely separate issue. Success of any product or idea depends on 
many aspects, of which technical superiority is only one. I've used Smalltalk 
and it is most definitely superior to most other languages, including D.

Compare to cars - the most popular and successful design is that of the 
internal combustion engine, yet it's the worst design in technical terms; 
there are more efficient and much cleaner designs.

Compare to OSes - Unix died in the Unix wars and was replaced by a much worse 
system called Windows, which today has over 90% market share. Windows is by 
far the worst OS ever and yet this is the design that won. Technically 
speaking, Linux today is much better and I enjoyed using it, but ultimately my 
main system today is Windows 7, and not for technical reasons. 

Compare to Java - a mediocre language at best yet very popular, and in fact 
I'd prefer to use Java over D too, because the fact that D is a (much) better 
language is a tiny aspect of a programmer's productivity. Java has a lot of 
amazing tools that support it and a lot of freely available libraries, while 
in D we still can't manage to have just one standard lib. 
So while it's fun to play with D, for real stuff I'll prefer Java with Eclipse 
and all the standardized libs that make my life that much simpler. 


Re: Does functional programming work?

2010-01-03 Thread yigal chripun
Andrei Alexandrescu Wrote:

 yigal chripun wrote:
  Walter Bright Wrote:
  
  yigal chripun wrote:
  Have you ever actually used Smalltalk?? I have used it and it's the
  easiest language to use by far, having conditionals as methods of
  Boolean is much better, easier to read and more flexiable.
 
  The beauty of smalltalk is that you can easily add new language
  features in the library with little effort and they do not look
  foreign to the language. in fact, almost all of smalltalk is
  implemented in the library and it only has 5 actual keywords.
 
  What's your opinion, then, about why Smalltalk has failed to catch on?
  
  That's a completely separate issue. Success of any product or idea depends 
  on many aspects of which technical superiority is only one. I've used 
  Smalltalk and it is most definitly superior to most other languages 
  including D.  
  
  Compare to cars - the most popular and successful design is that of the 
  internal combustion engine yet it's the worst design in technical terms, 
  there are more effecient and much cleaner designs.
  
  compare to OSes - Unix died in the Unix wars and was replaced by a much 
  worse system called windows which today has over 90% market share. Windows 
  is by far the worst OS ever and yet this is the design that won. 
  Technically speaking Linux today is much better and I enjoyed using it but 
  ultimatly my main system today is windows 7 and not for technical reasons. 
  
  Compare to Java - a mediocre language at best yet very popular and in fact 
  I'd prefer to use Java over D too beacuse the fact that D is a (much) 
  better langauge is a tiny aspect of productivity of a programmer. Java has 
  a lot of amazing tools that support it and a lot of freely available 
  libraries while in D we still can't have only one standard lib. 
  So while it's fun to play with D, for real stuff I'll prefer Java with 
  eclipse and all the standardized libs that make my life that much more 
  simple. 
 
 You didn't answer the question. What's your opinion about why Smalltalk 
 has failed to catch on?
 
 Andrei

I thought I did. It's for similar reasons as in my other examples. Java was a 
success because it was offered free of charge with a big supportive 
environment - libs, tools, documentation, etc. - and was promoted heavily by 
Sun.
Smalltalk OTOH was sold by a few vendors that didn't know how to promote it 
and build a vibrant community around it. Those vendors didn't supply libs for 
common stuff the industry uses, and don't forget that one of its goals was to 
be an educational system for kids rather than something the industry would 
use. 

The industry rarely adopts products based on technical merits anyway - they 
don't and shouldn't care which language is technically better. They care only 
about the bottom line. 
That means that Java is the best fit for the industry:
1) it has many existing tools and libs that enhance productivity and reduce 
the amount of code that needs to be written in house. 
2) there are many Java programmers, so it's easy to find better-quality 
programmers and pay them less. 
3) the write once, run anywhere promise - obviously saves money.
4) it is standardized yet has many vendors - no need to reinvest when 
switching vendors.

BTW, none of the above apply to D. 



Re: Does functional programming work?

2010-01-02 Thread yigal chripun
Nick Sabalausky Wrote:

 dsimcha dsim...@yahoo.com wrote in message 
 news:hhlsk7$2v0...@digitalmars.com...
  == Quote from Nick Sabalausky (a...@a.a)'s article
  Walter Bright newshou...@digitalmars.com wrote in message
  news:hhgvqk$8c...@digitalmars.com...
   An interesting counterpoint to the usual FP hype:
  
   http://prog21.dadgum.com/55.html
  Didn't read the original article, but the one being linked to is 
  completely
  in line with how I feel about not just FP, but all programming paradigms,
  for example, OO: It's great as long as you don't pull a Java or (worse 
  yet)
  a Smalltalk and try to cram *everything* into the paradigm.
 
  I actually think Smalltalk had the better idea.  Java doesn't support any 
  paradigm
  besides OO well, and neither does Smalltalk.  The difference is that, in
  Smalltalk, at least everything is an object, so you can do pure OO well. 
  Java
  is almost pure OO, but it lack of ints, floats, etc. being objects, 
  combined
  with its lack of support for any paradigm that works well without ints, 
  floats,
  etc. being objects, makes the language feel like a massive kludge, and 
  leads to
  debacles like autoboxing to get around this.
 
  In multiparadigm languages like D, C++ and C#, the lack of ints, floats, 
  etc.
  being objects is less of an issue because, although it's a wart in the OO 
  system,
  noone is forcing you to use the OO system for **everything**.
 
 I certainly agree about Java and multiparadign languages, but I never 
 understood how, for instance, making the if statement an object ever did 
 anything but obfuscate Smalltalk and give people warm fuzzies for being 
 uber-consistent.
 
 

Have you ever actually used Smalltalk?? I have used it and it's the easiest 
language to use by far; having conditionals as methods of Boolean is much 
better, easier to read, and more flexible.  

The beauty of Smalltalk is that you can easily add new language features in 
the library with little effort and they do not look foreign to the language.
In fact, almost all of Smalltalk is implemented in the library and it has only 
5 actual keywords. 
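For readers who haven't seen Smalltalk: conditionals there are messages sent to Boolean objects, with blocks as arguments (`cond ifTrue: [...] ifFalse: [...]`). A rough D approximation, just to show the shape of the idea (the name `ifTrueIfFalse` is mine, and delegates stand in for Smalltalk blocks):

```d
// Emulates Smalltalk's `cond ifTrue: [a] ifFalse: [b]` message send:
// the "blocks" are delegates, and only the chosen one is evaluated.
T ifTrueIfFalse(T)(bool cond,
                   scope T delegate() onTrue,
                   scope T delegate() onFalse)
{
    return cond ? onTrue() : onFalse();
}

void main()
{
    int x = 5;
    auto label = ifTrueIfFalse!string(x > 3,
        delegate string() { return "big"; },
        delegate string() { return "small"; });
    assert(label == "big");
}
```

In Smalltalk this is not a library add-on but the language's only conditional mechanism, which is what makes user-defined control flow look native there.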


Re: What's wrong with D's templates?

2009-12-21 Thread yigal chripun
Lutger Wrote:

 Yigal Chripun wrote:
 
  On 19/12/2009 01:31, Lutger wrote:
  Yigal Chripun wrote:
 
  On 18/12/2009 02:49, Tim Matthews wrote:
  In a reddit reply: The concept of templates in D is exactly the same
  as in C++. There are minor technical differences, syntactic
  differences, but it is essentially the same thing. I think that's
  understandable since Digital Mars had a C++ compiler.
 
 
  
 http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3
 
 
  I have never touched ada but I doubt it is really has that much that
  can't be done in D. I thought most (if not all) the problems with C++
  were absent in D as this summary of the most common ones points out
  http://www.digitalmars.com/d/2.0/templates-revisited.html.
 
  Your thoughts?
 
  I don't know Ada but I do agree with that reddit reply about c++ and D
  templates. D provides a better implementation of the exact same design,
  so it does fix many minor issues (implementation bugs). An example of
  this is the foo<bar<Class>> construct that doesn't work because of the
  >> operator.
  However, using the same design obviously doesn't solve any of the deeper
  design problems and this design has many of those. An example of that is
  that templates are compiled as part of the client code. This forces a
  library writer to provide the source code (which might not be acceptable
  in commercial circumstances) but even more frustrating is the fact that
  template compilation bugs will also happen at the client.
 
  Well yes, but the .NET design restrict the generic type to a specific
  named interface in order to do type checking. You may find this a good
  design choice, but others find it far more frustrating because this is
  exactly what allows for a bit more flexibility in a statically typed
  world. So it is not exactly a problem but rather a trade-off imho.
  
  The .Net implementation isn't perfect of course and has a few issues
  that should be resolved, one of these is the problem with using
  operators. requiring interfaces by itself isn't the problem though. The
  only drawback in this case is verbosity which isn't really a big deal
  for this.
 
 The drawback is not verbosity but lack of structural typing. Suppose some  
 library has code that can be parametrized by IFoo and I have another library 
 with a type that implements IBar, which satisfies IFoo but not explicitly 
 so. Then what? Unless I have totally misunderstood .NET generics, I have to 
 create some proxy object for IBar that implements IFoo just to satisfy the 
 strong type checking of .NET generics. You could make the argument that this 
 'inconvenience' is a good thing, but I do think it is a bit more of a 
 drawback than just increased verbosity.

The way I see it we have three options:

assume we have these definitions:
interface I {...}
class Foo : I {...}
class Bar {...} // structurally compatible to I

template tp (I) {...}

1) .Net nominative typing:
tp!(Foo) // OK
tp!(Bar) //not OK

2) structural typing (similar to Go?)
tp!(Foo) // OK
tp!(Bar) // also OK

3) C++ style templates, where the compatibility check is against the *body* of 
the template.

Of the three above I think option 3 is the worst design and option 2 is my 
favorite. I think that in reality you'll almost always want to define such an 
interface, and I really can't think of any useful use cases for an 
unrestricted template parameter as in C++. 

If you think of templates as functions the compiler executes, the difference 
between the last two options is that option 2 is statically typed vs. option 3 
which is dynamically typed. We all use D because we like static typing and 
there's no reason not to extend this to compile time as well. 
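Using the definitions above, options 1 and 2 can both be expressed in present-day D through template constraints; a hedged sketch (the method `f` and the helper names are mine):

```d
interface I { int f(); }
class Foo : I { int f() { return 1; } }
class Bar   { int f() { return 2; } }  // structurally compatible, no ": I"

// Option 1, nominative: accepts only declared subtypes of I.
int useNominative(T)(T x) if (is(T : I)) { return x.f(); }

// Option 2, structural: accepts any T on which `.f()` compiles.
int useStructural(T)(T x) if (__traits(compiles, T.init.f())) { return x.f(); }

void main()
{
    assert(useNominative(new Foo) == 1);
    assert(useStructural(new Bar) == 2);

    // Bar is structurally fine but nominatively rejected:
    static assert(!__traits(compiles, useNominative(new Bar)));
}
```

Option 3 would be the same templates with the constraints deleted: the body still checks `x.f()`, but the error surfaces only at instantiation, inside the template, at the client.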


Re: What's wrong with D's templates?

2009-12-21 Thread yigal chripun
Don Wrote:
  The way I see it we have three options:
  
  assume we have these definitions:
  interface I {...}
  class Foo : I {...}
  class Bar {...} // structurally compatible to I
  
  template tp (I) {...}
  
  1) .Net nominative typing:
  tp!(Foo) // OK
  tp!(Bar) //not OK
  
  2) structural typing (similllar to Go?)
  tp!(Foo) // OK
  tp!(Bar) // also OK
  
  3) C++ style templates where the compatibility check is against the *body* 
  of the template.
  
  of the three above I think option 3 is the worst design and option 2 is my 
  favorite design. I think that in reality you'll almost always want to 
  define such an interface and I really can't think of any useful use cases 
  for an unrestricted template parameter as in C++. 
 
 You forgot option 4:
 
 4) D2 constrained templates, where the condition is checked inside the 
 template constraint.
 
 This is more powerful than option 2, because:
 
 (1) there are cases where you want MORE constraints than simply an 
 interface; and (2) only a subset of constraints can be expressed as an 
 interface.
 Also a minor point: (3) interfaces don't work for built-in types.
 
 Better still would be to make it impossible to compile a template which 
 made use of a feature not provided through a constraint.
 

I wouldn't give that a separate option number; IMO this is a variation on 
option 2. Regarding your notes:
when you can express the same concept in both ways, using an interface is 
easier to read & understand IMO. What about having a combination of the two 
designs? You define an interface and allow optionally defining additional 
constraints _on_the_interface_ instead of on the template.
I think this complies with your points (1) and (2) and is better since you 
don't need to repeat the constraints at the call site (each template that uses 
that type needs to repeat the constraint).
Even if you factor out the checks into a separate isFoo template, you still 
need to add if (isFoo!(T)) to each template declaration, which really should 
be done by the compiler instead.
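A minimal sketch of the duplication being described, with a hypothetical isFoo check written in today's D syntax:

```d
// A factored-out check: does T provide the operation we need?
enum isFoo(T) = __traits(compiles, T.init.foo());

// The constraint still has to be repeated on every template that relies
// on it -- the duplication complained about above:
void useA(T)(T x) if (isFoo!T) { x.foo(); }
void useB(T)(T x) if (isFoo!T) { x.foo(); }

struct S
{
    void foo() {}
}

void main()
{
    useA(S());
    useB(S());
    static assert(isFoo!S);
    static assert(!isFoo!int);   // int has no .foo(), rejected up front
}
```

The proposal in the paragraph above is that attaching the check once, to an interface, would let the compiler imply `if (isFoo!T)` everywhere that interface is used as a parameter.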

Regarding point (3) - this is orthogonal IMO. Ideally I'd like to see this 
distinction between built-in types and user-defined ones removed; int should 
be treated in the same manner as a user-defined struct. 

I completely agree about not compiling templates that use features not defined 
by constraints. This is in fact the main point I was trying to make in this 
thread. 



Re: What's wrong with D's templates?

2009-12-21 Thread yigal chripun
Rainer Deyke Wrote:

 yigal chripun wrote:
  2) structural typing (similllar to Go?)
  tp!(Foo) // OK
  tp!(Bar) // also OK
  
  3) C++ style templates where the compatibility check is against the
  *body* of the template.
  
  If you think of templates as functions the compiler executes, the
  difference between the last two options is that option 2 is staticly
  typed vs. option 3 which is dynamicaly typed. We all use D because we
  like static typing and there's no reasone to not extend this to
  compile-time as well.
 
 I prefer to think of option 2 as explicitly typed while option 3 uses
 type inference.  Type inference is a good thing.
 
 
 
 -- 
 Rainer Deyke - rain...@eldwood.com

You might prefer that but it's incorrect. This is exactly equivalent to 
calling a Ruby function vs. a D function, only it happens at the compiler's 
run time instead of your app's run time. 
Errors that the compiler statically checks in D will only be caught at run 
time in Ruby. In our case, this means that a user of a template can get 
compilation errors for the template code itself.


Re: What's wrong with D's templates?

2009-12-21 Thread yigal chripun
Don Wrote:

 yigal chripun wrote:
  Don Wrote:
  The way I see it we have three options:
 
  assume we have these definitions:
  interface I {...}
  class Foo : I {...}
  class Bar {...} // structurally compatible to I
 
  template tp (I) {...}
 
  1) .Net nominative typing:
  tp!(Foo) // OK
  tp!(Bar) //not OK
 
  2) structural typing (similllar to Go?)
  tp!(Foo) // OK
  tp!(Bar) // also OK
 
  3) C++ style templates where the compatibility check is against the 
  *body* of the template.
 
  of the three above I think option 3 is the worst design and option 2 is 
  my favorite design. I think that in reality you'll almost always want to 
  define such an interface and I really can't think of any useful use cases 
  for an unrestricted template parameter as in C++. 
  You forgot option 4:
 
  4) D2 constrained templates, where the condition is checked inside the 
  template constraint.
 
  This is more powerful than option 2, because:
 
  (1) there are cases where you want MORE constraints than simply an 
  interface; and (2) only a subset of constraints can be expressed as an 
  interface.
  Also a minor point: (3) interfaces don't work for built-in types.
 
  Better still would be to make it impossible to compile a template which 
  made use of a feature not provided through a constraint.
 
  
  I wouldn't give that a sepoarate option number, IMO this is a variation on 
  option2. regarding your notes:
  when you can express the same concept in both ways, using an interface is 
  esier to read  understand IMO. What about having a combination of the two 
  designs? you define an interface and allow optionally defining additional 
  constraints _on_the_interface_ instead of the template.
  I think this complies with your points (1) and (2) and is better since you 
  don't need to repeat the constraints at the call site (each template that 
  uses that type needs to repeat the constraint).
 
 I don't think interfaces are flexible enough for that.
 EG, how do you express that the type I must have a template function 
 void baz!(X)(X x) ?
 There's more to a type, than just a list of the virtual functions which 
 it supports.

I agree that interfaces don't support this ATM. That's why I suggested adding 
constraints to them. 
e.g.
interface I if (isFoo!(I)) {...}  // one possible syntax
Other approaches could be:
1) add non-virtual functions to interfaces (Andrei once suggested this)
2) add more meta-data with annotations
etc. 

 
  even if you factor out the checks into a separate isFoo template you 
  still need to add to each template declaration if isFoo!(T) which really 
  should be done by the compiler instead.
  
  regarding point(3) - this is orthogonal IMO. Ideally I'd like to see this 
  distinction between builtin type and user defined one removed. int should 
  be treated in the same manner as a user defined struct. 
  
  I completely agree about not compiling templates that use features not 
  defined by constraints. This is in fact the main point I was trying to make 
  in this thread. 
 
 The problem is, I'm not sure that it's feasible in general. At least, 
 it's not obvious how to do it.
 


Re: What's wrong with D's templates?

2009-12-21 Thread Yigal Chripun

On 20/12/2009 03:11, BCS wrote:

Hello Yigal,


On 18/12/2009 17:34, dsimcha wrote:


I think variadics, static if and alias parameters qualify more as a
better design than fixing minor issues.


actually they qualify as - even worse design. duplicating the syntax
like that is butt ugly.



I for one think that it's a better design than C++ has. (Given that 99%
of what they do, C++ was never designed to do at all, you'd be hard
pressed to come up with a worse design without trying to.)

If you can come up with an even better design for compile time stuff,
I'd be interested.


the conflation of user-code and library code.


Could you elaborate on this?




but even more frustrating is the fact that
template compilation bugs will also happen at the client.


Jumping back a bit; which client? The one with the compiler or the end
user?

If the first; removing this puts major limits on what can be done
because you can't do anything unless you can be sure it will work with the
open set of types that could be instantiated, including ones you don't
know about yet. I know some systems like C# require you to define what
you will do to a type at the top and then enforce that. IMHO this is a
non-solution. Without being too silly I think I could come up with a library
that would require a solution to the halting problem in order to check
that the template code can't generate an error with the given constants
and that a given type fits the constraints, both without actually
instantiating the template for the type.

If the second; neither D nor C++ have to worry about that.


.Net generics for example work by creating an instantiation for each
value type (same as c++ templates) and one instantiation for all
reference types since at the binary level all reference types are
simply
addresses.


I don't know how it does it but there has to be more to it in C# because
they allow you to do things to the objects that Object doesn't support.
For that to happen, the objects have to be wrapped or tagged or
something so that the generics code can make foo.MyFunction() work for
different types. If I had to guess, I'd guess it's done via a vtable,
either as a magic interface or as a fat pointer.

Oh, and if the above is garbage because C# can't access totally
independent methods from a generic, then right there is my next argument
against generics.




What you're talking about in the above is meta-programming. Doing 
meta-programming a la C++ templates is IMO like trying to use square 
wheels; it is just wrong.


To answer your questions:
D already has better designed tools for this and they keep improving. 
Don is doing an excellent job in fixing CTFE.
I think D needs to go beyond just constant-folding (CTFE) and allow running 
any function at compile-time in the same manner it's done in Nemerle 
(multi-stage compilation).


This is orthogonal to generics. The limitations you see in C# are *not* 
limitations of its generics but rather of the meta-programming facilities.


Re: What's wrong with D's templates?

2009-12-21 Thread Yigal Chripun

On 21/12/2009 19:53, Walter Bright wrote:

Don wrote:

The problem is, I'm not sure that it's feasible in general. At least,
it's not obvious how to do it.


C++0x Concepts tried to do it in a limited form, and it got so
complicated nobody could figure out how it was supposed to work and it
capsized and sank.

I don't think it's possible in the more general sense.


The C++0x Concepts tried to add two more levels to the type system:
template <typename T> ...
The T parameter would belong to a Concept type, and they also added 
Concept maps which are like Concept interfaces. Add to the mix backward 
compatibility (as always is the case in C++) and of course you'll get a 
huge complicated mess of special cases that no-one can comprehend.


But that doesn't mean the idea itself isn't valid. Perhaps a different 
language with different goals in mind can provide a much simpler, 
non-convoluted implementation and semantics for the same idea?
You've shown in the past that you're willing to break backward 
compatibility in the name of progress and experiment with new ideas. You 
can make decisions that the C++ committee will never approve.


Doesn't that mean that this is at least worth a shot?


Re: What's wrong with D's templates?

2009-12-21 Thread Yigal Chripun

On 22/12/2009 05:22, Walter Bright wrote:

Kevin Bealer wrote:

The performance / impl-hiding conflict is a fundamental problem -- if
the user's compiler can't see the template method definitions, then
it can't optimize them very well. If it can, then the user can too.
Any method of compiling them that preserves enough info for the
compiler to work with will probably be pretty easily and cleanly
byte-code-decompilable.


Absolutely right.

One of the features that C++ exported templates were supposed to provide
was obfuscation of the template bodies so that users couldn't see it. My
contention was that there was essentially no reasonable method to ensure
that.

1. any obfuscation method only has to be cracked by one individual, then
everyone can see through it.

2. if you ship the library for multiple compilers, you only have to
crack the weakest one

3. if you provide the decryption key to the customer, and you must, and
an open source compiler is used, you lose


You can also disassemble binary libs. That's not the point of this 
discussion. The point is having proper encapsulation.


Re: What's wrong with D's templates?

2009-12-21 Thread yigal chripun
Walter Bright Wrote:

 Yigal Chripun wrote:
  But that doesn't mean the idea itself isn't valid. Perhaps a different 
  language with different goals in mind can provide a much simpler, 
  non-convoluted implementation and semantics for the same idea?
  You've shown in the past that you're willing to break backward 
  compatibility in the name of progress and experiment with new ideas. You 
  can make decisions that the C++ committee will never approve.
  
  Doesn't that mean that this is at least worth a shot?
 
 I believe that D's template constraint feature fills the bill, it does 
 everything Concepts purported to do, and more, in a simple and easily 
 explained manner, except check the template body against the constraint.
 
 The latter is, in my not-so-humble opinion, a desirable feature but its 
 desirability is overwhelmed by the payment in complexity and 
 constrictions on the Concepts necessary to make it work.

Could you please expand on the main issues with implementing that 
check? 
I also wonder what's the situation regarding this in other languages - C# has 
constraints IIRC - is the check performed there? 

IMO constraints are neat but they aren't perfect. For one, they need to be 
repeated for each template. It would be very awesome if that could be moved to 
the parameter of the template, so instead of: 

template foo(T) if isRange!T ...
template bar(T) if isRange!T ... 

you could write something like: 
struct Range if ... {}
template foo(r : Range) ...
template bar(r : Range) ...

inside both templates the parameter r satisfies all the constraints of Range. 

does that sound reasonable at all?
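
To make the comparison concrete, here is a sketch of today's repetition with a 
real Phobos predicate, and a comment showing the proposed form (the struct 
Range syntax above is hypothetical, not valid D):

```d
import std.range : isInputRange;

// Today: every template repeats the constraint.
auto first(R)(R r) if (isInputRange!R) { return r.front; }
size_t count(R)(R r) if (isInputRange!R)
{
    size_t n;
    for (; !r.empty; r.popFront()) ++n;
    return n;
}

// Proposed (hypothetical syntax):
//   struct Range if (isInputRange!Range) {}
//   auto first(r : Range)(r) { ... }   // constraint implied by Range
```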


Re: What's wrong with D's templates?

2009-12-19 Thread Yigal Chripun

On 19/12/2009 01:08, retard wrote:

Sat, 19 Dec 2009 00:24:50 +0200, Yigal Chripun wrote:


to retard:
different problems should be solved with different tools. Macros should
be used for meta-programming and generics for type-parameters. they
don't exclude each other. E.g. Nemerle has an awesome macro system yet
it also has .net generics too.
As the saying goes, "when all you got is a hammer everything looks like
a nail", which is a very bad situation to be in. Templates are that
hammer, while a much better approach is to go and buy a toolbox with
appropriate tools for your problem set.


I didn't say anything that contradicts this.


Did you read any arguing in the above? I thought I was agreeing with 
you.. :)


Re: What's wrong with D's templates?

2009-12-19 Thread Yigal Chripun

On 19/12/2009 02:43, bearophile wrote:

Yigal Chripun:


To bearophile: you're mistaken on all counts -


Yes, this happens every day here :-) I am too much ignorant still
about computer science to be able to discuss in this newsgroup in a
good enough way.



didn't mean to sound that harsh. sorry about that.



generics (when properly implemented) will provide the same
performance as templates.


I was talking about a list of current language implementations.



Also, a VM is completely orthogonal to this. Ada ain't VM based, is
it?


Ada doesn't implement generics the way C# currently does.
Currently C# generics need a VM.


they were implemented for a VM based system but nothing in the *design* 
itself inherently requires a VM. You keep talking about implementation 
details while I try to discuss the design aspects and trade-offs. It's 
obvious that we can't just copy-paste the .NET implementation to D.




Macros should be used for meta-programming and generics for
type-parameters.


This can be true, but there's a lot of design to do to implement that
well. In Go there are no generics/templates nor macros. So generics
and macros can be added, as you say. In D2 there are templates and no
macros, so in D3 macros may be added but how can you design a D3
language where templates are restricted enough to become generics?
Unless D3 breaks a lot of backwards compatibility with D2 you will
end up in D3 with templates + macros + language conventions that tell you
not to use templates when macros can be used. Is this good enough?

Bye, bearophile


I don't know about D3, but even now in D2 there is confusion as to what 
should be implemented with templates and what with CTFE.




Re: What's wrong with D's templates?

2009-12-18 Thread Yigal Chripun

On 18/12/2009 02:49, Tim Matthews wrote:

In a reddit reply: "The concept of templates in D is exactly the same as
in C++. There are minor technical differences, syntactic differences,
but it is essentially the same thing. I think that's understandable
since Digital Mars had a C++ compiler."

http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3


I have never touched Ada but I doubt it really has that much that
can't be done in D. I thought most (if not all) the problems with C++
were absent in D as this summary of the most common ones points out:
http://www.digitalmars.com/d/2.0/templates-revisited.html.

Your thoughts?


I don't know Ada but I do agree with that reddit reply about c++ and D 
templates. D provides a better implementation of the exact same design, 
so it does fix many minor issues (implementation bugs). An example of 
this is the foo<bar<Class>> construct that doesn't work because of the 
>> operator.
However, using the same design obviously doesn't solve any of the deeper 
design problems and this design has many of those. An example of that is 
that templates are compiled as part of the client code. This forces a 
library writer to provide the source code (which might not be acceptable 
in commercial circumstances) but even more frustrating is the fact that 
template compilation bugs will also happen at the client.


There's a whole range of designs for this and related issues and IMO the 
C++ design is by far the worst of them all. not to mention the fact that 
it isn't an orthogonal design (like many other features in c++). I'd 
much prefer a true generics design to be separated from compile-time 
execution of code with e.g. CTFE or AST macros, or other designs.


Re: What's wrong with D's templates?

2009-12-18 Thread Yigal Chripun

On 18/12/2009 16:02, retard wrote:

Fri, 18 Dec 2009 08:53:33 -0500, bearophile wrote:


Yigal Chripun:

There's a whole range of designs for this and related issues and IMO
the C++ design is by far the worst of them all.


My creativity is probably limited, so I think that while C++/D templates
have some well known problems, they are better than the strategies used
by Java, C#, Ada, Haskell, Objective-C, Scala, and Delphi to define generic
code. They produce efficient code when you don't have a virtual machine
at run time, and allow writing STL-like algorithms. If you need less
performance and/or you accept worse algorithms/collections then I agree
there are designs simpler to use and cleaner than C++/D templates. If
you are able to design something better I'd like to know about your
ideas.


Templates are good for parameterizing algorithms and data structures.
They begin to have problems when they are used extensively for meta-
programming. For instance the lack of lazy evaluation in the type world
forces the language to either have 'static if' or you need to add
indirection via dummy members. The language is basically purely
functional, but it's several orders of magnitude more verbose than say
Haskell.

CTFE solves some of the problems, but as a result the system becomes
really unorthogonal. Macros on the other hand solve the problem of clean
meta-programming but are not the best way to describe generic types.

Java, C#, Scala, Haskell et al only support types as template parameters.
In addition Java erases this type info on runtime so you get even worse
performance than on C#/.NET.


To bearophile:
you're mistaken on all counts - generics (when properly implemented) 
will provide the same performance as templates. Also, a VM is completely 
orthogonal to this. Ada ain't VM based, is it?


to retard:
different problems should be solved with different tools. Macros should 
be used for meta-programming and generics for type-parameters. they 
don't exclude each other. E.g. Nemerle has an awesome macro system yet 
it also has .net generics too.
As the saying goes, "when all you got is a hammer everything looks like 
a nail", which is a very bad situation to be in.
Templates are that hammer while a much better approach is to go and buy a 
toolbox with appropriate tools for your problem set.


Re: What's wrong with D's templates?

2009-12-18 Thread Yigal Chripun

On 18/12/2009 22:09, BCS wrote:

Hello Yigal,


On 18/12/2009 02:49, Tim Matthews wrote:


In a reddit reply: "The concept of templates in D is exactly the same
as in C++. There are minor technical differences, syntactic
differences, but it is essentially the same thing. I think that's
understandable since Digital Mars had a C++ compiler."

http://www.reddit.com/r/programming/comments/af511/ada_programming_ge
nerics/c0hcb04?context=3

I have never touched Ada but I doubt it really has that much that
can't be done in D. I thought most (if not all) the problems with C++
were absent in D as this summary of the most common ones points out
http://www.digitalmars.com/d/2.0/templates-revisited.html.

Your thoughts?


I don't know Ada but I do agree with that reddit reply about c++ and D
templates. D provides a better implementation of the exact same
design,
so it does fix many minor issues (implementation bugs). An example of
this is the foo<bar<Class>> construct that doesn't work because of the
>> operator.
However, using the same design obviously doesn't solve any of the
deeper
design problems and this design has many of those. An example of that
is
that templates are compiled as part of the client code. This forces a
library writer to provide the source code (which might not be
acceptable
in commercial circumstances) but even more frustrating is the fact
that
template compilation bugs will also happen at the client.
There's a whole range of designs for this and related issues and IMO
the C++ design is by far the worst of them all. not to mention the
fact that it isn't an orthogonal design (like many other features in
c++). I'd much prefer a true generics design to be separated from
compile-time execution of code with e.g. CTFE or AST macros, or other
designs.



If D were to switch to true generics, I for one would immediately start
looking for ways to force it all back into compile time. I think that
this would amount to massive use of CTFE and string mixins.

One of the things I *like* about template is that it does everything at
compile time.

That said, I wouldn't be bothered by optional generics or some kind of
compiled template where a lib writer can ship a binary object (JVM
code?) that does the template instantiation at compile time without the
text source. (The first I'd rarely use and the second would just be an
obfuscation tool, but then from that standpoint all compilers are)


you are confused - the term generics refers to writing code that is 
parametrized by type(s). It has nothing to do with the JVM or the specific 
Java implementation of this idea. Java's implementation is irrelevant to 
our discussion since it's broken by design in order to accommodate 
backward compatibility.


generics != Java generics !!!

Generics are also orthogonal to meta-programming.

please also see my reply to dsimcha.


Re: Unification

2009-12-02 Thread yigal chripun
Zexx Wrote:

 Have you heard of language called Vala? They came to the same idea - C# is a 
 scripting language for web apps, but it's not suitable for demanding 
 applications.
 
 I myself used Delphi for a long time because Object Pascal provided me with 
 all the modern features and modern IDE that left competitors in dust. 
 Especiall Microsoft's pityful Visual Studio 6. Unfortunately, Microsoft 
 responded by buying shares of Borland, made them go NET and... that was the 
 end of Delphi as native language. Embarcadero isn't el salvatore either.
 
 C and C++ programmers search for a modern language too. Java and C# 
 programmers have modern languages, but their programs waste 10 times more 
 memory than necessary and aren't suitable for resource demanding applications.
 
 The creators of D and Vala know why they created it. Maybe there are other 
 similar projects. But why work separately? There's no chance for success when 
 working like that.
 
 I propose uniting. Lets make programmers busy on the same goal, instead of 5 
 separate ones. Making a modern language is only the first part. Other parts 
 are harder.
 
 1. Quality, fast, optimizing cross-platform compilers. Speed of compilation 
 is important too.
 
 2. IDEs where you can write and debug code, but NOT only that. IDEs where you 
 can create visual applications with ease. There will be no popularity until 
 there's a visual forms editor similar to that in MS Visual Studio or Delphi.
 
 3. Lots of libraries for all purposes (including ports of existing 
 libraries), so that one doesn't have to reinvent the wheel every time a 
 different project is started.
 
 4. A dedicated web site where you can upload and download libraries and 
 components, whether free or commercial. Where you can find everything you 
 need when you need it, or a place where you can share (or sell) libraries 
 that you made.
 
 5. Lots of shiny, new, beautiful visual controls already coming in the 
 package. Some programmers are happy if their programs use standard Linux or 
 Windows style. That's not enough anymore. 
 
 Programs made with a new tool that kicks arse must attract people. Not  only 
 by functionality and usability, but by looks too. So that new programmers 
 wish to make their applications in that and not in other development tools. 
 Mediocre-looking software won't win any followers.
 
 A dedicated team that makes controls must be formed. And they shouldn't just 
 copy Office 2007 controls, they should be beyond that. Programs written in a 
 new unified language must look excellent and modern, to attract young 
 programmers and increase overall popularity of the new language / 
 environment. 
 
 Those are all the necessary steps. None of that is unnecessary. Without that 
 D and Vala will stay just experiments. And nobody wants that. Except maybe 
 Microsoft, Sun and Google.
 
 

My personal opinion - Vala is wasted, unnecessary effort. Vala is 99.9% a 
reimplementation of C# on top of the GObject system. Had I wanted to use C# I 
would have just used the .NET version, or the Mono version if I'm on *nix. I don't 
see any benefit in using the C-based GObject system either. 



Re: removal of cruft from D

2009-11-25 Thread Yigal Chripun

Don wrote:

bearophile wrote:

Don:
There seems to be no point in having a *single* integer value, shared 
between the app and all libraries! It's just reducing future 
flexibility.


It doesn't reduce flexibility at all, 


I meant future D flexibility.

because if you need something more complex you don't use it and nothing 
bad happens. You can even ignore it.

You are thinking about 1+ lines long apps; about scaling up.


No, I'm not, actually. I've actually never worked on a large project. 
I'm not a computer scientist.




IMO, computer scientists are over-rated. The only great (as in very 
large) thing about them is their egos.


 I am thinking about single-module, 500-line-long programs that replace 
some scripts; about scaling down too.
A compilation constant saves me from modifying the source every time I 
need to change the size of some static array/matrix. With it I just 
need a second Python script that calls dmd/ldc with a different 
argument, instead of a slightly more complex Python script that changes 
the source code of the D program to modify the constant.


A very modern language like Fortress, designed for physics, has that 
small feature :-) (It's available in C too, only integer/symbol 
constants).


Yes, but it has MORE THAN ONE.
Some specifics -- it'd be nice to have a Windows version specified as an 
integer. It'd be nice to have a DirectX version number. Can't do it.


version(int) is like a programming language with one variable. It's 
ridiculous.


The feature bearophile speaks of in C and such (and even in Java with 
properties) is IMO yet another special case with special syntax in other 
languages. If/when D gets proper macros, this would be trivial to 
implement. Basic idea: the macro reads a properties file and defines a 
constant with the value specified in that file.
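
Even without full macros, D's string imports can already approximate this 
pattern. A sketch (config.txt and parseSize are illustrative; compile with 
-J pointing at the directory containing config.txt):

```d
// config.txt contains e.g. "256"
enum configText = import("config.txt");

// A CTFE-able parser for the value (digits only, whitespace-tolerant).
size_t parseSize(string s)
{
    size_t n;
    foreach (c; s)
        if (c >= '0' && c <= '9')
            n = n * 10 + (c - '0');
    return n;
}

enum size_t matrixSize = parseSize(configText);
double[matrixSize][matrixSize] grid;  // sized by the build, not the source
```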


This is another reason why I prefer the two-phase compilation approach 
instead of D's CTFE - CTFE is limited to a small subset of D which can 
be reduced to constant-folding whereas in a Nemerle-like system you have 
the full power of the language: I/O, network, syscalls, whatever. For 
instance this is used to verify SQL queries on the DB engine at 
compile-time.


Re: Non-enum manifest constants: Pie in the sky?

2009-11-24 Thread yigal chripun
bearophile Wrote:

 Jason House:
 
  IMHO, enum is a patchwork collection of features... manifest constants, 
  enumerated lists, and bitmasks are all conflated into something rather 
  ugly.
 
 Manifest constants defined with enum look a little ugly, and I too don't like 
 it, but it's not even a syntax problem, it's a naming problem, so we can 
 survive with it. Do you have ideas for alternative design of manifest 
 constants? (LDC may even not need manifest constants at all, especially when 
 you use link-time optimization).
 
 The enumerated lists of D2 may enjoy to grow some built-in way to invert 
 them, and little more.
 
 Regarding bitmasks, I don't like how they are implemented in C#. D can do 
 better, while keeping things simple.
 
 Do you have ideas for a better design of those three things? (I don't want 
 heavy Java-like enums).
 
 Bye,
 bearophile

Manifest constants reflect a toolchain problem and *not* a naming problem. They 
exist only because the linker isn't smart enough to optimize immutable 
variables. LLVM should already have this implemented.

regarding Java style enumerations, I disagree that they are too heavy. They are 
a *much* better design than the C/D/... design. I think the C style use of 
enums to implement bit masks is the problem and not the enum itself. 

there are two better designs IMO:
a) low level, where you need manual control of the bit patterns - using ubyte/etc. 
directly is more explicit and therefore better. 
b) you don't care about the bits themselves and only want to implement OptionA 
| OptionB efficiently - bit patterns should *not* be exposed and should be 
encapsulated by a type. Java has an EnumSet (I don't remember the exact name) 
that handles that efficiently for you. 

either way, you shouldn't ever provide an interface for option flags where the 
underlying implementation - the bit values - is exposed to the user. 
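
Option (b) can be sketched in D as a small wrapper type that never leaks its 
representation (later Phobos releases ship std.typecons.BitFlags in a similar 
spirit; the code below is a hand-rolled illustration):

```d
struct OptionSet(E) if (is(E == enum))
{
    private uint bits;  // representation is never exposed to callers

    void add(E e) { bits |= 1u << e; }
    bool contains(E e) const { return (bits & (1u << e)) != 0; }

    // Supports the OptionA | OptionB style without showing bit values.
    OptionSet opBinary(string op : "|")(OptionSet rhs) const
    {
        OptionSet r;
        r.bits = bits | rhs.bits;
        return r;
    }
}

enum Option { A, B, C }

unittest
{
    OptionSet!Option s;
    s.add(Option.A);
    assert(s.contains(Option.A) && !s.contains(Option.B));
}
```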



Re: Short list with things to finish for D2

2009-11-23 Thread yigal chripun
aarti_pl Wrote:

 Walter Bright pisze:
  Don wrote:
  There's not many sensible operators anyway. opPow is the only missing 
  one that's present in many other general-purpose languages. The only 
  other ones I think are remotely interesting are dot and cross product.
  
  Yup.
  
  Anything beyond that, you generally want a full DSL, probably with 
  different precendence and associativity rules. Eg, for regexp, you'd 
  want postfix * and + operators. There are examples of clever things 
  done in C++  with operator overloading, but I think that's just 
  because it's the only way to do DSLs in C++.
  
  I was enthralled with the way C++ did it for regex for a while, but when 
  I think more about it, it's just too clever. I think it's more operator 
  overloading abuse now.
  
  I don't think the applications are there.
  
  I agree.
 
 Well, I can understand your fear about operator abuse. And I agree that 
 code might be awful when operator overloading will be abused.
 
 But I have in mind one very convincing example. I defined in D/Java SQL 
 syntax. They are also other frameworks which do the same.
 
 What can I say about my experiences with using such a framework: it is 
 a very, very powerful concept. It cuts the time necessary to develop 
 application, makes sql statements type safe and allows to pass around 
 parts of sql statements inside application. It also makes easy 
 refactoring of sql statements (especially in Java). It's a huge win 
 compared to defining a DSL as strings.
 
 It's hard to explain just in few sentences all details. I have already 
 done it long time ago, and in my first post I provided links.
 
 Problem with current approach is that I have to define SQL in D/Java in 
 following way:
 
 auto statement = Select(visitcars.name).Where(And(More(visitcards.id, 
 100), Like(visitcards.surname, "A*")));
 
 Please look at the code in Where(). It's so awful!
 
 It would be so much better to write:
 auto statement = Select(visitcars.name).Where((visitcards.id `>` 100) 
 `AND` (visitcards.surname `Like` "A*"));
 
 I used here syntax which you have proposed with delimiter ``. I think it 
 is good enough solution for such purpose.
 
 But please, don't underestimate the problem! Many DSLs would never 
 have appeared if languages were good enough.
 
 As I said solution with delimiter is good enough for me. It has another 
 advantage that it clearly shows in code that you have overloaded 
 operator here, so no surprises here. Additionally when you implement 
 template function:
 opInfix!("AND")(val0, val1);
 you pass a string into the template. So I see it as quite intuitive to use a 
 string as the operator: `AND`. Maybe it will not even be necessary to change 
 the current behavior that `` defines a string.
 
 I think we have a good opportunity to open this door now. It can even be 
 implemented later, but I would wish just not to close this door now :-)
 
 BR
 Marcin Kuszczak
 (aarti_pl)

There's nothing more hideous than all those frameworks in Java/C++ that try to 
re-engineer SQL into functions, templates, LINQ, whatever.
SQL *is* a perfectly designed language for its purpose and it doesn't need to 
be redesigned! The only problem with this is the type-safety when embedding SQL 
as a string in a host language. 
the solution is two-phased: 

phase a is simple - look at the C# API for Postgres (I think). The query is one 
string like:
select * from table where :a > 42 -- here :name is a placeholder for the 
host-language variable, and you call an API to bind those :names to variables 
in a type-safe way. The downside is that it's verbose. 
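
In D, phase a might look roughly like this (the Connection/prepare/bind names 
are hypothetical, sketched after the C#-style API described above, not a real 
D library):

```d
// Hypothetical binding API sketch -- not a real library.
auto stmt = conn.prepare(
    "select * from visitcards where id > :a and surname like :b");
stmt.bind("a", 100);    // type-checked by the binding layer
stmt.bind("b", "A%");
auto rows = stmt.execute();
```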
 
phase b is what Nemerle does with the above - it has an AST macro to wrap the 
above so you can write your query directly and it is checked at compile-time. 

No operators were abused in implementing this. 


Re: Short list with things to finish for D2

2009-11-23 Thread yigal chripun
Don Wrote:

 Chad J wrote:
  Don wrote:
  I quite agree. What we can do already is:
 
  auto statement = db.execute!(`select $a from table where $b > 100 && $c
  Like "A*"`)(visitcars.name, visitcars.id, visitcars.surname);
 
  which I personally like much better than the proposed goal:
 
  It would be so much better to write:
  auto statement = Select(visitcars.name).Where((visitcards.id `>` 100)
  `AND` (visitcards.surname `Like` "A*"));
  (Replace $a with your preferred method for defining placeholder variables).
 
  And the question then is, can we improve the existing solution? And if
  so, how? I just don't think the solution involves overloading operators.
  I think this a great example of why we *don't* want arbitrary operator
  overloading: there's no point overloading  and  if you can't make
  'from', 'where', and 'like' to all be infix operators, as well!
  
  This sounds like a job for better mixin syntax.
  
  So let template#(args) be equivalent to mixin(template!(args)).
  
  Then you can do
  
  auto statement = db.execute#(`select $visitcars.name from table where
  $visitcars.id > 100 && $visitcars.surname Like "A*"`);
 
 Yeah, something like that. Or it could mixin automatically. eg if
 macro foo(args...)
 foo(args) meant  mixin(foo(args)).
 
 then the syntax would be:
  db.execute(`select $visitcars.name from table where $visitcars.id > 100 
  && $visitcars.surname Like "A*"`);
 
 which has advantages and disadvantages. So there's quite a bit of 
 flexibility. A lot of potential for brainstorming!

a few points I want to add:
1) I thought that :name was in some version of the SQL standard or a known 
extension, so if we use this in APIs for D we should use the standard notation 
(can anyone verify this?)

2) I don't want to mix this discussion with infix functions and operator 
overloading. I'm not sure I want to limit these and perhaps there are other 
legitimate uses for general purpose infix functions. In this post I just 
pointed out that SQL is *not* a legitimate use case for that. 

3) the Nemerle macros for SQL allow for: 
db.execute(`select $visitcars.name from table where $visitcars.id > 100 && 
$visitcars.surname Like "A*"`);

the $ in Nemerle is used for controlled breaking of hygiene. Their macro 
translates such a query into an SQL string with :names and calls the bind 
API to connect those with the given variables in a type-safe way. 

IIRC, they use the db connection object to connect to the DB at compile-time 
and check the syntax and also the existence of the objects (tables, columns, 
etc.) in the DB schema.


Re: removal of cruft from D

2009-11-21 Thread Yigal Chripun

On 21/11/2009 02:45, Andrei Alexandrescu wrote:

Ellery Newcomer wrote:

Nick Sabalausky wrote:

Yigal Chripun yigal...@gmail.com wrote in message
news:he6sqe$1dq...@digitalmars.com...

Based on recent discussions on the NG a few features were
deprecated/removed from D, such as typedef and C style struct
initializers.

IMO this cleanup and polish is important and all successful
languages do such cleanup for major releases (Python and Ruby come
to mind). I'm glad to see that D follows in those footsteps instead
of accumulating craft like C++ does.


As part of this trend of cleaning up D before the release of D2,
what other features/craft should be removed/deprecated?

I suggest reverse_foreach and c style function pointers

please add your candidates for removal.


s/reverse_foreach/foreach_reverse/ ;)

1. Floating point literals without digits on *both* sides!!! 1.,
.1 -- Useless hindrance to future language expansion!

2. Octal literals! I think it'd be great to have a new octal syntax,
or even better, a general any-positive-integer-base syntax. But until
that finally happens, I don't want 010 == 8 preserved. And I don't
think the ability to have an octal literal is important enough that
lacking it for a while is a problem. And if porting-from-C really has
to be an issue, then just make 0[0-9_]+ an error for a transitionary
period (or forever - it'd at least be better than maintaining 010 ==
8).

3. Also the comma operator, but that's already been recently discussed.





<bikeshed>

hex literal prefix: 0x, not 0h
octal literal prefix: 0c, not 0o

</bikeshed>


This I'm on board with. 0o is too much like a practical joke.

Andrei


in the short term I wouldn't mind if they were typed as: 
0baseEightXXX or whatever, as long as the current syntax is removed.


in the long term, I'd like to see a more general syntax that allows 
writing numbers in any base.

something like:
[base]n[number] - e.g. 16nA0FF, 2n0101, 18nGH129, etc.
also define syntax to write a list of digits:
1024n[1005, 452, 645, 16nFFF] // each digit can also be defined in 
arbitrary base
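The proposed literal could be modelled today with an ordinary CTFE-able helper (a sketch; `fromBase` is an invented name, not an existing library function):

```d
// Hypothetical stand-in for the proposed [base]n[digits] literal:
// fromBase(16, [0xA, 0x0, 0xF, 0xF]) plays the role of 16nA0FF.
ulong fromBase(ulong base, ulong[] digits)
{
    ulong v = 0;
    foreach (d; digits)
    {
        assert(d < base, "digit out of range for base");
        v = v * base + d;
    }
    return v;
}

unittest
{
    assert(fromBase(16, [0xA, 0x0, 0xF, 0xF]) == 0xA0FF);
    assert(fromBase(2,  [0, 1, 0, 1]) == 5); // 2n0101
}
```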




Re: Itcy-BitC closures and curries

2009-11-21 Thread Yigal Chripun

On 21/11/2009 15:41, Justin Johansson wrote:

Having noticed that the BitC PL http://www.bitc-lang.org/ has been
mentioned in passing before on this forum, I wonder if any of the D
community have any comment on the following aspect of the design of
BitC, particularly as may be relevant to D and GC.

1.1 About the Language

http://www.bitc-lang.org/docs/bitc/spec.html

In contrast to ML, BitC syntax is designed to discourage currying.
Currying encourages the formation of closures that capture non-global
state. This requires dynamic storage allocation to instantiate these
closures at runtime. Since there are applications of BitC in which
dynamic allocation is prohibited, currying is an inappropriate idiom for
this language.

I don't have any particular agenda in asking this question but feel that
some interesting discussion might result out of it.

There's also a design note for Closure Implementation in BitC by Dr.
Jonathan Shapiro, 2005, though I'm unsure whether this is currently
implemented in BitC or whether the article is now out of date.

http://www.bitc-lang.org/docs/bitc/closures.html

Cheers to all,

Justin Johansson



Dr. Jonathan Shapiro works on modern µ-kernels, and his latest project is 
the Coyotos project, which is a capability-based design (a very neat 
design, if I may add). BitC was mainly created to facilitate writing that 
kernel and proving its correctness, and thus it makes perfect sense that 
it was designed that way. Does BitC even have a GC? I thought it was 
compiled to C.
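The rationale in the quoted spec can be illustrated in D, where a delegate that captures a local and escapes its scope forces a GC heap allocation for the closure (a minimal sketch of D2 closure semantics, not BitC code):

```d
// Returning a delegate that captures `x` requires the frame holding `x`
// to live past the function's return, so it is allocated on the GC heap --
// exactly the kind of implicit dynamic allocation BitC wants to forbid.
int delegate(int) makeAdder(int x)
{
    return delegate(int y) { return x + y; };
}

void demo()
{
    auto add5 = makeAdder(5); // closure allocated here
    assert(add5(2) == 7);
}
```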


Re: Why we need opApply (Was: Can we drop static struct initializers?)

2009-11-21 Thread Yigal Chripun

dsimcha wrote:

== Quote from Max Samukha (spam...@d-coding.com)'s article

On Sat, 21 Nov 2009 18:51:40 + (UTC), dsimcha dsim...@yahoo.com
wrote:

== Quote from Max Samukha (spam...@d-coding.com)'s article

On Fri, 20 Nov 2009 15:30:48 -0800, Walter Bright
newshou...@digitalmars.com wrote:

Yigal Chripun wrote:

what about foreach_reverse ?

No love for foreach_reverse? tear

And no mercy for opApply

opApply **must** be kept. It's how my parallel foreach loop works.  This would
be **impossible** to implement with ranges.  If opApply is removed now, I will
fork the language over it.

I guess it is possible:

uint[] numbers = new uint[1_000];
pool.parallel_each!((size_t i){
    numbers[i] = i;
})(iota(0, numbers.length));

Though I agree it's not as cute, but it is faster since the delegate is
called directly. Or did I miss something?


I'm sorry, but I put a lot of work into getting parallel foreach working, and I
also have a few other pieces of code that depend on opApply and could not 
(easily)
be rewritten in terms of ranges.  I feel very strongly that opApply and ranges
accomplish different enough goals that they should both be kept.

opApply is good when you **just** want to define foreach syntax and nothing 
else,
with maximum flexibility as to how the foreach syntax is implemented.  Ranges 
are
good when you want to solve a superset of this problem and define iteration over
your object more generally, giving up some flexibility as to how this iteration
will be implemented.

Furthermore, ranges don't allow for overloading based on the iteration type.  
For
example, you can't do this with ranges:

foreach(char[] line; file) {}  // Recycles buffer.
foreach(string line; file) {}  // Doesn't recycle buffer.

They also don't allow iterating over more than one variable, like:
foreach(var1, var2, var3; myObject) {}

Contrary to popular belief, opApply doesn't even have to be slow.  Ranges can be
as slow as or slower than opApply if at least one of the three functions (front,
popFront, empty) is not inlined.   This actually happens in practice.  For
example, based on reading disassemblies and the code to inline.c, neither 
front()
nor popFront() in std.range.Take is ever inlined.  If the range functions are
virtual, none of them will be inlined.

Just as importantly, I've confirmed by reading the disassembly that LDC is 
capable
of inlining the loop body of opApply at optimization levels >= O3.  If D becomes
mainstream, D2 will eventually also be implemented on a compiler that's smart
enough to do stuff like this.  To remove opApply for performance reasons would 
be
to let the capabilities of DMD's current optimizer influence long-term design
decisions.

If anyone sees any harm in keeping opApply other than a slightly larger language
spec, please let me know.  Despite its having been superseded by ranges for a
subset of use cases (and this subset, I will acknowledge, is better handled by
ranges), I actually think the flexibility it gives in terms of how foreach can 
be
implemented makes it one of D's best features.


There are three types of iteration: internal to the container; external, 
by index, pointer, range, etc.; and a third design with co-routines 
(fibers), in which the container iterates itself internally and yields a 
single item on each call.
Ranges accomplish only the external type of iteration. opApply allows 
for internal iteration. All three strategies have their uses and should 
be allowed in D.
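A minimal D sketch of the first two styles on one container (illustrative only, not from the original post):

```d
// External iteration: the caller drives the loop via range primitives.
// Internal iteration: opApply drives the loop and calls back into the body.
struct Countdown
{
    int n;

    // range interface (external iteration)
    @property bool empty() const { return n <= 0; }
    @property int front() const { return n; }
    void popFront() { --n; }

    // opApply (internal iteration); foreach over this struct prefers it
    int opApply(int delegate(ref int) dg)
    {
        for (int i = n; i > 0; --i)
        {
            int v = i;
            if (auto r = dg(v)) return r; // propagate break out of the body
        }
        return 0;
    }
}
```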


Re: Can we drop static struct initializers?

2009-11-20 Thread Yigal Chripun

Andrei Alexandrescu wrote:

Walter Bright wrote:

Andrei Alexandrescu wrote:
Would love to trim the book as well. My finger is on the Del button. 
Just say a word.


Unless someone comes up with "I really need field names", dump 'em 
(but save a backup of your work first!).


My RIP emails to you (as with typedef) are my backup. Don't delete them 
:o).


So, C-style arrays are gone, C-style struct initializers are gone, 
typedef is gone. __traits isn't feeling too well either :o).



Andrei


what about foreach_reverse ?


Re: Can we drop static struct initializers?

2009-11-20 Thread Yigal Chripun

Bill Baxter wrote:

On Fri, Nov 20, 2009 at 11:15 AM, Yigal Chripun yigal...@gmail.com wrote:


what about foreach_reverse ?



What about starting a different thread?


Sorry.
I assumed we were discussing removals from D and therefore mentioned 
foreach_reverse as a prime candidate. I'll start a new thread.


removal of cruft from D

2009-11-20 Thread Yigal Chripun
Based on recent discussions on the NG a few features were 
deprecated/removed from D, such as typedef and C style struct initializers.


IMO this cleanup and polish is important, and all successful languages do 
such cleanup for major releases (Python and Ruby come to mind). I'm glad 
to see that D follows in those footsteps instead of accumulating cruft 
like C++ does.



As part of this trend of cleaning up D before the release of D2, what 
other features/cruft should be removed/deprecated?


I suggest reverse_foreach and C-style function pointers.

please add your candidates for removal.




Re: And what will we do about package?

2009-11-20 Thread Yigal Chripun

Don wrote:

To quote bugzilla 143: 'package' does not work at all

But even if worked as advertised, it'd still be broken.

Although it's a really useful concept that works great in Java, the 
existing 'package' doesn't fit with D's directory-based module system.

As I see it, the problem is that, given:

module first.second.third.fourth;

which package is this module part of?
Is it 'third', 'second.third', or 'first.second.third'?

I think that _all_ of those can be reasonable project designs; but the 
compiler has no way of working out which is intended.
The behaviour currently described in the spec, that 'fourth' can use 
functions defined in 'first', is a particularly odd choice. If they were 
structs, the behaviour would be the exact opposite:


struct first {
   struct second {
  struct third {
   int fourth;
  }
   }
}
then first could access fourth, but fourth couldn't reach second. I 
think that's _generally_ the most sensible for modules, as well.


I think there are two possibilities:
(1) We work out some decent semantics for 'package'; OR
(2) We decide there isn't time, and defer it to D3.

Maybe the solution is a simple as adding a 'package' field to the module 
declaration. (eg,

module first.second.third.fourth package first.second;
)
But I fear that a major change to the module system might be required, 
which wouldn't be viable at this late stage.


Option (2) is possible because 'package' has never actually worked. It 
seems to be just a synonym for 'public' at present. Clearly, we can 
survive without it, no matter how desirable it is.


Well put.

I think we can just drop package and adopt a model similar to that of 
Go for D modules. It looks simple and flexible, especially the ability 
to have a Go package span several files.


Re: removal of cruft from D

2009-11-20 Thread Yigal Chripun

On 20/11/2009 23:49, Nick Sabalausky wrote:

Yigal Chripunyigal...@gmail.com  wrote in message
news:he6sqe$1dq...@digitalmars.com...

Based on recent discussions on the NG a few features were
deprecated/removed from D, such as typedef and C style struct
initializers.

IMO this cleanup and polish is important and all successful languages do
such cleanup for major releases (Python and Ruby come to mind). I'm glad
to see that D follows in those footsteps instead of accumulating craft
like C++ does.


As part of this trend of cleaning up D before the release of D2, what
other features/craft should be removed/deprecated?

I suggest reverse_foreach and c style function pointers

please add your candidates for removal.



s/reverse_foreach/foreach_reverse/ ;)


thanks :)



1. Floating point literals without digits on *both* sides!!! 1., .1 --
Useless hindrance to future language expansion!

2. Octal literals! I think it'd be great to have a new octal syntax, or even
better, a general any-positive-integer-base syntax. But until that finally
happens, I don't want 010 == 8 preserved. And I don't think the ability to
have an octal literal is important enough that lacking it for a while is a
problem. And if porting-from-C really has to be an issue, then just make
0[0-9_]+ an error for a transitionary period (or forever - it'd at least be
better than maintaining 010 == 8).

3. Also the comma operator, but that's already been recently discussed.





Agreed on all counts.


Re: Should the comma operator be removed in D2?

2009-11-19 Thread yigal chripun
retard Wrote:

 Thu, 19 Nov 2009 00:00:00 +0200, Yigal Chripun wrote:
 
  Ellery Newcomer wrote:
foo(a, b) is identical to foo(t);
  
  does ML have any equivalent of template parameters? eg
  
  foo!(1,int);
 
  foo's signature is actually: 'a -> 'a, which is like doing in D: T foo(T)
  (T n) { return n + n; } but unlike ML, in D/C++ you need to provide the
  type parameter yourself.
  
  does that answer your question?
 
 A bit more precise answer would be 'no'. ML does not support programming 
 with types directly. There is no way to pass stuff from value world to 
 type world and vice versa like in D.

I wouldn't say it's a decisive no. There are several extensions to ML, like 
MetaML and MacroML, that do have such capabilities. 
There are also ML-inspired languages like Nemerle which provide very powerful 
AST macro systems. 


Re: Should the comma operator be removed in D2?

2009-11-18 Thread yigal chripun
Your remark about function chaining reminded me of a nice feature that a few OOP 
languages provide:

// pseudo syntax
auto obj = new Object();
obj foo() ; bar() ; goo() 

foo, bar and goo above are three messages (methods) that are sent to the same 
object, i.e. it's the same as doing:
obj.foo();
obj.bar();
obj.goo();

this means the functions can return void instead of returning this like you'd 
do in C++/D. I think it provides a cleaner conceptual separation between 
multiple messages sent to one object and real chaining, where foo returns obj2 
which then receives message bar, and so on.
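D's with statement gives a rough approximation of such cascades today (a sketch, not a full equivalent, since it changes name lookup rather than sending messages):

```d
class Widget
{
    void foo() { }
    void bar() { }
    void goo() { }
}

void demo()
{
    auto obj = new Widget;
    with (obj)   // members of obj become directly accessible
    {
        foo();   // same as obj.foo();
        bar();   // same as obj.bar();
        goo();   // same as obj.goo();
    }
}
```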


Re: Should the comma operator be removed in D2?

2009-11-18 Thread Yigal Chripun

Ellery Newcomer wrote:
 foo(a, b) is identical to foo(t);


does ML have any equivalent of template parameters? eg

foo!(1,int);



I'd suggest reading the wikipedia page about ML.

in short, ML is a strongly, statically typed language much like D, but 
doesn't require type annotations. it uses the Hindley-Milner type 
inference algorithm (named after its creators) which infers the types at 
compile-time.


here's a naive factorial implementation in ML:

fun f (0 : int) : int = 1
  | f (n : int) : int = n * f (n-1)


you can provide type annotations as above if you want to specify 
explicit types.


here's another function:

fun foo (n) = n + n

if you use foo(3.5) the compiler would use a version of foo with 
signature: real -> real
but if you use foo(4) the compiler will use a version of foo with 
signature: int -> int


note that I didn't need to specify the type as parameter.

foo's signature is actually: 'a -> 'a, which is like doing in D:
T foo(T) (T n) { return n + n; } but unlike ML, in D/C++ you need to 
provide the type parameter yourself.


does that answer your question?


Re: Should the comma operator be removed in D2?

2009-11-18 Thread Yigal Chripun

Stewart Gordon wrote:

Yigal Chripun wrote:
snip
the only use case that will break is if the two increments are 
dependent on the order (unless tuples are also evaluated from left to 
right):

e.g.
a += 5, b += a

snip

If you're overloading the + operator to have an externally visible side 
effect, you're probably obfuscating your code whether you use the comma 
operator or not.


Moreover, how can you prove that nothing that uses the operator's return 
value can constitute a use case?


Stewart.


I don't follow you. What I said was that if you have the above in a for 
loop with a comma expression, you'd expect to *first* add 5 to a and 
*then* add the new a to b (comma operator defines left to right order of 
evaluation).
tuples in general do not have to require a specific order since you keep 
all the results anyway, so the above could break. by defining tuples to 
be evaluated with the same order, the problem would be solved.


Re: Should the comma operator be removed in D2?

2009-11-17 Thread Yigal Chripun

Robert Jacques wrote:
On Tue, 17 Nov 2009 01:44:30 -0500, yigal chripun yigal...@gmail.com 
wrote:



Robert Jacques Wrote:

However, I imagine tuple(a++,b++) would have some overhead, which is
exactly what someone is trying to avoid by using custom for loops.

Personally, I like using a..b = tuple(a,b), since it also solves the
multi-dimensional slicing and mixed indexing and slicing problems.


what overhead? It's all in your imagination :)
a..b is confusing and bad UI. a..b means, for humans, the range of a 
till b and not the tuple of the two.
if I see something like "hello"..42 I would assume the person who 
wrote this was high on something.


multi-dimensional slicing should accept an integer range type and NOT a 
tuple.


The unnecessary creation and setting of a tuple struct is by definition 
overhead. Also, containers will have non-integer slices, e.g. 
dictionary["hello".."world"], and strided ranges would even mix types: 
"hello"..42.."world". My point isn't that '..' was a better syntax than 
'(,)'. It's that '..' needs to change to something very much like a 
tuple and therefore could be used to kill two birds with one stone.


what tuple struct are you talking about?
we are discussing real true tuples that are supported by the language 
type system (meaning at compile-time), not some library struct type.


let me rephrase my sentence regarding slicing:
struct R { int start, stride, end; }
arrays should accept a list of the above R and a Dictionary should *not* 
implement slicing since that requires an order which dictionaries have 
no business to require.

OrderedDictionary!(T) (if you really want such a beast) would accept:
struct RR(T) {
T start, end;
int stride;
}

R and RR above are simplistic; a real-world implementation should make stride 
optional, but the main point is that it is by no means a tuple.


I see what you're saying about two birds with one stone but from my POV 
instead of replacing old cruft with a useful and friendly to use new 
feature you just added more cruft and hacks to poorly support said 
feature with unfriendly and confusing syntax.


Re: Should the comma operator be removed in D2?

2009-11-17 Thread Yigal Chripun

Robert Jacques wrote:

On Tue, 17 Nov 2009 11:38:19 -0500, Bill Baxter wbax...@gmail.com wrote:


On Tue, Nov 17, 2009 at 7:09 AM, Robert Jacques sandf...@jhu.edu wrote:
On Tue, 17 Nov 2009 05:44:31 -0500, downs default_357-l...@yahoo.de 
wrote:



Robert Jacques wrote:


On Tue, 17 Nov 2009 00:06:27 -0500, Yigal Chripun yigal...@gmail.com
wrote:


Robert Jacques wrote:


On Mon, 16 Nov 2009 17:53:45 -0500, Stewart Gordon
smjg_1...@yahoo.com wrote:


dsimcha wrote:
snip


Axe.  Looks like the only things it's good for are making code
unreadable and
abusing for loop syntax to...
 Make code unreadable.


snip

Suppose you want the increment of a for loop to change two 
variables

in parallel.  I don't call that making code unreadable.

Stewart.


 Yes, the classic use case of the comma operator is multi-variable
declarations/increments in a for loop.


This was argued before and as I and others said before, this is *not*
a use case for the comma separator.

e.g.
for (int a = 0, b = 1; condition(); a++, b++) {...}

int a = 0, b = 1 // this is a declaration and not an expression

a++, b++ // isn't assigned to any variable and can be treated as a 
tuple


the only use case that will break is if the two increments are
dependent on the order (unless tuples are also evaluated from left to
right):
e.g.
a += 5, b += a

I doubt it very much that anyone ever uses this, it's too unreadable
to be useful.


However, I imagine tuple(a++,b++) would have some overhead, which is
exactly what someone is trying to avoid by using custom for loops.

Personally, I like using a..b = tuple(a,b), since it also solves the
multi-dimensional slicing and mixed indexing and slicing problems.


Zero overhead. Tuples are flat compile-time entities.


There are compile time tuples and runtime tuples. D already has a 
form of
compile-time tuples. This discussion seems to be about runtime tuples 
which

currently don't have a nice syntax: you have to use tuple(a,b). And
tuple(a,b) does have runtime overhead.


I think the point is that in something like this:
   auto foo = (a,b);
   foo.a += 2;
   foo.b += foo.a;
   // etc

The tuple foo is allocated on the stack and the compiler knows where
the a part and b part are, so the code generated should be absolutely
no different from the code generated for:

   auto foo_a = a;
   auto foo_b = b;
   foo_a += 2;
   foo_b += foo_a;
   // etc

So there doesn't need to be any tuple overhead.
--bb


But that isn't the expansion:
for (int a = 0, b = 1; condition(); a++, b++) =

int a = 0;
int b = 1;
while(condition) {
auto temp = tuple(a++,b++); // creation of a struct on the stack
}

Now the optimizer might get rid of that temporary struct. Then again it 
might not (or its presence interferes with other optimizations). At the 
very least, some amount of code profiling or disassembly needs to be done.


why would it create a struct?

it would probably do:
int a = 0, b = 1;
while (condition) {
  int temp_a = (a++); // temp_a gets the old value of a; a is then incremented
  int temp_b = (b++);
}

the above would most likely be optimized away.

you can also take advantage of tuples with for loops, something like:
for ( auto t = (0, 0); condition(); (t[0]++, t[1]++) ) {...}

this is more flexible, since you can't do the following with the current system:
for (int a = 0, char b = 'a'; ; ) {...}

because int a = 0 is a declaration and not an expression.

with tuples:
for (auto t = (0, 'a'); ; ) {...}





Re: Should the comma operator be removed in D2?

2009-11-17 Thread Yigal Chripun

KennyTM~ wrote:

On Nov 18, 09 05:40, Ellery Newcomer wrote:

Bill Baxter wrote:


However, I think for the good of humanity we can accept that one
little bizarre example of legal C syntax not doing the same thing in
D.


int[] i;

auto a = (i)[0];

what does this do?


(i) should not construct a tuple. Probably (i,).


I agree, a tuple of one element (doesn't matter what type, array in this 
case) should be semantically identical to that single element.


proper semantics for language supported tuples should IMO include:
1) syntax to explicitly [de]construct tuples and no auto-flattening
2) a tuple of one element is identical to a scalar:
   int a = 5; // scalar integer
   auto b = (5); // tuple of one integer
   a == b // is true
3) function's argument list is a tuple like in ML:
   void foo(int a, char b);
   int a = 5; char b ='a';
   auto tup = (5, 'a');
   foo(a, b) is identical to foo(tup);
4) unit type defined by the empty tuple instead of c-like void
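For comparison, point 3 can already be approximated with D2's library tuples (a sketch relying on std.typecons.Tuple and its expand alias):

```d
import std.typecons : tuple;

void foo(int a, char b) { }

void demo()
{
    auto tup = tuple(5, 'a');
    foo(tup.expand); // expand splices the tuple into the argument list
    foo(5, 'a');     // equivalent direct call
}
```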


Re: Should the comma operator be removed in D2?

2009-11-17 Thread Yigal Chripun

Bill Baxter wrote:

On Tue, Nov 17, 2009 at 3:57 PM, retard r...@tard.com.invalid wrote:

Tue, 17 Nov 2009 14:38:57 -0800, Bill Baxter wrote:


I agree, a tuple of one element (doesn't matter what type, array in
this case) should be semantically identical to that single element.

proper semantics for language supported tuples should IMO include: 1)
syntax to explicitly [de]construct tuples and no auto-flattening 2) a
tuple of one element is identical to a scalar:
  int a = 5; // scalar integer
  auto b = (5); // tuple of one integer
  a == b // is true

Interesting.  It does kinda make sense.  So should indexing work too?
And properties?  5[0] == 5?  5.length == 1? If not that could be painful
for functions that process generic N-tuples. If so then what does that
do if the scalar type happens to be float*?

In some languages () is a distinct Unit type. Tuples are defined
recursively from the Pair type, e.g. Pair[int,int], Pair[int, Pair
[int,int]] === (int,int,int). And have a special indexing syntax with 1-
based indexing.


That wasn't really the question.  It was what should 5[0] do in D, if
scalars are considered to be 1-tuples?
I think that's a killer for 1-tuple / scalar equivalence in D.
Neither behavior is acceptable in my opinion.
So it seems you can't have 1-tuple/scalar equivalence unless you have
a distinct tuple-indexing syntax.

Right now std.typecons.tuple uses x.at!(0) because you can't have a
x[i] return different types, but the built-in A... template tuples
do it.
So that's something that needs to be fixed anyway, because good for
me but not for thee is lame.  (Took that phrase from a review of
Go...)
I think probably D should allow a templated opIndex!(int) so that user
types can implement tuple-like indexing where each index could be a
different type.

Or we should try to come up with another syntax for indexing tuples.



3) function's argument list is a tuple like in ML:
  void foo(int a, char b);
  int a = 5; char b ='a';
  auto tup = (5, 'a');
  foo(a, b) is identical to foo(tup);

Tuples can't encode things like by-ref, by-val, lazy etc.


That does seem to kill that idea.



That seems like a kind of auto-flattening.  Shouldn't (t) be a tuple of
a tuple?
What if you have an actual tuple in the signature, like void foo((int
a,char b))?
Or you have both overloads -- foo(int,char) and foo((int,char)) I think
I like Python's explicit explode tuple syntax better.
   foo(*t)
Probably that syntax won't work for D, but I'd prefer explicit
flattening over implicit.

Good boy.


4) unit type defined by the empty tuple instead of c-like void

This is kind of neat, but does it actually change anything?  Or just
give an aesthetically pleasing meaning to void/unit?

The empty tuple can be considered to be the unit type.


Yes, Yigal said basically that.  The question I have is what practical
difference does that make to the language?
Seems no different from defining the empty tuple to be void, then
renaming void to unit.


--bb


to clarify what I meant regarding the function args list, let's look at a few 
ML examples:



fun f1 () = ()
f1 is unit -> unit

fun f2 (a) = a
f2 is 'a -> 'a

fun f3 (a, b, (c, d)) = a + b + c + d
f3 is 'a * 'a * ('a * 'a) -> 'a

it doesn't auto flatten the tuples but the list of parameters is 
equivalent to a tuple.


regarding the unit type: it has by definition exactly one value, so a 
function that is defined now in D to return void would return that 
value, and then it's perfectly legal to have foo(bar()) when bar returns 
a unit type.


Re: Making alloca more safe

2009-11-16 Thread Yigal Chripun

Andrei Alexandrescu wrote:

Denis Koroskin wrote:
On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu 
seewebsiteforem...@erdani.org wrote:



bearophile wrote:

Walter Bright:

A person using alloca is expecting stack allocation, and that it 
goes away after the function exits. Switching arbitrarily to the gc 
will not be detected and may hide a programming error (asking for a 
gigantic piece of memory is not anticipated for alloca, and could 
be caused by an overflow or logic error in calculating its size).
 There's another solution, that I'd like to see more often used in 
Phobos: you can add another function to Phobos, let's call it 
salloca (safe alloca) that does what Denis Koroskin asks for (it's a 
very simple function).


Can't be written. Try it.

Andrei


It's tricky. It can't be written *without compiler support*, because 
it is considered special for a compiler (it always inlines the call to 
it). It could be written otherwise.


I was thinking about proposing either an inline keyword in the language 
(one that would enforce function inlining, rather than suggesting it 
to the compiler), or always inlining all the functions that make use of 
alloca. Without either of them, it is impossible to create wrappers 
around alloca (for example, one that creates arrays on the stack 
type-safely and without casts):


T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size) {
void* ptr = alloca(size);
if (ptr is null) return (new void[size]).ptr;

return ptr;
}


The problem of salloca is that alloca's memory gets released when 
salloca returns.


Andrei


// horrible name, btw -- written as a string mixin, since a template body
// may only contain declarations, and the alloca call must be pasted into
// (and run in) the caller's stack frame:
enum salloca = q{
    ptr = alloca(size);
    if (ptr is null) ptr = (new void[size]).ptr;
};

// use:
void foo() {
    int size = 50;
    void* ptr;
    mixin(salloca); // expects `ptr` and `size` in scope
    //...
}

wouldn't that work?


Re: Should the comma operator be removed in D2?

2009-11-16 Thread Yigal Chripun

Robert Jacques wrote:
On Mon, 16 Nov 2009 17:53:45 -0500, Stewart Gordon smjg_1...@yahoo.com 
wrote:



dsimcha wrote:
snip
Axe.  Looks like the only things it's good for are making code 
unreadable and

abusing for loop syntax to...
 Make code unreadable.

snip

Suppose you want the increment of a for loop to change two variables 
in parallel.  I don't call that making code unreadable.


Stewart.


Yes, the classic use case of the comma operator is multi-variable 
declarations/increments in a for loop.


This was argued before and as I and others said before, this is *not* a 
use case for the comma separator.


e.g.
for (int a = 0, b = 1; condition(); a++, b++) {...}

int a = 0, b = 1 // this is a declaration and not an expression

a++, b++ // isn't assigned to any variable and can be treated as a tuple

the only use case that will break is if the two increments are dependent 
on the order (unless tuples are also evaluated from left to right):

e.g.
a += 5, b += a

I doubt it very much that anyone ever uses this, it's too unreadable to 
be useful.


Re: Should the comma operator be removed in D2?

2009-11-16 Thread yigal chripun
Robert Jacques Wrote:
 However, I imagine tuple(a++,b++) would have some overhead, which is  
 exactly what someone is trying to avoid by using custom for loops.
 
 Personally, I like using a..b = tuple(a,b), since it also solves the  
 multi-dimensional slicing and mixed indexing and slicing problems.

what overhead? It's all in your imagination :)
a..b is confusing and bad UI. a..b means, for humans, the range of a till b and 
not the tuple of the two. 
if I see something like "hello"..42 I would assume the person who wrote this 
was high on something.

multi-dimensional slicing should accept an integer range type and NOT a tuple. 


Re: Getting the error from __traits(compiles, ...)

2009-11-14 Thread Yigal Chripun

On 13/11/2009 22:05, Bill Baxter wrote:

On Fri, Nov 13, 2009 at 11:50 AM, Yigal Chripunyigal...@gmail.com  wrote:


I don't follow your logic regarding CTFE.

with 2 phase macros a-la nemerle:

macro foo() {
  int res = 2 + 3;
  return res;
}

macro bar() {
  return q{2 + 3};
}

foo's addition is done at compile time so the constant folding was
implemented in the macro body

bar returns the AST for the expression 2 + 3. Compiler optimizations like
constant folding will apply just as if you wrote that expression yourself
instead of generating it by calling a macro.


Right, which is why I'm saying you still want constant folding/CTFE
even if you have a macro system.  But then if you're going to have
CTFE sitting around anyway, you might as well use it to implement
macros instead of going to this funky two-phase thing.   That was one
point I was making, though not so clearly.


static if is not supposed to be implemented with macros, rather the
equivalent of a static if would be using a regular if *inside* the body of
the macro.


But that's how you would implement a static if with macros, no?
(pardon the incorrect nemerle syntax)

macro static_if(bool cond, a, b) {
 if (cond) {
|  a  |
 } else {
|  b  |
 }
}

--bb


let's start from the end: yes, you can implement static if that way, but I 
don't see why you would want to do that.


regarding constant folding and 2 phase compilation:
from what I know, DMD currently contains two backends: the 
regular one, which generates the executable, *and* an interpreter which 
does CTFE. Constant folding is an optimization used by both.


basically we already have two phases of compilation, but they are done 
internally by DMD. This means that DMD has two separate backends instead 
of just one, and that you need separate syntax to target the different 
phases.


The major problems I see with the current system:
- unnecessary duplication of syntax
- two backends instead of just one which complicates the compiler 
implementation

- unclear separation of phases in code:
  auto arr = [1.0, 2.0, bar(5.0)]; // when is bar executed?
also, how can I control when it's executed? This is needlessly confusing 
and doesn't provide enough control to the programmer.


it's analogous to structs vs. classes. I'm sure everyone in the D 
community agrees that this separation of concerns is much better than the 
bug-prone C++ way.
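To make the phase question concrete (a minimal sketch): in current D, the initializer's context decides whether bar runs at compile time or at run time:

```d
double bar(double x) { return x * 2; }

void demo()
{
    // run time: bar(5.0) executes when this statement runs
    auto arr = [1.0, 2.0, bar(5.0)];

    // compile time: an enum initializer forces CTFE, so the compiler's
    // interpreter evaluates bar(5.0) during compilation
    enum arr2 = [1.0, 2.0, bar(5.0)];
}
```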


Re: How about Go's... error on unused imports?

2009-11-14 Thread Yigal Chripun

On 14/11/2009 00:28, bearophile wrote:

Nick Sabalausky:

I used to think so, but I'm not so sure anymore.


It's the same for me. I can live without the *, as I can live without D's typedef.
There are other changes/fixes that I want for the module system still.

Bye,
bearophile


once upon a time there was a suggestion to have a special file that 
would define the public API of a package.


e.g.

myPackage
   a.d
   b.d
   ...
   this.d // special file that defines the public imports
   private.d // should not be imported by a * import

then, the user can import myPackage.* and that would search for this.d 
in the directory and use its contents instead of importing all D files 
in the directory.
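A minimal sketch of what such a this.d might contain; the module names are hypothetical and this describes the proposal's intent, not an existing language feature:

```d
// myPackage/this.d -- defines the public API of the package
module myPackage;

public import myPackage.a;
public import myPackage.b;
// private.d is deliberately omitted,
// so "import myPackage.*" would not pull it in
```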


Go compilation model

2009-11-14 Thread Yigal Chripun

I just saw this on the Go language site
from http://golang.org/cmd/gc/


gives the overall design of the tool chain. Aside from a few adapted 
pieces, such as the optimizer, the Go compilers are wholly new programs.


The compiler reads in a set of Go files, typically suffixed .go. They 
must all be part of one package. The output is a single intermediate 
file representing the binary assembly of the compiled package, ready 
as input for the linker (6l, etc.).


The generated files contain type information about the symbols exported 
by the package and about types used by symbols imported by the package 
from other packages. It is therefore not necessary when compiling client 
C of package P to read the files of P's dependencies, only the compiled 
output of P.


/quote

notice that they removed the need for header files.


Re: Getting the error from __traits(compiles, ...)

2009-11-14 Thread Yigal Chripun

On 14/11/2009 13:32, Don wrote:

Yigal Chripun wrote:

On 13/11/2009 22:05, Bill Baxter wrote:

On Fri, Nov 13, 2009 at 11:50 AM, Yigal Chripun <yigal...@gmail.com>
wrote:


I don't follow your logic regarding CTFE.

with 2 phase macros a-la nemerle:

macro foo() {
int res = 2 + 3;
return res;
}

macro bar() {
return q{2 + 3};
}

foo's addition is done at compile time so the constant folding was
implemented in the macro body

bar return the AST for the expression 2 + 3. Compiler
optimizations like
constant folding will apply just as if you wrote that expression
yourself
instead of generating it by calling a macro.


Right, which is why I'm saying you still want constant folding/CTFE
even if you have a macro system. But then if you're going to have
CTFE sitting around anyway, you might as well use it to implement
macros instead of going to this funky two-phase thing. That was one
point I was making, though not so clearly.


static if is not supposed to be implemented with macros, rather the
equivalent of a static if would be using a regular if *inside* the
body of
the macro.


But that's how you would implement a static if with macros, no?
(pardon the incorrect nemerle syntax)

macro static_if(bool cond, a, b) {
if (cond) {
| a |
} else {
| b |
}
}

--bb


let's start from the end,
yes, you can implement static if that way but I don't see why would
you want to do that.

regarding constant folding and 2 phase compilation:
from what I know, DMD currently contains inside two backends, the
regular one generates the executable *and* an interpreter which does CTFE



Constant folding is an optimization used by both.

basically we already have two phases of compilation but they are done
internally by DMD. This means that DMD has two separate backends
instead of just one and that you need separate syntax to target the
different phases.

The major problems I see with the current system:
- unnecessary duplication of syntax
- two backends instead of just one which complicates the compiler
implementation


There's only one backend. The interpreter is basically just constant
folding, with a small amount of interpreting of statements. 90% of the
CTFE complexity is in the constant folding. The interpreter itself is
only about 200 lines of code. But the real backend is huge.


- unclear separation of phase in code:
auto arr = [1.0, 2.0, bar(5.0)]; // when is bar executed?
also, how can I control when it's executed? This is needlessly
confusing and doesn't provide enough control to the programmer.


That particular example is caused by array literals not being immutable,
which I am certain is a mistake.


it's analogous to structs vs. classes. I'm sure everyone in the D
community agree that this separation of concerns is much better than
the bug-prone c++ way.


Don,
what's your opinion regarding two phase compilation a-la Nemerle vs. 
the current D model?


btw, another benefit I forgot to mention: in Nemerle, compile-time code 
is precompiled, which solves a lot of the problems with C++-style 
template code


Re: D library projects : adopting Boost license

2009-11-14 Thread Yigal Chripun

On 13/11/2009 20:51, Walter Bright wrote:

Yigal Chripun wrote:

[...]


On dsource you wrote: The current situation requires to get an explicit
permission to change the license from each contributor for his code and
if someone cannot be contacted for any reason, his contribution cannot
be re-licensed.

That's a big problem. The only solution I can see is to relicense with
the Boost license whatever you can of Tango. We faced the same issue
with Phobos, and we're just going to dump what cannot be relicensed.


This is very important IMO, probably as important as the license itself.
This is exactly why the GNU project rejects contributions, even if they 
are licensed under the GPL, unless the contributor agrees to assign 
copyright to the FSF (the legal entity behind the GNU project).
Almost all open source projects do the same. A notable exception is the 
Linux kernel, and I think this influenced the decision not to upgrade to 
GPLv3.


Does that mean that all of Phobos is under one legal entity - Digital 
Mars, I presume? If not, then it really should be, and you should require 
the same policy for future contributions.
I don't want to see each module's copyright held by a different person 
(Andrei, Sean, you, etc.).






Re: D library projects : adopting Boost license

2009-11-14 Thread Yigal Chripun

On 15/11/2009 00:28, dsimcha wrote:

== Quote from Yigal Chripun (yigal...@gmail.com)'s article

On 13/11/2009 20:51, Walter Bright wrote:

Yigal Chripun wrote:

[...]


On dsource you wrote: The current situation requires to get an explicit
permission to change the license from each contributor for his code and
if someone cannot be contacted for any reason, his contribution cannot
be re-licensed.

That's a big problem. The only solution I can see is to relicense with
the Boost license whatever you can of Tango. We faced the same issue
with Phobos, and we're just going to dump what cannot be relicensed.

This is very important IMO, probably as important as the license itself.
This is exactly why the GNU project rejects contributions even if they
are licensed under the GPL unless the the contributer agrees to give
ownership of the copyright to the FSF (the legal entity for the GNU
project).
Almost all open source projects do the same. a notable exception is the
linux kernel and I think this influenced the decision to not upgrade to
GPL3.
Does that mean that all of Phobos is under one legal entity - Digital
Mars I presume? If not, than it really should be and you should require
the same policy for future contributions.
I don't want to see each module licensed under a different person
(Andrei, Sean, You, etc..).


I personally would have a hard time giving the copyright up for stuff that I
worked on without pay.  I don't mind licensing it permissively, but the idea 
that
it's even possible (even if it's not likely) for someone to prevent me from
relicensing subsequent versions of my own code under whatever terms I want bothers me.
For example, let's say that (hypothetically, not that this has any chance of
happening) that Digital Mars switched to GPL for Phobos.  If I had given them 
the
copyright to my code, I wouldn't be able to keep the stuff I wrote permissively
licensed.


I can't see how that's possible. If you contribute to Phobos under the 
Boost license and Phobos is re-licensed under the GPL, any future 
versions would be GPL, but you should be able to fork your original 
Boost-licensed version and release subsequent versions of that fork 
under the Boost license.


A project needs the ability to adapt its license in the future for 
various reasons. Case in point, Tango: they are discussing changing the 
license, maybe even going with a Phobos-compatible license to help a 
merger of the two code bases. This requires all contributors (past and 
present) to agree, and if somebody cannot be contacted for whatever 
reason (maybe he lost interest in Tango and D), then his code cannot be 
re-licensed. Big problem.


Re: D library projects : adopting Boost license

2009-11-13 Thread Yigal Chripun

Robert Jacques wrote:
On Fri, 13 Nov 2009 01:08:03 -0500, Yigal Chripun yigal...@gmail.com 
wrote:



Robert Jacques wrote:
 The Apache 2.0 license requires attribution. It's therefore 
unsuitable for a standard library. From the website FAQ:


It forbids you to:
redistribute any piece of Apache-originated software without proper 
attribution;
use any marks owned by The Apache Software Foundation in any way that 
might state or imply that the Foundation endorses your distribution;
use any marks owned by The Apache Software Foundation in any way that 
might state or imply that you created the Apache software in question.

 It requires you to:
include a copy of the license in any redistribution you may make that 
includes Apache software;
provide clear attribution to The Apache Software Foundation for any 
distributions that include Apache software.




excerpts from http://www.apache.org/licenses/LICENSE-2.0.html

Derivative Works shall mean any work, whether in Source or Object 
form, that is based on (or derived from) the Work and for which the 
editorial revisions, annotations, elaborations, or other modifications 
represent, as a whole, an original work of authorship. For the 
purposes of this License, Derivative Works shall not include works 
that remain separable from, or merely link (or bind by name) to the 
interfaces of, the Work and Derivative Works thereof.


4. Redistribution. You may reproduce and distribute copies of the Work 
or Derivative Works thereof in any medium, with or without 
modifications, and in Source or Object form, provided that You meet 
the following conditions:


1. You must give any other recipients of the Work or Derivative 
Works a copy of this License; and


2. You must cause any modified files to carry prominent notices 
stating that You changed the files; and


3. You must retain, in the Source form of any Derivative Works 
that You distribute, all copyright, patent, trademark, and attribution 
notices from the Source form of the Work, excluding those notices that 
do not pertain to any part of the Derivative Works; and



/quote

my understanding of the above is that using tango in your code doesn't 
constitute as Derivative Works. that means that _uesrs_ of Tango are 
not required to provide attribution.


First,   according to international copyright law (Berne convention), 
compiling source code creates a derivative work. (See 
http://en.wikipedia.org/wiki/ISC_License for some links)
Second,  4.1 explicitly require you to provide the license with all 
distributions.
Third,   Apache's FAQ, which was written by lawyers, instruct users to 
include the license/attribution.
Finally, the linking divide, allows you link together code licensed 
under different licensees. I believe the GPL also has a similar clause. 
It doesn't mean that if you distribute a compiled copy of the library 
(either explicitly as a dll/so or by statically linking it in) you don't 
have to include the Apache license. You just don't have to license your 
application which uses Tango under the Apache license.


There was a large discussion a while back about this, and essentially 
there are only 2 licenses suitable for a standard library: Boost and 
zlib/libpng (And technically WTFYW).




Ok, I ain't a lawyer, so let's see if I understood you correctly:

You're saying that if I write code using Tango, I can license *my* code 
however I want. My program will require a Tango DLL to work, and *that* 
DLL must come with its Apache 2.0 license file.


That sounds completely reasonable to me. I don't see what the problem is 
with this scheme of things.


Re: D library projects : adopting Boost license

2009-11-13 Thread Yigal Chripun

Don wrote:

Yigal Chripun wrote:

Robert Jacques wrote:
On Fri, 13 Nov 2009 01:08:03 -0500, Yigal Chripun 
yigal...@gmail.com wrote:



Robert Jacques wrote:
 The Apache 2.0 license requires attribution. It's therefore 
unsuitable for a standard library. From the website FAQ:


It forbids you to:
redistribute any piece of Apache-originated software without proper 
attribution;
use any marks owned by The Apache Software Foundation in any way 
that might state or imply that the Foundation endorses your 
distribution;
use any marks owned by The Apache Software Foundation in any way 
that might state or imply that you created the Apache software in 
question.

 It requires you to:
include a copy of the license in any redistribution you may make 
that includes Apache software;
provide clear attribution to The Apache Software Foundation for any 
distributions that include Apache software.




excerpts from http://www.apache.org/licenses/LICENSE-2.0.html

Derivative Works shall mean any work, whether in Source or Object 
form, that is based on (or derived from) the Work and for which the 
editorial revisions, annotations, elaborations, or other 
modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include 
works that remain separable from, or merely link (or bind by name) 
to the interfaces of, the Work and Derivative Works thereof.


4. Redistribution. You may reproduce and distribute copies of the 
Work or Derivative Works thereof in any medium, with or without 
modifications, and in Source or Object form, provided that You meet 
the following conditions:


1. You must give any other recipients of the Work or Derivative 
Works a copy of this License; and


2. You must cause any modified files to carry prominent notices 
stating that You changed the files; and


3. You must retain, in the Source form of any Derivative Works 
that You distribute, all copyright, patent, trademark, and 
attribution notices from the Source form of the Work, excluding 
those notices that do not pertain to any part of the Derivative 
Works; and



/quote

my understanding of the above is that using tango in your code 
doesn't constitute as Derivative Works. that means that _uesrs_ of 
Tango are not required to provide attribution.


First,   according to international copyright law (Berne convention), 
compiling source code creates a derivative work. (See 
http://en.wikipedia.org/wiki/ISC_License for some links)
Second,  4.1 explicitly require you to provide the license with all 
distributions.
Third,   Apache's FAQ, which was written by lawyers, instruct users 
to include the license/attribution.
Finally, the linking divide, allows you link together code licensed 
under different licensees. I believe the GPL also has a similar 
clause. It doesn't mean that if you distribute a compiled copy of the 
library (either explicitly as a dll/so or by statically linking it 
in) you don't have to include the Apache license. You just don't have 
to license your application which uses Tango under the Apache license.


There was a large discussion a while back about this, and essentially 
there are only 2 licenses suitable for a standard library: Boost and 
zlib/libpng (And technically WTFYW).




Ok, I ain't a layer so let's see if I understood you correctly:

You're saying that if I write code using Tango, I can license *my* 
code with whatever I want. My source will require a tango dll to work 
and *that* dll must come with its apache 2.0 license file.


That sounds completely reasonable to me. I don't get what the problem 
with this scheme of things.


At the present time, D DLLs don't work with D apps. Only static linking 
works. And disallowing static linking is utterly ridiculous, anyway.


Conditions 2 and 3 above are no problem. I'm a bit scared of 1, though. 
What does "give" mean? (It's not even "make available".) Sounds as 
though EVERY D app (even "Hello, world" apps) would need to include a 
license file for the standard library.




I agree, supplying the stdlib license file with a "hello, world" 
executable would be very bad.


Generally speaking, is static linking of the stdlib the right thing?
I realize that's the only working option now, but when this is fixed 
(and it really should be fixed) would that still be the correct choice 
for the stdlib?




Re: Getting the error from __traits(compiles, ...)

2009-11-13 Thread Yigal Chripun

grauzone wrote:

Yigal Chripun wrote:


I really wish this was folded into the language by allowing structs to 
implement interfaces.


interface Range(T) {
  bool empty();
  void popFront();
  T front();
}

struct MyRange(T) : Range!(T) { ... } // checked by compiler



One problem with this was that arrays wouldn't automagically be ranges 
anymore. Right now, int[] a; a.popFront(); works, because std.array 
has a global function popFront. Some old language hack turns a.popFront 
into popFront(a).


you're talking about this construct:

int[] arr;
void foo(int[] a, params) {}
// the call foo(arr, params) can be written as arr.foo(params)

This should not be considered a hack; rather, it should be a feature 
extended to all types. See extension methods in C#.
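A runnable sketch of the rewrite as it works on arrays (sum here is an arbitrary free function chosen for illustration):

```d
import std.stdio;

// A free function whose first parameter is int[] ...
int sum(int[] a)
{
    int total = 0;
    foreach (x; a)
        total += x;
    return total;
}

void main()
{
    int[] arr = [1, 2, 3];
    // ... can be invoked with member syntax:
    // arr.sum() is rewritten by the compiler to sum(arr).
    writeln(arr.sum()); // prints 6
}
```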


I also think that arrays (containers) must be distinct from 
slices/ranges (views).


Re: Getting the error from __traits(compiles, ...)

2009-11-13 Thread Yigal Chripun

On 13/11/2009 19:30, Bill Baxter wrote:

On Fri, Nov 13, 2009 at 8:40 AM, bearophile <bearophileh...@lycos.com>
wrote:

Bill Baxter:

2) how to get and report errors related to failure to compile
some code. (this one I hadn't thought of back then)


I'd like a static foreach too. Eventually most statements will
have a static version. At that point people will start seeing this
little duplication in the language and someone may invent a way to
throw away all the static versions and allow normal D code to be
used at compile time, maybe with a 2-stage compilation or
something.


A static switch would be nice too.   static if (is(type == xxx)) {}
else static if (is(type==yyy)) {} else static if ... gets kinda
tedious.


The kind of unification you're talking about is one thing I like
about Nemerle's 2-phase macros-as-plugins.  The code you execute at
compile time is written in exactly the same language as what you
execute at runtime.  And no CTFE engine is required to make it work.
Only one new construct required, the macro facility itself.

But I don't think that leads to elimination static if, etc.  It just
means that such things become implementable as macros, rather than
language constructs.

On the other hand, having macros doesn't mean that you don't want
constant folding.  And if you can fold the constant 2+3, why not the
constant add(2,3)?  So desire to fold as many constants as possible
naturally leads to a desire to do CTFE and be able to execute your
entire language at compile time.

And once you're there -- yeh, I guess you're right.   Ultimately
it's not really necessary to specify static if vs regular if.   It's
yet another extension of constant folding -- if the condition is a
compile time constant, then it can act as a static if.   Same goes
for loops. But like regular loop unrolling optimizations, the
compiler should decide if it's prudent to unroll that 10,000 static
foreach loop or not.

So in short.  I think you're right.  static if  should go away.
But 2-stage compilation isn't really necessary, just more
extensions to the constant folding engine.  (Or perhaps you could say
constant folding is already a separate stage of a 2-stage process)

--bb


I don't follow your logic regarding CTFE.

with 2 phase macros a-la nemerle:

macro foo() {
  int res = 2 + 3;
  return res;
}

macro bar() {
  return q{2 + 3};
}

foo's addition is done at compile time, so the constant folding is 
effectively implemented in the macro body.


bar returns the AST for the expression 2 + 3. Compiler optimizations 
like constant folding will apply just as if you had written that 
expression yourself instead of generating it by calling a macro.


static if is not supposed to be implemented with macros, rather the 
equivalent of a static if would be using a regular if *inside* the body 
of the macro.


Re: D library projects

2009-11-12 Thread Yigal Chripun

Walter Bright wrote:
For anyone looking for an easy, but valuable, contribution to D, take a 
look at the go runtime library.


There's a lot in there we could use in the D library:

http://golang.org/pkg/

The library is licensed under the 
http://creativecommons.org/licenses/by/3.0/

meaning we can adapt it to D.

Some packages that look particularly useful are:

archive.tar
compress.flate
crypto
debug
ebnf
encoding
gob
http
image
net
rpc


Go also has a go package that contains:

ast
doc
parser
printer
scanner
token

see http://golang.org/pkg/go/


Re: D library projects : adopting Boost license

2009-11-12 Thread Yigal Chripun

Robert Jacques wrote:


The Apache 2.0 license requires attribution. It's therefore unsuitable 
for a standard library. From the website FAQ:


It forbids you to:
redistribute any piece of Apache-originated software without proper 
attribution;
use any marks owned by The Apache Software Foundation in any way that 
might state or imply that the Foundation endorses your distribution;
use any marks owned by The Apache Software Foundation in any way that 
might state or imply that you created the Apache software in question.


It requires you to:
include a copy of the license in any redistribution you may make that 
includes Apache software;
provide clear attribution to The Apache Software Foundation for any 
distributions that include Apache software.




excerpts from http://www.apache.org/licenses/LICENSE-2.0.html

Derivative Works shall mean any work, whether in Source or Object 
form, that is based on (or derived from) the Work and for which the 
editorial revisions, annotations, elaborations, or other modifications 
represent, as a whole, an original work of authorship. For the purposes 
of this License, Derivative Works shall not include works that remain 
separable from, or merely link (or bind by name) to the interfaces of, 
the Work and Derivative Works thereof.


4. Redistribution. You may reproduce and distribute copies of the Work 
or Derivative Works thereof in any medium, with or without 
modifications, and in Source or Object form, provided that You meet the 
following conditions:


   1. You must give any other recipients of the Work or Derivative 
Works a copy of this License; and


   2. You must cause any modified files to carry prominent notices 
stating that You changed the files; and


   3. You must retain, in the Source form of any Derivative Works that 
You distribute, all copyright, patent, trademark, and attribution 
notices from the Source form of the Work, excluding those notices that 
do not pertain to any part of the Derivative Works; and



/quote

my understanding of the above is that using Tango in your code doesn't 
constitute a Derivative Work. That means that _users_ of Tango are not 
required to provide attribution.


Re: Getting the error from __traits(compiles, ...)

2009-11-12 Thread Yigal Chripun

Bill Baxter wrote:

On Thu, Nov 12, 2009 at 1:00 PM, Walter Bright
newshou...@digitalmars.com wrote:

Walter Bright wrote:

Bill Baxter wrote:

Any other thoughts about how to get the failure info?   This is
probably the main complaint against __traits(compiles), that there's
no way to find out what went wrong if the code doesn't compile.  Often
it can just be a typo.  I know I've spent plenty of time looking at
static if(__traits(compiles, ...)) checks that weren't working only to
discover I switched an x for a y somewhere.  Or passed the wrong
number of arguments.

I agree it's a problem. Perhaps we can do:

  __traits(compiles_or_msg, ...)

which would print the error messages, at least making it easier to track
down.

Eh, scratch that dumb idea. Just remove the __traits(compiles, ...) and
replace it with ..., and you'll get the message!


Maybe that is enough combined with a const code snippet

enum code = q{
R r; // can define a range object
if (r.empty) {}  // can test for empty
r.popFront;  // can invoke next
auto h = r.front; // can get the front of the range
}
static if (__traits(compiles, mixin(code))) {
mixin(code);
}
else {
pragma(msg, "Unable to instantiate code for type
T=`" ~ T.stringof ~ "`:\n" ~ code);
pragma(msg, "Compiler reports: ");
mixin(code);
}

But I was really hoping for a separation of Interface definition and
Interface verification.  With the above you'll have to have two
templates for every interface,  like  isForwardRange!(T) (evals to
bool)  and assertIsForwardRange!(T)  (reports the compiler error or is
silent).   Hmm unless

template assertIsInputRange(T, bool noisy=true) {
 enum code = q{
 R r; // can define a range object
 if (r.empty) {}  // can test for empty
 r.popFront;  // can invoke next
auto h = r.front; // can get the front of the range
 };
 static if (!__traits(compiles, mixin(code))) {
static if (noisy) {
pragma(msg, "Type T=`" ~ T.stringof ~ "` doesn't support
interface:\n" ~ code);
pragma(msg, "Compiler reports: ");
}
mixin(code);
 }
}

template isInputRange(T) {
 enum bool isInputRange = __traits(compiles, assertIsInputRange!(T, false));
}

And then we could wrap the whole shebang in a fancy code-generating
string mixin and define things like the above using:

mixin(DefineInterface(
"InputRange",
q{
 R r; // can define a range object
 if (r.empty) {}  // can test for empty
 r.popFront;  // can invoke next
auto h = r.front; // can get the front of the range
 }));

mixin(DefineInterface!(assertIsInputRange)(
"ForwardRange",
 q{
R r1;
R r2 = r1;   // can copy a range object
  })));

Writing DefineInterface is left as an exercise for the reader. :-)
But it looks do-able.
And DefineInterface could take a variadic list of assertIsXXX template
aliases and generate code to check each one.

--bb


I really wish this was folded into the language by allowing structs to 
implement interfaces.


interface Range(T) {
  bool empty();
  void popFront();
  T front();
}

struct MyRange(T) : Range!(T) { ... } // checked by compiler
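Until something like that lands in the language, the check can be approximated in library code. A hedged sketch: this isRange template is hypothetical and plays the role the interface declaration would, using __traits(compiles, ...) on a function literal.

```d
// Hypothetical compile-time check standing in for "struct : interface".
template isRange(R, T)
{
    enum isRange = __traits(compiles, (R r) {
        bool e = r.empty;   // can test for empty
        r.popFront();       // can advance
        T f = r.front;      // can read the front element
    });
}

struct MyRange
{
    int i;
    @property bool empty() { return i >= 3; }
    void popFront() { ++i; }
    @property int front() { return i; }
}

static assert(isRange!(MyRange, int)); // checked by the compiler
static assert(!isRange!(int, int));    // a plain int is not a range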



Re: Semantics of toString

2009-11-12 Thread Yigal Chripun

Steven Schveighoffer wrote:
On Thu, 12 Nov 2009 17:13:06 -0500, Andrei Alexandrescu 
seewebsiteforem...@erdani.org wrote:



Bill Baxter wrote:

On Thu, Nov 12, 2009 at 1:54 PM, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


Let's not forget that this is mainly for debugging...

If it's mainly for debugging maybe it's not worth spending time on.

 Nonsense!  Developers spend a lot of time debugging.  Helping people
debug their programs is certainly worth spending time on.
 --bb


Sorry sorry. I just meant to say it's not worth coming with an 
airtight design. We might afford some extra conversions and extra 
virtual calls I guess.


But that being said, I'd so much want to start thinking of an actual 
text serialization infrastructure. Why develop one later with the 
mention well use that stuff for debugging only, this is the real stuff.


The main purpose to serialize is to be able to deserialize.  The main 
reason to print debug information is so a person can read it.  I don't 
know if those two goals overlap enough.


I think we need both.  Maybe one uses the other, I'm not sure, but a way 
to say here's how you interact with writefln and friends would be very 
nice.


-Steve


I'd add to that that a format facility should be locale-aware, as in .NET.
i.e. (pseudo-code):

auto str = format("{0}", 2.4, CurrentCulture);
// or specify a specific locale

str will be either "2.4" or "2,4" based on the locale.

this serves an entirely different purpose from serialization even though 
both have common parts.


you can't and shouldn't try to de-serialize the above text representation.
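A hedged sketch of what such a facility could look like in D. Culture, enUS, deDE, and this format function are all made up for illustration (with .NET's CultureInfo as the model); a real implementation would cover far more than the decimal separator.

```d
import std.array : replace;
import std.conv : to;

// Hypothetical culture type: only the decimal separator, for brevity.
struct Culture { string decimalSeparator; }

immutable enUS = Culture(".");
immutable deDE = Culture(",");

string format(double value, Culture c)
{
    // Render with '.' and then swap in the culture's separator.
    return value.to!string.replace(".", c.decimalSeparator);
}

void main()
{
    assert(format(2.4, enUS) == "2.4");
    assert(format(2.4, deDE) == "2,4");
}
```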


Re: static static

2009-11-11 Thread Yigal Chripun

bearophile wrote:

Yigal Chripun:


Regardless of usefulness (or good design) of such variables, this sounds
extremely dangerous. The compiler must not change semantics of the
program based on optimization. optimizing away such variables most
definitely alters the semantics.


Maybe you have misunderstood, or I have explained the things badly. So I 
explain again.

I have seen that LDC (when it performs link-time optimization, that's not done 
in all situations) keeps just one copy of constants inside the binary even if 
such constants are present in more than one template instance. In the 
situations where LTO is available I think this doesn't cause problems.

Then I am half-seriously proposing a syntax like:
T foo(T)(T x) {
  static static int y;
  // ...
}

Where the y is now static to (shared among) all instances of the templated 
function foo. This may be a little error-prone and maybe not that useful, but 
again here the compiler doesn't change the semantics of the program, because 
using a double static keyword the programmer has stated such intention.

Bye,
bearophile


Oh, ok. It seems I completely misunderstood you. It wasn't clear to me 
before that you were talking about constants. Of course it's perfectly 
OK to optimize _constants_ like that.


IMO, static is harmful and should be avoided. Some newer languages 
recognize this and remove it from the language entirely. I'd like to see 
D going down that path rather than adding even more ways to use static.


regarding your concrete proposal - as others said, you can use global 
variables for that or put this inside a struct if you want to limit the 
scope.


Re: static static

2009-11-10 Thread Yigal Chripun

bearophile wrote:

When I convert a function to a templated function (for example
because I know the value of an argument at compile time, so using a
template gives me a poor's man partial compilation) the static
variables get duplicated for each instance of the function template,
and I may need to use true global variables/constants (but if you use
link-time optimization then LDC is able to remove such shared
constants). So I was thinking about a static static attribute that
avoid moving the statics to globals. Is this a useless idea?

Bye, bearophile


Regardless of the usefulness (or good design) of such variables, this sounds
extremely dangerous. The compiler must not change the semantics of the
program based on optimization; optimizing away such variables most
definitely alters the semantics.

I wonder, how do other languages treat static variables inside templated 
functions?
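To make the duplication bearophile describes concrete, a small D sketch: each template instantiation gets its own copy of the static variable (nextId is a made-up example function).

```d
int nextId(T)()
{
    static int counter; // one copy per instantiation of nextId!T
    return ++counter;
}

void main()
{
    assert(nextId!int() == 1);
    assert(nextId!int() == 2);    // int's counter kept its state
    assert(nextId!double() == 1); // double's counter is separate
}
```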


Re: Safety, undefined behavior, @safe, @trusted

2009-11-07 Thread Yigal Chripun

On 07/11/2009 11:53, Don wrote:

Walter Bright wrote:

grauzone wrote:

If you mean memory safety, then yes and will probably forever be for
all practical uses (unless D gets implemented on a Java or .net like
VM).


A VM is neither necessary nor sufficient to make a language memory
safe. It's all in the semantics of the language.


In practice, the big disadvantage which D has is that it can make calls
to C libraries which are not necessarily memory safe -- and this is an
important feature of the language. Dealing with the external,
uncheckable libraries is always going to be a weak point. Both Java and
.net have mitigated this by rewriting a fair chunk of an OS in their
libraries. That's probably never going to happen for D.



Sun pretty much implemented a full OS inside the JVM. At least their RT 
offering contains a scheduler in order to provide guarantees regarding 
collection time.


In .NET land, MS uses .NET to implement parts of their OS, so it's no 
surprise that those OS APIs are available to .NET code. I wouldn't say 
they're part of the libraries but rather parts of the OS itself.


What parts of the OS are still missing from D's standard library? Don't 
Tango/Phobos already provide all the common parts like I/O and 
networking, while a few other major libs provide bindings/implementations 
for UI, 3D & multimedia, databases, etc.?


I think the big disadvantage you claim D has isn't that big, and it is 
well on its way to disappearing. Both Java and .NET also provide ways to 
use unsafe C code (e.g. JNI, COM); it's just a matter of what's the 
default, what's easier to do, and what can be done without choosing the 
unsafe option. I think D isn't that far behind and could and should 
catch up.




Re: Safety, undefined behavior, @safe, @trusted

2009-11-07 Thread Yigal Chripun

Christopher Wright wrote:

Yigal Chripun wrote:
In .Net land, MS uses .net to implement parts of their OS so no 
surprise there that those OS APIs are available to .net code.


Really? What parts?

There are a bajillion APIs that you can use from .NET that aren't 
written in .NET. Microsoft just made it easier to use native code from 
.NET than Java does.


WPF, for one. Yes, it uses an unmanaged low-level engine called MIL to 
improve performance and interoperability, but the windowing APIs 
themselves are .NET only, and it's not just wrappers: it contains over 
3000 classes according to MSDN.


Of course there are a bajillion non-.NET APIs that are accessible from 
.NET. That's because MS has backward-compatibility support reaching back to 
the DOS era. New technology, however, is done in .NET.


Re: Arrays passed by almost reference?

2009-11-06 Thread Yigal Chripun

On 06/11/2009 07:07, Bob Jones wrote:

Leandro Lucarella <llu...@gmail.com> wrote in message
news:20091106035612.gi3...@llucax.com.ar...


I am not fully against pass-by-ref arrays, I just think passing by
reference all of the time could have some performance implications.


OK, make 2 different types then: slices (value types, can't append, they
are only a view on other's data) and dynamic arrays (reference type, can
append, but a little slower to manipulate).

It's a shame this idea didn't come true after all...


That's the whole problem. Dynamic arrays and slices are not the same thing,
and having a syntax that allows code to be ignorant of which it is dealing
with is always going to have problems imo. Being able to resize or append to
slices is fubar imo.

I'd go with slices being value types: no concatenation, resizing,
reallocating, etc.

Dynamic arrays could be a library type: a templated struct that has a
pointer, length, or whatever. They can have operator overloads for implicit
conversion to slices, so any code that accepts a slice can take dynamic
arrays, and side effects are prevented. Code that is going to reallocate has to
take a dynamic array. So at least what's happening is more obvious/explicit.




I agree with the above.

The semantics should be:
DynamicArray!(T) is a dynamic array
int[x] is a static array
RandomAccessRange!(T) is a slice

int[] a; // compile error

(names are not important ATM)

I don't think there's a need for a dedicated array slice type; 
instead, slices should be range types.
It should be easy to swap the underlying containers given compatible range 
types.
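A minimal sketch of the split proposed above, in D (all names are hypothetical, chosen only for illustration): a value-type view that cannot grow, and an owning library type that can append and converts to the view on demand:

```d
// Hypothetical sketch: value-type view vs. owning dynamic array.
struct Slice(T)
{
    T* ptr;
    size_t length;
    // a view only: indexing, but no append/resize/reallocate
    ref T opIndex(size_t i) { return ptr[i]; }
}

struct DynamicArray(T)
{
    private T[] data;                     // owns (and may reallocate) storage

    void append(T value) { data ~= value; }

    // a[] yields a view; code taking Slice!T cannot reallocate the storage
    Slice!T opSlice() { return Slice!T(data.ptr, data.length); }
}

size_t count(Slice!int view) { return view.length; } // accepts views only

void main()
{
    DynamicArray!int a;
    a.append(1);
    a.append(2);
    assert(count(a[]) == 2);              // the conversion is explicit here
}
```

With `alias this` the conversion could be made implicit, as Bob Jones suggests, while reallocation would still require naming the owning type explicitly.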




Re: Semantics of toString

2009-11-06 Thread Yigal Chripun

On 06/11/2009 07:34, Don wrote:

Nick Sabalausky wrote:

Don nos...@nospam.com wrote in message
news:hcvf9l$91...@digitalmars.com...

Justin Johansson wrote:

So what does toString mean to you?

It's a hack from the early days of D. Should be unavailable unless
the -debug flag is set, to discourage people from using it. I hate it.



What don't you like about it?



It cannot even do the most basic stuff.
(1) You can't even make a struct that behaves like an int.

struct MyInt
{
    int z;
    string toString() { ... }
}

void main()
{
    int a = 400;
    MyInt b = 400;
    writefln("%05d %05d", a, b);
    writefln("%x %x", a, b);
}

(2) It doesn't behave like a stream. Suppose you have XmlDoc.toString().
You can't emit the doc piece by piece. You have to create the ENTIRE
string in one go!




The first issue you raise is IMO a problem with writefln and not with 
toString, since writefln doesn't handle user-defined types properly.


I think that writefln (btw, a horrible name) should only deal with strings 
and their formatting; all other types need to provide an (optionally 
formatted) string.
A numeric type would provide formatting properties like the number of 
decimal places, thousands separator, etc., while a user-defined 
specification type could provide a kind of standard format:


auto spec = new Specification("HTML");
string ansi = spec.toString(Specification.ANSI);
string iso = spec.toString(Specification.ISO);
writefln("{1} {0}", ansi, iso); // I'm using the Tango/C# formatting

The C-style format string that encodes types is a horrible, horrible 
thing and should be removed.


Regarding the second issue:
foreach (node; XmlDoc.preOrder()) writefln("{0}", node.toString());
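Don's streaming complaint can also be met with a sink-based toString, where the caller supplies a delegate and the type emits its text piece by piece instead of allocating one big string. A minimal sketch (the XmlDoc contents here are invented for illustration):

```d
import std.stdio;

// Hypothetical sketch: a sink-based toString streams the document
// fragment by fragment; no full string is ever built in memory.
struct XmlDoc
{
    string[] nodes = ["<a>", "<b/>", "</a>"];

    // the caller supplies the sink, so output can go anywhere
    void toString(scope void delegate(const(char)[]) sink) const
    {
        foreach (n; nodes)
            sink(n);
    }
}

void main()
{
    XmlDoc doc;
    doc.toString(s => write(s)); // streams straight to stdout
    writeln();
}
```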


Re: Safety, undefined behavior, @safe, @trusted

2009-11-05 Thread Yigal Chripun

On 05/11/2009 23:24, Andrei Alexandrescu wrote:

Nick Sabalausky wrote:

Walter Bright newshou...@digitalmars.com wrote in message
news:hcv5p9$2jh...@digitalmars.com...

Based on Andrei's and Cardelli's ideas, I propose that Safe D be
defined as the subset of D that guarantees no undefined behavior.
Implementation defined behavior (such as varying pointer sizes) is
still allowed.

Safety seems more and more to be a characteristic of a function,
rather than a module or command line switch. To that end, I propose
two new attributes:

@safe
@trusted



Sounds great! The lower-grained safeness makes a lot of sense, and I'm
thrilled at the idea of safe D finally encompassing more than just
memory safety - I'd been hoping to see that happen ever since I first
heard that SafeD only meant memory-safe.


I can think of division by zero as an example. What others are out there?

Andrei


Safe arithmetic like C#'s checked context, which guards against overflow 
(throws on overflow).
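A sketch of what such checked arithmetic could look like in D, built on druntime's core.checkedint (whose adds helper sets an overflow flag rather than throwing); the checkedAdd wrapper below is hypothetical:

```d
import core.checkedint : adds;

// Hypothetical wrapper: turn the overflow flag from core.checkedint
// into an exception, mimicking C#'s checked arithmetic.
int checkedAdd(int a, int b)
{
    bool overflow;
    int result = adds(a, b, overflow); // flag is set if the add wrapped
    if (overflow)
        throw new Exception("integer overflow in checked add");
    return result;
}

void main()
{
    assert(checkedAdd(1, 2) == 3);
    bool threw;
    try
        checkedAdd(int.max, 1);        // would wrap; throws instead
    catch (Exception e)
        threw = true;
    assert(threw);
}
```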



