Re: C++ guys hate static_if?

2013-03-11 Thread so
On Saturday, 9 March 2013 at 14:09:39 UTC, Andrei Alexandrescu 
wrote:


Wow. That's quite out of character for Bjarne. I think it's 
quite a poor piece.


Andrei


I simply didn't have the stomach to read it past the first 
paragraph. They have no idea what they are talking about. It is 
as if they never used templates for more than writing one-liners. 
Harder to read, understand, debug? I am just speechless.


Re: Learning Haskell makes you a better programmer?

2012-12-25 Thread so

On Tuesday, 25 December 2012 at 20:50:35 UTC, SomeDude wrote:

As for being a better programmer after having used some 
advanced concepts, I don't know. I think every feature of a 
language must be used where appropriate. I've seen some Python 
code using heavily map/filter/etc that was simply unreadable to 
me. In some places, I find  it easier to understand for loops, 
while in other cases, using functional style programming 
conveys the intent better. But maybe that's just me.


I didn't know about "set -o vi" until a few weeks ago; learning 
it made me a better Linux user, since I knew both vi and the 
terminal, but it didn't make me a better PC user in general. If 
my environment doesn't evolve with me, or lacks tools I can 
combine to make something new (probably I'm repeating myself), 
then I gain nothing from learning a feature of another language.


IMO, learning programming language X doesn't make you a better 
programmer; learning X makes you a better X programmer. But if 
your existing environment/language is extremely flexible and 
takes code generation *very* seriously, you have a case: you gain 
something from learning a new feature or paradigm. That is why 
Lisp fascinates me, as I believe code generation is one of the 
most important (if not the most important) things in a PL.


Re: Learning Haskell makes you a better programmer?

2012-12-25 Thread so

On Tuesday, 25 December 2012 at 19:37:42 UTC, Walter Bright wrote:
I've often heard that claim, but here's an article with what 
the substance is:


http://dubhrosa.blogspot.co.uk/2012/12/lessons-learning-haskell.html?m=1

Note that D offers this style of programming, with checkable 
purity, immutability and ranges. I think it is a very important 
paradigm.


The same is often said of Lisp, for (IMO) far, far better 
reasons, but it is still pure nonsense. In my case, the more Lisp 
I learn, the more I hate C++, and since I have to use it, I 
become frustrated and unproductive. C++ made me develop a hatred 
of languages that get in your way for absolutely no reason 
(example: the lack of static if) or lock you into certain 
paradigms (a single one, for Haskell?).


I am reading the book "Paradigms of Artificial Intelligence 
Programming: Case Studies in Common Lisp", and I have to say it 
was a somewhat shocking experience. A popular book about the 
highest-level language includes chapters on writing interpreters 
and compilers, yet I have never come across a C/C++ book covering 
such things, even though C and C++ are the ones called low-level 
languages. I know it is much easier to write these in Lisp than 
in C, but try to see my point.


Sorry for being a Lisp advocate on a D forum, but you know I am 
on your side; we are all (I hope) trying to find the best tool 
for the job.


Re: D vs C++11

2012-11-02 Thread so

On Friday, 2 November 2012 at 22:47:20 UTC, Nick Sabalausky wrote:


No polysemous literals?


Since when does D have polysemous literals? Link, please.



Re: D vs C++11

2012-11-02 Thread so

On Friday, 2 November 2012 at 18:34:13 UTC, Jacob Carlborg wrote:

I would absolutely say that the gap is getting thinner. I would 
mostly say that with C++11 C++ has finally started to catch up 
with D and the rest of the world.


Seriously? It doesn't even have a "static if".


Re: Scaling Scala Vs Java

2012-11-02 Thread so

On Friday, 2 November 2012 at 04:33:33 UTC, bearophile wrote:

Andrei Alexandrescu:

I have a dream that one day there will be a guy with the ID 
philobear discussing D-related stuff on Java and Scala forums.


It's very important to look at how other very good languages 
solve common problems :-)


While I like many of your posts on different languages, there is 
also too much noise, and I now realize that noise has cost me a 
great deal. Only after nearly 10 years of programming did I 
finally give Lisp a try, just because of the phrase "programmable 
programming language".


This might offend some people, but I have to say it regardless: 
Lisp is the most beautiful language I have seen. If I didn't need 
a systems language, there would be no reason for me to use 
anything else.


It took me 10 years to get to know it because of the lack of 
advertisement, and because of the noise and hatred generated by 
the thousands of other random languages that cater to mediocre 
programmers and "the business".


For similar reasons, I worry about D's future too. When a 
language is powerful, it creates programmers who are not 
replaceable, and we know that is not considered a good thing.


Re: alias A = B; syntax

2012-10-16 Thread so

On Tuesday, 16 October 2012 at 17:05:14 UTC, kenji hara wrote:


The result of my challenge:
https://github.com/D-Programming-Language/dmd/pull/1187

Kenji Hara


As far as I remember, Andrei said that this was planned and that 
the syntax would support templates too (like C++0x template 
aliases). So this might not be enough.


Something like:
alias vec(T) = vector!(T, allocator);
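If that lands, a templated alias would read like this; a minimal sketch, assuming the `alias name(T) = ...` form gets accepted, and using Phobos' `Array` as a stand-in for the hypothetical `vector`/`allocator` pair above:

```d
import std.container.array : Array;

// Hypothetical shorthand: bind part of a template, leave T open.
alias Vec(T) = Array!T;

void main()
{
    Vec!int v;          // identical to Array!int
    v.insertBack(42);
    assert(v.length == 1 && v[0] == 42);
}
```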


Re: 48 hour game jam

2012-10-15 Thread so

On Monday, 15 October 2012 at 20:41:55 UTC, Paulo Pinto wrote:

Now imagine those that have experimented how powerful Lisp and 
Smalltalk based OS were.


It is so sad to see IDE makers still trying to replicate the 
experience from those environments.


--
Paulo


Thanks for mentioning that; I searched for "lisp os" and the next 
thing I knew I was at "www.loper-os.org/?p=69". It might be 
offensive to some people, but reading his posts/rants now, I kind 
of like what he says. This is what I was talking about when I 
said I feel lucky that Lisp was not my first language. It looks 
like he experienced the language devolutions first-hand and is 
(rightly so) very frustrated.


An example:

"You will not find a “Thumbs Down for Python” essay in this 
blog, because Python users make no attempt to peddle their crock 
of shit as “the future of Lisp.” I have no quarrel with users 
of Python, Ruby, Dylan, and other shoddy “infix Lisps.” 
Because they are honest.


It is the lying of Clojure users which upsets me, and their 
deliberate attempts to rewrite history, to make people forget 
that truly-interactive, advanced Lisp systems once existed"


Re: 48 hour game jam

2012-10-15 Thread so

On Monday, 15 October 2012 at 21:29:11 UTC, H. S. Teoh wrote:

If you think forbidding templates/STL is crazy, wait till you 
hear about
the people who insist that const is evil and ban it from their 
codebase.
(That was from before C++11, though, I don't know what their 
reaction
would be now that key parts of the language _require_ const. 
Maybe

they've migrated to VB or something. :-P)


T


I can somewhat understand not using the STL, since the library 
assumes you are using certain paradigms, and it sometimes doesn't 
do the job. It is similar to Phobos being designed with the GC in 
mind.


But if you are not even using templates, why bother? For OOP? I 
would never move to C++ for OOP, since in the process you also 
lose something quite important: interoperability with other 
languages. Most (maybe all) languages I know have some kind of 
interface to C. With C++ you lose that one too.




Re: 48 hour game jam

2012-10-15 Thread so

On Monday, 15 October 2012 at 19:18:34 UTC, Peter Alexander wrote:


I'm going to start a bit of a rant now.
...


On this one, I agree with everything you say. D templates inherit 
quite a few of the same errors.


I have a problem with the attitude of some people (not 
necessarily you). They say D templates are better just because of 
things like syntax, and that other than that you could pretty 
much do everything with C++ templates that you can do with D's 
(you simply can't). And they just get away with this. To me it 
only shows how little they know about C++ or D.


Re: 48 hour game jam

2012-10-15 Thread so

On Monday, 15 October 2012 at 17:58:13 UTC, H. S. Teoh wrote:

Now try doing this in C++. It is in all likelihood plain 
impossible, or
so extremely painful that it's not worth the suffering to 
implement.



T


Having read some Lisp recently, I realize I was lucky it was not 
my first language. It would have been a very frustrating 
experience to then have to adopt other languages and be required 
to write code in them. I have no experience with how it scales to 
bigger projects, but as someone who sees the importance of 
metaprogramming in D/C++, it is disturbing to find out it is a 
50-year-old concept.





Re: 48 hour game jam

2012-10-15 Thread so

On Monday, 15 October 2012 at 16:37:08 UTC, Peter Alexander wrote:

...
- Simple(r) templates


I keep seeing statements like this, and maybe I am failing to 
understand them, but this is a vast understatement of D 
templates. Yes, they are easier to use, I agree. But C++ 
templates are stone-age compared to D's, and I don't see this 
mentioned enough where it matters most. It came up in recent 
reddit discussions too: I read those comments (some posters 
obviously have some kind of agenda) and saw no one refuting them. 
They know neither C++ nor D well enough to do metaprogramming; if 
they did, they would never bring templates into any discussion 
that compares C++ and D.


Compared to C++:

* D templates are easier to use.
* There are constructs you simply can't write templates without 
(templates without "static if" are C without "if").
* Some things are possible at all (string/float/alias/... 
arguments) because there is no C++ way of doing them.
* Some things are possible in practice (there are things you can 
achieve with C++ templates, but they are practically impossible).
* You can have all of these and still get higher performance.

There are probably more that I just can't remember now.
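A small, hedged sketch of the bullet points above in D; the names (`scaled`, `op`, `factor`) are mine, purely for illustration. It combines a string template argument, an alias parameter bound to a literal, and `static if` — none of which C++ templates (as of C++11) can express:

```d
import std.stdio : writeln;

// A string argument selects the operation at compile time;
// an alias parameter binds the literal factor; static if prunes branches.
T scaled(string op, alias factor, T)(T x)
{
    static if (op == "mul")
        return x * factor;
    else static if (op == "div")
        return x / factor;
    else
        static assert(0, "unknown op: " ~ op);
}

void main()
{
    writeln(scaled!("mul", 3)(5));  // 15
    writeln(scaled!("div", 2)(10)); // 5
}
```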


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread so

On Thursday, 4 October 2012 at 09:43:08 UTC, monarch_dodra wrote:


The current implementation for "isAssignable" is
//
template isAssignable(Lhs, Rhs)
{
    enum bool isAssignable = is(typeof({
        Lhs l = void;
        void f(Rhs r) { l = r; }
        return l;
    }));
}


OT - Is there any reason for disabling UFCS for this?

template isAssignable(Lhs, Rhs)
{
    enum bool isAssignable =
    {
        Lhs l = void;
        void f(Rhs r) { l = r; }
        return l;
    }.typeof.is;
}


Re: luajit-ffi

2012-05-02 Thread so

On Wednesday, 2 May 2012 at 18:27:18 UTC, agitator wrote:

good bye
you won't be missed


I am just trying to find a better tool to ease the work I do, not 
trying to be one like yourself. So as long as D rocks, I'll be 
following; I just won't post on the forums, since that won't do 
much good when you have a hard time expressing yourself anyway. I 
pity you, as I believe nothing would be that low in spirit if it 
weren't disturbed somehow.


Sorry, everyone!
I can't restrain the urge to answer such posts.
I feel the presence of some higher powers, yes! They are trying 
to tell me... something, perhaps warning me in a way? I keep 
hearing voices along the lines of "NNTP error: 400 loadav 
[innwatch:load] 2084 gt 1500". Yet I... must... click "Send"!


Re: luajit-ffi

2012-05-02 Thread so

On Wednesday, 2 May 2012 at 16:23:28 UTC, David Nadlinger wrote:

Well, after more or less telling Alex that he has no clue what 
he's talking about,


For no reason, right? Could you go back and read why I did what 
you say I did? I might sometimes sound rude. You can blame my 
temper, my nature, or my (quite obvious) lack of English language 
skills, but it never happens without a reason.


So enum and alias have nothing to do with #define? Or does D 
require new syntax to support the things I mentioned? Don't I 
have a right to expect that someone who answers me has already 
understood what I am saying, or asks nicely for clarification?


This is not directed at Alex but at you, since I think we had 
settled the issue, yet you jump in as a self-appointed lawyer. 
There is nothing I hate as much as this: pack behavior in humans. 
It is time for me to leave this community too; I can stand many 
things, but not this. My start on this forum wasn't very bright 
either: I was called a troll for a very legitimate question, 
because a few regulars couldn't comprehend the topic I was 
talking about and/or my cryptic English.


Whatever.

you implied that most of the »mature« libraries would make 
only moderate use of the preprocessor for constants. In this 
generality (»Have you ever used a C api, say OpenGL? What are 
they using preprocessor for? other than enum and alias?«), 
this is simply not true. As Alex pointed out, it's often even 
the other way round: The older and more mature a library is, 
the more preprocessor macros it uses to deal with various 
subtle incompatibilities between all the different systems out 
there…


Re: luajit-ffi

2012-05-02 Thread so

On Wednesday, 2 May 2012 at 11:56:51 UTC, David Nadlinger wrote:

On Tuesday, 1 May 2012 at 16:15:58 UTC, so wrote:

Have you ever used a C api, say OpenGL?
What are they using preprocessor for? other than enum and 
alias?
It is that damn simple. I am not talking about supporting 
Boost level preprocessor exploit. I am talking about mature 
"C" libraries.


Oh _come on_, that's just plain wrong. For example, have you 
ever heard of that library called OpenSSL?


David


Why don't you give a link to the source where they use the 
preprocessor heavily? And even if they do, did I say "all" 
libraries? Yes, you managed to come up with one library with tons 
of header files in the sea of mature C libraries.





Re: luajit-ffi

2012-05-01 Thread so

On Tuesday, 1 May 2012 at 16:56:48 UTC, Alex Rønne Petersen
wrote:

Yes, creating manual bindings is tedious and annoying to 
maintain, but

it is the most foolproof approach.


In another language there may be no way out of that, but in D I 
am not sure.

As it happens, you are safe with D even if you pick that path.
To the best of my limited knowledge, these few lines of code
destroy all the Lua binders out there. Make sure they are
inlined and it would be identical to handcrafted bindings.

WARNING: DO NOT TRY THIS WITH C++! (See the "alias F"?
ParamTuple? Yep.)

import std.stdio;
import std.traits;

struct handle {
int i=0;
int get_arg() {
return i++;
}
}

alias handle* hnd;

void assign(uint N, P...)(ref P p, hnd h)
{
static if(N)
{
p[N-1] = h.get_arg();
assign!(N-1)(p, h);
}
}

int fn(alias F)(hnd h)
{
alias ParameterTypeTuple!(typeof(&F)) P;
P p;

assign!(P.length)(p, h);
F(p);
return 0;
}

// test 
void fun0(int a)
{
writeln("fun0 a: ", a);
}

void fun1(int a, int b)
{
writeln("fun1 a: ", a);
writeln("fun1 b: ", b);
}

void main()
{
//  reg(&fn!fun0, "fun0");
//  reg(&fn!fun1, "fun1");
auto f0 = &fn!fun0;
auto f1 = &fn!fun1;
handle h;
f0(&h);
f1(&h);
}
// test 


Re: Does D have too many features?

2012-05-01 Thread so

On Tuesday, 1 May 2012 at 14:41:43 UTC, SomeDude wrote:
On Tuesday, 1 May 2012 at 14:31:25 UTC, Alex Rønne Petersen 
wrote:


1) So because some people might use a feature incorrectly due 
to lack of knowledge in algorithms and data structures, we 
should cripple the language?


It's not crippling the language. Nothing prevents you from 
writing a loop.
Or using a library find function that does the same thing. But 
the name "find" gives you a hint that it's not magical and that 
it has a cost, while with "if( foo in bar)", it is too easy to 
forget that we are actually potentially performing an O(n) 
operation. In an AA, the 'in' keyword performs a O(1) 
operation, so that's ok to use it as a syntactic sugar.


I remember this was the argument Andrei also came up with, and I
still can't make any sense of it. If someone has a detailed
reference, please share! If you have the operator only for one
niche usage, why have it at all? With UFCS, "bar.contains(foo)"
is precise enough.

You could make the same argument for every operator in any
language that has operator overloading, and it would be an
argument against operator overloading in general, not against
"in" in particular.
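The distinction being argued over can be sketched in D (a hedged example; `canFind` here plays the role of the explicit "find"/"contains" style the post mentions):

```d
import std.algorithm : canFind;

void main()
{
    // 'in' on an associative array: the O(1) lookup everyone accepts.
    int[string] counts = ["foo": 1];
    assert("foo" in counts);

    // Arrays have no 'in'; the O(n) scan must be spelled out,
    // so its cost is visible at the call site.
    int[] xs = [1, 2, 3];
    assert(xs.canFind(2));
}
```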

--
Didn't know "http://forum.dlang.org/" had adopted those 
unreadable, annoying-to-no-end captchas.


Re: luajit-ffi

2012-05-01 Thread so
On Tuesday, 1 May 2012 at 16:46:32 UTC, Alex Rønne Petersen 
wrote:

On 01-05-2012 18:15, so wrote:
On Tuesday, 1 May 2012 at 15:56:32 UTC, Alex Rønne Petersen 
wrote:


What are you talking about? The link you posted clearly shows 
that
LuaJIT has a C parser built in. It has everything to do with 
syntax
(note that FFI is not anything spectacular or innovative; see 
libffi,
Mono, Lisp, ...). And no, D does not "have all the 
structures". If it

did, we wouldn't be redefining them in D bindings.


What am "i" talking about? How hard to understand these two 
things?


Very hard! You're not making it clear what it is you want!

Do you want something like LuaJIT's FFI so you can just drop a 
C header in and get a binding or what? You really need to make 
this clear. I'm not even sure what we're discussing at this 
point.


Oh! Be prepared...

import capi; // and you are done.

Yes... I wrote the libffi-d binding. You might want to have a 
look at your local ffi.h and ffitarget.h to see why the 
preprocessor is a serious problem in automated binding 
generation. As another example, look at gc.h from libgc.


No, not really. See above. It may be that simple for the OpenGL 
headers, but frankly, OpenGL is an example of a reasonably 
designed library, something that can't be said for 98% of all C 
libraries.


As you said, let's be realistic: if we can support things like 
OpenGL/CL, with their tons of typedefs, enums, defines, and 
function declarations (which you seem to agree are the easy 
part), we could have 98% of mature C libraries. And I don't agree 
that 98% of mature libraries are designed worse than those; since 
they target different languages, they tend to have clean 
interfaces.


Re: luajit-ffi

2012-05-01 Thread so
On Tuesday, 1 May 2012 at 16:31:09 UTC, Alex Rønne Petersen 
wrote:


That has nothing to do with the C preprocessor... The point 
here is that you can't just copy/paste a C header into LuaJIT's 
FFI out of the box.


Well, I didn't say you can. But D has great advantages over Lua 
that would let it do that and much more.



Let's be realistic.

In practice, you can almost never just copy/paste a C header 
into LuaJIT's FFI. Even a header guard (#ifndef FOO \ #define 
FOO \ ... \ #endif) will ruin it. The fundamental problem is 
the C preprocessor. The best you can do is take your C header, 
preprocess it with cpp, and then pass it to LuaJIT and hope 
nothing breaks. Usually, not even an approach like that will 
work because of weird compiler extensions used in headers which 
aren't stripped by cpp.


Again, you can't do that in Lua, but you should be able to do it 
in D. If you look at some Lua bindings, you see libraries trying 
to simplify the process, duplicating the same work across tons of 
different projects, all for nothing; and I am sure this is true 
for many other languages. And then you see luajit-ffi, which 
gives both a performance boost and much cleaner code.


This discussion came up before, and the preprocessor was the only 
serious blocker. But if you support it to the level that mature C 
libraries actually use, which means simple alias and enum, you 
have them all.


In an ideal world where no compiler extensions existed, this 
might have had a chance of working, but we all know what the 
situation with C and C++ is.


I am not sure C++ would have even a quarter of its user base if 
it didn't work with C seamlessly.


Yes, creating manual bindings is tedious and annoying to 
maintain, but it is the most foolproof approach.


In another language there may be no way out of that, but in D I 
am not sure.


Re: luajit-ffi

2012-05-01 Thread so
On Tuesday, 1 May 2012 at 15:56:32 UTC, Alex Rønne Petersen 
wrote:



note that FFI is not anything spectacular or innovative;


Have you ever written bindings for any of these scripting 
languages?






Re: luajit-ffi

2012-05-01 Thread so
On Tuesday, 1 May 2012 at 15:56:32 UTC, Alex Rønne Petersen 
wrote:


What are you talking about? The link you posted clearly shows 
that LuaJIT has a C parser built in. It has everything to do 
with syntax (note that FFI is not anything spectacular or 
innovative; see libffi, Mono, Lisp, ...). And no, D does not 
"have all the structures". If it did, we wouldn't be redefining 
them in D bindings.


What am "I" talking about? How hard is it to understand these two 
things: ABI compatibility, and "already" being able to call C 
natively? If you need any syntax, you "already" have it.


What does enum have to do with the C preprocessor? Anyway, it's 
not that simple. Any arbitrary symbol can have multiple 
definitions depending on what path you take in the preprocessor 
forest.


Have you ever used a C API, say OpenGL? What are they using the 
preprocessor for, other than enums and aliases? It is that damn 
simple. I am not talking about supporting Boost-level 
preprocessor exploits. I am talking about mature "C" libraries.


Re: luajit-ffi

2012-05-01 Thread so
On Tuesday, 1 May 2012 at 15:53:37 UTC, Alex Rønne Petersen 
wrote:

On 01-05-2012 17:43, so wrote:

On Tuesday, 1 May 2012 at 15:31:05 UTC, Robert Clipsham wrote:

On 01/05/2012 16:24, so wrote:

http://luajit.org/ext_ffi.html
https://github.com/malkia/ufo

How awesome is Mike Pall?
I didn't dive into details of the code, but if he can do 
this with a
dynamic language, why on earth D still need manual C 
bindings while

having ABI compatibility? So luajit comes with a C compiler?


Note that you can't just drop any C header file in there for 
that to
work (as far as I can tell), you still have to bring out 
individual

post-processed function declarations.


https://github.com/malkia/ufo/blob/master/ffi/OpenCL.lua
https://github.com/malkia/ufo/blob/master/ffi/OpenGL.lua
If it can handle these 2 beasts.


I see no preprocessor directives.


They are all there as "enum".

Also, someone has written a libffi binding for D, which could 
probably

be adapted to work in a similar manner:

https://github.com/lycus/libffi-d


Neat. Still, having native C libraries means that you could just
drop your C/C++ environment and start D. And I am sure you agree
this is by far the biggest blocker for C/C++ developers.


I'm not sure what you're trying to say here. Elaborate/rephrase?


For example, in my projects I implement different tasks in 
different libraries, all of which have C interfaces. With 
something like this, I could access those libraries from D just 
as I access them from C/C++; then why would I need to keep using 
C/C++? The transition would be seamless. You can say "just write 
the damn bindings and be done with it", but that is neither 
scalable nor maintainable.


Re: luajit-ffi

2012-05-01 Thread so
On Tuesday, 1 May 2012 at 15:32:40 UTC, Alex Rønne Petersen 
wrote:

On 01-05-2012 17:24, so wrote:

http://luajit.org/ext_ffi.html
https://github.com/malkia/ufo

How awesome is Mike Pall?
I didn't dive into details of the code, but if he can do this 
with a
dynamic language, why on earth D still need manual C bindings 
while

having ABI compatibility? So luajit comes with a C compiler?


Parsing a C header is relatively easy; you don't need a full 
compiler for that. It's what htod does. I don't know what you 
mean by "manual bindings", but keep in mind that:


1) We *don't* want to embed some kind of crazy C syntax in D.
2) In D, we want statically bound C function calls. Lua is a 
dynamic language.


It has nothing to do with syntax: D can already call C functions 
directly, and it has all the structures.

3) LuaJIT will have the same problems as htod: Preprocessor 
definitions.


We could support just the enum and simple alias capabilities of 
the C preprocessor, and most if not all of the popular C 
libraries would be in D's arsenal. I haven't seen many serious C 
APIs that exploit the preprocessor for more than these simple 
tasks.
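As a rough illustration of that claim, here is how the two dominant preprocessor uses in a typical C header translate into D; `GL_TEXTURE_2D` and `GLuint` are real OpenGL names, but the translation shown is just the usual manual-binding idiom, not output from any tool:

```d
// C: #define GL_TEXTURE_2D 0x0DE1   -> a manifest constant
enum GL_TEXTURE_2D = 0x0DE1;

// C: typedef unsigned int GLuint;   -> a simple alias
alias GLuint = uint;

// Function declarations carry over as extern(C) prototypes
// (declared only, not called, so no GL library is needed to build):
extern (C) void glBindTexture(uint target, GLuint texture);

void main()
{
    // The constant and alias behave like their C counterparts.
    assert(GL_TEXTURE_2D == 0x0DE1);
    static assert(is(GLuint == uint));
}
```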


Re: luajit-ffi

2012-05-01 Thread so

On Tuesday, 1 May 2012 at 15:31:05 UTC, Robert Clipsham wrote:

On 01/05/2012 16:24, so wrote:

http://luajit.org/ext_ffi.html
https://github.com/malkia/ufo

How awesome is Mike Pall?
I didn't dive into details of the code, but if he can do this 
with a
dynamic language, why on earth D still need manual C bindings 
while

having ABI compatibility? So luajit comes with a C compiler?


Note that you can't just drop any C header file in there for 
that to work (as far as I can tell), you still have to bring 
out individual post-processed function declarations.


https://github.com/malkia/ufo/blob/master/ffi/OpenCL.lua
https://github.com/malkia/ufo/blob/master/ffi/OpenGL.lua
If it can handle these 2 beasts.

Also, someone has written a libffi binding for D, which could 
probably be adapted to work in a similar manner:


https://github.com/lycus/libffi-d


Neat. Still, having native C libraries means that you could just 
drop your C/C++ environment and start D. And I am sure you agree 
this is by far the biggest blocker for C/C++ developers.


luajit-ffi

2012-05-01 Thread so

http://luajit.org/ext_ffi.html
https://github.com/malkia/ufo

How awesome is Mike Pall?
I didn't dive into the details of the code, but if he can do this 
with a dynamic language, why on earth does D still need manual C 
bindings while having ABI compatibility? So LuaJIT comes with a C 
compiler?


Re: Static method conflicts with non-static method?

2012-04-27 Thread so
On Friday, 27 April 2012 at 12:35:53 UTC, Steven Schveighoffer 
wrote:


Huh?  The main reason of confusion is that the static method is 
named in such a way that it looks like an instance method.  So 
we prevent that, unless the author of the class (who is 
deciding the name of the function) deems it should be called on 
instances


example:

struct File
{
   static File open(string name) {...} // factory method
   this(string name);
}

File f = File("hello");

f.open("world"); // oops!  Just opened file world and threw it 
away

f = File.open("world");// better!


With your proposal you can still do "f.open("world");" and get 
the same result, if the author provided the alias. You are trying 
to solve a different problem: making the author state explicitly 
that this is intended. The problem I see is the user assuming the 
author is a smart guy, only to find out in the end that the 
author is as dumb as himself {he should have RTFM :)}.


I challenge you to name File.open some way where it *wouldn't* 
be confusing when called on an instance :)


-Steve


Easy! Don't call it on an instance: a free openFile() outside the 
struct. I always add "make_" before any static factory function; 
otherwise, static methods should be as precise as in 
http://forum.dlang.org/post/araqkvvgyspzmdecx...@forum.dlang.org


Re: John-Carmack quotes the D programming language

2012-04-27 Thread so
On Friday, 27 April 2012 at 07:26:52 UTC, Guillaume Chatelet 
wrote:

A very good article by John-Carmack about purity

http://www.altdevblogaday.com/2012/04/26/functional-programming-in-c/


Just a glance, and my eyes still managed to parse "axilmar" among 
thousands of words.

I'd better get some fresh air.


Re: Static method conflicts with non-static method?

2012-04-27 Thread so
On Friday, 27 April 2012 at 11:51:40 UTC, Steven Schveighoffer 
wrote:


The idea I came up with in my proposal 
(http://d.puremagic.com/issues/show_bug.cgi?id=6579) was to 
allow aliasing the static method into the instance namespace:


struct S1
{
   static void foo() {}
   alias S1.foo this.foo;
}

struct S2
{
   static void foo() {}
}

void main()
{
   S1 i1;
   S2 i2;
   S1.foo(); // ok
   i1.foo(); // ok
   S2.foo(); // ok
   i2.foo(); // error
}

-Steve


But the call site remains unchanged, and that was the main source 
of confusion. If we expect the user to read the function 
declaration anyway, an extra alias won't do much good, and will 
probably complicate things even further.


Re: Static method conflicts with non-static method?

2012-04-27 Thread so

On Friday, 27 April 2012 at 11:49:08 UTC, Kevin Cox wrote:

On Apr 27, 2012 7:34 AM, "so"  wrote:


I agree it is ugly. If there is a way out (reason why i asked), 
we should just dump it.

I don't like the idea either because it is confusing.  The only 
reason I
can imagine is if there was polymorphism on statics which I see 
as a fairly
useless feature.   I would be interested to hear of possible 
use cases

though.


This one is quite important for templates.

import std.stdio : writeln;
import std.algorithm : min, max, reduce;

struct A(T)
{
   static T mini() { return T.min; }
   static T maxi() { return T.max; }
}

struct B(T)
{
   T[] v;
   T mini() { return reduce!min(v); }
   T maxi() { return reduce!max(v); }
}

void test(T)(T a)
{
   writeln("min: ", a.mini());
   writeln("max: ", a.maxi());
}


Re: Static method conflicts with non-static method?

2012-04-27 Thread so
On Friday, 27 April 2012 at 11:23:39 UTC, Steven Schveighoffer 
wrote:

On Fri, 27 Apr 2012 07:03:02 -0400, so  wrote:

On Friday, 27 April 2012 at 10:48:29 UTC, Steven Schveighoffer 
wrote:


With the advent of UFCS, this argument has much less teeth.  
Maybe it should be revisited...


-Steve


Please elaborate on how UFCS would help in that context.


Hm... thinking about it, UFCS requires passing the instance, 
while static methods do not.  So it doesn't make as much sense 
as I thought...


I still think static methods should not be callable on 
instances without opt-in from the aggregate author.  The 
confusion it can cause is not worth the benefits IMO.


I agree it is ugly. If there is a way out (reason why i asked), 
we should just dump it.


Re: Static method conflicts with non-static method?

2012-04-27 Thread so
On Friday, 27 April 2012 at 10:48:29 UTC, Steven Schveighoffer 
wrote:


With the advent of UFCS, this argument has much less teeth.  
Maybe it should be revisited...


-Steve


Please elaborate on how UFCS would help in that context.





Re: [off-topic] Sony releases PS Vita SDK

2012-04-21 Thread so

On Friday, 20 April 2012 at 12:30:19 UTC, SomeDude wrote:

What I don't get is why no large software company is backing up 
D right now. It's quite clear by now that D is by far the 
language that has the best feature set to be the successor to 
C++.


The answer is, I think, quite simple. How many C++ developers do 
you think use templates for more than something like 
"min(T a) { return a; }"? I don't believe it is more than 5%. We 
first need to show how powerful and accessible templates/CTFE are 
in D. We need to stop saying "D does templates better than C++"; 
that is a huge understatement, and by stating it this way we are 
targeting maybe 1% of the C++ audience. We need to teach people 
how to do templates, starting with the Boost community. If D 
can't absorb the Boost community, there is no hope for either D 
or Boost; we should just stop!


If IBM for example was helping D like they did for eclipse, the 
traction would be huge and the toolchain would stabilize so 
much faster. :(


Re: D Compiler as a Library

2012-04-19 Thread so

On Thursday, 19 April 2012 at 10:28:08 UTC, Tobias Pankrath wrote:
On Thursday, 19 April 2012 at 09:24:23 UTC, Roman D. Boiko 
wrote:

On Friday, 13 April 2012 at 09:57:49 UTC, Ary Manzana wrote:
Having a D compiler available as a library will (at least) 
give these benefits:




What about joining forces with sdc then?


Wow! I didn't know there was a D compiler with a Boost-like 
licence. Awesome job, whoever started it.




Re: Disallow (dis)equality with FP.nan/FP.init literals

2012-04-18 Thread so

On Thursday, 19 April 2012 at 01:28:51 UTC, so wrote:

On Thursday, 19 April 2012 at 00:50:09 UTC, bearophile wrote:
This is an open enhancement request that I'll probably add to 
Bugzilla.
The direct cause of it is a recent discussion in D.learn, but 
I and other people are aware of this problem for a lot of time.



Since a lot of time D statically refuses the use of 
"classReference == null":


// Program #1
class Foo {}
void main() {
   Foo f;
   assert(f == null);
   assert(f != null);
}


test.d(4): Error: use 'is' instead of '==' when comparing with 
null
test.d(5): Error: use '!is' instead of '!=' when comparing 
with null




A non-expert D programmer sometimes compares a double with 
double.nan in the wrong way:


http://forum.dlang.org/thread/mailman.1845.1334694574.4860.digitalmars-d-le...@puremagic.com#post-jmlhfr:2428cv:241:40digitalmars.com

because someDouble == double.nan is always false:


// Program #2
import std.math: isNaN;
void main() {
   double x = double.init;
   assert(x != double.nan);
   assert(x != double.init);
   assert(isNaN(x));
   assert(x is double.init);
   assert(x !is double.nan);

   double y = double.nan;
   assert(y != double.nan);
   assert(y != double.init);
   assert(isNaN(y));
   assert(y !is double.init);
   assert(y is double.nan);
}



I think there are three common wrong usage patterns of NaNs 
testing:
1) Test that x is equal to/different from nan using 
x==FP.nan/x!=FP.nan
2) Test that x is equal to/different from all NaNs using 
x==FP.nan/x!=FP.nan
3) Test that x is equal to/different from FP.init using 
x==FP.init/x!=FP.init


Case 3 is a bit less important because the programmer 
already knows something about FP init, but it's still wrong.


There are other wrong usages of NaNs, but they come from more 
expert programmers, so I don't want to catch them (example: 
using "is" to test whether x is equal to/different from all 
NaNs is a bug, because there is more than one NaN. But if the 
programmer uses "is" I assume he/she knows enough about 
NaNs, so this is not flagged by the compiler).


Currently this program compiles with no errors:


// Program #3
void main() {
   float x1 = float.nan;
   assert(x1 == float.nan);
   float x2 = 0.0;
   assert(x2 != float.nan);
   float x3 = float.init;
   assert(x3 == float.init);
   float x4 = 0.0;
   assert(x4 != float.init);

   double x5 = double.nan;
   assert(x5 == double.nan);
   double x6 = 0.0;
   assert(x6 != double.nan);
   double x7 = double.init;
   assert(x7 == double.init);
   double x8 = 0.0;
   assert(x8 != double.init);

   real x9 = real.nan;
   assert(x9 == real.nan);
   real x10 = 0.0;
   assert(x10 != real.nan);
   real x11 = real.init;
   assert(x11 == real.init);
   real x12 = 0.0;
   assert(x12 != real.init);

   enum double myNaN = double.nan;
   assert(myNaN == double.nan);
}



So I propose to statically disallow comparisons of Program #3, 
so it generates errors similar to:


test.d(4): Error: comparison is always false. Use 
'std.math.isNaN' to test for every kind of NaN or 'is 
float.nan' for this specific NaN
test.d(6): Error: comparison is always true. Use 
'!std.math.isNaN' to test for every kind of NaN or '!is 
float.nan' for this specific NaN
test.d(8): Error: comparison is always false. Use 
'std.math.isNaN' to test for every kind of NaN or 'is 
float.init' for this specific NaN

...


Opinions, improvements, votes or criticism are welcome.

Bye,
bearophile


Good idea.
But nothing beats this:

#include <stdio.h>

int main() {
    double q = 3.0 / 7.0;
    if (q == 3.0 / 7.0) printf("Equal\n");
    else printf("Not Equal\n");
    return 0;
}

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Should we disable implicit conversions in comparisons as well?


s/double/float/

The "double" version is also problematic, but for a different reason.




Re: Three Unlikely Successful Features of D

2012-03-21 Thread so

On Tuesday, 20 March 2012 at 19:02:16 UTC, Andrei Alexandrescu
wrote:
I plan to give a talk at Lang.NEXT 
(http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2012) with 
the subject above. There are a few features of D that turned 
out to be successful, in spite of them being seemingly 
unimportant or diverging from related consecrated approaches.


What are your faves? I have a few in mind, but wouldn't want to 
influence answers.



Thanks,

Andrei


. CTFE and mixins
. Explain to them why templates without "static if" are like a
language without "if".
. Strings, floats and anything else that matters can be used as
template parameters.


Re: Dynamic language

2012-03-16 Thread so

On Thursday, 15 March 2012 at 22:37:12 UTC, Manu wrote:

Do you expect users to be modifying the scripts in the retail 
release?

Surely scripting is still for what it has always been for, rapid
iteration/prototyping during development.


You need to be able to do both.

You only need the compiler as a part of the games 
asset-pipeline.


That is the biggest and (only :) ) blocker.


Re: Dynamic language

2012-03-16 Thread so

On Thursday, 15 March 2012 at 22:51:57 UTC, H. S. Teoh wrote:

It can also be a security concern. Somebody could offer players 
an
"awesome script" in DLL form that actually contains arbitrary 
exploit

code. Or a "script" that contains D code for an exploit.


If you are compiling at runtime you can expose only certain 
header files, and with D annotations like @safe and many others 
you can also disable low-level access.





Re: Dynamic language

2012-03-15 Thread so

On Thursday, 15 March 2012 at 19:35:15 UTC, Nick Sabalausky wrote:


As for other langauges:

Lua is currently *king* for game scripting. It's used all over 
the gaming
industry because it's fast and integrates with C very easily. 
Personally, I
don't like it because it *is* a dynamic langauge. But if you're 
looking for

a dynamic language, well, then there's that.


It's not just that I am looking for one: how would you code a 
customizable UI (the WoW UI, for example) without an interpreter? 
You have no other options, AFAIK, when it comes to customizable stuff.





Re: Dynamic language

2012-03-15 Thread so

On Thursday, 15 March 2012 at 19:25:24 UTC, Nick Sabalausky wrote:

"so"  wrote in message
news:uamqdkmnshxmvayeu...@forum.dlang.org...

Hello,

Not related to D but this is a community which i can find at 
least a few objective person. I want to invest some "quality" 
time on a dynamic language but i am not sure which one. Would 
you please suggest one?


To give you an idea what i am after:
Of all one-liners i have heard only one gets me.
"The programmable programming language". Is it true? If so 
Lisp will be my first choice.





I'd say it depends:

- If can can tolerate the parenthesis-hell and goofy prefix 
notation
(instead of infix), then LISP has been said to be the 
generalization of all
other langauges. IIRC, I heard that it was created specifically 
as an

example of a "programmable programming language".

- If you're looking for performace and practical real-world 
usage as a way
to add scripting support to a program, Lua is considerd king 
for that.


- If you can stomach the indent-scoping, Python is very 
well-regarded and

has a lot of fancy advanced features.

- If you don't like indent-scoping, Ruby is probably about the 
closest there

is to a block-scoped Python.

- If you're looking for the most painful dynamic experince 
imaginable,
ActionScript2 should be at the top of your list. Make sure to 
use all-Adobe
tools, and the newest versions of each, so the whole experience 
will be

*truly* unbearable.

I admit though, I'm not very familiar with the extent of the 
metaprogramming

abilities of any of those languages.


Metaprogramming abilities are the first thing I check now, after 
the painful experience of C/C++. I just can't repeat code, and 
when I have no other option, I just curse the language :)


Fanatics don't get how much a simple tool like "static if" 
improves the entire template mechanism and makes it something 
enjoyable. What the hell is a killer app for a programmer, if 
not this?


Re: Dynamic language

2012-03-15 Thread so

Thank you all!

On Thursday, 15 March 2012 at 17:30:49 UTC, Adam D. Ruppe wrote:

On Thursday, 15 March 2012 at 07:09:39 UTC, so wrote:

If so Lisp will be my first choice.


I just talked about D because D rox, but if you are doing
it for education, Lisp is a good choice because it is fairly
unique.


I'd love to use D but it is not an option is it? At least for the 
things i am after, say game scripting.



But for education, Lisp's uniqueness is a good thing -
experiencing the new syntax and playing with the macros
is a good way to learn things you might have not looked
at otherwise.


I wouldn't mind learning something new! Looks like either way 
Lisp will be on my reading list.


Dynamic language

2012-03-15 Thread so

Hello,

Not related to D but this is a community which i can find at 
least a few objective person. I want to invest some "quality" 
time on a dynamic language but i am not sure which one. Would you 
please suggest one?


To give you an idea what i am after:
Of all one-liners i have heard only one gets me.
"The programmable programming language". Is it true? If so Lisp 
will be my first choice.


Thanks.


Re: toHash => pure, nothrow, const, @safe

2012-03-12 Thread so

On Monday, 12 March 2012 at 07:18:06 UTC, so wrote:

A pattern is emerging. Why not analyze it a bit and somehow try 
to find a common ground? Then we can generalize it to a single 
annotation.


@mask(wat) const|pure|nothrow|safe

@wat hash_t toHash()
@wat bool opEquals(ref const KeyType s)
@wat int opCmp(ref const KeyType s)


Re: toHash => pure, nothrow, const, @safe

2012-03-12 Thread so

On Sunday, 11 March 2012 at 23:54:10 UTC, Walter Bright wrote:

Consider the toHash() function for struct key types:

http://dlang.org/hash-map.html

And of course the others:

const hash_t toHash();
const bool opEquals(ref const KeyType s);
const int opCmp(ref const KeyType s);

They need to be, as well as const, pure nothrow @safe.

The problem is:
1. a lot of code must be retrofitted
2. it's just plain annoying to annotate them

It's the same problem as for Object.toHash(). That was 
addressed by making those attributes inheritable, but that 
won't work for struct ones.


So I propose instead a bit of a hack. toHash, opEquals, and 
opCmp as struct members be automatically annotated with pure, 
nothrow, and @safe (if not already marked as @trusted).


A pattern is emerging. Why not analyze it a bit and somehow try 
to find a common ground? Then we can generalize it to a single 
annotation.


Re: Breaking backwards compatiblity

2012-03-11 Thread so

On Sunday, 11 March 2012 at 19:22:04 UTC, Nick Sabalausky wrote:

I just avoid those programs (and immediately disable the 
upgrade nag
screens). For example, I will *not* allow Safari or Chrome to 
even *touch*
my computer. (When I want to test a page in Chrome, I use 
SRWare Iron
instead. If SRWare Iron ever goes away, then Chrome users will 
be on their

own when viewing my pages.)


http://code.google.com/p/smoothgestures-chromium/issues/detail?id=498




Re: Breaking backwards compatiblity

2012-03-10 Thread so
On Saturday, 10 March 2012 at 19:54:13 UTC, Jonathan M Davis 
wrote:


LOL. I'm the complete opposite. I seem to end up upgrading my 
computer every 2
or 3 years. I wouldn't be able to stand being on an older 
computer that long.
I'm constantly annoyed by how slow my computer is no matter how 
new it is.


No matter how much hardware you throw at it, somehow it gets 
slower and slower.
New hardware can't keep up with the ever-increasing amount of 
badly written software.


http://www.agner.org/optimize/blog/read.php?i=9




Re: Breaking backwards compatiblity

2012-03-10 Thread so

On Saturday, 10 March 2012 at 17:51:28 UTC, H. S. Teoh wrote:

Um... before my recent upgrade (about a year ago), I had been 
using a
500MB (or was it 100MB?) RAM machine running a 10-year-old 
processor.

And before *that*, it was a 64MB (or 32MB?) machine running a
15-year-old processor...

Then again, I never believed in the desktop metaphor, and have 
never
seriously used Gnome or KDE or any of that fluffy stuff. I was 
on VTWM
until I decided ratpoison (a mouseless WM) better suited the 
way I

worked.


I am also using light window managers. Most of the time only tmux 
and gvim are running. I have tried many WMs, but if you use GUIs 
frequently and don't like falling back to Windows and such, you 
need a WM that works seamlessly with them. Gimp is one example. 
(You might not believe in the desktop metaphor, but how would you 
use a program like Gimp?) Most tiling WMs are bad at handling that 
kind of thing. I'm using xmonad now; at least it has slightly 
better support.


Re: Breaking backwards compatiblity

2012-03-10 Thread so

On Saturday, 10 March 2012 at 16:22:41 UTC, H. S. Teoh wrote:

OK, clearly I wasn't understanding what the OP was talking 
about. It
*seemed* to imply that Linux had stop-the-world problems with 
mouse

movement, but this isn't the case.

A hardware interrupt is a hardware interrupt. Whatever OS 
you're using,
it's got to stop to handle this somehow. I don't see how else 
you can do
this. When the hardware needs to signal the OS about something, 
it's

gotta do it somehow. And hardware often requires top-priority
stop-the-world handling, because it may not be able to wait a 
few
milliseconds before it's handled. It's not like software that 
generally

can afford to wait for a period of time.

As for Win95 being unable to keep up with mouse movement... 
well, to be
honest I hated Win95 so much that 90% of the time I was in the 
DOS
prompt anyway, so I didn't even notice this. If it were truly a 
problem,
it's probably a sign of poor hardware interrupt handling 
(interrupt
handler is taking too long to process events). But I haven't 
seen this

myself either.


It is the design of input handling; the theoretical part is 
irrelevant. I was talking solely about how they do it in practice. 
OSs are simply unresponsive, and on Linux it is more severe. If I 
am having this issue in practice, it doesn't matter whether it was 
the GC lock or some other failure to handle input.


Re: Breaking backwards compatiblity

2012-03-10 Thread so

On Saturday, 10 March 2012 at 15:27:00 UTC, H. S. Teoh wrote:

On Sat, Mar 10, 2012 at 04:23:43PM +0100, Adam D. Ruppe wrote:

On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
>Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have
to give a lot of care to handling them very quickly and
not letting them stack up (lest the whole system freeze).


Sure, but I've never seen a problem with that.


Neither have the OS developers, especially when they are on 999k TB 
of RAM and billion-core processors.





Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-10 Thread so

On Saturday, 10 March 2012 at 09:25:28 UTC, so wrote:

While i tend to code that way it is not as pretty in C++ as it 
looks on paper when you use namespaces.


namespace ns {
  struct S {
    void b();
  };

  void b(S s);
}

ns::S s;
s.b();    // fine
ns::b(s); // uh..

It gets much worse when S is somewhere deep in many namespaces.
ADL helps but it is also unreliable. I think D doesn't have 
this issue.


And... at the end of the article he also mentions this.




Re: Breaking backwards compatiblity

2012-03-10 Thread so
On Saturday, 10 March 2012 at 09:17:37 UTC, Jonathan M Davis 
wrote:


And actually, when _responsiveness_ is one of the key features 
that a desktop
OS requires, a stop-the-world GC in a desktop would probably be 
_worse_ than

one in a server.


My point is that every operation, even a mouse movement, is 
already a stop-the-world event on all the "modern" operating 
systems I have encountered, and Linux manages to take this to 
ridiculous levels.





Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-10 Thread so
On Saturday, 10 March 2012 at 05:06:40 UTC, Andrei Alexandrescu 
wrote:



Insert obligatory link: http://drdobbs.com/184401197

Very insightful article.


While i tend to code that way it is not as pretty in C++ as it 
looks on paper when you use namespaces.


namespace ns {
  struct S {
    void b();
  };

  void b(S s);
}

ns::S s;
s.b();    // fine
ns::b(s); // uh..

It gets much worse when S is somewhere deep in many namespaces.
ADL helps but it is also unreliable. I think D doesn't have this 
issue.




Re: Breaking backwards compatiblity

2012-03-10 Thread so
On Saturday, 10 March 2012 at 08:53:23 UTC, Alex Rønne Petersen 
wrote:


In all fairness, a stop-the-world GC in a kernel probably *is* 
a horrible idea.


For us (desktop users), it would not differ much, would it?



Re: Breaking backwards compatiblity

2012-03-10 Thread so

My grandfather still uses 3.11

I gave him my old computer (Athlon) with Windows 7. But after 
a week he wanted his old system back.


"my old Windows 7"
Which was the newest, greatest thing a few days/weeks/months/years 
ago. It must be hard to keep up; they release OSs faster than the 
time it takes to install or even boot a system! :)


Re: Breaking backwards compatiblity

2012-03-09 Thread so

On Saturday, 10 March 2012 at 00:02:44 UTC, Andrej Mitrovic wrote:

Linus would probably hate D just as much as he hates C++. :p


Rather than using one's influence to make a better language (C), it 
is much easier to bitch about attempts made by others.





Re: Review of Jose Armando Garcia Sancio's std.log

2012-03-06 Thread so

On Tuesday, 6 March 2012 at 03:11:11 UTC, Robert Jacques wrote:

Please don't forget that you are _submitting_ a library into 
Phobos and the D ecosystem at large. Yes, new code can be 
expected to avoid these names, but all existing code has to be 
retrofitted and fixed.


It is probably the reason they use cryptic variable names in 
the C++ standard library :)





Re: Review of Jose Armando Garcia Sancio's std.log

2012-03-06 Thread so

On Tuesday, 6 March 2012 at 07:46:14 UTC, Jacob Carlborg wrote:

On 2012-03-06 02:32, Jonathan M Davis wrote:



The user can then alias "log!info" to "info" if he/she wants to.


Again, you are now forcing two common names instead of one, as it 
is now.

When you instantiate log!info, where do you get "info" from?



Re: Review of Jose Armando Garcia Sancio's std.log

2012-03-05 Thread so

On Tuesday, 6 March 2012 at 01:33:05 UTC, Jonathan M Davis wrote:


And I really don't think that this merits it.
log!info(msg) would work just fine and would be _far_ better.


Now you have not only "info" but also "log" in the global namespace :)
I think you meant "log.info".





Re: Review of Jose Armando Garcia Sancio's std.log

2012-03-05 Thread so
On Tuesday, 6 March 2012 at 01:30:41 UTC, Steven Schveighoffer 
wrote:


Except 'info', 'error', 'warning' are all common names, likely 
to be a very attractive name for something that has nothing to 
do with (or cares about) logging.  cout is not a common name or 
even an english word, so it's unlikely someone has or wants to 
create a cout member.


Couple this with the fact that all of these are nouns -- likely 
candidates for fields.


Your argument has some merit, but I would add that my argument 
is only against *common* global namespace names.


Another solution besides using a namespace is to make the names 
less common, like linfo instead of just info.


I have no objection to changing names. For example, instead of 
"info" I use "note" for my logger. I'm not 100% sure about "error", 
but I think "warning" also implies logging, and I don't see any use 
case where it would be used as a variable name.


Re: Review of Jose Armando Garcia Sancio's std.log

2012-03-05 Thread so
On Monday, 5 March 2012 at 23:51:29 UTC, Steven Schveighoffer 
wrote:
On Mon, 05 Mar 2012 18:30:03 -0500, David Nadlinger 
 wrote:


On Monday, 5 March 2012 at 21:55:08 UTC, Steven Schveighoffer 
wrote:
The log aliases use names that are too common.  I think 
log.info is a better symbol for logging than just 'info', 
which could be a symbol in a myriad of places.  Given that 
D's symbol lookup rules allow shadowing of global symbols, 
this does not work out very well.


Originally, the code used log!info and so on, but it was 
changed to the current design right after review begin, the 
rationale being that you could always use »import log = 
std.log« if you want the extra namespace.


That doesn't help.  Software isn't static.

import std.log;
import other; // defines B

class A : B
{
   void foo()
   {
  info("some info message"); // error! int isn't a function!
   }
}

other.d:

class B
{
   int info; // added later
}


That is not an argument against this library in particular but 
against everything that lies in the global namespace.
In its first state both the severity levels and "log" were in the 
global namespace; now only the severity levels are.


You are also overlooking one crucial fact: this library will be 
part of Phobos, the standard library, which everyone has to adopt. 
When you see code like this (below), you don't blame the standard 
library designers, do you?


using namespace std;
int cout;



Re: A better way to manage discussions?

2012-03-04 Thread so

Hello!

On Sunday, 4 March 2012 at 08:21:13 UTC, Sandeep Datta wrote:

Also the upvote and downvote buttons can come in handy 
sometimes too (for example if you did not like this post you 
could downvote it to oblivion).


All sounds fine except this; I hate this kind of stuff. It opens 
the door to all kinds of abuse: popularity contests, factions in 
the community and similarly distasteful things.






Re: John Carmack applauds D's pure attribute

2012-02-27 Thread so

On Monday, 27 February 2012 at 08:39:54 UTC, Paulo Pinto wrote:
I keep bringing this issues, because I am a firm believer that 
when

people that fight against a GC are just fighting a lost battle.

Like back in the 80's people were fighting against Pascal or C 
versus

Assembly. Or in the 90' were fighting against C++ versus C.

Now C++ is even used for operating systems, BeOS, Mac OS X 
drivers,

COM/WinRT.


It is not a fair analogy. Unlike MMM vs. GC, C++ can do 
everything C can do and has more sugar. What they were arguing 
about, I think, was whether OO is a solution to everything, and 
the troubles with its implementation in C++.


Sure a systems programming language needs some form of manual 
memory
management for "exceptional situations", but 90% of the time 
you will

be allocating either referenced counted or GCed memory.

What will you do when the major OS use a systems programming 
language like forces GC or reference counting on you do? Which 
is already slowly happening with GC and ARC on Mac OS X, WinRT 
on Windows 8, mainstream OS, as well as the Oberon, Spin, 
Mirage, Home, Inferno and Singularity research OSs.


Create your own language to allow you to live in the past?

People that refuse to adapt to times stay behind, those who 
adapt, find ways to profit from the new reality.


But as I said before, that is my opinion and as a simple human 
is also prone to errors. Maybe my ideas regarding memory 
management in systems languages are plain wrong, the future 
will tell.


--
Paulo


As I said in many threads regarding GC and manual memory 
management, it is not about this vs. that.
There should be no religious stances; both have their strengths 
and failures.
What every single discussion on this boils down to is some people 
downplaying the failures of their religion :)


And that "staying behind" thing is something I never understood!
It is hype, it is marketing, to sell you a product that doesn't 
deserve half its price!


Religion/irrationality has no place in what we do. Show me a 
better tool, convince me it is better, and I will use that 
tool. I don't give a damn whether it is D or vim I am leaving behind.


Re: C++ pimpl

2012-02-26 Thread so
On Sunday, 26 February 2012 at 21:03:11 UTC, Robert Klotzner 
wrote:


Yeah, I got that. What I don't get is why it needs to be a 
template?

Wouldn't a simple base class with a destructor suffice?


Indeed it would suffice. Nothing really special; I didn't want 
pimpl to be a "one true base class" that would let you do things like:


vector v;

Well my initial example was Qt. Take for example the QWidget 
class, you
usually derive from it in order to implement custom widgets and 
with the
PIMPL idiom it is ensured that your derived classes will not 
break upon
addition of private fields in QWidget. It is a black box, a 
derived

class does not have access to private fields.

The whole purpose of my proposal is, to really hide private 
fields so

that not just the API is stable, but also the ABI.


I understand. My use case was that I needed an alternative to the 
C way of doing it.
I was thinking that C++/D have namespaces, modules, and 
class/struct methods; it should be possible to improve on the C pimpl.


So instead of:
libFun(handle, ...);

You would do:
handle.fun(...);

Now you have cleaner code and a cleaner global namespace, at zero 
cost.


Re: Inheritance of purity

2012-02-26 Thread so

On Sunday, 26 February 2012 at 16:47:33 UTC, Daniel Murphy wrote:

I spent hours trying to get disassembly working in ddd 
yesterday, in the end
I gave up and used gdb.  I hope I never have to leave visual 
studio 6 again.


There is always Kdevelop if you want IDE. Awesome piece of free 
software.

It now has vim mode, if only it was supported it fully!


Re: John Carmack applauds D's pure attribute

2012-02-26 Thread so

On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

Would this even be an issue on multicore systems where the GC 
can run
concurrently? As long as the stop-the-world parts are below 
some given

threshold.


If it were possible to guarantee that, I don't think anyone would 
bother with manual MM.


Re: John Carmack applauds D's pure attribute

2012-02-26 Thread so

On Sunday, 26 February 2012 at 15:22:09 UTC, deadalnix wrote:

True, but the problem of video game isn't how much computation 
you do to allocate, but to deliver a frame every few 
miliseconds. In most cases, it worth spending more in 
allocating but with a predictable result than let the GC does 
its job.


Absolutely! It cracks me up when I see (in this forum or other 
graphics-related forums) things like "you can't allocate at 
runtime!!!" or "you shouldn't use standard libraries!!!". The 
thing is, you can do both just fine if you just RTFM :)





Re: Inheritance of purity

2012-02-26 Thread so

On Sunday, 26 February 2012 at 15:25:44 UTC, deadalnix wrote:

Le 26/02/2012 00:25, so a écrit :



You have GUI that goes over gdb and are nice to use.


You mean DDD (which I think is the best of them)? Indeed nice, but 
it crashes too often.




Re: John Carmack applauds D's pure attribute

2012-02-26 Thread so

On Sunday, 26 February 2012 at 12:09:21 UTC, Paulo Pinto wrote:

It is still at 0.2 and the newsgroup only has 13 messages, lets 
see how far it goes.


We are almost done with the GPU devolution.
Once we get unified storage, none of this will matter, much like 
the flat-to-round-earth transition. So much wasted for absolutely 
nothing.


Re: Inheritance of purity

2012-02-25 Thread so

On Friday, 24 February 2012 at 00:01:52 UTC, F i L wrote:

Well then I disagree with Walter on this as well. What's wrong 
with having a "standard" toolset in the same way you have 
standard libraries? It's unrealistic to think people (at large) 
will be writing any sort of serious application outside of a 
modern IDE. I'm not saying it's Walters job to write IDE 
integration, only that the language design shouldn't cater to 
the smaller use-case scenario.


Cleaner code is easier to read and, within an IDE with 
tooltips, makes little difference when looking at the 
hierarchy. If you want to be hard-core about it, no one is 
stopping you from explicitly qualifying each definition.


The debugger is the single VisualStudio tool I have failed to 
replace in Unix land.
I have tried many of them and they all sucked; they are either 
incomplete or crash too often. Command-line gdb is not much of an 
option. The situation is so bad that it looks like I need to go 
back to the VisualC++/gvim combo.


Re: John Carmack applauds D's pure attribute

2012-02-25 Thread so

On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:

Most standard compiler malloc()/free() implementations are 
actually slower than most advanced GC algorithms.


Explicit allocation/deallocation performance is not that 
significant; the main problem is that they are unpredictable at runtime.





Re: John Carmack applauds D's pure attribute

2012-02-25 Thread so
On Saturday, 25 February 2012 at 20:26:11 UTC, Peter Alexander 
wrote:


Memory management is not a problem. You can manage memory just 
as easily in D as you can in C or C++. Just don't use global 
new, which they'll already be doing.


The C++ standard library is not based around a GC.
D promises both MM possibilities, yet its standard library is, as 
of now, based around the GC.


You are talking about design. When it comes to implementation, 
last time I checked, not using the standard memory manager also 
means not using the standard library.


A big codebase in another language is a problem shared by most of 
us, and that is by far the most significant one. Yet I thought we 
were talking about "why not switch to D" rather than "why not 
switch to another language".


Re: John Carmack applauds D's pure attribute

2012-02-25 Thread so
On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky 
wrote:


Interesting. I wish he'd elaborate on why it's not an option 
for his daily work.


Not the design but the implementation, memory management would be 
the first.





Re: Inheritance of purity

2012-02-25 Thread so

On Saturday, 25 February 2012 at 17:57:54 UTC, Timon Gehr wrote:

class A {
void fun() const { ... }
}

class B : A {
override void fun() { ... }
}

Now I change the class A to become :

class A {
void fun() const { ... }
void fun() { ... }
}

And suddenly, the override doesn't override the same thing 
anymore.

Which is unacceptable.


You didn't try to actually compile this, did you? ;D


You can't compile that now, can you?


Re: Inheritance of purity

2012-02-23 Thread so

On Friday, 24 February 2012 at 00:01:52 UTC, F i L wrote:

It's unrealistic to think people (at large) will be writing any 
sort of serious application outside of a modern IDE.


You would be surprised or i should rather say shocked? :)
I used to be an IDE fanatic as well, then i took an arrow...


Re: Inheritance of purity

2012-02-23 Thread so

On Thursday, 23 February 2012 at 23:40:28 UTC, H. S. Teoh wrote:

Omitting argument names/types is very evil. It opens up the 
possibility
of changing the base class and introducing nasty subtle bugs in 
the

derived class without any warning.  For example:


Good catch.



Re: Inheritance of purity

2012-02-23 Thread so

On Thursday, 23 February 2012 at 22:01:43 UTC, F i L wrote:

UTC, so wrote:

If you are not using an IDE or a mouse, this would be hell.


lol wut? This isn't the 80's.

In all seriousness, I think you're decoupling inherently 
ingrained pieces: the language and it's tools. The same way you 
*need* syntax highlighting to distinguish structure, you 
*should* have other productivity tools to help you analyze 
data-layout. It's not like these tools don't exist in abundance 
on every platform. And MS has pulled some really stupid shit in 
its day, but it's developer tools and support do not fall under 
that category.


No one said you shouldn't use an IDE or any other tool, but I 
don't think it is healthy to design a language with such 
assumptions. Walter himself was against this and stated why he 
doesn't like the Java way of doing things; one of the reasons was 
that the language relies on IDEs.


I understand he is trying to fulfill a real need, since function 
qualifiers look ugly, yet I am not sure this is the answer.




Re: Inheritance of purity

2012-02-23 Thread so
On Friday, 17 February 2012 at 03:24:50 UTC, Jonathan M Davis 
wrote:


No. Absolutely not. I hate the fact that C++ does this with 
virtual. It makes
it so that you have to constantly look at the base classes to 
figure out what's
virtual and what isn't. It harms maintenance and code 
understandability. And
now you want to do that with @safe, pure, nothrow, and const? 
Yuck.


I can understand wanting to save some typing, but I really 
think that this
harms code maintainability. It's the sort of thing that an IDE 
is good for. It
does stuff like generate the function signatures for you or 
fill in the
attributes that are required but are missing. I grant you that 
many D
developers don't use IDEs at this point (at least not for D) 
and that those
sort of capabilities are likely to be in their infancy for the 
IDEs that we
_do_ have, but I really think that this is the sort of thing 
that should be
left up to the IDE. Inferring attributes like that is just 
going to harm code
maintainability. It's bad enough that we end up with them not 
being marked on
templates due to inference, but we _have_ to do it that way, 
because the
because the
attributes vary per instantiation. That is _not_ the case with 
class member

functions.

Please, do _not_ do this.

- Jonathan M Davis


As much as I hate the "pure const system trusted" spam, I don't 
think I like this idea either. If you are not using an IDE or a 
mouse, this would be hell. A language shouldn't be designed with 
such assumptions, unless you are Microsoft.


Thing is, this will make things harder, not easier (the opposite 
of what is intended here). When you override a function, at worst 
you copy/paste its signature from the base class.


Re: Inheritance of purity

2012-02-23 Thread so
On Thursday, 23 February 2012 at 18:32:12 UTC, Walter Bright 
wrote:


Not a bad idea, but it would be problematic if there were any 
overloads.


It is still applicable to return types.
But i don't like the idea. If you omit arguments and return type, 
you force both yourself and the reader to check the base class 
for everything.





Re: Inheritance of purity

2012-02-23 Thread so
If it can be applied to const, wouldn't it be like "const by 
convention" that you argued against?


On Friday, 17 February 2012 at 02:49:40 UTC, Walter Bright wrote:

Given:

class A { void foo() { } }
class B : A { override pure void foo() { } }

This works great, because B.foo is covariant with A.foo, 
meaning it can "tighten", or place more restrictions, on foo. 
But:


class A { pure void foo() { } }
class B : A { override void foo() { } }

fails, because B.foo tries to loosen the requirements, and so 
is not covariant.


Where this gets annoying is when the qualifiers on the base 
class function have to be repeated on all its overrides. I ran 
headlong into this when experimenting with making the member 
functions of class Object pure.


So it occurred to me that an overriding function could 
*inherit* the qualifiers from the overridden function. The 
qualifiers of the overriding function would be the "tightest" 
of its explicit qualifiers and its overridden function 
qualifiers. It turns out that most functions are naturally 
pure, so this greatly eases things and eliminates annoying 
typing.


I want do to this for @safe, pure, nothrow, and even const.

I think it is semantically sound, as well. The overriding 
function body will be semantically checked against this 
tightest set of qualifiers.


What do you think?





Re: Ideas from Clang

2012-02-22 Thread so

On Wednesday, 22 February 2012 at 14:27:18 UTC, deadalnix wrote:

On 19/02/2012 21:19, bearophile wrote:
A belated comment on the GoingNative 2012 talk "Defending C++ 
from Murphy's Million Monkeys" by Chandler Carruth:

http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Clang-Defending-C-from-Murphy-s-Million-Monkeys

The slides:
http://ecn.channel9.msdn.com/events/GoingNative12/GN12Clang.pdf

This talk shows some ways used by Clang to help the programmer 
spot or fix some mistakes in C++ code (according to Chandler, 
all bugs shown in the talk are real bugs found in production 
software). I think some of those ideas are interesting or 
useful for D too.


-

Slides page 31 - 32, 14.48 in the video:

I have adapted the Clang example to this D code:


class BaseType {}
static int basetype;
class DerivedType : Basetype {}
void main() {}


The latest DMD gives this error:
test.d(3): Error: undefined identifier Basetype, did you mean 
variable basetype?



The idea here is to restrict the list of names to search for 
similar ones only among the class names, because here 
"basetype" is an int so it can't be what the programmer meant. 
There are few other groups of names for other situations.


This also reduces the work of the routines that look for 
similar names, because they need to work on a shorter list of 
possibilities.


-

Slides page 47 - 48, 21.24 in the video:

C++ code:

static const long long DiskCacheSize = 8 << 30; // 8 Gigs

Clang gives:

% clang++ -std=c++11 -fsyntax-only overflow.cpp
overflow.cpp:1:42: warning: signed shift result (0x200000000) 
requires 35 bits to represent, but 'int' only has 32 bits 
[-Wshift-overflow]

static const long long DiskCacheSize = 8 << 30; // 8 Gigs
                                       ~ ^  ~~

In D/DMD this compiles with no errors or warnings:

static const long diskCacheSize = 8 << 30;
static assert(diskCacheSize == 0);
void main() {}


This kind of overflow error is important and common enough 
that I think D has to spot it too, as Clang does.



I have done a little test, and I've seen that with the latest 
Clang this C++ code gives no warnings:


const static unsigned int x = -1;

-

Slides 57 - 58, 28.11 in the video:

C++ code:

void test(bool b, double x, double y) {
  if (b || x < y && x > 0) {
    // ...
  }
}


Clang gives:


% clang++ -std=c++11 -fsyntax-only -Wparentheses 
parentheses3.cpp

parentheses3.cpp:2:18: warning: '&&' within '||' 
[-Wlogical-op-parentheses]
  if (b || x < y && x > 0) {
        ~~ ~~~~~~^~~~~~~~
parentheses3.cpp:2:18: note: place parentheses around the '&&' 
expression to silence this warning
  if (b || x < y && x > 0) {
           (            )
1 warning generated.


DMD compiles that code with no warnings. I think a similar 
warning is useful in D/DMD too.


-

Slides 59 - 60, 29.17 in the video:

int test(bool b, int x, int y) {
  return 42 + b ? x : y;
}


Clang gives:


% clang++ -std=c++11 -fsyntax-only -Wparentheses 
parentheses4.cpp
parentheses4.cpp:2:17: warning: operator '?:' has lower 
precedence than '+'; '+' will be evaluated first [-Wparentheses]
  return 42 + b ? x : y;
 ~~ ^
parentheses4.cpp:2:17: note: place parentheses around the '+' 
expression to silence this warning
  return 42 + b ? x : y;
^
 ( )
parentheses4.cpp:2:17: note: place parentheses around the '?:' 
expression to evaluate it first
  return 42 + b ? x : y;
^
  ()
1 warning generated.


DMD compiles that code with no warnings.
(Maybe here Don thinks that accepting int+bool in the language 
rules is bad in the first place.)


-

A little comment on slides 63 - 64: in D when I switch on an 
enumeration, I always use a final switch. I think use cases 
for non-final switches on enums are rare in D code.


-

Slides 65 - 66, 31.57 in the video:


constexpr int arr_size = 42;
constexpr int N = 44;
void f(int);
int test() {
  int arr[arr_size];
  // ...
  f(arr[N]);
  // ...
  if (N < arr_size) return arr[N];
  return 0;
}


Clang gives:

% clang++ -std=c++11 -fsyntax-only deadcode.cpp
deadcode.cpp:7:5: warning: array index 44 is past the end of 
the array

(which contains 42 elements) [-Warray-bounds]
  f(arr[N]);
^   ~
deadcode.cpp:5:3: note: array 'arr' declared here
  int arr[arr_size];
  ^
1 warning generated.


Similar D code:

enum int arr_size = 42;
enum int N = 44;
void foo(int) {}
int test() {
int[arr_size] arr;
foo(arr[N]);
    if (N < arr_size)
        return arr[N];
return 0;
}
void main() {}


DMD is able to catch the array overflow.

Re: C++ pimpl

2012-02-19 Thread so
On Sunday, 19 February 2012 at 17:25:51 UTC, Robert Caravani 
wrote:

On Friday 10 February 2012 20:23:59 so wrote:

I think i finally got a solution.
(Sorry for C++ code)

Never mind, it is my mother tongue.


--- pimpl.hpp
template <typename T>
struct pimpl : noncopyable
{
  virtual ~pimpl() {}
};

--- lib.hpp
struct lib : public pimpl<lib>
{
  void fun(...);
  ...
  static lib* make();
};

--- lib.cpp
struct lib_impl : lib
{
  ... // data
};

void lib::fun(...)
{
  auto& r = *static_cast<lib_impl*>(this);
  ...
}

lib* lib::make()
{
  return new lib_impl;
}


Well it is not bad, but how do you support derived classes with 
this scheme? Derived classes would have to derive from the 
implementation, so it would have to be public and compiled code 
using the derived class would break on changes of the private 
fields. Am I misinterpreting something?


The whole purpose of this is hiding the implementation from the 
user at "zero" cost.
Derived classes have no place here. Using anything involving 
derived classes means "zero" cost is not your first concern. So 
just use pure abstract classes / interfaces; they are much cleaner 
and to the point.



What's the purpose of the pimpl template?


template <typename T>
struct pimpl : noncopyable
{
  virtual ~pimpl() {}
};

"virtual ~pimpl() {}"

This is to get rid of leaking, you know we need to provide a 
"virtual destructor" for an interface otherwise a base class has 
no way of knowing it is a "base" class and when you delete a 
pointer you would free only the memory base class allocated. But 
i guess you are asking why it should be like this, another way is:


#define PIMPL(x) \
private: \
x(const x&); \
const x& operator=(const x&); \
protected: \
x() {} \
public: \
virtual ~x() {}

now we can just use:

struct lib
{
   PIMPL(lib)
   ...
   ...
};

I don't understand why this didn't get much attention either; I 
am now using it in my framework and it rocks!


What do you think about my proposal? Is it sane and feasible at 
all?


I am having trouble understanding the need for supporting class 
hierarchies.
Deriving from a private implementation feels quite wrong. A pimpl 
IMO should be a black box.


Re: The Right Approach to Exceptions

2012-02-19 Thread so
On Sunday, 19 February 2012 at 00:50:07 UTC, Jonathan M Davis 
wrote:

On Saturday, February 18, 2012 16:46:43 H. S. Teoh wrote:

I can't believe something this simple has to be explained so
elaborately. I thought all of us here knew how to use OO??


I think that the problem stems from people frequently using 
exceptions incorrectly, and many of the C++ programmers 
probably haven't _ever_ seen them used correctly, since I don't 
think that it's very common for C++ programs to define 
exception hierarchies - especially not advanced ones like Java 
has. And when you see a lot of bad exception code, that tends 
to turn you off to them, and it definitely doesn't show you how 
to use them correctly.


- Jonathan M Davis


Problem is, "no one" using exception handling correctly including 
language experts. There is no consensus on where they are useful 
or not. Neither articles nor codes help you. Go read every single 
one of them and come back and code something. I invested a lot of 
time on it, yet i am now using it when i need an aggressive 
assert.


It is a great idea but incomplete.


Re: Review of Jose Armando Garcia Sancio's std.log

2012-02-15 Thread so

On Tuesday, 14 February 2012 at 21:47:42 UTC, bls wrote:

This is somehow bad. Review a piece of library-software by 
using a beta compiler and beta-library.


Indeed, such a thing has never happened in the whole history of 
compiler/language development.


Re: Review of Jose Armando Garcia Sancio's std.log

2012-02-14 Thread so
On Tuesday, 14 February 2012 at 16:21:42 UTC, Jose Armando Garcia 
wrote:
On Tue, Feb 14, 2012 at 8:42 AM, jdrewsen  
wrote:
On Tuesday, 14 February 2012 at 02:28:11 UTC, Jose Armando 
Garcia wrote:


On Mon, Feb 13, 2012 at 6:44 PM, jdrewsen 
 wrote:


A first quick observation:

I vote for a debug severity level. Then make that default to 
the template

parameter for log:

template log(Severity severity = Severity.debug)

That would make it nice for good old print debugging.

log("This is a dbg message");



I like the idea of having a default. Not sure about adding 
debug. What
are you trying to do with default that log!info and vlog(#) 
doesn't

let you do?



As Sean mentioned, the vlog function may be the one I want. Maybe 
it is okay not to have a debug severity, but then a default on the 
vlog level parameter would be nice. That would make quick debug 
prints a tad simpler.

If we do set a default what should it be? It is not clear to me 
what

value we should pick so if you have any suggestions let me know.


IMO a default severity level is not a good idea; it is not 
explicit to begin with.
As I suggested in another reply, getting rid of the 
instantiations solves it.


We lose nothing and gain a common keyword. I used to have 
severity levels for my logging library in C++. As soon as I got 
the C++0x options I trashed them all.


Now instead of:

txt(error) << "this is: " << it;

i just got:

error("this is: ", it);

Win win for every aspect of it. And got rid of the keyword "txt".


Re: Review of Jose Armando Garcia Sancio's std.log

2012-02-13 Thread so
On Monday, 13 February 2012 at 15:50:05 UTC, David Nadlinger 
wrote:
There are several modules in the review queue right now, and to 
get things going, I have volunteered to manage the review of 
Jose's std.log proposal. Barring any objections, the review 
period starts now and ends in three weeks, on March 6th, 
followed by a week of voting.


---
Code: 
https://github.com/jsancio/phobos/commit/d114420e0791c704f6899d81a0293cbd3cc8e6f5

Docs: http://jsancio.github.com/phobos/phobos/std_log.html

Known remaining issues:
- Proof-reading of the docs is required.
- Not yet fully tested on Windows.

Depends on: 
https://github.com/D-Programming-Language/druntime/pull/141 
(will be part of 2.058)

---

Earlier drafts of this library were discussed last year, just 
search the NG and ML archives for "std.log".


I think getting this right is vitally important so that we can 
avoid an abundance of partly incompatible logging libraries 
like in Java. Thus, I'd warmly encourage everyone to actively 
try out the module or compare it with any logging solution you 
might already be using in your project.


Please post all feedback in this thread, and remember: Although 
comprehensive reviews are obviously appreciated, short comments 
are very welcome as well!


David


Good work.

One suggestion. Instantiating a template for each log call is 
rather verbose for such a common thing. I suggest:


(Just to demonstrate)
alias global_logger!sev_info info;
alias global_logger!sev_warning warning;
alias global_logger!sev_error error;
alias global_logger!sev_critical critical;
alias global_logger!sev_dfatal dfatal;
alias global_logger!sev_fatal fatal;

As we are pulling severity levels to global namespace anyway, 
this will save us some verbosity and the keyword "log".


Re: C++ pimpl

2012-02-10 Thread so

On Monday, 23 January 2012 at 17:09:30 UTC, so wrote:
On Mon, 23 Jan 2012 18:09:58 +0200, Robert Caravani 
 wrote:



Thanks for the links, it was a good read.


I think it is the best answer to the problem.


What's the destructor limitation?


struct S {
 static S* make(); // constructor
 static void purge(S*); // destructor - you have to provide 
this as well

}

Limitation is that just like constructor, we also lose the 
destructor.
Which means we can't use "delete", and the tools designed for 
it.
As a result we don't have something new. It is exactly like C 
impl. hiding.


struct S;
S* make();
void purge(S*);


I think i finally got a solution.
(Sorry for C++ code)

--- pimpl.hpp
template <typename T>
struct pimpl : noncopyable
{
 virtual ~pimpl() {}
};

--- lib.hpp
struct lib : public pimpl<lib>
{
 void fun(...);
 ...
 static lib* make();
};

--- lib.cpp
struct lib_impl : lib
{
 ... // data
};

void lib::fun(...)
{
 auto& r = *static_cast<lib_impl*>(this);
 ...
}

lib* lib::make()
{
 return new lib_impl;
}


Re: Damn C++ and damn D!

2012-02-05 Thread so
On Sunday, 5 February 2012 at 15:17:39 UTC, Jose Armando Garcia 
wrote:



What I would really like to see in D is:

immutable variable = if (boolean condition)
{
// initialize based on boolean condition being true
}
else
{
// initialize based on boolean condition being false
}

Scala has this and find it indispensable for functional and/or
immutable programming. Yes, I have been programming with Scala 
a lot
lately. It has a lot of problem but it has some really cool 
constructs
like the one above. Scala also has pattern matching and 
structural

typing but that may be asking too much ;).

I am not sure what it would take to implement this in D but I am
thinking we need the concept of a void type (Unit in scala). 
Thoughts?


What am i missing?
I can't see the difference between that and "static if".

static if (boolean condition)
{
 // initialize based on boolean condition being true
 immutable variable = ...
}
else
{
 // initialize based on boolean condition being false
 immutable variable = ...
}



Re: Damn C++ and damn D!

2012-02-05 Thread so

On Sunday, 5 February 2012 at 14:57:58 UTC, Timon Gehr wrote:

On 02/05/2012 03:53 PM, so wrote:
You just maintain the macro.


1. The actual work now lies in the macro.

#define DETAIL \
... \
... \
...

You now have an enigma and the compiler won't help you.

2. And if you need another condition?
Another macro? Just imagine the situation!

It may be small, but the runtime branch also creates overhead.



Re: Damn C++ and damn D!

2012-02-05 Thread so

On Sunday, 5 February 2012 at 14:24:20 UTC, Timon Gehr wrote:


This should work:

#define DOTDOTDOT ...

template <typename T> void fun(T a){
    if(cond<T>::value) {
        auto var = make(a);
        DOTDOTDOT;
    }else{
        auto tmp = make(a);
        auto var = make_proxy(tmp);
        DOTDOTDOT;
    }
}


It won't work.
You now have two scopes and you have to repeat every line after 
"var" for both scopes. Now you have to maintain both of them. And 
this grows exponentially for every new condition you have.





Damn C++ and damn D!

2012-02-05 Thread so

After some time with D, C++ is now a nightmare for me (especially
for generic coding).
Think about replicating this simple code in C++.

void fun(T)(T a)
{
static if(cond!T)
{
   auto var = make(a);
}
else
{
   auto tmp = make(a);
   auto var = make_proxy(tmp);
}
...
}

And this is just "one" condition.
Damn D for introducing these and damn the C++ committee for not adopting them.


Re: C++ pimpl

2012-01-23 Thread so
On Mon, 23 Jan 2012 18:09:58 +0200, Robert Caravani   
wrote:



Thanks for the links, it was a good read.


I think it is the best answer to the problem.


What's the destructor limitation?


struct S {
  static S* make(); // constructor
  static void purge(S*); // destructor - you have to provide this as well
}

The limitation is that, just like the constructor, we also lose the
destructor, which means we can't use "delete" and the tools designed
around it. As a result we don't have something new; it is exactly like
C-style implementation hiding.

struct S;
S* make();
void purge(S*);


Re: C++ pimpl

2012-01-22 Thread so

On Mon, 23 Jan 2012 02:49:39 +0200, Martin Nowak  wrote:


This will even work with plain new/this, as the allocation is done
in the constructor. The difficulty is that you can't build inheritable  
pimpls.


No, you can't define a struct without providing all the fields.
Inheritance has zero importance in this one. If you are going to use
inheritance, why would you need things like this in the first place?


Re: C++ pimpl

2012-01-22 Thread so

On Mon, 23 Jan 2012 02:33:47 +0200, Timon Gehr  wrote:


This seems to work.

a.di:

final class A{
 private this();
 static A factory();
 T1 publicMember1(int x);
 T2 publicMember2(float y);
 T3 publicField;
 // ...
}

a.d:

class A{
 static A factory(){return new A;}
 T1 publicMember1(int x){ ... }
 T2 publicMember2(float y){ ... }
 T3 publicField;
 // ...
private:
 T1 field1;
 T2 field2;
}


Oh? How so? Within the current framework it is not possible.
Probably a glitch in the matrix :)


Re: C++ pimpl

2012-01-22 Thread so

On Mon, 23 Jan 2012 02:07:29 +0200, so  wrote:


On Mon, 23 Jan 2012 01:39:23 +0200, so  wrote:

I have been asking that for some time now, i am afraid you won't get  
much of an audience.
You can get rid of both additional allocation and indirection but it is  
not pretty. We could definitely use some help/sugar on this.


http://www.artima.com/cppsource/backyard3.html


http://www.digitalmars.com/d/archives/digitalmars/D/Implementation_hiding_139625.html

There is another issue Walter forgot to mention in the article.
I think there might be a way but looks like we also loose the  
"destructor".
Which means we are all the way back to the  
http://en.wikipedia.org/wiki/Opaque_pointer.


Walter, is there a way to get around destructor limitation?


1. Lose, not loose.
2. I linked the wikipedia page to point to the "C version" not the others.
I was wrong calling them taft types (it is the name of the Ada version).


Re: C++ pimpl

2012-01-22 Thread so

On Mon, 23 Jan 2012 01:39:23 +0200, so  wrote:

I have been asking that for some time now, i am afraid you won't get  
much of an audience.
You can get rid of both additional allocation and indirection but it is  
not pretty. We could definitely use some help/sugar on this.


http://www.artima.com/cppsource/backyard3.html


http://www.digitalmars.com/d/archives/digitalmars/D/Implementation_hiding_139625.html

There is another issue Walter forgot to mention in the article.
I think there might be a way but looks like we also loose the "destructor".
Which means we are all the way back to the  
http://en.wikipedia.org/wiki/Opaque_pointer.


Walter, is there a way to get around destructor limitation?


Re: C++ pimpl

2012-01-22 Thread so
I have been asking that for some time now, i am afraid you won't get much  
of an audience.
You can get rid of both additional allocation and indirection but it is  
not pretty. We could definitely use some help/sugar on this.


http://www.artima.com/cppsource/backyard3.html

On Thu, 19 Jan 2012 22:48:39 +0200, Roberto Caravani   
wrote:


Qt for example uses the pimpl idiom for achieving ABI compatibility  
between releases. The problem is additional heap allocations, additional  
indirection and you pay for it, whether needed or not. (For example even  
derived classes in the same library pay for it.)

I wondered whether this would still be necessary in D and I think not.
In D, as interface files are automatically generated, it could be  
possible to have ones created with let's say a special "@private_impl"  
property or something. For these classes the object size would have to  
be stored as hidden static member in the library. The new operator could  
then simply read the size and allocate the needed space. Derived class  
methods can also use the size to calculate the offset of the derived  
class data members.


So you would lose some optimizations, e.g. initializing of base members  
can't be inlined and stuff. But this is not possible with pimpl either  
and you gain the following:
 - You only pay for it if you want it. You can also use the standard .di  
file and lose the ABI compatibility between versions if you so want, and  
derived classes in the same library do not need to pay any additional  
overhead either.
 - It is completely transparent: If you later on decide you need ABI  
compatibility between releases, it's just a matter of a compiler switch  
and different .di files.

 - I think it will also be more efficient than pimpl in all regards.

I think this would be a real neat and very important feature, when it  
comes to shared libraries. Is there any plan to implement something like  
that in the future? Do I miss something?


Best regards,

Robert


Re: System programming in D (Was: The God Language)

2011-12-30 Thread so
On Sat, 31 Dec 2011 04:30:01 +0200, Walter Bright  
 wrote:



On 12/30/2011 5:59 PM, so wrote:
Well not them but another dummy function, i didn't think it would  
differ this much.


It differs that much because once it is inlined, the optimizer deletes  
it because it does nothing.


I don't think it is a valid test.


Yes, I can see that from the asm output, but are we talking about the
same thing? @inline IS all about that. We can try it with any example;
one outperforming the other is not the point.


for()
   fun()

With or without @inline I know fun should/will get folded away, so why
should I pay for the function call?

Re: System programming in D (Was: The God Language)

2011-12-30 Thread so

On Sat, 31 Dec 2011 03:40:43 +0200, Iain Buclaw  wrote:


Take a pick of any examples posted on this ML.  They are far better
fit to use as a test bed.  Ideally one that does number crunching and
can't be easily folded away.


I don't understand your point, btw: why shouldn't it be easily folded
away? @inline is exactly for that reason; why would I pay for something
I don't want?


Re: System programming in D (Was: The God Language)

2011-12-30 Thread so

On Sat, 31 Dec 2011 03:40:43 +0200, Iain Buclaw  wrote:


Take a pick of any examples posted on this ML.  They are far better
fit to use as a test bed.  Ideally one that does number crunching and
can't be easily folded away.


Well not them but another dummy function, i didn't think it would differ  
this much.


time ./test_inl

real    0m0.013s
user    0m0.007s
sys     0m0.003s

time ./test

real    0m7.753s
user    0m5.966s
sys     0m0.013s

time ./test_inl

real    0m0.013s
user    0m0.010s
sys     0m0.000s

time ./test

real    0m7.391s
user    0m5.960s
sys     0m0.017s

time ./test_inl

real    0m0.014s
user    0m0.007s
sys     0m0.003s

time ./test

real    0m7.582s
user    0m5.950s
sys     0m0.030s


real test() // test.d
real test() @inline // test_inl.d
{
real a=423123, b=432, c=10, d=100, e=4045, f=123;
a = a / b * c / d + e - f;
b = a / b * c / d + e - f;
c = a / b * c / d + e - f;
d = a / b * c / d + e - f;
e = a / b * c / d + e - f;
f = a / b * c / d + e - f;
a = a / b * c / d + e - f;
b = a / b * c / d + e - f;
c = a / b * c / d + e - f;
d = a / b * c / d + e - f;
e = a / b * c / d + e - f;
f = a / b * c / d + e - f;
a = a / b * c / d + e - f;
b = a / b * c / d + e - f;
c = a / b * c / d + e - f;
d = a / b * c / d + e - f;
e = a / b * c / d + e - f;
f = a / b * c / d + e - f;
return f;
}

void main()
{
for(uint i=0; i<1_000_000_0; ++i)
test();
}

