Re: do D support something like C# 4.0 co/contra-variance?

2010-10-20 Thread Bruno Medeiros

On 15/10/2010 13:49, Sean Reque wrote:

Java doesn't have variant generic type parameters. Wildcard types in Java are 
NOT the same thing and are certainly not more powerful. C# doesn't have 
wildcard types because it doesn't implement generics with erasure. See 
http://stackoverflow.com/questions/527766/what-is-the-equivalent-of-java-wildcards-in-c-generics.


What do you mean they are not the same? Yes, they don't work the same 
way, and their semantics are fairly different, but they are both attempting 
to address the same problem: to improve safety and expressiveness in the 
type system with regard to variance in generic type parameters.


Why is C#'s approach to variance more powerful than Java's, and not 
the other way around? (You're not off to a good start when the article 
you mentioned describes a scenario that Java can express, but C# can't, at 
least not clearly. ;) )


And think carefully before you justify the above by saying that C#'s 
generics are not implemented with erasure (i.e., the generic parameter 
information is preserved at runtime).
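Both mechanisms exist to make variance safe. As a rough illustration of the shared underlying idea, here is a sketch using Python's typing module, whose declaration-site variance is closer to C#'s model than to Java's use-site wildcards (all names are invented for the example):

```python
from typing import Generic, TypeVar

# Declaration-site covariance (C#'s model): T_co may appear only in
# "output" positions, so a Producer of a subtype is safely usable
# where a Producer of the supertype is expected.
T_co = TypeVar("T_co", covariant=True)

class Producer(Generic[T_co]):
    def __init__(self, value: T_co) -> None:
        self._value = value

    def get(self) -> T_co:  # output position only
        return self._value
```

Restricting the type parameter to output positions is exactly what makes the covariant conversion sound in either language.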


--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code

2010-10-20 Thread Bruno Medeiros

On 10/09/2010 19:20, Walter Bright wrote:

http://www.reddit.com/r/programming/comments/dc6ir/overlooked_essentials_for_optimizing_code/



Generally, an interesting article. However, there are a few points I 
would like to counter:


"Nope, it isn't avoiding premature optimization. It isn't replacing 
bubble sort with quicksort (i.e. algorithmic improvements). It's not 
what language used, nor is it how good the compiler is. It isn't writing 
i<<2 instead of i*4.


It is:

   1. Using a profiler
   2. Looking at the assembly code being executed
"

There is a bit of confusion here. The first group of things (algorithmic 
improvements, a better language or compiler, "i << 2" tricks, etc.) is 
not of the same nature as the second (1 & 2: using a profiler and 
looking at the assembly).
The first are techniques for optimizing particular code; the second are 
strategies for figuring out *which* code is worth optimizing. It does 
not make sense to oppose the two, because the second actually requires 
the first to be useful. I mean, no code has ever improved in performance 
*strictly just* by using a profiler or looking at the assembly code 
being executed. :)



But more importantly, 1 and 2 (especially 1, "using a profiler") are 
crucial elements of "avoiding premature optimization". I learnt "avoid 
premature optimization" as meaning "don't optimize code until you *know* 
that code is a bottleneck", and something like 80% of the 
books/articles/college courses that taught about premature optimization 
also explicitly mentioned that running a profiler is by far the best way 
to determine which code is the bottleneck, and that the alternative, 
guessing, is bad because it is so often wrong.
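As a concrete illustration of that advice, a minimal profiling session might look like this (sketched in Python with the standard cProfile module; the function names are invented for the example):

```python
import cProfile
import io
import pstats

def hot(n):
    # Deliberately heavy inner loop: the bottleneck we want the
    # profiler to point at.
    return sum(i * i for i in range(n))

def main():
    return [hot(20_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
# The report names the functions where time is actually spent,
# replacing guesswork with measurement.
```

The point is that the report identifies the hot spot; only then do the optimization techniques from the first group come into play.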




"Though that is undeniably true, there are two caveats that don't get 
taught in schools. First and most importantly, choosing the best 
algorithm for a part of the program that has no participation to the 
performance profile has a negative effect on optimization because it 
wastes your time that could be better invested in making actual 
progress, and diverts attention from the parts that matter."


I don't think it's true that this "doesn't get taught in schools", at 
least in CS university degrees. In my degree this was explained in 
detail in at least two core curriculum courses, and a few more times in 
other non-core (optional/specialization) courses. Also, as I mentioned 
above, the majority of other learning material (articles, books) that 
talks about optimization does mention the importance of profiling.


"Second, algorithms' performance always varies with the statistics of 
the data they operate on. Even bubble sort, the butt of all jokes, is 
still the best on almost-sorted data that has only a few unordered 
items. So worrying about using good algorithms without measuring where 
they matter is a waste of time - your's and computer's."
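The quoted claim about bubble sort can be made concrete with the common early-exit variant, which finishes in very few passes on almost-sorted input (a Python sketch, not code from the article):

```python
def bubble_sort(items):
    """Early-exit bubble sort; returns (sorted list, number of passes)."""
    a = list(items)
    passes = 0
    while True:
        passes += 1
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # a full pass with no swaps: we are done
            return a, passes

# Almost-sorted input: a single adjacent pair out of order.
result, passes = bubble_sort([1, 2, 3, 5, 4, 6])
# Two passes: one to fix the pair, one to confirm the order.
```

On data like this the cost is close to O(n), which is why measuring against the actual input distribution matters.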


Also, the notion that an intrinsic part of algorithm design is to 
understand what kind of data you are going to work on was also mentioned 
in one of the core curriculum courses in my degree (although in less 
detail than case 1, above).


I don't mean to offend anyone, but if your CS degree (at least from the 
last decade or so) doesn't teach points 1 and 2 above as part of its 
core curriculum, then it's a pretty crappy CS degree. The same is 
probably also true for other related degrees (*-engineering, maths), at 
least with regard to point 1.


--
Bruno Medeiros - Software Engineer


Re: A summary of D's design principles

2010-10-21 Thread Bruno Medeiros

On 17/09/2010 23:39, retard wrote:

Fri, 17 Sep 2010 14:33:30 -0700, Walter Bright wrote:


retard wrote:

FWIW, if you're picking up one of the most used languages out there,
their list won't differ that much:


Exactly. Much of that can be summed up as D being intended for
professional production use, rather than:

1. a teaching tool (Pascal)
2. a research project (Haskell)
3. being focussed on solving one particular problem (Erlang)
4. designed to promote a related product (Flash)
5. designed for kids (Logo)
6. designed for non-programmers (Basic)
7. one paradigm to rule them all (Smalltalk)
8. gee, math is hard (Java)
9. implementing skynet (Lisp)


A funny pic, somewhat related.. (language X, as seen by language Y users)

http://i.imgur.com/1gF1j.jpg


retard, this is your best post ever! xP

--
Bruno Medeiros - Software Engineer


Re: A summary of D's design principles

2010-10-21 Thread Bruno Medeiros

On 15/09/2010 18:58, Andrei Alexandrescu wrote:

A coworker asked me where he could find a brief document of D's design
principles. This was after I'd mentioned the "no function hijacking"
stance.

I think it would be a great idea if the up-and-coming
www.d-programming-language.org contained such a document.

Ideas for what it could contain? I know we discussed this once in the
past, but couldn't find the discussion.


Andrei



(recapitulating something that popped in the middle of a discussion a 
while back) :


* D aims to be suitable for medium and large sized software projects. In 
other words, it aims to scale well with increases of: developers, source 
code, software components, project duration, teams, third party 
components, etc..


I don't think this goal would actually influence D's design that much, 
because if language changes and features are carefully designed, they 
would rarely be detrimental to small-scale projects in favor of medium 
or large ones.
Rather, the big benefit of the statement above would be to cut short 
certain wasteful discussions or comments that pop up occasionally, in 
which someone proposes some "Pythonesque" change that might benefit 
small programs but would be crap for medium/large ones.



--
Bruno Medeiros - Software Engineer


Re: A summary of D's design principles

2010-10-21 Thread Bruno Medeiros

On 21/10/2010 14:28, Justin Johansson wrote:

On 21/10/2010 11:13 PM, Bruno Medeiros wrote:

On 17/09/2010 23:39, retard wrote:

Fri, 17 Sep 2010 14:33:30 -0700, Walter Bright wrote:


retard wrote:

FWIW, if you're picking up one of the most used languages out there,
their list won't differ that much:


Exactly. Much of that can be summed up as D being intended for
professional production use, rather than:

1. a teaching tool (Pascal)
2. a research project (Haskell)
3. being focussed on solving one particular problem (Erlang)
4. designed to promote a related product (Flash)
5. designed for kids (Logo)
6. designed for non-programmers (Basic)
7. one paradigm to rule them all (Smalltalk)
8. gee, math is hard (Java)
9. implementing skynet (Lisp)


A funny pic, somewhat related.. (language X, as seen by language Y
users)

http://i.imgur.com/1gF1j.jpg


retard, this is your best post ever! xP


Yes, it was a good post by retard.
I remember seeing it a few weeks ago.
IIRC D was not in the matrix of pictures
and one wonders how the missing rows/columns
for D should be rendered. :-)





I don't think you can have a row/column for D at this stage:
* Non-D programmers are not familiar enough with D to have an opinion of 
it, at least in a "stereotype" sense.
* And as for what D programmers think of other languages, well, it seems 
D attracts programmers from very varied backgrounds (C/C++, Java, 
scripting languages, etc.), so there would likely be wildly varied 
opinions about other languages. Probably the only consistent opinion 
would be about C/C++ (that although useful and powerful, it has immense 
innate shortcomings).


--
Bruno Medeiros - Software Engineer


Re: A summary of D's design principles

2010-10-22 Thread Bruno Medeiros

On 21/10/2010 18:07, bearophile wrote:

Bruno Medeiros:


Rather, the big benefit of the statement above would be to reduce
certain wasteful discussions or comments that pop-up occasionally in
which someone proposes some "Pythonesque" change that might benefit
small programs but would be crap for medium/large ones.


I suggest to judge each proposed feature on its own, instead of refusing all the 
"Pythonesque" changes together.



I agree, and never suggested or implied otherwise!

My reference to Python was a lighthearted one; I did not want to make a 
generic statement about all Python features (that's why I put quotes 
around "Pythonesque"). I am not even familiar with Python beyond its 
syntax, and in fact indentation-as-blocks was the example I had in mind 
when I mentioned "Pythonesque".




A well designed tuple unpacking syntax will shorten D code and make it more 
readable and more handy to write, so I think it's a positive change, both for 
little and large D programs.

As with every language feature, tuples too may be abused: in large programs, if many 
of your functions/methods return tuples with five or six different anonymous 
fields, your program will not be very readable.

Another "Pythonesque" example of change that I regard as positive for D 
programs of all sizes are lazy/eager array/range comprehensions. If you don't abuse them, 
they shorten code, make it more readable, and avoid the cluttering of brackets and 
parentheses typical of lambdas with map+array/filter+array functions.

Bye,
bearophile


I've just recalled another "corollary" design goal, a stricter version of the 
previous one, that I would also like to see adopted:


 * If changes are of limited use to medium/large scale projects, they 
should also not be considered, even if they are not detrimental to 
projects of such scale.


The reason for this is to save work in implementing language tools 
(compilers, IDEs, any kind of code analysis tools, etc.). It should be 
obvious to everyone that the time of the programmers working on such 
tools, especially the compiler, is a very precious resource.
Indeed, Walter has already expressed his intention not to adopt features 
that are of limited use; however, how useful a feature is can be very 
debatable and often not agreed upon. So what I am arguing for, in the 
point above, is that usefulness should be evaluated only in the context 
of medium/large projects.



I am not implying that the tuple features you mentioned above have 
limited use like I described. I am not familiar with those changes at 
the moment and won't comment on them now.



--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code

2010-10-22 Thread Bruno Medeiros

On 20/10/2010 16:17, dsimcha wrote:

== Quote from Bruno Medeiros (brunodomedeiros+s...@com.gmail)'s

Also, the notion that an intrinsic part of algorithm design is to
understand what kind of data you are going to work was also mentioned in
one of the core curricula courses in my degree (although with less
detail than with case 1, above).
I don't mean to offend anyone, but if your CS degree (at least for the
last decade or so), doesn't teach about points 1 and 2 above as part of
core curricula, then it's a pretty crappy CS degree. The same is
probably also true for other related degrees (*-engineering, maths), at
least with regards to point 1.


You have a point to some degree, but I've noticed two related themes from my
experience with my Ph.D. research and with the D community.  Both of these are
things Walter seems to understand and exploit exceptionally well.

1.  There's a big difference between "it was covered" and "most people actually
understood it".  Of course the first is a necessary condition for the second, 
but
it's not a sufficient condition.  Of course any CS/software engineering program
worth its salt will recommend using a profiler, but the question is, does it 
sink
in for most students or was it mentioned in maybe one lecture on a Friday 
morning
when half the class was hung over and the other half was still sleeping?



If a student is not paying attention, didn't go to class, or didn't 
study a topic, that's the student's fault, and there is little that the 
university can (or even should) try to do to change that. But that's 
beside the point. I wasn't arguing that a lot of students or 
professionals understand the issues around premature optimization and 
profilers. I was just arguing against "[it] doesn't get taught in 
schools". I do agree, however, that a topic can be covered with varying 
degrees of detail and importance.


As for premature optimization and profiling, it was well covered in my 
degree. I clearly remember it being mentioned in the very first 
"Introduction to Programming" course (based on the MIT "Structure and 
Interpretation of Computer Programs"), where the Pareto principle as 
applied to programs was explained (aka the 90/10 rule). And the topic 
was studied again during the "Software Engineering" course, as part of 
the key larger topic of how best to manage and optimize the time of 
programmers and team members in a real-world project. It was also 
mentioned (in the context of the Agile series) that even if you know for 
sure that the code you are writing is indeed part of the 10% bottleneck, 
you likely should not optimize it yet, as changing requirements may make 
that code unused, or no longer part of the bottleneck.


As for the importance of analyzing the inputs of algorithms, well, that 
one was also mentioned, but definitely not covered as well. It was 
mentioned in the context of hashing keys, but mostly only in passing; a 
lot of the nuances of the topic were not discussed. It was only recently 
that I learned more about this, as I watched the MIT OCW lectures of the 
"Introduction to Algorithms" course, as well as reading part of the 
respective book.
For example, the issue of deterministic vs. probabilistic hashing: if 
you have a deterministic hash function which does distribute your input 
keys well in the average case, that may actually not be good enough! 
Because if people have access to how your hash function works, an 
adversary can force the worst case to happen (random hashing is the 
solution to that). I guess that with the rise of the Internet, scenarios 
where this matters are more common.
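The adversarial scenario described above can be demonstrated with a toy deterministic hash (a Python sketch; the bucket count and hash function are invented for the example):

```python
from collections import defaultdict

BUCKETS = 64

def toy_hash(key: str) -> int:
    # Deterministic and publicly known: sum of character codes.
    # (Toy function invented for this example.)
    return sum(map(ord, key)) % BUCKETS

# An adversary who knows the function crafts distinct keys with equal
# character sums, forcing them all into one bucket and degrading
# lookups from O(1) to O(n).
adversarial = ["ad", "bc", "cb", "da"]

buckets = defaultdict(list)
for key in adversarial:
    buckets[toy_hash(key)].append(key)
# All four keys collide in a single bucket.
```

Seeding the hash with a per-process random value (as CPython does for string hashing, see PYTHONHASHSEED) makes such precomputed collision sets infeasible.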



2.  "It's been done before" is not a good reason not to do something if you're
going to up-level it.  If something's been done before at a proof-of-concept
level, it's often rewarding to tweak/improve it to the point where it's 
practical.
  If it's practical, it's often rewarding to try to tweak/improve it so it's 
usable
by mere mortals and is well integrated into whatever other stuff might be 
related.
  If it's usable by mere mortals, it's often rewarding to figure out what
improvements need to be made to break barriers to widespread adoption.


??
I have no idea what you mean by this...

--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code

2010-10-22 Thread Bruno Medeiros

On 21/10/2010 09:02, Peter Alexander wrote:

On 20/10/10 2:59 PM, Bruno Medeiros wrote:

I don't mean to offend anyone, but if your CS degree (at least for the
last decade or so), doesn't teach about points 1 and 2 above as part of
core curricula, then it's a pretty crappy CS degree. The same is
probably also true for other related degrees (*-engineering, maths), at
least with regards to point 1.


I don't really think of CS that way. To me, CS is to practical
programming as pure math is to accounting, i.e. I don't think CS should
be teaching about profiling because that's what software engineering is
for. They are two different worlds in my opinion. If you wanted to get a
practical programming education and you took CS then I think you took
the wrong degree.


Well, you think wrongly. :)
If you look at the top universities worldwide, the majority of them have 
only one "computer programming" undergraduate degree. Sometimes it is 
called "Computer Science" (typical in the US); other times it is called 
"Computer Engineering", "Informatics Engineering", "Software 
Engineering", "Informatics Science" or something like that (typical in 
Europe). But despite the different names they are essentially the same: 
courses designed to _teach and educate future software engineers_. A 
good software engineer will need a lot of the foundations of CS and 
maths. Those courses are nonetheless perfectly fine for someone who 
wishes to study CS at an academic level (i.e., research). It does not 
make sense to have a separate undergraduate degree (other than the CS 
degree or the Math degree), and in some cases it also does not make 
sense to have a separate graduate degree (MSc).



--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code

2010-10-25 Thread Bruno Medeiros

On 22/10/2010 15:56, Diego Cano Lagneaux wrote:

Well, you think wrongly. :)
If you look at the top universities worldwide, the majority of them
have only one "computer programming" undergraduate degree. Sometimes
it is called "Computer Science" (typical in the US), other times it is
called "Computer Engineering", "Informatics Engineering", "Software
Engineering", "Informatics Science" or something like that (typical in
Europe), but despite the different names they are essentially the
same: courses designed to _teach and educate future software engineers_.


I must add some nuance: as a European* "Informatics (and Applied Maths**)
engineer", I can say this degree is not 'software engineer' but indeed
'whole computer engineer', as we studied both software and hardware, to
the point of building a complete (simulated) processor.
Furthermore, I can't recall them telling us about profiling tools, but it
was 10 years ago and I skipped a few classes, so that means nothing.



Which degree did 'Software engineers' take then?

--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-10-25 Thread Bruno Medeiros

On 22/09/2010 09:13, Don wrote:

Don wrote:

The docs currently state that:



PROPOSAL:
Drop the first requirement. Only one requirement is necessary:

A pure function does not read or write any global mutable state.



Wow. It seems that not one person who has responded so far has
understood this proposal! I'll try again. Under this proposal:

If you see a function which has mutable parameters, but is marked as
'pure', you can only conclude that it doesn't use global variables.
That's not much use on its own. Let's call this a 'weakly-pure' function.

However, if you see a function marked as 'pure' which also has only
immutable parameters, you have the same guarantee which 'pure' gives us
at the moment. Let's call this a 'strongly-pure' function.

The benefit of the relaxed rule is that a strongly-pure function can
call weakly-pure functions while remaining strongly-pure.
This allows very many more functions to become strongly pure.

The point of the proposal is *not* to provide the weak guarantee. It is
to provide the strong guarantee in more situations.


I'm confused: Isn't this essentially the same as the "partially pure" 
functions idea that was discussed as far back as 2008? And wasn't it 
agreed already that this would be the way things would work?
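For reference, the weak/strong distinction being discussed can be sketched as follows (in Python, purely as an illustration of the semantics; D expresses this with `pure` functions and `immutable` parameters, and the function names here are invented):

```python
def fill_squares(buf: list, n: int) -> None:
    # "Weakly pure": touches no global state, but may mutate its
    # (mutable) parameter.
    for i in range(n):
        buf.append(i * i)

def squares(n: int) -> tuple:
    # "Strongly pure": only immutable inputs (an int), so it is safe
    # to cache or parallelize -- even though it calls the weakly pure
    # helper, because the mutation stays on locally created data.
    buf: list = []
    fill_squares(buf, n)
    return tuple(buf)
```

The strong function keeps its guarantee because the helper's mutation is invisible outside the call.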



--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-10-25 Thread Bruno Medeiros

On 23/09/2010 23:39, Robert Jacques wrote:

On Thu, 23 Sep 2010 16:35:23 -0400, Tomek Sowiński  wrote:


On topic: this means a pure function can take a reference to data that
can be mutated by
someone else. So we're giving up on the "can parallelize with no
dataraces" guarantee on
pure functions?



In short, no. In long: the proposal is for pure functions to be broken up
into two groups (weak and strong) based on their function signatures.
This division is internal to the compiler, and isn't expressed in the
language in any way. Strongly-pure functions provide all the guarantees
that pure does today and can be automatically parallelized or cached
without consequence. Weakly-pure functions, don't provide either of
these guarantees, but allow a much larger number of functions to be
strongly-pure. In order to guarantee a function is strongly pure, one
would have to declare all its inputs immutable or use an appropriate
template constraint.


I think we need to be more realistic about what kinds of optimizations 
we could expect from a D compiler and pure functions.
Caching might be done, but only in a temporary sense (caching within a 
limited execution scope). I doubt we would ever have something like 
memoization, which would incur memory costs (potentially quite big 
ones), and so the compiler almost certainly would not be able to know 
(without additional metadata/annotations or compile options) whether 
that trade-off is acceptable.
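The trade-off described above is exactly what an explicit, bounded memoization annotation lets the programmer state; a sketch using Python's functools.lru_cache (an illustration only, not a proposed D mechanism):

```python
from functools import lru_cache

# The programmer, not the compiler, states the memory budget:
# at most 128 cached results are kept alive.
@lru_cache(maxsize=128)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

value = fib(30)
info = fib.cache_info()  # hits/misses/currsize are observable
```

Here the annotation makes the memory cost explicit and bounded, which is the information a compiler alone cannot infer.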


Similarly for parallelism, how would the compiler know that it's ok to 
spawn 10 or 100 new threads to parallelize the execution of some loop?
The consequences for program and whole-machine scheduling would not be 
trivial and easy to understand. For this to happen, amongst other things 
the compiler and OS would need to ensure that the spawned threads would 
not starve the rest of the threads of that program.
I suspect all these considerations might be very difficult to guarantee 
on a non-VM environment.


--
Bruno Medeiros - Software Engineer


Re: The Wrong Stuff

2010-10-25 Thread Bruno Medeiros

On 26/09/2010 14:26, Michel Fortin wrote:

Unfortunately, one attribute with the @ syntax in D -- and I'd say the
flagship one as it was the first and most discussed -- is not a pure
annotation as it has a noticeable effect on semantics. I'm talking about
@property which, if it ever get implemented, changes your function so it
simulates a field. In Herb's terms, @property is clearly a keyword in
disguise.


You don't want attributes to affect semantics? That's odd; it seems to 
me the most useful scenarios for @attributes are actually those that 
affect semantics, and they're somewhat limited otherwise.

I mean, what exactly do you mean by "effect on semantics"?


--
Bruno Medeiros - Software Engineer


Re: The Wrong Stuff

2010-10-25 Thread Bruno Medeiros

On 25/10/2010 16:01, Michel Fortin wrote:

On 2010-10-25 10:32:45 -0400, Bruno Medeiros
 said:


On 26/09/2010 14:26, Michel Fortin wrote:

Unfortunately, one attribute with the @ syntax in D -- and I'd say the
flagship one as it was the first and most discussed -- is not a pure
annotation as it has a noticeable effect on semantics. I'm talking about
@property which, if it ever get implemented, changes your function so it
simulates a field. In Herb's terms, @property is clearly a keyword in
disguise.


You don't want attributes to affect semantics? That's odd, it seems to
me the most useful scenarios for @attributes is actually to affect
semantics, and they're somewhat limited otherwise.
I mean, what exactly do you mean by "effect on semantics" ?


What I meant is that @property actually *changes* the semantics: calling
the function becomes a different thing and the function lose its regular
semantics. Other attributes only *restrict* existing semantics, they
don't change the existing semantics beyond making illegal some things
which are normally legal.



Hum, I think I see what you mean. That being the case, I agree: if an 
@annotation radically changes the nature of whatever is being defined, 
it probably should not be an @annotation.
But I don't agree with Herb Sutter's comment that "Attributes are 
acceptable as pure annotations, but they should have no semantic effect 
on the program.", at least as applied to D. Just the fact that C++'s 
attributes take four extra characters (compared to just one in D, or 
Java for example) makes the comparison not very valid. I also would not 
like to have [[override]] [[pure]] [[safe]], etc. in D.




--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-10-25 Thread Bruno Medeiros

On 25/10/2010 13:46, Bruno Medeiros wrote:


I'm confused: Isn't this essentially the same as the "partially pure"
functions idea that was discussed as far back as 2008? And wasn't it
agreed already that this would be the way things would work?




I've only now seen this additional post (which shows up outside its 
thread in my Thunderbird):


On 25/09/2010 03:53, Jason House wrote:
> It looks like your proposal was accepted. Walter just checked in 
changes to make this a reality. 
http://www.dsource.org/projects/dmd/changeset/687



I was getting worried otherwise.

--
Bruno Medeiros - Software Engineer


Re: Module-level accessibility

2010-10-26 Thread Bruno Medeiros

On 05/10/2010 08:27, Jacob Carlborg wrote:

If you have two classes in one file in Java you can access private
methods from the other class.



Only if one class is nested in the other.

--
Bruno Medeiros - Software Engineer


Re: We need to kill C syntax for declaring function types

2010-10-26 Thread Bruno Medeiros

On 05/10/2010 10:50, Walter Bright wrote:

Lars T. Kyllingstad wrote:

http://dsource.org/projects/dmd/changeset/703

:)


Don, as usual, made a compelling case.


Programs going down in flames is always a compelling argument... ^_^'

--
Bruno Medeiros - Software Engineer


Re: We need to kill C syntax for declaring function types

2010-10-26 Thread Bruno Medeiros

On 04/10/2010 10:07, Don wrote:

A great example of how C syntax is hurting us.
---
I found this bit of code in std.container, inside BinaryHeap:

size_t insert(ElementType!Store value)
{
static if (is(_store.insertBack(value)))
{
...
}
else ...

What does the static if do? It's *intended* to check if _store has a
member function insertBack(), which accepts type of a 'value'.

But instead, it ALWAYS silently does the 'else' clause.
Unless _store.insertBack is a valid *type*, (eg, alias int insertBack;).
In which case it gives an error "declaration value is already defined".

Why?

This happens because
x(y); is valid C syntax for declaring a type 'y', such that &y is of
type 'x function()'.

The C syntax is unspeakably ridiculous, useless, and downright
dangerous. It shouldn't compile.



Whoa. I considered myself completely knowledgeable about the C language, 
but I had no idea about this syntax. (Note: by "completely 
knowledgeable" I don't mean I could recite the spec from memory, but 
rather that at least I knew what features, syntax and semantics ANSI C 
89 had available.)


Hum, your description of what "x(y);" means seems slightly incorrect 
though. Both in C and D, if x is a type, then it is the same as "x y;", 
that is, it declares a variable y of type x. If x is not a type but y 
is, it seems to be the same as "void x(y);", that is, it declares a 
function prototype named x. If both are not types, then it declares that 
strange thing I don't quite understand, nor am I interested in...


I do vaguely recall learning about the first scenario, where a variable 
is declared, but I had no idea about the others. Is this mentioned in 
K&R TCPL 2nd edition?


Not that it matters, it's still horrid! I'm glad we're nuking it from D.

--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-10-28 Thread Bruno Medeiros

On 26/10/2010 03:07, Don wrote:

Bruno Medeiros wrote:

On 23/09/2010 23:39, Robert Jacques wrote:

On Thu, 23 Sep 2010 16:35:23 -0400, Tomek Sowiński  wrote:


On topic: this means a pure function can take a reference to data that
can be mutated by
someone else. So we're giving up on the "can parallelize with no
dataraces" guarantee on
pure functions?



In short, No. In long; the proposal is for pure functions become broken
up into two groups (weak and strong) based on their function signatures.
This division is internal to the compiler, and isn't expressed in the
language in any way. Strongly-pure functions provide all the guarantees
that pure does today and can be automatically parallelized or cached
without consequence. Weakly-pure functions, don't provide either of
these guarantees, but allow a much larger number of functions to be
strongly-pure. In order to guarantee a function is strongly pure, one
would have to declare all its inputs immutable or use an appropriate
template constraint.


I think we need to be more realistic with what kinds of optimizations
we could expect from a D compiler and pure functions.
Caching might be done, but only a temporary sense (caching under a
limited execution scope). I doubt we would ever have something like
memoization, which would incur memory costs (potentially quite big
ones), and so the compiler almost certainly would not be able to know
(without additional metadata/annotations or compile options) if that
trade-off is acceptable.

Similarly for parallelism, how would the compiler know that it's ok to
spawn 10 or 100 new threads to parallelize the execution of some loop?
The consequences for program and whole-machine scheduling would not be
trivial and easy to understand. For this to happen, amongst other
things the compiler and OS would need to ensure that the spawned
threads would not starve the rest of the threads of that program.
I suspect all these considerations might be very difficult to
guarantee on a non-VM environment.


I agree with this, especially with regard to memoization.
However, several other very interesting optimisations are
possible with pure, especially with regard to memory allocation.

At exit from an immutably pure function, all memory allocated by that
function and its subfunctions can be collected, except for anything
which is reachable through the function return value.
Even for a weakly-pure function, the set of roots for gc is limited to
the mutable function parameters and the return value.
And for all pure functions with no mutable reference parameters, it is
guaranteed that no other thread has access to them.

The implications for gc are very interesting.


Indeed, this opens up several possibilities; it will be interesting to 
see what else pure can give us in terms of optimization. (I'm not a 
compiler or low-level optimization expert, so my imagination here is 
somewhat limited :P )


Even for the optimizations that should not be applied completely 
automatically (the aforementioned parallelization and memoization), it 
is nice to have a language mechanism to verify their safety.


--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-10-28 Thread Bruno Medeiros

On 26/10/2010 04:47, Robert Jacques wrote:

On Mon, 25 Oct 2010 09:44:14 -0400, Bruno Medeiros
 wrote:


On 23/09/2010 23:39, Robert Jacques wrote:

On Thu, 23 Sep 2010 16:35:23 -0400, Tomek Sowiński  wrote:


On topic: this means a pure function can take a reference to data that
can be mutated by
someone else. So we're giving up on the "can parallelize with no
dataraces" guarantee on
pure functions?



In short, No. In long; the proposal is for pure functions become broken
up into two groups (weak and strong) based on their function signatures.
This division is internal to the compiler, and isn't expressed in the
language in any way. Strongly-pure functions provide all the guarantees
that pure does today and can be automatically parallelized or cached
without consequence. Weakly-pure functions, don't provide either of
these guarantees, but allow a much larger number of functions to be
strongly-pure. In order to guarantee a function is strongly pure, one
would have to declare all its inputs immutable or use an appropriate
template constraint.


I think we need to be more realistic with what kinds of optimizations
we could expect from a D compiler and pure functions.
Caching might be done, but only in a temporary sense (caching under a
limited execution scope). I doubt we would ever have something like
memoization, which would incur memory costs (potentially quite big
ones), and so the compiler almost certainly would not be able to know
(without additional metadata/annotations or compiler options) if that
trade-off is acceptable.

Similarly for parallelism, how would the compiler know that it's ok to
spawn 10 or 100 new threads to parallelize the execution of some loop?
The consequences for program and whole-machine scheduling would not be
trivial and easy to understand. For this to happen, amongst other
things the compiler and OS would need to ensure that the spawned
threads would not starve the rest of the threads of that program.
I suspect all these considerations might be very difficult to
guarantee on a non-VM environment.



Ahem, it's trivial for the compiler to know if it's okay to spawn 10 or
100 _tasks_. Tasks, as opposed to threads or even thread pools, are
extremely cheap (think on the order of function call overhead).


What are these tasks you mention? I've never heard of them.

--
Bruno Medeiros - Software Engineer


Re: Streaming library (NG Threading)

2010-10-29 Thread Bruno Medeiros

On 13/10/2010 18:48, Daniel Gibson wrote:

Andrei Alexandrescu schrieb:

On 10/13/10 11:16 CDT, Denis Koroskin wrote:

P.S. For threads this deep it's better fork a new one, especially when
changing the subject.


I thought I did by changing the title...


Andrei


At least on my Thunderbird/Icedove 2.0.0.24 it's still in the old Thread.


Same here on my Thunderbird 3.0. It seems TB cares more about the 
"References:" field in NNTP message to determine the parent. In fact, 
with version 3 of TB, it seems that's all it considers... which means 
that NG messages with the same title as the parent will not be put in 
the same thread as the parent if they don't have the references field.
That sounds like the right approach, however there are some problems in 
practice because some clients never put the references field 
(particularly Webnews I think), so all those messages show up in my TB 
as new threads. :/
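
For reference, threading-capable clients emit something like the
following headers on a reply (the message IDs here are made up for
illustration):

```
Subject: Re: Streaming library (NG Threading)
Message-ID: <reply-id@example.invalid>
References: <parent-id@example.invalid>
```

A reply missing the References header shows up as a new thread in
clients that follow the References chain, regardless of its Subject.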



--
Bruno Medeiros - Software Engineer


Re: Streaming library (NG Threading)

2010-10-29 Thread Bruno Medeiros

On 29/10/2010 12:50, Denis Koroskin wrote:

On Fri, 29 Oct 2010 15:40:35 +0400, Bruno Medeiros
 wrote:


On 13/10/2010 18:48, Daniel Gibson wrote:

Andrei Alexandrescu schrieb:

On 10/13/10 11:16 CDT, Denis Koroskin wrote:

P.S. For threads this deep it's better fork a new one, especially when
changing the subject.


I thought I did by changing the title...


Andrei


At least on my Thunderbird/Icedove 2.0.0.24 it's still in the old
Thread.


Same here on my Thunderbird 3.0. It seems TB cares more about the
"References:" field in NNTP message to determine the parent. In fact,
with version 3 of TB, it seems that's all it considers... which means
that NG messages with the same title as the parent will not be put in
the same thread as the parent if they don't have the references field.
That sounds like the right approach, however there are some problems
in practice because some clients never put the references field
(particularly Webnews I think), so all those messages show up in my TB
as new threads. :/




Nope, most of the responses through WebNews have correct References in
place.


All responses that appear as new threads in my TB (ie, threads whose 
title starts with "Re: ") and for which I have looked at the message 
source, have user agent:

  User-Agent: Web-News v.1.6.3 (by Terence Yim)
and no references field. These messages are common with some posters, 
like bearophile, Sean Kelly, Kagamin, etc.
But some Web-News messages do have a references field though, so it's 
not all Web-News messages that are missing it.


--
Bruno Medeiros - Software Engineer


Re: Ruling out arbitrary cost copy construction?

2010-10-29 Thread Bruno Medeiros

On 06/10/2010 17:57, dsimcha wrote:

Vote++.  IMHO the problem with arbitrary cost copy construction is that
abstractions that are this leaky don't actually make people's lives simpler.
Abstractions (like value semantics) are supposed to make the code easier to 
reason
about.  When an abstraction forces you to think hard about things as trivial as
variable assignments, I think it's better to either scrap the abstraction and
require all copying to be explicit, or use a less leaky abstraction like 
reference
counting/COW.


I agree with this as well.

But I'm still wondering if the original copy construction problem 
couldn't be solved in some other way.


--
Bruno Medeiros - Software Engineer


Re: Ruling out arbitrary cost copy construction?

2010-10-29 Thread Bruno Medeiros

On 06/10/2010 17:34, Andrei Alexandrescu wrote:



or similar. However, a sealed range does not offer references, so trying
e.g.

swap(r1.front, r2.front);

will not work. This is a problem.


Why doesn't a sealed range offer references? Is it to prevent modifying 
the elements being iterated?

(I searched google and TDPL but couldn't find any info on sealed ranges)

--
Bruno Medeiros - Software Engineer


Re: On C/C++ undefined behaviours (on the term "undefined behaviours")

2010-10-29 Thread Bruno Medeiros
Note: I've only seen this message now, since I am several threads late 
in the (date-ordered) queue of unread NG threads, and this message 
appeared as a new thread.



On 06/10/2010 21:00, bearophile wrote:

Bruno Medeiros:


[...mumble mumble...]
I don't like this term "undefined behavior"
[...mumble mumble...]


I really don't care about words, and about C, I want signed/unsigned 
compile-time/run-time overflow errors in D.

Bye,
bearophile


Like I mentioned afterwards, I think communication is important, so we 
should strive to have a clear understanding of the terms we and other 
people use.


But anyways, regarding this issue, I am satisfied. The D glossary and 
TDPL have precisely defined "undefined behavior", which I didn't know 
was the case.
Also, the related term "implementation defined", which some people in 
the C world conflate with "undefined behavior", has also been used here 
in D, but in a more accurate way, distinct from "undefined behavior". 
So that's good.



--
Bruno Medeiros - Software Engineer


Re: Ada, SPARK [Was: Re: tolf and detab (language succinctness)]

2010-10-29 Thread Bruno Medeiros

On 06/10/2010 22:48, bearophile wrote:

Bruno Medeiros:


[About ADA] That "begin" "end" syntax is awful. I already think just "begin" 
"end" syntax is bad, but also having to repeat the name of block/function/procedure/loop at the "end", that's 
awful.<


If you take a look at my dlibs1, probably more than 60_000 lines of D1 code, 
you may see that all closing braces of templates, functions, classes, and 
structs have a comment like:

} // end FooBar!()

Ada makes sure that comment at the end doesn't get out of sync with the true 
function name :-)

I agree it's not DRY, but when you aren't using a good IDE that comment helps 
you understand where you are in the code.



Well, I rarely use languages without good IDE's, so... :P

But even with IDEs aside, I see little use in that kind of comment: I 
try to keep my functions short, both in line count and in depth of 
nested blocks. And as for aggregate types with large amounts of code, I 
prefer to have only a few of them per source file. So it is rare that I 
can't see the whole block in one page.



That dlibs1 of yours, is it publicly available somewhere? "Let me see 
your micro..."




Ada is not designed to write Web applications, it's useful where you need very 
reliable software, high integrity systems. And in such situations it's very 
hard to beat it with other languages as C/C++/Java/C#. In such situations a 
language that helps you avoid bugs is better than a very handy language like 
Ruby. C language was not designed to write reliable systems, and it shows. D2 
language distances itself from C, trying to be (useful to write code) more 
reliable than C, it uses some of the ideas of Ada (but D2 misses still a basic 
Ada feature important for highly reliable software systems: optional integral 
overflows, that in Ada are active on default).



I'm not an expert on high-reliability/critical systems, but I had the 
impression that the majority of such software was written in C (even if 
with restrictive coding guidelines). Or at least, that much more 
critical software is written in C than in Ada. Is that not the case?



--
Bruno Medeiros - Software Engineer


Re: "in" everywhere

2010-10-29 Thread Bruno Medeiros

On 08/10/2010 00:42, Jonathan M Davis wrote:

On Thursday, October 07, 2010 16:39:15 Tomek Sowiński wrote:

http://en.wikipedia.org/wiki/Halting_problem

It's a classic problem in computer science. Essentially what it comes down to is
that you can't determine when - or even if - a program will halt until it
actually has. It's why stuff like file transfer dialogs can never be totally
accurate. And best, you can estimate how long a file transfer will take based on
the current progress, but you can't _know_ when it will complete.


O_o

--
Bruno Medeiros - Software Engineer


Re: Tuple literal syntax + Tuple assignment

2010-10-29 Thread Bruno Medeiros

On 07/10/2010 19:45, Andrei Alexandrescu wrote:

On 10/7/10 12:45 CDT, Michel Fortin wrote:

On 2010-10-07 12:34:33 -0400, Andrei Alexandrescu
 said:


My suggestion is that we deprecate TypeTuple and we call it AliasTuple
because that's really what it is - it's a tuple of stuff that can be
passed in as an alias parameter.


Personally, I like D built-in tuples; they're so simple. At the core
they're just a group of "things".


They are terrible, awful, despiteful. They don't compose with anything;
you can't have an array of tuples or a hash of tuples. They can't be
returned a from a function. They spread their legs in function parameter
lists without any control (flattening is bad, right?) Built-in tuples
are the pitts. The one thing they're good for is as a back-end for
std.typecons.Tuple.



In fairness, my impression is they were not meant to compose with 
anything or be returned from a function. They were created not as a 
first-class type, but as a metaprogramming construct, whose purpose was 
*exactly* to capture parameters for templates or functions and expand 
them automatically. They were a great boon for D's 
metaprogramming capabilities.
As such, they were not meant to emulate tuples as in Python, or any 
record type in general. But because they can partially be used as such, 
and because they share the same name, a lot of comparisons are made, 
which results in this idea that D's tuples are inferior.


That's not to say it wouldn't be useful to have functionality like 
Python's tuples.
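
A small sketch of the contrast (using std.typetuple and std.typecons as
they exist in Phobos today; the function `takesThree` is made up for
illustration):

```d
import std.typetuple : TypeTuple;
import std.typecons : Tuple, tuple;

void takesThree(int a, string b, double c) {}

void main()
{
    // The built-in (alias) tuple auto-flattens into parameter lists:
    alias TypeTuple!(1, "one", 2.0) args;
    takesThree(args); // expands to takesThree(1, "one", 2.0)

    // std.typecons.Tuple is a real first-class value: it can be
    // returned from functions, stored in arrays, used as a hash key.
    Tuple!(int, string) t = tuple(42, "hello");
    Tuple!(int, string)[] arr = [t];
}
```

The alias tuple flattens into the parameter list automatically, while
the library Tuple does not; that is exactly the composability trade-off
being debated here.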


--
Bruno Medeiros - Software Engineer


Re: [nomenclature] What is a bug?

2010-10-29 Thread Bruno Medeiros

On 12/10/2010 14:14, Justin Johansson wrote:

Perhaps this topic could be posted as
"[challenge] Define just exactly what a bug is".

I trust this topic will yield some interesting conversation.

Cheers
Justin Johansson



A developer (or developers) designs a program/system/spec/whatever to 
exhibit certain behavior. A bug is behavior exhibited by that creation 
which was not intended or expected according to the underlying design.
(Design in this context includes the whole source of the program, not 
just architecture, "overall design" or something like that.)


"Unwanted" behavior is not a good definition: a behavior can be 
intended/expected even if it is unwanted or undesired (enhancement 
requests, behaviors beyond the control of the program, etc.).


--
Bruno Medeiros - Software Engineer


Re: [D typesystem] What is the type of null?

2010-10-29 Thread Bruno Medeiros

On 13/10/2010 17:27, retard wrote:

Tue, 12 Oct 2010 17:41:05 +0200, Simen kjaeraas wrote:


Justin Johansson  wrote:


The answer to the OP's question is simple: null's type is not
expressible in D.


That is a sad observation for a language that purports maturity beyond
the epoch of C/C++/Java et. al.


I'm curious - why does null need such a specific type?


It's much easier to write a specification, a compiler, and an automatic
theorem prover for a language with a sane type system. The types and
transitions are expressible with simple rules of logic. Now you need ad
hoc special cases. Nothing else.


It may be true that this would make it easier to write an automatic 
theorem prover, and maybe also a specification, but would it really 
make it easier to write a compiler for such a language?
And more importantly, even if it were easier, would the language 
actually be a better and more useful language? Better as in a 
general-purpose programming language.
I suspect not, and Justin's implication that D's inability to accurately 
express the type of null is somehow a severe shortcoming seems to me 
like wacky formal-methods fanboyism or some other similar craziness...



--
Bruno Medeiros - Software Engineer


Re: [nomenclature] systems language

2010-10-29 Thread Bruno Medeiros

On 14/10/2010 13:30, Justin Johansson wrote:

Touted often around here is the term "systems language".

May we please discuss a definition to be agreed upon
for the usage this term (at least in this community) and
also have some agreed upon examples of PLs that might also
be members of the "set of systems languages".
Given a general subjective term like this, one would have
to suspect that the D PL is not the only member of this set.

Cheers
Justin Johansson

PS. my apologies for posting a lame joke recently;
certainly it was not meant to be disparaging towards
the D PL and hopefully it was not taken this way.


It's those programming languages whose type systems can be used to move 
and navigate across water (but can sink if you rock it enough). Compare 
to other languages whose type systems merely floats on water, but don't 
move anywhere... (although some guarantee they will never sink no matter 
how much you rock it!)



--
Bruno Medeiros - Software Engineer


Re: What do people here use as an IDE?

2010-10-29 Thread Bruno Medeiros

On 13/10/2010 03:20, Eric Poggel wrote:

On 10/12/2010 10:11 PM, Michael Stover wrote:

Descent is a dead project, replaced by DDT which doesn't have a release.
Also, I'm running Linux at home and Mac at work, so VisualD won't do for
me. Poseidon is also Windows-only.


Descent is dead? The change log shows recent activity
(http://dsource.org/projects/descent/log/)


Descent, the IDE, is indeed abandoned, but one of its components, the 
DMD parser Java port (which resides in the descent.compiler plugin), is 
still used by DDT (although in maintenance mode only).


Most of that SVN activity is for Mmrnmhrm (now DDT) which was hosted in 
the same location as Descent up to 09/23/10. The remaining activity is 
for descent.compiler which is still hosted at the Descent repository.





--
Bruno Medeiros - Software Engineer


Re: What do people here use as an IDE?

2010-10-29 Thread Bruno Medeiros

On 13/10/2010 08:07, Peter Alexander wrote:

On 13/10/10 4:15 AM, so wrote:

Does Java come with a standard gui library? Yes.
Does C come with a standard gui library? No.

C didn't need a gui library to be successful, and didn't come with one.
On the other hand Java/C# have to have one, packed, and they do come
with (at least)one.

If your language has a "system programming" in its feature lists, these
kind of libraries have very low priority, let alone specific IDE.


C didn't need a GUI library because there was no competition with a GUI
library.

Like it or not, in this day and age, people expect GUI libraries and
IDEs. In fact, most programmers have no idea how to compile code without
an IDE. Moreover, most people think that the IDE and the language are
the *same thing* (evidenced by the number of people that tag their C++
theory questions as "visual studio" on stackoverflow.com).

I agree that solving the compiler bugs and language issues are top
priority, but after that, I'd say IDE and GUI library come next (doesn't
have to be a standard GUI library -- just any robust library).


I would say a modern IDE, together with other toolchain programs 
(debugger, build tools), is much more important than a GUI library. 
This is simply because they would be used by many more developers than 
those who would want to use a GUI library.



--
Bruno Medeiros - Software Engineer


Re: What do people here use as an IDE?

2010-10-29 Thread Bruno Medeiros

On 13/10/2010 04:15, so wrote:

I guess it is wording.
Hmm say...

Does Java come with a standard gui library? Yes.
Does C come with a standard gui library? No.

C didn't need a gui library to be successful, and didn't come with one.
On the other hand Java/C# have to have one, packed, and they do come
with (at least)one.

If your language has a "system programming" in its feature lists, these
kind of libraries have very low priority, let alone specific IDE.

On Wed, 13 Oct 2010 06:00:16 +0300, Jimmy Cao  wrote:


I'm not quite understanding your argument.
C and C++ do have *actual* IDE's for them, such as Visual Studio.




It's incorrect wording, plain and simple. D != DMD; no one was 
suggesting that DMD should come bundled with an IDE...



--
Bruno Medeiros - Software Engineer


Re: What do people here use as an IDE?

2010-10-29 Thread Bruno Medeiros

On 16/10/2010 10:50, "Jérôme M. Berger" wrote:

Russel Winder wrote:

On Wed, 2010-10-13 at 16:24 -0700, Jonathan M Davis wrote:
[ . . . ]

Proper code completion, correctly jumping to function definitions, and various
other features that IDEs generally do well tend to be quite poor in vim. It can
do many of them on some level, but for instance, while ctags does give you the
ability to jump to function declarations, it does quite poorly in the face of
identical variable names across files. There are a number of IDE features that I
would love to have and use but vim can't properly pull off. When I have a decent
IDE, I'm always torn on whether to use vim or the IDE. vim (well, gvim)
generally wins out, but sometimes the extra abilities of the IDE are just too
useful. What I'd really like is full-featured IDE with complete and completely
remappable vim bindings.


Bizarrely the single feature that fails for me in Eclipse, NetBeans and
IntelliJ IDEA that I find the single most problematic feature in my
programming life -- which means Emacs remains the one true editor -- is
formatting comments.  I seemingly cannot survive without the ability to
reformat the paragraphs of comment blocks to a given width.  Emacs
handles this trivially in all languages I use for the modes I have.  The
IDEs seem unable to provide the functionality.  Usually they end up
reformatting my entire file to some bizarre formatting that is not the
one set up for the project.  I appreciate that being able to trivially
create properly formatted comments is probably uniquely my problem
but . . .


Same here, no IDE I've seen is able to format code and comments as
well as (X)Emacs.

Jerome


Interesting. For anyone else who shares that opinion, what are the IDEs 
that you have seen? In particular, does this include JDT?



--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code

2010-11-01 Thread Bruno Medeiros

On 31/10/2010 05:35, BCS wrote:

Hello Bruno,


Which degree did 'Software engineers' take then?



You know, that's one thing that kinda irks me: Why is it called
'Software engineers' when I've never seen engineering taught in a CS
course (not to be confused with real "computer engineering" courses that
are a lot more like EE than CS).


What are you referring to when you say "called 'Software engineers'"? 
The people who write software, or the college degrees/programs? I didn't 
quite get it.



The most direct example of this I know
of is in "The Pragmatic Programmer": Item 18 is "estimate to avoid
surprises" and then goes on to describe how to do that. Well, if
programming were taught as an engineering discipline, that would be a
pointless (if not insulting) comment because what it is advocating is so
fundamental to engineering that it goes without saying.




What do you mean by "if programming were taught as an engineering discipline"?

--
Bruno Medeiros - Software Engineer


Re: Language features and reinterpret casts

2010-11-01 Thread Bruno Medeiros

On 21/09/2010 00:27, bearophile wrote:

klickverbot:

Are there any cases where (*cast(int*)&someFloat) does not fit the bill?


I am not a C lawyer, but I think that too is undefined in C (and maybe D too).

Bye,
bearophile


In general, it is indeed undefined behavior in C, and not only because 
an int can be bigger than a float (so you could be reading memory out 
of bounds): even when sizeof(int) == sizeof(float), dereferencing a 
float object through an int* runs afoul of C's aliasing rules, so the 
portable C idioms are memcpy or a union.

In D, however, I think such a reinterpret cast is legal.
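
For illustration, a minimal D sketch of the bit reinterpretation being
discussed (the union shows an alternative that avoids the pointer cast,
which is also the usual defensive idiom on the C side):

```d
import std.stdio;

void main()
{
    float someFloat = 1.0f;

    // Reinterpret the float's bit pattern as an int via pointer cast.
    int bits = *cast(int*) &someFloat;

    // Alternative: a union yields the same reinterpretation without
    // any pointer casting.
    union Pun { float f; int i; }
    Pun p;
    p.f = someFloat;

    // Both print 3f800000, the IEEE 754 encoding of 1.0f.
    writefln("%08x %08x", bits, p.i);
}
```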


--
Bruno Medeiros - Software Engineer


Re: Language features and reinterpret casts

2010-11-01 Thread Bruno Medeiros

On 21/09/2010 09:23, Simen kjaeraas wrote:

bearophile  wrote:


klickverbot:

Are there any cases where (*cast(int*)&someFloat) does not fit the bill?


I am not a C lawyer, but I think that too is undefined in C (and maybe
D too).


From your own link
(http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=109033):


"I don't see any way to make conversions between pointers and ints
implementation defined, and make dereferencing a pointer coming from
some int anything but undefined behavior."



That's not the same thing. Walter was referring to casting an int to a 
pointer, but the above is casting a _float pointer_ to an _int pointer_.


--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-11-01 Thread Bruno Medeiros

On 29/10/2010 02:32, Robert Jacques wrote:

On Thu, 28 Oct 2010 10:48:34 -0400, Bruno Medeiros
 wrote:

On 26/10/2010 04:47, Robert Jacques wrote:

On Mon, 25 Oct 2010 09:44:14 -0400, Bruno Medeiros
 wrote:


On 23/09/2010 23:39, Robert Jacques wrote:

On Thu, 23 Sep 2010 16:35:23 -0400, Tomek Sowiński 
wrote:


On topic: this means a pure function can take a reference to data
that
can be mutated by
someone else. So we're giving up on the "can parallelize with no
dataraces" guarantee on
pure functions?



In short, No. In long; the proposal is for pure functions become
broken
up into two groups (weak and strong) based on their function
signatures.
This division is internal to the compiler, and isn't expressed in the
language in any way. Strongly-pure functions provide all the
guarantees
that pure does today and can be automatically parallelized or cached
without consequence. Weakly-pure functions, don't provide either of
these guarantees, but allow a much larger number of functions to be
strongly-pure. In order to guarantee a function is strongly pure, one
would have to declare all its inputs immutable or use an appropriate
template constraint.


I think we need to be more realistic with what kinds of optimizations
we could expect from a D compiler and pure functions.
Caching might be done, but only in a temporary sense (caching under a
limited execution scope). I doubt we would ever have something like
memoization, which would incur memory costs (potentially quite big
ones), and so the compiler almost certainly would not be able to know
(without additional metadata/annotations or compiler options) if that
trade-off is acceptable.

Similarly for parallelism, how would the compiler know that it's ok to
spawn 10 or 100 new threads to parallelize the execution of some loop?
The consequences for program and whole-machine scheduling would not be
trivial and easy to understand. For this to happen, amongst other
things the compiler and OS would need to ensure that the spawned
threads would not starve the rest of the threads of that program.
I suspect all these considerations might be very difficult to
guarantee on a non-VM environment.



Ahem, it's trivial for the compiler to know if it's okay to spawn 10 or
100 _tasks_. Tasks, as opposed to threads or even thread pools, are
extremely cheap (think on the order of function call overhead).


What are these tasks you mention? I've never heard of them.



The programming language Cilk popularized the concept of parallelization
through many small tasks combined with a work stealing runtime. Futures
are essentially the same concept, but because futures were generally
implemented with OS-threads, a thread pool or fibers/coroutines, that
term is generally avoided. Like message passing, tasks are often
implemented in libraries with Intel's threading building blocks probably
being the most famous, though both Microsoft's Task Parallel Library and
Apple's Grand Central are gaining mind-share. David Simcha currently has
a task library in review for inclusion to phobos. Basically, the point
of tasks is to provide parallelization with extremely low overhead (on
average a Cilk spawn is less than 4 function calls). That way, instead
of having a few coarse grain threads which neither scale nor load
balance well, you're encouraged to use tasks everywhere and therefore
reap the benefits of a balanced N-way scalable system.


Hum, I see what you mean now, but tasks only help with the *creation 
overhead* of otherwise spawning lots of OS threads; they don't solve 
the main problems I mentioned.
First, it may be fine to spawn 100 tasks, but there is still the issue 
of deciding how many OS threads the tasks will run on! Obviously, you 
won't run them in just one OS thread, otherwise you won't get any 
parallelism. Ideally, considering your program only, it would have as 
many OS threads as there are cores. But here there is still the same 
issue of whether it's ok for your program to use up all the cores in 
your machine. The compiler doesn't know that. Could it be enough to 
have a global compiler option to specify that? I don't think so: what 
if you want some code of your program to use as many OS threads as 
possible, but not some other code?
Second, and perhaps more importantly, the very same issue occurs in the 
scope of your program alone. So, even if you use all OS threads, and 
don't care about other programs, spawning 100 tasks for some loop might 
take time away from other, more important tasks of your program. The 
compiler/task-scheduler/whatever would not automatically know what is 
acceptable and what is not. (The only exception being if your program 
was logically single-threaded.)
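
For reference, a sketch in the style of David Simcha's task library
mentioned above (API names assumed from that proposal; the point being
that tasks map onto a bounded worker pool rather than one OS thread
each):

```d
import std.parallelism;

// Parallel recursive Fibonacci, task-style: many cheap tasks are
// created, but they execute on a fixed pool of worker threads
// (by default roughly one per core), not one OS thread per task.
int fib(int n)
{
    if (n < 2)
        return n;
    auto t = task!fib(n - 1); // cheap: on the order of a function call
    taskPool.put(t);          // enqueue for the worker pool
    immutable y = fib(n - 2); // keep working on this thread meanwhile
    return t.yieldForce + y;  // join: wait for (or help run) the task
}
```

Note that the pool's size is something the programmer configures, which
speaks to the point above: deciding how many OS threads to dedicate
still rests with the programmer, not the compiler.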


--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code (Software Engineering degrees)

2010-11-01 Thread Bruno Medeiros

On 25/10/2010 23:09, Diego Cano Lagneaux wrote:

En Mon, 25 Oct 2010 13:22:02 +0200, Bruno Medeiros
 escribió:


On 22/10/2010 15:56, Diego Cano Lagneaux wrote:

Well, you think wrongly. :)
If you look at the top universities worldwide, the majority of them
have only one "computer programming" undergraduate degree. Sometimes
it is called "Computer Science" (typical in the US), other times it is
called "Computer Engineering", "Informatics Engineering", "Software
Engineering", "Informatics Science" or something like that (typical in
Europe), but despite the different names they are essentially the
same: courses designed to _teach and educate future software
engineers_.


I must nuance: as an European* "Informatics (and Applied Maths**)
engineer", I can say this degree is not 'Software engineer' but indeed
'whole computer engineer' as we studied both software and hardware, to
the point of building a complete (simulated) processor.
Furthermore, I can't recall they told us about profiling tools, but it
was 10 years ago and I skiped a few classes, so it means nothing.



Which degree did 'Software engineers' take then?


Well, depends of what you mean by "Software engineer". They could take a
3 years 'informatics' degree, which is not an engineering degree (even
if it's called 'technical engineering' in Spain) but is perfect for
coders, or take the full 'informatics engineering' and just specialize
later (or forget everything they don't need), for a more general and
advanced degree.


Yeah, I meant the longer, more comprehensive degree (which like you said 
is usually 5 years long in continental Europe).


But yeah, you are right, these courses are not just for software 
engineers, but also for other related areas (computer/hardware 
engineering, IT/systems administration, MIS). That was the case at my 
university: one would specialize in one of these areas in the last 2 
years (of the 5-year degree program).




In most Europe, Engineering is always a 5 years (masters) degree,
oriented to big project developers who'll (supposedly) lead teams. I've
heard it's different in the Anglosaxon systems.


Whoa! :o
Shit, I'm going to go on a big tangent here, but I'm very surprised to 
again hear that notion that the 5 year CS/Engineering degrees in Europe 
are for "big project developers who'll (supposedly) lead teams.".
In my university (which, btw, is widely regarded as the best 
technical/engineering school in Portugal), that idea was often mentioned 
by some of the "senior" students in my degree. The details of their 
opinions varied, but generally some of them seemed to think that our 
graduates would soon become project managers and/or software architects 
in the workforce, whereas most of the programming and grunt work would 
be left to the "trolhas": the lowly developers who took the subprime 3 
year "practical" courses in other universities/polytechnics. ("Trolha" 
is Portuguese slang for a bricklayer, or also any guy who does 
construction work... see the metaphor here?)


Obviously I found this whole idea to be complete nonsense. Not the part 
about the CS/E graduates from our degree being much better (on average) 
than the graduates from those 3- or 4-year CS/E courses, but rather the 
stupid notion that it would be perfectly fine (and ideal) for a 
software team to have one or two good software engineers as project 
leaders/managers/architects, and the rest be "code monkeys"... These 
senior students were completely blind to the importance of having the 
majority of your developers be good, smart developers (even if junior ones).
One or two of these seniors even went so far as to comment that 
programming itself was a lowly task, for "trolhas" only... we the 
Engineers might program in the first 2-3 years after entering the 
workplace, but we would gradually move to an architecture/design role 
in the enterprise and soon would not need to program anymore... [end of 
quote, and you could feel in these comments how much this guy really 
disliked programming...]
Man, my eyes went cartoonishly wide open when I read this. How 
incredibly deluded this guy was... :S


But the whole surprising thing is, I wasn't expecting this kind of 
attitude in other countries, I thought this was somewhat isolated in 
Portugal... a mix of personal delusion (derived from the fact that 
actually these guys sucked at programming, or anything else useful), 
combined with a still lingering non-meritocratic class arrogance in 
Portuguese society. Nobility may be long gone, but there are a lot of 
people in Portugal who like to put themselves about other people, and 
having a degree (especially with title-conferring degrees, which 
engineering degrees are btw) is a very common excuse for people trying 
to make themselves look superior, (even if their degree was crappy, or 
they sucked at it).



--
Bruno Medeiros - Software Engineer


Re: Streaming library (NG Threading)

2010-11-01 Thread Bruno Medeiros

On 29/10/2010 18:08, Denis Koroskin wrote:

On Fri, 29 Oct 2010 16:32:24 +0400, Bruno Medeiros
 wrote:


On 29/10/2010 12:50, Denis Koroskin wrote:

On Fri, 29 Oct 2010 15:40:35 +0400, Bruno Medeiros
 wrote:


On 13/10/2010 18:48, Daniel Gibson wrote:

Andrei Alexandrescu schrieb:

On 10/13/10 11:16 CDT, Denis Koroskin wrote:

P.S. For threads this deep it's better fork a new one, especially
when
changing the subject.


I thought I did by changing the title...


Andrei


At least on my Thunderbird/Icedove 2.0.0.24 it's still in the old
Thread.


Same here on my Thunderbird 3.0. Is seems TB cares more about the
"References:" field in NNTP message to determine the parent. In fact,
with version 3 of TB, it seems that's all it considers... which means
that NG messages with the same title as the parent will not be put in
the same thread as the parent if they don't have the references field.
That sounds like the right approach, however there are some problems
in practice because some clients never put the references field
(particularly Webnews I think), so all those messages show up in my TB
as new threads. :/




Nope, most of the responses through WebNews have correct References in
place.


All responses that appear as new threads in my TB (ie, threads whose
title starts with "Re: ") and for which I have looked at the message
source, have user agent:
User-Agent: Web-News v.1.6.3 (by Terence Yim)
and no references field. These messages are common with some posters,
like berophile, Sean Kelly, Kagamin,etc..
But some Web-News messages do have a references field though, so it's
not all Web-News messages that are missing it.



That's strange because here is what I get for a typical WebNews message:

Path: digitalmars.com!not-for-mail
From: tls 
Newsgroups: digitalmars.D
Subject: Re: Lints, Condate and bugs
Date: Fri, 29 Oct 2010 15:54:12 +0400
Organization: Digital Mars
Lines: 48
Message-ID: 
References:  


MIME-Version: 1.0
Content-Type: text/plain
X-Trace: digitalmars.com 1288353252 9827 65.204.18.192 (29 Oct 2010
11:54:12 GMT)
X-Complaints-To: use...@digitalmars.com
NNTP-Posting-Date: Fri, 29 Oct 2010 11:54:12 + (UTC)
User-Agent: Web-News v.1.6.3 (by Terence Yim)
Xref: digitalmars.com digitalmars.D:120649


Well, here's what I get for such a typical unparented message:

Path: digitalmars.com!not-for-mail
From: bearophile 
Newsgroups: digitalmars.D
Subject: Re: The Many Faces of D - slides
Date: Sun, 03 Oct 2010 15:44:24 -0400
Organization: Digital Mars
Lines: 14
Message-ID: 
Mime-Version: 1.0
Content-Type: text/plain
X-Trace: digitalmars.com 1286135064 75071 65.204.18.192 (3 Oct 2010 
19:44:24 GMT)

X-Complaints-To: use...@digitalmars.com
NNTP-Posting-Date: Sun, 3 Oct 2010 19:44:24 + (UTC)
User-Agent: Web-News v.1.6.3 (by Terence Yim)
Xref: digitalmars.com digitalmars.D:118239

> Page 30: that little concurrent test program gives me an error:



--
Bruno Medeiros - Software Engineer


Re: Ruling out arbitrary cost copy construction?

2010-11-01 Thread Bruno Medeiros

On 31/10/2010 03:56, Andrei Alexandrescu wrote:

On 10/30/2010 09:40 PM, Michel Fortin wrote:

On 2010-10-30 20:49:38 -0400, Andrei Alexandrescu
 said:


On 10/30/10 2:24 CDT, Don wrote:

At the moment, I think it's impossible.
Has anyone succesfully implemented refcounting in D? As long as bug
3516
(Destructor not called on temporaries) remains open, it doesn't seem to
be possible.
Is that the only blocker, or are there others?


I managed to define and use RefCounted in Phobos. File also uses
hand-made reference counting. I think RefCounted is a pretty good
abstraction (unless it hits the bug you mentioned.)


I like the idea of RefCounted as a way to automatically make things
reference counted.


Unfortunately it's only a semi-automated mechanism.


But like File and many similar ref-counted structs, it has this race
condition (bug 4624) when stored inside the GC heap. Currently, most of
Phobos's ref-counted structs are race-free only when they reside on the
stack or if your program has only one thread (because the GC doesn't
spawn threads if I'm correct).

It's a little sad that the language doesn't prevent races in destructors
(bug 4621).


I hope we're able to solve these implementation issues that can be seen
as independent from the decision at hand.

Walter and I discussed the matter again today and we're on the brink of
deciding that cheap copy construction is to be assumed. This simplifies
the language and the library a great deal, and makes it perfectly good
for 95% of the cases. For a minority of types, code would need to go
through extra hoops (e.g. COW, refcounting) to be compliant.

I'm looking for more feedback from the larger D community. This is a
very important decision that marks one of the largest departures from
the C++ style. Taking the wrong turn here could alienate many
programmers coming from C++.

So, everybody - this is really the time to speak up or forever be silent.


Andrei


I would also go for "2. Constant-cost copy construction", but I can't 
really make a case for it. I can only state that it seems to me that the 
benefits in library simplification (both in implementation and API) 
would be far more valuable than the drawbacks ("Makes value types 
difficult to define").


It should be considered that those 5% types (large value types) will not 
strictly need refcounting+COW support to be usable in 
containers and algorithms: just store pointers-to-the-value-type in the 
container, instead of using the value type directly. (Basically, use them 
as reference types). So it's not the end of the world for some value 
type if it doesn't implement refcounting+COW.


--
Bruno Medeiros - Software Engineer


Re: Ruling out arbitrary cost copy construction?

2010-11-01 Thread Bruno Medeiros

On 31/10/2010 16:42, Andrei Alexandrescu wrote:

On 10/31/10 8:04 AM, Michel Fortin wrote:

On 2010-10-30 23:56:24 -0400, Andrei Alexandrescu
 said:


Walter and I discussed the matter again today and we're on the brink
of deciding that cheap copy construction is to be assumed. This
simplifies the language and the library a great deal, and makes it
perfectly good for 95% of the cases. For a minority of types, code
would need to go through extra hoops (e.g. COW, refcounting) to be
compliant.


A simple question: can a reference counter work with const and immutable?

const(S) func(ref const(S) s) {
return s; // this should increment the reference counter, but can we
bypass const?
}

Bypassing const/immutable could work, but if your data is immutable and
resides in read-only memory you'll get a crash.


There are several solutions possible, some that require the compiler
knowing about the idiom, and some relying on trusted code. One in the
latter category is to create immutable objects with an unattainable
reference count (e.g. size_t.max) and then incrementing the reference
count only if it's not equal to that value. That adds one more test for
code that copies const object, but I think it's acceptable.

Andrei


Ehh? So here we would have logical const instead of strict const? Feels 
a bit like this could be the opening of Pandora's box...


--
Bruno Medeiros - Software Engineer


Re: What do people here use as an IDE?

2010-11-01 Thread Bruno Medeiros

On 29/10/2010 21:29, dolive wrote:

Bruno Medeiros wrote:


On 13/10/2010 03:20, Eric Poggel wrote:

On 10/12/2010 10:11 PM, Michael Stover wrote:

Descent is a dead project, replaced by DDT which doesn't have a release.
Also, I'm running Linux at home and Mac at work, so VisualD won't do for
me. Poseidon is also Windows-only.


Descent is dead? The change log shows recent activity
(http://dsource.org/projects/descent/log/)


Descent, the IDE is indeed abandoned, but one of its components, the DMD
parser Java port, which resides in the descent.compiler plugin, is still
used by DDT. (although in maintenance mode only)

Most of that SVN activity is for Mmrnmhrm (now DDT) which was hosted in
the same location as Descent up to 09/23/10. The remaining activity is
for descent.compiler which is still hosted at the Descent repository.


Bruno Medeiros - Software Engineer

Why not continue to maintain Descent instead of developing DDT from scratch? 
Adding DDT's new ideas to Descent would be better and conserve resources.

thanks

dolive



http://code.google.com/a/eclipselabs.org/p/ddt/wiki/GeneralFAQ#Why_not_develop_Descent_instead_of_Mmrnmrhm/DDT?


--
Bruno Medeiros - Software Engineer


Re: Proposal: Relax rules for 'pure'

2010-11-09 Thread Bruno Medeiros

On 02/11/2010 03:26, Robert Jacques wrote:

On Mon, 01 Nov 2010 10:24:43 -0400, Bruno Medeiros
 wrote:

On 29/10/2010 02:32, Robert Jacques wrote:

[snip]

The programming language Cilk popularized the concept of parallelization
through many small tasks combined with a work stealing runtime. Futures
are essentially the same concept, but because futures were generally
implemented with OS-threads, a thread pool or fibers/coroutines, that
term is generally avoided. Like message passing, tasks are often
implemented in libraries with Intel's threading building blocks probably
being the most famous, though both Microsoft's Task Parallel Library and
Apple's Grand Central are gaining mind-share. David Simcha currently has
a task library in review for inclusion to phobos. Basically, the point
of tasks is to provide parallelization with extremely low overhead (on
average a Cilk spawn is less than 4 function calls). That way, instead
of having a few coarse grain threads which neither scale nor load
balance well, you're encouraged to use tasks everywhere and therefore
reap the benefits of a balanced N-way scalable system.


Hum, I see what you mean now, but tasks only help with the *creation
overhead* of otherwise spawning lots of OS threads, they don't solve
the main problems I mentioned.
First, it may be fine to spawn 100 tasks, but there is still the issue
of deciding how many OS threads the tasks will run in! Obviously, you
won't run them in just one OS thread, otherwise you won't get any
parallelism. Considering your program alone, it would ideally have as
many OS threads as there are cores. But here there is still the same
issue of whether its ok for you program to use up all the cores in
your machine. The compiler doesn't know that. Could it be enough to
have a global compiler option to specify that? I don't think so: What
if you want some code of your program to use as much OS-threads as
possible, but not some other code?
Second, and perhaps more importantly, the very same issue occurs in
the scope of your program alone. So, even if you use all OS threads,
and don't care about other programs, spawning 100 tasks for some loop
might take time away from other more important tasks of your program.
The compiler/task-scheduler/whatever would not automatically know what
is acceptable and what is not. (the only exception being if your
program was logically single-threaded)



Controlling the task runtime thread-pool size is trivial. Indeed, you'll
often want to reduce the number of daemon threads by the number of
active program threads. And if you need fine grain control over pool
sizes, you can always create separate pools and assign tasks to them. I
think a reasonable default would be (# of cores - 2) daemons with
automatic decreases/increases with every spawn/termination. But, all
those settings should be controllable at runtime.



I didn't say controlling pools/task-count/threads-count/priorities 
wasn't easy or trivial. I just claimed it should not be done automatically 
by the compiler, but rather "manually" in the code, and with fine 
grained control.


--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code

2010-11-09 Thread Bruno Medeiros

On 02/11/2010 04:25, BCS wrote:

Hello Bruno,


On 31/10/2010 05:35, BCS wrote:


Hello Bruno,


Which degree did 'Software engineers' take then?


You know, that's one thing that kinda irks me: Why is it called
'Software engineers' when I've never seen engineering taught in a CS
course (not to be confused with real "computer engineering" courses
that are a lot more like EE than CS).


What are you referring to when you say "called 'Software engineers'" ?
The people who write software, or the college degrees/programs? I
didn't quite get it.



I've never seen the details of a software engineering program so I can't
say much on that, but my current job title is software engineer and I
know *I'm* not doing engineering.



I don't think you understood my question. You said "Why is it called 
'Software engineers'", and I was asking what you meant by "it". Were you 
referring to the people, or to the degrees?



The most direct example of this I know
of is in "The Pragmatic Programmer": Item 18 is "estimate to avoid
surprises" and then goes on to describe how to do that. Well, if
programming were taught as an engineering discipline, that would be a
pointless (if not insulting) comment because what it is advocating is
so
fundamental to engineering that it goes without saying.


What do you mean "if programming were taught as an engineering
discipline" ?


I'm saying that programming is *not* taught or practiced as an
engineering discipline (Ok, maybe the DOD, DOE and NASA do).
Furthermore, I'm presenting the fact that "item 18" needs stating as
evidence supported to support my assertion and supporting that with the
assertion that any practitioner of an engineering discipline wouldn't
need to be told about "item 18".



Ehh? "needs stating as evidence supported to support my assertion and 
supporting that with the assertion that" ??



To be totally clear, I'm not saying that software development should be
done as an engineering process, but that the standard practices (and job
title) of today shouldn't claim to be engineering.



What is engineering to you then?


--
Bruno Medeiros - Software Engineer


Re: Ruling out arbitrary cost copy construction?

2010-11-09 Thread Bruno Medeiros

On 02/11/2010 15:16, Steven Schveighoffer wrote:

On Fri, 29 Oct 2010 09:06:24 -0400, Bruno Medeiros
 wrote:


On 06/10/2010 17:34, Andrei Alexandrescu wrote:



or similar. However, a sealed range does not offer references, so trying
e.g.

swap(r1.front, r2.front);

will not work. This is a problem.


Why doesn't a sealed range offer references? Is it to prevent
modifying the elements being iterated?
(I searched google and TDPL but couldn't find any info on sealed ranges)


Because a sealed range then has complete power over memory allocation of
its elements (really, I think sealed ranges is a misnomer, it's really
ranges on sealed containers).



Ah, my mistake. I thought sealed ranges were simply ranges that did not 
allow modifying the underlying container (a "logical const" range), and 
similarly for sealed containers (= an unmodifiable container).


I see why escaping references is not allowed then.

--
Bruno Medeiros - Software Engineer


Re: [nomenclature] systems language [OT] [NSFW]

2010-11-09 Thread Bruno Medeiros

On 30/10/2010 02:13, Walter Bright wrote:

div0 wrote:

There's nothing special about a systems language; it's just they have
explicit facilities that make certain low level functionality easier
to implement. You could implement an OS in BASIC using PEEK/POKE if
you mad enough.


I suppose it's like the difference between porn and art. It's impossible
to write a bureaucratic rule to distinguish them, but it's easy to tell
the difference just by looking at the two. "I know it when I see it!"


In the vast majority of cases, yes, but is it *always* easy to tell the 
difference? Because if a bureaucratic rule were to be made, it would have to 
work for *all* cases, otherwise it would not be very useful.


Consider Chloe Sevigny's blowjob scene in the end of The Brown Bunny (an 
art house film). Art or porno?
Or the photograph "Portrait of My British Wife" by Panayiotis Lamprou: 
http://www.guardian.co.uk/artanddesign/2010/sep/17/panayiotis-lamprou-portrait-wife-photography 
which will be displayed in the National Portrait Gallery. Art or porno?


-_-'

--
Bruno Medeiros - Software Engineer


Re: [nomenclature] systems language

2010-11-09 Thread Bruno Medeiros

On 29/10/2010 21:30, retard wrote:

Fri, 29 Oct 2010 20:54:03 +0100, Bruno Medeiros wrote:


On 14/10/2010 13:30, Justin Johansson wrote:

Touted often around here is the term "systems language".

May we please discuss a definition to be agreed upon for the usage this
term (at least in this community) and also have some agreed upon
examples of PLs that might also be members of the "set of systems
languages". Given a general subjective term like this, one would have
to suspect that the D PL is not the only member of this set.

Cheers
Justin Johansson

PS. my apologies for posting a lame joke recently; certainly it was not
meant to be disparaging towards the D PL and hopefully it was not taken
this way.


It's those programming languages whose type systems can be used to move
and navigate across water (but can sink if you rock it enough).


It's probably very hard to find an accurate definition for this kind of
term. The same can be said about terms such as 'functional language'. Many
'pragmatic' software engineering terms are based on emotions, broken
mental models, inaccurate or purposefully wrong information. In my
opinion these are all subtypes of a thing called 'marketing bullshit'.



Compare
to other languages whose type systems merely floats on water, but don't
move anywhere... (although some guarantee they will never sink no matter
how much you rock it!)


You can easily create a language with guarantees about safety: no
segfaults, no index out of bounds errors, no overflows etc. Some of these
languages even guarantee termination. However, they're not Turing
complete in that case, which reduces their usefulness. Another thing is,
these guarantees can be expensive. However, the trend has been towards
higher level languages. One reason is Moore's law, you have achieved the
same results with a N times slower implementation using the N times
faster hardware.


Why this serious reply? Perhaps I fell victim to an overly accurate 
analogy, but my previous post was a joke/satire.


--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code (Software Engineering degrees)

2010-11-09 Thread Bruno Medeiros

On 01/11/2010 22:58, Diego Cano Lagneaux wrote:

In most Europe, Engineering is always a 5 years (masters) degree,
oriented to big project developers who'll (supposedly) lead teams. I've
heard it's different in the Anglosaxon systems.


Whoa! :o
Shit, I'm going to go on a big tangent here, but I'm very surprised to
again hear that notion that the 5 year CS/Engineering degrees in
Europe are for "big project developers who'll (supposedly) lead teams.".
In my university (which, btw, is widely regarded as the best
technical/engineering school in Portugal), that idea was often
mentioned by some of the "senior" students in my degree. The details
of their opinions varied, but generally some of them seemed to think
that our graduates would soon become project managers and/or software
architects in the workforce, whereas most of the programming and grunt
work would be left to the "trolhas": the lowly developers who took the
subprime 3 year "practical" courses in other
universities/polytechnics. ("Trolha" is Portuguese slang for a
bricklayer, or also any guy who does construction work... see the
metaphor here?)

Obviously I found this whole idea to be complete nonsense. Not that I
didn't agree that the CS/E graduates from our degree were much better
(on average) than the graduates from those 3 or 4 year CS/E courses,
but rather the stupid notion that it would be perfectly fine (and
ideal) for a software team to have one or two good software engineers
as project leaders/managers/architects, and the rest to be "code
monkeys"... These seniors students were completely blind to the
importance of having the majority of your developers be good, smart
developers (even if junior ones).
One or two of such seniors even went so far as to comment that
programming itself was a lowly task, for "trolhas" only... we the
Engineers might program in the first 2-3 years after entering the
workplace, but we would gradually move to an architecture/design role in
the enterprise and soon would not need to program anymore... [end of quote,
and you could feel in these comments how much this guy really disliked
programming... ]
Man, my eyes went cartoonishly wide open when I read this. How
incredibly deluded this guy was... :S

But the whole surprising thing is, I wasn't expecting this kind of
attitude in other countries, I thought this was somewhat isolated in
Portugal... a mix of personal delusion (derived from the fact that
actually these guys sucked at programming, or anything else useful),
combined with a still lingering non-meritocratic class arrogance in
Portuguese society. Nobility may be long gone, but there are a lot of
people in Portugal who like to put themselves above other people, and
having a degree (especially with title-conferring degrees, which
engineering degrees are btw) is a very common excuse for people trying
to make themselves look superior, (even if their degree was crappy, or
they sucked at it).




Well, I am not sure you got what I meant. What I said is not that
engineers will never code or won't have to after a couple years. The
idea is more that engineers will be able to have people with different
skills to manage, or to work closely with, so they'll have to know many
fields to understand the whole thing. And I was not talking specifically
about computers, but about all kinds of engineering. Engineering is
about understanding and developing projects as a whole, which doesn't
exclude working also on the details.
Of course, many engineers may end doing different things, which is
another advantage of the generalist approach. I'm actually doing
websites now!


Yeah, I wasn't accusing you of sharing that viewpoint, at least not in 
the same way that those students I mentioned in my post.


But you do agree that in the case of Software Engineers at least, you 
will lead a big project only after you have several years of experience 
(more or less depending on how big the project is), and even so, only if 
you are skilled enough? But more importantly, you don't need to lead 
anyone to be a Software Engineer, even a good one.

In other words, it's not very analogous to say, civil engineering.


--
Bruno Medeiros - Software Engineer


Re: New slides about Go

2010-11-11 Thread Bruno Medeiros

On 16/10/2010 00:15, Walter Bright wrote:

Andrei Alexandrescu wrote:

On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:

On 10/15/10 16:25 CDT, Nick Sabalausky wrote:

I just hope they get serious enough about functional programming to
gain
some monads to go along with their "goroutines".


They should call them "gonads".

Andrei


Wait, that was your actual joke. Sigh...


I see we should invite JokeExplainer to the forums!


I didn't get it... :/
(Nick's joke that is)

--
Bruno Medeiros - Software Engineer


Re: duck!

2010-11-11 Thread Bruno Medeiros

On 16/10/2010 21:30, Michel Fortin wrote:

On 2010-10-16 16:05:52 -0400, Andrei Alexandrescu
 said:


On 10/16/2010 02:54 PM, kenji hara wrote:

Adapter-Pattern! I'd have forgotten the name.
It is NOT equals to duck-typing.


It's a subset of duck typing. I don't think calling a function that
supports a limited form of duck typing "duck" is a lie.


Not a lie, just a word with a deceptive meaning that'll lead people to
believe something else than the truth. Some cynically call that
marketing speech.



I have to agree with Kenji and Michael here: this is not duck typing. 
Duck typing is like example "6. Dynamic object type & dynamic signature" 
in Kenji's last post.


The problem is not strictly the name of the adapter function, but 
that this whole thread carried the implication that you, Andrei, and 
Walter, were willing to market D as "having duck typing", or 
"supporting duck typing", without any qualification on that support 
("subset of", "limited form of", or even just "form of" like the Go 
slides). So I agree with Michael, this would be inaccurate at best, and 
deceitful at worst.


We could argue nomenclature, but there is a much more simple litmus 
test: any non-D developer who used proper duck typing before, and then 
heard "D has duck typing" from some official sources, and then tried D 
out and found out how the actual feature worked, would almost certainly 
feel deceived or disappointed.



--
Bruno Medeiros - Software Engineer


Re: duck!

2010-11-11 Thread Bruno Medeiros

On 16/10/2010 19:16, Walter Bright wrote:


Being the upstart language, D needs now and then something a little more
attention-getting than generic terms. The "duck" feature is important
for two reasons:

1. duck typing is all the rage now



What??... :o
I think this is very much a wrong perception! Sure, there has been a lot 
of articles, publicity and buzz surrounding duck typing in the past few 
years, but that doesn't mean it's "all the rage". More concretely, it 
doesn't mean that duck typing is popular, either now, or heading that 
way in the future. I think dynamic languages are somewhat of a niche 
(even if a growing one), but not really heading to be mainstream in 
medium/large scale projects.
Rather I think that the duck typing fanboys are just very vocal about 
it. (Kinda like the Tea Party movement... )




BTW, just as a clarifying side note, duck typing is nothing new in terms 
of language features. Duck typing is just dynamic typing combined 
with object orientation. What is new about duck typing is not so much in 
the language, but rather the development philosophy that states:
 * dynamic typing with OO is an equally valid approach (if not better) 
than traditional static typing OO, for thinking about objects and their 
behavior,
 * documentation, clear code, and testing are good enough to ensure 
correct usage (especially testing, in some circles).


Note: I'm not implying I agree with the above, I'm just paraphrasing.

--
Bruno Medeiros - Software Engineer


Re: duck!

2010-11-11 Thread Bruno Medeiros

On 17/10/2010 20:11, Andrei Alexandrescu wrote:

On 10/17/2010 01:09 PM, Jeff Nowakowski wrote:

On 10/16/2010 04:05 PM, Andrei Alexandrescu wrote:


It's a subset of duck typing. I don't think calling a function that
supports a limited form of duck typing "duck" is a lie.


I'm sure if it was on a Go slide you would.


Probably not in as strong terms, but if you want to point out that I'm
biased... what can I do? I'm a simple man.

Andrei


When I first heard you say you were biased (in the Google Talk), I 
thought you were being facetious, or just exaggerating.


I'm not so sure anymore, and I hope that is not the case. Because, as 
I'm sure you must realize, being biased for D will only result in an 
outcome of mild to severe annoyance and loss of credibility from:
* people biased for languages which are perceived to be competitors to D 
(like Go).

* people who are (or strive to be) unbiased.

And given who you are (one of the designers of D), this outcome will 
apply not just to yourself, but D as well, which obviously is not a good 
thing.


--
Bruno Medeiros - Software Engineer


Re: duck!

2010-11-11 Thread Bruno Medeiros

On 11/11/2010 14:19, Bruno Medeiros wrote:

way in the future. I think dynamic languages are somewhat of a niche
(even if a growing one), but not really heading to be mainstream in
medium/large scale projects.


Sorry, I actually meant "I think dynamic _typing_ is somewhat of a 
niche" rather than the above. Yes, the two are closely related, but they 
are not the same.
For example, I wouldn't be surprised if in the future certain 
dynamically-typed languages gain some static-typing capabilities. (like 
the inverse is happening)


--
Bruno Medeiros - Software Engineer


Re: New slides about Go

2010-11-12 Thread Bruno Medeiros

On 11/11/2010 12:10, Justin Johansson wrote:

On 11/11/10 22:56, Bruno Medeiros wrote:

On 16/10/2010 00:15, Walter Bright wrote:

Andrei Alexandrescu wrote:

On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:

On 10/15/10 16:25 CDT, Nick Sabalausky wrote:

I just hope they get serious enough about functional programming to
gain
some monads to go along with their "goroutines".


They should call them "gonads".

Andrei


Wait, that was your actual joke. Sigh...


I see we should invite JokeExplainer to the forums!


I didn't get it... :/
(Nick's joke that is)



Hi Bruno,

It is an English language word play on sound-alike words.

Google on: "define: gonads"

I think Nick was suggesting that someone/something gets some "balls"
though "ovaries" might not be out of the question also. :-)

Trusting this explains well in your native language.

Regards,
Justin


So Nick already had "gonads" in mind on that post, is that the case?

--
Bruno Medeiros - Software Engineer


Re: The Next Big Language [OT]

2010-11-17 Thread Bruno Medeiros

On 18/10/2010 19:45, Steven Schveighoffer wrote:

On Mon, 18 Oct 2010 14:36:57 -0400, Andrei Alexandrescu
 wrote:


...bury the hatch and...


Sorry, I can't let this one pass... bury the *hatchet* :)

This isn't Lost.

-Steve


LOOOL


Oh man, I miss that series, even though it was going downhill..

--
Bruno Medeiros - Software Engineer


Re: blog: Overlooked Essentials for Optimizing Code (Software

2010-11-17 Thread Bruno Medeiros

On 11/11/2010 11:50, lurker wrote:

ruben niemann Wrote:


Diego Cano Lagneaux Wrote:


Well, I think a simple look at the real world is enough to agree that you
need several years of experience and good skills. Moreover, my personal
experience is that it's easier to get a job (and therefore the much needed
working experience) when you have a 3-year degree than a 5-year one, at
least in Spain: I've been told at many job interviews that I was
'overqualified' (I didn't care about that, just wanted to work, but they
did)


Same happened to me. I've MSc in computer engineering from a technical 
university. I began my PhD studies (pattern recognition and computer vision), 
but put those on hold after the first year because it seemed there isn't much 
non-academic work on that field and because of other more urgent issues. Four 
years after getting my MSc I'm still writing user interface html / css / 
javascript / php in a small enterprise. Hoping to see D or some strongly typed 
language in use soon. I'm one of the techies running the infrastructure, I 
should have studied marketing / management if I wanted to go up in the 
organization and earn more.


It's usually your own fault if you don't get promotions. My career started with 
WAP/XHTML/CSS, J2EE, Tapestry, Struts, then Stripes, Spring, Hibernate, jQuery, and 
few others. Due to my lack of small talk social skills, I was frist moved from client 
interface and trendy things to the backend coding and testing, later began doing 
sysadmin work at the same company. My working area is in the basement floor near a 
tightly locked and cooled hall full of servers. It's pretty cold here, I rarely see 
people (too lazy to climb upstairs to fetch a cup of coffee so I brought my own 
espresso coffee maker here) and when I do, they're angry because some  
doesn't work again.


So "lurker" is actually also your job description? :P

--
Bruno Medeiros - Software Engineer


Re: duck!

2010-11-19 Thread Bruno Medeiros

On 11/11/2010 15:22, Andrei Alexandrescu wrote:

On 11/11/10 6:30 AM, Bruno Medeiros wrote:

On 17/10/2010 20:11, Andrei Alexandrescu wrote:

On 10/17/2010 01:09 PM, Jeff Nowakowski wrote:

On 10/16/2010 04:05 PM, Andrei Alexandrescu wrote:


It's a subset of duck typing. I don't think calling a function that
supports a limited form of duck typing "duck" is a lie.


I'm sure if it was on a Go slide you would.


Probably not in as strong terms, but if you want to point out that I'm
biased... what can I do? I'm a simple man.

Andrei


When I first heard you say you were biased (in the Google Talk), I
thought you were being facetious, or just exaggerating.

I'm not so sure anymore, and I hope that is not the case. Because, as
I'm sure you must realize, being biased for D will only result in an
outcome of mild to severe annoyance and loss of credibility from:
* people biased for languages which are perceived to be competitors to D
(like Go).
* people who are (or strive to be) unbiased.

And given who you are (one of the designers of D), this outcome will
apply not just to yourself, but D as well, which obviously is not a good
thing.


I think I ascribe a milder meaning than you to "bias". It's in human
nature to have preferences, and it's self-evident that I'm biased in
favor of various facets of D's approach to computing. A completely
unbiased person would have a hard time working on anything creative.

Andrei



I don't think the bias above is just a case of preferences of one thing 
over the other. Having preferences is perfectly fine, as in "I prefer 
this approach", or even "I think this approach is more effective than 
that one".
Another thing is to describe reality in inaccurate terms ("I think 
approach A has property Z", when it doesn't), and/or to have a double 
standard when describing or analyzing something else.



--
Bruno Medeiros - Software Engineer


Re: std.algorithm.remove and principle of least astonishment

2010-11-19 Thread Bruno Medeiros

On 16/10/2010 20:51, Andrei Alexandrescu wrote:

On 10/16/2010 01:39 PM, Steven Schveighoffer wrote:

I suggest wrapping a char[] or wchar[] (of all constancies) with a
special range that imposes the restrictions.


I did so. It was called byDchar and it would accept a string type. It
sucked.

char[] and wchar[] are special. They embed their UTF affiliation in
their type. I don't think we should make a wash of all that by handling
them as arrays. They are not arrays.


Andrei


"They are not arrays."? So why are they arrays then? :3

Sorry, what I mean is: so we agree that char[] and wchar[] are special. 
Unlike *all other arrays*, there are restrictions on what you can assign 
to each element of the array. So conceptually they are not arrays, but 
in the type system they are very much arrays. (or described 
alternatively: implemented with arrays).


Isn't this a clear sign that what currently is char[] and wchar[] (= 
UTF-8 and UTF-16 encoded strings) should not be arrays, but instead a 
struct which would correctly represents the semantics and contracts of 
char[] and wchar[]? Let me clarify what I'm suggesting:
 * char[] and wchar[] would be just arrays of char's and wchar's, 
completely orthogonal to other array types, no restrictions on 
assignment, no further contracts.
 * UTF-8 and UTF-16 encoded strings would have their own struct-based 
type, let's call them string and wstring, which would likely use char[] 
and wchar[] as the contents (but these fields would be internal), and 
have whatever methods be appropriate, including opIndex.
 * string literals would be of type string and wstring, not char[] and 
wchar[].
 * for consistency, probably this would be true for UTF-32 as well: we 
would have a dstring, with dchar[] as the contents.


Problem solved. You're welcome. (as John Hodgeman would say)

No?
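
A minimal sketch of what such a struct-based string type could look like (everything here is illustrative only, names and methods included; this is a sketch of the idea, not a worked-out design):

```d
import std.utf : decode, validate;

/// Hypothetical sketch of the proposed type: the raw char[] is
/// internal, and the type's contract is that it always holds a
/// valid UTF-8 encoding.
struct String
{
    private char[] data; // raw UTF-8 code units, not exposed directly

    this(char[] utf8)
    {
        validate(utf8); // throws UTFException if not valid UTF-8
        data = utf8;
    }

    /// Indexing decodes whole code points, not code units.
    dchar opIndex(size_t codePointIndex)
    {
        size_t pos = 0;
        foreach (i; 0 .. codePointIndex)
            decode(data, pos); // skip one code point
        return decode(data, pos);
    }

    /// Length in code units, clearly named as such.
    @property size_t rawLength() { return data.length; }
}

unittest
{
    auto s = String("héllo".dup);
    assert(s[1] == 'é');      // second code point, not second byte
    assert(s.rawLength == 6); // 'é' takes two UTF-8 code units
}
```

Whether opIndex should be O(n) like this, or whether code-point indexing should exist at all, would itself be a design debate. But at least the struct makes the trade-off explicit instead of hiding it behind array syntax.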

--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 22/10/2010 20:48, Andrei Alexandrescu wrote:

On 10/22/10 14:02 CDT, Tomek Sowiński wrote:

On 22-10-2010 at 00:01:21, Walter Bright wrote:


As we all know, tool support is important for D's success. Making
tools easier to build will help with that.

To that end, I think we need a lexer for the standard library -
std.lang.d.lex. It would be helpful in writing color syntax
highlighting filters, pretty printers, repl, doc generators, static
analyzers, and even D compilers.

It should:

1. support a range interface for its input, and a range interface for
its output
2. optionally not generate lexical errors, but just try to recover and
continue
3. optionally return comments and ddoc comments as tokens
4. the tokens should be a value type, not a reference type
5. generally follow along with the C++ one so that they can be
maintained in tandem

It can also serve as the basis for creating a javascript
implementation that can be embedded into web pages for syntax
highlighting, and eventually an std.lang.d.parse.

Anyone want to own this?


Interesting idea. Here's another: D will soon need bindings for CORBA,
Thrift, etc, so lexers will have to be written all over to grok
interface files. Perhaps a generic tokenizer which can be parametrized
with a lexical grammar would bring more ROI, I got a hunch D's templates
are strong enough to pull this off without any source code generation
ala JavaCC. The books I read on compilers say tokenization is a solved
problem, so the theory part on what a good abstraction should be is
done. What you think?


Yes. IMHO writing a D tokenizer is a wasted effort. We need a tokenizer
generator.



Agreed, of all the things desired for D, a D tokenizer would rank pretty 
low I think.


Another thing, even though a tokenizer generator would be much more 
desirable, I wonder if it is wise to have that in the standard library? 
It does not seem to be of wide enough interest to be in a standard 
library. (Out of curiosity, how many languages have such a thing in 
their standard library?)



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex [Java and IDE's]

2010-11-19 Thread Bruno Medeiros

On 27/10/2010 05:39, Walter Bright wrote:

What I miss more in Java is not single structs (single values),


There's a lot more to miss than that. I find Java code tends to be
excessively complex, and that's because it lacks expressive power. It
was summed up for me by a colleague who said that one needs an IDE to
program in Java because with one button it will auto-generate 100 lines
of boilerplate.


I've been hearing that a lot, but I find it greatly exaggerated. Can you 
give some concrete examples?


Because regarding excessive verbosity in Java, I can only remember three 
significant things at the moment (at least disregarding meta 
programming), and one of them is nearly as verbose in D as in Java:


 1) writing getters and setters for fields
 2) verbose syntax for closures. (need to use an anonymous class, outer 
variables must be final, and wrapped in an array if write access is needed)
 3) writing trivial constructors whose parameters mirror the fields, 
and then constructors assign the parameters to the fields.


I don't think 1 and 2 happen often enough to be much of an annoyance 
(unless you're one of those Java people who think that directly 
accessing the public field of another class is a sin, and that instead 
every single field must have getters/setters and never ever be public...)
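
For comparison's sake, here is roughly what item 2 looks like on the D side (a trivial sketch; the Java equivalent needs an anonymous interface implementation, plus a final one-element array if the closure must write to the outer variable):

```d
import std.stdio;

void main()
{
    int counter = 0;
    // A delegate literal capturing 'counter' by reference; the
    // Java version of this is where the verbosity complaint bites.
    auto increment = { ++counter; };
    increment();
    increment();
    writeln(counter); // prints 2
}
```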


As an additional note, I don't think having an IDE auto-generate X lines 
of boilerplate code is necessarily a bad thing. It's only bad if the 
alternative of having a better language feature would actually save me 
coding time (whether initial coding, or subsequent modifications) or 
improve code understanding. _Isn't this what matters?_



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 19/11/2010 21:27, Jonathan M Davis wrote:

On Friday 19 November 2010 13:03:53 Bruno Medeiros wrote:

On 22/10/2010 20:48, Andrei Alexandrescu wrote:

On 10/22/10 14:02 CDT, Tomek Sowiński wrote:

On 22-10-2010 at 00:01:21, Walter Bright wrote:

As we all know, tool support is important for D's success. Making
tools easier to build will help with that.

To that end, I think we need a lexer for the standard library -
std.lang.d.lex. It would be helpful in writing color syntax
highlighting filters, pretty printers, repl, doc generators, static
analyzers, and even D compilers.

It should:

1. support a range interface for its input, and a range interface for
its output
2. optionally not generate lexical errors, but just try to recover and
continue
3. optionally return comments and ddoc comments as tokens
4. the tokens should be a value type, not a reference type
5. generally follow along with the C++ one so that they can be
maintained in tandem

It can also serve as the basis for creating a javascript
implementation that can be embedded into web pages for syntax
highlighting, and eventually an std.lang.d.parse.

Anyone want to own this?


Interesting idea. Here's another: D will soon need bindings for CORBA,
Thrift, etc, so lexers will have to be written all over to grok
interface files. Perhaps a generic tokenizer which can be parametrized
with a lexical grammar would bring more ROI, I got a hunch D's templates
are strong enough to pull this off without any source code generation
ala JavaCC. The books I read on compilers say tokenization is a solved
problem, so the theory part on what a good abstraction should be is
done. What you think?


Yes. IMHO writing a D tokenizer is a wasted effort. We need a tokenizer
generator.


Agreed, of all the things desired for D, a D tokenizer would rank pretty
low I think.

Another thing, even though a tokenizer generator would be much more
desirable, I wonder if it is wise to have that in the standard library?
It does not seem to be of wide enough interest to be in a standard
library. (Out of curiosity, how many languages have such a thing in
their standard library?)


We want to make it easy for tools to be built to work on and deal with D code.
An IDE, for example, needs to be able to tokenize and parse D code. A program
like lint needs to be able to tokenize and parse D code. By providing a lexer
and parser in the standard library, we are making it far easier for such tools
to be written, and they could be of major benefit to the D community. Sure, the
average program won't need to lex or parse D, but some will, and making it easy
to do will make it a lot easier for such programs to be written.

- Jonathan M Davis


And by providing a lexer and a parser outside the standard library, 
wouldn't it make it just as easy for those tools to be written? What's 
the advantage of being in the standard library? I see only 
disadvantages: to begin with it potentially increases the time that 
Walter or other Phobos contributors may have to spend on it, even if 
it's just reviewing patches or making sure the code works.



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 27/10/2010 22:43, Nick Sabalausky wrote:

"retard"  wrote in message
news:iaa44v$17s...@digitalmars.com...


I only meant that the widespread adoption of Java shows how the public at
large cares very little about the performance issues you mentioned.


The public at large is convinced that "Java is fast now, really!". So I'm
not certain widespread adoption of Java necessarily indicates they don't
care so much about performance. Of course, Java is quickly becoming a legacy
language anyway (the next COBOL, IMO), so that throws another wrench into
the works.




Java is quickly becoming a legacy language? the next COBOL? SRSLY?...
Just two years ago, the now hugely popular Android platform chose Java 
as its language of choice, and you think Java is becoming legacy?...


The development of the Java language itself has stagnated over the last 
6 years or so (especially due to corporate politics, which now has 
become even worse and uncertain with all the shit Oracle is doing), but 
that's a completely different statement from saying Java is becoming 
legacy.
In fact, all the uproar and concern about the future of Java under 
Oracle, of the JVM, of the JCP (the body that regulates changes to 
Java),etc., is a testament to the huge popularity of Java. Otherwise 
people (and corporations) wouldn't care; they would just let it wither 
away with much less concern.



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 27/10/2010 22:04, retard wrote:

Wed, 27 Oct 2010 13:52:29 -0700, Walter Bright wrote:


retard wrote:

Wed, 27 Oct 2010 12:08:19 -0700, Walter Bright wrote:


retard wrote:

This is why the basic data structure in functional languages,
algebraic data types, suits better for this purpose.

I think you recently demonstrated otherwise, as proven by the
widespread use of Java :-)


I don't understand your logic -- Widespread use of Java proves that
algebraic data types aren't a better suited way for expressing
compiler's data structures such as syntax trees?


You told me that widespread use of Java proved that nothing more complex
than what Java provides is useful:

"Java is mostly used for general purpose programming so your claims
about usefulness and the need for extreme performance look silly."

I'd be surprised if you seriously meant that, as it implies that Java is
the pinnacle of computer language design, but I can't resist teasing you
about it. :-)


I only meant that the widespread adoption of Java shows how the public at
large cares very little about the performance issues you mentioned. Java
is one of the most widely used languages and it's also successful in many
fields. Things could be better from programming language theory's point
of view, but the business world is more interesting in profits and the
large pool of Java coders has given better benefits than more expressive
languages. I don't think that says anything against my notes about
algebraic data types.


"the widespread adoption of Java shows how the public at large cares very 
little about the performance issues you mentioned"


WTF? The widespread adoption of Java means that _Java developers_ at 
large don't care about those performance issues (mostly because they 
work on stuff where they don't need to). But it says nothing about the 
pool of developers as a whole. Java is hugely popular, but not in a 
"it's practically the only language people use" way. It's not like 
Windows on the desktop.



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 19/11/2010 22:02, Jonathan M Davis wrote:

On Friday, November 19, 2010 13:53:12 Bruno Medeiros wrote:

On 19/11/2010 21:27, Jonathan M Davis wrote:

And by providing a lexer and a parser outside the standard library,
wouldn't it make it just as easy for those tools to be written? What's
the advantage of being in the standard library? I see only
disadvantages: to begin with it potentially increases the time that
Walter or other Phobos contributors may have to spend on it, even if
it's just reviewing patches or making sure the code works.


If nothing, else, it makes it easier to keep in line with dmd itself. Since the
dmd front end is LGPL, it's not possible to have a Boost port of it (like the
Phobos version will be) without Walter's consent. And I'd be surprised if he did
that for a third party library (though he seems to be pretty open on a lot of
that kind of stuff). Not to mention, Walter and the core developers are 
_exactly_
the kind of people that you want working on a lexer or parser of the language
itself, because they're the ones who work on it.

- Jonathan M Davis


Eh? That license argument doesn't make sense: if the lexer and parser 
were to be based on DMD itself, then putting them in the standard library 
is equivalent (in licensing terms) to licensing the lexer and parser 
parts of DMD under Boost. More precisely, what I mean by equivalent is 
that there is no reason why Walter would allow one thing and not the 
other... (because in both cases he would have to issue that license)


As for your second argument, yes, Walter and the core developers would 
be the most qualified people to work in it, no question about it. But my 
point is, I don't think Walter and Phobos core devs should be working on 
it, because it takes time away from other things that are much more 
important. Their time is precious.
I think our main point of disagreement is just how important a D lexer 
and/or parser would be. I think it would be of very low interest, 
definitely not a "major benefit to the D community".


For starters, regarding its use in IDEs: I think we are *ages* away from 
the point where an IDE based only on D will be able to compete with IDEs 
based in Eclipse/Visual-Studio/Xcode/etc.. I think much sooner we will 
have a full D compiler written in D than a (competitive) D IDE written 
in D. We barely have mature GUI libraries from what I understand.
(What may be more realistic is an IDE partially written in D, and 
otherwise based on Eclipse/Visual-Studio/etc., but even so, I think it 
would be hard to compete with other non-D IDEs)



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 19/11/2010 22:25, Michael Stover wrote:

As for D lexers and tokenizers, what would be nice is to
A) build an antlr grammar for D
B) build D targets for antlr so that antlr can generate lexers and
parsers in the D language.

For B) I found http://www.mbutscher.de/antlrd/index.html

For A) A good list of antlr grammars is at
http://www.antlr.org/grammar/list, but there isn't a D grammar.

These things wouldn't be an enormous amount of work to create and
maintain, and, if done, anyone could parse D code in many languages,
including Java and C which would make providing IDE features for D
development easier in those languages (eclipse for instance), and you
could build lexers and parsers in D using antlr grammars.

-Mike


Yes, that would be much better. It would be directly and immediately 
useful for the DDT project:


"But better yet would be to start coding our own custom parser (using a 
parser generator like ANTLR for example), that could really be tailored 
for IDE needs. In the medium/long term, that's probably what needs to be 
done. "
in 
http://www.digitalmars.com/d/archives/digitalmars/D/ide/Future_of_Descent_and_D_Eclipse_IDE_635.html


--
Bruno Medeiros - Software Engineer


Re: Simple @tagged attribute for unions

2010-11-19 Thread Bruno Medeiros

On 22/10/2010 04:04, Andrei Alexandrescu wrote:

On 10/21/2010 08:41 PM, bearophile wrote:

I have suggested yet another attribute, @tagged:
http://d.puremagic.com/issues/show_bug.cgi?id=5097

Bye, bearophile


And almost exactly six hours ago:


On the other hand, currently there are many D2 features that are
unfinished and buggy, so adding even more stuff is not a good idea.
And I think named arguments are a purely additive change. So Walter
may add them later when the current features are implemented well
enough. Currently it's much more important to focus on eventually
needed non-additive changes instead.


Hope you agree with yourself :o).


Andrei


How did I miss that! It's bearophile's best comment ever! :D

--
Bruno Medeiros - Software Engineer


Re: Linux Agora D thread

2010-11-19 Thread Bruno Medeiros

On 22/10/2010 11:17, retard wrote:

Fri, 22 Oct 2010 02:42:49 -0700, Walter Bright wrote:


retard wrote:


Why I think the D platform's risk is so high is because the author
constantly refuses to give ANY estimates on feature schedules.


Would you believe them if I did?


http://en.wikipedia.org/wiki/Software_development_process

"Without project management, software projects can easily be delivered
late or over budget. With large numbers of software projects not meeting
their expectations in terms of functionality, cost, or delivery schedule,
effective project management appears to be lacking."

http://en.wikipedia.org/wiki/Estimation_in_software_engineering

"The ability to accurately estimate the time and/or cost taken for a
project to come in to its successful conclusion is a serious problem for
software engineers. The use of a repeatable, clearly defined and well
understood software development process has, in recent years, shown
itself to be the most effective method of gaining useful historical data
that can be used for statistical estimation. In particular, the act of
sampling more frequently, coupled with the loosening of constraints
between parts of a project, has allowed more accurate estimation and more
rapid development times."

http://en.wikipedia.org/wiki/Application_Lifecycle_Management

"Proponents of application lifecycle management claim that it
* Increases productivity, as the team shares best practices for
development and deployment, and developers need focus only on current
business requirements
* Improves quality, so the final application meets the needs and
expectations of users
* Breaks boundaries through collaboration and smooth information flow
* Accelerates development through simplified integration
* Cuts maintenance time by synchronizing application and design
* Maximizes investments in skills, processes, and technologies
* Increases flexibility by reducing the time it takes to build and adapt
applications that support new business initiatives"

http://en.wikipedia.org/wiki/Cowboy_coding

"Lack of estimation or implementation planning may cause a project to be
delayed. Sudden deadlines or pushes to release software may encourage the
use of quick and dirty or code and fix techniques that will require
further attention later."

"Cowboy coding is common at the hobbyist or student level where
developers may initially be unfamiliar with the technologies, such as the
build tools, that the project requires."

"Custom software applications, even when using a proven development
cycle, can experience problems with the client concerning requirements.
Cowboy coding can accentuate this problem by not scaling the requirements
to a reasonable timeline, and may result in unused or unusable components
being created before the project is finished. Similarly, projects with
less tangible clients (often experimental projects, see independent game
development) may begin with code and never a formal analysis of the
design requirements. Lack of design analysis may lead to incorrect or
insufficient technology choices, possibly requiring the developer to port
or rewrite their software in order for the project to be completed."

"Many software development models, such as Extreme Programming, use an
incremental approach which stresses functional prototypes at each phase.
Non-managed projects may have few unit tests or working iterations,
leaving an incomplete project unusable."



What's your point with all of this? That Walter should do estimates?


--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 27/10/2010 05:39, Walter Bright wrote:

bearophile wrote:

Walter:
Java was designed to be simple! Simple means to have a more uniform
semantics.


So was Pascal. See the thread about how useless it was as a result.



There's good simple, and there's bad simple...


--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-19 Thread Bruno Medeiros

On 19/11/2010 23:45, Todd VanderVeen wrote:

== Quote from Bruno Medeiros (brunodomedeiros+s...@com.gmail)'s article

I think much sooner we will
have a full D compiler written in D than a (competitive) D IDE written
in D.


I agree. I do like the suggestion for developing the D grammar in Antlr though 
and
it is something I would be interested in working on. With this in hand, the
prospect of adding D support as was done for C++ to Eclipse or Netbeans becomes
much more feasible. Has a complete grammar been defined/compiled or is anyone
currently working in this direction? Having a robust IDE seems far more 
important
than whether it is written in D itself.


See the comment I made below, to Michael Stover. ( 
news://news.digitalmars.com:119/ic71pa$1le...@digitalmars.com )


--
Bruno Medeiros - Software Engineer


Re: New slides about Go

2010-11-24 Thread Bruno Medeiros

On 24/11/2010 01:37, Nick Sabalausky wrote:

"Bruno Medeiros"  wrote in message
news:ibjd5l$2p...@digitalmars.com...

On 11/11/2010 12:10, Justin Johansson wrote:

On 11/11/10 22:56, Bruno Medeiros wrote:

On 16/10/2010 00:15, Walter Bright wrote:

Andrei Alexandrescu wrote:

On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:

On 10/15/10 16:25 CDT, Nick Sabalausky wrote:

I just hope they get serious enough about functional programming to
gain
some monads to go along with their "goroutines".


They should call them "gonads".

Andrei


Wait, that was your actual joke. Sig...


I see we should invite JokeExplainer to the forums!


I didn't get it... :/
(Nick's joke that is)



Hi Bruno,

It is an English language word play on sound-alike words.

Google on: "define: gonads"

I think Nick was suggesting that someone/something gets some "balls"
though "ovaries" might not be out of the question also. :-)

Trusting this explains well in your native language.

Regards,
Justin


So Nick already had "gonads" in mind on that post, is that the case?



My intended joke:

Google Go has "coroutines" that it calls "goroutines" ( Because "go" +
"coroutines" == "goroutines"). So I applied the same cutesy naming to
"monads": "go" + "monads" == "gonads". And like Justin said, "gonads" also
means "testicles" (and sometimes "ovaries"), so it's a pun and a rather odd
name for a programming language feature.



Ok, just checking, thanks for the clarification. (I'm sometimes a bit 
obtuse with things like this)



(In English, saying that something requires
balls/gonads/nuts/etc is a common slang way of saying it requires courage.)



Yeah, that I know already. :)


--
Bruno Medeiros - Software Engineer


Re: std.algorithm.remove and principle of least astonishment

2010-11-24 Thread Bruno Medeiros

On 22/11/2010 04:56, Andrei Alexandrescu wrote:

On 11/21/10 22:09 CST, Rainer Deyke wrote:

On 11/21/2010 17:31, Andrei Alexandrescu wrote:
char[] and wchar[] fail to provide some of the guarantees of all other
instances of T[].


What exactly are those guarantees?



More exactly, that the following is true for any T: 

foreach (character; (T[]).init) {
    static assert(is(typeof(character) == T));
}
static assert(std.range.isRandomAccessRange!(T[]));

It is not true for char and wchar (the second assert fails).
Another guarantee, similar in nature, and roughly described, is that 
functions in std.algorithm should never fail or throw when using an 
array as an argument (assuming the other arguments are valid). So for 
example:


std.algorithm.filter!("true")(anArray)

Should not throw, for any value of anArray. But it may if anArray is of 
type char[] or wchar[] and its contents are not validly encoded.



I'll leave the arguing of whether we want those guarantees for other 
subthreads, but it should be well agreed by now, that the above is not 
guaranteed.
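
For the record, both failures can be observed directly with concrete types (nothing new here, this just shows the same special-casing the guarantees above describe):

```d
import std.range : isRandomAccessRange;

// The range guarantee: holds for ordinary arrays, fails for narrow strings.
static assert( isRandomAccessRange!(int[]));
static assert(!isRandomAccessRange!(char[]));

void main()
{
    // The element-type guarantee: iterating a string yields code
    // units or code points depending on how you ask, unlike any other T[].
    size_t units, points;
    foreach (char c; "héllo")  ++units;  // 6 code units
    foreach (dchar c; "héllo") ++points; // 5 code points
    assert(units == 6 && points == 5);
}
```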



--
Bruno Medeiros - Software Engineer


Re: std.algorithm.remove and principle of least astonishment

2010-11-24 Thread Bruno Medeiros

On 23/11/2010 18:15, foobar wrote:

It's simple, a mediocre language (Java) with mediocre libraries has orders of 
magnitude more success than C++ with it's libs fine tuned for performance. Why?


Java has mediocre libraries?? Are you serious about that opinion?


--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-24 Thread Bruno Medeiros

On 19/11/2010 23:39, Andrei Alexandrescu wrote:

On 11/19/10 1:03 PM, Bruno Medeiros wrote:

On 22/10/2010 20:48, Andrei Alexandrescu wrote:

On 10/22/10 14:02 CDT, Tomek Sowiński wrote:

On 22-10-2010 at 00:01:21, Walter Bright wrote:


As we all know, tool support is important for D's success. Making
tools easier to build will help with that.

To that end, I think we need a lexer for the standard library -
std.lang.d.lex. It would be helpful in writing color syntax
highlighting filters, pretty printers, repl, doc generators, static
analyzers, and even D compilers.

It should:

1. support a range interface for its input, and a range interface for
its output
2. optionally not generate lexical errors, but just try to recover and
continue
3. optionally return comments and ddoc comments as tokens
4. the tokens should be a value type, not a reference type
5. generally follow along with the C++ one so that they can be
maintained in tandem

It can also serve as the basis for creating a javascript
implementation that can be embedded into web pages for syntax
highlighting, and eventually an std.lang.d.parse.

Anyone want to own this?


Interesting idea. Here's another: D will soon need bindings for CORBA,
Thrift, etc, so lexers will have to be written all over to grok
interface files. Perhaps a generic tokenizer which can be parametrized
with a lexical grammar would bring more ROI, I got a hunch D's
templates
are strong enough to pull this off without any source code generation
ala JavaCC. The books I read on compilers say tokenization is a solved
problem, so the theory part on what a good abstraction should be is
done. What you think?


Yes. IMHO writing a D tokenizer is a wasted effort. We need a tokenizer
generator.



Agreed, of all the things desired for D, a D tokenizer would rank pretty
low I think.

Another thing, even though a tokenizer generator would be much more
desirable, I wonder if it is wise to have that in the standard library?
It does not seem to be of wide enough interest to be in a standard
library. (Out of curiosity, how many languages have such a thing in
their standard library?)


Even C has strtok.

Andrei


That's just a fancy splitter, I wouldn't call that a proper tokenizer. I 
meant something that, at the very least, would tokenize based on regular 
expressions (and have heterogeneous tokens).
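
To illustrate the distinction, a rough sketch of a regex-driven tokenizer with heterogeneous tokens (purely illustrative: TokenDef and Token are made-up names for this example, and a real generator would compile the patterns once, ideally at compile time, rather than re-matching per position):

```d
import std.regex : matchFirst;

struct TokenDef { string kind; string pattern; }
struct Token    { string kind; string text; }

Token[] tokenize(string input, TokenDef[] defs)
{
    Token[] result;
    while (input.length)
    {
        bool matched = false;
        foreach (d; defs)
        {
            // Anchor each pattern at the current position.
            auto m = matchFirst(input, "^(?:" ~ d.pattern ~ ")");
            if (!m.empty)
            {
                if (d.kind != "whitespace") // differing kinds: heterogeneous
                    result ~= Token(d.kind, m.hit);
                input = input[m.hit.length .. $];
                matched = true;
                break;
            }
        }
        if (!matched)
            throw new Exception("no token matches at: " ~ input);
    }
    return result;
}

unittest
{
    auto defs = [TokenDef("ident", `[a-zA-Z_]\w*`),
                 TokenDef("number", `\d+`),
                 TokenDef("whitespace", `\s+`)];
    auto toks = tokenize("foo 42", defs);
    assert(toks.length == 2);
    assert(toks[0].kind == "ident" && toks[1].text == "42");
}
```

strtok gives you none of this: no token kinds, no patterns, just splitting on delimiter characters.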


--
Bruno Medeiros - Software Engineer


Re: std.algorithm.remove and principle of least astonishment

2010-11-24 Thread Bruno Medeiros

On 24/11/2010 13:07, Bruno Medeiros wrote:

On 22/11/2010 04:56, Andrei Alexandrescu wrote:

On 11/21/10 22:09 CST, Rainer Deyke wrote:

On 11/21/2010 17:31, Andrei Alexandrescu wrote:
char[] and wchar[] fail to provide some of the guarantees of all other
instances of T[].


What exactly are those guarantees?



More exactly, that the following is true for any T:

foreach(character; (T[]).init) {
static assert(is(typeof(character) == T));
}
static assert(std.range.isRandomAccessRange!(T[]));

It is not true for char and wchar (the second assert fails).
Another guarantee, similar in nature, and roughly described, is that
functions in std.algorithm should never fail or throw when using an
array as an argument (assuming the other arguments are valid). So for
example:

std.algorithm.filter!("true")(anArray)

Should not throw, for any value of anArray. But it may if anArray is of
type char[] or wchar[] and there is an encoding exception.


I'll leave the arguing of whether we want those guarantees for other
subthreads, but it should be well agreed by now, that the above is not
guaranteed.




Actually, I'll reply here, on why I would like these guarantees:

I think these guarantees are desirable due to a general design principle 
of mine that goes something like this:
 * Avoid "bad" abstractions: the abstraction should reflect intent as 
closely and clearly as possible.


Yeah, that may not tell anyone much because it's very hard to 
objectively define whether an abstraction is "bad" or not, or better or 
worse than another. However, here are a few guidelines:
  - within the same level of functionality, things should be as simple 
and as orthogonal as possible.
  - don't confuse implementation with contract/interface/API. (note 
that I said "confuse", not "expose")


char[] is not as orthogonal as possible. char[] does not reflect its 
underlying intent as clearly as it could. If it were defined as a struct, 
you could directly document the expectation that the underlying string 
must be a valid UTF-8 encoding. In fact, you could even make that a 
contract.
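[Editorial note: the idea above is D-specific, but it transposes cleanly to Java for illustration. The `Utf8String` type below is invented for this sketch; it shows how wrapping the raw bytes in a type turns the "must be valid UTF-8" expectation into an enforced constructor contract instead of a documentation comment.]

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

// Hypothetical wrapper type: the constructor enforces the invariant
// that the underlying bytes are a valid UTF-8 encoding.
public final class Utf8String {
    private final byte[] bytes;

    public Utf8String(byte[] bytes) {
        try {
            // The default CharsetDecoder reports (rather than replaces)
            // malformed input, so invalid UTF-8 is rejected here.
            StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(bytes));
        } catch (CharacterCodingException e) {
            throw new IllegalArgumentException("not valid UTF-8", e);
        }
        this.bytes = bytes.clone(); // defensive copy preserves the invariant
    }

    public byte[] toBytes() {
        return bytes.clone();
    }

    public static boolean isValid(byte[] bytes) {
        try {
            new Utf8String(bytes);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```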


If instead of an argument based on a design principle, you ask for 
concrete examples of why this is undesirable, well, I have no examples 
to give...  I haven't used D enough to run into real-world examples, but 
I believe that whenever the above principle is violated, then it is very 
likely that problems and/or annoyances will occur sooner or later.


I should point out however, that, at least for me, the undesirability of 
the current behavior is actually very low. Compared to other language 
issues (whether current ones, or past ones), it does not seem that 
significant. For example, static arrays not being proper values types 
(plus their .init thing) was much worse, man, that annoyed the shit out 
of me.


Then again, someone with more experience using D might encounter a more 
serious real-world case regarding the current behavior. Also, regarding 
this:



On 22/11/2010 17:40, Andrei Alexandrescu wrote:
>
> Of course you can. After you were to admit that it makes next to no
> sense to sort an array of code units, I would have said "well if somehow
> you do imagine such a situation, you achieve that by saying what you
> means: cast the char[] to ubyte[] and sort that".

Casting to ubyte[] does solve the use case, I agree. It does so with a 
minor inconvenience (having to cast), but it's very minor and I don't 
think it's that significant.
Rather, I'm more concerned with the use cases that actually want to use 
a char[] as a UTF-8 encoded string. As I mentioned above, I'm afraid of 
situations where this inconsistency might cause more significant 
inconveniences, maybe even bugs!
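[Editorial note: for readers more familiar with Java, its `char` is a UTF-16 code unit, so Java strings exhibit the same code-unit vs code-point mismatch discussed here for D's `wchar[]`. A minimal demonstration:]

```java
// Java's char is a UTF-16 code unit, not a character: a supplementary
// character such as U+1F600 occupies two chars (a surrogate pair), so
// naive per-char iteration splits it -- the same mismatch as D's wchar[].
public class CodeUnits {
    public static void main(String[] args) {
        String s = "a\uD83D\uDE00b"; // 'a', U+1F600, 'b'
        System.out.println(s.length());                      // 4 code units
        System.out.println(s.codePointCount(0, s.length())); // 3 code points
    }
}
```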


--
Bruno Medeiros - Software Engineer


Re: std.algorithm.remove and principle of least astonishment

2010-11-24 Thread Bruno Medeiros

On 21/11/2010 18:23, Andrei Alexandrescu wrote:


I have often reflected whether I'd do things differently if I could go
back in time and join Walter when he invented D's strings. I might have
done one or two things differently, but the gain would be marginal at
best. In fact, it's not impossible the balance of things could have been
hurt. Between speed, simplicity, effectiveness, abstraction, access to
representation, and economy of means, D's strings are the best
compromise out there that I know of, bar none by a wide margin.


Those things you would have done differently, would any of them impact 
this particular issue?


--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-24 Thread Bruno Medeiros

On 19/11/2010 23:56, Michael Stover wrote:

so that was 4 months ago - how do things currently stand on that initiative?

-Mike

On Fri, Nov 19, 2010 at 6:37 PM, Bruno Medeiros
 wrote:

On 19/11/2010 22:25, Michael Stover wrote:

As for D lexers and tokenizers, what would be nice is to
A) build an antlr grammar for D
B) build D targets for antlr so that antlr can generate lexers and
parsers in the D language.

For B) I found http://www.mbutscher.de/antlrd/index.html

For A) A good list of antlr grammars is at
http://www.antlr.org/grammar/list, but there isn't a D grammar.

These things wouldn't be an enormous amount of work to create and
maintain, and, if done, anyone could parse D code in many languages,
including Java and C which would make providing IDE features for D
development easier in those languages (eclipse for instance),
and you
could build lexers and parsers in D using antlr grammars.

-Mike


Yes, that would be much better. It would be directly and immediately
useful for the DDT project:

"But better yet would be to start coding our own custom parser
(using a parser generator like ANTLR for example), that could really
be tailored for IDE needs. In the medium/long term, that's probably
what needs to be done. "
in

http://www.digitalmars.com/d/archives/digitalmars/D/ide/Future_of_Descent_and_D_Eclipse_IDE_635.html

--
Bruno Medeiros - Software Engineer




I don't know about Ellery; as you can see in that thread, he/she(?) 
mentioned interest in working on that, but I haven't heard anything more.


As for me, I didn't work on that, nor did I plan to.
Nor am I planning to anytime soon; DDT can handle things with the 
current parser for now (bugs can be fixed on the current code, perhaps 
some limitations can be resolved by merging some more code from DMD), so 
I'll likely work on other more important features before I go there. For 
example, I'll likely work on debugger integration, and code completion 
improvements before I would go on writing a new parser from scratch. 
Plus, it gives more time for someone else to hopefully work on it. :P


Unlike Walter, I can't write a D parser in a weekend... :) Not even in a 
week, especially since I've never done anything of this kind before.



--
Bruno Medeiros - Software Engineer


Re: What can the community do to help D?

2010-11-24 Thread Bruno Medeiros

On 23/10/2010 16:09, Peter Alexander wrote:

There have been threads about what the biggest issues with D are, and
about the top priorities for D are, but I don't think there has been a
thread about what the best things are that the community can do to help D.

Should people try to spread the word about D? I'm not even sure this is
a good idea. We all know that D2 is far from stable at the moment, and
getting people to try D in its current state might actually put them off
(first impressions are very important).

Should people try to help with dmd and Phobos? If so, what are the best
ways to do that?

Should people work on other D compilers (gdc and ldc in particular)? Is
this even possible without a formal language specification?

What else can we do to help? And what would we consider to be the *best*
ways to help?


If you have good /Java skills/, the DDT project welcomes contributions 
(http://code.google.com/a/eclipselabs.org/p/ddt/).


The *best* ways to help depend on your skills and area of 
experience. One person may be much more useful and/or productive for the 
D community doing one thing versus another.


But if you want to know which areas would benefit most from 
contributions, in my opinion it's the compilers, especially DMD. Second 
to that I would say the rest of the toolchain (IDEs, build tools, 
debuggers).


--
Bruno Medeiros - Software Engineer


Re: More Clang diagnostic

2010-11-24 Thread Bruno Medeiros

On 26/10/2010 04:32, Walter Bright wrote:

bearophile wrote:

The C# compiler too show those column number. But last time Walter has
explained that to do this, the compiler has to keep more data (all
those line
numbers), so this may slow down the compilation a little. And of course
currently this information is not present, so it probably requires a good
amount of changes to be implemented. The slowdown problem may be
solved as in
GCC, adding a compilation switch that gives/removes the column number (it
uses to be switched off on default, now after the competition by Clang
it's
switched on on default).


Switching it off will have no effect on compile speed.



Why is that? What would cause a loss in compile speed even if this 
option were turned off?


--
Bruno Medeiros - Software Engineer


Re: More Clang diagnostic

2010-11-24 Thread Bruno Medeiros

On 26/10/2010 04:42, Walter Bright wrote:

Rainer Deyke wrote:

On 10/25/2010 19:01, Walter Bright wrote:

Yes, we discussed it before. The Digital Mars C/C++ compiler does this,
and NOBODY CARES.
Not one person in 25 years has ever even commented on it. Nobody
commented on its lack in dmd.


I think someone just did.


Only because clang made a marketing issue out of it. Nobody coming from
dmc++ noticed.

I used to point out this feature in dmc++ to users. Nobody cared, I'd
just get blank looks. Why would I drop it if people wanted it?



Could this be because of some particular bias or idiosyncrasy on the 
part of dmc++ users? Any idea what the C++ user community at large would 
think of such a feature, prior to Clang?


I'm trying to think back to days when I used VS C++ 6 and VS C++ 2003, 
but I can't remember if the error messages were just line-based, or had 
more precision than that.



As for me, I think this functionality is useful, but only significantly 
so when integrated into an IDE/editor.


--
Bruno Medeiros - Software Engineer


Re: D in accounting program

2010-11-24 Thread Bruno Medeiros

On 26/10/2010 19:13, Adam D. Ruppe wrote:

I've used D2 for a large web application for 7 months now without having to 
change
any more than a handful of lines of code due to language/lib changes.

The most intrusive change has probably been std.contracts being renamed!


If you are just using this newsgroup, or one attempt at one of the fancier
features to decide the language is changing or buggy or whatever, you're looking
at a horribly small slice of reality.


That's interesting, could you give a bit more detail about what the D2 
program did, etc.?


--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-24 Thread Bruno Medeiros

On 24/10/2010 00:46, bearophile wrote:

Walter:


As we all know, tool support is important for D's success. Making tools easier
to build will help with that.

To that end, I think we need a lexer for the standard library - std.lang.d.lex.
It would be helpful in writing color syntax highlighting filters, pretty
printers, repl, doc generators, static analyzers, and even D compilers.


This is a quite long talk by Steve Yegge that I've just seen (linked from 
Reddit):
http://vimeo.com/16069687

I don't suggest you to see it all unless you are very interested in that topic. 
But the most important thing it says is that, given that big software companies 
use several languages, and programmers often don't want to change their 
preferred IDE, there is a problem: given N languages and M editors/IDEs, total 
toolchain effort is N * M. That means N syntax highlighters, N indenters, N 
refactoring suites, etc. Result: most languages have bad toolchains and most 
IDEs manage very well only one or very few languages.

So he has suggested the Grok project, that allows to reduce the toolchain 
effort to N + M. Each language needs to have one of each service: indenter, 
highlighter, name resolver, refactory, etc. So each IDE may link (using a 
standard interface provided by Grok) to those services and use them.

Today Grok is not available yet, and its development is at the first stages, 
but after this talk I think that it may be positive to add to Phobos not just 
the D lexer, but also other things, even a bit higher level as an indenter, 
highlighter, name resolver, refactory, etc. Even if they don't use the standard 
universal interface used by Grok I think they may speed up the development of 
the D toolchain.

Bye,
bearophile



Hum, very interesting topic! A few disjoint comments:


(*) I'm glad to see another person, especially one who is "prominent" in 
the development community (like Andrei), discuss the importance of the 
toolchain, specifically IDEs, for emerging languages. Or for any language 
for that matter. At the beginning of the talk I was like "man, this is 
spot-on, that's what I've said before, I wish Walter would *hear* this"! 
LOL, imagine my surprise when I found that Walter was in fact *there*! 
(When I saw the talk I didn't even know this was at NWCPP, otherwise I 
might have suspected)



(*) I actually thought about some similar ideas before, for example, I 
thought about the idea of exposing some (if not all) of the 
functionality of DDT through the command-line (note that Eclipse can run 
headless, without any UI). And this would not be just semantic/indexer 
functionality, so for example:
  * DDoc generation, like Descent had at some point 
(http://www.mail-archive.com/digitalmars-d-annou...@puremagic.com/msg02734.html)
  * build functionality - only really interesting if the DDT builder 
becomes smarter, ie, does more useful stuff than what it does now.

  * semantic functionality: find-ref, code completion.


(*) I wish I had been at that talk; I would have liked to ask and discuss 
some things with Steve Yegge, particularly his comments about Eclipse's 
indexer. I became curious about what exactly he thinks is wrong 
with Eclipse's indexer. Also, I wonder if he's not conflating "CDT's 
indexer" with "Eclipse indexer", because actually there is no such thing 
as an "Eclipse indexer". I'm gonna take a better look at the comments for 
this one.



(*) As for Grok itself, it looks potentially interesting, but I still 
have only a very vague impression of what it does (let alone *how*).



--
Bruno Medeiros - Software Engineer


Re: ddt 0.4rc1 Installation error

2010-11-24 Thread Bruno Medeiros

On 28/10/2010 10:51, dolive wrote:

Cannot complete the install because one or more required items could not be 
found.

 Software being installed: DDT - D Development Tools RC1 
0.4.0.201010271500-RC1 (org.dsource.ddt.feature.group 0.4.0.201010271500-RC1) 
Missing requirement: DDT - D Development Tools RC1 0.4.0.201010271500-RC1 
(org.dsource.ddt.feature.group 0.4.0.201010271500-RC1) requires 
'org.eclipse.dltk.core.feature.group [2.0.0,2.1.0)' but it could not be found

I was try eclipse3.6, eclipse3.6.1, eclipse3.6.1-1020


Note: in the future please post stuff like this in the digitalmars.ide 
Newsgroup, or on the DDT forums at DSource, but not here, thanks.


--
Bruno Medeiros - Software Engineer


Re: GCC 4.6

2010-11-24 Thread Bruno Medeiros

On 01/11/2010 01:23, Walter Bright wrote:

... , DOS support,  ...


DOS support?... That's quite telling.
I can't say I view that as a positive, either for DMC or for the people 
who prefer it. Quite the contrary, in fact.



--
Bruno Medeiros - Software Engineer


Re: More Clang diagnostic

2010-11-24 Thread Bruno Medeiros

On 24/11/2010 17:33, bearophile wrote:

Bruno Medeiros:

Why is that? What would cause a loss in compile speed even if this
option was turned off?


Maybe because the data structures used to keep the data around are used and 
kept even when you disable that feature (the switch just disables the printing).

Bye,
bearophile


So was Walter talking about GCC specifically, rather than about the 
feature in general, theoretical terms (i.e., as applied to any compiler)?


--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-24 Thread Bruno Medeiros

On 24/11/2010 13:30, Bruno Medeiros wrote:

On 19/11/2010 23:39, Andrei Alexandrescu wrote:

On 11/19/10 1:03 PM, Bruno Medeiros wrote:

On 22/10/2010 20:48, Andrei Alexandrescu wrote:

On 10/22/10 14:02 CDT, Tomek Sowiński wrote:

Dnia 22-10-2010 o 00:01:21 Walter Bright 
napisał(a):


As we all know, tool support is important for D's success. Making
tools easier to build will help with that.

To that end, I think we need a lexer for the standard library -
std.lang.d.lex. It would be helpful in writing color syntax
highlighting filters, pretty printers, repl, doc generators, static
analyzers, and even D compilers.

It should:

1. support a range interface for its input, and a range interface for
its output
2. optionally not generate lexical errors, but just try to recover
and
continue
3. optionally return comments and ddoc comments as tokens
4. the tokens should be a value type, not a reference type
5. generally follow along with the C++ one so that they can be
maintained in tandem

It can also serve as the basis for creating a javascript
implementation that can be embedded into web pages for syntax
highlighting, and eventually an std.lang.d.parse.

Anyone want to own this?


Interesting idea. Here's another: D will soon need bindings for CORBA,
Thrift, etc, so lexers will have to be written all over to grok
interface files. Perhaps a generic tokenizer which can be parametrized
with a lexical grammar would bring more ROI, I got a hunch D's
templates
are strong enough to pull this off without any source code generation
ala JavaCC. The books I read on compilers say tokenization is a solved
problem, so the theory part on what a good abstraction should be is
done. What you think?


Yes. IMHO writing a D tokenizer is a wasted effort. We need a tokenizer
generator.



Agreed, of all the things desired for D, a D tokenizer would rank pretty
low I think.

Another thing, even though a tokenizer generator would be much more
desirable, I wonder if it is wise to have that in the standard library?
It does not seem to be of wide enough interest to be in a standard
library. (Out of curiosity, how many languages have such a thing in
their standard library?)


Even C has strtok.

Andrei


That's just a fancy splitter, I wouldn't call that a proper tokenizer. I
meant something that, at the very least, would tokenize based on regular
expressions (and have heterogeneous tokens).



In other words, a lexer, that might be a better term in this context.
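[Editorial note: the distinction drawn above, a strtok-style splitter versus a lexer that classifies substrings into typed tokens via regular expressions, can be sketched as follows. This is illustrative Java, not a proposed std.lang.d.lex API; all names are invented.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A regex-driven lexer emitting heterogeneous (typed) tokens, as opposed
// to strtok, which can only cut a string on delimiter characters.
public class TinyLexer {
    enum Kind { NUMBER, IDENT, OP }

    // One alternative per token class; the first match wins.
    static final Pattern RULES = Pattern.compile(
        "(?<NUMBER>\\d+)|(?<IDENT>[A-Za-z_]\\w*)|(?<OP>[+\\-*/=])|(?<SKIP>\\s+)");

    static List<String> lex(String input) {
        List<String> tokens = new ArrayList<>();
        Matcher m = RULES.matcher(input);
        while (m.find()) {
            if (m.group("SKIP") != null) continue; // drop whitespace
            Kind kind = m.group("NUMBER") != null ? Kind.NUMBER
                      : m.group("IDENT") != null ? Kind.IDENT : Kind.OP;
            tokens.add(kind + ":" + m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(lex("x1 = 42 + y"));
        // [IDENT:x1, OP:=, NUMBER:42, OP:+, IDENT:y]
    }
}
```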

--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-24 Thread Bruno Medeiros

On 20/11/2010 01:29, Jonathan M Davis wrote:

On Friday, November 19, 2010 15:17:35 Bruno Medeiros wrote:

On 19/11/2010 22:02, Jonathan M Davis wrote:

On Friday, November 19, 2010 13:53:12 Bruno Medeiros wrote:

On 19/11/2010 21:27, Jonathan M Davis wrote:

And by providing a lexer and a parser outside the standard library,
wouldn't it make it just as easy for those tools to be written? What's
the advantage of being in the standard library? I see only
disadvantages: to begin with it potentially increases the time that
Walter or other Phobos contributors may have to spend on it, even if
it's just reviewing patches or making sure the code works.


If nothing, else, it makes it easier to keep in line with dmd itself.
Since the dmd front end is LGPL, it's not possible to have a Boost port
of it (like the Phobos version will be) without Walter's consent. And
I'd be surprised if he did that for a third party library (though he
seems to be pretty open on a lot of that kind of stuff). Not to mention,
Walter and the core developers are _exactly_ the kind of people that you
want working on a lexer or parser of the language itself, because
they're the ones who work on it.

- Jonathan M Davis


Eh? That license argument doesn't make sense: if the lexer and parser
were to be based on DMD itself, then putting it in the standard library
is equivalent (in licensing terms) to licensing the lexer and parser
parts of DMD in Boost. More correctly, what I mean by equivalent, is
that there no reason why Walter would allow one thing and not the
other... (because on both cases he would have to issue that license)


It's very different to have D implementation of something - which is based on a
C++ version but definitely different in some respects - be under Boost and
generally available, and having the C++ implementation be under Boost -
particularly when the C++ version covers far more than just a lexer and parser.
Someone _could_ port the D code back to C++ and have that portion useable under
Boost, but that's a lot more work than just taking the C++ code and using it,
and it's only the portions of the compiler which were ported to D to which could
be re-used that way. And since the Boost code could be used in a commercial
product while the LGPL is more restricted, it could make a definite difference.

I'm not a licensing expert, and I'm not an expert on what Walter does and
doesn't want done with his code, but he put the compiler front end under the
LGPL, not Boost, and he's given his permission to have the lexer alone ported to
D and put under the Boost license in the standard library, which is very
different from putting the entire front end under Boost. I expect that the 
parser
will follow eventually, but even if it does, that's still not the entire front
end. So, there is a difference in licenses does have a real impact. And no one
can take the LGPL C++ code and port it to D - for the standard library or
otherwise - without Walter's permission, because its his copyright on the code.



There are some misunderstandings here. First, the DMD front-end is 
licensed under the GPL, not the LGPL.
Second, more importantly, it is actually also licensed under the 
Artistic license, a very permissive license. This is the basis for me 
stating that almost certainly Walter would not mind licensing the DMD 
parser and lexer under Boost, as it's actually not that different from 
the Artistic license.





Ideally, Phobos would be huge in a manner similar to how C# or Java's libraries
are huge. It will take time to get there, and we'll need more developers, but I


That point actually works in my favor. C# and Java's libraries are much 
bigger than Phobos, and yet they have no functionality for 
lexing/parsing their own languages (or any other for that matter)!



--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-24 Thread Bruno Medeiros

On 24/11/2010 16:19, Ellery Newcomer wrote:

On 11/24/2010 09:13 AM, Bruno Medeiros wrote:


I don't know about Ellery, as you can see in that thread he/she(?)
mentioned interest in working on that, but I don't know anything more.



Normally I go by 'it'.



I didn't mean to offend or anything, I was just unsure of that. To me 
Ellery seems like a female name (but that can be a bias due to English 
not being my first language, or some other cultural thing). On the other 
hand, I would be surprised if a person of the female variety would be 
that interested in D, to the point of contributing in such way.



Been pretty busy this semester, so I haven't been doing much.

But the bottom line is, yes I have working antlr grammars for D1 and D2
if you don't mind
1) they're slow
2) they're tied to a hacked-out version of the netbeans fork of ANTLR2
3) they're tied to some custom java code
4) I haven't been keeping the tree grammars so up to date

I've not released them for those reasons. Semester will be over in about
3 weeks, though, and I'll have time then.



Hum, that doesn't sound like it would be suitable for DDT, but I wasn't 
counting on it either.



As for me, I didn't work on that, nor did I plan to.
Nor am I planning to anytime soon, DDT can handle things with the
current parser for now (bugs can be fixed on the current code, perhaps
some limitations can be resolved by merging some more code from DMD), so
I'll likely work on other more important features before I go there. For
example, I'll likely work on debugger integration, and code completion
improvements before I would go on writing a new parser from scratch.
Plus, it gives more time to hopefully someone else work on it. :P

Unlike Walter, I can't write a D parser in a weekend... :) Not even on a
week, especially since I never done anything of this kind before.




It took me like 3 months to read his parser to figure out what was going
on.


Not 3 man-months for sure, right? (Man-month in the sense of someone 
working 40 hours per week during a month.)



--
Bruno Medeiros - Software Engineer


Re: Marketing D [ was Re: GCC 4.6 ]

2010-11-24 Thread Bruno Medeiros

On 31/10/2010 23:20, Walter Bright wrote:

retard wrote:

"Around 2005, interest in the Ruby language surged in tandem with Ruby
on Rails, a popular web application framework written in Ruby. Rails
is frequently credited with making Ruby "famous" and the association
is so strong that the two are sometimes conflated by programmers who
are new to Ruby.[9]" [1]


I have the second edition of "Programming Ruby", the definitive book on
Ruby. Ruby was first released in 1995, and the first edition in 2000,
and there never would have been a second edition if it wasn't famous.




Whoa, I didn't imagine Ruby to be that old; I didn't think it went back 
to before the 2000s...



--
Bruno Medeiros - Software Engineer


Re: GCC 4.6

2010-11-24 Thread Bruno Medeiros

On 31/10/2010 02:47, bearophile wrote:

Walter:


You post lists of features every day.


I hate wasting your time, so please ignore my posts you aren't interested in. I 
write those things because I like to think and discuss about new ways to 
explain semantics to computers. Most of those things are for discussion, not 
for inclusion in D2 (few of them may be included in D3, in the future).



And how does Walter (or anyone else, for that matter) determine whether 
they are interested in your posts (or anyone else's) without reading 
them first?
This is often the case for me regarding posts and threads that discuss 
changes or additions to language features. In these kinds of threads the 
title alone is very little indication of the quality or interest of the 
thread.



--
Bruno Medeiros - Software Engineer


Re: shorter foreach syntax - C++0x range-based for

2010-11-24 Thread Bruno Medeiros

On 01/11/2010 15:14, Andrei Alexandrescu wrote:

On 11/1/10 9:09 AM, Gary Whatmore wrote:

2) the syntax comes from Java. It
would be embarrasing to admit that Java did something right.

- G.W.


Only if one is an idiot.



Java did a lot of things right (be they novel or not) that are present
in D, such as reference semantics for classes, inner classes with outer
object access etc.

Andrei


A few more things:

Annotations: a very flexible and extensible system to add metadata to 
any kind of definition. Meta-data can be runtime, compile-time or both. 
D could take a lot of inspiration from Java's annotations.
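[Editorial note: a small sketch of the Java annotation mechanism referred to above, a runtime-retained annotation read back via reflection. The `Benchmark` annotation is invented for illustration.]

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// A user-defined annotation attaches metadata to a declaration; with
// RUNTIME retention it stays available to reflection after compilation.
public class AnnotationDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Benchmark {
        int iterations() default 1;
    }

    @Benchmark(iterations = 10)
    static void hotLoop() {}

    public static void main(String[] args) throws Exception {
        Method m = AnnotationDemo.class.getDeclaredMethod("hotLoop");
        Benchmark b = m.getAnnotation(Benchmark.class);
        System.out.println(b.iterations()); // reads the metadata back: 10
    }
}
```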


Integrated support for multi-threading: threads, monitors, 
mutexes/locks, synchronization, etc., are part of the language, 
including more advanced synchronization constructs such as condition 
variables. And also a well defined memory model! In fact D took direct 
inspiration from Java on this, did it not?
Also, very good, very well thought concurrency utils (the stuff done by 
Doug Lea).


Wildcards in generics: a very interesting mechanism for increasing type 
safety. Java wildcards were not done right in every aspect, but still 
they are very nice, and I don't know of any mainstream languages that 
have anything quite like that, or even close.
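[Editorial note: a minimal sketch of the wildcard mechanism mentioned above, use-site covariance with `? extends` and contravariance with `? super`. The method names are hypothetical.]

```java
import java.util.ArrayList;
import java.util.List;

public class Variance {
    // Covariant read position: accepts List<Integer>, List<Double>, etc.;
    // elements may be read as Number, but nothing can be added.
    static double sum(List<? extends Number> nums) {
        double total = 0;
        for (Number n : nums) total += n.doubleValue();
        return total;
    }

    // Contravariant write position: accepts List<Integer>, List<Number>,
    // List<Object>; Integers may be added, but reads only yield Object.
    static void fill(List<? super Integer> sink, int count) {
        for (int i = 0; i < count; i++) sink.add(i);
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        fill(ints, 3);                 // List<Integer> is a valid sink
        System.out.println(sum(ints)); // 0 + 1 + 2
    }
}
```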



--
Bruno Medeiros - Software Engineer


Re: GCC 4.6

2010-11-25 Thread Bruno Medeiros
std.algorithm.remove and principle of least astonishment"
A discussion about the semantics of char[] and wchar[] might lead to a 
future change in D, even if it is just allowing:

  ubyte[] str = "asdf";


Feel free to correct me; I don't claim to have made a complete or fully 
accurate assessment of all the posts and proposed changes that were 
discussed in the last 3-4 months or so.



--
Bruno Medeiros - Software Engineer


Re: Immutable fields

2010-11-25 Thread Bruno Medeiros

On 03/11/2010 10:42, Lars T. Kyllingstad wrote:

On Tue, 02 Nov 2010 20:54:35 -0400, bearophile wrote:


Is it correct for immutable struct fields to act like enum or static
const fields? (I don't think so, but I am wrong often):



This is bug 3449:

http://d.puremagic.com/issues/show_bug.cgi?id=3449

-Lars


Ouch, this is a very nasty bug indeed. :S

--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-25 Thread Bruno Medeiros

On 24/11/2010 18:48, Andrew Wiley wrote:

On 24/10/2010 00:46, bearophile wrote:

Walter:


As we all know, tool support is important for D's success.
Making tools easier
to build will help with that.

To that end, I think we need a lexer for the standard
library - std.lang.d.lex.
It would be helpful in writing color syntax highlighting
filters, pretty
printers, repl, doc generators, static analyzers, and even D
compilers.


This is a quite long talk by Steve Yegge that I've just seen
(linked from Reddit):
http://vimeo.com/16069687

I don't suggest you to see it all unless you are very interested
in that topic. But the most important thing it says is that,
given that big software companies use several languages, and
programmers often don't want to change their preferred IDE,
there is a problem: given N languages and M editors/IDEs, total
toolchain effort is N * M. That means N syntax highlighters, N
indenters, N refactoring suites, etc. Result: most languages
have bad toolchains and most IDEs manage very well only one or
very few languages.

So he has suggested the Grok project, that allows to reduce the
toolchain effort to N + M. Each language needs to have one of
each service: indenter, highlighter, name resolver, refactory,
etc. So each IDE may link (using a standard interface provided
by Grok) to those services and use them.

Today Grok is not available yet, and its development is at the
first stages, but after this talk I think that it may be
positive to add to Phobos not just the D lexer, but also other
things, even a bit higher level as an indenter, highlighter,
name resolver, refactory, etc. Even if they don't use the
standard universal interface used by Grok I think they may speed
up the development of the D toolchain.

Bye,
bearophile


 From watching this, I'm reminded that in the Scala world, the compiler
can be used in this way. The Eclipse plugin for Scala (and I assume the
Netbeans and IDEA plugins work similarly) is really just a wrapper
around the compiler because the compiler can be used as a library,
allowing a rich IDE with minimal effort because rather than implementing
parsing and semantic analysis, the IDE team can just query the
compiler's data structures.


Interesting, very wise of them to do that.
But it's not very surprising: Scala is close to the Java world, so the 
Scala people must have known how important it would be to have the best 
toolchain possible in order to compete (with Java, JDT, also Visual 
Studio, etc.).


--
Bruno Medeiros - Software Engineer


Re: null [re: spec#] (on dynamic typing)

2010-11-25 Thread Bruno Medeiros

On 08/11/2010 14:35, steveh wrote:

bearophile Wrote:


Simen kjaeraas:


Context-sensitive constructor disabling is a theoretical possibility, but
seems to me to conflict with D's other goals.


It's time to update those goals.


I studied the situation further. Now I've decided to leave D. I tried to cope 
with all overly complex type system quirks, but have had enough of it now. 
These two months with D truly opened my eyes. It means I won't touch C++ or 
Java either.

My next goal is to use an untyped (less types = better) language which 
concentrates on cool syntax. Intensive test suites guarantee safety and 
quality. An extreme version of TDD.



I often hear that argument from the new dynamic languages crowd: 
that less types = better, and that lots of tests guarantee safety and 
quality.


* lifts hands *
What they don't realize, is that static typing is just compile-time 
unit-testing. :)

* hands the spoon back to Neo *


Another solution is exploratory testing. I test stuff interactively using a 
REPL. These reports and guidelines can be written down in .doc word documents. 
I learnt this idea from Paul Graham and his new language.


Paul Graham. lol.

--
Bruno Medeiros - Software Engineer


Re: Spec#, nullables and more

2010-11-25 Thread Bruno Medeiros

On 05/11/2010 18:52, Daniel Gibson wrote:

Walter Bright schrieb:

bearophile wrote:

Walter Bright:


The $10 billion mistake was C's conversion of arrays to pointers when
passing to a function.

http://www.drdobbs.com/blog/archives/2009/12/cs_biggest_mist.html

Sadly, there's an ongoing failure to recognize this, as it is never
addressed in any of the revisions to the C or C++ standards,


I agree, that's a very bad problem, probably worse than null-related
bugs.


It's infinitely worse. Null pointers do not result in memory
corruption, buffer overflows, and security breaches.



Not entirely true: Null Pointer dereferences *have* been used for
security breaches, see for example: http://lwn.net/Articles/342330/
The problem is that one can mmap() to 0/NULL so it can be dereferenced
without causing a crash.

Of course this is also a problem of the OS, it shouldn't allow mmap()ing
to NULL in the first place (it's now forbidden by default on Linux and
FreeBSD afaik) - but some software (dosemu, wine) doesn't work without it.

Cheers,
- Daniel


I think Walter's point remains true: null pointer bugs are an order of 
magnitude less important, if not downright insignificant, with regards 
to security breaches.


I mean, from my understanding of that article, an NPE bug on its own is 
not enough to allow an exploit; other bugs/exploits need to be present 
as well (in that particular case, a straight flush of them, it seems). 
On the other hand, buffer overflow bugs nearly always make an exploit 
possible, correct?


--
Bruno Medeiros - Software Engineer


Re: Spec#, nullables and more

2010-11-25 Thread Bruno Medeiros

On 05/11/2010 18:52, Walter Bright wrote:


I think you misunderstand why checked exceptions are such a bad idea.
It's not just that they are inconvenient and annoying. They decrease
security by *hiding* bugs. That is the opposite of what you'd want in a
high security language.

http://www.mindview.net/Etc/Discussions/CheckedExceptions




Just to clarify: Checked Exceptions are not a source of bugs per se. 
What is a source of bugs is catch-hiding an exception temporarily and 
then forgetting to change the code later (that's the case Bruce mentions 
in the article).
But with discipline you can avoid this: just don't catch-hide 
exceptions. One never catch-hides an exception by mistake; it is always 
a conscious act (unlike other bugs such as off-by-ones, logic errors, etc.).
For example in Java I *always* wrap exceptions I don't care about in a 
RuntimeException, (using the adapter Bruce presented in that article, 
actually).
Is it annoying and/or unnecessary? Well, I'm not making a statement 
about that; just that it will only actually cause bugs if you are lazy.
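The wrap-instead-of-swallow idiom Bruno describes can be sketched in Java. This is a minimal illustration under my own naming (`WrapDemo`, `readConfig` are hypothetical), not Eckel's actual adapter class from the article; it uses `UncheckedIOException`, the JDK's own stock wrapper for this purpose:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WrapDemo {
    // Instead of declaring "throws IOException" all the way up the call
    // chain (or, worse, swallowing it in an empty catch block), wrap the
    // checked exception in an unchecked one. The failure still propagates
    // loudly, with the original exception preserved as the cause.
    static byte[] readConfig(String path) {
        try {
            return Files.readAllBytes(Paths.get(path));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Callers that do care can still catch the wrapper and inspect `getCause()`; nothing is hidden, which is precisely the discipline being argued for.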


--
Bruno Medeiros - Software Engineer


Re: Spec#, nullables and more

2010-11-26 Thread Bruno Medeiros

On 06/11/2010 19:57, Walter Bright wrote:

Adam Burton wrote:

I wouldn't consider that as the same thing. null represents the lack
of a value where as 25 is the wrong value. Based on that argument the
application should fail immediately on accessing the item with 25 (not
many moons later) in the same manner it does nulls, but it doesn't
because 25 is the wrong value where as null is a lack of value.

As with the array allocation example earlier you initialise the array
to nulls to represent the lack of value till your application
eventually gets values to assign to the array (which may still be
wrong values). As shown by my alternative example non-nulls allow you
to define that a variable/parameter wants a value and does not work
when it receives nothing. However in the case of the array because all
the information is not there at the point of creation it is valid for
the array items to represent nothing till you have something to put in
them.



I am having a real hard time explaining this. It is conceptually *the
same thing*, which is having an enforced subset of the values of a type.


Indeed, it is the same thing: to enforce a subset of the values of a 
type (and these are contracts, generally speaking).


So, Adam, if you specify as a contract that a certain variable/value 
should never hold 25, and at some point it is accessed and it does hold 
25, then there is a bug, and you should consider your application 
to have failed immediately (even if in practice it likely won't, well, 
not immediately at least).


The only difference in these cases is that wanting to assert something 
to be nonnull (and have the compiler check that) is *much* more common 
than any particular restriction on numeric values. But it is not a 
difference in nature or concept.
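The point is easy to see in code. Here is a minimal Java sketch (the `register` function and its names are hypothetical, with 25 borrowed from Adam's example) where a non-null contract and a numeric-subset contract are enforced by the very same mechanism:

```java
import java.util.Objects;

public class ContractDemo {
    // Both checks enforce "a subset of the values of a type".
    // Non-null is simply the overwhelmingly common special case.
    static int register(String name, int age) {
        Objects.requireNonNull(name, "name must not be null"); // subset: String minus {null}
        if (age == 25) {                                       // subset: int minus {25}
            throw new IllegalArgumentException("age must not be 25");
        }
        return name.length() + age;
    }
}
```

A non-null type system merely moves the first check to compile time because it is requested so often; conceptually it is the same contract as the second.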



--
Bruno Medeiros - Software Engineer


Re: [help]operator overloading with opEquals in a class

2010-11-26 Thread Bruno Medeiros

On 03/11/2010 20:33, Andrei Alexandrescu wrote:

On 11/3/10 7:07 AM, zhang wrote:

This code belown can be compiled with DMD 2.050. However, it throws an
error message:
core.exception.HiddenFuncError: Hidden method called for main.AClass

I'm not sure about whether it is a bug. Thanks for any help.

[snip]

That hidden functions are signaled during run time and not compile time
is a misunderstanding between design and implementation. There should
not be a HiddenFunc exception and the compiler should just refuse to
compile the program.


Andrei



Indeed! Is this part of the spec, or has Walter not agreed to it?
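For readers more familiar with Java than with D's opEquals: the closest Java analogue of this "hidden function" trap is an overload that looks like an override but is resolved against the static type. The names below (`AClass` echoes zhang's snippet; the field and demo class are made up) are illustrative only. Notably, javac accepts this silently, which is the kind of behavior Andrei argues a compiler should refuse:

```java
// An overload that masquerades as an override.
class AClass {
    final int id;
    AClass(int id) { this.id = id; }
    // Parameter type is AClass, not Object, so this OVERLOADS
    // Object.equals rather than overriding it.
    public boolean equals(AClass other) { return other != null && other.id == id; }
}

public class OverloadDemo {
    public static void main(String[] args) {
        Object a = new AClass(7), b = new AClass(7);
        // Resolved against the static type Object -> inherited
        // reference equality from Object.equals.
        System.out.println(a.equals(b));                         // false
        // Resolved against the static type AClass -> the overload.
        System.out.println(new AClass(7).equals(new AClass(7))); // true
    }
}
```

Which method runs depends on the static type at the call site, so the "hidden" inherited method gets called silently; D at least fails at run time, though a compile-time error would clearly be better still.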

--
Bruno Medeiros - Software Engineer


Re: Looking for champion - std.lang.d.lex

2010-11-26 Thread Bruno Medeiros

On 24/11/2010 21:12, Daniel Gibson wrote:

bearophile schrieb:

Bruno Medeiros:


On the other hand, I would be surprised if a person of the female
variety
would be that interested in D, to the point of contributing in such way.


In Python newsgroups I have seen few women, now and then, but in the D
newsgroup so far... not many. So far D seems a male thing. I don't
know why. At the university at the Computer Science course there are a
good enough number of female students (and few female teachers too).

Bye,
bearophile


At my university there are *very* few woman studying computer science.
Most women sitting in CS lectures here are studying maths and have to do
some basic CS lectures (I don't think they're the kind that would try D
voluntarily).
We have two female professors though.


It is well known that there is a big gender gap in CS with regard to 
students and professionals. Something like 5-20%, I guess, depending on 
the university, company, etc.


But the interesting thing (although also quite unfortunate) is that 
this gap takes an even greater dip downwards when you consider the 
communities of FOSS developers/contributors. It must be well below 1%!
(note that I'm not talking about *users* of FOSS software, but only 
people who actually contribute code, whether for FOSS projects, or for 
their own indie/toy projects)



--
Bruno Medeiros - Software Engineer

