Re: The review of std.hash package

2012-08-07 Thread Ary Manzana

On 8/7/12 14:39 , Dmitry Olshansky wrote:

Since the review queue has been mostly silent again I've decided to jump
in and manage the one that's ready to go :)

Today starts the review of the std.hash package by Johannes Pfau. We go
with the usual cycle of two weeks for review and one week for voting.
Thus the review ends on the 22nd of August, followed by voting that ends
on the 29th of August.

Description:

std.hash.hash is a new module for Phobos defining a uniform interface
for hashes and checksums. It also provides some useful helper functions
to deal with this new API.

The std.hash package also includes:


I think std.crypto is a better name for the package. At first I 
thought it contained an implementation of a hash table.


Also note these entries in wikipedia:

http://en.wikipedia.org/wiki/Hash_function
http://en.wikipedia.org/wiki/Cryptographic_hash_function

Your package provides the latter; not just any hash functions, but 
*crypto*graphic hash functions. :-)


(and yes, I know I'm just discussing the name here, but names *are* 
important)


Re: std.d.lexer requirements

2012-08-06 Thread Ary Manzana

On 8/1/12 21:10 , Walter Bright wrote:

8. Lexer should be configurable as to whether it should collect
information about comments and ddoc comments or not

9. Comments and ddoc comments should be attached to the next following
token, they should not themselves be tokens


I believe there should be an option to get comments as tokens. Otherwise, 
the attached comments should at least carry their source location...
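
To make the suggestion concrete, this is roughly what I have in mind (a
sketch; the names and fields are hypothetical, not a proposal for the
actual std.d.lexer API):

// Hypothetical sketch: comments attached to the next token, but still
// carrying their own source location so tools can find them.
struct SourceLocation { size_t line, column; }

struct Comment
{
    string text;
    bool isDdoc;
    SourceLocation location;   // where the comment itself starts
}

struct Token
{
    int type;                  // stand-in for the real token kind enum
    string text;
    SourceLocation location;
    Comment[] comments;        // comments preceding this token, if collected
}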


Re: Ddoc inheritance

2012-06-11 Thread Ary Manzana

On 6/12/12 8:59 , Alex Rønne Petersen wrote:

Hi,

Suppose I have:

abstract class A
{
/// My very long and helpful documentation.
void foo();
}

class B : A
{
override void foo()
{
}
}

Is there any way I can instruct Ddoc to copy the documentation from
A.foo to B.foo? Copying it over manually is a maintenance nightmare.

Would be neat if you could do something like ditto:

/// inherit
override void foo()
{
}



I believe no special comment is needed for this. If you override a 
method without commenting it, it should retain the original comment. If 
you do comment it, it should take the new comment.


A patch for this should be really easy to do. Maybe I'll do it after 
(and if) my previous patch gets accepted.


Re: clear() and UFCS

2012-05-26 Thread Ary Manzana

On 5/25/12 22:42 , Steven Schveighoffer wrote:

Finalize isn't right, and neither is dispose...


In Java it's finalize:

http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Object.html#finalize()

In Ruby it's define_finalizer:

http://www.ruby-doc.org/core-1.9.3/ObjectSpace.html#method-c-define_finalizer

Why not call it finalize, then? A bonus is that programmers coming from 
those languages will find the name more intuitive.


Re: dget - getting code from github

2012-05-24 Thread Ary Manzana

On 5/24/12 6:08 PM, Kevin Cox wrote:


On May 24, 2012 7:03 AM, Jacob Carlborg d...@me.com
mailto:d...@me.com wrote:
 
  Mac OS X doesn't have one out of the box, App Store doesn't count.
 
  --
  /Jacob Carlborg

IIRC there is one that a ton of people use, is it called macports?


That, but mostly Homebrew:

https://github.com/mxcl/homebrew

(but it's more for developers)



Re: Live code analysis in DCT

2012-05-24 Thread Ary Manzana

On 5/24/12 20:02 , dennis luehring wrote:

Am 24.05.2012 14:25, schrieb Roman D. Boiko:

http://d-coding.com/2012/05/24/performing-live-analysis-in-dct.html


for ideas look at the (dead) Descent IDE - nice examples of great features

http://www.youtube.com/user/asterite


Maybe this playlist is more appropriate :-P

http://www.youtube.com/playlist?list=PL1DFFABD495A9072E&feature=plcp



compile-time view

http://www.youtube.com/watch?v=oAhrFQVnsrY&list=UUPrzytX4vxdaQU4IXnFEzUg&index=2&feature=plcp


Re: dget - getting code from github

2012-05-23 Thread Ary Manzana

On 5/24/12 6:14 AM, Walter Bright wrote:

Currently, getting D code from github is a multistep process, that isn't
always obvious. I propose the creation of a dget program, which will:

dget https://github.com/D-Programming-Deimos/libevent

download the libevent code and install it in a subdirectory named
libevent. Of course, the url could also be:

dget github.com/D-Programming-Deimos/libevent

since https is assumed, or:

dget D-Programming-Deimos/libevent

since github is assumed. And since Deimos is a known library,

dget libevent

can also be hardwired into dget.

Anyone want to implement such? It ought to be fairly straightforward,
and will be a nice timesaver for a lot of people.


I think it's better to focus on a package manager that will make this 
'dget' program obsolete.


Re: GitHub for Windows

2012-05-22 Thread Ary Manzana

On 5/22/12 3:41 PM, Jacob Carlborg wrote:

On 2012-05-22 09:50, Nick Sabalausky wrote:


See, that must be the problem, I only like crazy insane stuff ;)

I actually did spend about a year using an OSX machine as my primary
system,
and was even impressed *at first*. Then I grew to hate it (It's now
sitting,
totally dead, six feet behind me).


When was that, around which version of Mac OS X?


At this point, IMO, the only thing OSX
really has going for it is the Unix underpinnings, and for that I'd
just as
soon use Linux (as a bonus, hardware costs are much lower in
Linux-land). I
know people have said OSX is the only Unix with a good GUI, and I am
largely
a GUI guy, but I actually prefer LXDE, XFCE, GNOME and KDE3 over
Finder/Dock/etc. Not that I'm a huge fan of any of those, but
whatever. The
Linux ones get in my way less, piss me off less, etc.


I feel just the opposite.



Me too.

I feel like OSX (and Mac's hardware) got almost everything right. It 
makes you more productive, even if you are a developer.


Re: dpj for Windows

2012-05-21 Thread Ary Manzana

On 5/20/12 10:37 PM, dnewbie wrote:

On Sunday, 20 May 2012 at 03:53:43 UTC, Nick Sabalausky wrote:

dnewbie r...@myopera.com wrote in message
news:qufvdhexcdzabuzqr...@forum.dlang.org...

dpj is a mini-ide for the D programming language.
http://my.opera.com/run3/blog/2012/05/20/dpj



That's a good start! Not bad. Is it written in D?



It started as a D project, then I moved it to C.


o_O

Why?


Re: [OT] Windows users: Are you happy with git?

2012-05-18 Thread Ary Manzana

Are you happy with Windows? :-P


Re: [OT] Windows users: Are you happy with git?

2012-05-18 Thread Ary Manzana

On 5/18/12 9:03 PM, Jacob Carlborg wrote:

On 2012-05-18 16:01, Manu wrote:

On 18 May 2012 16:41, Alex Rønne Petersen a...@lycus.org
mailto:a...@lycus.org wrote:

But to be fair, most enterprises/businesses use Linux for servers,
not for desktops.


I don't code on a server... Do you? :)


Why use source code management and deploys when you can code directly on
the production server :)



Where's the like button here? :-P


Re: Problem using Interfce

2012-05-16 Thread Ary Manzana

On 5/16/12 9:24 AM, Stephen Jones wrote:

Ary: I seem to remember playing around with a Simpsons extending program
in Java that did this; you could throw all the different Simpsons into a
single Array because they extended Simpson, and you could walk through
the array and each would call their own name. I kind of purposely left
the language vague in case I was mistaken.


But in Java you'd also need to cast the Simpson to a specific class if 
it contains the fields you are interested in.


Re: Problem using Interfce

2012-05-15 Thread Ary Manzana

On 5/14/12 6:08 PM, Stephen Jones wrote:

I am used to languages where the w under consideration in any
iteration would be known to have been initialized as a Button or
Cursor, etc, and the value of vertStart would be found without
error.


What are the names of those languages?


Re: Visual D 0.3.32 maintenance release

2012-05-13 Thread Ary Manzana

On 5/13/12 7:31 PM, Rainer Schuetze wrote:

resending due to NNTP error, sorry if it causes duplicates.

On 5/13/2012 2:01 PM, Jonathan M Davis wrote:

On Sunday, May 13, 2012 13:48:39 Rainer Schuetze wrote:

On 5/11/2012 9:49 PM, Walter Bright wrote:

On 5/1/2012 9:46 AM, Rainer Schuetze wrote:

The Visual D installer can be downloaded from its website at
http://www.dsource.org/projects/visuald


Can you please move it to github?


I considered that as well recently, but I'm not yet convinced.

I see the increase in contributions to dmd after the move to github, but
my own experience with it has not been too positive: making patches for
dmd is rather time consuming, I always have to struggle to get the
simple stuff done (while it was just adding a diff to the bugzilla in
the subversion times). As a result, the number of patches that I have
provided has dropped considerably. My feeling is that git allows a lot
of complex things at the cost of making standard operations much more
complicated than necessary.

Using git/github is probably less work for you compared to svn, but this
also depends on a rather large infrastructure like the auto tester. I'm
not sure it does actually help for a project with very few contributors.

There haven't been a lot of community contributions to Visual D so far.
To everybody interested: Would a move to github change that?


You actually find patches to be easier than using github? That strikes
me as
odd. I've always found patches to be a pain to deal with and git and
github
have been really easy overall. You just make your changes on another
branch,
push them up to github, and then create a pull request. If you're the one
merging in the changes, it's as easy as pushing the merge button on the
pull request, and it's in the main repository.

Now, I don't deal with Visual D at all (I'm always on Linux, if
nothing else),
so I wouldn't be a contributor, and I have no idea if very many more
people
would contribute if it were on github, but I'd definitely expect it
to be
easier for people to contribute if it were up on github than it would
be for
them to create patches and send those to you.

- Jonathan M Davis


The problem is that I need/want to use a branch of dmd that incorporates
a number of patches, and that is where I start making additional
changes. To send a pull request, I have to create a new branch, copy the
changes into it, push it and make the pull request. I have created a
batch to do that, but every other pull request something breaks and I
start cursing...

With the workflow of bugzilla/svn it was just copy and pasting the diff
into the bug report. I understand it is easier on Walter's side, though.


But where did you get the diff from? I'm sure you checked out the 
project and made the changes on it. If that's the case, then it's the 
same as forking and cloning.


I *do* expect contributions to appear in Visual D, since it's so easy to 
contribute on GitHub, and it's standardized: people know how to do it: 
fork, work, make a pull request (as opposed to making a patch, sending 
it... mmm... is that the author's email? I hope it works. And I hope 
he checks his email and mine doesn't go to the spam folder! Um, maybe I 
should post in the forums... but does he read them? Ah, maybe I'll 
leave the patch for another day).


Re: DCT: D compiler as a collection of libraries

2012-05-12 Thread Ary Manzana

On 5/12/12 12:17 PM, Roman D. Boiko wrote:

On Saturday, 12 May 2012 at 03:32:20 UTC, Ary Manzana wrote:

As deadalnix says, I think you are over-complicating things.

I mean, to store the column and line information it's just:

if (isNewLine(c)) {
line++;
column = 0;
} else {
column++;
}

(I think you need to add that to the SourceRange class. Then copy line
and column to token on the Lexer#lex() method)

Do you really think it's that costly in terms of performance?

I think you are wasting much more memory and performance by storing
all the tokens in the lexer.

Imagine I want to implement a simple syntax highlighter: just
highlight keywords. How can I tell DCT to *not* store all tokens
because I need each one in turn? And since I'll be highlighting in the
editor I will need column and line information. That means I'll have
to do that O(log(n)) operation for every token.

So you see, for the simplest use case of a lexer the performance of
DCT is awful.

Now imagine I want to build an AST. Again, I consume the tokens one by
one, probably peeking in some cases. If I want to store line and
column information I just copy them to the AST. You say the tokens are
discarded but their data is not, and that's why their data is usually
copied.


Would it be possible for you to fork my code and tweak it for
comparison? You will definitely discover more problems this way, and
such feedback would really help me. That doesn't seem like an
unreasonable amount of work.

Any other volunteers for this?


Sure, I'll do it and provide some benchmarks. Thanks for creating the issue.


Re: D dropped in favour of C# for PSP emulator

2012-05-12 Thread Ary Manzana

On 5/12/12 12:14 PM, Nick Sabalausky wrote:

DUE FOR TOMORROW?!?

That's NOT FUCKING ENGLISH! There IS NO "due for"!! Period!


Hilarious :-)


Re: D dropped in favour of C# for PSP emulator

2012-05-12 Thread Ary Manzana

On 5/13/12 4:13 AM, Andrei Alexandrescu wrote:

On 5/11/12 10:38 PM, Ary Manzana wrote:

Add a binarySearch(range, object) method that does all of that?

I mean, I don't want to write more than a single line of code to do a
binarySearch...


assumeSorted(range).contains(object)

is still one line, safer, and IMHO more self-explanatory.

Andrei


Ok. When more people start saying that they can't find a binarySearch 
method... will you change your mind? :-)


(but let's first wait until that moment comes)


Re: DCT: D compiler as a collection of libraries

2012-05-11 Thread Ary Manzana

On 5/11/12 4:22 PM, Roman D. Boiko wrote:

What about line and column information?

Indices of the first code unit of each line are stored inside the lexer and
a function will compute Location (line number, column number, file
specification) for any index. This way the size of a Token instance is
reduced to the minimum. It is assumed that Location can be computed on
demand, and is not needed frequently. So the column is calculated by a
reverse walk to the previous end of line, etc. Locations will be possible
to calculate both taking into account special token sequences (e.g.,
#line 3 "ab/c.d"), or discarding them.


But then how do you efficiently compute line numbers (if a reverse walk 
is at all efficient)?


Usually tokens are used and discarded. I mean, somebody that uses the 
lexer asks for tokens, processes them (for example to highlight code or to 
build an AST) and then discards them. So you can reuse the same Token 
instance. If you want to peek at the next token, or have a buffer of tokens, 
you can use a freelist ( http://dlang.org/memory.html#freelists , one of 
the many nice things I learned by looking at DMD's source code ).
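
For reference, the freelist idea boils down to something like this (a
simplified sketch in the spirit of the page linked above, not DMD's
actual code):

// Simplified freelist sketch: reuse Token instances instead of
// allocating a new one per token.
class Token
{
    Token next;          // chains free instances while in the pool
    string text;
    size_t line, column;
}

struct TokenFreeList
{
    private Token head;

    Token acquire()
    {
        if (head is null)
            return new Token;   // pool empty: fall back to allocation
        auto t = head;
        head = t.next;
        t.next = null;
        return t;
    }

    void release(Token t)
    {
        t.next = head;          // push back for later reuse
        head = t;
    }
}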


So adding line and column information is not like wasting a lot of 
memory: just 8 bytes more for each token in the freelist.


Re: DCT: D compiler as a collection of libraries

2012-05-11 Thread Ary Manzana

On 5/11/12 10:14 PM, Roman D. Boiko wrote:

On Friday, 11 May 2012 at 15:05:19 UTC, deadalnix wrote:

On 11/05/2012 16:02, Roman D. Boiko wrote:

Technically, I'm trading N*O(1) operations needed to track line and
column while consuming each character for M*O(log(n)) operations when
calculating them on demand. N = number of characters, n is the number of
lines and M is the number of actual usages of Location. My assumption is
that M << N (M is much smaller than N).


N can easily be number of tokens.

Yes, if you are looking for the token by its index, not for location.
E.g., for autocompletion it is needed to find the last token before
cursor location. But that is not related to location search.

Also please note that I oversimplified formula for complexity. It also
has other components. My reply was just an additional minor comment.
Motivation is design, not performance.

One additional reason for such separation: with my approach it is
possible to calculate Location either taking into account information from
special token sequences, or ignoring it. How would you do that for eager
calculation? Calculate only one category? Or track and store both?

Unnecessary complexity will eventually find a way to shoot your leg :)
Real-world usage (at least, anticipated scenarios) should be the basis
for designing. Sometimes this contradicts intuition and requires you to
look at the problem from a different side.


As deadalnix says, I think you are over-complicating things.

I mean, to store the column and line information it's just:

if (isNewLine(c)) {
  line++;
  column = 0;
} else {
  column++;
}

(I think you need to add that to the SourceRange class. Then copy line 
and column to token on the Lexer#lex() method)


Do you really think it's that costly in terms of performance?

I think you are wasting much more memory and performance by storing all 
the tokens in the lexer.


Imagine I want to implement a simple syntax highlighter: just highlight 
keywords. How can I tell DCT to *not* store all tokens because I need 
each one in turn? And since I'll be highlighting in the editor I will 
need column and line information. That means I'll have to do that 
O(log(n)) operation for every token.
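
For concreteness, that on-demand lookup would look roughly like this (a
sketch assuming the lexer keeps a sorted array of line-start indices; the
names are mine, not DCT's):

import std.range : assumeSorted;

struct Location { size_t line, column; }   // 1-based

// O(log n) recovery of line/column from an index, given the indices
// of the first code unit of each line.
Location locate(const(size_t)[] lineStarts, size_t index)
{
    auto line = assumeSorted(lineStarts).lowerBound(index + 1).length;
    auto column = index - lineStarts[line - 1] + 1;
    return Location(line, column);
}

unittest
{
    // "ab\ncd": lines start at indices 0 and 3, so 'd' (index 4) is line 2, column 2
    assert(locate([size_t(0), 3], 4) == Location(2, 2));
}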


So you see, for the simplest use case of a lexer the performance of DCT 
is awful.


Now imagine I want to build an AST. Again, I consume the tokens one by 
one, probably peeking in some cases. If I want to store line and column 
information I just copy them to the AST. You say the tokens are 
discarded but their data is not, and that's why their data is usually 
copied.


Re: D dropped in favour of C# for PSP emulator

2012-05-11 Thread Ary Manzana

On 5/12/12 3:40 AM, Mehrdad wrote:

On Friday, 11 May 2012 at 20:20:36 UTC, Jonathan M Davis wrote:

That's definitely an example of something that depends on your
background. std.algorithm.any does _exactly_ what it would do in a
functional language.


Well, I know some FP (Scheme/newLISP) but they don't have an any
function. I don't know about F#, but my guess would be that it would do
the same thing as C# would (for obvious reasons).

Are you thinking of a language in particular that has D's behavior? Or
is this just a guess?


Ruby has any?:

http://ruby-doc.org/core-1.9.3/Enumerable.html#method-i-any-3F

And it's the best of both worlds: given a block (a lambda) it works 
like D. Give no lambda and it just returns true if the enumerable has 
any elements.


Ruby wins again. :-P


Re: D dropped in favour of C# for PSP emulator

2012-05-11 Thread Ary Manzana

On 5/12/12 1:18 AM, Andrei Alexandrescu wrote:

On 5/11/12 1:10 PM, Mehrdad wrote:

On Friday, 11 May 2012 at 18:05:58 UTC, Mehrdad wrote:

and the solution indeed was NOT something I would've found by myself
without spending hours on it.



Just a note: I believe I *had* seen SortedRange in the docs, but I'd
never realized there's something called assumeSorted() that I was
supposed to call... so I was searching up and down for how to search an
*arbitrary* container, not how to search something which was already
pre-sorted for me.
(In retrospect, I probably should've just coded binary search myself...)
It's very counterintuitive to have to make a new object (or struct) just
to do binary search on an array...


At the same time it clarifies, documents, and statistically verifies
that you pass a sorted range. Also, D's binary search works with
non-array ranges, but C#'s works only with arrays (which it assumes
sorted only by convention).

I think we copiously made the right call there.


Andrei


Add a binarySearch(range, object) method that does all of that?

I mean, I don't want to write more than a single line of code to do a 
binarySearch...
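
Something like this would be enough (a sketch on top of
std.range.assumeSorted; the name binarySearch is just a suggestion, it
doesn't exist in Phobos):

import std.range : assumeSorted;

// Hypothetical one-call helper: wraps assumeSorted + contains.
bool binarySearch(Range, T)(Range range, T value)
{
    return assumeSorted(range).contains(value);
}

unittest
{
    assert(binarySearch([1, 3, 5, 7], 5));
    assert(!binarySearch([1, 3, 5, 7], 4));
}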


Re: D dropped in favour of C# for PSP emulator

2012-05-11 Thread Ary Manzana

On 5/12/12 3:24 AM, Walter Bright wrote:

x + 3 = 5

on the board and asked people to solve for x, they would fail
completely. But, if she wrote:

( ) + 3 = 5

and asked them to fill in the circle with the number that makes it
work, they all went "of course, it's 2!"


When I read "solve *for* x" I don't understand it. What do you mean "for 
x"? Like doing a favor to x?


In Spanish we say "hallar el valor de x", which means "find the value of 
x"...


I'm not a native English speaker so I don't know if "solve for x" sounds 
natural if you haven't done equations before... but to me it doesn't. :-P


Re: Growing pains

2012-05-07 Thread Ary Manzana

On 5/7/12 10:25 PM, Robert Clipsham wrote:

On 03/05/2012 15:50, Andrei Alexandrescu wrote:

Just letting you all know we're working on the frustrating and
increasingly frequent "Load at xx.xx, try again later" errors when
reading this forum through NNTP. They are caused by a significant growth
spurt in newsgroup readership that occurred in recent times. We are
working with our provider to fix these issues.

Thanks,

Andrei


I've gotta say... These have been a lot more frequent for me since you
posted this message!


For me it stopped shortly after this message, but then it started 
happening one or two days ago.




Re: bootDoc - advanced DDoc framework using Twitter's Bootstrap

2012-05-03 Thread Ary Manzana

On 5/3/12 1:23 PM, Jakob Ovrum wrote:

On Thursday, 3 May 2012 at 05:44:47 UTC, Ary Manzana wrote:

On 5/3/12 1:26 AM, Jakob Ovrum wrote:

This project is finally published and documented, so here's an
announcement.

https://github.com/JakobOvrum/bootDoc

bootDoc is a configurable DDoc theme, with advanced JavaScript features
like a package tree and module tree, as well as fully qualified symbol
anchors. The style itself and some of the components come from Twitter's
Bootstrap framework.

Demonstration of Phobos documentation using bootDoc

http://jakobovrum.github.com/bootdoc-phobos/


Very nice!

But why are the symbols inside std.algorithm, for instance, not sorted?

http://jakobovrum.github.com/bootdoc-phobos/std.algorithm.html

(they are kind of sorted by chunks...)


The symbols in the symbol tree appear in the order the symbols appear in
the documentation, which is the order of declaration in the original
source (DMD does it this way). I think it would be a little confusing if
the symbol tree was alphabetically sorted, while the main documentation
was in order of declaration.

It is possible to rearrange everything with JavaScript of course, but...
I think this might be going a little bit too far.

What do you think?


I don't think the main documentation order is right in the first place. 
If a module provides many functions, like std.algorithm, I don't see how 
there could possibly be an intended order, as if some were "more likely 
to be used".


In any case, if I want to quickly find a function, for example "remove" 
or "insert" or something I think might have the name I'm looking for, 
alphabetical order is the best way to go.





Now if it only had cross references... :-P


If I understand you correctly, any kind of automatic cross-referencing
would need post-processing of DMD's generated output. I am considering
such post-processing, but it would massively change the project (a lot
less would require JavaScript), and completely bind the project to the
included generator tool.

I think the tool needs more trial-by-fire testing to determine whether
it's good enough to be mandatory.


Oh, I just said that because I have a pull request waiting for that 
feature to be incorporated in DMD... but I don't think it'll happen...


Re: bootDoc - advanced DDoc framework using Twitter's Bootstrap

2012-05-03 Thread Ary Manzana

On 5/3/12 2:10 PM, Jacob Carlborg wrote:

On 2012-05-03 08:09, Jakob Ovrum wrote:


I am considering putting the module tree and symbol tree in tabs instead
of below each other.


I think that would be a good idea.


I'm not sure. I'd like the symbols to be under the same tree.

With tabs you'd have to click twice to go from one place to another.



Re: bootDoc - advanced DDoc framework using Twitter's Bootstrap

2012-05-03 Thread Ary Manzana

On 5/3/12 6:41 PM, Jacob Carlborg wrote:

On 2012-05-03 10:09, Ary Manzana wrote:


I'm not sure. I'd like the symbols to be under the same tree.

With tabs you'd have to click twice to go from one place to another.



I didn't even know the symbols were there until I scrolled down.



The same happened to me.

What I meant with "under the same tree" is

+ std
  + algorithm
* map
* reduce
* ...


Re: string find and replace

2012-05-03 Thread Ary Manzana

On 5/3/12 9:30 PM, Iain wrote:

On Thursday, 3 May 2012 at 14:22:57 UTC, Iain wrote:

Forgive me if I am missing something obvious, but is there a simple
option for finding all instances of a particular character in a string
or char[] and replacing them with another character?

I can do this with std.regex, but it seems overkill, when all I want
is the equivalent of PHP's str_replace() function.

Many thanks!


Apologies, after half an hour searching, I post, then five minutes later
figure it out.

myString = replace(myString, to, from); // from std.array


Note that you can also do:

myString = myString.replace(to, from)

I'd point you to the reference on the official page (UFCS: unified 
function call syntax), but I can't find it...


Re: string find and replace

2012-05-03 Thread Ary Manzana

On 5/3/12 11:01 PM, Ary Manzana wrote:

On 5/3/12 9:30 PM, Iain wrote:

On Thursday, 3 May 2012 at 14:22:57 UTC, Iain wrote:

Forgive me if I am missing something obvious, but is there a simple
option for finding all instances of a particular character in a string
or char[] and replacing them with another character?

I can do this with std.regex, but it seems overkill, when all I want
is the equivalent of PHP's str_replace() function.

Many thanks!


Apologies, after half an hour searching, I post, then five minutes later
figure it out.

myString = replace(myString, to, from); // from std.array


Note that you can also do:

myString = myString.replace(to, from)

I'd point you to the reference on the official page (UFCS: unified
function call syntax), but I can't find it...


and it should be replace(from, to)
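
Putting the two posts together, a minimal working example (std.array.replace
takes the subject first, then the text to search for, then the replacement):

import std.array : replace;
import std.stdio : writeln;

void main()
{
    string s = "foo-bar-baz";
    // Free-function call and UFCS call are equivalent:
    assert(replace(s, "-", " ") == s.replace("-", " "));
    writeln(s.replace("-", " "));   // prints "foo bar baz"
}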


Re: bootDoc - advanced DDoc framework using Twitter's Bootstrap

2012-05-02 Thread Ary Manzana

On 5/3/12 1:26 AM, Jakob Ovrum wrote:

This project is finally published and documented, so here's an
announcement.

https://github.com/JakobOvrum/bootDoc

bootDoc is a configurable DDoc theme, with advanced JavaScript features
like a package tree and module tree, as well as fully qualified symbol
anchors. The style itself and some of the components come from Twitter's
Bootstrap framework.

Demonstration of Phobos documentation using bootDoc

http://jakobovrum.github.com/bootdoc-phobos/


Very nice!

But why are the symbols inside std.algorithm, for instance, not sorted?

http://jakobovrum.github.com/bootdoc-phobos/std.algorithm.html

(they are kind of sorted by chunks...)

Now if it only had cross references... :-P


Re: Does Dis thread have too many replies?

2012-04-30 Thread Ary Manzana

On 5/1/12 2:24 AM, Walter Bright wrote:

On 4/28/2012 11:47 AM, Walter Bright wrote:

Andrei and I had a fun discussion last night about this question. The
idea was
which features in D are redundant and/or do not add significant value?

A couple already agreed upon ones are typedef and the cfloat, cdouble
and creal
types.

What's your list?


This certainly seems to have become the biggest thread ever!




Re: type conversions

2012-04-30 Thread Ary Manzana

On 4/30/12 8:08 AM, Jonathan M Davis wrote:

On Monday, April 30, 2012 01:42:38 WhatMeWorry wrote:

I'm trying to get my head around D's type conversion. What is the
best way to convert a string to a char array? Or I should say is
this the best way?

string s = "Hello There";
char[] c;

c = s.dup;


dup will return a mutable copy of an array. idup will return an immutable copy
of an array. They will both always copy. If you want to convert without having
to make a copy if the array is of the constancy that you want already (e.g. if
a templated function is templated on string type, and it could be any
constancy of char[]), then use std.conv.to.

auto c = to!(char[])(str);

If str was already char[], then it will just be returned, whereas if it's
immutable(char)[], then it would dup it and return that.


Also, what is the best way to explicitly convert a string to an
int?  I've been looking at Library Reference (Phobos) but I'm
stuck.


Use std.conv.to:

auto i = to!int("1234");

std.conv.to is what you use for pretty much any conversion.

- Jonathan M Davis


Can the documentation of std.conv be fixed?

http://dlang.org/phobos/std_conv.html#to

I mean, all the toImpl methods are documented, but "to" itself clearly 
says "Client code normally calls to!TargetType(value)" (and not some 
variant of toImpl). I think all the documentation should be on "to". As it 
stands, it reads as if you already know what "to" does... but people read 
documentation precisely because they don't know what it does.


There's also no need to document all the different parse methods in 
different places. Just one place is enough and simpler to read.


Re: mysql binding/wrapper?

2012-04-30 Thread Ary Manzana

On 4/30/12 11:57 PM, simendsjo wrote:
On 4/29/12 11:48 PM, dnewbie wrote:

On Saturday, 28 April 2012 at 15:30:13 UTC, simendsjo wrote:
stuff/blob/master/mysql.d

http://my.opera.com/run3/blog/2012/03/13/d-mysql


I use it in a bank account application. It works.



On Mon, 30 Apr 2012 18:19:29 +0200, James Oliphant
jollie.ro...@gmail.com wrote:


Actually, it looks like the vibe folks are using my fork of Steve Teale's
mysqln. I had hoped to contact Steve first, so that these changes existed
in one place.
https://github.com/JollieRoger
All of the changes exist in individual branches off the master branch.
Git
will merge these into one file fuzzily.
What they are is as follows:
seperatemain - split main() into its own file (app.d in vibe).
seperatemainwithport - main() using branch addporttoconnection.
addporttoconnection - add no standard port selection to Connection.
fixfordmd2058 - cosmetic changes to work with dmd-2.058.
fixresultset - allow the return of an empty resultset. When
iterating schema, test had no tables and would crash.
fixconnection - would only connect to localhost in Steve's code.
I have other changes that I haven't pushed up yet relating to NUMERIC and
null variants with a more detailed main.d.
Vibe.d looks interesting, I hope these fixes help.


Yes, your patches have been merged. Of course it would be best to have
everything database complete already, but I'm glad it's been merged
as-is for now - it might take a long time (and has already) before a
generic database interface is completed.


Looking at the code of mysql.d I see a big switch with many cases like 
"case 0x01: // TINYINT". But then there's the SQLType enum with those 
constants. Why are the enum values not used in the cases (and also in 
other parts of the code)?
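
For illustration, this is the kind of cleanup I mean (a sketch; the member
names and values are assumptions based on the usual MySQL protocol type
codes, not necessarily what mysql.d defines):

// Hypothetical sketch: switching on named enum members instead of raw codes.
enum SQLType : ubyte { TINYINT = 0x01, SMALLINT = 0x02, INT = 0x03 }

string columnTypeName(ubyte code)
{
    switch (cast(SQLType) code)
    {
        case SQLType.TINYINT:  return "TINYINT";
        case SQLType.SMALLINT: return "SMALLINT";
        case SQLType.INT:      return "INT";
        default:               return "unknown";
    }
}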


Frustration [Was: mysql binding/wrapper?]

2012-04-30 Thread Ary Manzana

On 5/1/12 2:44 AM, simendsjo wrote:

On Mon, 30 Apr 2012 20:55:45 +0200, Ary Manzana a...@esperanto.org.ar
wrote:

Looking at the code of mysql.d I see a big switch with many cases like
"case 0x01: // TINYINT". But then there's the SQLType enum with those
constants. Why are the enum values not used in the cases (and also in
other parts of the code)?


It's not finished: http://www.britseyeview.com/software/mysqln/


Ah, I see.

The last commit is 6 months old.

I tried to compile mysql.d

---
 dmd -c mysql.d
/usr/share/dmd/src/phobos/std/exception.d(492): Error: constructor 
mysql.MySQLException.this (string msg, string file, uint line) is not 
callable using argument types (string,string,ulong)
/usr/share/dmd/src/phobos/std/exception.d(492): Error: cannot implicitly 
convert expression (line) of type ulong to uint
mysql.d(105): Error: template instance 
std.exception.enforceEx!(MySQLException).enforceEx!(ulong) error 
instantiating

(...)
(and more...)
---

It's sad. I always want to give D a chance. And when I do I always bump 
into errors and inconveniences.


I thought, maybe the project is 6 months old, it's not compatible 
anymore with the current DMD (but my code really doesn't break at all 
with new Ruby versions, for example). I thought of trying to fix the 
error. Apparently I need to compile it with -m32 so that lengths of 
arrays are uint instead of ulong.


---
 dmd -c -m32 mysql.d
mysql.d(4185): Error: cannot cast r.opIndex(cast(uint)j).get!(ulong)
mysql.d(4201): Error: cannot cast r.opIndex(cast(uint)j).get!(ulong)
mysql.d(4204): Error: cannot cast r.opIndex(cast(uint)j).get!(ulong)
---

(What does "cannot cast" mean? Give me the reason, please...)

Or maybe instead of the flag the code is wrong and instead of uint it 
needs to be size_t. But I still get errors.


Every time I want to start coding in D, or helping some project, I 
stumble into all kind of troubles.


But I wonder... is this case in particular D's fault or the library's 
fault? (If the answer is "the project is 6 months old, of course it 
won't compile", then it's D's fault.)


Re: Introducing vibe.d!

2012-04-27 Thread Ary Manzana

On 4/27/12 4:46 AM, Sönke Ludwig wrote:

During the last few months, we have been working on a new
framework for general I/O and especially for building
extremely fast web apps. It combines asynchronous I/O with
core.thread's great fibers to build a convenient, blocking
API which can handle insane amounts of connections due to
the low memory and computational overhead.

Some of its key features are:


Impressive. The website also looks really nice, and it's very fast.

I'll definitely play with it and slowly try to make it into my 
workplace, hehe.


Re: Introducing vibe.d!

2012-04-27 Thread Ary Manzana

On 4/27/12 2:50 PM, Brad Anderson wrote:

On Thursday, 26 April 2012 at 20:46:41 UTC, Sönke Ludwig wrote:

During the last few months, we have been working on a new
framework for general I/O and especially for building
extremely fast web apps. It combines asynchronous I/O with
core.thread's great fibers to build a convenient, blocking
API which can handle insane amounts of connections due to
the low memory and computational overhead.

Some of its key features are:

- Very fast but no endless callback chains as in node.js
and similar frameworks
- Concise API that tries to be as efficient and intuitive
as possible
- Built-in HTTP server and client with support for HTTPS,
chunked and compressed transfers, keep-alive connections,
Apache-style logging, a reverse-proxy, url routing and
more
- Jade based HTML/XML template system with compile-time
code generation for the fastest dynamic page generation
times possible
- Built-in support for MongoDB and Redis databases
- WebSocket support
- Natural Json and Bson handling
- A package manager for seamless use of extension libraries

See http://vibed.org/ for more information and some example
applications (there are some things in the works such as an
etherpad clone and an NNTP server).

vibe.d is in a working state and enters its first beta-phase
now to stabilize the current feature set. After that, a
small list of additional features is planned before the 1.0
release.

The framework can be downloaded or GIT cloned from
http://vibed.org/ and is distributed under the terms of the
MIT license.

Note that the website including the blog is fully written
in vibe and provides the first stress test for the
implementation.

Regards,
Sönke


I had to copy the included .lib files into bin in order to build the
examples but so far, so good. This is awesome.


How did you install it? I can't find the install.sh script anywhere...


Re: Introducing vibe.d!

2012-04-27 Thread Ary Manzana

On 4/28/12 8:12 AM, Ary Manzana wrote:

On 4/27/12 4:46 AM, Sönke Ludwig wrote:

During the last few months, we have been working on a new
framework for general I/O and especially for building
extremely fast web apps. It combines asynchronous I/O with
core.thread's great fibers to build a convenient, blocking
API which can handle insane amounts of connections due to
the low memory and computational overhead.

Some of its key features are:


Impressive. The website also looks really nice, and it's very fast.

I'll definitely play with it and slowly try to make it into my
workplace, hehe.


How to use it?

 ./bin/vibe
usage: dirname path
sh: /vpm.d.deps: Permission denied
Failed: 'dmd' '-g' '-w' '-property' '-I/../source' '-L-levent' 
'-L-levent_openssl' '-L-lssl' '-L-lcrypto' '-Jviews' '-Isource' '-v' 
'-o-' '/vpm.d' '-I/'

Error: cannot read file source/app.d
Failed: 'dmd' '-g' '-w' '-property' '-I/../source' '-L-levent' 
'-L-levent_openssl' '-L-lssl' '-L-lcrypto' '-Jviews' '-Isource' '-v' 
'-o-' 'source/app.d' '-Isource'


I also can't find the install.sh script...


Re: generic indexOf() for arrays ?

2012-04-27 Thread Ary Manzana

On 4/28/12 3:51 AM, Brad Anderson wrote:

On Friday, 27 April 2012 at 19:49:33 UTC, M.Gore wrote:

I'd like to know if there's a generic function over arrays to find the
index of a specific element therein, something like, say:

int indexOf(S) (in S[] arr, S elem);

which works the same way the std.string.indexOf() function works...
couldn't find anything in the std.array module for this scenario.
Would be nice to have this functionality built-in somehow.

Or is there a completely different / better approach to this in D?
Thx, M.


countUntil in std.algorithm should work fine.

http://dlang.org/phobos/std_algorithm.html#countUntil

Regards,
Brad Anderson


---
sizediff_t indexOf(alias pred = "a == b", R1, R2)(R1 haystack, R2 needle);

Scheduled for deprecation. Please use std.algorithm.countUntil instead.

Same as countUntil. This symbol has been scheduled for deprecation 
because it is easily confused with the homonym function in std.string.

---

But isn't it the same functionality? Why use a different name for the 
same functionality?
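
For reference, a minimal example of the suggested replacement:

import std.algorithm : countUntil;

void main()
{
    auto arr = [10, 20, 30, 40];
    assert(arr.countUntil(30) == 2);    // index of the first match
    assert(arr.countUntil(99) == -1);   // not found
}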


Re: What to do about default function arguments

2012-04-25 Thread Ary Manzana

On 4/26/12 11:44 AM, Walter Bright wrote:

A subtle but nasty problem - are default arguments part of the type, or
part of the declaration?

See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both, which leads to the nasty behavior in the bug
report.

The problem centers around name mangling. If two types mangle the same,
then they are the same type. But default arguments are not part of the
mangled string. Hence the schizophrenic behavior.

But if we make default arguments solely a part of the function
declaration, then function pointers (and delegates) cannot have default
arguments. (And maybe this isn't a bad thing?)


I don't understand the relationship between two delegate types being the 
same and thus sharing the same implementation for default arguments for 
*different instances* of a delegate with the same type.


Maybe a bug in how it's currently implemented?


Re: Power of D

2012-04-25 Thread Ary Manzana

On 4/26/12 1:51 AM, bioinfornatics wrote:

I'm searching for examples of something that's easy (or easier) to do in D
and not in another language, if possible:
- D - C++


...


- D - Haskell
- D - Java
- D - python


A segmentation fault is really easy to do in D but hard in those 
languages. :-P


Re: Let's give a honor to dead giants!

2012-04-21 Thread Ary Manzana

On 4/21/12 9:32 PM, Brian Schott wrote:

On Friday, 20 April 2012 at 08:01:08 UTC, Jacob Carlborg wrote:

There's a vim plugin that uses Clang for autocompletion, if I recall
correctly. We need the same for D.


Pre-alpha is on Github: https://github.com/Hackerpilot/Dscanner/

There *ARE* bugs in --dotComplete. I'll be implementing the first
version of --parenComplete this weekend.


Compiler *cough* as *cough* library *cough*...

Nice work, by the way :-)



Re: pure functions/methods

2012-04-20 Thread Ary Manzana

On 4/20/12 4:06 PM, Namespace wrote:

The sense of pure functions isn't clear to me.
What is the advantage of pure functions / methods?
I inform the compiler with const that a method does not change the
current object, and therefore it can optimize (at least in C++) this
method. How and what does the compiler optimize if I have pure or const
pure functions / methods?


As far as I know pure functions always return the same results given the 
same arguments. They also don't cause any side effect.


http://en.wikipedia.org/wiki/Pure_function

Many invocations of a pure function can be executed in parallel because 
they don't have side effects. There's also a chance of caching their 
result, since it only depends on the values of their arguments (though I'm 
not sure what rule the compiler could use to decide when to do it).
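
A minimal example of the difference (a sketch, not a complete description
of the rules):

int counter;   // mutable global state

// Strongly pure: the result depends only on the argument, no side effects.
pure int square(int x)
{
    return x * x;
}

// This would be rejected: a pure function may not read or write
// mutable global state.
// pure int bump() { return ++counter; }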


I don't think any of these benefits are implemented in DMD.


Re: D Compiler as a Library

2012-04-19 Thread Ary Manzana

On 4/19/12 12:48 AM, Jacob Carlborg wrote:

On 2012-04-18 14:49, Marco Leise wrote:


I want refactoring to be as simple as Foo.renameSymbol(std.path.sep,
std.path.dirSeparator); if the connection between module- and
filename allows std.path to be traced back to the original file.


I'm not sure but I don't think that is enough. In Clang you do something
like this:

1. Get cursor of source location
2. Get a unique global ID of the cursor that corresponds to the symbol
(unified symbol resolution)
3. Walk the AST to find all matches of this ID
4. Get the source location of the cursors which match
5. Rename the symbol at the source location



Unfortunately rename can't be perfect in D because you can't apply it 
inside templates.


Re: D Compiler as a Library

2012-04-19 Thread Ary Manzana

On 4/19/12 7:25 PM, Roman D. Boiko wrote:

On Thursday, 19 April 2012 at 11:04:20 UTC, Ary Manzana wrote:

On 4/19/12 12:48 AM, Jacob Carlborg wrote:

On 2012-04-18 14:49, Marco Leise wrote:


I want refactoring to be as simple as Foo.renameSymbol(std.path.sep,
std.path.dirSeparator); if the connection between module- and
filename allows std.path to be traced back to the original file.


I'm not sure but I don't think that is enough. In Clang you do something
like this:

1. Get cursor of source location
2. Get a unique global ID of the cursor that corresponds to the symbol
(unified symbol resolution)
3. Walk the AST to find all matches of this ID
4. Get the source location of the cursors which match
5. Rename the symbol at the source location



Unfortunately rename can't be perfect in D because you can't apply it
inside templates.


In general, there is nothing preventing renaming in templates or
mixins, if you do renaming after semantic analysis. However,
there can be some troubles, e.g., if mixin string is generated on
the fly from some function. Compiler error should suffice in this
case, or even better, refactoring tool should give a warning.


T foo(T)(T x) {
  return x.something();
}

int something(int x) {
  return 1;
}

float something(float x) {
  return 1.0;
}

Now... go and rename the first function named "something". What do you 
do with x.something() inside the template... rename it or not?


Re: Let's give a honor to dead giants!

2012-04-19 Thread Ary Manzana

On 4/19/12 6:08 PM, Roman D. Boiko wrote:

On Thursday, 19 April 2012 at 10:00:29 UTC, Denis Shelomovskij wrote:

D already has a history with heroes and great projects...
Unfortunately, lots of dead great projects. IMHO, a monument to them
should be set up.

Reasons:

1. If a project failed because of some design issues (e.g. Descent)
new developers should be informed about a danger of this approach and
ways to fix it (links to discussions of making a library from a
compiler).

2. If a project failed because of past compiler bugs, lack of
developers (DDL?) or other solved/solvable things, new developers
should be able to easily find and start reanimation of this project.

Any thoughts?


Great idea. In particular, I would be interested to have more
information about Descent :)


Things to learn about Descent:

 * It's impossible for one/two men to continually port a project from 
one language to another (C++ to Java in this case)
 * D needs to be implemented as a library. This means no global 
variables, and making it easy to build tools on top of it. I know 
of about two alternative new implementations of D (SDC and DCT) but for 
now they are making the same mistake as DMD: they use global variables. 
Also, the developers of those projects don't seem to have experience in 
writing compilers so the code is not very nice (though I saw a visitor 
in DCT). People suggest me to send pull requests or open issues to SDC, 
but if the main design has a big flaw it's better to start from scratch 
than to change the whole code.
 * Eclipse is great for writing IDEs, but after programming in Ruby for 
some years now and exclusively using vim, I can't go back to using a slow 
IDE, so I don't think I'll ever write anything else for Eclipse.
 * Maybe it's better to implement things from scratch instead of 
starting with a project like JDT and modifying its source code to be 
usable as a D IDE. Though in the beginning you go faster, later it slows 
you down more and more.
 * Maybe the project didn't receive a lot of help because it was 
written in Java... I don't know.


Re: Let's give a honor to dead giants!

2012-04-19 Thread Ary Manzana

On 4/20/12 11:34 AM, H. S. Teoh wrote:

On Fri, Apr 20, 2012 at 05:15:36AM +0200, Joseph Rushton Wakeling wrote:

On 20/04/12 04:51, Ary Manzana wrote:

* Eclispe is great for writing IDEs but after programming in Ruby for
some years now and exclusively using vim I can't go back to using a
slow IDE so I don't think I'll ever write anything else for Eclipse.


vim actually seems like a great development environment for D -- it
was the first I could set up to really meet my preferences (I'm sure
Emacs is also great, but I never got my head round it sufficiently).
The caveat being that my concept of IDE is glorified text editor that
has really nice handling of syntax highlighting and auto-indentation
and in particular supports smart tab indentation with tabs for indent,
spaces for alignment.


I use vim, and would not touch an IDE with a 20-foot sterilized pole.
Vim has decent auto-indentation, and quite configurable in what it does
with tabs (expandtab, noexpandtab, tabstop, shiftwidth, etc.). I'm sure
if somebody's willing to invest the time, you can do D autocompletion in
vim too (but I've never felt the need for it).


I use autocompletion in vim with a plugin that just offers every 
possible text found in all buffers. This has worked great for me in Ruby 
because every name is so intuitive and similar names (count, length, 
size) are provided as aliases so you almost never make a mistake when 
writing code (at least when writing the correct names, of course bugs 
lurk now and then.)



One thing I miss, though, is ctags support for D. You don't know how
powerful such a simple concept is; it lets you navigate 50,000-line
source files without even batting an eyelid.  :-) (Just try that in an
IDE, and you'll soon get an aneurism from trying to scroll with a
1-pixel high scrollbar...)


How do you implement ctags for a language? I know there is one for Ruby. 
What's the difficulty of making one for D?


Re: Random D geekout

2012-04-19 Thread Ary Manzana

On 4/20/12 1:09 PM, H. S. Teoh wrote:

On Fri, Apr 20, 2012 at 08:44:06AM +0400, Denis Shelomovskij wrote:

20.04.2012 8:06, H. S. Teoh wrote:

I'm writing some code that does some very simplistic parsing, and I'm
just totally geeking out on how awesome D is for writing such code:

import std.conv;
import std.regex;
import std.stdio;

struct Data {
string name;
string phone;
int age;
... // a whole bunch of other stuff
}

void main() {
Data d;
foreach (line; stdin.byLine()) {
auto m = match(line, `(\w+)\s+(\w+)`);


It's better not to create a regex every iteration. Use e.g.
---
auto regEx = regex(`(\w+)\s+(\w+)`);
---
before foreach. Of course, you are not claiming this as a
high-performance program, but creating a regex every iteration is
too common a mistake to show such code to newbies.


You're right, it was unoptimized code. I ended up using ctRegex for
them:

enum attrRx = ctRegex!`...`;
enum blockRx = ctRegex!`...`;

if (auto m = match(line, attrRx)) {
...
} else if (auto m = match(line, blockRx)) {
...
}

The fact that D enums can be arbitrary types is just beyond awesome.


No, enum there means "manifest constant"; it has nothing to do with an 
enumeration...
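
A quick illustration of the two uses of the keyword:

// An enumeration type:
enum Color { red, green, blue }

// Manifest constants: here enum just declares compile-time constants,
// which is what the ctRegex! results above are.
enum answer = 42;
enum pattern = `(\w+)\s+(\w+)`;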


Re: D Compiler as a Library

2012-04-15 Thread Ary Manzana

On 4/13/12 9:10 PM, deadalnix wrote:

On 13/04/2012 11:58, Ary Manzana wrote:

Having a D compiler available as a library will (at least) give these
benefits:

1. Can be used by an IDE: D is statically typed and so an IDE can
benefit a lot from this. The features Descent had, as far as I remember,
were:
1.1. Outline
1.2. Autocompletion
1.3. Type Hierarchy
1.4. Syntax and semantic errors, showing not only the line number but
also column numbers if it makes sense
1.5. Automatic import inclusion (say, typing writefln and getting a list
of modules that provide that symbol)
1.6. Compile-time view: replace auto with the inferred type, insert
mixins into scope, rewrite operator overloads and other lowerings (but
I'm not sure this point is really useful)
1.7. Determine, given a set of versions and flags, which branches of
static ifs are used/unused
1.8. Open declaration
1.9. Show implementations (of an interface, of interface's method or,
abstract methods, or method overrides).
1.10. Propose to override a method (you type some letters and then hit
some key combination and get a list of methods to override)
1.11. Get the code of a template when instantiated.
2. Can be used to build better doc generators: one that shows known
subclasses or interface implementation, shows inherited methods, type
hierarchy.
3. Can be used for lints and other such tools.

As you can see, a simple lexer/parser built into an IDE, doc generator
or lint will just give basic features but will never achieve something
exceptionally good if it lacks the full semantic knowledge of the code.

I'll write a list of things I'd like this compiler-as-library to have,
but please help me make it bigger :-)

* Don't use global variables (DMD is just thought to be run once, so
when used as a library it can just be used, well, once)
* Provide a lexer which gives line numbers and column numbers
(beginning, end)
* Provide a parser with the same features
* The semantic phase should not discard any information found while
parsing. For example when DMD resolves a type it recursively resolves
aliasing and keeps the last one. An example:

alias int foo;
alias foo* bar;

bar something() { ... }

It would be nice if bar, after semantic analysis is done, carries the
information that bar is foo* and that foo is int. Also that
something's return type is bar, not int*.
* Provide errors and warnings that have line numbers as well as column
numbers.
* Allow parsing only the top-level definitions of a module. With this I mean
skipping function bodies. At least Descent first built the outline of
the whole project by doing this. This mode should also allow specifying
a location as a target, and if that location falls inside a function
body then its contents are returned (useful when editing a file, so you
can get the outline as well as semantic info of the function currently
being edited, which will never affect semantics in other parts of the
module). This will dramatically speed up the editor.
* Don't stop parsing on errors (I think DMD already does this).
* Provide a visitor class. If possible, use visitors to implement
semantic analysis. The visitor will make it super easy to implement
lints and to generate documentation.


SDC has a lot of these, and I proposed similar stuff for its
evolution. I think it is easier for SDC than it is for dmd, considering
the codebase of both.


Cool! SDC is the way to go. Let's focus our efforts on that project. :-)

One thing I saw in the code is that global variables are used in it 
(especially in the sdc.aglobal module).


Also, the lexer returns a TokenStream which contains the full array of 
tokens. This is very slow compared to returning them as they are 
scanned, when requested. But I guess this is easy to fix as it's hidden 
behind the TokenStream interface.
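
To illustrate the alternative, here is a toy sketch of a lexer exposed as a
lazy input range (the Token type and the word-splitting logic are
placeholders of mine, not SDC's actual lexer):

import std.string : strip, indexOf;

struct Token { string text; }

// Toy lazy "lexer": yields one token per whitespace-separated word,
// producing tokens on demand instead of building an array up front.
struct LazyTokenRange
{
    private string source;

    @property bool empty() { return source.strip.length == 0; }

    @property Token front()
    {
        auto s = source.strip;
        auto end = s.indexOf(' ');
        return Token(end < 0 ? s : s[0 .. end]);
    }

    void popFront()
    {
        auto s = source.strip;
        auto end = s.indexOf(' ');
        source = end < 0 ? "" : s[end .. $];
    }
}

unittest
{
    import std.algorithm : equal, map;
    auto tokens = LazyTokenRange("void main ( )");
    assert(tokens.map!(t => t.text).equal(["void", "main", "(", ")"]));
}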


I tried to compile a simple file:

void main() {}

 ./bin/sdc main.d
core.exception.asserter...@sdc.gen.sdcmodule(560): Assertion failure

5   sdc 0x00010b87aa2a _d_assertm + 42
6   sdc 0x00010b4e2daa void 
sdc.gen.sdcmodule.__assert(int) + 26
7   sdc 0x00010b55d19c void 
sdc.gen.sdcmodule.Store.addFunction(sdc.gen.sdcfunction.Function) + 92
8   sdc 0x00010b55d75b void 
sdc.gen.sdcmodule.Scope.add(immutable(char)[], 
sdc.gen.sdcfunction.Function) + 123
9   sdc 0x00010b54ee36 void 
sdc.gen.declaration.declareFunctionDeclaration(sdc.ast.declaration.FunctionDeclaration, 
sdc.ast.sdcmodule.DeclarationDefinition, sdc.gen.sdcmodule.Module) + 1278
10  sdc 0x00010b54e4b2 void 
sdc.gen.declaration.declareDeclaration(sdc.ast.declaration.Declaration, 
sdc.ast.sdcmodule.DeclarationDefinition, sdc.gen.sdcmodule.Module) + 146
11  sdc 0x00010b54d12e void 
sdc.gen.base.genDeclarationDefinition

Re: IDE Support for D

2012-04-15 Thread Ary Manzana

On 4/7/12 7:15 AM, Brad Roberts wrote:

On Sat, 7 Apr 2012, Manu wrote:


I use VisualD, and it's currently borderline. It has recently gained the
minimum useful feature set, but still has quite a few bugs. It's promising
though. Hoping there is a new release soon with a few of the critical bugs
fixed.

If there was a SublimeText integration, I would pay good money for it...
(actually, I would pay good money for VisualD too if it became solid)


up front: not picking on this email specifically, it just happened to be
handy and represents a common problem with this community.

A large number of people are in the 'want things to be better than they
are' camp and are looking at projects that are largely one-man projects.
I can just about guarantee that one man projects will die, it's only a
matter of time.


Except, of course:

http://www.cavestory.org/


Re: floats default to NaN... why?

2012-04-15 Thread Ary Manzana

On 4/16/12 12:00 PM, F i L wrote:

On Monday, 16 April 2012 at 03:25:15 UTC, bearophile wrote:

F i L:


I should be able to tackle something like adding a compiler flag to
default FP variables to zero. If I write the code, would anyone
object to having a flag for this?


I strongly doubt Walter  Andrei will accept this in the main DMD trunk.


Do you have an idea as to the reason? Too specific/insignificant an issue to
justify a compiler flag? They don't like new contributors?

I'll wait for a definite yes or no from one of them before I approach this.


It's a flag that changes the behavior of the generated output. That's a 
no-no.


D Compiler as a Library

2012-04-13 Thread Ary Manzana
Having a D compiler available as a library will (at least) give these 
benefits:


  1. Can be used by an IDE: D is statically typed and so an IDE can 
benefit a lot from this. The features Descent had, as far as I remember, 
were:

1.1. Outline
1.2. Autocompletion
1.3. Type Hierarchy
1.4. Syntax and semantic errors, showing not only the line number 
but also column numbers if it makes sense
1.5. Automatic import inclusion (say, typing writefln and getting a 
list of modules that provide that symbol)
1.6. Compile-time view: replace auto with the inferred type, insert 
mixins into scope, rewrite operator overloads and other lowerings (but 
I'm not sure this point is really useful)
1.7. Determine, given a set of versions and flags, which branches 
of static ifs are used/unused

1.8. Open declaration
1.9. Show implementations (of an interface, of interface's method 
or, abstract methods, or method overrides).
1.10. Propose to override a method (you type some letters and then 
hit some key combination and get a list of methods to override)

1.11. Get the code of a template when instantiated.
 2. Can be used to build better doc generators: one that shows known 
subclasses or interface implementation, shows inherited methods, type 
hierarchy.

 3. Can be used for lints and other such tools.

As you can see, a simple lexer/parser built into an IDE, doc generator 
or lint will just give basic features but will never achieve something 
exceptionally good if it lacks the full semantic knowledge of the code.


I'll write a list of things I'd like this compiler-as-library to have, 
but please help me make it bigger :-)


 * Don't use global variables (DMD is just thought to be run once, so 
when used as a library it can just be used, well, once)
 * Provide a lexer which gives line numbers and column numbers 
(beginning, end)

 * Provide a parser with the same features
 * The semantic phase should not discard any information found while 
parsing. For example when DMD resolves a type it recursively resolves 
aliasing and keeps the last one. An example:


  alias int foo;
  alias foo* bar;

  bar something() { ... }

  It would be nice if bar, after semantic analysis is done, carries 
the information that bar is foo* and that foo is int. Also that 
something's return type is bar, not int*.
 * Provide errors and warnings that have line numbers as well as column 
numbers.
 * Allow parsing only the top-level definitions of a module. With this I 
mean skipping function bodies. At least Descent first built the 
outline of the whole project by doing this. This mode should also allow 
specifying a location as a target, and if that location falls inside a 
function body then its contents are returned (useful when editing a 
file, so you can get the outline as well as semantic info of the 
function currently being edited, which will never affect semantics in 
other parts of the module). This will dramatically speed up the editor.

 * Don't stop parsing on errors (I think DMD already does this).
 * Provide a visitor class. If possible, use visitors to implement 
semantic analysis. The visitor will make it super easy to implement 
lints and to generate documentation.
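
To make that last point concrete, here is a minimal sketch of what a 
visitor-based API could look like. All names are hypothetical; nothing 
like this exists in DMD's sources today:

// Hypothetical API sketch -- none of these names exist in DMD.
interface AstVisitor
{
    void visit(FuncDecl f);
    void visit(CallExp e);
    // ...one overload per node type
}

abstract class AstNode
{
    abstract void accept(AstVisitor v);
}

class FuncDecl : AstNode
{
    string name;
    override void accept(AstVisitor v) { v.visit(this); }
}

class CallExp : AstNode
{
    string callee;
    override void accept(AstVisitor v) { v.visit(this); }
}

// A lint or a doc generator is then just a visitor that handles
// the node types it cares about.
class NamingLint : AstVisitor
{
    void visit(FuncDecl f) { /* e.g. warn if f.name is not camelCase */ }
    void visit(CallExp e)  { }
}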


Re: Foreach Closures?

2012-04-11 Thread Ary Manzana

On 4/11/12 4:27 PM, Jacob Carlborg wrote:

On 2012-04-11 04:50, Ary Manzana wrote:


Yes. In fact, JDT has a built-in Java compiler in their implementation.
Maybe it was easier to do it for them because the Java spec is easier
and doesn't fluctuate as much as the D spec. And JDT used that
compiler all over the place for getting all those IDE features.


Exactly. Java hasn't changed much in the last 10 years (ok, just now it
is starting to change again). JDT also contains a full compiler, not just
the frontend, so it can compile all code. This would be nice to have for
D as well but I think the frontend is the most important part.



Yes. I'm still thinking how could it be done but I have no idea at all 
how to do it. I can't figure out what that API would look like.


I like the idea of a new thread for this discussion. Eventually me or 
someone else should start it. :-P


Re: Foreach Closures?

2012-04-10 Thread Ary Manzana

On 4/10/12 2:46 PM, Jacob Carlborg wrote:

On 2012-04-10 04:24, Andrei Alexandrescu wrote:

On 4/9/12 9:21 PM, Ary Manzana wrote:

Yes, D definitely needs that. The Eclipse plugin could just use bindings
to the D compiler API with JNI.


Would the JSON compiler output help?

Andrei


No, it's nowhere near sufficient for what Descent can do and what's
expected from an IDE these days, think JDT for Eclipse.

Descent can handle:

* Syntax highlighting
* Semantic highlighting
* Show lex, parse and semantic errors
* Compile time debugging
* Compile time view
* Formatting
* Show the actual type of an inferred or aliased type
* Smart autocompletion
* Many other things as well

Note that in addition to (most of) the above, JDT can handle a lot more.
The compiler is the only tool that can properly handle this. It's also
the only sane approach, to have the compiler usable as a library.

Just take a look at how it used to be (and in some cases still is) in the C/C++
world before Clang and LLVM came along:

* You have the compiler
* An IDE with a parser/compiler
* The debugger with an (expression) compiler

All these compilers are different and need to stay in synch. That's
not how you do good software development. You build a compiler library
that can be used in all the above tools. BTW, it's only the real
compiler that can handle everything properly.


Yes. In fact, JDT has a built-in Java compiler in their implementation. 
Maybe it was easier to do it for them because the Java spec is easier 
and doesn't fluctuate as much as the D spec. And JDT used that 
compiler all over the place for getting all those IDE features.




Re: Foreach Closures?

2012-04-09 Thread Ary Manzana

On 4/9/12 7:26 AM, Kevin Cox wrote:

I was wondering about the foreach statement and when you implement
opApply() for a class it is implemented using closures.  I was wondering
if this is just how it is expressed or if it is actually syntactic
sugar.  The reason I ask is because if you have a return statement
inside a foreach it returns from the outside function, not the closure.

I was just wondering if anyone could spill the implementation details.

Thanks,
Kevin


In this video you can see what foreach with opApply gets translated to 
(at about minute 1):


http://www.youtube.com/watch?v=oAhrFQVnsrY
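
For the archive, the rewrite is roughly the following -- a sketch only; 
the exact integer codes the compiler uses for break/continue/return are 
an implementation detail:

class Numbers
{
    int opApply(int delegate(ref int) dg)
    {
        for (int i = 0; i < 3; i++)
        {
            if (auto r = dg(i))   // non-zero means the body wants to stop
                return r;
        }
        return 0;
    }
}

void user()
{
    auto n = new Numbers;

    // This loop...
    foreach (x; n)
    {
        if (x == 1) break;
    }

    // ...is lowered to roughly this: the body becomes a delegate passed
    // to opApply, and break/continue/return are encoded as non-zero
    // return values.
    n.opApply(delegate int(ref int x) {
        if (x == 1) return 1;   // plays the role of `break`
        return 0;               // keep iterating
    });
}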


Re: Foreach Closures?

2012-04-09 Thread Ary Manzana

On 4/9/12 10:58 PM, Jacob Carlborg wrote:

On 2012-04-09 15:19, Manu wrote:

OMG, DO WANT! :P
Who wrote this? I wonder if they'd be interested in adapting it to
VisualD + MonoDevelop?


That would be Ary Manzana. I think one of the reasons why he stopped
working on this was that he ported the DMD frontend to Java and it's
just a pain to stay updated with DMD.


Yes, it was a pain. I can't understand how I did it. Aaaah... the times 
when one was young. :-P


Robert Fraser also helped a lot with porting, doing some refactorings 
and many other cool stuff. I don't remember seeing a message of him in 
this newsgroup for a long time now...



This comes back to us again, again and again. We _badly need_ a compiler
that is usable as a library. Preferably with a stable API which makes it
possible to create bindings for other languages. For that compiler to
stay up to date it needs to be the reference implementation, i.e. the
one that Walter works on.

Also Walter won't just drop DMD and replace it with something else or
start a major refactoring process on the existing code base.


Yes, D definitely needs that. The Eclipse plugin could just use bindings 
to the D compiler API with JNI.


In fact, I think Walter and company should stop working on the current 
DMD codebase and start all over again. The code, as I see it, is a big 
mess. Now that the spec is more or less clear and not many new 
features are added, I think this is the time to do it.


Actually, nobody has to wait Walter. The community could just start 
writing a D compiler in D, host it in github and work with pull 
requests... something like what Rubinius has done with Ruby.


Though you might think it'll be harder to catch up with language 
changes, if the code has a better design I think introducing new changes 
should be much easier than in DMD's current codebase.



BTW, Descent has a compile time debugger as well, if I recall correctly.


Yeah, I'm not sure how well that works.


Re: Foreach Closures?

2012-04-09 Thread Ary Manzana

On 4/9/12 9:35 PM, Kevin Cox wrote:


On Apr 9, 2012 9:19 AM, Manu turkey...@gmail.com wrote:
 
  OMG, DO WANT! :P
  Who wrote this? I wonder if they'd be interested in adapting it to
VisualD + MonoDevelop?
 
 
  On 9 April 2012 12:56, Ary Manzana a...@esperanto.org.ar wrote:
 
  On 4/9/12 7:26 AM, Kevin Cox wrote:
 
  I was wondering about the foreach statement and when you implement
  opApply() for a class it is implemented using closures.  I was
wondering
  if this is just how it is expressed or if it is actually syntactic
  sugar.  The reason I ask is because if you have a return statement
  inside a foreach it returns from the outside function, not the
closure.
 
  I was just wondering if anyone could spill the implementation details.
 
  Thanks,
  Kevin
 
 
  In this video you can see what foreach with opApply gets translated
to (at about minute 1):
 
  http://www.youtube.com/watch?v=oAhrFQVnsrY
 

Unfortunately I can't get it working.  Ill have to keep fiddling.



Note that, as many already said, it hasn't been updated for a long time 
now, and things won't change. So only use it if coding for a really 
old D version.


Re: More ddoc complaints

2012-04-09 Thread Ary Manzana

On 4/9/12 7:05 PM, Stewart Gordon wrote:

On 08/04/2012 02:08, Adam D. Ruppe wrote:

I have a pull request up to remove the big misfeature
of embedded html in ddoc, and it is pending action,
from me, to answer some of Walter's concerns.


What have you done - just made it convert <, > and & in documentation
comments to &lt;, &gt; and &amp; before processing?

What is the user who wants some output format other than HTML or XML to do?


What other formats is ddoc producing right now that people are using?


Re: Foreach Closures?

2012-04-09 Thread Ary Manzana

On 4/10/12 10:47 AM, Brad Anderson wrote:

On Mon, Apr 9, 2012 at 8:21 PM, Ary Manzana a...@esperanto.org.ar wrote:

On 4/9/12 10:58 PM, Jacob Carlborg wrote:

On 2012-04-09 15:19, Manu wrote:

OMG, DO WANT! :P
Who wrote this? I wonder if they'd be interested in adapting
it to
VisualD + MonoDevelop?


That would be Ary Manzana. I think one of the reasons why he stopped
working on this was that he ported the DMD frontend to Java and it's
just a pain to stay updated with DMD.


Yes, it was a pain. I can't understand how I did it. Aaaah... the
times when one was young. :-P

Robert Fraser also helped a lot with porting, doing some
refactorings and many other cool stuff. I don't remember seeing a
message of him in this newsgroup for a long time now...


This comes back to us again, again and again. We _badly need_ a
compiler
that is usable as a library. Preferably with a stable API which makes it
possible to create bindings for other languages. For that
compiler to
stay up to date it needs to be the reference implementation,
i.e. the
one that Walter works on.

Also Walter won't just drop DMD and replace it with something
else or
start a major refactoring process on the existing code base.


Yes, D definitely needs that. The Eclipse plugin could just use
bindings to the D compiler API with JNI.

In fact, I think Walter and company should stop working on the
current DMD codebase and start all over again. The code, as I see
it, is a big mess. Now that the spec is more or less clear and not
many new features are added, I think this is the time to do it.

Actually, nobody has to wait Walter. The community could just start
writing a D compiler in D, host it in github and work with pull
requests... something like what Rubinius has done with Ruby.


It's already been started.  SDC: https://github.com/bhelyer/SDC

Regards,
Brad Anderson


Awesome!


Re: Foreach Closures?

2012-04-09 Thread Ary Manzana

On 4/10/12 10:24 AM, Andrei Alexandrescu wrote:

On 4/9/12 9:21 PM, Ary Manzana wrote:

Yes, D definitely needs that. The Eclipse plugin could just use bindings
to the D compiler API with JNI.


Would the JSON compiler output help?

Andrei


Not sure. At least in Descent you could hover over an auto keyword and 
know the inferred type, even inside function bodies. I don't think te 
JSON compiler output gives you any information about function bodies... 
right?


Re: Custom attributes (again)

2012-04-06 Thread Ary Manzana

On 4/6/12 3:54 PM, Walter Bright wrote:

On 4/6/2012 12:49 AM, Alex Rønne Petersen wrote:

What about type declarations? I think those ought to be supported too.
E.g. it
makes sense to mark an entire type as @attr(serializable) (or the
inverse).



That would make it a type constructor, not a storage class, which we
talked about earlier in the thread. I refer you to that discussion.


What's the difference between a type constructor and a storage class 
besides the name?


Re: Custom attributes (again)

2012-04-06 Thread Ary Manzana

On 4/6/12 3:48 PM, Walter Bright wrote:

On 4/6/2012 12:35 AM, Alex Rønne Petersen wrote:

It actually can be a problem. In .NET land, there are many attributes
across
many projects (and even in the framework itself) with the same names.
It turns
out that regular namespace lookup rules alleviate this problem.



Perhaps a better scheme is:

enum foo = 3;

...

@attr(foo) int x;

That way, foo will follow all the usual rules.


At least in .Net and Java it's something like this.

1. You declare your attributes. This is a good thing, because you have 
a place to say "This attribute is used for marking fields as 
non-serializable".


The syntax in Java for declaring an attribute:

public @interface Foo {
  String xxx();
  int yyy();
}

In D maybe @interface could be used too (in order to avoid introducing 
another keyword... or maybe use @attribute instead):


@attribute Foo {
  string xxx;
  int yyy;
}

2. You use them by their names. What you are proposing is for 
attribute foo to be @attr(foo). But in Java it's @foo.


So in Java you would use that attribute like this:

@Foo(xxx = "hello", yyy = 1)
void func() {}

Then you can get the Foo attribute in func, and ask for its xxx and yyy.

---

Now, your proposal is much simpler, but it will become inconvenient in 
some cases. For example suppose you want to provide attributes for 
serialization (I guess the classic example). With your proposal it would be:


/// This is actually an attribute. Use this together with serialized_name.
enum serialize = 1;
enum serialized_name = 2;

@attr(serialize = true, serialized_name = "Foo")
int x;

Now, with the way things are done in Java and C#:

/// Marks a field to be serialized.
@attribute serialize {
  /// The name to be used.
  /// If not specified, the name of the member will be used instead.
  string name;
}

@serialize(name = "Foo")
int x;

You can see the syntax is much cleaner. The attribute declaration also 
serves as documentation and to group attributes related to the 
serialization process.


Now, implementing this is not much more difficult than what you proposed.

1. Introduce the syntax to define attributes. Piece of cake, since it's 
more or less the syntax of a struct, but functions or nested types 
are not allowed. Parse them into an AttributeDecl or something like that.
2. When the compiler finds @attr(field = value) it uses normal lookup 
rules to find attr. Then it checks that it's an attribute. Then all fields 
are checked in turn to see if their types match. You can probably put there 
anything that's compile-time evaluatable, though usually primitive types 
and strings are enough. If a field is not specified, its type.init will 
be used.

3. The syntax for querying is almost the same as you proposed:

__traits(hasAttribute, x, serializable) // true
__traits(getAttribute, x, serializable, name) // "Foo"

4. Declare the core attributes in object.di or similar: @safe, @nothrow, 
etc. You can also document them.
5. Probably deprecate __traits(isSafe) and so on, since hasAttribute can 
be used for that.
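
For comparison, with the user-defined attributes the language later 
gained, the same idea can be written like this today (getAttributes is 
the real trait; the hasAttribute/getAttribute traits in point 3 above 
are the hypothetical ones from this proposal):

struct serialize { string name; }

@serialize("Foo") int x;

void main()
{
    // getAttributes yields the attribute values attached to a symbol.
    foreach (attr; __traits(getAttributes, x))
    {
        static if (is(typeof(attr) == serialize))
            assert(attr.name == "Foo");
    }
}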


Re: Custom attributes (again)

2012-04-06 Thread Ary Manzana

On 4/6/12 6:12 PM, Walter Bright wrote:

On 4/6/2012 2:50 AM, Ary Manzana wrote:

The syntax in Java for declaring an attribute:

public @interface Foo {
String xxx;
int yyy;
}

In D maybe @interface could be used to (in order to avoid introducing
another
keyword... or maybe use @attribute instead):

@attribute Foo {
string xxx;
int yyy;
}


I don't see the need for creating a new kind of symbol.



2. You use them by using their names. What you are proposing if for
attribute
foo to be @attr(foo). But in Java it's @foo.

So in Java you would use that attribute like this:

@Foo(xxx = "hello", yyy = 1)
void func() {}

Then you can get the Foo attribute in func, and ask for it's xxx and
yyy.


This is a runtime system.


Yes, but I'm thinking about a compile-time system (as I showed in the 
example usages below).






Now, your proposal is much simpler and it will become inconvenient in
some
cases. For example suppose you want to provide attributes for
serialization (I
guess the classic example). With your proposal it would be:

/// This is actually an attribute. Use this together with
serialized_name.
enum serialize = 1;
enum serialized_name = 2;

@attr(serialize = true, serialized_name = "Foo")
int x;


No, it would be:

enum serialize = true;
enum serialize_name = "Foo";
@attr(serialize, serialized_name) int x;

There would be no initialization in the @attr syntax.


Hmmm... I didn't get that quite well. You are using the symbol's name as 
the attribute name? In my example I missed splitting it into two files:


my_attributes.d:

@attribute serialize { }

my_usage_of_it.d:

import my_attributes;

@serialize(...)





Now, with the way things are done in Java and C#:

/// Marks a field to be serialized.
@attribute serialize {
/// The name to be used.
/// If not specified, the name of the member will be used instead.
string name;
}

@serialize(name = "Foo")
int x;

You can see the syntax is much cleaner. The attribute declaration also
serves as
documentation and to group attributes related to the serialization
process.


I don't see that it is cleaner - there's no particular reason why a new
symbol type needs to be introduced.



Now, to implement this is not very much difficult than what you proposed.

1. Introduce the syntax to define attributes. Piece of cake, since
it's much
more or less the syntax of a struct, but functions or nested types are
not
allowed. Parse them into an AttributeDecl or something like that.
2. When the compiler finds @attr(field = value) it uses normal lookup
rules to
find attr. Then it checks it's an attributes. Then all fields are
check in
turn to see if their type match. You can probably put there anything
that's
compile-time evaluatable, though usually primitive types and strings
are enough.
If a field is not specified, it's type.init will be used.
3. The syntax for querying is almost the same as you proposed:

__traits(hasAttribute, x, serializable) // true
__traits(getAttribute, x, serializable, name) // "Foo"

4. Declare the core attributes in object.di or similar: @safe,
@nothrow, etc.
You can also document them.
5. Probably deprecate __traits(isSafe) and so on, since hasAttribute
can be used
for that.


@safe, @nothrow, etc., require a lot of semantic support in the
compiler. They cannot pretend to be user defined attributes.


Yes they can. That's how it is done in C# and Java. In fact, IUnknown is 
pretending to be an interface and has semantic support in the compiler.


Re: Custom attributes (again)

2012-04-06 Thread Ary Manzana

On 4/6/12 6:29 PM, Timon Gehr wrote:

On 04/06/2012 12:12 PM, Walter Bright wrote:

On 4/6/2012 2:50 AM, Ary Manzana wrote:

The syntax in Java for declaring an attribute:

public @interface Foo {
String xxx;
int yyy;
}

In D maybe @interface could be used to (in order to avoid introducing
another
keyword... or maybe use @attribute instead):

@attribute Foo {
string xxx;
int yyy;
}


I don't see the need for creating a new kind of symbol.



It would behave like a struct anyway. The issue is whether any struct
should be allowed to be used as an attribute, or whether a runtime
instance of an attribute can be created.

Syntax could just as well be this:

@attribute struct Foo {
// ...
}


True, a struct can be fine. And I don't see any problem in using it at 
runtime. Though I'm sure nobody would like it to remain in the obj file 
if it's only used at compile time...


Re: Custom attributes (again)

2012-04-06 Thread Ary Manzana

On 4/6/12 7:09 PM, Timon Gehr wrote:

On 04/06/2012 12:53 PM, Walter Bright wrote:

On 4/6/2012 3:27 AM, Ary Manzana wrote:

@safe, @nothrow, etc., require a lot of semantic support in the
compiler. They cannot pretend to be user defined attributes.


Yes they can. That's how it is done in C# and Java. In fact, IUnknown is
pretending to be an interface and has semantic support in the compiler.


All the semantics of @safe are in the compiler. None of it can be user
defined. There's just no point to trying to make it user defined. It's
like trying to make int user defined.

IUnknown's semantics are nearly all user-defined.



The proposal is not to make the semantics of @safe user defined. I think
he proposes to make 'safe' a symbol that is looked up like an user
defined symbol.

Some languages do the same for the built-in integer type.


The compiler does the same for TypeInfo, TypeInfo_ClassDeclaration or 
whatever, Object, etc.


I'm just proposing @safe to be seen as a user-defined attribute, but 
implemented in the compiler with special semantics.


I'm saying it so that lookup rules become easier: just search. No 
special cases like "if the attribute name is safe". Of course, treat the 
special cases later inside the compiler code.


Re: Cross-references in generated ddoc

2012-04-05 Thread Ary Manzana

On 4/4/12 11:11 PM, Jacob Carlborg wrote:

On 2012-04-04 15:53, Ary Manzana wrote:


You are right!

I was missing doing cross-reference for template instances. Now I did
it, but I was actually forgetting to do cross-references for template
instances inside templates. :-P

So now I did it. Take a look, much better! :-)

http://pancake.io/1e79d0/array.html#array


Thanks, much better now :)


But you can't reference a function in the generated ddoc. You could do
it manually, but then you'd have to figure out the mangling or something
like that. Also, when guessing what an identifier resolves to, I can't
possibly know which template parameters to use.


For example, Phobos uses XREF to reference symbols. The template
parameter doesn't matter since that will refer to the same function.


Also, all overloads will have more or less the same documentation and
they will be one next to the other. I don't think that's an issue for a
documentation system.


I guess you're right.


Thanks. I had a problem with template members. It's now fixed.

http://pancake.io/1e79d0/complex.html#Complex.toString


Good.


I wanted to do that. But I have to deal with ddoc macros. Every
declaration is put inside a <dt> tag. That is issued with a $(DT ...)
macro. So I'd have to create another macro, say $DT_WITH_ID or something
like that, that outputs the id.

I can't simply output an id attribute because I'm not generating html:
I'm generating ddoc.


Ok, I see. But why don't you create $DT_WITH_ID then :)


Yes, I think I will eventually do that, if the pull gets accepted.



Re: Custom attributes (again)

2012-04-05 Thread Ary Manzana

On 4/6/12 1:35 AM, Walter Bright wrote:

On 4/5/2012 5:00 AM, Manu wrote:

C# and Java both have attributes, following these established design
patterns, I
don't think there should be any mystery over how they should be
implemented.


At the Lang.NEXT conference over the last 3 days, I was able to talk to
many smart people about attributes. But I did find some confusion - are
they best attached to the variable/function (i.e. storage class), or
attached to the type (type constructor)? I think the former. Attaching
it to the type leads to all sorts of semantic issues.


I don't understand the difference between storage class and type 
constructor. I guess I do. But my answer is the same as deadalnix: they 
are attached to declarations (at compile time).


Can you give us an example of the confusion that arose? I can't 
understand it without examples.


I think it should work like this:

@custom
class Foo {

  @custom
  void bar() { }

  void baz() { }
}

class Other {}

__traits(hasAttribute, Foo, 'custom') -- true
__traits(hasAttribute, Other, 'custom') -- false

// I have no idea how to iterate the members of Foo, or get a reference 
to the bar method... I can't understand what __traits(getMember) 
returns from the docs...
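
For the record, the existing traits already answer part of that last 
question -- a minimal sketch (note that allMembers also reports members 
inherited from Object):

import std.stdio;

class Foo
{
    void bar() { writeln("bar called"); }
    void baz() { }
}

void main()
{
    // allMembers yields the member names as a compile-time tuple of strings.
    foreach (name; __traits(allMembers, Foo))
        writeln(name);

    // getMember resolves a member by name; on an instance it behaves like
    // ordinary member access, so it can simply be called.
    auto foo = new Foo;
    __traits(getMember, foo, "bar")();   // same as foo.bar()
}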


Re: Insert an element into an Associative Array ?

2012-04-05 Thread Ary Manzana

On 4/5/12 2:57 AM, Chris Pons wrote:

I'm playing around with associative arrays right now and I can't
seem to figure out how to add additional objects to the array. I
tried insert but it doesn't recognize both arguments.

Also, if I do this it produces an error:

Node[bool] test;

Node node;

Node[bool] temp = [ false:node ];

test ~= temp;


Error 1 Error: cannot append type Node[bool] to type
Node[bool] C:\Users\CP\Documents\Visual Studio
2010\Projects\D\STDS\NPC.d 256

Does this mean you can't use the append operator on associative
arrays ? ( this one ~= ) ?


By the way, why is it called associative array? A name like Hash or 
Map would be much better. Everyone knows what a Hash means. I don't see 
anyone using associative array to refer to a Hash. And I think this is 
the source of the confusion Chris has...


I mean, you can't append to an associative array. What part of it makes it 
an array?
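
As for Chris's original question: insertion into an associative array is 
done with index assignment, not ~= -- a minimal sketch (Node here is a 
stand-in struct):

struct Node { int value; }

void main()
{
    Node[bool] test;

    Node node;
    test[false] = node;       // insertion is index assignment...
    test[true]  = Node(42);   // ...there is no ~= for associative arrays

    assert(test.length == 2);
    assert(false in test);    // `in` yields a pointer to the value, or null
}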


Re: Cross-references in generated ddoc

2012-04-04 Thread Ary Manzana

On 4/4/12 6:35 PM, Jacob Carlborg wrote:

On 2012-04-04 07:38, Ary Manzana wrote:

Hi all,

I just submitted a pull request that makes ddoc generate
cross-references... even for templates!

https://github.com/D-Programming-Language/dmd/pull/865

It would be awesome if you can try it with your projects, see if it's
working properly and doesn't choke. I tried it with phobos and it worked
fine.

Also, if someone has ideas about how to solve the issues I describe,
they are more than welcome.


That's awesome.

Looking at http://pancake.io/1e79d0/array.html, none of the templates
are cross-referenced, or was that one of the problems?


Ah, no. That's because I ran it against object.di, which doesn't have 
ddoc comments at all. I don't generate cross-references to undocumented 
symbols.


I uploaded a new version which I ran against an object.di which has 
empty ddocs for everything. Now you can see there are some 
cross-references. (if you find some is missing, please tell me).



Why are you not using the mangled name when creating anchors?


I don't think there's need for the mangled name. It's also more natural 
to give a link like foo.html?Some.Class than a mangled name.


Re: Cross-references in generated ddoc

2012-04-04 Thread Ary Manzana

On 4/4/12 8:05 PM, Jacob Carlborg wrote:

On 2012-04-04 13:38, Ary Manzana wrote:


Ah, no. That's because I ran it against object.di, which doesn't have
ddoc comments at all. I don't generate cross-references to undocumented
symbols.

I uploaded a new version which I ran against an object.di which has
empty ddocs for everything. Now you can see there are some
cross-references. (if you find some is missing, please tell me).


Cool, but what I actually was referring to was template types, i.e. the
first declaration in http://pancake.io/1e79d0/array.html:

ForeachType!(Range)[] array(Range)(Range r);

ForeachType is not a link.


You are right!

I was missing doing cross-reference for template instances. Now I did 
it, but I was actually forgetting to do cross-references for template 
instances inside templates. :-P


So now I did it. Take a look, much better! :-)

http://pancake.io/1e79d0/array.html#array




Why are you not using the mangled name when creating anchors?


I don't think there's need for the mangled name. It's also more natural
to give a link like foo.html?Some.Class than a mangled name.


Sure but then it won't be possible to reference different overloaded
functions? If you're not creating your own human readable form of
mangling, i.e.

foo.html#Foo.bar(int)
foo.html#Foo.bar(char)

I think it's more important that the doc generator behaves correctly
than outputting pretty URL's.


But you can't reference a function in the generated ddoc. You could do 
it manually, but then you'd have to figure out the mangling or something 
like that. Also, when guessing what an identifier resolves to, I can't 
possibly know which template parameters to use.


Also, all overloads will have more or less the same documentation and 
they will be one next to the other. I don't think that's an issue for a 
documentation system.




I found a case where the fully qualified name is not used:

http://pancake.io/1e79d0/complex.html#toString

The name is just toString instead of Complex.toString.


Thanks. I had a problem with template members. It's now fixed.

http://pancake.io/1e79d0/complex.html#Complex.toString



BTW, why are you adding an empty <a> tag for the anchor? Just add an id on
the actual tag you want to refer to.


I wanted to do that. But I have to deal with ddoc macros. Every 
declaration is put inside a <dt> tag. That is issued with a $(DT ...) 
macro. So I'd have to create another macro, say $DT_WITH_ID or something 
like that, that outputs the id.


I can't simply output an id attribute because I'm not generating html: 
I'm generating ddoc.


I mean, what other formats was Walter thinking of? PDF? Just use an HTML 
to PDF converter. Ummm... plain text? Microsoft Doc? I don't know. Why 
can't we just generate html and that's it?



The cross-referencing worked better in Descent, why are you doing it
differently?


Well, Descent kept a lot of information to be as precise as possible. 
DMD is not my code so I tried to modify it as little as possible, without 
adding too much overhead to the code or memory. I just added a member to 
the TypeIdentifier struct. I would need to change a lot more to make it 
work as Descent worked... but I think what I did now with DMD is good 
enough. :-)


Re: Cross-references in generated ddoc

2012-04-04 Thread Ary Manzana

On 4/4/12 9:53 PM, Ary Manzana wrote:

On 4/4/12 8:05 PM, Jacob Carlborg wrote:

On 2012-04-04 13:38, Ary Manzana wrote:


Ah, no. That's because I ran it against object.di, which doesn't have
ddoc comments at all. I don't generate cross-references to undocumented
symbols.

I uploaded a new version which I ran against an object.di which has
empty ddocs for everything. Now you can see there are some
cross-references. (if you find some is missing, please tell me).


Cool, but what I actually was referring to was template types, i.e. the
first declaration in http://pancake.io/1e79d0/array.html:

ForeachType!(Range)[] array(Range)(Range r);

ForeachType is not a link.


You are right!

I was missing doing cross-reference for template instances. Now I did
it, but I was actually forgetting to do cross-references for template
instances inside templates. :-P

So now I did it. Take a look, much better! :-)

http://pancake.io/1e79d0/array.html#array


Whoa!

And take a look at this:

http://pancake.io/1e79d0/algorithm.html

It's all colorful and linky, even for template if conditions! :-D

Thanks for catching that, Jacob.

By the way, I think a "show source" feature would be nice to have, like what 
they have in Ruby... no? It helps you find bugs faster, or understand 
the code better if the documentation is not precise enough...


Re: Cross-references in generated ddoc

2012-04-04 Thread Ary Manzana

On 4/4/12 10:00 PM, David Gileadi wrote:

On 4/3/12 10:38 PM, Ary Manzana wrote:

Hi all,

I just submitted a pull request that makes ddoc generate
cross-references... even for templates!

https://github.com/D-Programming-Language/dmd/pull/865

It would be awesome if you can try it with your projects, see if it's
working properly and doesn't choke. I tried it with phobos and it worked
fine.

Also, if someone has ideas about how to solve the issues I describe,
they are more than welcome.


This looks good.

One bug: for http://pancake.io/1e79d0/array.html#insert it appears to
have dropped the name/link for which function to use instead.


No, that's not a bug. The ddoc comment is:

/++
$(RED Deprecated. It will be removed in May 2012.
  Please use $(LREF insertInPlace) instead.)

Same as $(XREF array, insertInPlace).
  +/

The problem is, I didn't define the LREF and XREF macros when generating 
the docs.


But when Walter and Andrei generate the docs they use this:

https://github.com/D-Programming-Language/d-programming-language.org/blob/master/std.ddoc#L316

If you ask me, that's a bad smell. What if I want to make the docs in my 
own format? How can I know all the macros to use? Hmmm


Cross-references in generated ddoc

2012-04-03 Thread Ary Manzana

Hi all,

I just submitted a pull request that makes ddoc generate 
cross-references... even for templates!


https://github.com/D-Programming-Language/dmd/pull/865

It would be awesome if you can try it with your projects, see if it's 
working properly and doesn't choke. I tried it with phobos and it worked 
fine.


Also, if someone has ideas about how to solve the issues I describe, 
they are more than welcome.


Re: DDoc with cross-references

2012-04-02 Thread Ary Manzana

On 4/2/12 2:07 PM, Jonathan M Davis wrote:

On Monday, April 02, 2012 13:52:47 Ary Manzana wrote:

On 4/2/12 12:39 PM, Jonathan M Davis wrote:

On Monday, April 02, 2012 12:20:31 Ary Manzana wrote:

I'm planning to add cross-references to the default ddoc output. At
least that's the simplest thing I could do right now that might improve
ddoc somehow.

I see the documentation generated for phobos, for example:

http://dlang.org/phobos/std_array.html#Appender

has anchors to the many symbols (in fact, now I notice it's flawed,
because they are not fully-qualified).

Does anyone know where can I get the macros for generating such output?
I will need it for generating the cross-links.

But a more appropriate question is: why the default ddoc output doesn't
generate such anchors by default? At least putting an ID to the
generated DT...


Phobos' macros are in

https://github.com/D-Programming-Language/d-programming-
language.org/blob/master/std.ddoc

As for linking macros,

LREF is used for references within a module.
XREF is used for references to std modules.
CXREF is used for references to core modules.
ECXREF is used for references to etc.c modules.


Again, the same things. D has ddoc and it tries to do everything with
ddoc. No, that's plain wrong. Links to other module members should be
done automatically. And the links should come from the compiler. The
compiler has that knowledge already, why lose it and work on a less
powerful level (ddoc)?


I'm not arguing it one way or another. I'm just pointing out how it works now.


As for Appender, I don't see any links at all, so I don't know what you're
talking about. The generic D macro (which just designates D code) is used
by it in some places, and ddoc does put some stuff in italics in some
cases (e.g. the name of a function's parameter in the documentation for
that function), but there are no links in Appender's documentation.


What I meant is, every member in the module has an anchor. In the case
of Appender it looks like this in the generated HTML:

<a name="Appender"></a>

That's why I can give you this link:

http://dlang.org/phobos/std_array.html#Appender

and it scrolls down to Appender (I know you know it already, but it
seems I wasn't clear in my previous post).

Now, that is flawed because the name is not fully qualified. And there's
no macro to get a fully qualified name or link to other modules' members.


The anchors have been a big problem for a long time. A prime example of where
they're horrible is std.datetime. They maintain _no_ hierarchy whatsoever. So,
_everything_ gets lumped together as if it were a free function, and if anything
has the same name (e.g. DateTime and SysTime both have a year property), then
they end up with identical anchors. The result is that the links at the top of
std.datetime are nearly useless.

It's ddoc's biggest problem IMHO.


Thanks again. This is what I want to fix.

I see this in the source code:

DDOC_DECL  = $(DT $(BIG $0))\n\

So what I want to do is to change that so that it includes an anchor. 
Should I change it to:


DDOC_DECL  = $(DT <a name="$0" /> $(BIG $1))\n\

or something like that, and then pass two arguments?

I find it hard to change the documentation output while having to deal 
with all those macros...


Re: DDoc with cross-references

2012-04-02 Thread Ary Manzana

On 4/2/12 2:16 PM, Ary Manzana wrote:

On 4/2/12 2:07 PM, Jonathan M Davis wrote:

On Monday, April 02, 2012 13:52:47 Ary Manzana wrote:

On 4/2/12 12:39 PM, Jonathan M Davis wrote:

On Monday, April 02, 2012 12:20:31 Ary Manzana wrote:

I'm planning to add cross-references to the default ddoc output. At
least that's the simplest thing I could do right now that might
improve
ddoc somehow.

I see the documentation generated for phobos, for example:

http://dlang.org/phobos/std_array.html#Appender

has anchors to the many symbols (in fact, now I notice it's flawed,
because they are not fully-qualified).

Does anyone know where can I get the macros for generating such
output?
I will need it for generating the cross-links.

But a more appropriate question is: why the default ddoc output
doesn't
generate such anchors by default? At least putting an ID to the
generated DT...


Phobos' macros are in

https://github.com/D-Programming-Language/d-programming-
language.org/blob/master/std.ddoc

As for linking macros,

LREF is used for references within a module.
XREF is used for references to std modules.
CXREF is used for references to core modules.
ECXREF is used for references to etc.c modules.


Again, the same things. D has ddoc and it tries to do everything with
ddoc. No, that's plain wrong. Links to other module members should be
done automatically. And the links should come from the compiler. The
compiler has that knowledge already, why lose it and work on a less
powerful level (ddoc)?


I'm not arguing it one way or another. I'm just pointing out how it
works now.


As for Appender, I don't see any links at all, so I don't know what
you're
talking about. The generic D macro (which just designates D code) is
used
by it in some places, and ddoc does put some stuff in italics in some
cases (e.g. the name of a function's parameter in the documentation for
that function), but there are no links in Appender's documentation.


What I meant is, every member in the module has an anchor. In the case
of Appender it looks like this in the generated HTML:

<a name="Appender"></a>

That's why I can give you this link:

http://dlang.org/phobos/std_array.html#Appender

and it scrolls down to Appender (I know you know it already, but it
seems I wasn't clear in my previous post).

Now, that is flawed because the name is not fully qualified. And there's
no macro to get a fully qualified name or link to other modules' members.


The anchors have been a big problem for a long time. A prime example
of where
they're horrible is std.datetime. They maintain _no_ hierarchy
whatsoever. So,
_everything_ gets lumped together as it were a free function, and if
anything
has the same name (e.g. DateTime and SysTime both have a year
property), then
they end up with identical anchors. The result is that the links at
the top of
std.datetime are nearly useless.

It's ddoc's biggest problem IMHO.


Thanks again. This is what I want to fix.

I see this in the source code:

DDOC_DECL = $(DT $(BIG $0))\n\

So what I want to do is to change that so that it includes an anchor.
Should I change it to:

DDOC_DECL = $(DT <a name="$0" /> $(BIG $1))\n\

or something like that, and then pass two arguments?

I find it hard to change the documentation output while having to deal
with all those macros...


Nevermind, found how to do it. I hope I can make it soon, hehe... :-P


Help with C++

2012-04-02 Thread Ary Manzana

Hi,

I'm trying to make some additions to DMD.

First I want to add a virtual function:

virtual void emitLink(OutBuffer *buf)

to the struct Type.

I did that. Then on doc.c I implement it empty:

void Type::emitLink(OutBuffer *buf) { }

Then I use it somewhere, like in AliasDeclaration::toDocBuffer:

type->emitLink(buf);

I compile it, it's fine. When I run DMD with the -D switch I get:

Bus error: 10

I thought maybe the type is null, so:

if (type) type->emitLink(buf);

But no luck.

I tried to copy the prototype of:

virtual TypeBasic *isTypeBasic()

without luck.

What am I doing wrong?

Thanks,
Ary


Re: Help with C++

2012-04-02 Thread Ary Manzana

On 4/3/12 4:01 AM, Dmitry Olshansky wrote:

On 02.04.2012 18:27, Ary Manzana wrote:

Hi,

I'm trying to make some additions to DMD.

First I want to add a virtual function:

virtual void emitLink(OutBuffer *buf)

to the struct Type.

I did that. Then on doc.c I implement it empty:

void Type::emitLink(OutBuffer *buf) { }

Then I use it somewhere, like in AliasDeclaration::toDocBuffer:

type->emitLink(buf);

I compile it, it's fine. When I run DMD with the -D switch I get:

Bus error: 10


Looks like a link-time breakage, i.e. some object file compiled against
an older version of header/src and consequently older v-table.
Maybe the dmd makefile is faulty and doesn't trigger proper rebuilds, do
a 'make clean' to be sure.






Yes, that was it. Thanks!


Re: Getting only the data members of a type

2012-04-01 Thread Ary Manzana

On 4/1/12 8:09 PM, Jacob Carlborg wrote:

On 2012-04-01 08:18, Ali Çehreli wrote:

On 03/31/2012 09:09 PM, Artur Skawina wrote:




 enum s = cast(S*)null;
 foreach (i, m; s.tupleof) {
 enum name = S.tupleof[i].stringof[4..$];
 alias typeof(m) type;
 writef("(%s) %s\n", type.stringof, name);
 }

 Real Programmers don't use std.traits. ;)

 artur

Your method works but needing to iterate on a struct variable by
s.tupleof and having to use the struct type as S.tupleof in the loop
body is strange.


Yeah, it's a bit strange. One could think that it would be possible to
use m.stringof but that just returns the type. Instead of using
s.tupleof it's possible to use typeof(S.tupleof).

Have a look at:

https://github.com/jacob-carlborg/orange/blob/master/orange/util/Reflection.d#L212


It's possible to get the type of a field as well, based on the name:

https://github.com/jacob-carlborg/orange/blob/master/orange/util/Reflection.d#L237


This is what I don't like about D. It gives you a hammer and everyone 
tries to solve all problems with that single hammer. Then you get 
duplicated code for basic stuff, like getting the type of a field, in 
many projects.


It's a waste of time for a developer to have to sit down and think how 
we can cheat the compiler or make it talk to give us something it 
already knows, but only having a hammer to do so.


Either put that in the language, or in the core library. But don't make 
people waste time.


I'd suggest sending a pull request with methods that fix those 
annoyances.


On the other hand, take a look at the implementation of std.traits. Is 
it really a win to implement functionLinkage in D? Right here:


https://github.com/D-Programming-Language/phobos/blob/master/std/traits.d#L704

you are repeating the linkages with their names, when that information 
is already available to the compiler. What's the point in duplicating 
information? The compiler already knows it, and much better than D. It 
could be implemented in a much simpler way. Is it just the pride of 
saying "Look what I can do with my powerful compile-time reflection 
capabilities" (basically stringof in that module)?


I'm not angry, but I don't think things are taking the correct direction...


DDoc with cross-references

2012-04-01 Thread Ary Manzana
I'm planning to add cross-references to the default ddoc output. At 
least that's the simplest thing I could do right now that might improve 
ddoc somehow.


I see the documentation generated for phobos, for example:

http://dlang.org/phobos/std_array.html#Appender

has anchors to the many symbols (in fact, now I notice it's flawed, 
because they are not fully-qualified).


Does anyone know where can I get the macros for generating such output? 
I will need it for generating the cross-links.


But a more appropriate question is: why the default ddoc output doesn't 
generate such anchors by default? At least putting an ID to the 
generated DT...


Re: DDoc with cross-references

2012-04-01 Thread Ary Manzana

On 4/2/12 12:20 PM, Ary Manzana wrote:

I'm planning to add cross-references to the default ddoc output. At
least that's the simplest thing I could do right now that might improve
ddoc somehow.

I see the documentation generated for phobos, for example:

http://dlang.org/phobos/std_array.html#Appender

has anchors to the many symbols (in fact, now I notice it's flawed,
because they are not fully-qualified).

Does anyone know where can I get the macros for generating such output?
I will need it for generating the cross-links.

But a more appropriate question is: why the default ddoc output doesn't
generate such anchors by default? At least putting an ID to the
generated DT...


I also wonder why it's not implemented. I mean, it seems *so* easy to do 
it. Just add a toDdocChars() method to every Dsymbol. For basic types, 
just output their string representation (int, float, etc.). For classes, 
structs, etc, just output:


<a href="module_name.html#struct_name">struct_name</a>

or something like that...


Re: DDoc with cross-references

2012-04-01 Thread Ary Manzana

On 4/2/12 12:39 PM, Jonathan M Davis wrote:

On Monday, April 02, 2012 12:20:31 Ary Manzana wrote:

I'm planning to add cross-references to the default ddoc output. At
least that's the simplest thing I could do right now that might improve
ddoc somehow.

I see the documentation generated for phobos, for example:

http://dlang.org/phobos/std_array.html#Appender

has anchors to the many symbols (in fact, now I notice it's flawed,
because they are not fully-qualified).

Does anyone know where can I get the macros for generating such output?
I will need it for generating the cross-links.

But a more appropriate question is: why the default ddoc output doesn't
generate such anchors by default? At least putting an ID to the
generated DT...


Phobos' macros are in

https://github.com/D-Programming-Language/d-programming-
language.org/blob/master/std.ddoc

As for linking macros,

LREF is used for references within a module.
XREF is used for references to std modules.
CXREF is used for references to core modules.
ECXREF is used for references to etc.c modules.


Again, the same things. D has ddoc and it tries to do everything with 
ddoc. No, that's plain wrong. Links to other module members should be 
done automatically. And the links should come from the compiler. The 
compiler has that knowledge already, why lose it and work on a less 
powerful level (ddoc)?




As for Appender, I don't see any links at all, so I don't know what you're
talking about. The generic D macro (which just designates D code) is used by
it in some places, and ddoc does put some stuff in italics in some cases (e.g.
the name of a function's parameter in the documentation for that function),
but there are no links in Appender's documentation.


What I meant is, every member in the module has an anchor. In the case 
of Appender it looks like this in the generated HTML:


<a name="Appender"></a>

That's why I can give you this link:

http://dlang.org/phobos/std_array.html#Appender

and it scrolls down to Appender (I know you know it already, but it 
seems I wasn't clear in my previous post).


Now, that is flawed because the name is not fully qualified. And there's 
no macro to get a fully qualified name or link to other modules' members.




Re: DIP16: Transparently substitute module with package

2012-03-30 Thread Ary Manzana

On 3/30/12 10:46 PM, Andrei Alexandrescu wrote:

Starting a new thread from one in announce:

http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP16

Please comment, after which Walter will approve. Walter's approval means
that he would approve a pull request implementing DIP16 (subject to
regular correctness checks).


Great. Large modules are my main complaint about D. :-)

If I correctly understand the second part (because I couldn't understand 
the text in the proposal until I read some comments here), then it makes 
sense. Is it like this?


sort(...) -> search sort in every module out there
std.sort(...) -> search sort in every module that's in the std package

If both std.algorithm.sort and std.path.sort exist, or something like 
that, then you would get a clash anyway, so you'd have to fully qualify it.


But if std.algorithm.sort and foo.bar.sort both exist and you'd import both:

import std.algorithm.package;
import foo.bar.package;

and you'd wanted to use both, then it could be convenient:

std.sort(...)
foo.sort(...)

Though I wonder if this indeed happens a lot. That's why I would wait 
until there's a real need for it. The main complaint people have is not 
having a way to import all files in a directory, which is the first 
point, but I never heard a complaint about the second point.


Also, I think it would make sense to change the first part to this:

* If the compiler sees a request for importing foo.bar and foo/bar 
is a directory, then automatically look for the file 
foo/bar/package.d. *If it doesn't exist, automatically expand the 
import to import all files under that directory.* If both foo/bar.d 
and foo/bar/ exist, compilation halts with an error.


That way you have convenience and safety. Most of the time people just 
put in package.d a list of all the files in that directory. Maybe 
sometimes (not sure) people restrict that list to some modules. And in 
those cases you can just restrict the list in package.d
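
Roughly, the layout the DIP describes would look like this (module names 
made up for illustration):

// foo/bar/baz.d:
//     module foo.bar.baz;
//     void hello() { }
//
// foo/bar/qux.d:
//     module foo.bar.qux;

// foo/bar/package.d just forwards to the real modules:
module foo.bar;

public import foo.bar.baz;
public import foo.bar.qux;

// Client code can then write `import foo.bar;` and call hello().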


Please, it's the year 2012. Compilers need to be smarter. Save people 
some typing time. You save them typing all the imports. But then you 
make them type them in that package.d file. Hmm...


Re: Documentation Layout

2012-03-29 Thread Ary Manzana

On 3/29/12 5:24 PM, foobar wrote:

On Thursday, 29 March 2012 at 01:52:28 UTC, James Miller wrote:

On 29 March 2012 13:58, Nathan M. Swan nathanms...@gmail.com wrote:

On Wednesday, 28 March 2012 at 22:43:19 UTC, foobar wrote:



Have you considered that perhaps the granularity of Phobos modules is
too coarse? Perhaps the core issue is too many functions are placed in
one single file without more consideration of their relation and
organization?


I think it's just fine. Not everything is in std.algorithm, and that's 
good. Because everything we write is an algorithm, right?


I mean, what can I expect to find in std.algorithm? Binary search? Index 
of? Levenshtein distance? Spanning tree? Minimum weight? TSP?


Compiling DMD's source code in Mac OSX

2012-03-29 Thread Ary Manzana

Hi,

Does anyone have a build script or something similar for compiling DMD 
under Mac? I cloned the repo but all I can see is a win32.mak file. I 
renamed it to Makefile and tried make but no luck.


I don't have much experience with make or makefiles.

Thanks,
Ary


Re: Documentation Layout

2012-03-28 Thread Ary Manzana

On 3/28/12 2:20 PM, James Miller wrote:

Ok, so I'm going to say this: I like the Java documentation. There, I
said it.


I like it too.

http://downloads.dsource.org/projects/descent/ddoc/phobos/
http://downloads.dsource.org/projects/descent/ddoc/tango/

And cross-references are not hard at all. The compiler has everything it 
needs to know them (it can compile your code, which is way more complex! 
:-P)


Back then when I presented that format (which could be improved, it's 
still hard to find something on the right pane) there wasn't a lot of 
interest in it. I think documentation look & feel is paramount.


Re: D web apps: cgi.d now supports scgi

2012-03-26 Thread Ary Manzana

On 3/25/12 12:43 PM, Adam D. Ruppe wrote:

https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff


some docs:
http://arsdnet.net/web.d/cgi.html
http://arsdnet.net/web.d/cgi.d.html


Very nice!

I'd recommend moving those two html pages to github's wiki, or some 
other wiki. If people start using your library they can contribute with 
explanations, example usages, etc.


I also see many empty or short sections in those documents, which again 
I think is asking for a wiki.


I'm also not sure about the format you provide for getting the code: 
many unrelated modules all in a single directory. If I want to start 
developing web apps using your framework I need to clone that repo, 
think which files to import, etc. If all the related web stuff were in a 
separate repository, I could just clone it, import an all file and 
that's it.


(well, the last point isn't really your fault, something like Jacob 
Carlborg's Orbit is really needed to make D code universally accessible 
and searchable)


Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-26 Thread Ary Manzana

On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:

On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:


Dude, this is awesome. I tend to just use time, but if I was doing
anything more complicated, I'd use this. I would suggest changing the
name while you still can. avgtime is not that informative a name given
that it now does more than just Average times.

--
James Miller




Dude, this is awesome.


Thanks!! I appreciate your feedback!


I would suggest changing the name while you still can.


Suggestions welcome!!

--jm



give_me_d_average


Re: D web apps: cgi.d now supports scgi

2012-03-26 Thread Ary Manzana

On 3/27/12 10:25 AM, Adam D. Ruppe wrote:

On Tuesday, 27 March 2012 at 00:53:45 UTC, Ary Manzana wrote:

I'd recommend moving those two html pages to github's wiki, or some
other wiki. If people start using your library they can contribute
with explanations, example usages, etc.


Yeah, I started that for the dom.d but haven't gotten
around to much yet.


(snip)




(well, the last point isn't really your fault, something like Jacob
Carlborg's Orbit is really needed to make D code universally
accessible and searchable)


I could add my build.d up there too... which offers
auto downloading and module adding, but it is kinda
slow (it runs dmd twice).


How slow is it compared to a developer doing it manually? :-)


Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-26 Thread Ary Manzana

On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo wrote:

On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:

On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:

On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:

Dude, this is awesome. I tend to just use time, but if I was 
doing
anything more complicated, I'd use this. I would suggest 
changing the
name while you still can. avgtime is not that informative a 
name given

that it now does more than just Average times.

--
James Miller




Dude, this is awesome.


Thanks!! I appreciate your feedback!


I would suggest changing the name while you still can.


Suggestions welcome!!

--jm



give_me_d_average



Hahahah, naahh, I prefer avgtime or timestats, because timestab
would autocomplete to timestats.

How have you been after all this time? Thanks for mentioning D to me
years ago. It stuck in my head, and last year when I started a new job
I had the chance to get into D.

Cheers Ary, I hope you're doing well!!
--jm


I suggested the name as a joke :-P

I was really surprised to see you on the list! I thought "Juanma?". 
How cool that you like D. I like it too, but it has 
some ugly parts that unfortunately I don't see changing 
any time soon... (or ever).


So you are using D for work?


Re: virtual-by-default rant

2012-03-24 Thread Ary Manzana

On 3/24/12 3:03 AM, Manu wrote:

On 23 March 2012 17:24, Ary Manzana a...@esperanto.org.ar wrote:

On 3/18/12 9:23 AM, Manu wrote:

The virtual model is broken. I've complained about it lots, and people
always say stfu, use 'final:' at the top of your class.

That sounds tolerable in theory, except there's no 'virtual'
keyword to
keep the virtual-ness of those 1-2 virtual functions I have...
so it's
no good (unless I rearrange my class, breaking the logical
grouping of
stuff in it).
So I try that, and when I do, it complains: Error: variable
demu.memmap.MemMap.machine final cannot be applied to variable,
allegedly a D1 remnant.
So what do I do? Another workaround? Tag everything as final
individually?

My minimum recommendation: D needs an explicit 'virtual'
keyword, and to
fix that D1 bug, so putting final: at the top of your class
works, and
everything from there works as it should.


Is virtual-ness your performance bottleneck?


Frequently. It's often the most expensive 'trivial' operation many
processors can be asked to do. Senior programmers (who have much better
things to waste their time on considering their pay bracket) frequently
have to spend late nights mitigating this even in C++ where virtual
isn't default. In D, I'm genuinely concerned by this prospect. Now I
can't just grep for virtual and fight them off, which is time consuming
alone, I will need to take every single method, one by one, prove it is
never overloaded anywhere (hard to do), before I can even begin the
normal process of de-virtualising it like you do in C++.
The problem is elevated by the fact that many programmers are taught in
university that virtual functions are okay. They come to the company,
write code how they were taught in university, and then we're left to
fix it up on build night when we can't hold our frame rate. virtual
functions and scattered/redundant memory access are usually the first
thing you go hunting for. Fixing virtuals is annoying when the system
was designed to exploit them, it often requires some extensive
refactoring, much harder to fix than a bad memory access pattern, which
might be as simple as rearranging a struct.


Interesting.

I spend most of my work time programming in Ruby, where everything is 
virtual+ :-P


It's good to know that virtual-ness can be a bottleneck.
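
For readers who haven't hit this, a minimal sketch of the workaround 
Manu is describing (MemMap and machine come from his example; update is 
a made-up method name):

class MemMap
{
final:                       // everything from here on is non-virtual...
    void read()  { }
    void write() { }

    // ...but there is no `virtual` keyword to opt the one or two methods
    // that really need to be overridable back in:
    // virtual void update();   // <-- no such keyword in D

    // and the label also hits fields, triggering the D1-era error he quotes:
    // int machine;             // Error: final cannot be applied to variable
}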


Re: virtual-by-default rant

2012-03-23 Thread Ary Manzana

On 3/18/12 9:23 AM, Manu wrote:

The virtual model is broken. I've complained about it lots, and people
always say stfu, use 'final:' at the top of your class.

That sounds tolerable in theory, except there's no 'virtual' keyword to
keep the virtual-ness of those 1-2 virtual functions I have... so it's
no good (unless I rearrange my class, breaking the logical grouping of
stuff in it).
So I try that, and when I do, it complains: Error: variable
demu.memmap.MemMap.machine final cannot be applied to variable,
allegedly a D1 remnant.
So what do I do? Another workaround? Tag everything as final individually?

My minimum recommendation: D needs an explicit 'virtual' keyword, and to
fix that D1 bug, so putting final: at the top of your class works, and
everything from there works as it should.


Is virtual-ness your performance bottleneck?
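
For reference, a minimal sketch of the final: workaround being discussed; the class and member names are only illustrative (MemMap and machine are taken from the error message quoted above). The catch is that the one or two methods that must stay virtual have to be grouped above the final: label, since there is no 'virtual' keyword to turn dynamic dispatch back on below it:

class MemMap
{
    // These must remain overridable, so they are grouped up here,
    // above the final: label.
    void read() { }
    void write() { }

final:  // every declaration from this label on is non-virtual
    void helperA() { }
    void helperB() { }

    // int machine;  // per the error quoted above, a field placed after
    //               // final: used to trigger "final cannot be applied
    //               // to variable" (the alleged D1 remnant)
}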


Re: Proposal: user defined attributes

2012-03-22 Thread Ary Manzana

On 3/22/12 2:32 AM, Andrei Alexandrescu wrote:

On 3/21/12 12:06 PM, Jacob Carlborg wrote:

On 2012-03-21 16:11, Andrei Alexandrescu wrote:
I think the liability here is that b needs to appear in two places, once

in the declaration proper and then in the NonSerialized part. (A
possible advantage is that sometimes it may be advantageous to keep all
symbols with a specific attribute in one place.) A possibility would be
to make the mixin expand to the field and the metadata at once.


Yes, but that just looks ugly:

class Foo
{
    int a;
    mixin NonSerialized!(int, b);
}

That's why it's so nice with attributes.


Well if the argument boils down to nice vs. ugly, as opposed to possible
vs. impossible - it's quite a bit less compelling.

Andrei


Why don't you program everything with gotos instead of for, foreach and 
while? If it boils down to nice vs. ugly, as opposed to possible vs. 
impossible...


Hmm...
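
To make the alternative Andrei mentions concrete, here is a rough sketch of a mixin that expands to the field and the metadata at once; the marker convention and the names are invented for illustration, not taken from any existing serialization library:

// Declares the field itself plus a compile-time marker that a
// serializer could later discover through __traits.
mixin template NonSerialized(T, string name)
{
    mixin("T " ~ name ~ ";");
    mixin("enum bool nonSerialized_" ~ name ~ " = true;");
}

class Foo
{
    int a;
    mixin NonSerialized!(int, "b");  // expands to `int b;` plus the marker
}

static assert(__traits(hasMember, Foo, "b"));
static assert(__traits(hasMember, Foo, "nonSerialized_b"));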


Re: Dynamic language

2012-03-15 Thread Ary Manzana

On 3/15/12 4:09 AM, so wrote:

Hello,

Not related to D but this is a community which i can find at least a few
objective person. I want to invest some quality time on a dynamic
language but i am not sure which one. Would you please suggest one?

To give you an idea what i am after:
Of all one-liners i have heard only one gets me.
The programmable programming language. Is it true? If so Lisp will be
my first choice.

Thanks.


I suggest Ruby. You can practically change everything in the language 
and use metaprogramming. With Ruby you don't have to fight with the 
compiler (or interpreter). You feel free and have fun. Plus the standard 
library has lots and lots of methods you definitely will use and won't 
have to write from scratch.


Re: Multiple return values...

2012-03-14 Thread Ary Manzana

On 3/13/12 6:12 PM, Andrei Alexandrescu wrote:

On 3/13/12 2:57 PM, Manu wrote:

And you think that's more readable and intuitive than: (v1, v2, v3) =
fun(); ?


Yes (e.g. when I see the commas my mind starts running in all directions
because that's valid code nowadays that ignores v1 and v2 and keeps v3
as an lvalue).


Who uses that, except code generators? I'd like D to deprecate the comma 
so it can be used for other things, like tuple assignment.




Let me put it another way: I don't see one syntax over another a deal
maker or deal breaker. At all.


Well, it's sad that syntax is not very important in D. If you have to 
write less code there will be less chance for bugs and it will be more 
understandable (unless you obfuscate the code, obviously).


Here's what you can do in Ruby:

a = 1
b = 2

# Swap the contents
a, b = b, a

Can you do something like that with templates in D, with a nice syntax?
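
For completeness: returning several values is already doable today through std.typecons.Tuple; it is only the unpacking (and swap) syntax that is missing. A minimal sketch:

import std.typecons : Tuple, tuple;

// Returns two values packed into a Tuple.
Tuple!(int, string) fun()
{
    return tuple(42, "hello");
}

void main()
{
    auto r = fun();
    int v1 = r[0];      // unpacking is manual; there is no (v1, v2) = fun();
    string v2 = r[1];
    assert(v1 == 42 && v2 == "hello");
}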


Re: Multiple return values...

2012-03-14 Thread Ary Manzana

On 3/14/12 5:00 PM, Simen Kjærås wrote:

On Wed, 14 Mar 2012 20:02:50 +0100, Ary Manzana a...@esperanto.org.ar
wrote:


Here's what you can do in Ruby:

a = 1
b = 2

# Swap the contents
a, b = b, a

Can you do something like that with templates in D, with a nice syntax?


template to(T...) {
    alias T to;
}

auto from(T...)(T t) {
    struct Result { T t; alias t this; }
    return Result( t );
}

void main( ) {
    int a = 3;
    int b = 4;

    to!(a, b) = from(b, a);

    assert( a == 4 );
    assert( b == 3 );
}


Awesome! :-)


Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-13 Thread Ary Manzana

On 03/13/2012 02:14 AM, H. S. Teoh wrote:

On Mon, Mar 12, 2012 at 10:35:54PM -0400, Nick Sabalausky wrote:

Jonathan M Davis jmdavisp...@gmx.com wrote in message
news:mailman.572.1331601463.4860.digitalmar...@puremagic.com...

[...]

All I'm saying is that if it makes sense for the web developer to
use javascript given what they're trying to do, it's completely
reasonable to expect that their users will have javascript enabled
(since virtually everyone does). If there's a better tool for the
job which is reasonably supported, then all the better. And if it's
easy to provide a workaround for the lack of JS at minimal effort,
then great. But given the fact that only a very small percentage of
your user base is going to have JS disabled, it's not unreasonable
to require it and not worry about the people who disable it if
that's what you want to do.



Personally, I disagree with the notion that non-JS versions are a
workaround.

[...]

Me too. To me, non-JS versions are the *baseline*, and JS versions are
enchancements. To treat JS versions as baseline and non-JS versions as
workaround is just so completely backwards.


While I don't agree that non-JS is the baseline (because most if not all 
browsers come with JS enabled by default, so why would you want to
disable javascript?), I'm starting to understand that providing both
non-JS and JS versions is useful.


At least so that:
 - Some users don't go mad when they can't use it, and then realise 
it's because JS is disabled

 - And for the above reason, not to lose reputation with those people :-P

But if people didn't have an option to disable JS, we wouldn't have this 
discussion. I think of it as having an option to disable CSS.


(I was going to argue that my cellphone didn't have an option to
disable JS, but it does... hmm... :-P)


Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-13 Thread Ary Manzana

On 03/13/2012 01:52 AM, Nick Sabalausky wrote:

Ary Manzana a...@esperanto.org.ar wrote in message
news:jjmhja$3a$2...@digitalmars.com...

On 03/12/2012 10:58 PM, H. S. Teoh wrote:


The problem today is that JS is the next cool thing, so everyone is
jumping on the bandwagon, and everything from a single-page personal
website to a list of links to the latest toaster oven requires JS to
work, even when it's not necessary at all. That's the silliness of it
all.


T


It's not the next cool thing. It makes things more understandable for the
user. And it makes the web transfer less content,


That gets constantly echoed throughout the web, but it's a red herring: Even
if you handle it intelligently like Adam does (ie, lightweight), the amount
of data transfer saved is trivial. We're talking *part* of *one* measly HTML
file here. And even that can be gzipped: HTML compresses *very* well. Yes,
techincally it can be less transfer, but only negligably so. And bandwith is
the *only* possible realistic improvement here, not speed, because the speed
of even a few extra K during a transfer that was already going to happen
anyway is easily outweighed by the overhead of things like actually making a
round-trip to the server at all, plus likely querying a server-side DB, plus
interpreting JS, etc.

If, OTOH you handle it like most people do, and not like Adam does, then for
brief visits you can actually be tranferring *more* data just because of all
that excess JS boilerplate people like to use. (And then there's the
start-up cost of actually parsing all that boilerplate and then executing
their initialization portions. And in many cases there's even external JS
getting loaded in, etc.)

The problem with optimization is that it's not a clear-cut thing: If you're
not looking at it holistically, optimizing one thing can either be an
effective no-op or even cause a larger de-optimization somewhere else. So
just because you've achived the popular goal of less data transer upon
your user clicking a certain link, doesn't necessarily mean you've won a net
gain, or even broken even.


True.

I always keep in mind this interesting talk about saying "this is
faster than that" without scientific proof:


http://vimeo.com/9270320


Re: Can I do an or in a version block?

2012-03-13 Thread Ary Manzana

On 3/13/12 2:21 PM, Ali Çehreli wrote:

On 03/09/2012 06:20 AM, Andrej Mitrovic wrote:

  The same story goes for unittests which can't be independently
  run to get a list of all failing unittests

D unittest blocks are for code correctness (as opposed to other meanings
of the unfortunately overused term unit testing e.g. the functional
testing of the end product). From that point of view, there should not
be even a single test failing.

 , and so people are coming
  up with their own custom unittest framework (e.g. the Orange library).

Yes, some unit test features are missing. From my day-to-day use I would
like to have the following:

- Ensure that a specific exception is thrown

- Test fixtures

That obviously reflects my use of unit tests but I really don't care how
many tests are failing. The reason is, I start with zero failures and I
finish with zero failures. Any code that breaks an existing test is
either buggy or exposes an issue with the test, which must be dealt with
right then.

Ali



How can you re-run just a failing test? (without having to run all the 
previous tests that will succeed?)
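
As a side note on the first item in Ali's list: checking that a specific exception is thrown is covered by std.exception.assertThrown, at least in reasonably recent Phobos versions. A minimal sketch:

import std.conv : ConvException, to;
import std.exception : assertThrown;

unittest
{
    // The unittest fails unless to!int throws a ConvException here.
    assertThrown!ConvException(to!int("not a number"));
}

void main() { }  // entry point so the file can be run with: dmd -unittest -run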


Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-12 Thread Ary Manzana

On 03/12/2012 08:32 PM, Nick Sabalausky wrote:

Adam D. Ruppe destructiona...@gmail.com wrote in message
news:npkazdoslxiuqxiin...@forum.dlang.org...

On Monday, 12 March 2012 at 23:23:13 UTC, Nick Sabalausky wrote:

at the end of the day, you're still saying fuck you to millions of
people.


...for little to no reason. It's not like making 99% of
sites work without javascript takes *any* effort.



*Exactly*. And nobody can tell me otherwise because *I DO* exactly that sort
of web development. Plus, it often makes for a *worse* user experience even
when JS is on - look at Vladimir's D forums vs reddit. Vladimir put reddit
to shame *on reddit*, for god's sake! And how many man-hours of effort do
you think went into those D forums vs reddit?


Indeed, going without javascript is often desirable
anyway, since no JS sites are /much/ faster than script
heavy sites.


Yup. Guess I already responded to this in the paragraph above :)


It's not about the speed. It's about behaviour.

Imagine I do a blog site and want people to leave comments. I decide the
best thing for the user is to just enter the comment in a text area, 
press a button, and have the comment turn into a text block, and say 
something like Comment saved!. From a UI perspective, it's the most 
reasonable thing to do: you leave a comment, it becomes a definitive 
comment on the blog, that's it.


The implementation is straightforward (even more so if I use something like
knockoutjs): I post the comment to the server via javascript and on the 
callback, turn that editing comment into a definitive comment. Note 
that only the comment contents were transferred between the client and
the server.


Now, I have to support people who don't like javascript (and those people
ONLY include developers, as most people don't even know the difference
between google and a web browser).


To implement that I have to check for disabled javascript, and post the 
comment to a different url that will save the comment and redirect to 
the same page. First, it's a strange experience for the user: navigating 
to another page while it's really going to the same page, just with one 
more comment (and how can I make it scroll without javascript to let the 
user see the comment just created? Or should I implement an intermediate 
page saying here's your newly created comment, now go back to the 
post). Second, the whole page is transferred again! I can't see how in 
the world that is faster than not transferring anything at all.


I know, I had to transfer some javascript. But just once, since it'll be 
cached by the browser. In fact, if the page has a static html which
invokes javascript that makes callbacks, that's the most efficient thing 
to do. Because even if your comments change, the whole page remains the 
same: elements will be rendered after *just* the comment's contents (in
JSON) are transferred.


Again, I don't understand how that is slower than transferring whole 
pages the whole time.


Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-12 Thread Ary Manzana

On 03/12/2012 10:58 PM, H. S. Teoh wrote:

On Mon, Mar 12, 2012 at 09:17:22PM -0400, Jonathan M Davis wrote:

On Tuesday, March 13, 2012 01:50:29 Adam D. Ruppe wrote:

On Tuesday, 13 March 2012 at 00:25:15 UTC, Jonathan M Davis wrote:

But that's a decision based on your needs as a website developer.
If JS best suits whatever the needs of a particular website
developer are, then they are completely justified in using it,
because 99% of the people out there have it enabled in their
browsers.


If it takes ten seconds to support 100% of the people out there, why
not?


[snip]


Now, there *are* cases where you can't do this so easily.
If you're stuck on poor PHP I'm sure this is harder than
in D too... but really, do you have one of those cases?


All I'm saying is that if it makes sense for the web developer to use
javascript given what they're trying to do, it's completely reasonable
to expect that their users will have javascript enabled (since
virtually everyone does). If there's a better tool for the job which
is reasonably supported, then all the better. And if it's easy to
provide a workaround for the lack of JS at minimal effort, then great.
But given the fact that only a very small percentage of your user base
is going to have JS disabled, it's not unreasonable to require it and
not worry about the people who disable it if that's what you want to
do.

[...]

The complaint is not with using JS when it's *necessary*. It's with
using JS *by default*. It's with using JS just because you can, even
when it's *not needed* at all.

It's like requiring you to have a TV just to make a simple phone call.
Sure, you can do cool stuff like hooking up the remote end's webcam to
the TV and other such fluff like that. But *requiring* all of that for a
*phone call*?  Totally unnecessary, and a totally unreasonable
requirement, even if 95% (or is that 99.9%?) of all households own a TV.
(And for the record, I don't own one, and do not plan to. I know I'm in
the minority.  That doesn't negate the fact that such a requirement is
unreasonable.)

OTOH if you want to *watch a movie*, well, then requiring a TV is
completely reasonable.

The problem today is that JS is the next cool thing, so everyone is
jumping on the bandwagon, and everything from a single-page personal
website to a list of links to the latest toaster oven requires JS to
work, even when it's not necessary at all. That's the silliness of it
all.


T


It's not the next cool thing. It makes things more understandable for the
user. And it makes the web transfer less content, and leverages server 
processing time. It's the next step. It's not a backwards step. :-P


I figure, then, that Google people are just a bunch of idiots who like
JS a lot...


Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-12 Thread Ary Manzana

On 03/13/2012 01:29 AM, James Miller wrote:

On 13 March 2012 17:07, Ary Manzana a...@esperanto.org.ar wrote:

On 03/12/2012 08:32 PM, Nick Sabalausky wrote:


Adam D. Ruppe destructiona...@gmail.com wrote in message
news:npkazdoslxiuqxiin...@forum.dlang.org...


On Monday, 12 March 2012 at 23:23:13 UTC, Nick Sabalausky wrote:


at the end of the day, you're still saying fuck you to millions of
people.



...for little to no reason. It's not like making 99% of
sites work without javascript takes *any* effort.



*Exactly*. And nobody can tell me otherwise because *I DO* exactly that
sort
of web development. Plus, it often makes for a *worse* user experience
even
when JS is on - look at Vladimir's D forums vs reddit. Vladimir put reddit
to shame *on reddit*, for god's sake! And how many man-hours of effort do
you think went into those D forums vs reddit?


Indeed, going without javascript is often desirable
anyway, since no JS sites are /much/ faster than script
heavy sites.



Yup. Guess I already responded to this in the paragraph above :)



It's not about the speed. It's about behaviour.

Imagine I do a blog site and want people to leave comments. I decide the
best thing for the user is to just enter the comment in a text area, press a
button, and have the comment turn into a text block, and say something like
Comment saved!. From a UI perspective, it's the most reasonable thing to
do: you leave a comment, it becomes a definitive comment on the blog, that's
it.

The implementation is straightforward (even more so if I use something like
knockoutjs): I post the comment to the server via javascript and on the
callback, turn that editing comment into a definitive comment. Note that
only the comment contents were transferred between the client and the server.

Now, I have to support people who don't like javascript (and those people
ONLY include developers, as most people don't even know the difference
between google and a web browser).

To implement that I have to check for disabled javascript, and post the
comment to a different url that will save the comment and redirect to the
same page. First, it's a strange experience for the user: navigating to
another page while it's really going to the same page, just with one more
comment (and how can I make it scroll without javascript to let the user see
the comment just created? Or should I implement an intermediate page saying
here's your newly created comment, now go back to the post). Second, the
whole page is transferred again! I can't see how in the world that is faster
than not transferring anything at all.

I know, I had to transfer some javascript. But just once, since it'll be
cached by the browser. In fact, if the page has a static html which invokes
javascript that makes callbacks, that's the most efficient thing to do.
Because even if your comments change, the whole page remains the same:
elements will be rendered after *just* the comment's contents (in JSON) are
transferred.

Again, I don't understand how that is slower than transferring whole pages
the whole time.


Ary, the idea is to start with the static HTML version, then
progressively add javascript to improve the functionality. If you have
javascript at your disposal, you can change the behavior of the
existing page.

Your example would be:

1. Start with normal POST-request comment form, make sure it works.
(HTTP redirect back to original page)
2. Add javascript that listens to the submit on the comment form.
2a. Stop the default submit, submit the form to the same endpoint as 1
3. On success, do your in-page comment action.

And thats about it. I'm sure you could break it down more. There's
also more you can do, most of it server-side (check for ajax post,
return JSON, etc.), but the idea is that the extra effort to support
HTML-only isn't really extra effort. Since you have to submit the form
anyway, then why not allow it to submit by regular HTTP first.

Ideally, you don't have to detect for javascript, you just have to
*shock horror* code to web standards.

--
James Miller


But the non-javascript version is a worse user experience, and it's less
efficient. Why not just do it well from scratch?


Re: Arbitrary abbreviations in phobos considered ridiculous

2012-03-11 Thread Ary Manzana

On 03/11/2012 05:47 AM, Nick Sabalausky wrote:

H. S. Teoh hst...@quickfur.ath.cx wrote in message
news:mailman.454.1331448329.4860.digitalmar...@puremagic.com...

On Sat, Mar 10, 2012 at 09:14:26PM -0500, Nick Sabalausky wrote:

H. S. Teoh hst...@quickfur.ath.cx wrote in message
news:mailman.447.1331426602.4860.digitalmar...@puremagic.com...

[...]

In the past, I've even used UserJS to *edit* the site's JS on the
fly to rewrite stupid JS code (like replace sniffBrowser() with a
function that returns true, bwahahaha) while leaving the rest of the
site functional.  I do not merely hate Javascript, I fight it, kill
it, and twist it to my own sinister ends.:-)



I admire that :) Personally, I don't have the patience. I just bitch
and moan :)


Well, that was in the past. Nowadays they've smartened up (or is it
dumbened down?) with the advent of JS obfuscators. Which, OT1H, is silly
because anything that the client end can run will eventually be cracked,
so it actually doesn't offer *real* protection in the first place, and
OTOH annoying 'cos I really can't be bothered to waste the time and
effort to crack some encrypted code coming from some shady site that
already smells of lousy design and poor implementation anyway.

So I just leave and never come back to the site.



I'd prefer to do that (leave and never come back), but unfortunately, the
modern regression of tying data/content to the interface often makes that
impossible:

For example, I can't see what materials my library has available, or manage
my own library account, without using *their* crappy choice of software.
It's all just fucking data! Crap, DBs are an age-old thing.

Or, I'd love to be able leave GitHub and never come back. But DMD is on
GitHub, so I can't create/browse/review pull requests, check what public
forks are available, etc., without using GitHub's piece of shit site.

I'd love to leave Google Code, Google Docs and YouTube and never come back,
but people keep posting their content on those shitty sites which,
naturally, prevent me from accessing said content in any other way.

Etc...

And most of that is all just because some idiots decided to start treating a
document-transmission medium as an applications platform.

I swear to god, interoperability was better in the 80's.

(And jesus christ, *Google Docs*?!? How the fuck did we ever get a document
platform *ON TOP* of a fucking *DOCUMENT PLATFORM* and have people actually
*TAKE IT SERIOUSLY*!?! Where the hell was I when they started handing out
the free crazy-pills?)


Nick, how would you implement (protocols, architecture, whatever) an 
online document editor?

