Re: dmd support for IDEs

2009-10-10 Thread bearophile
Walter Bright:

> but we are operating under severe manpower 
> constraints. I don't have a 100 million dollar budget!

And sometimes this is even an advantage, because it forces you to keep things 
simple and not over-engineered :-)

Bye,
bearophile


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

Jeremie Pelletier wrote:

The official JSON website has tons of bindings, here's the C one:

http://fara.cs.uni-potsdam.de/~jsg/json_parser/

I'm gonna try and get it converted to D over the weekend.


It has a test suite with it!


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

Walter Bright wrote:
Experience also suggests that using fork/exec rather than a shared dll 
approach is much more robust and easier to develop. The reason is that 
the former uses separate processes, which cannot step on each other. The 
latter puts everything in one process space, where you've got all the 
lovely, time-consuming, hair-pulling concurrency problems. The utter 
failure of the parse process also cannot bring down the IDE.



In particular, if the compiler seg faults (does it ever do that?) it 
won't stop the IDE.


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

Jeremie Pelletier wrote:
The IDE usually keeps the files in memory and could therefore just call 
something like getSemantics(char** fileBuffers, int* fileSizes, int 
nFiles, ParseNode* parseTree) and have its parse nodes already allocated 
in process memory ready for use.


Considering a lot of IDEs like to re-parse the current file every time 
the keyboard is idle for a few seconds, this could really help 
performance; nothing is more annoying than an IDE that feels unresponsive.


I understand and agree, but we are operating under severe manpower 
constraints. I don't have a 100 million dollar budget! (I'm sure MS 
spends more than that on VS.)


You're certainly welcome to take the compiler front end and try and make 
a dll out of it or integrate it directly into an IDE. But what I 
suggested would probably get a lot of results for a minimal investment 
in the front end and a minimal investment in existing IDEs.




My experience with making responsive interactive apps on slow machines 
suggests that using a multithreaded approach would make the IDE 
responsive even if the underlying parsing process is slow. What you do 
is, every time the source file changes, fire off a background thread at 
a low priority to reparse. If the source changes before it finishes, 
restart that thread. When the IDE actually needs the results, it uses 
the results of the most recently finished parse.


With this approach, there won't be any hangs where the keyboard is 
unresponsive.
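The restart-on-change scheme described above can be sketched roughly as follows (Python for illustration only; the parse function, thread priority, and IDE hooks are placeholders, not any actual dmd or IDE interface):

```python
import threading

class BackgroundParser:
    """Re-parse in a background thread; a newer edit invalidates any
    in-flight parse, and the IDE always reads the last completed result."""

    def __init__(self, parse_fn):
        self.parse_fn = parse_fn      # placeholder for the real parser
        self.lock = threading.Lock()
        self.latest = None            # most recently *finished* parse
        self.generation = 0           # bumped on every source change

    def source_changed(self, text):
        """Called whenever the buffer changes: start a fresh parse."""
        with self.lock:
            self.generation += 1
            gen = self.generation
        t = threading.Thread(target=self._work, args=(text, gen), daemon=True)
        t.start()
        return t

    def _work(self, text, gen):
        result = self.parse_fn(text)
        with self.lock:
            # Discard the result if the source changed while we parsed.
            if gen == self.generation:
                self.latest = result

    def current_result(self):
        """The IDE calls this when it needs data; never blocks on a parse."""
        with self.lock:
            return self.latest
```

The key property is that `current_result` never waits, so the keyboard thread stays responsive no matter how slow the parse is.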



Experience also suggests that using fork/exec rather than a shared dll 
approach is much more robust and easier to develop. The reason is that 
the former uses separate processes, which cannot step on each other. The 
latter puts everything in one process space, where you've got all the 
lovely, time-consuming, hair-pulling concurrency problems. The utter 
failure of the parse process also cannot bring down the IDE.
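The fork/exec-and-capture approach amounts to something like this (a Python stand-in; the dmd command line shown in the comment is hypothetical, since no machine-readable output flag existed at the time of this thread):

```python
import json
import subprocess

def query_compiler(cmd):
    """Run the compiler as a separate process and parse its JSON stdout.
    A crash in the child (even a segfault) cannot take down the caller;
    we just see a nonzero exit status and keep the previous data.
    `cmd` is whatever invocation emits JSON, e.g. a hypothetical
    ["dmd", "-json", "-o-", "file.d"]."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        return None  # compiler failed: caller keeps its old parse data
    return json.loads(proc.stdout)
```

Because the compiler runs in its own address space, the IDE gets the robustness Walter describes for free, at the cost of re-parsing a small text stream.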


Re: Phobos.testing

2009-10-10 Thread Andrei Alexandrescu

dsimcha wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

Michel Fortin wrote:

On 2009-10-10 19:01:35 -0400, dsimcha  said:


Overall, the point is that there should be a well-defined process for
getting code into Phobos and a well-defined place to post this code and
comment on it. Bugzilla probably doesn't cut it because it's not easy to
download, compile and test lots of different snippets of code from here.

There should indeed be a process for proposing new modules or major
features. I don't care much what it is, but it should make code
available for review from all the interested parties, and allow public
discussion about this code. Whether this discussion should happen on
this newsgroup or elsewhere, I'm not sure however.

And it'd be nice if it could auto-generate documentation from the
proposed modules: glancing at the documentation often gives you a
different perspective on the API, and it'd encourage people to write
good documentation.

I'm all for accepting additions to Phobos, and for putting in place a
process to do so. I suggest we follow a procedure used to great effect
by Boost. They have a formal process in place that consists of a
preliminary submission, a refinement period, a submission, a review, and
a vote.
http://www.boost.org/development/submissions.html
I compel you all to seriously consider it, and am willing to provide
website space and access.
Andrei


This sounds pretty good, except that I think it would be even better if the
whole phobos.testing lib were easy for testers to download and install and
play around with in non-production code.  Actually using a library, even in
toy/hobby projects, instead of just looking at it on paper makes it a lot
easier to give informed opinions on it.


Yah, I think Boost has a "sandbox" that allows that.

So, ready to submit your Rationals library? :o)

Andrei


Re: dmd support for IDEs

2009-10-10 Thread Jeremie Pelletier

Walter Bright wrote:
But if you want to contribute, how about a JSON parser for phobos? 
You'll need one anyway for your IDE.


BTW, JSON parsing comes for free with javascript. Why not incorporate 
dmdscript into your IDE as its extension language?


The official JSON website has tons of bindings, here's the C one:

http://fara.cs.uni-potsdam.de/~jsg/json_parser/

I'm gonna try and get it converted to D over the weekend.


Re: dmd support for IDEs

2009-10-10 Thread Jeremie Pelletier

Walter Bright wrote:

Jeremie Pelletier wrote:
I think it would be great, but XML is only one format and a heavy one 
at that; JSON, for example, is much lighter and easier to parse. It 
shouldn't be hard to support both.


I'd never heard of JSON, but looking at it, it makes sense. I don't see 
much point in supporting both.


XML makes sense when saving as a file and it can be transformed by XSLT 
to generate formatted html documentation and whatnot, while JSON is 
lightweight and better suited for pipes between dmd and the IDE.


However I would make the file generation optional, as the IDE might 
just want to read from the standard output stream of dmd instead, this 
would also be useful for shell scripts.


Yes, writing it to stdout as an option is a good idea.


Support to get the semantics information of multiple files at once 
would also be neat, just like dmd can generate one object file from 
multiple source files.


Yes.

Would it even be possible to have the C api behind the xml/json 
frontends exported in a dll, so IDEs could just dynamically link to it 
and call that API directly instead of parsing an intermediary text 
format.


I did that with the C++ compiler, and it's a big pain to support. I 
don't think it would be onerous to fork/exec the compiler to do the 
work, capture the output, and parse it.


The IDE usually keeps the files in memory and could therefore just call 
something like getSemantics(char** fileBuffers, int* fileSizes, int 
nFiles, ParseNode* parseTree) and have its parse nodes already allocated 
in process memory ready for use.


Considering a lot of IDEs like to re-parse the current file every time 
the keyboard is idle for a few seconds, this could really help 
performance; nothing is more annoying than an IDE that feels unresponsive.


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

Ellery Newcomer wrote:

ctfe. compile time (weird connection?). what do string mixins evaluate
to?


No


can I look at their result from the ide?


No


what do templates expand
to?


No


what does this here alias/typedef represent?


Yes


what does this here
typeof expand to?


No


what does this here c-style type normalize to (in
d-style)?


No


As for other transformations, it seemed like Ary had some neat tricks in
descent that showed things like int i; going to int i = 0; etc. maybe
wistful thinking.

while we're at it,

when I see a symbol, can I find its type?


Yes


can I find every symbol that
would follow it in a dot list/exp?


Yes


when I see a symbol, can I find everywhere it's used?


No, but could be added


when I see a scope, can I see every symbol that's in it?


Yes


when I see a module, can I find everywhere it's imported?


Yes


can I see exactly what symbols are pulled in?


No, but could be added


Can I perform analysis to
show me where those dang cyclic dependencies are?


Don't know


when I see source code, can I perform a simple walk over the xml to
format it?


No


Think of what it provides as very similar to what ddoc does, except that 
instead of being in a human-readable format it would be a 
machine-readable one.


In other words, for each module you'll be able to get

. all the symbols in that module, and the members of those symbols 
(recursively)

. the file/line of the source location of each symbol
. the ddoc comment for each symbol
. the type of each symbol

Things could be added over time, I was just thinking of this for starters.
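To make the list above concrete, the per-module output might look something like this (a hypothetical sketch; the field names are invented for illustration, since no actual schema had been specified at this point in the thread):

```json
{
  "file": "example.d",
  "symbols": [
    {
      "name": "add",
      "kind": "function",
      "type": "int(int, int)",
      "line": 3,
      "comment": "Returns the sum of its arguments.",
      "members": []
    }
  ]
}
```

An IDE would read one such document per source file and index the `symbols` array for autocompletion and tooltips.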


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

bearophile wrote:

An IDE is designed not just to let you explore and understand already
written code, but to modify the code too. So such data changes as the
programmer writes the code. So it may be useful for the IDE to have
ways to query DMD and receive a smaller, faster and more focused
amount of data about something. Otherwise regenerating all the data
every few moments may slow things down.


I know this won't help for syntax highlighting or working within a 
source file that may only be partially parsable. But for an easy way to 
do autocompletion and throwing up 'tooltip' documentation on library 
functions, etc., it should be very powerful.


Generating the data on imports should be very fast, and should not be 
necessary every few moments (only when those imported source files change).



Eventually a good thing is to go the route chosen by LLVM, splitting
the compiler into parts, so the IDE can use some of those parts in a
more direct way. Even LDC, which is based on LLVM, follows the
monolithic design of GCC and DMD, but I am certain that today a
loosely coupled design similar to LLVM's is better.


That's far more complex. I'm looking for ways to cover 90% of the 
territory with some simple, straightforward means.


Re: dmd support for IDEs

2009-10-10 Thread Ellery Newcomer
Walter Bright wrote:
> Ellery Newcomer wrote:
>> Well, that's a better solution than reimplementing semantic analysis in
>> the ide. If you make it, I will stop trying to do the latter.

I made it to the symbol table ..


> 
>> In the xml, will we see ct stuff and other transformations that DMD
>> performs on the source expanded?
> 
> ct stuff? What is that? Won't see dmd operations on the source expanded.
> What you'll see is basically what ddoc generates, but in a machine
> readable format. I.e. you'll see function names, with their types, line
> number, comment, etc. Essentially what intellisense would pop up.
> 

ctfe. compile time (weird connection?). what do string mixins evaluate
to? can I look at their result from the ide? what do templates expand
to? what does this here alias/typedef represent? what does this here
typeof expand to? what does this here c-style type normalize to (in
d-style)?

As for other transformations, it seemed like Ary had some neat tricks in
descent that showed things like int i; going to int i = 0; etc. maybe
wistful thinking.

while we're at it,

when I see a symbol, can I find its type? can I find every symbol that
would follow it in a dot list/exp?
when I see a symbol, can I find everywhere it's used?
when I see a scope, can I see every symbol that's in it?
when I see a module, can I find everywhere it's imported?
can I see exactly what symbols are pulled in? Can I perform analysis to
show me where those dang cyclic dependencies are?
when I see source code, can I perform a simple walk over the xml to
format it?

> It would be part of the dmd front end, so all D compilers based on it
> would have it.

How about the Intel D compiler? (It's going to happen. You know it will)

> 
> Yeah, but without a compiler why edit D source? 
> 

Weird users is the best answer I can offer. It happens.

Also, coming from the Java IDEs, I'm feeling apprehensive about
integration on disparate platforms and whatnot. There are multiple ways
the compiler could not be there.

>> All in all, I think it would be the bomb. I'd even volunteer to help
>> implementing it if I thought my code contributions would do less harm
>> than good.
> 
> I don't think it would be hard to implement.
> 
> But if you want to contribute, how about a JSON parser for phobos?
> You'll need one anyway for your IDE.
> 
> BTW, JSON parsing comes for free with javascript. Why not incorporate
> dmdscript into your IDE as its extension language?

 sorry. my target is netbeans.

Although I could probably whip up something quick in ANTLR if I really
needed JSON in D.


Re: dmd support for IDEs

2009-10-10 Thread bearophile
Walter Bright:

> So, while I'm not going to be writing an IDE, I figure that dmd can 
> help. dmd already puts out .doc and .di files. How about putting out an 
> xml file giving all the information needed for an IDE to implement 
> autocompletion? There'd be one .xml file generated per .d source file.

An IDE is designed not just to let you explore and understand already written 
code, but to modify the code too. So such data changes as the programmer writes 
the code. So it may be useful for the IDE to have ways to query DMD and 
receive a smaller, faster and more focused amount of data about something. 
Otherwise regenerating all the data every few moments may slow things down.

Eventually a good thing is to go the route chosen by LLVM, splitting the 
compiler into parts, so the IDE can use some of those parts in a more direct way.
Even LDC, which is based on LLVM, follows the monolithic design of GCC and DMD, 
but I am certain that today a loosely coupled design similar to LLVM's is better.

Bye,
bearophile


Re: dmd support for IDEs

2009-10-10 Thread Jordan Miner
Walter Bright Wrote:

> In my discussions with companies about adopting D, the major barrier 
> that comes up over and over isn't Tango vs Phobos, dmd being GPL, 
> debugger support, libraries, bugs, etc., although those are important.
> 
> It's the IDE.
> 
> They say that the productivity gains of D's improvements are 
> overbalanced by the loss of productivity by moving away from an IDE. And 
> what is it about an IDE that is so productive? Intellisense (Microsoft's 
> word for autocompletion).
> 
> So, while I'm not going to be writing an IDE, I figure that dmd can 
> help. dmd already puts out .doc and .di files. How about putting out an 
> xml file giving all the information needed for an IDE to implement 
> autocompletion? There'd be one .xml file generated per .d source file.
> 
> The nice thing about an xml file is while D is relatively easy to parse, 
> xml is trivial. Furthermore, an xml format would be fairly robust in the 
> face of changes to D syntax.
> 
> What do you think?

This is a great idea. If I ever work on an IDE, I would use this. (I don't use 
IDEs. I like them, but I haven't found one that keeps out of my way enough.)

And this output isn't just useful for IDEs. Once I get time a couple months 
from now, I am going to finish a program that generates much better 
documentation files than Ddoc. So far, I have Ddoc generate custom output that 
I parse, but it still isn't very machine readable. Instead, I would use this 
provided it has all the information that Ddoc generates.



Re: dmd support for IDEs

2009-10-10 Thread bearophile
Walter Bright:

>In my discussions with companies about adopting D, the major barrier that 
>comes up over and over isn't Tango vs Phobos, dmd being GPL, debugger support, 
>libraries, bugs, etc., although those are important. It's the IDE. They say 
>that the productivity gains of D's improvements are overbalanced by the loss 
>of productivity by moving away from an IDE.<

Welcome to more modern times, Walter :-)
You may have noticed that a small army of people in this newsgroup has told you 
the same things over the years I've spent around here.
In practice, modern statically typed languages aren't designed to be used alone; 
they are designed to be used with an IDE. So what in the past was a 
"programming language" is today a "programming language + IDE".
There are probably ways to overdo this idea (like programming languages 
written in XML instead of normal text), but D isn't at risk of falling 
into this trap yet.


>And what is it about an IDE that is so productive? Intellisense (Microsoft's 
>word for autocompletion).<

The productivity of modern IDEs is a complex thing; it comes from several 
well-tuned features. Intellisense is just one of those parts. Have you tried 
programming for 2-3 days with .NET C#? If you do, you will see several 
interesting things you may not know about.

Some form of reflection helps IDEs too, I think.
The file you talk about will help refactoring tools, I guess.

In C# you have a syntax for sections of code that tells the IDE how to fold 
code, it's named:
#region
Some people don't like them, but they are common in C#:
http://www.codinghorror.com/blog/archives/001147.html

Attributes too are food for the IDEs, they add semantic information on the 
code, and IDEs love such information:
@someattribute(data1, datas2, ...)


> I'd never heard of JSON, but looking at it, it makes sense. I don't see 
> much point in supporting both.

XML is very common, so most tools already support it or have ways to support 
it. So probably big IDEs are able to read XML files. So supporting XML is good.

JSON is light and easy to parse, so it can be good if you want to write a 
simpler tool. I like JSON; it's quite common on the web and with dynamic 
languages. But JSON is less common in some situations, so some existing 
tools may not yet be able to digest it.

That's why supporting both looks like a good idea. The good thing is that I 
don't think it will be hard to generate one once DMD is able to generate the other.

If you really want to support only one of the two, then you have to look at the 
kind of generated data. XML is better for really complex structures, while JSON 
is enough if the data has a low level of nesting. The tools designed to query 
very complex XML files are more advanced and more common.

What kind of data do you want to put in such files?

The optional output to stdout of such data is good.

Bye,
bearophile


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

Ellery Newcomer wrote:

Well, that's a better solution than reimplementing semantic analysis in
the ide. If you make it, I will stop trying to do the latter.


That's what I wanted to hear.


In the xml, will we see ct stuff and other transformations that DMD
performs on the source expanded?


ct stuff? What is that? Won't see dmd operations on the source expanded. 
What you'll see is basically what ddoc generates, but in a machine 
readable format. I.e. you'll see function names, with their types, line 
number, comment, etc. Essentially what intellisense would pop up.



[very very minor] concerns:

standardized? DMD derivatives will have it, what about hypothetical
other D implementations?


It would be part of the dmd front end, so all D compilers based on it 
would have it.



If your ide can't see or doesn't have compiler, it won't be able to do
much (erm duh)


Yeah, but without a compiler why edit D source? 


All in all, I think it would be the bomb. I'd even volunteer to help
implementing it if I thought my code contributions would do less harm
than good.


I don't think it would be hard to implement.

But if you want to contribute, how about a JSON parser for phobos? 
You'll need one anyway for your IDE.


BTW, JSON parsing comes for free with javascript. Why not incorporate 
dmdscript into your IDE as its extension language?


Re: dmd support for IDEs

2009-10-10 Thread Ellery Newcomer
Walter Bright wrote:
> In my discussions with companies about adopting D, the major barrier
> that comes up over and over isn't Tango vs Phobos, dmd being GPL,
> debugger support, libraries, bugs, etc., although those are important.
> 
> It's the IDE.
> 
> They say that the productivity gains of D's improvements are
> overbalanced by the loss of productivity by moving away from an IDE. And
> what is it about an IDE that is so productive? Intellisense (Microsoft's
> word for autocompletion).
> 
> So, while I'm not going to be writing an IDE, I figure that dmd can
> help. dmd already puts out .doc and .di files. How about putting out an
> xml file giving all the information needed for an IDE to implement
> autocompletion? There'd be one .xml file generated per .d source file.
> 
> The nice thing about an xml file is while D is relatively easy to parse,
> xml is trivial. Furthermore, an xml format would be fairly robust in the
> face of changes to D syntax.
> 
> What do you think?

Well, that's a better solution than reimplementing semantic analysis in
the ide. If you make it, I will stop trying to do the latter.

In the xml, will we see ct stuff and other transformations that DMD
performs on the source expanded?

[very very minor] concerns:

standardized? DMD derivatives will have it, what about hypothetical
other D implementations?

If your ide can't see or doesn't have compiler, it won't be able to do
much (erm duh)

All in all, I think it would be the bomb. I'd even volunteer to help
implementing it if I thought my code contributions would do less harm
than good.


Re: Phobos.testing

2009-10-10 Thread dsimcha
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
> Michel Fortin wrote:
> > On 2009-10-10 19:01:35 -0400, dsimcha  said:
> >
> >> Overall, the point is that there should be a well-defined process for
> >> getting code into Phobos and a well-defined place to post this code and
> >> comment on it. Bugzilla probably doesn't cut it because it's not easy to
> >> download, compile and test lots of different snippets of code from here.
> >
> > There should indeed be a process for proposing new modules or major
> > features. I don't care much what it is, but it should make code
> > available for review from all the interested parties, and allow public
> > discussion about this code. Whether this discussion should happen on
> > this newsgroup or elsewhere, I'm not sure however.
> >
> > And it'd be nice if it could auto-generate documentation from the
> > proposed modules: glancing at the documentation often gives you a
> > different perspective on the API, and it'd encourage people to write
> > good documentation.
> I'm all for accepting additions to Phobos, and for putting in place a
> process to do so. I suggest we follow a procedure used to great effect
> by Boost. They have a formal process in place that consists of a
> preliminary submission, a refinement period, a submission, a review, and
> a vote.
> http://www.boost.org/development/submissions.html
> I compel you all to seriously consider it, and am willing to provide
> website space and access.
> Andrei

This sounds pretty good, except that I think it would be even better if the
whole phobos.testing lib were easy for testers to download and install and
play around with in non-production code.  Actually using a library, even in
toy/hobby projects, instead of just looking at it on paper makes it a lot
easier to give informed opinions on it.


Re: Phobos.testing

2009-10-10 Thread Andrei Alexandrescu

Michel Fortin wrote:

On 2009-10-10 19:01:35 -0400, dsimcha  said:

Overall, the point is that there should be a well-defined process for
getting code into Phobos and a well-defined place to post this code and
comment on it. Bugzilla probably doesn't cut it because it's not easy to
download, compile and test lots of different snippets of code from here.


There should indeed be a process for proposing new modules or major 
features. I don't care much what it is, but it should make code 
available for review from all the interested parties, and allow public 
discussion about this code. Whether this discussion should happen on 
this newsgroup or elsewhere, I'm not sure however.


And it'd be nice if it could auto-generate documentation from the 
proposed modules: glancing at the documentation often gives you a 
different perspective on the API, and it'd encourage people to write 
good documentation.


I'm all for accepting additions to Phobos, and for putting in place a 
process to do so. I suggest we follow a procedure used to great effect 
by Boost. They have a formal process in place that consists of a 
preliminary submission, a refinement period, a submission, a review, and 
a vote.


http://www.boost.org/development/submissions.html

I compel you all to seriously consider it, and am willing to provide 
website space and access.



Andrei


Re: Rationals Lib?

2009-10-10 Thread dsimcha
== Quote from dsimcha (dsim...@yahoo.com)'s article
> == Quote from language_fan (f...@bar.com.invalid)'s article
> > Sat, 10 Oct 2009 21:29:41 +, dsimcha thusly wrote:
> > > I guess I could have implemented some of these suggestions, but the idea
> > > was for this lib to be very simple (it's only about 300 lines of code so
> > > far) and agnostic to the implementation of the integers it's working on
> > > top of, with the caveat that, if you use something that's not arbitrary
> > > precision, the onus is on you to make sure nothing overflows.  If
> > > anyone, for example, made a wrapper to the GNU multiprecision lib that
> > > looked like a D struct w/ operator overloading, it would be able to plug
> > > right into this library.  If std.bigint improves, this library will
> > > automatically benefit.
> > Now that's the most perfect way to test the modularity of the language --
> > does it allow implementing a rational library on top of any (arbitrary
> > precision) number type, assuming we have a sane interface to work with.
> Save for a few small details, yes.  Since there seems to be interest I'll
> clean up the code and post it somewhere in the next few days.  Here are
> the "few details":
> 1.  To convert to floating-point form, I need to be able to cast the
> underlying arbitrary precision integers to machine-native types.  There's
> no standard way to do this.
> 2.  I need a standard way of constructing any type of integer, whether
> machine-native or arbitrary precision, to implement some syntactic sugar
> features.  Let's say you wanted the number 314 as a rational, with a
> std.bigint.BigInt as the underlying integer.  Right now you'd have to do:
> auto foo = fraction( BigInt(314), BigInt(1));
> There's no shortcut yet for when you want a whole number to be represented
> internally as a fraction because there's no standard way to construct any
> arbitrary integer type with the value 1.
> The same problem applies to conversion from floating-point to rational and
> comparison between rational and integer.

Ok, I got it to work.  I even found kludges around the issues I raised 
previously:

1.  To convert an arbitrary BigInt to a long, use binary search and equality
testing.  It's slow, but converting a BigInt fraction to a float is slow anyhow.

2.  Just implement completely separate overloads for when one of the operands is
an int.

See http://dsource.org/projects/scrapple/browser/trunk/rational/rational.d .
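The binary-search conversion in point 1 can be sketched like this (Python for illustration, since the actual D code lives at the link above; it assumes only that the big-integer type supports `<`, `>`, and `==` against machine integers):

```python
def bigint_to_long(b, lo=-(2**63), hi=2**63 - 1):
    """Recover the machine-long value of an opaque arbitrary-precision
    integer using only comparisons, as dsimcha describes.  Runs in
    O(64) comparisons; raises if the value is outside machine range."""
    while lo < hi:
        mid = (lo + hi) // 2
        if b > mid:
            lo = mid + 1
        else:
            hi = mid
    if b == lo:
        return lo
    raise OverflowError("value outside machine-long range")
```

Slow compared to a native cast, but as noted above, converting a BigInt fraction to a float is slow anyhow, so the extra ~64 comparisons don't dominate.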


Re: dmd support for IDEs

2009-10-10 Thread Walter Bright

Jeremie Pelletier wrote:
I think it would be great, but XML is only one format and a heavy one at 
that; JSON, for example, is much lighter and easier to parse. It shouldn't 
be hard to support both.


I'd never heard of JSON, but looking at it, it makes sense. I don't see 
much point in supporting both.


However I would make the file generation optional, as the IDE might just 
want to read from the standard output stream of dmd instead, this would 
also be useful for shell scripts.


Yes, writing it to stdout as an option is a good idea.


Support to get the semantics information of multiple files at once would 
also be neat, just like dmd can generate one object file from multiple 
source files.


Yes.

Would it even be possible to have the C api behind the xml/json 
frontends exported in a dll, so IDEs could just dynamically link to it 
and call that API directly instead of parsing an intermediary text format.


I did that with the C++ compiler, and it's a big pain to support. I 
don't think it would be onerous to fork/exec the compiler to do the 
work, capture the output, and parse it.


Re: dmd support for IDEs

2009-10-10 Thread Jeremie Pelletier

Walter Bright wrote:
In my discussions with companies about adopting D, the major barrier 
that comes up over and over isn't Tango vs Phobos, dmd being GPL, 
debugger support, libraries, bugs, etc., although those are important.


It's the IDE.

They say that the productivity gains of D's improvements are 
overbalanced by the loss of productivity by moving away from an IDE. And 
what is it about an IDE that is so productive? Intellisense (Microsoft's 
word for autocompletion).


So, while I'm not going to be writing an IDE, I figure that dmd can 
help. dmd already puts out .doc and .di files. How about putting out an 
xml file giving all the information needed for an IDE to implement 
autocompletion? There'd be one .xml file generated per .d source file.


The nice thing about an xml file is while D is relatively easy to parse, 
xml is trivial. Furthermore, an xml format would be fairly robust in the 
face of changes to D syntax.


What do you think?


I think it would be great, but XML is only one format and a heavy one at 
that; JSON, for example, is much lighter and easier to parse. It shouldn't 
be hard to support both.


However I would make the file generation optional, as the IDE might just 
want to read from the standard output stream of dmd instead, this would 
also be useful for shell scripts.


Support to get the semantics information of multiple files at once would 
also be neat, just like dmd can generate one object file from multiple 
source files.


Would it even be possible to have the C api behind the xml/json 
frontends exported in a dll, so IDEs could just dynamically link to it 
and call that API directly instead of parsing an intermediary text format.


Jeremie


Re: dmd support for IDEs

2009-10-10 Thread digited
Walter Bright wrote:
> The nice thing about an xml file is while D is relatively easy to parse,
> xml is trivial.

Why a file? An IDE can call the compiler process and get the info from 
stdout, which will be much faster; and if the IDE needs to store the info, 
it can do so (or not) itself.


dmd support for IDEs

2009-10-10 Thread Walter Bright
In my discussions with companies about adopting D, the major barrier 
that comes up over and over isn't Tango vs Phobos, dmd being GPL, 
debugger support, libraries, bugs, etc., although those are important.


It's the IDE.

They say that the productivity gains of D's improvements are 
overbalanced by the loss of productivity by moving away from an IDE. And 
what is it about an IDE that is so productive? Intellisense (Microsoft's 
word for autocompletion).


So, while I'm not going to be writing an IDE, I figure that dmd can 
help. dmd already puts out .doc and .di files. How about putting out an 
xml file giving all the information needed for an IDE to implement 
autocompletion? There'd be one .xml file generated per .d source file.


The nice thing about an xml file is while D is relatively easy to parse, 
xml is trivial. Furthermore, an xml format would be fairly robust in the 
face of changes to D syntax.


What do you think?


Re: Phobos.testing

2009-10-10 Thread div0

dsimcha wrote:
> I've noticed that it's somewhat difficult to get code into Phobos.  This is
> somewhat understandable--noone wants a standard library full of buggy code
> that noone understands.  On the other hand, it doesn't seem like there's a
> very well-organized process for getting stuff into Phobos if you're not a main
> contributor.
> 
> Should something like a Phobos.testing lib be created?  Such a project would
> be an area of dsource.  The bar for getting stuff checked into here would be
> relatively low.  If you write a module and check it into phobos.testing, it
> indicates that you believe that it would be generally useful enough to go into
> Phobos and are posting it for review/comment/other people to use with the
> caveat that it might not be well tested yet.  This dsource project would use
> its own forums to comment on the code and debate about what does and doesn't
> belong in Phobos.  Every release, Andrei would pick off the best well-tested,
> well-reviewed community-created feature and add it to the "real" phobos.

Sounds like a good idea.

At the mo, my biggest annoyance with D is the lack of a decent set of
container classes in Phobos. Considering how D is supposed to be a
superior C++, not having equivalents of the STL containers is a
gobsmackingly stupid omission.

I'd be happy to port all of stl to D if it would be used and tested,
though it would be better if it was redesigned with Andrei's ranges.

> Overall, the point is that there should be a well-defined process for getting
> code into Phobos and a well-defined place to post this code and comment on it.
>  Bugzilla probably doesn't cut it because it's not easy to download, compile
> and test lots of different snippets of code from here.

Yeah, bugzilla sucks ass.

I hate not being able to browse it; you have to search, and search only
works if you happen to think in the same words as the person that filed
the bug.

- --
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk


Re: Phobos.testing

2009-10-10 Thread Michel Fortin

On 2009-10-10 19:01:35 -0400, dsimcha  said:


Overall, the point is that there should be a well-defined process for getting
code into Phobos and a well-defined place to post this code and comment on it.
 Bugzilla probably doesn't cut it because it's not easy to download, compile
and test lots of different snippets of code from here.


There should indeed be a process for proposing new modules or major 
features. I don't care much what it is, but it should make code 
available for review from all the interested parties, and allow public 
discussion about this code. Whether this discussion should happen on 
this newsgroup or elsewhere, I'm not sure however.


And it'd be nice if it could auto-generate documentation from the 
proposed modules: glancing at the documentation often gives you a 
different perspective on the API, and it'd encourage people to write 
good documentation.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Array literals' default type

2009-10-10 Thread Lars T. Kyllingstad

Michel Fortin wrote:
On 2009-10-10 12:12:27 -0400, "Lars T. Kyllingstad" 
 said:



Christopher Wright wrote:

Don wrote:

At worst, it would be something like:

exec("description", createArray(procName, arg1, arg2) ~ 
generatedArgs ~ createArray(arg3, arg4) ~ moreGeneratedArgs);


PHP does this. I haven't used PHP enough to hate it.



I've used PHP a fair bit, and I don't hate its array syntax at all. 
(There are plenty of other things in PHP to hate, though.) It's easily 
readable, and not much of a hassle to write. But array() in PHP isn't 
a function, it's a language construct with special syntax. To create 
an AA, for instance, you'd write


   $colours = array("apple" => "red", "pear" => "green");

I'm not sure what the D equivalent of that one should be.


Associative array literals:

string[string] s = ["hello": "world", "foo": "bar"];



I know that. :) I was just wondering what the equivalent function call 
should look like if we replaced array literals with functions, cf. the 
createArray() function above.


-Lars


Phobos.testing

2009-10-10 Thread dsimcha
I've noticed that it's somewhat difficult to get code into Phobos.  This is
somewhat understandable--no one wants a standard library full of buggy code
that no one understands.  On the other hand, it doesn't seem like there's a
very well-organized process for getting stuff into Phobos if you're not a main
contributor.

Should something like a Phobos.testing lib be created?  Such a project would
be an area of dsource.  The bar for getting stuff checked into here would be
relatively low.  If you write a module and check it into phobos.testing, it
indicates that you believe that it would be generally useful enough to go into
Phobos and are posting it for review/comment/other people to use with the
caveat that it might not be well tested yet.  This dsource project would use
its own forums to comment on the code and debate about what does and doesn't
belong in Phobos.  Every release, Andrei would pick off the best well-tested,
well-reviewed community-created feature and add it to the "real" phobos.

Overall, the point is that there should be a well-defined process for getting
code into Phobos and a well-defined place to post this code and comment on it.
 Bugzilla probably doesn't cut it because it's not easy to download, compile
and test lots of different snippets of code from here.


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread Michel Fortin

On 2009-10-10 16:56:33 -0400, language_fan  said:


And from that point of view, you can see templates as a compiled,
optimized version of runtime reflection and type creation capabilities.


Runtime reflection can be really expensive computationally, but it
becomes useful when you need mobile code.


What I was getting at is that given good enough runtime reflection, 
templates could depend on runtime parameters, and those templates could 
be instantiated at runtime.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Rationals Lib?

2009-10-10 Thread dsimcha
== Quote from language_fan (f...@bar.com.invalid)'s article
> Sat, 10 Oct 2009 21:29:41 +, dsimcha thusly wrote:
> > I guess I could have implemented some of these suggestions, but the idea
> > was for this lib to be very simple (it's only about 300 lines of code so
> > far) and agnostic to the implementation of the integers it's working on
> > top of, with the caveat that, if you use something that's not arbitrary
> > precision, the onus is on you to make sure nothing overflows.  If
> > anyone, for example, made a wrapper to the GNU multiprecision lib that
> > looked like a D struct w/ operator overloading, it would be able to plug
> > right into this library.  If std.bigint improves, this library will
> > automatically benefit.
> Now that's the most perfect way to test the modularity of the language --
> does it allow implementing a rational library on top of any (arbitrary
> precision) number type, assuming we have a sane interface to work with.

Save for a few small details, yes.  Since there seems to be interest I'll
clean up the code and post it somewhere in the next few days.  Here are the
"few details":

1.  To convert to floating-point form, I need to be able to cast the
underlying arbitrary precision integers to machine-native types.  There's no
standard way to do this.
2.  I need a standard way of constructing any type of integer, whether
machine-native or arbitrary precision, to implement some syntactic sugar
features.
 Let's say you wanted the number 314 as a rational, with a std.bigint.BigInt as
the underlying integer.  Right now you'd have to do:

auto foo = fraction( BigInt(314), BigInt(1));

There's no shortcut yet for when you want a whole number to be represented
internally as a fraction because there's no standard way to construct any
arbitrary integer type with the value 1.

The same problem applies to conversion from floating-point to rational and
comparison between rational and integer.
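
One way around the construction problem is a small generic shim that both
native and bigint-style types can satisfy. This is only a sketch of the
idea, not dsimcha's actual code; `toInteger` is an invented helper name:

```d
import std.bigint;

// Hypothetical helper: build "the integer with value v" for any
// integer-like type, whether machine-native or arbitrary precision.
T toInteger(T)(long v)
{
    static if (is(T : long))
        return cast(T) v;   // native integral types
    else
        return T(v);        // types constructible from long, e.g. BigInt
}

void main()
{
    assert(toInteger!int(314) == 314);
    assert(toInteger!BigInt(314) == BigInt(314));
    assert(toInteger!long(1) == 1L);
}
```

With something like this, a rational library could spell "the whole number
314 over 1" generically for any underlying integer type.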


Re: null references redux + Looney Tunes

2009-10-10 Thread bearophile
Rainer Deyke:

> Either I'm missing something, or this system only checks units at
> runtime (which would make it both slow and unsafe).
> 
> Boost.Units (C++) checks units at compile time.  There is no reason why
> D could not use the same approach.

In F#, compile time checks shown by the IDE too:
http://blogs.msdn.com/andrewkennedy/archive/2008/08/29/units-of-measure-in-f-part-one-introducing-units.aspx

Bye,
bearophile


Re: Rationals Lib?

2009-10-10 Thread language_fan
Sat, 10 Oct 2009 21:29:41 +, dsimcha thusly wrote:

> I guess I could have implemented some of these suggestions, but the idea
> was for this lib to be very simple (it's only about 300 lines of code so
> far) and agnostic to the implementation of the integers it's working on
> top of, with the caveat that, if you use something that's not arbitrary
> precision, the onus is on you to make sure nothing overflows.  If
> anyone, for example, made a wrapper to the GNU multiprecision lib that
> looked like a D struct w/ operator overloading, it would be able to plug
> right into this library.  If std.bigint improves, this library will
> automatically benefit.

Now that's the most perfect way to test the modularity of the language -- 
does it allow implementing a rational library on top of any (arbitrary 
precision) number type, assuming we have a sane interface to work with.


Re: Rationals Lib?

2009-10-10 Thread Rainer Deyke
dsimcha wrote:
> I guess I could have implemented some of these suggestions, but the idea was 
> for
> this lib to be very simple (it's only about 300 lines of code so far) and 
> agnostic
> to the implementation of the integers it's working on top of, with the caveat
> that, if you use something that's not arbitrary precision, the onus is on you 
> to
> make sure nothing overflows.  If anyone, for example, made a wrapper to the 
> GNU
> multiprecision lib that looked like a D struct w/ operator overloading, it 
> would
> be able to plug right into this library.  If std.bigint improves, this library
> will automatically benefit.

FWIW I use boost::rational quite a bit, and I've never felt the need to
use bigints.


-- 
Rainer Deyke - rain...@eldwood.com


Re: Rationals Lib?

2009-10-10 Thread dsimcha
I guess I could have implemented some of these suggestions, but the idea was for
this lib to be very simple (it's only about 300 lines of code so far) and 
agnostic
to the implementation of the integers it's working on top of, with the caveat
that, if you use something that's not arbitrary precision, the onus is on you to
make sure nothing overflows.  If anyone, for example, made a wrapper to the GNU
multiprecision lib that looked like a D struct w/ operator overloading, it would
be able to plug right into this library.  If std.bigint improves, this library
will automatically benefit.

== Quote from language_fan (f...@bar.com.invalid)'s article
> Sat, 10 Oct 2009 14:25:28 -0400, bearophile thusly wrote:
> > dsimcha:
> >
> >> auto f1 = fraction( BigInt("314159265"), BigInt("27182818"));
> >
> > That's a nice example where bigint literals are useful.
> >
> > Missing bigint literals, this looks shorter than your way to define f1:
> > auto f1 = fraction("314159265 / 27182818"); Or a little better, if/when
> > structs can be assigned statically: fraction f1 = q{314159265 /
> > 27182818};
> FWIW, he could have just redefined the / operator in class bigint:
>   bigint(1) / 3
> Or
>   bigint f;
>   f = 1;
>   f /= 3;
> Can't really remember how overloading the assignment works in D.



Re: Rationals Lib?

2009-10-10 Thread language_fan
Sat, 10 Oct 2009 14:25:28 -0400, bearophile thusly wrote:

> dsimcha:
> 
>> auto f1 = fraction( BigInt("314159265"), BigInt("27182818"));
> 
> That's a nice example where bigint literals are useful.
> 
> Missing bigint literals, this looks shorter than your way to define f1:
> auto f1 = fraction("314159265 / 27182818"); Or a little better, if/when
> structs can be assigned statically: fraction f1 = q{314159265 /
> 27182818};

FWIW, he could have just redefined the / operator in class bigint:

  bigint(1) / 3

Or

  bigint f;
  f = 1;
  f /= 3;

Can't really remember how overloading the assignment works in D.
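
For reference, here is a minimal sketch of how division and
division-assignment are overloaded on a struct in present-day D syntax
(`MyInt` is a toy wrapper invented for illustration, not std.bigint's
actual implementation):

```d
// Toy value type to illustrate operator overloading; not a real bigint.
struct MyInt
{
    long value;

    // Called for expressions like f / 3.
    MyInt opBinary(string op : "/")(long rhs)
    {
        return MyInt(value / rhs);
    }

    // Called for expressions like f /= 3.
    ref MyInt opOpAssign(string op : "/")(long rhs)
    {
        value /= rhs;
        return this;
    }
}

void main()
{
    auto f = MyInt(9);
    assert((f / 3).value == 3);
    f /= 3;
    assert(f.value == 3);
}
```

Plain assignment (`f = 1;`) to a struct needs no overload at all unless
you want custom copy semantics, in which case `opAssign` is the hook.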


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread language_fan
Fri, 09 Oct 2009 16:49:19 -0400, Jarrett Billingsley thusly wrote:

> Where CTFE wins at metaprogramming:
> 
> Liiists. If you have a list of something, it's far easier to
> deal with in an imperative CTFE function than in an awkwardly recursive
> template. Edge cases (first, last items) are also easier to deal with
> imperatively.

Of course the "lists" become more efficient and easier to use inside CTFE 
functions since D does not even have built-in lists but arrays. Calling 
Head!(foo) and Head!(Reverse!(foo)) is not that hard in template code, 
but D just does not have a standard template utility function library.

> DSLs, and more generally, parsing. Doing DSL parsing with templates is
> possible but not fun. You end up with a ton of templates. Not that I'm
> advocating parsing lots of things with CTFE either.. you already know
> how I feel about that.

You like it?


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread language_fan
Fri, 09 Oct 2009 20:20:02 -0400, Michel Fortin thusly wrote:

> But an interesting thing I realized in the last few months is this: all
> you can do with a template you can also do at runtime provided
> sufficient runtime reflection capabilities. Even creating types!

This is a well known fact..

> And from that point of view, you can see templates as a compiled,
> optimized version of runtime reflection and type creation capabilities.

Runtime reflection can be really expensive computationally, but it 
becomes useful when you need mobile code.


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread language_fan
Sat, 10 Oct 2009 01:21:32 -0400, bearophile thusly wrote:

> Jeremie Pelletier:
>  
>> I would rather have TypeInfo usable at compile time than a "type" type.
> 
> That's useful, but it's not enough. So you may want both. Sometimes all
> you want to pass to a function is a type, to replace some of the use
> cases of templates. Time ago I have shown some usage examples here. To
> me it seems that where they can be used they give a little more natural
> means to do some things (but templates can't be fully replaced).

Actually the TypeInfo structure can have more or less the same semantics 
as a distinct 'type' type. Your idea is good, but you need to beef up the 
specs before the proposal becomes useful. What you need is some kind of 
facility to query information about the type, and possibly other 
facilities to generate new types.


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread language_fan
Sat, 10 Oct 2009 10:26:11 +0200, Don thusly wrote:

> CTFE doesn't mean "string mixins using CTFE". It just means CTFE. (BTW
> you can do string mixins with templates only, no CTFE, if you are
> completely insane).

CTFE without mixins is a rather limited form of metaprogramming. You can 
basically only initialize some static non-code data, and not much more. 
String mixins with templates were the only way to go before CTFE became 
possible -- those were the times!


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread language_fan
Sat, 10 Oct 2009 10:30:31 +0200, Don thusly wrote:

> The more fundamental problem is that you can't
> instantiate a template from inside CTFE. IE, you can cross from the
> "compile-time world" to the "runtime world" only once -- you can never
> get back.

That's not exactly true. Also both templates and CTFE are compile time 
features. You can compute a value with CTFE in the "value world" and lift 
the result to the "type world" with a template. The range of a template 
used as a metafunction is {types, values} and their mixture. CTFE 
functions only return values.  Understanding the concept of 'lambda cube' 
will help you here.
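
A small illustration of that lifting: a value computed by CTFE in the
"value world" can be passed as a template argument into the "type world"
(a sketch assuming D2 CTFE; the names are illustrative):

```d
// Ordinary function, also evaluatable at compile time via CTFE.
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

// Template metafunction: lifts a compile-time value into the type world
// as a fixed-size array type.
template Buffer(int n) { alias Buffer = ubyte[n]; }

// CTFE computes fib(10) in the value world; the template consumes it.
Buffer!(fib(10)) buf;
static assert(buf.length == 55);

void main() {}
```

The reverse direction is the one Don describes as impossible: a CTFE
function cannot turn around and instantiate a new template from inside
its own evaluation.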


Re: Array literals' default type

2009-10-10 Thread Michel Fortin
On 2009-10-10 12:12:27 -0400, "Lars T. Kyllingstad" 
 said:



Christopher Wright wrote:

Don wrote:

At worst, it would be something like:

exec("description", createArray(procName, arg1, arg2) ~ generatedArgs ~ 
createArray(arg3, arg4) ~ moreGeneratedArgs);


PHP does this. I haven't used PHP enough to hate it.



I've used PHP a fair bit, and I don't hate its array syntax at all. 
(There are plenty of other things in PHP to hate, though.) It's easily 
readable, and not much of a hassle to write. But array() in PHP isn't a 
function, it's a language construct with special syntax. To create an 
AA, for instance, you'd write


   $colours = array("apple" => "red", "pear" => "green");

I'm not sure what the D equivalent of that one should be.


Associative array literals:

string[string] s = ["hello": "world", "foo": "bar"];

Note that an "array" in PHP is always a doubly-linked list indexed by a 
hash table. Writing `array(1, 2, 3)` is the same as writing `array(0 => 
1, 1 => 2, 2 => 3)`: what gets constructed is identical. That's quite 
nice as a generic container.




--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Rationals Lib?

2009-10-10 Thread bearophile
dsimcha:

> auto f1 = fraction( BigInt("314159265"), BigInt("27182818"));

That's a nice example where bigint literals are useful.

Missing bigint literals, this looks shorter than your way to define f1:
auto f1 = fraction("314159265 / 27182818");
Or a little better, if/when structs can be assigned statically:
fraction f1 = q{314159265 / 27182818};

Bye,
bearophile


Rationals Lib?

2009-10-10 Thread dsimcha
I've written a prototype lib that does arithmetic on rational numbers
(fractions).  I got the idea from the Maxima computer algebra system.  (
http://maxima.sourceforge.net ). It's templated to work on any integer type
where operators are properly overloaded, though in practice you'd probably
want to use something arbitrary precision, since adding/subtracting fractions
can yield some really really big numbers for numerators and denominators, and
if you don't care that much about accuracy, floats are faster anyhow.

I'm still cleaning things up, etc, but usage is something like this:

import std.bigint, std.stdio, fractions;

void main() {
    auto f1 = fraction( BigInt("314159265"), BigInt("27182818"));
    auto f2 = fraction( BigInt("8675309"), BigInt("362436"));
    f1 += f2;
    assert(f1 == fraction( BigInt("174840986505151"),
                           BigInt("4926015912324")));

    // Print result.  Prints:
    // "174840986505151 / 4926015912324"
    writeln(f1);

    // Print result in decimal form.  Prints:
    // "35.4934"
    writeln(cast(real) f1);
}

Some questions for the community:

1.  Does this look useful to anyone?
2.  What might be some non-obvious key features for a lib like this?
3.  What is the status of arbitrary precision integer arithmetic in D2?  Will
we be getting something better than std.bigint in the foreseeable future?
This lib isn't very useful without a fast BigInt underneath it.
4.  There is one small part (conversion to float) where I had to assume that
the BigInt implementation was the one in std.bigint, to cast certain division
results back to native types.  Will there eventually be a de facto standard
way to cast BigInts to native types so I can get rid of this dependency?
5.  Is there any use for approximate rational arithmetic built on
machine-sized integers?  For example, if adding two fractions would generate
an overflow, try to find the closest answer that wouldn't?  I would guess that
if you want to do something like this, you're better off just using floats,
but I could be wrong.


Re: Array literals' default type

2009-10-10 Thread Lars T. Kyllingstad

Christopher Wright wrote:

Don wrote:

Christopher Wright wrote:

Don wrote:

I don't understand why runtime-determined array literals even exist.
They're not literals!!!
They cause no end of trouble. IMHO we'd be *much* better off without 
them.


You don't see the use. I do. I would go on a murderous rampage if 
that feature were removed from the language.


For example, one thing I recently wrote involved creating a process 
with a large number of arguments. The invocation looked like:
exec("description", [procName, arg1, arg2] ~ generatedArgs ~ [arg3, 
arg4] ~ moreGeneratedArgs);


There were about ten or fifteen lines like that.

You'd suggest I rewrite that how?
char[][] args;
args ~= procName;
args ~= arg1;
args ~= arg2;
args ~= generatedArgs;
args ~= arg3;


Of course not. These runtime 'array literals' are just syntax sugar 
for a constructor call. Really, they are nothing more.


I'm quite surprised that there is a runtime function for this. I would 
expect codegen to emit something like:


array = __d_newarray(nBytes)
array[0] = exp0
array[1] = exp1
...


At worst, it would be something like:

exec("description", createArray(procName, arg1, arg2) ~ generatedArgs 
~ createArray(arg3, arg4) ~ moreGeneratedArgs);


PHP does this. I haven't used PHP enough to hate it.



I've used PHP a fair bit, and I don't hate its array syntax at all. 
(There are plenty of other things in PHP to hate, though.) It's easily 
readable, and not much of a hassle to write. But array() in PHP isn't a 
function, it's a language construct with special syntax. To create an 
AA, for instance, you'd write


  $colours = array("apple" => "red", "pear" => "green");

I'm not sure what the D equivalent of that one should be.

-Lars


Re: DFL IDE Editor ?

2009-10-10 Thread dolive
Robert Jacques wrote:

> On Thu, 24 Sep 2009 16:31:56 -0400, dolive  wrote:
> 
> > Robert Jacques дµ½:
> >
> >> On Thu, 24 Sep 2009 14:21:55 -0400, dolive  wrote:
> >>
> >> > Robert Jacques wrote:
> >> >
> >> >> On Thu, 24 Sep 2009 06:22:57 -0400, dolive89   
> >> wrote:
> >> >>
> >> >> > can DFL make ide editor ?
> >> >> > can do  expansion of the corresponding function?
> >> >>
> >> >> Yes.
> >> >> There's the Entice IDE (http://www.dprogramming.com/entice.php)
> >> >> Or the simpler DCode IDE(http://www.dprogramming.com/dcode.php)
> >> >> Or the Scintilla control if you want to roll your own
> >> >> (http://wiki.dprogramming.com/Dfl/ScintillaControl)
> >> >
> >> > thank you very much !!!
> >> > but version is older,can do be upgraded to dmd2.032 ?  thank you !!!
> >> >
> >> > dolive
> >>
> >> DFL hasn't been updated to DMD 2.032 yet. I've updated my local copy.
> >> Here's the link:
> >> https://jshare.johnshopkins.edu/xythoswfs/webui/_xy-3615403_1-t_VRRBqZAG
> >
> >
> > yes, I use DFL is dmd2.032,
> >
> >  Scintilla DFL Control  version is older, is 2007 year. hope upgraded to  
> > dmd2.032 .
> >
> > how to make lib ? I use makescintillalib.bat is error, I copyed cpp  
> > directory to  DFL directory. changed dmd_path, dmc change to dmd.
> >
> > can be compiled out .obj file.
> >
> > thank you !
> >
> > dolive
> >
> 
> Sorry, I've never used the Scintilla control myself. If you just want to  
> make DFL there is a makelib.bat file in the dfl directory.



Do you have DFL for dmd 2.033, and examples?

Thank you!


dolive




Re: Array literals' default type

2009-10-10 Thread Yigal Chripun

On 10/10/2009 16:12, Jarrett Billingsley wrote:

On Sat, Oct 10, 2009 at 7:33 AM, Yigal Chripun  wrote:


You keep calling these literals "constructor calls" and I agree that that's
what they are. My question is then why not make them real constructors?

auto a = new int[](x, y, z);


Teehee, that syntax already has meaning in D. Well, that right there
would give you a semantic error, but "new int[][][](x, y, z)" creates
a 3-D array with dimensions x, y, and z.

That brings up another point. If you *did* use a class-style ctor
syntax, how would you list the arguments for a multidimensional array?
That is, what would be the equivalent of [[1, 2], [3, 4]]?


I know about the current meaning, I was suggesting to change it.

to answer your question -
a) compile-time literals should remain as is so your example of:
[[1, 2], [3, 4]] is still valid.
b) for run-time arrays:
 auto a = new int[][]( (new int[](x, y), new int[](z, w) );
 auto a = new int[][]( Tuple!(x, y), Tuple!(z, w) );
 auto a = new int[][]( [1, 2], [3, 4] );

you can construct a regular array with a literal or a tuple:
int[] a = [1, 2];
int[] b = new int[](x, y);
int[][] is an array of arrays so the rules apply recursively:
both forms can initialize each array in the array of arrays.
tuples of tuples are a shortcut for the second option.

Now, wouldn't it be wonderful if D had provided real tuple support 
without all the Tuple!() nonsense?


Re: Array literals' default type

2009-10-10 Thread Jarrett Billingsley
On Sat, Oct 10, 2009 at 7:33 AM, Yigal Chripun  wrote:
>
> You keep calling these literals "constructor calls" and I agree that that's
> what they are. My question is then why not make them real constructors?
>
> auto a = new int[](x, y, z);

Teehee, that syntax already has meaning in D. Well, that right there
would give you a semantic error, but "new int[][][](x, y, z)" creates
a 3-D array with dimensions x, y, and z.

That brings up another point. If you *did* use a class-style ctor
syntax, how would you list the arguments for a multidimensional array?
That is, what would be the equivalent of [[1, 2], [3, 4]]?
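
For the record, the existing syntax Jarrett refers to, as a quick sketch
(dimensions are listed outermost first, and the result is a jagged array
default-initialized to zero):

```d
void main()
{
    // Allocates a 2x3x4 jagged array of ints, all elements set to 0.
    auto a = new int[][][](2, 3, 4);

    assert(a.length == 2);        // outermost dimension
    assert(a[0].length == 3);     // middle dimension
    assert(a[0][0].length == 4);  // innermost dimension
    assert(a[1][2][3] == 0);      // default-initialized element
}
```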


Re: Array literals' default type

2009-10-10 Thread Christopher Wright

Don wrote:

Christopher Wright wrote:

Don wrote:

I don't understand why runtime-determined array literals even exist.
They're not literals!!!
They cause no end of trouble. IMHO we'd be *much* better off without 
them.


You don't see the use. I do. I would go on a murderous rampage if that 
feature were removed from the language.


For example, one thing I recently wrote involved creating a process 
with a large number of arguments. The invocation looked like:
exec("description", [procName, arg1, arg2] ~ generatedArgs ~ [arg3, 
arg4] ~ moreGeneratedArgs);


There were about ten or fifteen lines like that.

You'd suggest I rewrite that how?
char[][] args;
args ~= procName;
args ~= arg1;
args ~= arg2;
args ~= generatedArgs;
args ~= arg3;


Of course not. These runtime 'array literals' are just syntax sugar for 
a constructor call. Really, they are nothing more.


I'm quite surprised that there is a runtime function for this. I would 
expect codegen to emit something like:


array = __d_newarray(nBytes)
array[0] = exp0
array[1] = exp1
...


At worst, it would be something like:

exec("description", createArray(procName, arg1, arg2) ~ generatedArgs ~ 
createArray(arg3, arg4) ~ moreGeneratedArgs);


PHP does this. I haven't used PHP enough to hate it.

Depending on what the 'exec' signature is, it could be simpler than 
that. But that's the absolute worst case.


The language pays a heavy price for that little bit of syntax sugar.


The price being occasional heap allocation where it's unnecessary? The 
compiler should be able to detect this in many cases and allocate on the 
stack instead. Your createArray() suggestion doesn't have that advantage.


Or parsing difficulties? It's not an insanely difficult thing to parse, 
and people writing parsers for D comprise an extremely small segment of 
your audience.


Or just having another construct to know? Except in PHP, you can't use 
arrays without knowing about the array() function, and in D, you can't 
easily use arrays without knowing about array literals. So it's the same 
mental load.


You could say array() is more self-documenting, but that's only when you 
want someone who has no clue what D is to read your code. I think it's 
reasonable to require people to know what an array literal is.


What is the price?
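
For comparison, the createArray() helper discussed above is easy to write
with D's typesafe variadics; this is a sketch of the worst-case
replacement being debated, not a proposal for Phobos:

```d
// Typesafe variadic: the compiler builds the argument slice for us.
T[] createArray(T)(T[] elems...)
{
    // Copy to the heap; the argument slice may live on the caller's stack.
    return elems.dup;
}

void main()
{
    auto args = createArray("proc", "arg1", "arg2")
                ~ ["gen1"]
                ~ createArray("arg3");
    assert(args == ["proc", "arg1", "arg2", "gen1", "arg3"]);
}
```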


Re: clear()

2009-10-10 Thread Max Samukha
On Sat, 10 Oct 2009 14:06:16 +0400, "Denis Koroskin"
<2kor...@gmail.com> wrote:

>obj.clear(42);
>
>Wait, uniform function call syntax doesn't work with classes! Oh, well...
>
>clear!(C)(obj, 42);

We still need polymorphic behavior, meaning all constructors have to
be in classinfo, with meta information about parameter types, so that
an appropriate constructor can be found.

void clear(A...)(Object obj, A args)
{
  ...
  // find a matching constructor and call it
}

Object a = new B(41);
clear(a, 42); 





Re: Array literals' default type

2009-10-10 Thread Yigal Chripun

On 10/10/2009 10:11, Don wrote:

Christopher Wright wrote:

Don wrote:

I don't understand why runtime-determined array literals even exist.
They're not literals!!!
They cause no end of trouble. IMHO we'd be *much* better off without
them.


You don't see the use. I do. I would go on a murderous rampage if that
feature were removed from the language.

For example, one thing I recently wrote involved creating a process
with a large number of arguments. The invocation looked like:
exec("description", [procName, arg1, arg2] ~ generatedArgs ~ [arg3,
arg4] ~ moreGeneratedArgs);

There were about ten or fifteen lines like that.

You'd suggest I rewrite that how?
char[][] args;
args ~= procName;
args ~= arg1;
args ~= arg2;
args ~= generatedArgs;
args ~= arg3;


Of course not. These runtime 'array literals' are just syntax sugar for
a constructor call. Really, they are nothing more.
At worst, it would be something like:

exec("description", createArray(procName, arg1, arg2) ~ generatedArgs ~
createArray(arg3, arg4) ~ moreGeneratedArgs);

Depending on what the 'exec' signature is, it could be simpler than
that. But that's the absolute worst case.

The language pays a heavy price for that little bit of syntax sugar.



You keep calling these literals "constructor calls" and I agree that 
that's what they are. My question is then why not make them real 
constructors?


auto a = new int[](x, y, z);
auto b = new int[3](x, y, z);
auto c = new int[]; // empty array
auto d = new int[3]; // use default ctor: d == [0,0,0]

for arrays of class instances this could be extended to call a 
constructor for each index, something like:


// tuples would be very handy here..
auto e = new Class[2](Tuple!(args1), Tuple!(args2));



Re: clear()

2009-10-10 Thread Jacob Carlborg

On 10/10/09 12:06, Denis Koroskin wrote:

On Sat, 10 Oct 2009 13:08:01 +0400, Max Samukha 
wrote:


On Fri, 09 Oct 2009 21:50:48 +0200, Yigal Chripun 
wrote:


On 09/10/2009 19:53, Max Samukha wrote:

On Fri, 09 Oct 2009 11:40:43 -0500, Andrei Alexandrescu
 wrote:


I'm talking with Sean and Walter about taking the first step towards
eliminating delete: defining function clear() that clears the state of
an object. Let me know what you think.

One problem I encountered is that I can't distinguish between a
default
constructor that doesn't need to exist, and one that was disabled
because of other constructors. Consider:

class A {}
class B { this(int) {} }

You can evaluate "new A" but not "new B". So it's legit to create
objects of type A all default-initialized. But the pointer to
constructor stored in A.classinfo is null, same as B.

Any ideas?



The notion of default constructor is not quite clear.

class A
{
this(int a = 22) {}
}

Should A be considered as having a default constructor?

class B
{
this(int) {}
}

Should passing int.init to B's constructor be considered default
construction? If yes, we could recreate B using the init value. But
then:

class C
{
this(int a) {}
this(int a, int b) {}
}

Which constructor to call? The one with fewer parameters? What if
there are overloaded constructors with identical number of parameters?
Should we explicitly mark one of the constructors as default?


I agree. classinfo.defaultConstructor should be replaced by an array of
all the constructors. Only when the array is empty do you assume the
existence of the default, compiler-generated constructor.


I'd prefer complete runtime information for all members.

The problem is I do not understand what 'default constructor' and
'default construction' means in D for classes that have explicit
constructors with parameters. How to automatically construct this class:

class B
{
this(int a = 22) {}
}
?

For example, Object.factory will always return null for B, which is an
arbitrary limitation.

class C
{
this(int a) {}
}

For C, the "default constructor" should probably be generated like
this:

void C_ctor(C c)
{
c.__ctor(int.init);
}
etc.

Otherwise, I cannot see how one can reconstruct an instance of C in
'clear'.


obj.clear(42);

Wait, uniform function call syntax doesn't work with classes! Oh, well...

clear!(C)(obj, 42);


I've suggested to implement uniform function call syntax: 
http://d.puremagic.com/issues/show_bug.cgi?id=3382


Re: clear()

2009-10-10 Thread Denis Koroskin
On Sat, 10 Oct 2009 13:08:01 +0400, Max Samukha   
wrote:



On Fri, 09 Oct 2009 21:50:48 +0200, Yigal Chripun 
wrote:


On 09/10/2009 19:53, Max Samukha wrote:

On Fri, 09 Oct 2009 11:40:43 -0500, Andrei Alexandrescu
  wrote:


I'm talking with Sean and Walter about taking the first step towards
eliminating delete: defining function clear() that clears the state of
an object. Let me know what you think.

One problem I encountered is that I can't distinguish between a default
constructor that doesn't need to exist, and one that was disabled
because of other constructors. Consider:

class A {}
class B { this(int) {} }

You can evaluate "new A" but not "new B". So it's legit to create
objects of type A all default-initialized. But the pointer to
constructor stored in A.classinfo is null, same as B.

Any ideas?



The notion of default constructor is not quite clear.

class A
{
   this(int a = 22) {}
}

Should A be considered as having a default constructor?

class B
{
   this(int) {}
}

Should passing int.init to B's constructor be considered default
construction? If yes, we could recreate B using the init value. But
then:

class C
{
   this(int a) {}
   this(int a, int b) {}
}

Which constructor to call? The one with fewer parameters? What if
there are overloaded constructors with identical number of parameters?
Should we explicitly mark one of the constructors as default?


I agree. classinfo.defaultConstructor should be replaced by an array of
all the constructors. Only when the array is empty do you assume the
existence of the default, compiler-generated constructor.


I'd prefer complete runtime information for all members.

The problem is I do not understand what 'default constructor' and
'default construction' means in D for classes that have explicit
constructors with parameters. How to automatically construct this class:

class B
{
  this(int a = 22) {}
}
?

For example, Object.factory will always return null for B, which is an
arbitrary limitation.

class C
{
  this(int a) {}
}

For C, the "default constructor" should probably be generated like
this:

void C_ctor(C c)
{
c.__ctor(int.init);
}
etc.

Otherwise, I cannot see how one can reconstruct an instance of C in
'clear'.


obj.clear(42);

Wait, uniform function call syntax doesn't work with classes! Oh, well...

clear!(C)(obj, 42);


Re: clear()

2009-10-10 Thread Max Samukha
On Fri, 09 Oct 2009 21:50:48 +0200, Yigal Chripun 
wrote:

>On 09/10/2009 19:53, Max Samukha wrote:
>> On Fri, 09 Oct 2009 11:40:43 -0500, Andrei Alexandrescu
>>   wrote:
>>
>>> I'm talking with Sean and Walter about taking the first step towards
>>> eliminating delete: defining function clear() that clears the state of
>>> an object. Let me know what you think.
>>>
>>> One problem I encountered is that I can't distinguish between a default
>>> constructor that doesn't need to exist, and one that was disabled
>>> because of other constructors. Consider:
>>>
>>> class A {}
>>> class B { this(int) {} }
>>>
>>> You can evaluate "new A" but not "new B". So it's legit to create
>>> objects of type A all default-initialized. But the pointer to
>>> constructor stored in A.classinfo is null, same as B.
>>>
>>> Any ideas?
>>>
>>
>> The notion of default constructor is not quite clear.
>>
>> class A
>> {
>>this(int a = 22) {}
>> }
>>
>> Should A be considered as having a default constructor?
>>
>> class B
>> {
>>this(int) {}
>> }
>>
>> Should passing int.init to B's constructor be considered default
>> construction? If yes, we could recreate B using the init value. But
>> then:
>>
>> class C
>> {
>>this(int a) {}
>>this(int a, int b) {}
>> }
>>
>> Which constructor to call? The one with fewer parameters? What if
>> there are overloaded constructors with identical number of parameters?
>> Should we explicitly mark one of the constructors as default?
>
>I agree. classinfo.defaultConstructor should be replaced by an array of 
>all the constructors. Only when the array is empty do you assume the 
>existence of the default, compiler-generated constructor.

I'd prefer complete runtime information for all members.

The problem is I do not understand what 'default constructor' and
'default construction' means in D for classes that have explicit
constructors with parameters. How to automatically construct this class:

class B
{
  this(int a = 22) {}
}
?

For example, Object.factory will always return null for B, which is an
arbitrary limitation.

class C
{
  this(int a) {}
}

For C, the "default constructor" should probably be generated like
this:

void C_ctor(C c)
{
c.__ctor(int.init);  
}
etc.

Otherwise, I cannot see how one can reconstruct an instance of C in
'clear'.


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread Don

Jarrett Billingsley wrote:

On Fri, Oct 9, 2009 at 4:57 PM, Sean Kelly  wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

Thanks!
I plan to add more text at the end of the chapter that discusses the
opportunities of CTFE. Walter revealed to me that CTFE, particularly now
after it's been improved by leaps and bounds by Don and by Walter
himself, could obviate a lot of the traditional metaprogramming
techniques developed for C++.
One question that bugs me is, where do you draw the line? Say there's a
metaprogramming problem at hand. How to decide on solving it with CTFE
vs. solving it with templates? It would be great to have a simple
guideline that puts in contrast the pluses and minuses of the two
approaches.

CTFE is great for working with values while template metaprogramming is
great for working with types.  String mixins make CTFE good at working
with types as well, but I wouldn't consider them a novice-level feature.
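A minimal illustration of the split Sean describes — values via CTFE, types via template pattern matching:

```d
// CTFE: an ordinary function evaluated during compilation to yield a value.
int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }
static assert(factorial(5) == 120);

// Templates: pattern matching picks apart a type.
template ElementType(T : T[]) { alias T ElementType; }
static assert(is(ElementType!(int[]) == int));
```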


Throw templates in there. Boom! Goodbye, CTFE.

(.stringof and templates do not get along.)


That's just a bug. The more fundamental problem is that you can't 
instantiate a template from inside CTFE. I.e., you can cross from the 
"compile-time world" to the "runtime world" only once -- you can never 
get back.
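A small sketch of that one-way crossing: a function can itself be evaluated under CTFE, but a parameter inside it can never feed a template instantiation, because from the function's point of view it is not a compile-time constant:

```d
template Twice(int n) { enum Twice = 2 * n; }

int f(int x)
{
    // return Twice!(x);  // error: 'x' is not a constant here, even when
    //                    // f itself is being run at compile time
    return 2 * x;         // plain value computation is fine under CTFE
}

static assert(f(21) == 42);   // CTFE of f works; re-entering templates doesn't
```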


Re: CTFE vs. traditional metaprogramming

2009-10-10 Thread Don

Jarrett Billingsley wrote:

On Fri, Oct 9, 2009 at 3:49 PM, Andrei Alexandrescu
 wrote:

Thanks!


I plan to add more text at the end of the chapter that discusses the
opportunities of CTFE. Walter revealed to me that CTFE, particularly now
after it's been improved by leaps and bounds by Don and by Walter himself,
could obviate a lot of the traditional metaprogramming techniques developed
for C++.

One question that bugs me is, where do you draw the line? Say there's a
metaprogramming problem at hand. How to decide on solving it with CTFE vs.
solving it with templates? It would be great to have a simple guideline that
puts in contrast the pluses and minuses of the two approaches.

It is quite possible that templates get relegated to parameterized functions
and types, whereas all heavy lifting in metaprogramming should be carried
out with CTFE.


God, I wish we had a real forum with table capabilities. I can't even
rely on monospaced fonts..

Where templates win at metaprogramming:

Templates have pattern-matching capabilities for picking apart types.
CTFE is forced to reimplement part of the D lexer/parser to do so (and
combined with buggy/incompletely specified .stringof, you can't really
depend on your parsing to work right).


CTFE doesn't mean "string mixins using CTFE".
It just means CTFE. (BTW you can do string mixins with templates only, 
no CTFE, if you are completely insane).
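A sketch of that template-only variant: the mixed-in string is built purely by template string concatenation, with no function evaluation involved (MakeGetter and get_x are illustrative names):

```d
// Generate accessor source by concatenating string constants in a template.
template MakeGetter(string name)
{
    enum string MakeGetter = "int get_" ~ name ~ "() { return " ~ name ~ "; }";
}

class C
{
    int x = 7;
    mixin(MakeGetter!("x"));   // injects: int get_x() { return x; }
}

void main()
{
    assert((new C).get_x() == 7);
}
```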


Re: Array literals' default type

2009-10-10 Thread Don

Christopher Wright wrote:

Don wrote:

I don't understand why runtime-determined array literals even exist.
They're not literals!!!
They cause no end of trouble. IMHO we'd be *much* better off without 
them.


You don't see the use. I do. I would go on a murderous rampage if that 
feature were removed from the language.


For example, one thing I recently wrote involved creating a process with 
a large number of arguments. The invocation looked like:
exec("description", [procName, arg1, arg2] ~ generatedArgs ~ [arg3, 
arg4] ~ moreGeneratedArgs);


There were about ten or fifteen lines like that.

You'd suggest I rewrite that how?
char[][] args;
args ~= procName;
args ~= arg1;
args ~= arg2;
args ~= generatedArgs;
args ~= arg3;


Of course not. These runtime 'array literals' are just syntax sugar for 
a constructor call. Really, they are nothing more.

At worst, it would be something like:

exec("description", createArray(procName, arg1, arg2) ~ generatedArgs ~ 
createArray(arg3, arg4) ~ moreGeneratedArgs);


Depending on what the 'exec' signature is, it could be simpler than 
that. But that's the absolute worst case.


The language pays a heavy price for that little bit of syntax sugar.
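Don's hypothetical createArray is straightforward as a variadic template; the name and signature are his sketch, not a library function:

```d
// Allocate an array of T and fill it from the argument list; every
// argument must be assignable to the element type T.
T[] createArray(T, A...)(A args)
{
    auto result = new T[args.length];
    foreach (i, arg; args)
        result[i] = arg;
    return result;
}

// Worst-case usage from the message above:
// exec("description", createArray!(string)(procName, arg1, arg2)
//     ~ generatedArgs ~ createArray!(string)(arg3, arg4) ~ moreGeneratedArgs);
```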