Re: Moving to D

2011-01-06 Thread Travis Boucher

On 01/06/11 17:55, Vladimir Panteleev wrote:


Disclaimer: I use Git, and avoid Mercurial if I can mainly because I
don't want to learn another VCS. Nevertheless, I tried to be objective
above.
As I mentioned on IRC, I strongly believe this must be a fully-informed
decision, since changing VCSes again is unrealistic once it's done.



Recently I have been using mercurial (bitbucket).  I have used git 
previously, and subversion a lot.


The question, I think, is less git vs. mercurial and more 
(git|mercurial) vs. subversion, and even more (github|bitbucket) vs. 
dsource.


I like dsource a lot, however it doesn't compare feature-wise to github & 
bitbucket.  The only feature argument is forums, and in reality we 
already have many places to offer/get support for D and D projects other 
than the dsource forums (newsgroups & IRC for example).


Another big issue I have with dsource is that it's hard to tell active 
projects from projects that have been dead (sometimes for 5+ years).


The 'social coding' networks allow projects to be easily revived in the 
case they do die.


Personally I don't care which is used (git|mercurial, github|bitbucket), 
as long as we find a better way of managing the code, and a nice way of 
doing experimental things and having a workflow to have those 
experimental things pulled into the official code bases.


dsource has served us well, and could still be a useful tool (maybe have 
it index D stuff from github|bitbucket?), but it's time to start using 
some of the other, better tools out there.




Re: Moving to D

2011-01-06 Thread Travis Boucher

On 01/06/11 18:30, Vladimir Panteleev wrote:

On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin
michel.for...@michelf.com wrote:


Easy forking is nice, but it could be a problem in our case. The
license for the backend is not open-source enough for someone to
republish it (in a separate own repo) without Walter's permission.


I suggested elsewhere in this thread that the two must be separated
first. I think it must be done anyway when moving to a DVCS, regardless
of the specific one or which hosting site we'd use.



I agree, separating out the proprietary stuff has other interesting 
possibilities such as a D front end written in D and integration with 
IDEs and analysis tools.


Of course all of this is possible now, but it'd make merging front end 
updates so much nicer.




lgamma gamma reentrant

2011-01-05 Thread Travis Boucher

I need some feedback from some of the math nerds on the list.

The functions gammaf and lgammaf are not reentrant and set a global 
'signgam' to indicate the sign.


Currently it looks like druntime/phobos2 use these non-reentrant 
versions, which could cause some issues in a threaded environment.


My questions for the math nerds are:

How important is this signgam value?

Should we provide a safe way of getting this value?

In std.math should we wrap the reentrant versions and store signgam in 
TLS, or should we expose the *_r reentrant versions in std.math directly?


I think now in D2 global variables are stored thread-local by default, 
so providing a safe signgam would be trivial (of course only accessible 
to the thread that called the lgamma/gamma).
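
A minimal sketch of the TLS-wrapper option, assuming the C library 
provides lgammaf_r (a common BSD/glibc extension; check your libc for 
the exact name and signature):

```d
// Sketch of the TLS-wrapper option discussed above.  Assumes the C
// library provides lgammaf_r (a common BSD/glibc extension).
extern (C) float lgammaf_r(float x, int* sign);

int signgam;  // module-level variables are thread-local in D2

float lgamma(float x)
{
    // Each thread writes to its own copy of signgam, so concurrent
    // callers no longer race on a shared global.
    return lgammaf_r(x, &signgam);
}
```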


Another option is to just leave it alone.  Personally I couldn't care 
less since I have never used the functions.


-- tbone


Re: Advocacy (Was: Who here actually uses D?)

2011-01-03 Thread Travis Boucher

On 11-01-02 03:35 AM, Walter Bright wrote:

bearophile wrote:

But D is not currently the best fit to write a kernel


Based on what?


Currently the issues I see with D in kernel land are the fat runtime and 
the type system.  Although you can reduce the runtime requirements, you 
end up losing a lot of features and might as well be writing in C at that 
point anyway.


I don't think D as a language is a bad fit for a kernel (conceptually I 
think it'd be a great fit for a kernel).  I think the bigger issue is 
the current state of the D+druntime implementations.


Re: DMD Automatic Dependency Linking

2010-11-19 Thread Travis Boucher

On 10-11-16 12:04 PM, Matthias Pleh wrote:

Am 16.11.2010 18:38, schrieb Travis:



The one thing I have been wondering however is why doesn't DMD have a
flag for
easy project building which compiles dependencies in a single command.


[...]


Thanks,
tbone




Have you tried 'rdmd' ?


Son of a bitch, I didn't realize rdmd did the dependencies as well.

Previously I have only used rdmd for D 'scripts' and unittesting.

I have done some testing with it using derelict (with some modifications 
to work with D2) and gtkd and it works perfectly (slow for gtkd, but 
gtkd is kinda slow to compile anyway).


I'll start using rdmd and suggest it to others.

Thanks!


Re: Tidy auto [Was: Re: @disable]

2010-01-19 Thread Travis Boucher

Jerry Quinn wrote:

bearophile Wrote:


myself). If this is true then a syntax like:
auto immutable x = y * 2;

can be seen as too much long and boring to write all the time, so the immutable keyword may need to be 
changed again :-) For example into val (also as retard has said), so it becomes shorter (here 
auto is present if and only if the programmer wants type inference, as I have written in other posts, to 
clean up its semantics):
auto val x = y * 2;


How about fixed?  It's a little longer than val, but still carries set in 
stone semantics.

auto fixed x = y * 2;



It'd be nice to not introduce 'fixed' unless it referred to fixed-point 
math.  (Not using it at all leaves the opening for vendor extensions 
targeting embedded platforms.)


Re: D+Ubuntu+SDL_image said undefined reference

2009-12-30 Thread Travis Boucher

alisue wrote:

Trass3r Wrote:


Michael P. schrieb:

How come you are not using something like rebuild or DSSS?

don't forget xfBuild ;)


Well... Because I haven't heard about them. I knew 'rebuild' and 'DSSS' but I 
thought it might be too old (For Mac OS X 10.4 I use Mac OS X 10.6 and Most 
of time when I try to use package for Mac OS X 10.4 they doesn't take a effort)

So I just decide to use Ant(for now using Makefile but in future) and build all 
of .d file in src directory and manually link to lib. I knew it stupid but 
could't find the way.

What is the best way to do it. Is DSSS not too old? or xfBuild

P.S.
Well... Where can I find some tutorial things? I found some but too old and 
can't compile.

P.S.
Derelict... seem good I'll try.


http://www.dsource.org/projects

The 3 big ones I personally use are dsss, gtkd and derelict.  Both gtkd 
and derelict do runtime linking (i.e. dlopen()) and both work well under 
winderz as well (I test some of my crap in windows once in a while with 
minimal code changes, just some build cruft).


Re: dmd-x64

2009-12-23 Thread Travis Boucher

alkor wrote:

i've tested g++, gdc & dmd on an ordinary task - processing compressed data 
using zlib
all compilers're made from sources, target - gentoo x32 i686

c++ & d codes are simplest & alike

but, dmd makes faster result than g++
and gdc loses to g++ 'cause gdc doesn't have any optimization options

gdc makes slower code than dmd and doesn't support d 2.0, so it's useless

so ... i'm waiting for dmd x64



If you can't get gdc to generate optimized code, then you are using it 
wrong.


Re: dmd-x64

2009-12-23 Thread Travis Boucher

alkor wrote:

$ dmd -O -release -oftest-dmd test-performance.d && strip test-dmd
$ gdc -O3 test-performance.d -o test-gdc && strip test-gdc
so, dmd's code optimization rules 
Walter made nice lang & good compiler - it's true





Add -frelease to gdc (if you want a fair comparison), and look at the 
code generated rather than running a micro benchmark on something that 
takes a fraction of a second to run.


Re: dmd-x64

2009-12-23 Thread Travis Boucher

alkor wrote:

thanks, w/ -frelease gdc makes a good result - faster than dmd's one & normal 
size



That's because -frelease removes certain array bounds checking code, 
assertion testing, and I think a few other things.
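
As a sketch of what release mode elides (a hedged illustration, not an 
exhaustive list of what -frelease does):

```d
// In release builds, both checks below compile to nothing, which is
// largely why the release benchmark runs faster.
void f(int[] a, size_t i)
{
    assert(i < a.length);  // assertion: removed in release builds
    int x = a[i];          // array bounds check: removed in release builds
}
```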


Re: dmd-x64

2009-12-22 Thread Travis Boucher

alkor wrote:

it's bad
d's good enough to make real projects, but the compiler MUST support linux x64 
as a target platform

believe me, it's time to make 64-bit code generation

is it possible to take the back-end (i.e. code generation) from gcc or is it 
too complicated?


Look up gdc and ldc; both can target x86_64.  gdc tends to be lagging 
behind (A LOT) in the dmd front end, ldc not as much.


Re: mixin not overloading other mixins, Bug or feature?

2009-12-22 Thread Travis Boucher

BCS wrote:

Hello Travis,


float func1(float[2] v) { return v[0]; }
float func1(float[3] v) { return v[0]; }
template Test (size_t S) {
float func2(float[S] v) { return v[0]; }
}
mixin Test!(2);
mixin Test!(3);
void main() {
float[2] a2;
func1(a2);
func2(a2);
}
Here the call to func1 is fine, but the call to func2 results in a
conflict.

Test!(2).func2 conflicts with unit.Test!(3).func2

This was tested with ldc (dmd 1.051).

Is this a bug or a feature?



IIRC it's a feature. I forget where, but I recall reading that they don't 
overload.





I know they don't, I am just wondering why.  Is it a side 
effect/oversight of the implementation (misfeature), something that is 
supposed to work (bug), or is there a concrete reason why (feature)?




Re: dmd-x64

2009-12-22 Thread Travis Boucher

Matt wrote:

On 12/22/09 2:34 AM, Travis Boucher wrote:

alkor wrote:

it's bad
d's good enough to make real projects, but complier MUST supports
linux x64 as a target platform

believe, it's time to make 64-bit code generation

is it possible to take back-end (i.e. code generation) from gcc or
it's too complicated?


Look up gdc and ldc, both can target x86_64. gdc tends to be lagging
behind (ALOT) in the dmd front end, ldc not as much.


GDC is being maintained again. See 
http://bitbucket.org/goshawk/gdc/wiki/Home
They are up to DMD 1.043 and there has been significant activity 
recently. It could take a while for them to get fully caught up, but 
they are making good progress.


gdc is still lagging quite a bit; I've been following the goshawk 
branch.  The problem here is he has to deal with both the major DMD 
changes (in 2 different D versions) and the big changes in GCC, so 
maintaining gdc itself would be an annoying process since there isn't a 
bit of support on either end of the bridge.  (DM does what's best for DM, 
and gcc won't accept a language like D, even though it has more 
similarities to C/C++ than java/fortran/ada do.)


ldc on the other hand has a great structure which promotes using it as a 
backend for a different front end; however, it doesn't (yet) generate 
code nearly as good as gcc's.


dmd's focus seems to be more about being a reference compiler than a 
flexible compiler that generates great code.


Personally, I still use an old-ass gdc based on GCC 4.1.3, DMD 1.020, 
because it happens to be the one that best supports my platform 
(FreeBSD/amd64).  The only real issues I run into are a few problems with 
CTFE and dsss/rebuild's handling of a few compiler errors (e.g. 
writefln(...; results in rebuild exploding).




Re: dmd-x64

2009-12-22 Thread Travis Boucher

bearophile wrote:

Travis Boucher:
ldc on the other hand has a great structure which promotes using it as a 
backend for a different front end, however it doesn't (yet) generate code 
nearly as good as gcc.


Can you explain better what do you mean?

Bye,
bearophile


llvm has been designed for use by code analyzers, compiler development, 
IDEs, etc.  The APIs are well documented and well thought out, as is its 
IR (which is an assembler-like language itself).  It is easy to use 
small parts of llvm due to its modular structure.  Although its design 
promotes all sorts of optimization techniques, it's still pretty young 
(compared to gcc) and just doesn't have all of the optimization stuff 
gcc has.


gcc has evolved over a long time, and contains a lot of legacy cruft. 
Its IR changes on a (somewhat) regular basis, and its internals are a 
big hairy intertwined mess.  Trying to learn one small part of how GCC 
works often involves learning how a lot of other unrelated things work. 
However, since it is so mature, many different optimization techniques 
have been developed, and continue to be developed as underlying hardware 
changes.  It also supports generating code for a huge number of targets.


When I say 'ldc' above, I really mean 'llvm' in general.


Re: mixin not overloading other mixins, Bug or feature?

2009-12-22 Thread Travis Boucher

BCS wrote:

By don't overload, I'm taking about defined to not overload.

That removes bug leaving misfeature, and feature.

I think the rational is that allowing them to overload makes the order 
of expansion hard to impossible to work out.

For example:

template Bar(T) { const bool v = true; }
template Foo(T)
{
  static if(Bar!(T).v)
  template Bar(U : T) { const bool v = false; }
  else
  template Bar(U : T) { const bool v = true; }
}

mixin Foo!(int);

static assert(Bar!(char)); // works
static assert(Bar!(int));  // what about this?

By making mixins not overload, many (if not all) such cases become illegal.





I'm not fully sure this applies to my issue; maybe that is because I am 
not fully sure how templates are implemented (in my mind, I picture 
something similar to macro expansion).


My issue is with function overloads.  2 functions, same name, different 
parameters.  Right now my only solution is hacky string mixins.
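
For illustration, the hacky string-mixin workaround looks something like 
this sketch (D2 syntax; the GenFunc2 helper name is hypothetical). 
Generating the overloads as source text sidesteps the template-mixin 
no-overload rule, at the cost of readability:

```d
// Hypothetical helper: build each overload as source text, then
// mix the strings in.  String mixins paste plain declarations, so
// the two func2 overloads coexist like hand-written ones.
template GenFunc2(size_t S)
{
    enum GenFunc2 = "float func2(float[" ~ S.stringof ~ "] v)"
                  ~ " { return v[0]; }";
}

mixin(GenFunc2!(2));
mixin(GenFunc2!(3));
```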


It seems to me that 2 templates should be able to mix into the same 
struct, overloading the same functions, if:


1. They don't contain the same parameters as each other.  If they do, 
then conflict.


2. They don't contain the same parameters as the struct they are mixing 
into.  If they do, then use the one in the struct (like it works now).


mixin not overloading other mixins, Bug or feature?

2009-12-21 Thread Travis Boucher

float func1(float[2] v) { return v[0]; }
float func1(float[3] v) { return v[0]; }

template Test (size_t S) {
float func2(float[S] v) { return v[0]; }
}

mixin Test!(2);
mixin Test!(3);

void main() {
float[2] a2;

func1(a2);
func2(a2);
}


Here the call to func1 is fine, but the call to func2 results in a conflict.

Test!(2).func2 conflicts with unit.Test!(3).func2

This was tested with ldc (dmd 1.051).

Is this a bug or a feature?


Re: Short list with things to finish for D2

2009-11-24 Thread Travis Boucher

Denis Koroskin wrote:
On Tue, 24 Nov 2009 14:00:18 +0300, Gerrit Wichert g...@green-stores.de 
wrote:



how about opLimit ?


I recall that Visual Basic has UBound function that returns upper bound 
of a multi-dimensional array:


Dim a(100, 5, 4) As Byte

UBound(a, 1) - 100
UBound(a, 2) - 5
UBound(a, 3) - 4

Works for single-dimensional arrays, too:

Dim b(8) As Byte
UBound(b) - 8


See the length property.

char[] a = "Hello, World!";  // a.length == 13
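
To mirror the multi-dimensional UBound examples, a quick D sketch (note 
that UBound returns the upper index, while .length is the element count, 
so the numbers line up only loosely):

```d
void main()
{
    // D static array dimensions read right to left: 100 x 5 x 4.
    byte[4][5][100] a;

    assert(a.length == 100);    // outermost dimension
    assert(a[0].length == 5);   // middle dimension
    assert(a[0][0].length == 4); // innermost dimension
}
```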


Re: thank's ddmd !

2009-11-23 Thread Travis Boucher

Denis Koroskin wrote:

Travis Boucher has shown his interest in contribution, but he currently
has issues with D2 not working on FreeBSD. To quote him:

I have dmd working, and druntime (which was a quick hack to make work, 
but should work well enough).  The problems I am having at the moment 
is with phobos,  mostly because I don't fully understand how phobos 
interacts with druntime (certain things seem to be duplicated between 
druntime and phobos).


Once I figure out how all that works (which I'll want to do anyway for 
ddmd), I should have a working port of D2 for FreeBSD.


You can join the project, too, developing is not hard at all. No special 
knowledge is required since porting code is pretty much a 
straightforward process. Everyone who is interested is welcome. Contact 
me if you need help to get yourself started.


I have pretty much given up on D2 until it is finalized, and development 
focus changes from specification to library & compiler implementation 
and enhancements.  I think that will be the best time (for me) to get 
involved in the process.


Unfortunately for me (and possibly others) I got into D at a really 
shitty time.  The language itself is in a state of flux (at least for 
D2).  So I am shifting my own focus to application development with D1, 
which I am sure will be around for quite a while before D2 gets 
community acceptance.


Re: Conspiracy Theory #1

2009-11-22 Thread Travis Boucher

Don wrote:

Travis Boucher wrote:

retard wrote:

Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:


The future of D to me is very uncertain.  I see some very bright
possibilities in the embedded area and the web cluster area (these are
my 2 areas, so I can't speak on the scientific applications).  However
the limited targets for the official DMD, and the adoption lag in gdc
(and possibly ldc) are issues that need to be addressed before I can 
see

the language getting some of the real attention that it deserves.


Agreed, basically you would need to go the gdc/gcc route since e.g. arm/
mips backends on llvm aren't as mature and clearly digitalmars only 
targets x86.


I hope sometime after the D2 specs are finalized, and dmd2 stabilizes, 
Walter decides to make the dmd backend Boost or MIT licensed (or 
similar).


AFAIK, he can't. He doesn't own exclusive rights to it. The statement 
that it's not guaranteed to work after Y2K is a Symantec requirement, it 
definitely doesn't come from Walter!




Sadly that's even more reason to focus on non-Digital Mars compilers. 
Personally I like the Digital Mars compiler, it's relatively simple 
(compared to the gcc code mess), but the legacy Symantec stuff could be 
a bit of a bottleneck.


Re: Conspiracy Theory #1

2009-11-21 Thread Travis Boucher

retard wrote:

Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:


The future of D to me is very uncertain.  I see some very bright
possibilities in the embedded area and the web cluster area (these are
my 2 areas, so I can't speak on the scientific applications).  However
the limited targets for the official DMD, and the adoption lag in gdc
(and possibly ldc) are issues that need to be addressed before I can see
the language getting some of the real attention that it deserves.


Agreed, basically you would need to go the gdc/gcc route since e.g. arm/
mips backends on llvm aren't as mature and clearly digitalmars only 
targets x86.


I hope sometime after the D2 specs are finalized, and dmd2 stabilizes, 
Walter decides to make the dmd backend Boost or MIT licensed (or 
similar).  Then we can all call the Digital Mars compiler 'the reference 
implementation', and standardize on GCC/LLVM.


For most applications/libraries, forking means death.  But look at the 
cases of bind (DNS), sendmail (SMTP), and even Apache (and its NCSA 
roots).  These implementations of their respective protocols are still 
the 'standard' and 'reference' implementations, they still have a huge 
install base, and they still see active development.


However, their alternatives in many cases offer better support, features 
and/or speed (not to mention security, especially in the case of bind 
and sendmail).


Of course, I am not even touching on the windows end of things, the 
weird marketing and politics involved in windows software I can't 
comment on as it is too confusing for me.  (freeware, shareware, 
crippleware, EULAs).


Re: Short list with things to finish for D2

2009-11-21 Thread Travis Boucher

Justin Johansson wrote:

Stewart Gordon wrote:

Denis Koroskin wrote:

On Sat, 21 Nov 2009 09:06:53 +0300, Don nos...@nospam.com wrote:


Justin Johansson wrote:

Stewart Gordon wrote:

snip

Why I believe opLength and opSize are also wrong names:

http://www.digitalmars.com/d/archives/digitalmars/D/announce/Re_opDollar_12939.html 
http://d.puremagic.com/issues/show_bug.cgi?id=3474


Stewart.

 FWIW, another suggestion: opCount
 Though I'm unsure if that is also the wrong name by your criteria.
 -- Justin

Like opSize(), opCount() only makes sense for integers.


opDim(ension)?


You've lost me

Stewart.


Me too.

Another suggestion: opRational.

I jest :-)

--Justin


Is that something like opOp?  The operation you define to define 
operations for new operations?


Re: Conspiracy Theory #1

2009-11-21 Thread Travis Boucher

Nick Sabalausky wrote:
dsimcha dsim...@yahoo.com wrote in message 
news:he6aah$4d...@digitalmars.com...

== Quote from Denis Koroskin (2kor...@gmail.com)'s article

Aren't uint array allocations have hasPointers flag set off? I always
thought they aren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers.  In this case, 
false
pointers keep each instance of foo[] alive, leading to severe memory 
leaks.


I don't suppose there's a way to lookup the pointers the GC believes it has 
found to a given piece of GC-ed memory? Sounds like that would be very 
useful, if not essential, for debugging/optimizing memory usage.





Maybe extend the GC interface so the compiler, and the language in 
general, can give hints on what the memory is being used for.  This 
could even be extended to application code as well.


MEM_OBJECT, MEM_STRUCT, MEM_PTRARRAY, MEM_ARRAY, etc. (I haven't fully 
thought this through, so these examples may be bad).


Then the GC implementations can decide how to allocate the memory in the 
best way for the underlying architecture.  I know this would be useful 
on weird memory layouts found in embedded machines (NDS for example), 
but could also be extended language-wise to other hardware memory areas, 
for example allocating memory on video cards or DSP hardware.


Like I said, this isn't something I have thought through much, and I 
don't know how much (if any) compiler/GC interface support would be 
required.
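
Purely to make the idea concrete, here is one shape such a hinted 
interface could take (the MEM_* names are the hypothetical ones from 
the post, and the HintedGC interface is invented for the sketch):

```d
// Illustrative only: a hint enum plus an allocation entry point
// that a GC implementation could use to pick a memory region.
enum MemHint { MEM_OBJECT, MEM_STRUCT, MEM_PTRARRAY, MEM_ARRAY }

interface HintedGC
{
    // e.g. pointer-free MEM_ARRAY blocks could go in memory the
    // collector never scans, or in special hardware regions.
    void* alloc(size_t size, MemHint hint);
}
```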


Re: removal of cruft from D

2009-11-21 Thread Travis Boucher

Leandro Lucarella wrote:

Walter Bright, el 21 de noviembre a las 11:51 me escribiste:

Nick Sabalausky wrote:

Yes! Capitalization consistency in the predefined versions! If it
needs to be worded as a removal, then Remove version's
capitalization inconsistencies ;). The current state of that is
absolutely ridiculous, and frankly, a real PITA (Ok, I need to
version for Blah OS...now what random capitalization did Walter
chose to use for that one again...?). I don't care about that
change breaking existing code: For one thing, it's D2, it's not
supposed to be stable yet, and secondly: Just say with this
release, grep all your code for version and update your
capitalizations, or, better yet, depricate any use of the old
names as errors, and just get the damn issue fixed already!

The choices were not random. They coincided with the common usage of
the underlying C compiler.


Right, like OSX.



http://predef.sourceforge.net/preos.html has a decent list of macros 
defined for different OSes (and even different compilers in some cases).


Be glad he didn't use the different underscores.  I still think 
consistency would be nice (either all caps or no caps).


Re: removal of cruft from D (OT: XML rant n' rage, YAML)

2009-11-21 Thread Travis Boucher

Chad J wrote:

Justin Johansson wrote:

I wasn't thinking XSLT particularly.

By XML aware, I meant awareness of (any parts of) the wider XML
ecosystem in general and W3C related specs so not just XML syntax but
including XML Schema Datatypes for example.  Obviously XSLT is something
that would be implemented in a library rather than being reflected in a
language but such a library would be easier to implement in a language
that acknowledged XML Schema Datatypes.

In the case of XML syntax, note that both Scala and JavaScript support
XML syntax at the language level (the latter via the E4X extension to
JavaScript).  At some point in the (distant) future, D might support XML
syntax in the language in similar fashion to Scala, who knows.  I
understand that D1 has some ability to embed D code in HTML.  Though
I've never used it, and considering that (X)HTML is an application of
XML, this is at least an acknowledgement by D that HTML exists!

My point basically boils down to this.  We all accept IEEE Standard for
Floating-Point Arithmetic (IEEE 754) as the basis for the binary
representation of floating point data and nobody is going to argue
against that.  In terms of the evolution of standards, XML Schema
Datatypes does for the lexical representation of common datatypes
(numeric and string data), what IEEE 754 does for floating point data at
the binary level.

In the future I believe that PL's will implicitly acknowledge XML Schema
Datatypes as much as vernacular PL's implicitly acknowledge IEEE 754
today and that's why I took shot at your comment Useless hindrance to
future language expansion.

Cheers
Justin


Thank you for the well written explanation.

Now then, if XML is the way of the future, just shoot me now.

I know ActionScript 3 also supports XML syntax at the language level.
When I first learned this I likely had a huge look of disgust on my
face.  Something like (╬ ಠ益ಠ).  Requiring a general purpose programming
language to also implement XML is just too harsh for too little gain.
Wrap that stuff in qoutes.  D even has a rather rich selection of string
literals; too many if you ask me.  I really do not understand why XML
should have such a preferred status over every other DSL that will find
itself embedded in D code (or any other PL for that matter).

In other news, I discovered YAML.  I haven't used it enough to see if it
has a dark side or not, but so far it looks promising.  It doesn't make
my eyes bleed.  That's a good start.  It may just be worthy of me using
it instead of rolling my own encodings.

And yes, I'll roll my own encodings if I damn well feel like it.  I plan
on using D for hobby game programming in the future, so I have no desire
to drink the over-engineered koolaid that is XML.  I'll swallow SVG, but
only in small doses.  SVG is actually useful because Inkscape exists,
but I don't really intend to implement all of it, since SVG is also
quite over-engineered.

Ah, that felt good.

- Chad


Face it, XML is a text-based markup language, not a programming language. 
 Text is for strings, and belongs in quotes.  I don't care if the 
underlying data is a structure, or some logical construct which pretends 
to be code.


XML is not a programming language.  We should not be hindered by it.  I 
do not want to have to use &amp; codes for extended characters either. 
Also, D is targeted at being a system-level programming language.  XML 
does not belong in system-level code (yes redhat, I am glaring at you).


We already have standards which we follow, including UTF-8/16/32.  If 
you want to standardize the way we represent numbers beyond the way we 
are doing it now, then we might as well implement full localization and 
binary-formatted source code.  I guess my rant is simple: XML is XML, D 
is D, and mixing them is stupid.


DMD - Druntime - Phobos questions.

2009-11-21 Thread Travis Boucher
I am trying to learn some of the internal implementation of D2, mostly 
so I can make it work under other platforms.  Right now that means making 
it work under FreeBSD, since most of the stubs for FreeBSD already exist. 
In the future, this will be extended to other operating systems, and 
possibly embedded targets or even OS development itself.


I need some help with the following, to make sure I am correct and to 
fill in any missing pieces of the puzzle.


---
DMD - Digital Mars D Compiler.
http://svn.dsource.org/projects/dmd/

- src/ contains the 'front end'.  This is the lexical and semantic analysis.

- src/backend/ contains the code generation parts of the compiler.

- src/root/ contains the application glue, combining the front end and 
back end into something useful.


- src/tk contains helper code.

Linking (on gcc-based platforms) is done externally.  The compiler just 
generates objects appropriate for linking with external linkers.


Overall, the dependencies are minimal, and host porting is trivial. 
Target OS porting is less trivial, but still pretty easy assuming 
well-documented and standardized object formats.  Target CPU porting: 
don't even bother trying with DMD.


---
druntime - The runtime
http://svn.dsource.org/projects/druntime/

- import/ contains the core interfaces for the D language.  These 
interfaces (at least object.di and parts of core/*) need to be implemented.


- src/gc contains the garbage collector implementation.  I assume it is 
separated from the rest of the runtime to ease swapping out the GC.


- src/common/core contains the default implementation of the interfaces 
in import/.  Also serves as a good example of how to implement the 
runtime in multiple languages (in this case, I see some D, some C and 
some assembly).


- src/compiler - This one I am not too sure about.  Not sure how and why 
it differs from src/common/core.  This is where object.d seems to be 
implemented.


---
phobos - The standard library (at least one of them)
http://svn.dsource.org/projects/phobos/

I won't go into too much detail of how this is organized.  Overall it is 
the stuff from 'import std.*', the end-user callable code.  std.c 
contains the (mostly) unsafe direct interface to libc, and the rest is 
wrappers around it.  (Of course this description is oversimplified.)


The standard library isn't something that is even really required (in 
the same way that libc isn't really required for C applications).


However implementing and using these interfaces (or the tango 
interfaces) will make other code written in D work.


It should even be possible to use both tango and phobos in the same 
application (correct me if I am wrong here please).


---
Some things I am still unclear about.

- How does dmd know where the standard libraries (or interfaces) live? 
Purely via command line?  (since dmd.conf just modifies the command line)


- How does dmd know what to link in?  Same as the include directories? 
druntime.a is installed somewhere, and a -ldruntime (or similar) command 
line is used?
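
For what it's worth, both questions seem to come down to dmd.conf: it 
just prepends switches to the command line, so imports and libraries are 
found purely through -I and -L flags.  A rough sketch (the paths and 
library name here are illustrative; the real ones depend on the install):

```
[Environment]
; -I tells dmd where the druntime/phobos imports live;
; -L passes flags through to the system linker, including
; the search path and the runtime library itself.
DFLAGS=-I/usr/local/include/d -L-L/usr/local/lib -L-lphobos2
```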


- What requires what?  Phobos isn't required (but without it or 
something similar, things are pretty useless).


- How much of druntime is really required?  This is an important 
question for embedded/OS development targets.
 http://svn.dsource.org/projects/druntime/trunk/src/gc/stub/gc.d is a good 
example of a minimal garbage collector.


 http://svn.dsource.org/projects/druntime/trunk/import/object.di seems 
to be the only import that is *required*.


 http://svn.dsource.org/projects/druntime/trunk/import/core/sys/ seems 
to be system-dependent, and not required.


 http://svn.dsource.org/projects/druntime/trunk/import/core/stdc/ seems 
to be mostly an abstraction of libc, so not really needed.


These questions obviously show my interest in (future) development on 
either embedded platforms, or for OS development.


---
Other general D questions.

- What is the general consensus on the different D compiler 
implementations?  I know this is a very opinionated question; I am 
looking for answers that relate to the D implementation and not the 
compilers themselves.  I am mostly interested in GCC due to its huge 
list of targets, complete and mature toolchain, and because it's 
something I've always used.


- What is the general consensus on the different D standard libraries? 
Again, I don't want religion here, just the overall state related to D2. 
From what I've seen, phobos is similar (in functionality) to something 
like libc (plus extras, of course), and tango would be more like boost 
and the STL.



Thanks,
Travis


Re: hello world

2009-11-21 Thread Travis Boucher

Ellery Newcomer wrote:

On 11/21/2009 02:11 PM, Jesse Phillips wrote:

On Sat, 21 Nov 2009 14:01:23 -0600, Ellery Newcomer wrote:


Just switched back to 64 bit fedora, and wonder of wonders, dmd doesn't
work. Well, actually dmd sorta does. The linker doesn't. I'm getting

$ ./dmd test.d

/usr/bin/ld: crt1.o: No such file: No such file or directory



ideas?

dmd 2.036


While not an error I remember seeing, you might take a look at:

http://stackoverflow.com/questions/856328/compiling-with-dmd-on-64bit-
linux-or-linking-with-32bit-object-files


That helped!

yum install glibc-devel.i686

was all.

Tapadh leibh!




You can also look at http://bitbucket.org/goshawk/gdc/wiki/Home for 
instructions on getting D2 working with gdc.  It's only at 2.015, but 
hopefully it will get updated.  (In fact, I am thinking of doing it 
myself manually.)


Re: Can we drop static struct initializers?

2009-11-20 Thread Travis Boucher

Leandro Lucarella wrote:

Walter Bright, el 19 de noviembre a las 23:53 me escribiste:

It's not difficult to fix these compiler problems, but I'm just
not sure if it's worth implementing. Maybe they should just be
dropped? (The { field: value } style anyway).

Funny, I've been thinking the same thing. Those initializers are
pretty much obsolete, the only thing left is the field name thing.
To keep the field name thing with the newer struct literals would
require named function parameters as well, something doable but I'm
not ready to do all the work to implement that yet.


Is nice to read that you like the idea of having named function
parameters, even when you don't have the time or don't want to implement
them :)



What's even nicer is that the dmd front end and back end are open source, 
allowing anyone to implement them if they really want to.


Of course it will be even nicerer once the back end is at a state where 
it can be under a less restrictive license (single user, no 
redistribution? seriously?).


Re: Conspiracy Theory #1

2009-11-20 Thread Travis Boucher

dsimcha wrote:

== Quote from Denis Koroskin (2kor...@gmail.com)'s article

On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha dsim...@yahoo.com wrote:

== Quote from Travis Boucher (boucher.tra...@gmail.com)'s article

dsimcha wrote:

== Quote from Travis Boucher (boucher.tra...@gmail.com)'s article

Sean Kelly wrote:
 Its harder
to create a memory leak in D then it is to prevent one in C.

void doStuff() {
uint[] foo = new uint[100_000_000];
}

void main() {
while(true) {
doStuff();
}
}


Hmm, that seems like that should be an implementation bug.  Shouldn't
foo be marked for GC once it scope?  (I have never used new on a
primitive type, so I don't know)

It's conservative GC.  D's GC, along with the Hans Boehm GC and probably
most GCs
for close to the metal languages, can't perfectly identify what's a
pointer and
what's not.  Therefore, for sufficiently large allocations there's a high
probability that some bit pattern that looks like a pointer but isn't
one will
keep the allocation alive long after there are no real references to
it left.

Aren't uint array allocations have hasPointers flag set off? I always
thought they aren't scanned for pointers (unlike, say, void[]).


Right, but they can still be the target of false pointers.  In this case, false
pointers keep each instance of foo[] alive, leading to severe memory leaks.


But the issue is more of a GC implementation issue than a language 
issue, correct?  Or is this an issue with all lower-level-language 
garbage collectors?


I do not know much about GC, just basic concepts.



Re: Conspiracy Theory #1

2009-11-20 Thread Travis Boucher

Denis Koroskin wrote:

On Fri, 20 Nov 2009 19:24:05 +0300, dsimcha dsim...@yahoo.com wrote:


== Quote from Travis Boucher (boucher.tra...@gmail.com)'s article

dsimcha wrote:
 == Quote from Denis Koroskin (2kor...@gmail.com)'s article
 On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha dsim...@yahoo.com 
wrote:

 == Quote from Travis Boucher (boucher.tra...@gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.tra...@gmail.com)'s article
 Sean Kelly wrote:
  Its harder
 to create a memory leak in D then it is to prevent one in C.
 void doStuff() {
 uint[] foo = new uint[100_000_000];
 }

 void main() {
 while(true) {
 doStuff();
 }
 }

 Hmm, that seems like that should be an implementation bug.  
Shouldn't

 foo be marked for GC once it scope?  (I have never used new on a
 primitive type, so I don't know)
 It's conservative GC.  D's GC, along with the Hans Boehm GC and 
probably

 most GCs
 for close to the metal languages, can't perfectly identify what's a
 pointer and
 what's not.  Therefore, for sufficiently large allocations 
there's a high
 probability that some bit pattern that looks like a pointer but 
isn't

 one will
 keep the allocation alive long after there are no real 
references to

 it left.
 Aren't uint array allocations have hasPointers flag set off? I always
 thought they aren't scanned for pointers (unlike, say, void[]).

 Right, but they can still be the target of false pointers.  In this 
case, false
 pointers keep each instance of foo[] alive, leading to severe 
memory leaks.

But the issue is more of a GC implementation issue then a language
issue, correct?


Yes.


Or is this an issue of all lower level language garbage
collectors?


Kinda sorta.  It's possible, but not easy, to implement fully precise 
GC (except
for the extreme corner case of unions of reference and non-reference 
types) in a

close to the metal, statically compiled language.


Unions could be deprecated in favor of tagged unions (see an example in 
Cyclone http://cyclone.thelanguage.org/wiki/Tagged%20Unions). Would that 
help?


Probably not, since the bit pattern of int i could still match a valid 
pointer.


Foo.i = cast(int)Foo;   // for bad practice ugliness

or Foo.i = (some expression that happens to equal Foo)
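As a concrete illustration, here is a minimal sketch of a false pointer pinning memory under a conservative collector. (It uses today's core.memory API, which postdates this thread; the size and names are arbitrary.)

```d
import core.memory : GC;

void main() {
    void* p = GC.malloc(1024);
    // Keep only an integer copy of the address and drop the real pointer.
    size_t lookalike = cast(size_t) p;
    p = null;

    GC.collect();
    // A conservative collector scanning the stack sees `lookalike`,
    // whose bit pattern matches the block's address, and may keep the
    // 1024 bytes alive even though no actual pointer to them remains.
}
```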


Adding extra information to a union could also have the bad side effect 
of hurting performance, as each write would include an extra tag update, 
and additional memory would be required (which would cause another set of 
issues around alignment).




Re: Conspiracy Theory #1

2009-11-20 Thread Travis Boucher

Leandro Lucarella wrote:

dsimcha, el 20 de noviembre a las 16:24 me escribiste:

Right, but they can still be the target of false pointers.  In this case, false
pointers keep each instance of foo[] alive, leading to severe memory leaks.

But the issue is more of a GC implementation issue then a language
issue, correct?

Yes.


Or is this an issue of all lower level language garbage
collectors?

Kinda sorta.  It's possible, but not easy, to implement fully precise GC (except
for the extreme corner case of unions of reference and non-reference types) in a
close to the metal, statically compiled language.


I don't think so if you want to be able to link to C code, unless I'm
missing something...



The extern (C) stuff and malloc allocated memory isn't garbage collected.


Re: removal of cruft from D

2009-11-20 Thread Travis Boucher

Yigal Chripun wrote:
Based on recent discussions on the NG a few features were 
deprecated/removed from D, such as typedef and C style struct initializers.


IMO this cleanup and polish is important, and all successful languages do 
such cleanup for major releases (Python and Ruby come to mind). I'm glad 
to see that D follows in those footsteps instead of accumulating cruft 
like C++ does.



As part of this trend of cleaning up D before the release of D2, what 
other features/cruft should be removed/deprecated?


I suggest foreach_reverse and C-style function pointers

please add your candidates for removal.




Make version() statements case insensitive, although I guess that would 
be an addition and not a removal.  Either that, or add the common casings 
for all reserved version keywords (or at least some consistency: linux 
vs. FreeBSD).


Re: Short list with things to finish for D2

2009-11-19 Thread Travis Boucher

aarti_pl wrote:

Andrei Alexandrescu pisze:
We're entering the finale of D2 and I want to keep a short list of 
things that must be done and integrated in the release. It is clearly 
understood by all of us that there are many things that could and 
probably should be done.


1. Currently Walter and Don are diligently fixing the problems marked 
on the current manuscript.


2. User-defined operators must be revamped. Fortunately Don already 
put in an important piece of functionality (opDollar). What we're 
looking at is a two-pronged attack motivated by Don's proposal:


http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP7

The two prongs are:

* Encode operators by compile-time strings. For example, instead of 
the plethora of opAdd, opMul, ..., we'd have this:


T opBinary(string op)(T rhs) { ... }

The string is "+", "*", etc. We need to design what happens with 
read-modify-write operators like "+=" (should they be dispatched to a 
different function? etc.) and also what happens with index-and-modify 
operators like "[]=", "[]+=" etc. Should we go with proxies? Absorb 
them in opBinary? Define another dedicated method? etc.


* Loop fusion that generalizes array-wise operations. This idea of 
Walter is, I think, very good because it generalizes and democratizes 
magic. The idea is that, if you do


a = b + c;

and b + c does not make sense but b and c are ranges for which a.front 
= b.front + c.front does make sense, to automatically add the 
iteration paraphernalia.


3. It was mentioned in this group that if getopt() does not work in 
SafeD, then SafeD may as well pack and go home. I agree. We need to 
make it work. Three ideas discussed with Walter:


* Allow taking addresses of locals, but in that case switch allocation 
from stack to heap, just like with delegates. If we only do that in 
SafeD, behavior will be different than with regular D. In any case, 
it's an inefficient proposition, particularly for getopt() which 
actually does not need to escape the addresses - just fills them up.


* Allow @trusted (and maybe even @safe) functions to receive addresses 
of locals. Statically check that they never escape an address of a 
parameter. I think this is very interesting because it enlarges the 
common ground of D and SafeD.


* Figure out a way to reconcile ref with variadics. This is the 
actual reason why getopt chose to traffic in addresses, and fixing it 
is the logical choice and my personal favorite.


4. Allow private members inside a template using the eponymous trick:

template wyda(int x) {
   private enum geeba = x / 2;
   alias geeba wyda;
}

The names inside an eponymous template are only accessible to the 
current instantiation. For example, wyda!5 cannot access 
wyda!(4).geeba, only its own geeba. That way we elegantly avoid the 
issue "where is this symbol looked up?"


5. Chain exceptions instead of having a recurrent exception terminate 
the program. I'll dedicate a separate post to this.


6. There must be many things I forgot to mention, or that cause grief 
to many of us. Please add to/comment on this list.




Andrei


I kinda like this proposal. But I would rather call the templates like below:

T opInfix(string op)(T rhs) { ... }
T opPrefix(string op)(T rhs) { ... }
T opPostfix(string op)(T rhs) { ... }

and allow user to define her own operators (though it doesn't have to be 
done now).


I know that quite a few people here don't like the idea of allowing users 
to define their own operators, because it might obfuscate code. But it 
doesn't have to be like this. Someone here already mentioned that 
it is not a real problem in C++ programs. Good libraries don't abuse 
this functionality.


User defined operators would allow easy definition of Domain Specific 
Languages in D. I was already writing about it some time ago:


http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=81026 

http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=81352 



BR
Marcin Kuszczak
(aarti_pl)


Sweet, I've been waiting for a way to implement brainfuck using operators!

auto bf = new BrainFuck();
bf++[+-]++.+.+++..+++.++.+++..+++.--..+..;
writef(bf.toString()); // outputs "Hello World!\n"



Re: Conspiracy Theory #1

2009-11-19 Thread Travis Boucher

Andrei Alexandrescu wrote:


Today that reality is very visible already from certain spots. I've 
recently switched fields from machine learning/nlp research to 
web/industry. Although the fields are apparently very different, they 
have a lot in common, along with the simple adage that obsession with 
performance is a survival skill that (according to all trend 
extrapolations I could gather) is projected to become more, not less, 
important.



Andrei


Except in the web world, performance means networking and parallelism (cloud 
computing): much less code efficiency, much more programmer productivity 
(which are currently treated as mutually exclusive, but don't have to be).


Re: Conspiracy Theory #1

2009-11-19 Thread Travis Boucher

Andrei Alexandrescu wrote:

Travis Boucher wrote:

Andrei Alexandrescu wrote:


Today that reality is very visible already from certain spots. I've 
recently switched fields from machine learning/nlp research to 
web/industry. Although the fields are apparently very different, they 
have a lot in common, along with the simple adage that obsession with 
performance is a survival skill that (according to all trend 
extrapolations I could gather) is projected to become more, not less, 
important.



Andrei


Except in the web world performance is network and parallelism (cloud 
computing). Much less code efficiency, much more programmer 
productivity (which currently is mutually exclusive, but doesn't have 
to be)


You'd be extremely surprised. With Akamai delivery and enough CPUs, it 
really boils down to sheer code optimization. Studies have shown that 
artificially inserted delays on the order of tens/hundreds of 
milliseconds influence user behavior on the site dramatically.


Andrei


This is one thing that doesn't surprise me.  Even some large sites, when 
given a choice between a fast language with slower development (C/C++) 
and a slow language with fast development (Ruby, Perl, Python, PHP), 
almost always pick fast development.


Sure, there are a few people who work on making the lower level stuff 
faster (mostly network load optimization), but the majority of the 
optimization is making the code run on a cluster of machines.  Sites 
fall into two categories: those that need scalability and those that don't.


Those who need scalability, design frameworks that scale.  Need more 
speed?  Add more machines.


Those who don't need scalability, don't care what they write in or how 
slow their crap is (you don't know how often I've seen horrid SQL 
queries that cause full table scans).


The fast, highly optimized web code is a very niche market.


Re: Conspiracy Theory #1

2009-11-19 Thread Travis Boucher

Sean Kelly wrote:

retard Wrote:


Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:



It seems to me that MS expects C++ to go the way of FORTRAN and
COBOL.  Still there, still used, but by an increasingly small number of
people for a small (but important!) subset of things.  Note how MS still
hasn't produced a C99 compiler. They just don't see it as relevant to
enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages 
such as Python on the desktop and Web 2.0 (mostly javascript, flash, 
silverlight, php, python) is a strongly growing platform. I expect most 
of the every day apps to move to the cloud during the next 10 years. 
Unfortunately c++ and d missed the train here. People don't care about 
performance anymore. Even application development has moved from library 
writing to high level descriptions of end user apps that make use of high 
quality foss/commercial off-the-shelf components. Cloud computing, real-
time interactive communication, and fancy visual look are the key 
features these days.


Performance per watt is a huge issue for server farms, and until all this talk of low 
power, short pipeline, massively parallel computing is realized (ie. true cloud 
computing), systems languages will have a very definite place in this arena.  I 
know of large-scale Java projects that go to extreme lengths to avoid garbage collection 
cycles because they take upwards of 30 seconds to complete, even on top-of-the-line 
hardware.  Using a language like C remains a huge win in these situations.


This I agree with to a certain degree, though it really only applies to 
colocated systems.  In shared hosting situations, users are often too 
stupid to understand the effects of crap code, and shared hosting 
providers tend to overcommit machines.


Then come the virtualization providers, Amazon EC2 being a perfect 
example.  As long as income is greater than costs, EC2 users rarely get 
their code running as well as it could, even though they'd see the most 
direct cost savings from doing so.  With today's web languages, the cost 
to make something efficient and fast (and to maintain, debug, etc.) is 
higher than the cost to run slow, crappy code.


This is amplified by the loss of money in an emerging market where 
coming out even a month after your competitors could mean your death.


Languages like D (and even Java and Erlang to some degree) had the 
opportunity to change this trend 10-15 years ago, when scalable clusters 
were not a common thing.  However, with the direction the web has gone 
in the past 5-10 years, toward more 'web applications', the opportunity 
might come again.  We just need to 'derail' all of those Ruby kids and 
get some killer web application framework for D.


Personally, I hate the Interwebs, and I don't care if it collapses under 
its own bloated weight.  As long as I still have some way of accessing 
source code.



Even in this magical world of massively parallel computing there will be a 
place for systems languages.  After all, that's how interaction with hardware 
works, consistent performance for time-critical code is achieved, etc.  I think 
the real trend to consider is that projects are rarely written in just one 
language these days, and ease of integration between pieces is of paramount 
importance.  C/C++ still pretty much stinks in this respect.


Yes, the days of multi-CPU, multi-core, multi-thread hardware are here. 
I recently got a chance to do some work on a 32-hardware-thread Sun 
machine.  Very interesting design concepts.


This is where languages like Erlang have an advantage, and D is heading 
in the right direction (but is still quite far off).  D at least has the 
ability to adapt to these new architectures, whereas C/C++ will soon be 
dealing with contention hell (they already do in some respects).


The idea of a single machine with 100+ processing contexts (hardware 
threads) is not something in the distant future.  I know some of the Sun 
machines (the T5240, for example) can already do 128 hardware threads in 
a single machine.  Add in certain types of high-bandwidth transfers 
(RDMA over InfiniBand, for example) and the concepts behind things like 
Mosix and Erlang, and we'll have single processes with multiple threads 
running on multiple hardware threads, cores, CPUs and even machines.


Re: Conspiracy Theory #1

2009-11-19 Thread Travis Boucher

Sean Kelly wrote:

Travis Boucher Wrote:

The fast, highly optimized web code is a very niche market.


I'm not sure it will remain this way for long.  Look at social networking sites, where 
people spend a great deal of their time in what are essentially user-created apps.  Make 
them half as efficient and the cloud will need twice the resources to run 
them.



I hope it doesn't remain this way.  Personally I am sick of fixing 
broken PHP code, retarded ruby code, and bad SQL queries.  However, the 
issue isn't the language as much as it is the coders.


Easy powerful languages = stupid coders who do stupid things.

D is an easy, powerful language, but it has one aspect which may protect 
it against stupid coders: it's hard to do stupid things in D.  It's harder 
to create a memory leak in D than it is to prevent one in C.


Hell, I've seen Ruby do things which I thought at first were a memory 
leak, only to later realize it was just a poor GC implementation 
(this is Matz's Ruby, not JRuby or Rubinius).


I know stupid coders will always exist, but D promotes good practice 
without sacrificing performance.


Re: Conspiracy Theory #1

2009-11-19 Thread Travis Boucher

dsimcha wrote:

== Quote from Travis Boucher (boucher.tra...@gmail.com)'s article

Sean Kelly wrote:
 Its harder
to create a memory leak in D then it is to prevent one in C.


void doStuff() {
uint[] foo = new uint[100_000_000];
}

void main() {
while(true) {
doStuff();
}
}



Hmm, that seems like it should be an implementation bug.  Shouldn't 
foo be marked for GC once it leaves scope?  (I have never used new on a 
primitive type, so I don't know.)


Re: Conspiracy Theory #1

2009-11-19 Thread Travis Boucher

Michael Farnsworth wrote:


I love it when I hear "people don't care about performance anymore", 
because in my experience that couldn't be further from the truth.  It 
sorta reminds me of the "Apple is dying" argument that crops up every so 
often.  There will probably always be a market for Apple, and there will 
always be a market for performance.


Mmmperformance...

-Mike


It's not that people don't care about performance; companies care more 
about rapid development and short time to market.  They work like 
insurance companies: if the cost of development (i.e. coder man-hours) 
is less than (cost of runtime) * (code lifetime), then fewer coder 
man-hours wins.  It's like the cliche that hardware is cheaper than 
coders.  Also, slow, sloppy, broken code means revisions and updates, 
which in some cases are another avenue of revenue.


Now in the case of movie development, the cost of coding an efficient 
rendering system is less than the cost of a larger rendering farm and/or 
the money lost if the movie is released at the wrong time.


Focusing purely on performance is niche, as is focusing purely on syntax 
of a language.  What matters to the success of a language is how money 
can be made off of it.


Do you think PHP would have been so successful if it wasn't such an easy 
language which was relatively fast (compared to old CGI scripts), being 
released at a time when the web was really starting to take off?


Right now, from my perspective at least, D has the performance and the 
syntax; it's just the deployment that is sloppy.  GDC has a fairly old 
DMD front end, and the official DMD may or may not work as expected (I'm 
talking about the compiler/runtime/standard library integration on this 
point).


The battle between compiler/runtime/library is something that I think is 
very much needed (the one part of capitalism I actually agree with), but 
I think it is definitely something that is blocking D from wider 
acceptance.




Re: version() abuse! Note of library writers.

2009-11-18 Thread Travis Boucher

Anders F Björklund wrote:

Travis Boucher wrote:
The use of version(...) in D has the potential for some very elegant 
portable code design.  However, from most of the libraries I have 
seen, it is abused and misused turning it into a portability nightmare.


It has done this for years, so it's already turned that way.
Usually it's version(Win32) /*Windows*/; else /*linux*/;...


I'm fairly new to D, and one thing I really love about it is the removal 
of the preprocessor in favor of explicit conditional compilation 
(version, debug, unittest, static if, CTFE, etc.).  Nothing was worse 
than trying to decode a massive #ifdef tree supporting different 
features from different OSes.


I don't expect things to change right now, but I think that there should 
be some standard version() statements that are not only implementation 
defined.  I'd also like people to start thinking about the OS 
hierarchies with version statements.


Windows
  Win32
  Win64
  WinCE  (as an example...)
Posix (or Unix, I don't care which one)
  BSD
FreeBSD
OpenBSD
NetBSD
Darwin
  Linux
  Solaris

The problem with version(Win32) /*Windows*/; else /*linux*/; is fairly 
subtle, but I have run into it a lot with bindings to C libraries that 
use the dlopen() family and try to link against libdl.


Anything that accesses standard libc functions, standard unix 
semantics (eg. signals, shm, etc) should use version(Posix) or 
version(unix).


Nice rant, but it's version(Unix) in GCC and we're probably
stuck with the horrible version(linux) and version(OSX) forever.


On my install (FreeBSD) version(Unix) and version(Posix) are both defined.

Build systems and scripts that are designed to run on unix machines 
should not assume the locations of libraries and binaries, and refer 
to posix standards for their locations.  For example, bash in 
/bin/bash or the assumption of the existence of bash at all.  If you 
need a shell script, try writing it with plain bourne syntax without 
all of the bash extensions to the shell, and use /bin/sh.  Also avoid 
using the GNU extensions to standard tools (sed and awk for example).  
If you really want to do something fancy, do it in D and use the 
appropriate {g,l}dmd -run command.


I rewrote my shell scripts in C++ for wxD, to work on Windows.
Tried to use D (mostly for DSSS), but it wasn't working right.


Yeah, I can understand that in some cases using D itself could be a major 
bootstrapping hassle.  This issue isn't D specific, and exists in a lot 
of packages.  I've even gotten to the point of expecting most third-party 
packages not to work with FreeBSD's make, and I always make sure GNU make 
is available.



A few things to keep in mind about linux systems vs. pretty much all
other unix systems:


Nice list, you should put it on a web page somewhere (Wiki4D ?)
Usually one also ends up using runtime checks or even autoconf.


I haven't registered on Wiki4D yet; I might soon, once I take the time to 
clean up this ranty post into something a little more useful.




PS. Some people even think that /usr/bin/python exists. :-)
Guess they were confusing it with standard /usr/bin/perl


I won't even go into my feelings about Python.  Sadly, Perl is slowly 
going extinct.  It would be nice for people to remember that Perl 
started as a replacement for sed & awk, and it still works well for 
that purpose.  At least people don't assume Ruby exists.


The bad thing is when a build system breaks because of something 
non-critical failing.  A good example of this is the gtkd demoselect.sh 
script.  It used to assume /bin/bash, which would trigger a full build 
failure.  Since it was changed to /bin/sh, it doesn't work correctly on 
FreeBSD (due to, I think, some GNU extensions used in sed), but it doesn't 
cause a build failure.  It just means the default demos are built.


Re: version() abuse! Note of library writers.

2009-11-18 Thread Travis Boucher
Another note: something I see in Tango, and I don't know why I didn't 
think of it before.


If you want to require bash, use:

#!/usr/bin/env bash

instead of

#!/bin/bash
#!/usr/bin/bash


Re: Short list with things to finish for D2

2009-11-18 Thread Travis Boucher

bearophile wrote:

Andrei Alexandrescu:

* Encode operators by compile-time strings. For example, instead of the 
plethora of opAdd, opMul, ..., we'd have this:


T opBinary(string op)(T rhs) { ... }

The string is "+", "*", etc.


Can you show an example of defining an operator, like a minus, with that?



T opBinary(string op)(T rhs) {
    static if (op == "-") return data - rhs.data;
    static if (op == "+") return data + rhs.data;

    // ... maybe this would work too ...
    mixin("return data " ~ op ~ " rhs.data;");
}

I love this syntax over the tons of different operation functions. 
It makes things so much nicer, especially when supporting a bunch of 
different parameter types (vectors are a good example of this).


T opBinary(string op)(T rhs)
T opBinary(string op)(float[3] rhs)
T opBinary(string op)(float rx, float ry, float rz)
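To make this concrete, here is a minimal sketch of such a vector type using the opBinary interface (this is the form the proposal eventually took in D2; the Vec3 type and its members are made up for illustration):

```d
struct Vec3 {
    float x = 0, y = 0, z = 0;

    // One template handles +, -, *, / against another Vec3;
    // the mixin splices the operator token into ordinary code.
    Vec3 opBinary(string op)(Vec3 rhs)
        if (op == "+" || op == "-" || op == "*" || op == "/")
    {
        mixin("return Vec3(x " ~ op ~ " rhs.x, y " ~ op ~ " rhs.y, z " ~ op ~ " rhs.z);");
    }

    // A second overload covers scalars, with no extra op* names needed.
    Vec3 opBinary(string op)(float s)
        if (op == "*" || op == "/")
    {
        mixin("return Vec3(x " ~ op ~ " s, y " ~ op ~ " s, z " ~ op ~ " s);");
    }
}

void main() {
    auto a = Vec3(1, 2, 3);
    auto b = Vec3(4, 5, 6);
    auto c = a + b;      // lowered by the compiler to a.opBinary!"+"(b)
    assert(c.x == 5 && c.y == 7 && c.z == 9);
    auto d = c * 2.0f;
    assert(d.x == 10 && d.z == 18);
}
```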



In my set data structure I'd like to define <= among two sets as "is subset". Can 
that design allow me to overload just <= and >=? (opCmp is not enough here).

Bye,
bearophile


Re: Short list with things to finish for D2

2009-11-18 Thread Travis Boucher

grauzone wrote:


If I had a better proposal, I'd post it. I'm just saying that it's a 
bad hack which, _although_ it solves the problem, will have negative side 
effects for other reasons.


Does the current proposal make things simpler at all? All you're doing 
is to enable the programmer to fix the clumsy semantics by throwing 
lots of CTFE onto the problem. Why not generate the operator functions 
with CTFE in the first place...


From my point of view (trying different ways of implementing something 
as simple as a vector), this makes things much simpler without 
sacrificing functionality.


Personally, I'd love to see an unknownMethod(string method)(...) thing 
implemented as well, but that might be asking for too much (and might 
sacrifice performance in some cases).


Re: Short list with things to finish for D2

2009-11-18 Thread Travis Boucher

Rainer Deyke wrote:

Andrei Alexandrescu wrote:

I am thinking that representing operators by their exact token
representation is a principled approach because it allows for
unambiguous mapping, testing with if and static if, and also allows
saving source code by using only one string mixin. It would take more
than just a statement that it's hackish to convince me it's hackish. I
currently don't see the hackishness of the approach, and I consider it a
vast improvement over the current state of affairs.


Isn't opBinary just a reduced-functionality version of opUnknownMethod
(or whatever that is/was going to be called)?

T opBinary(string op)(T rhs) {
    static if (op == "+") return data + rhs.data;
    else static if (op == "-") return data - rhs.data;
    ...
    else static assert(0, "Operator " ~ op ~ " not implemented");
}

T opUnknownMethod(string op)(T rhs) {
    static if (op == "opAdd") return data + rhs.data;
    else static if (op == "opSub") return data - rhs.data;
    ...
    else static assert(0, "Method " ~ op ~ " not implemented");
}

I'd much rather have opUnknownMethod than opBinary.  If I have
opUnknownMethod, then opBinary becomes redundant.




Passing op as the symbol allows for mixin("this.data " ~ op ~ " that.data;");

What I was hoping for was a catch-all for unknown non-operator methods, 
which could allow for dispatching to functions that are not even known 
at compile time (i.e. trigger a lookup in a shared object, or pass along 
to a scripting language engine).




Re: String Mixins

2009-11-17 Thread Travis Boucher

Bill Baxter wrote:

On Mon, Nov 16, 2009 at 3:42 PM, Travis Boucher
boucher.tra...@gmail.com wrote:

I've been playing with string mixins, and they are very powerful.

One thing I can't figure out is what exactly can and cannot be evaluated at
compile time.

For example:


char[] myFunc1() {
    return "int a = 1;";
}

char[] myFunc2() {
    char[] myFunc3() {
        return "int b = 2;";
    }
    return myFunc3();
}

void main() {
   mixin(myFunc1());
   mixin(myFunc2());
}


myFunc1() can be used as a string mixin.
myFunc2() can't be.

I'm sure there are other things that I'll run into, but I figure there is
some simple set of rules of what can and can't be used as a string mixin.


Unfortunately there aren't any easy rules to go by.  If it doesn't
work in CTFE, and a bug hasn't already been filed, then you could file a
bug, especially if you find the problem blocking your progress.
However, at this point there are plenty of things that don't work that
are known and being targeted by Don already.  So a flood of "this and
that don't work in CTFE" bug reports may not be so useful just yet.

Anyway, just be thankful that it now at least tells you what can't be
evaluated.  That's a vast improvement over the old days when the
compiler would just give a generic error message about CTFE and leave
you guessing about which line it didn't like!

--bb


Don responded in D.learn.  The examples above should work on recent 
(>= 1.047) DMD; I happen to be using gdc with 1.020.  Right now I am just 
seeing how far I can push it and how weird I can make code that works.


version() abuse! Note of library writers.

2009-11-17 Thread Travis Boucher
The use of version(...) in D has the potential for some very elegant 
portable code design.  However, in most of the libraries I have seen, 
it is abused and misused, turning it into a portability nightmare.


http://dsource.org/projects/dmd/browser/trunk/src/mars.c#L313 defines 
the following versions: Windows, Posix, linux, OSX, darwin, FreeBSD and 
Solaris.


http://dgcc.svn.sourceforge.net/viewvc/dgcc/trunk/d/target-ver-syms.sh?view=markup 
defines aix, unix, cygwin, darwin, freebsd, Win32, skyos, solaris, 
freebsd (and others).


The problem I run into is the assumption that linux == unix/posix.  This 
assumption is not correct.


The use of version(linux) should be limited to code that:

1. Declares externals from sys/*.h
2. Accesses /proc or /sys (or other Linux specific pseudo filesystems)
3. Things that interface directly with the dynamic linker (eg. linking 
against libdl)

4. Other things that are linux specific

Anything that accesses standard libc functions or standard unix semantics 
(eg. signals, shm, etc) should use version(Posix) or version(unix).
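A minimal sketch of that split (illustrative only; the extern 
declaration is an assumption, check your platform's headers for the 
real ones):

```d
// Portable code goes behind version(Posix); only genuinely
// Linux-specific bits go behind version(linux).
version (linux) {
    pragma(lib, "dl");  // dlopen() and family live in a separate libdl on Linux
}

version (Posix) {
    extern (C) int raise(int sig);  // plain libc, portable across unixes

    void sendSelfSignal(int sig) {
        raise(sig);
    }
}
```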


Build systems and scripts that are designed to run on unix machines 
should not assume the locations of libraries and binaries, and refer to 
posix standards for their locations.  For example, bash in /bin/bash or 
the assumption of the existence of bash at all.  If you need a shell 
script, try writing it with plain bourne syntax without all of the bash 
extensions to the shell, and use /bin/sh.  Also avoid using the GNU 
extensions to standard tools (sed and awk for example).  If you really 
want to do something fancy, do it in D and use the appropriate {g,l}dmd 
-run command.


A few things to keep in mind about linux systems vs. pretty much all 
other unix systems:


Linux is a kernel.  The userspace stuff you use (your shell, your libc, 
etc) is (typically) GNU, and not Linux.  On other unix systems, the 
kernel is tightly linked to the libc and other standard libraries, and 
the other base applications (shell, login, sed, awk, etc) are often part 
of the base system and not GNU (this is not always true, as some systems 
use GNU tools as part of the base system as well).


If you are writing a wrapper around something, and it is a library in 
Linux, it most likely is also a library in other unix machines.  This 
includes opengl, image libraries, X11 libraries, sound & media 
libraries, etc.  If you require an external library, please state as 
much in the documentation and don't hide it in a version(linux) 
statement, because that abuses the reason for version(...) to exist 
in the first place.


Other tips:
 - Don't use the /proc filesystem unless you really must.  If you do, 
abstract the interface and implement a Linux specific interface.  This 
will ease porting (I'd be happy to come in and do FreeBSD stuff where 
possible).
 - If you are unsure, check http://www.freebsd.org/cgi/man.cgi This 
interface can look up the manual pages for multiple OSes, and the 
FreeBSD versions of the manuals are very consistent.  Some of the other 
ones will also give hints on bugs and subtle differences for different 
implementations.
 - If you want to make something work under FreeBSD or other OSes, post 
on the NG (or for FreeBSD, bug me directly).
 - Linux programs need to be linked against libdl for access to 
dlopen() and family; most unix OSes have access to dlopen() from libc (I 
think this is partially due to the tight libc/kernel coupling in most 
unix OSes).
 - Darwin/OSX & FreeBSD share a lot of similar kernel interfaces, 
specifically most of the process model, network stack and virtual 
filesystem layers.  OSX even has kqueue! (although slightly different 
than FreeBSD's).
 - FreeBSD & Solaris share some common ancestry.  Although the 
similarities are not very important to application developers, internal 
kernel interfaces and designs are similar.
 - The GNU tools typically conform to standards, but add extra 
extensions everywhere.  If you write your scripts for a standard bourne 
shell rather than bash, bash will still be able to run them most of the 
time.  Personally, the GNU extensions to everything feel like a move 
Microsoft would do, just breaking enough compatibility to create vendor 
lock-in.


Thanks,
Travis


String Mixins & Compile Time Evaluation

2009-11-17 Thread Travis Boucher

I've been playing with string mixins, and they are very powerful.

One thing I can't figure out is what exactly can and cannot be evaluated 
at compile time.


For example:


char[] myFunc1() {
    return "int a = 1;";
}

char[] myFunc2() {
    char[] myFunc3() {
        return "int b = 2;";
    }
    return myFunc3();
}

void main() {
    mixin(myFunc1());
    mixin(myFunc2());
}


myFunc1() can be used as a string mixin.
myFunc2() can't be.

Another (slightly more complex) example is using an ExpressionTuple.


template DataGenerator(T, M...) {
    char[] data() {
        char[] rv;
        foreach (m; M) rv ~= T.stringof ~ " " ~ m ~ ";";
        return rv;
    }
}

alias DataGenerator!(int, "r", "g", "b") ColorRGBgen;

writefln(ColorRGBgen.data()); // int r; int g; int b;

struct Color {
    mixin(ColorRGBgen.data()); // Can't evaluate at compile time
}



I'm sure there are other things that I'll run into, but I figure there 
is some simple set of rules of what can and can't be used as a string 
mixin and other compile time evaluations.


Re: String Mixins & Compile Time Evaluation

2009-11-17 Thread Travis Boucher

Don wrote:

Travis Boucher wrote:

I've been playing with string mixins, and they are very powerful.

One thing I can't figure out is what exactly can and cannot be 
evaluated at compile time.


For example:


char[] myFunc1() {
    return "int a = 1;";
}

char[] myFunc2() {
    char[] myFunc3() {
        return "int b = 2;";
    }
    return myFunc3();
}

void main() {
    mixin(myFunc1());
    mixin(myFunc2());
}


myFunc1() can be used as a string mixin.
myFunc2() can't be.


I think you're using an old version of DMD. It's been working since 
DMD1.047. Please upgrade to the latest version, you'll find it a lot 
less frustrating.

The bottom of function.html in the spec gives the rules.
It says nested functions aren't supported, but they are.


Yeah, I am running 1.020 with gdc (the FreeBSD default gdc package).  I 
found a few workarounds, just trying to see what can be done.


For now I'll take it as a work in progress and once I start doing 
anything real with it I'll upgrade to the latest version of dmd.


Thanks,
Travis Boucher


String Mixins

2009-11-16 Thread Travis Boucher

I've been playing with string mixins, and they are very powerful.

One thing I can't figure out is what exactly can and cannot be evaluated 
at compile time.


For example:


char[] myFunc1() {
    return "int a = 1;";
}

char[] myFunc2() {
    char[] myFunc3() {
        return "int b = 2;";
    }
    return myFunc3();
}

void main() {
    mixin(myFunc1());
    mixin(myFunc2());
}


myFunc1() can be used as a string mixin.
myFunc2() can't be.

I'm sure there are other things that I'll run into, but I figure there 
is some simple set of rules of what can and can't be used as a string mixin.


Re: thank's ddmd !

2009-11-09 Thread Travis Boucher

zsxxsz wrote:

== Quote from dolive (doliv...@sina.com)'s article

thank's ddmd ! it's too great !
http://www.dsource.org/projects/ddmd
dolive


Great work! But it doesn't support Linux yet :(


Once you get it running under Linux, I'll be more than happy to make 
sure all of the FreeBSD issues get some attention.


Re: Request for comment _ survey of the 'D programming language ' community.

2009-11-08 Thread Travis Boucher

Nick B wrote:
What is the definition that this community is succeeding / making 
progress ?


I would like to propose there is _only_ one.  That the community is 
growing from year to year.


From ten years ago when Walter started the D project it certainly has 
grown, but compared to one year ago, has it grown or shrunk?  Without 
any hard data there is no way to know.


I propose a brief email survey to gauge the size of the community, 
repeated at yearly intervals.


Proposed questions:

Name:
Alternative name (handle:)
Number of years using D:
Framework: Phobos; Tango; Both; None; Other


Comments ?


regards
Nick B


An Email Survey is the quickest way to turn me into a silent NG prowler 
with a new email address.  If you want to do a survey, post it in the NG 
and have people optionally go to a site to fill it out.  Even then, it 
won't give accurate results.


I'd suggest that a better metric of community growth is community 
involvement in open source projects and discussion (eg. dsource stats 
and NG stats).





Re: Introducing Myself

2009-11-05 Thread Travis Boucher

Saaa wrote:

Travis Boucher wrote

I guess I should introduce myself.

Hi, I'm Travis, and I am a code-a-holic and general purpose unix geek.

Hi

In comes D.


I love learning new things, and D is the most exciting thing I have gotten 
into the past 5 years.  I hope to become part of the community in some way 
or another.



You say hello after 5 years :)




No, I only recently got into D.  What I meant is that in the past 5 years 
(or so), nothing has really excited me this much.


Re: Arrays passed by almost reference?

2009-11-05 Thread Travis Boucher

dsimcha wrote:

== Quote from Ali Cehreli (acehr...@yahoo.com)'s article

I haven't started reading Andrei's chapter on arrays yet. I hope I won't find

out that the following behavior is expected. :)

import std.cstream;
void modify(int[] a)
{
    a[0] = 1;
    a ~= 2;
    dout.writefln("During: ", a);
}
void main()
{
    int[] a = [ 0 ];
    dout.writefln("Before: ", a);
    modify(a);
    dout.writefln("After : ", a);
}
The output with dmd 2.035 is
Before: [0]
During: [1,2]
After : [1]
I don't understand arrays. :D
Ali


This is one of those areas where the low-level details of how arrays are
implemented leak out.  This is unfortunate, but in a close-to-the-metal
language it's sometimes a necessary evil.

(Dynamic) Arrays are structs that consist of a pointer to the first element and 
a
length.  Essentially, the memory being pointed to by the array is passed by
reference, but the pointer to the memory and the length of the array are passed 
by
value.  While this may seem ridiculous at first, it's a tradeoff that allows for
the extremely convenient slicing syntax we have to be implemented efficiently.

When you do the a[0] = 1, what you're really doing is:

*(a.ptr) = 1;

When you do the a ~= 2, what you're really doing is:

// Make sure the block of memory pointed to by a.ptr
// has enough capacity to be appended to.
a.length += 1;
*(a.ptr + 1) = 2;

Realistically, the only way to understand D arrays and use them effectively is 
to
understand the basics of how they work under the hood.  If you try to memorize a
bunch of abstract rules, it will seem absurdly confusing.


main.a starts as:
struct {
  int length = 1;
  int *data = 0x12345; // some address pointing to [ 0 ]
}

inside of modify, a is:
struct { // different than main.a
   int length = 2;
   int *data = 0x12345; // same as main.a data [ 1, 2]
}

back in main:
struct { // same as original main.a
  int length = 1;
  int *data = 0x12345; // address unchanged, but the data has changed to [ 1 ]
}


To get the expected results, pass a as a reference:

void modify(ref int[] a);
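For completeness, a sketch of the fixed example (the same code as 
above with ref added; a quick sketch, not a definitive fix for every 
aliasing case):

```d
import std.cstream;

// With ref, the length/pointer pair itself is shared, so the
// append inside modify() is visible to the caller as well.
void modify(ref int[] a)
{
    a[0] = 1;
    a ~= 2;
}

void main()
{
    int[] a = [ 0 ];
    modify(a);
    dout.writefln("After: ", a);  // both the write and the append survive
}
```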


Re: Operator overloading and loop fusion

2009-11-05 Thread Travis Boucher

div0 wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Andrei Alexandrescu wrote:

I wanted to discuss operator overloading a little bit. A good starting
point is Don's proposal

http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP7



Just read it and I hate it.

Nobody is smart enough to think of all the possible uses for operator
overloading and make a decision as to whether they are valid or worthwhile.

D's fucktarded ('scuse me, but it really sucks) operator overloading is
the *one* and only respect where C++ still rules D.

Just because you (for certain values of you) deal with numbers doesn't
make the relationship between <, <=, etc. cast in stone, and semi quoting
an early paper by 'Bjarne Stroustrup' doesn't give any credence to the
argument; he was wrong then and so is Don now.

(and btw, does Bjarne still feel operator abuse is a bad idea?)

boost::spirit and boost::Xpressive are 2 for instances, which make
incredibly good use of operator abuse to achieve seriously useful
functionality.

I think the main problem is calling these functions 'operators'.

Rather step back and call them 'infix function notation'.
Stop imagining the objects they operate on as numbers and then you can
do some really funky things.

D needs more operators, not fewer, and operators defined globally,
so I can finish porting boost::spirit properly.

- --
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFK82sMT9LetA9XoXwRAjXxAJ40jHoQqal1l6/vpV3lNlEJwT+AKgCgtOWG
QWQpTFfEH80eqxm5TpvU0xU=
=/62J
-END PGP SIGNATURE-


One of D's strongest points is its discouragement of 'funky things'. 
Operators like compare are designed for things like sorting, and doing 
funky things with them (for example, modifying the internal state of an 
object) should be something that is discouraged.


Keep operators logical, and use other language features such as 
delegates if you want to do 'funky things'.




Re: Arrays passed by almost reference?

2009-11-05 Thread Travis Boucher

Leandro Lucarella wrote:

Andrei Alexandrescu, el  5 de noviembre a las 16:10 me escribiste:

Ali Cehreli wrote:

Thanks for all the responses.

And yes, I know that 'ref' is what works for me here. I am trying to figure out whether I 
should develop a guideline like "always pass arrays with 'ref', or you may face 
surprises".

I understand it very well now and was able to figure out a way to cause some 
bugs. :)

What can be said about the output of the following program? Will main.a[0] be 
printed as 1 or 111?

import std.cstream;

void modify(int[] a)
{
   a[0] = 1;

   // ... more operations ...

   a[0] = 111;
}

void main()
{
   int[] a;
   a ~= 0;
   modify(a);

   dout.writefln(a[0]);
}

It depends on the operations in between the two assignments to a[0] in 'modify':

- if we leave the comment in place, main.a[0] is 111

- if we replace the comment with this code

   foreach (i; 0 .. 10) {
   a ~= 2;
   }

then main.a[0] is 1. In a sense, modify.a caused only some side effects in 
main.a. If we shorten the foreach, then main.a[0] is again 111. To me, this is at an 
unmanageable level. Unless we always pass with 'ref'.

I don't think that this is easy to explain to a learner; and I think that is a 
good indicator that there is a problem with these semantics.

The ball is in your court to define better semantics.


Just make arrays a reference type, like classes!



You mean dynamic arrays, but what about static arrays?  Sometimes it 
makes more sense to send a static array as a value rather than a 
reference (think in the case of small vectors).


Then we'd have 2 semantics for arrays, one for static arrays and one for 
dynamic arrays.


I am not fully against pass-by-ref arrays, I just think passing by 
reference all of the time could have some performance implications.


Re: Arrays passed by almost reference?

2009-11-05 Thread Travis Boucher

I am not fully against pass-by-ref arrays, I just think passing by
reference all of the time could have some performance implications.


OK, make 2 different types then: slices (value types, can't append, they
are only a view on other's data) and dynamic arrays (reference type, can
append, but a little slower to manipulate).

It's a shame this idea didn't come true after all...



I just wonder if that would be confusing.

Static arrays of 2 different sizes are 2 different types.

Another example of how it is already confusing:

--
int[2] a = [1, 2];
int[] b = [11, 22, 33];

b = a;
a[0] = 111;

/*
 Now both a and b == [111, 2], instead of the intuitive a == [111, 2] and 
b == [1, 2].  They point at the same data.

*/
b.length = b.length + 1; // now at different data.

a[1] = 222;
/* a == [111, 222], b == [111,2,0] as expected */
--

Something that is nice about dynamic arrays is how they can intermix 
with static arrays (e.g. assigning an int[2] to an int[]) in an efficient 
(and lazy copying) manner.  It makes functions like this fast and efficient:


int addThemAll(int[] data) {
    int rv = 0;
    foreach (i, v; data) rv += v;
    return rv;
}

Since an implicit cast from a static array to a dynamic array is cheap, 
and slicing an array to a dynamic array is cheap (as long as you are 
only reading from the array).


I don't see how separating them to have different call semantics solves 
the problem.  However making a clearer definition of each (in 
documentation for example) might be helpful.
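A small sketch of the distinction as it stands today (D1-style), 
showing when storage is shared and when it is copied:

```d
void main()
{
    int[2] a = [ 1, 2 ];
    int[] b = a;      // slice: b is a view of a's storage
    int[] c = a.dup;  // dup: c owns an independent copy

    a[0] = 111;
    assert(b[0] == 111);  // shared storage, the change is visible
    assert(c[0] == 1);    // copied storage, unaffected

    b ~= 3;               // appending to a slice of a static array reallocates
    a[1] = 222;
    assert(b[1] == 2);    // b no longer tracks a after the reallocation
}
```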


Me, being new to D, I am glad this thread exists, because I can see how I 
could have shot myself in the foot in the future without playing around 
and learning the difference.


Re: (Phobos - SocketStream) Am I doing something wrong or is this a

2009-11-04 Thread Travis Boucher

Zane wrote:

Jesse Phillips Wrote:


On Tue, 03 Nov 2009 20:05:17 -0500, Zane wrote:


If I am to receive
these in arbitrarily sized chunks for concatenation, I don't see a
sensible way of constructing a loop.  Example?

Zane
You can use the number of bytes read to determine the offset, and use array 
slicing to concatenate into the final array.


Thanks Jesse,

Can you (or someone) confirm that this program should work.  I added a loop with array 
slicing, but it does not seem to work for me.  The final output of num is 
17593, and the file of that size is created, but it is not a valid gif image.  The code 
is below (note that this is assuming google still has their 'big-bird' logo up :-P)

import std.stream;
import std.stdio;
import std.socket;
import std.socketstream;

import std.c.time;

int main()
{
char[] line;
ubyte[] data = new ubyte[17593];
uint num = 0;

TcpSocket socket = new TcpSocket(new InternetAddress("www.google.com", 
80));

socket.send("GET /logos/bigbird-hp.gif HTTP/1.0\r\n\r\n");

SocketStream socketStream = new SocketStream(socket);

while(!socketStream.eof)
{
line = socketStream.readLine();

if (line == "")
break;

writef("%s\n", line);
}

num = socketStream.readBlock(data.ptr, 17593);
writef("\n\nNum: %d\n", num);

while(num < 17593)
{
num += socketStream.readBlock(data[(num-1)..length].ptr, 
data.length-num);
writef("\n\nNum: %d\n", num);
}

socketStream.close;
socket.close;

File file = new File("logo.gif", FileMode.Out);
file.write(data);
file.close;

return 0;
}

Thanks for everyone's help so far!


There are a few issues with your implementation.

First, parse the headers properly.  Below see my trivial implementation. 
 You want to parse them properly so you can find the correct 
end-of-headers, and check the size of the content from the headers.


readLine() looks to be designed for a text based protocol.  The biggest 
issue is with the end-of-line detection.  '\r', '\n' and '\r\n' are all 
valid end-of-line combinations, and it doesn't seem to do the detection 
in a greedy manner.  This leaves us with a trailing '\n' at the end of 
the headers.


The implementation of readBlock() doesn't seem to really wait to fill 
the buffer.  It fills the buffer if it can, which is pretty standard 
for a read on a socket.  So wrap it in a loop and read chunks.  You want 
to do it this way anyway for many reasons.  The implementation below 
double-buffers, which does result in an extra copy.  Logically 
this seems like a pointless copy, but in a real application it is very 
useful for many reasons.


Below is a working version (but still has its own issues).

#!/usr/bin/gdmd -run

import std.stream;
import std.stdio;
import std.socket;
import std.socketstream;

import std.string;  // for header parsing
import std.conv;// for toInt

import std.c.time;

int main()
{
char[] line;
ubyte[] data;
uint num = 0;

TcpSocket socket = new TcpSocket(new InternetAddress("www.google.com", 
80));


socket.send("GET /logos/bigbird-hp.gif HTTP/1.0\r\n\r\n");

SocketStream socketStream = new SocketStream(socket);

string[] response;  // Holds the lines in the response
while(!socketStream.eof)
{
line = socketStream.readLine();

if (line == "")
break;

// Append this line to array of response lines
response ~= line;
}

// Due to how readLine() works, we might end up with a
// trailing '\n', so get rid of it if we do.
ubyte ncr;
socketStream.read(ncr);
if (ncr != '\n')
    data ~= ncr;


// D's builtin associative arrays (safe  easy hashtables!)
string[char[]] headers; 

// Parse the HTTP response.  NOTE: This is a REALLY bad HTTP
// parser. a real parser would handle header parsing properly.
// See RFC2616 for proper rules.
foreach (v; response) {
    // There is likely a better way to do this than
    // a join(split())
    string[] kv_pair = split(v, ": ");
    headers[tolower(kv_pair[0])] = join(kv_pair[1 .. $], ":");
}

foreach (k, v; headers)
    writefln("[%s] [%s]", k, v);

uint size;
if (isNumeric(headers["content-length"])) {
    size = toInt(headers["content-length"]);
} else {
    writefln("Unable to parse content length of '%s' to a number.",
        headers["content-length"]);
    return 0;
}
// This fully buffers the data; if you are fetching large files you
// should process them in chunks

Template Base Classes, Refering to typeof(this)

2009-11-04 Thread Travis Boucher
I am writing a generic vector base class.  The class implements all of 
the operator overloads so I don't have to implement them over and over 
and over for each type of vector class.


class VectorBase(size_t S, T) {
T[S] data;

...
}

class Vector3f : VectorBase!(3, float) { ... }

The problem I am having is implementing operations that can take a 
matching vector.  I can't figure out the proper way of declaring the 
type of input.


eg.

void opAssign(VectorBase!(S, T) r);
 function VectorBase!(3LU,float).VectorBase.opAssign identity 
assignment operator overload is illegal



void opAssign(this r);
 basic type expected, not this


The only way I can think of handling it is to add another parameter to 
the template declaration, eg:


class VectorBase(size_t S, T, N) { ... }
class Vector3f : VectorBase!(3, float, Vector3f) { ... }

But I would like to avoid that if possible.

Any hints on how to implement this so I can keep my original 
declaration? class VectorBase(size_t S, T)


Introducing Myself

2009-11-04 Thread Travis Boucher

I guess I should introduce myself.

Hi, I'm Travis, and I am a code-a-holic and general purpose unix geek.

I heard about D a long time ago, but never took a good look at it.  A 
few weeks ago a friend of mine suggested I look at D when I was brushing 
up on some more advanced uses of C++ (I was mostly brushing up on STL 
and template usage in general).


I love studying different programming languages, semantics, syntax and 
implementation.  I also love some of the different paradigms, and seeing 
how they work.


Now I am not some coding expert, I wouldn't even call myself a good 
programmer.  I can get stuff done when I need to, but it's usually messy, 
ugly, "works for me", "code is meant to be run, not read" (i.e. Perl) 
sort of crap.


The one thing that frustrates me about the direction of programming in 
general is how high level and bloated it is getting, and how a lot of 
programmers I have come across are fine with that.  Abstraction upon 
abstraction upon abstraction, turning something as simple as 1 + 1 into 
an operation that goes through layers upon layers of code until the 
machine finally says "2", then back up the abstraction chain until you 
get a value that may or may not be 2.  Turning a 1 tick operation of 2-4 
bytes into a 100+ tick operation of 100+ bytes.  (OK, I may be 
exaggerating a bit on the numbers, but you get the point.)


Don't get me wrong, I love a language that allows me to make 1 + 1 = 3 
if I want it to, but I don't think it should require massive amounts of 
memory or CPU time to do it.


In comes D.

D lets me code like I am coding in a scripting language, but executes 
like I am coding in C/C++.  It has taken the best parts of all languages 
and put them into one pretty package.  Ok, the implementations are still 
less mature than I'd like, but they are getting better.  The language 
lets me ignore issues I don't care about (like memory management), and 
moves out of the way on issues I do care about (like memory management).


I could go on forever on what I love about D, conditional compiling, 
delegates, templates (especially the syntax), but most people on this 
newsgroup probably feel the same.


Anyway, since I don't have that many geek friends capable of 
understanding the merits of D, or sharing my excitement of new features 
I learn to use, I turn to this newsgroup.


A little about me (thats what an introduction is for anyway, isn't it?)

I have mostly worked on systems administration tasks.  Programming is 
more of a hobby that has applied uses in systems administration.


The past few years I've focused mostly on large scale web clustering, 
both high transaction and high throughput.


Recently I started teaching myself a bit about the 3d world world (no, 
that double world is not a typo).  Learned Blender (and Python by 
association).  Been poking around 3D engines for a few years including 
Ogre and Irrlicht.


Have done a bit with embedded stuff, including microcontrollers (just 
ARM, and mostly in emulators, as my hands are not steady enough anymore to 
do much electronics; too much caffeine) and the Nintendo DS (devkitpro).


I use open source software almost exclusively.  I have a couple of Windows 
boxes around just to keep myself up to date on the new stuff Microsoft 
is doing.  I don't do OSX, but I'd love to.


I use Linux (mostly ubuntu these days, but started with Slackware back 
in the 2.0 kernel days), and BSDs (mostly FreeBSD, but OpenBSD and 
NetBSD a bit as well).  I like different architectures, and trying to 
get a unix of some sort running on them (I have MIPS, ARM, Alpha, Sparc, 
x86, and x86_64 machines in one form or another).


I love learning new things, and D is the most exciting thing I have 
gotten into in the past 5 years.  I hope to become part of the community in 
some way or another.





Re: Template Base Classes, Refering to typeof(this)

2009-11-04 Thread Travis Boucher

Ellery Newcomer wrote:

Travis Boucher wrote:

Any hints on how to implement this so I can keep my original
declaration? class VectorBase(size_t S, T)


Make that bugger a struct or forget about opAssign.


Why wouldn't opAssign work for a class?  (I don't have a problem with 
structs, they make more sense for a small (2-5) Vector class anyway.)


From what I understand, structs can't inherit from other structs. I 
could implement the specific classes using template mixins if needed.


Re: Template Base Classes, Refering to typeof(this)

2009-11-04 Thread Travis Boucher

Robert Jacques wrote:
On Wed, 04 Nov 2009 13:35:45 -0500, Travis Boucher 
boucher.tra...@gmail.com wrote:


I am writing a generic vector base class.  The class implements all of 
the operator overloads so I don't have to implement them over and over 
and over for each type of vector class.


class VectorBase(size_t S, T) {
T[S] data;

...
}

class Vector3f : VectorBase!(3, float) { ... }

The problem I am having is implementing operations that can take a 
matching vector.  I can't figure out the proper way of declaring the 
type of input.


eg.

void opAssign(VectorBase!(S, T) r);
  function VectorBase!(3LU,float).VectorBase.opAssign identity 
assignment operator overload is illegal



void opAssign(this r);
  basic type expected, not this


The only way I can think of handling it is to add another parameter to 
the template declaration, eg:


class VectorBase(size_t S, T, N) { ... }
class Vector3f : VectorBase!(3, float, Vector3f) { ... }

But I would like to avoid that if possible.

Any hints on how to implement this so I can keep my original 
declaration? class VectorBase(size_t S, T)


Well first, you can't overload assignment of a class to it's own type.


Ok, that makes sense, since an object is just a reference, so an 
assignment is really just a pointer copy, not a data copy, correct?


(It's part of the language spec, at the bottom of the operator overload 
page IIRC) Second, I've already solved this in D2, (using structs) so 
let me know if you want code. 


Yeah, I'd be interested.  I am currently running gdc with D1, but I did 
see some notes on getting gdc working with D2.


Third, fixed-size arrays as value types 
are coming in the next release (I think), so you could wait for that. 
Lastly, you're (probably) going to run into issues with your other 
operator overloads because of some bugs in instantiating templates 
inside of templates using template literals as opposed to types.


Do you have any hints on what to look out for?  I did implement a Vector 
class template, passing in a template parameter to refer to the 
instantiated type.  It used mixins.


eg.
template VectorBase(size_t S, T, N) { ... }
class Vector3f { mixin VectorBase!(3, float, Vector3f); }

It compiled and worked for the basic tests, including all operator 
overloading.


auto a = new Vector3f();
auto b = new Vector3f(1, 2, 3);
auto c = b * 2; // c = Vector3f(2, 4, 6)
auto d = b + c; // d = Vector3f(3, 6, 9)

I didn't run into any bugs, and even more complex methods (explicit 
methods) returned valid results (eg. T dotProduct(N vec) { ... }).