Re: Recursive aliases?

2012-04-26 Thread bcs

On 04/23/2012 04:44 AM, Steven Schveighoffer wrote:

On Sat, 21 Apr 2012 15:46:29 -0400, Mehrdad  wrote:


alias int delegate(out ItemGetter next) ItemGetter;

We currently can't do the above^ in D, but what do people think about
allowing it?
i.e. More specifically, I think an alias expression should be able to
refer to the identifier (unless it doesn't make sense, like including
a struct inside itself).
(It would require a look-ahead.)


It doesn't work.

If I do this:

alias int delegate() dtype;

alias int delegate(dtype d) ddtype;

pragma(msg, ddtype.stringof);

I get:

int delegate(int delegate() d)

Note how it's not:

int delegate(dtype d)

Why? Because alias does not create a new type. It's a new symbol that
*links* to the defined type.



The only problem is that the text representation of the type contains 
itself as a proper substring. As far as the compiler is concerned, 
this should be no harder to handle than a linked-list node or the 
curiously recurring template pattern.


Another use for it would be one of the cleaner state machine 
implementations I've ever seen:



// hypothetical -- this is the recursive alias being proposed, not legal D today
alias state delegate() state;

state current = start();
while (current !is null)
    current = current();
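
For reference, the recursion can already be expressed today by routing it
through a named aggregate; a minimal sketch (names are illustrative):

// Compiles today: the recursive delegate type goes through a struct.
struct ItemGetter
{
    int delegate(out ItemGetter next) dg;
}

// The state-machine variant looks much the same:
struct State
{
    State delegate() next;
}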




Re: Cross module version specs

2012-04-26 Thread bcs

On 04/26/2012 05:37 AM, Steven Schveighoffer wrote:


versions should be defined for the *entire program*, not just for
certain files. And if they are defined just for certain files, define
them in the file itself.



Versions should be defined for the *entire program*, not just for 
whatever you happen to be compiling right now.


Is there any way to make different modules complain if you link object 
files built with different versions set?


Re: Cross module version specs

2012-04-26 Thread bcs

On 04/26/2012 03:20 AM, Jonathan M Davis wrote:

On Thursday, April 26, 2012 12:09:19 James Miller wrote:

All I can think is that version specifiers aren't carried across
modules


They can't be. The only time that versions apply to your entire program is if
they're built-in or they're specified on the command line.



One monster of a string mixin?


which pretty much makes them completely useless unless
you only use the built-in versions.


That's not true at all. It just means that versions are either useful for
something within a module or they're intended for your program as a whole and
passed on the command line (e.g. StdDdoc is used by Phobos, and it's not
standard at all; the makefile adds it to the list of compiler flags). But yes,
it's true that if you want to define a version in one module which affects
another, you can't do it.

The closest that you could get would be something along the lines of
having a function in the imported module which returned the version statements
as a string which the importing module mixed in. Another option would be to
just use static ifs, since they'd be affected by whatever variables or enums
were defined in the imported modules. e.g.

static if(is(myVersionEnum1))
{
}
else static if(is(myVersionEnum2))
{
}

- Jonathan M Davis
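
For illustration, a minimal sketch of the string-mixin idea mentioned above
(module and version names are hypothetical):

// versions.d -- central place that knows which versions should be in effect
module versions;

string versionDecls()
{
    return "version = UseFeatureX;\n"
         ~ "version = UseFeatureY;\n";
}

// consumer.d -- every module that needs the versions mixes them back in
module consumer;
import versions;

mixin(versionDecls());

version (UseFeatureX)
    pragma(msg, "Feature X enabled in consumer");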




Re: Convert a delegate to a function (i.e. make a thunk)

2012-04-26 Thread David Brown
On 2012-04-25, H. S. Teoh  wrote:

> This is like GCC's implementation of nested function pointers using
> trampolines:
>
>   http://gcc.gnu.org/onlinedocs/gccint/Trampolines.html
>
> These nested function pointers have access to their containing lexical
> scope, even though they are just regular, non-fat pointers.
>
> Something similar can be used here, though for full-fledged D delegates
> the trampoline would need to be in executable heap memory. I believe
> nowadays heap memory is non-executable due to security concerns with
> heap overflow exploits, right? So this may require special treatment.

GCC (at least on Linux) does a few things.  The trampoline is on
the stack, but the generated code:

  - Creates a special section

.section .note.GNU-stack,"x",@progbits

in the assembly that indicates to the linker that the resulting
elf file needs to allow the stack to be executable.

  - Depending on the CPU, it may call a libgcc function
__clear_cache.  x86 is coherent, so this isn't needed, but it is
in the general case.

David


Re: Static method conflicts with non-static method?

2012-04-26 Thread Paulo Pinto

On Friday, 27 April 2012 at 06:14:13 UTC, H. S. Teoh wrote:

Is this a bug? Code:

import std.stdio;

struct S {
static int func(int x) { return x+1; }
int func(int x) { return x+2; }
}

void main() {
S s;
writeln(s.func(1));
}

DMD (latest git) output:

test.d(10): Error: function test.S.func called with argument types:
((int))
matches both:
test.S.func(int x)
and:
test.S.func(int x)

The error message is unhelpful, but basically the complaint is that the
static method is conflicting with the non-static method.

But I would've thought it is unambiguous; I'd expect that s.func should
resolve to the non-static method, and S.func to the static method. After
all, no object is needed to invoke the static method, and the static
method cannot be invoked without an object.


T



I always thought that D follows the same rules as Java, C++, and C#, 
meaning you can use an instance object to call a static method even 
though it is not needed.


As such, the call becomes ambiguous, because the compiler won't know 
which one it is supposed to call.


I tried to look in the language reference but did not find a clear 
explanation for this. Either way, I would expect such scenarios to be 
forbidden; this type of code is a pain to maintain if it is allowed.


--
Paulo




Re: Cross module version specs

2012-04-26 Thread Paulo Pinto

On Friday, 27 April 2012 at 05:51:36 UTC, Walter Bright wrote:

On 4/26/2012 3:09 AM, James Miller wrote:
I'm trying to write a binding that has conditional sections where some 
features have to be enabled. I am using version statements for this.

I have a list of version specs in a module by themselves. When I try to 
compile another module that imports this module, it acts as if the 
version was never specified. I have tried wrapping the specs inside a 
version block, then setting that from the command line, but that doesn't 
work. Setting the version manually works as expected. I have also tried 
including the versions file on the command line.

All I can think is that version specifiers aren't carried across 
modules, which pretty much makes them completely useless unless you 
only use the built-in versions.


This is quite deliberate behavior.

Aside from the rationale and other solutions given in this 
thread, the one I prefer is to define features as functions, 
and then implement those functions or not depending on the 
configuration.



I would be with Walter on this.

This is the usual behavior in other module-based languages with 
conditional compilation support. Developers picking up D would be 
confused if the behavior were different here.


--
Paulo



Static method conflicts with non-static method?

2012-04-26 Thread H. S. Teoh
Is this a bug? Code:

import std.stdio;

struct S {
static int func(int x) { return x+1; }
int func(int x) { return x+2; }
}

void main() {
S s;
writeln(s.func(1));
}

DMD (latest git) output:

test.d(10): Error: function test.S.func called with argument types:
((int))
matches both:
test.S.func(int x)
and:
test.S.func(int x)

The error message is unhelpful, but basically the complaint is that the
static method is conflicting with the non-static method.

But I would've thought it is unambiguous; I'd expect that s.func should
resolve to the non-static method, and S.func to the static method. After
all, no object is needed to invoke the static method, and the static
method cannot be invoked without an object.


T

-- 
The early bird gets the worm. Moral: ewww...


Re: Cross module version specs

2012-04-26 Thread Walter Bright

On 4/26/2012 3:09 AM, James Miller wrote:

I'm trying to write a binding that has conditional sections where some features
have to be enabled. I am using version statements for this.

I have a list of version specs in a module by themselves. When I try to compile
another module that imports this module, it acts as if the version was never
specified. I have tried wrapping the specs inside a version block, then setting
that from the command line, but that doesn't work. Setting the version manually works
as expected. I have also tried including the versions file on the command line.

All I can think is that version specifiers aren't carried across modules, which
pretty much makes them completely useless unless you only use the built-in
versions.


This is quite deliberate behavior.

Aside from the rationale and other solutions given in this thread, the one I 
prefer is to define features as functions, and then implement those functions or 
not depending on the configuration.
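
A minimal sketch of that approach (file, module, and function names are
hypothetical): every file declares the same module, and the build links in
exactly one of the two implementations.

// feature.di -- interface file seen by importers
module feature;
bool hasFancyRendering();

// feature_on/feature.d -- compiled in for the "full" configuration
module feature;
bool hasFancyRendering() { return true; }

// feature_off/feature.d -- compiled in for the "minimal" configuration
module feature;
bool hasFancyRendering() { return false; }

// client.d
module client;
import feature;

void render()
{
    if (hasFancyRendering())
    {
        // extra work only done in the full configuration
    }
}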


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Brad Anderson

On Friday, 27 April 2012 at 00:25:44 UTC, H. S. Teoh wrote:

On Thu, Apr 26, 2012 at 06:13:00PM -0400, Nick Sabalausky wrote:
"H. S. Teoh"  wrote in message 
news:mailman.2173.1335475413.4860.digitalmar...@puremagic.com...

[...]
> And don't forget that some code points (notably from the CJK block)
> are specified as "double-width", so if you're trying to do text
> layout, you'll want yet a different length (layoutLength?).

Correction: the official term for this is "full-width" (as opposed to
the "half-width" of the typical European scripts).


Interesting. Kinda makes sence that such thing exists, though: The CJK
characters (even the relatively simple Japanese *kanas) are detailed
enough that they need to be larger to achieve the same readability.
And that's the *non*-double-length ones. So I don't doubt there's ones
that need to be tagged as "Draw Extra Big!!" :)


Have you seen U+9598? It's an insanely convoluted glyph composed of
*three copies* of an already extremely complex glyph.

http://upload.wikimedia.org/wikipedia/commons/3/3c/U%2B9F98.png

(And yes, that huge thing is supposed to fit inside a SINGLE
character... what *were* those ancient Chinese scribes thinking?!)




For example, I have my font size in Windows Notepad set to a
comfortable value. But when I want to use hiragana or katakana, I have
to go into the settings and increase the font size so I can actually
read it (Well, to what *little* extent I can even read it in the first
place ;) ). And those kana's tend to be among the simplest CJK
characters.

(Don't worry - I only use Notepad as a quick-n-dirty scrap space,
never for real coding/writing).


LOL... love the fact that you felt obligated to justify your use of
notepad. :-P



> So we really need all four lengths. Ain't unicode fun?! :-)
>

No kidding. The *one* thing I really, really hate about 
Unicode is the
fact that most (if not all) of its complexity actually *is* 
necessary.


We're lucky the more imaginative scribes of the world have either been
dead for centuries or have restricted themselves to writing fictional
languages. :-) The inventions of the dead ones have been codified and
simplified by the unfortunate people who inherited their overly complex
systems (*cough*CJK glyphs*cough), and the inventions of the living ones
are largely ignored by the world due to the fact that, well, their
scripts are only useful for writing fictional languages. :-)

So despite the fact that there are still some crazy convoluted stuff out
there, such as Arabic or Indic scripts with pair-wise substitution rules
in Unicode, overall things are relatively tame. At least the
subcomponents of CJK glyphs are no longer productive (actively being
used to compose new characters by script users) -- can you imagine the
insanity if Unicode had to support composition by those radicals and
subparts? Or if Unicode had to support a script like this one:

http://www.arthaey.com/conlang/ashaille/writing/sarapin.html

whose components are graphically composed in, shall we say, entirely
non-trivial ways (see the composed samples at the bottom of the page)?



Unicode *itself* is undisputably necessary, but I do sure miss 
ASCII.


In an ideal world, where memory is not an issue and bus width is
indefinitely wide, a Unicode string would simply be a sequence of
integers (of arbitrary size). Things like combining diacritics, etc.,
would have dedicated bits/digits for representing them, so there's no
need of the complexity of UTF-8, UTF-16, etc.. Everything fits into a
single character. Every possible combination of diacritics on every
possible character has a unique representation as a single integer.
String length would be equal to glyph count.

In such an ideal world, screens would also be of indefinitely detailed
resolution, so anything can fit inside a single grid cell, so there's no
need of half-width/double-width distinctions.  You could port ancient
ASCII-centric C code just by increasing sizeof(char), and things would
Just Work.

Yeah I know. Totally impossible. But one can dream, right? :-)


[...]
> I've been thinking about unicode processing recently. Traditionally,
> we have to decode narrow strings into UTF-32 (aka dchar) then do
> table lookups and such. But unicode encoding and properties, etc.,
> are static information (at least within a single unicode release).
> So why bother with hardcoding tables and stuff at all?
>
> What we *really* should be doing, esp. for commonly-used functions
> like computing various lengths, is to automatically process said
> tables and encode the computation in finite-state machines that can
> then be optimized at the FSM level (there are known algos for
> generating optimal FSMs), codegen'd, and then optimized again at the
> assembly level by the compiler. These FSMs will operate at the
> native narrow string char type level, so that there will be no need
> fo

Re: Is it possible to build DMD using Windows SDK?

2012-04-26 Thread Andre Tampubolon
On 4/27/2012 1:30 AM, Rainer Schuetze wrote:
> 
> 
> On 4/26/2012 2:40 PM, Andre Tampubolon wrote:
>> Rainer Schuetze  wrote:
>>> On 4/24/2012 6:43 PM, David Nadlinger wrote:
 On Tuesday, 24 April 2012 at 13:47:30 UTC, Andre Tampubolon wrote:
> Any suggestions?

 In case he doesn't read your message here anyway, you might want to
 ping
 Rainer Schuetze directly, as he is the one who worked on VC support.

 David
>>>
>>>
>>> Unfortunately some changes to the makefile have been reverted, but I
>>> don't know why. This is the version that should work:
>>>
>>> https://github.com/D-Programming-Language/dmd/blob/965d831df554fe14c793ce0d6a1dc9f0b2956911/src/win32.mak
>>>
>>
>> But that one is still using dmc, right? I tried to use "CC=cl" (of course
>> MS' cl), and got a bunch of errors.
> 
> You should still use vcbuild\builddmd.bat which replaces dmc with
> dmc_cl, a batch that replaces dmc's options with the respective cl options.

Well I used vcbuild\builddmd.bat (see my first post), and it failed.


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread H. S. Teoh
On Thu, Apr 26, 2012 at 09:55:54PM -0400, Nick Sabalausky wrote:
[...]
> Crazy stuff! Some of them look rather similar to Arabic or Korean's
> Hangul (sp?), at least to my untrained eye. And then others are just
> *really* interesting-looking, like:
> 
> http://www.omniglot.com/writing/12480.htm
> http://www.omniglot.com/writing/ayeri.htm
> http://www.omniglot.com/writing/oxidilogi.htm
> 
> You're right though, if I were in charge of Unicode and tasked with
> handling some of those, I think I'd just say "Screw it. Unicode is now
> depricated.  Use ASCII instead. Doesn't have the characters for your
> langauge? Tough! Fix your language!" :)

You think that's crazy, huh? Check this out:

http://www.omniglot.com/writing/sumerian.htm

Now take a deep breath...

... this writing was *actually used* in ancient times. Yeah.

Which means it probably has a Unicode block assigned to it, right now.
:-)


> > When I get the time? Hah... I really need to get my lazy bum back to
> > working on the new AA implementation first. I think that would
> > contribute greater value than optimizing Unicode algorithms. :-) I
> > was hoping *somebody* would be inspired by my idea and run with
> > it...
> >
> 
> Heh, yea. It is a tempting project, but my plate's overflowing too.
> (Now if only I could make the same happen to bank account...!)
[...]

On the other hand though, sometimes it's refreshing to take a break from
"serious" low-level core language D code, and just write plain ole
normal boring application code in D. It's good to be reminded just how
easy and pleasant it is to write application code in D.

For example, just today I was playing around with a regex-based version
of formattedRead: you pass in a regex and a bunch of pointers, and the
function uses compile-time introspection to convert regex matches into
the correct value types. So you could call it like this:

int year;
string month;
int day;
regexRead(input, `(\d{4})\s+(\w+)\s+(\d{2})`, &year, &month, &day);

Basically, each pair of parentheses corresponds with a pointer argument;
non-capturing parentheses (?:) can be used for grouping without
assigning to an item.

Its current implementation is still kinda crude, but it does support
assigning to user-defined types if you define a fromString() method that
does the requisite conversion from the matching substring.
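
For illustration, a bare-bones regexRead along the lines described above might
look like this (a sketch only, not H. S. Teoh's actual implementation; it
assumes one capture group per pointer and leans on std.regex/std.conv, with
error handling omitted):

import std.conv : to;
import std.regex : matchFirst, regex;

void regexRead(Args...)(string input, string pattern, Args ptrs)
{
    auto caps = matchFirst(input, regex(pattern));
    foreach (i, ptr; ptrs)
    {
        // caps[0] is the whole match; capture groups start at index 1.
        *ptr = to!(typeof(*ptr))(caps[i + 1]);
    }
}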

The next step is to standardize on enums in user-defined types that
specify a regex substring to be used for matching items of that type, so
that the caller doesn't have to know what kind of string pattern is
expected by fromString(). I envision something like this:

struct MyDate {
enum stdFmt = `(\d{4}-\d{2}-\d{2})`;
enum americanFmt = `(\d{2}-\d{2}-\d{4})`;
static MyDate fromString(Char)(Char[] value) { ... }
}
...
string label1, label2;
MyDate dt1, dt2;
regexRead(input, `\s+(\w+)\s*=\s*`~MyDate.stdFmt~`\s*$`,
&label1, &dt1);
regexRead(input, `\s+(\w+)\s*=\s*`~MyDate.americanFmt~`\s*$`,
&label2, &dt2);

So the user can specify, in the regex, which date format to use in
parsing the dates.

I think this is a vast improvement over the current straitjacketed
formattedRead. ;-) And it's so much fun to code (and use).


T

-- 
Let X be the set not defined by this sentence...


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread H. S. Teoh
On Fri, Apr 27, 2012 at 04:12:25AM +0200, Matt Peterson wrote:
> On Friday, 27 April 2012 at 01:35:26 UTC, H. S. Teoh wrote:
> >When I get the time? Hah... I really need to get my lazy bum back to
> >working on the new AA implementation first. I think that would
> >contribute greater value than optimizing Unicode algorithms. :-) I
> >was hoping *somebody* would be inspired by my idea and run with it...
> 
> I actually recently wrote a lexer generator for D that wouldn't be
> that hard to adapt to something like this.

That's awesome! Would you like to give it a shot? ;-)

Also, I'm in love with lexer generators... I'd love to make good use of
your lexer generator if the code is available somewhere.


T

-- 
Nothing in the world is more distasteful to a man than to take the path
that leads to himself. -- Herman Hesse


Re: Cairo Deimos bindings

2012-04-26 Thread Walter Bright

On 4/26/2012 7:02 PM, James Miller wrote:

On Friday, 27 April 2012 at 01:45:20 UTC, Walter Bright wrote:

On 4/26/2012 1:28 AM, James Miller wrote:

I am currently writing D bindings for Cairo for submission into Deimos, could
somebody please make the repository so I can fork it?


I need:

library file name

cairo (that is it)

description

Cairo is a 2D graphics library with support for multiple output devices.
Currently supported output targets include the X Window System (via both Xlib
and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output.
Experimental backends include OpenGL, BeOS, OS/2, and DirectFB.

Cairo is designed to produce consistent output on all output media while taking
advantage of display hardware acceleration when available (eg. through the X
Render Extension).

(From the web page)

home page url for the library

http://www.cairographics.org/



https://github.com/D-Programming-Deimos/cairo


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Matt Peterson

On Friday, 27 April 2012 at 01:35:26 UTC, H. S. Teoh wrote:
When I get the time? Hah... I really need to get my lazy bum back to
working on the new AA implementation first. I think that would
contribute greater value than optimizing Unicode algorithms. :-) I was
hoping *somebody* would be inspired by my idea and run with it...


I actually recently wrote a lexer generator for D that wouldn't 
be that hard to adapt to something like this.


Re: Cairo Deimos bindings

2012-04-26 Thread James Miller

On Friday, 27 April 2012 at 01:45:20 UTC, Walter Bright wrote:

On 4/26/2012 1:28 AM, James Miller wrote:
I am currently writing D bindings for Cairo for submission into Deimos,
could somebody please make the repository so I can fork it?


I need:

library file name

cairo (that is it)

description
Cairo is a 2D graphics library with support for multiple output 
devices. Currently supported output targets include the X Window 
System (via both Xlib and XCB), Quartz, Win32, image buffers, 
PostScript, PDF, and SVG file output. Experimental backends 
include OpenGL, BeOS, OS/2, and DirectFB.


Cairo is designed to produce consistent output on all output 
media while taking advantage of display hardware acceleration 
when available (eg. through the X Render Extension).


(From the web page)

home page url for the library

http://www.cairographics.org/


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Nick Sabalausky
"Andrej Mitrovic"  wrote in message 
news:mailman.2183.1335491333.4860.digitalmar...@puremagic.com...
> On 4/27/12, H. S. Teoh  wrote:
>> It's ironic how useless Notepad is compared to an ancient DOS program
>> from the dinosaur age.
>
> If you run "edit" in command prompt or the run dialog (well, assuming
> you had a win32 box somewhere), you'd actually get a pretty decent
> dos-based editor that is still better than Notepad. It has split
> windows, a tab stop setting, and even a whole bunch of color settings.
> :P

Heh, I remember that :)

Holy crap, even in XP, they updated it to use the Windows standard key 
combos for cut/copy/paste. I had no idea, all this time. Back in DOS, it 
used that old "Shift-Ins" stuff.




Re: ^^ limitation

2012-04-26 Thread James Miller

On Friday, 27 April 2012 at 00:56:13 UTC, Tryo[17] wrote:


D provides an auto type facility that determines the type
that can best accommodate a particular value. What prevents
it from determining that the only type that can accommodate
that value is a BigInt? The same way it decides between int,
long, ulong, etc.

Because the compiler doesn't know how to make a BigInt; BigInt is 
part of the library, not the language.


Why couldn't to!string be overloaded to take a BigInt?

It is; it's the same overload that takes other objects.


The point is this, currently 2^^31 will produce a negative long
value on my system. Not that the value is wrong, the variable
simply cannot support the magnitude of the result for this
calculation so it wraps around and produces a negative value.
However, 2^^n for n>=32 produces a value of 0. Why not
produce the value and let the user choose what to put it into?
Why not make the language BigInt aware? What is the
negative effect of taking BigInt out of the library and making it
an official part of the language?


Because this is a native language. The idea is to be close to the 
hardware, and that means fixed-sized integers, fixed-sized floats 
and having to live with that. Making BigInt part of the language 
opens up the door for a whole host of other things to become 
"part of the language". While we're at it, why don't we make 
matrices part of the language, and regexes, and we might as well 
move all that datetime stuff into the language too. Oh and I 
would love to see all the signals stuff in there too.


The reason we don't put everything in the language is because the 
more you put into the language, the harder it is to move. There 
are more than enough bugs in D right now, and adding more 
features into the language means a higher burden for core 
development. There is a trend of trying to move away from tight 
integration into the compiler, and by extension the language. 
Associative arrays are being worked on to make most of the work 
be done in object.d, with the end result being the compiler only 
has to convert T[U] into AA(T, U) and do a similar conversion for 
AA literals. This means that there is no extra fancy work for the 
compiler to do to support AAs.
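
Roughly, the lowering being described looks like this (the library-side names
are illustrative, not the actual object.d interface):

void example()
{
    // What the programmer writes:
    int[string] counts;
    counts["d"] = 1;

    // What the compiler would conceptually rewrite it to, with the real
    // implementation living in object.d (names illustrative):
    //
    //     AssociativeArray!(string, int) counts;
    //     counts.opIndexAssign(1, "d");
}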


Also, D is designed for efficiency, if I don't want a BigInt, and 
all of the extra memory that comes with it, then I would rather have 
an error. I don't want what /should/ be a fast system to slow 
down because I accidentally type 1 << 33 instead of 1 << 23, I 
want an error of some sort.


The real solution here isn't to just blindly allow arbitrary 
features to be "in the language" as it were, but to make it 
easier to integrate library solutions so they feel like part of 
the language.


--
James Miller


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Nick Sabalausky
"H. S. Teoh"  wrote in message 
news:mailman.2182.1335490591.4860.digitalmar...@puremagic.com...
>
> Now in that, at least, it surpasses Norton Editor. :-) But had Norton
> not been bought over by Symantec, we'd have a modern, much more powerful
> version of NE today. But, oh well. Things have moved on. Vim beats the
> crap out of NE, Notepad, and just about any GUI editor out there. It
> also beats the snot out of emacs, but I don't want to start *that*
> flamewar. :-P
>

"We didn't start that flamewar,
It was always burning,
Since the world's been turning..."

>
> Here's more:
>
> http://www.omniglot.com/writing/conscripts2.htm
>
> Imagine if some of the more complicated scripts there were actually used
> in a real language, and Unicode had to support it...  Like this one:
>
> http://www.omniglot.com/writing/talisman.htm
>
> Or, if you *really* wanna go all-out:
>
> http://www.omniglot.com/writing/ssioweluwur.php
>
> (Check out the sample text near the bottom of the page and gape in
> awe at what creative minds let loose can produce... and horror at the
> prospect of Unicode being required to support it.)
>

Crazy stuff! Some of them look rather similar to Arabic or Korean's Hangul 
(sp?), at least to my untrained eye. And then others are just *really* 
interesting-looking, like:

http://www.omniglot.com/writing/12480.htm
http://www.omniglot.com/writing/ayeri.htm
http://www.omniglot.com/writing/oxidilogi.htm

You're right though, if I were in charge of Unicode and tasked with handling 
some of those, I think I'd just say "Screw it. Unicode is now deprecated. 
Use ASCII instead. Doesn't have the characters for your language? Tough! Fix 
your language!" :)

>
> When I get the time? Hah... I really need to get my lazy bum back to
> working on the new AA implementation first. I think that would
> contribute greater value than optimizing Unicode algorithms. :-) I was
> hoping *somebody* would be inspired by my idea and run with it...
>

Heh, yea. It is a tempting project, but my plate's overflowing too. (Now if 
only I could make the same happen to bank account...!)




Re: Cairo Deimos bindings

2012-04-26 Thread Walter Bright

On 4/26/2012 1:28 AM, James Miller wrote:

I am currently writing D bindings for Cairo for submission into Deimos, could
somebody please make the repository so I can fork it?


I need:

library file name
description
home page url for the library



Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Andrej Mitrovic
On 4/27/12, H. S. Teoh  wrote:
> It's ironic how useless Notepad is compared to an ancient DOS program
> from the dinosaur age.

If you run "edit" in command prompt or the run dialog (well, assuming
you had a win32 box somewhere), you'd actually get a pretty decent
dos-based editor that is still better than Notepad. It has split
windows, a tab stop setting, and even a whole bunch of color settings.
:P


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread H. S. Teoh
On Thu, Apr 26, 2012 at 09:03:59PM -0400, Nick Sabalausky wrote:
[...]
> Heh, any usage of Notepad *needs* to be justified. For example, it has an 
> undo buffer of exactly ONE change.

Don't laugh too hard. The original version of vi also had an undo buffer
of depth 1. In fact, one of the *current* vi's still only has an undo
buffer of depth 1. (Fortunately vim is much much saner.)


> And the stupid thing doesn't even handle Unix-style newlines.
> *Everything* handes Unix-style newlines these days, even on Windows.
> Windows *BATCH* files even accept Unix-style newlines, for 
> goddsakes! But not Notepad.
> 
> It is nice in it's leanness and no-nonsence-ness. But it desperately needs 
> some updates.

Back in the day, my favorite editor ever was Norton Editor. It's tiny
(only about 50k or less, IIRC) yet had innovative (for its day)
features... like split pane editing, ^V which flips capitalization to
EOL (so a single function serves for both uppercasing and lowercasing,
and you just apply it twice to do a single word).  Unfortunately it's a
DOS-only program.  I think it works in the command prompt, but I've
never tested it (the modern windows command prompt is subtly different
from the old DOS command prompt, so things may not quite work as they
used to).

It's ironic how useless Notepad is compared to an ancient DOS program
from the dinosaur age.


> At least it actually supports Unicode though. (Which actually I find 
> somewhat surprising.)

Now in that, at least, it surpasses Norton Editor. :-) But had Norton
not been bought over by Symantec, we'd have a modern, much more powerful
version of NE today. But, oh well. Things have moved on. Vim beats the
crap out of NE, Notepad, and just about any GUI editor out there. It
also beats the snot out of emacs, but I don't want to start *that*
flamewar. :-P


[...]
> > http://www.arthaey.com/conlang/ashaille/writing/sarapin.html
> >
> > whose components are graphically composed in, shall we say, entirely
> > non-trivial ways (see the composed samples at the bottom of the
> > page)?
> >
> 
> That's insane!
> 
> And yet, very very interesting...

Here's more:

http://www.omniglot.com/writing/conscripts2.htm

Imagine if some of the more complicated scripts there were actually used
in a real language, and Unicode had to support it...  Like this one:

http://www.omniglot.com/writing/talisman.htm

Or, if you *really* wanna go all-out:

http://www.omniglot.com/writing/ssioweluwur.php

(Check out the sample text near the bottom of the page and gape in
awe at what creative minds let loose can produce... and horror at the
prospect of Unicode being required to support it.)


[...]
> > Currently, std.uni code (argh the pun!!)
> 
> Hah! :)
> 
> > is hand-written with tables of which character belongs to which
> > class, etc.. These hand-coded tables are error-prone and
> > unnecessary. For example, think of computing the layout width of a
> > UTF-8 stream. Why waste time decoding into dchar, and then doing all
> > sorts of table lookups to compute the width? Instead, treat the
> > stream as a byte stream, with certain sequences of bytes evaluating
> > to length 2, others to length 1, and yet others to length 0.
> >
> > A lexer engine is perfectly suited for recognizing these kinds of
> > sequences with optimal speed. The only difference from a real lexer
> > is that instead of spitting out tokens, it keeps a running total
> > (layout) length, which is output at the end.
> >
> > So what we should do is to write a tool that processes Unicode.txt
> > (the official table of character properties from the Unicode
> > standard) and generates lexer engines that compute various Unicode
> > properties (grapheme count, layout length, etc.) for each of the UTF
> > encodings.
> >
> > This way, we get optimal speed for these algorithms, plus we don't
> > need to manually maintain tables and stuff, we just run the tool on
> > Unicode.txt each time there's a new Unicode release, and the correct
> > code will be generated automatically.
> >
> 
> I see. I think that's a very good observation, and a great suggestion.
> In fact, it'd imagine it'd be considerably simpler than a typial lexer
> generator. Much less of the fancy regexy-ness would be needed. Maybe
> put together a pull request if you get the time...?
[...]

When I get the time? Hah... I really need to get my lazy bum back to
working on the new AA implementation first. I think that would
contribute greater value than optimizing Unicode algorithms. :-) I was
hoping *somebody* would be inspired by my idea and run with it...


T

-- 
What do you mean the Internet isn't filled with subliminal messages? What about 
all those buttons marked "submit"??


Re: export extern (C) void Fun Error

2012-04-26 Thread Trass3r

export c callback fun:

alias void function(int id) ConnectedCallBack;
alias void function(int id, void* data, int len) ReadCallBack;


add extern(C) to be safe


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 17:26:40 H. S. Teoh wrote:
> Currently, std.uni code (argh the pun!!) is hand-written with tables of
> which character belongs to which class, etc.. These hand-coded tables
> are error-prone and unnecessary. For example, think of computing the
> layout width of a UTF-8 stream. Why waste time decoding into dchar, and
> then doing all sorts of table lookups to compute the width? Instead,
> treat the stream as a byte stream, with certain sequences of bytes
> evaluating to length 2, others to length 1, and yet others to length 0.
> 
> A lexer engine is perfectly suited for recognizing these kinds of
> sequences with optimal speed. The only difference from a real lexer is
> that instead of spitting out tokens, it keeps a running total (layout)
> length, which is output at the end.
> 
> So what we should do is to write a tool that processes Unicode.txt (the
> official table of character properties from the Unicode standard) and
> generates lexer engines that compute various Unicode properties
> (grapheme count, layout length, etc.) for each of the UTF encodings.
> 
> This way, we get optimal speed for these algorithms, plus we don't need
> to manually maintain tables and stuff, we just run the tool on
> Unicode.txt each time there's a new Unicode release, and the correct
> code will be generated automatically.

That's a fantastic idea! Of course, that leaves the job of implementing it... 
:)

- Jonathan M Davis
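
For concreteness, a tiny hand-written illustration of the byte-level scanning
idea quoted above; a generated engine would work the same way, but with its
states derived from the Unicode data files and accumulating layout width
instead of a simple count:

// Count code points in a UTF-8 string without ever decoding to dchar.
size_t codePointCount(const(char)[] s)
{
    size_t n;
    foreach (ubyte b; cast(const(ubyte)[]) s)
    {
        // Continuation bytes look like 10xxxxxx; every other byte starts
        // a new code point.
        if ((b & 0xC0) != 0x80)
            ++n;
    }
    return n;
}

unittest
{
    assert(codePointCount("héllo") == 5);
}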


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Nick Sabalausky
"H. S. Teoh"  wrote in message 
news:mailman.2179.1335486409.4860.digitalmar...@puremagic.com...
>
> Have you seen U+9598? It's an insanely convoluted glyph composed of
> *three copies* of an already extremely complex glyph.
>
> http://upload.wikimedia.org/wikipedia/commons/3/3c/U%2B9F98.png
>
> (And yes, that huge thing is supposed to fit inside a SINGLE
> character... what *were* those ancient Chinese scribes thinking?!)
>

Yikes!

>
>> For example, I have my font size in Windows Notepad set to a
>> comfortable value. But when I want to use hiragana or katakana, I have
>> to go into the settings and increase the font size so I can actually
>> read it (Well, to what *little* extent I can even read it in the first
>> place ;) ). And those kana's tend to be among the simplest CJK
>> characters.
>>
>> (Don't worry - I only use Notepad as a quick-n-dirty scrap space,
>> never for real coding/writing).
>
> LOL... love the fact that you felt obligated to justify your use of
> notepad. :-P
>

Heh, any usage of Notepad *needs* to be justified. For example, it has an 
undo buffer of exactly ONE change. And the stupid thing doesn't even handle 
Unix-style newlines. *Everything* handles Unix-style newlines these days, 
even on Windows. Windows *BATCH* files even accept Unix-style newlines, for 
goddsakes! But not Notepad.

It is nice in its leanness and no-nonsense-ness. But it desperately needs 
some updates.

At least it actually supports Unicode though. (Which actually I find 
somewhat surprising.)

'Course, this is all XP. For all I know maybe they have finally updated it 
in MS OSX, erm, I mean Vista and Win7...

>
>> > So we really need all four lengths. Ain't unicode fun?! :-)
>> >
>>
>> No kidding. The *one* thing I really, really hate about Unicode is the
>> fact that most (if not all) of its complexity actually *is* necessary.
>
> We're lucky the more imaginative scribes of the world have either been
> dead for centuries or have restricted themselves to writing fictional
> languages. :-) The inventions of the dead ones have been codified and
> simplified by the unfortunate people who inherited their overly complex
> systems (*cough*CJK glyphs*cough), and the inventions of the living ones
> are largely ignored by the world due to the fact that, well, their
> scripts are only useful for writing fictional languages. :-)
>
> So despite the fact that there are still some crazy convoluted stuff out
> there, such as Arabic or Indic scripts with pair-wise substitution rules
> in Unicode, overall things are relatively tame. At least the
> subcomponents of CJK glyphs are no longer productive (actively being
> used to compose new characters by script users) -- can you imagine the
> insanity if Unicode had to support composition by those radicals and
> subparts? Or if Unicode had to support a script like this one:
>
> http://www.arthaey.com/conlang/ashaille/writing/sarapin.html
>
> whose components are graphically composed in, shall we say, entirely
> non-trivial ways (see the composed samples at the bottom of the page)?
>

That's insane!

And yet, very very interesting...

>>
>> While I find that very intersting...I'm afraid I don't actually
>> understand your suggestion :/ (I do understand FSM's and how they
>> work, though) Could you give a little example of what you mean?
> [...]
>
> Currently, std.uni code (argh the pun!!)

Hah! :)

> is hand-written with tables of
> which character belongs to which class, etc.. These hand-coded tables
> are error-prone and unnecessary. For example, think of computing the
> layout width of a UTF-8 stream. Why waste time decoding into dchar, and
> then doing all sorts of table lookups to compute the width? Instead,
> treat the stream as a byte stream, with certain sequences of bytes
> evaluating to length 2, others to length 1, and yet others to length 0.
>
> A lexer engine is perfectly suited for recognizing these kinds of
> sequences with optimal speed. The only difference from a real lexer is
> that instead of spitting out tokens, it keeps a running total (layout)
> length, which is output at the end.
>
> So what we should do is to write a tool that processes Unicode.txt (the
> official table of character properties from the Unicode standard) and
> generates lexer engines that compute various Unicode properties
> (grapheme count, layout length, etc.) for each of the UTF encodings.
>
> This way, we get optimal speed for these algorithms, plus we don't need
> to manually maintain tables and stuff, we just run the tool on
> Unicode.txt each time there's a new Unicode release, and the correct
> code will be generated automatically.
>

I see. I think that's a very good observation, and a great suggestion. In 
fact, I'd imagine it'd be considerably simpler than a typical lexer 
generator. Much less of the fancy regexy-ness would be needed. Maybe put 
together a pull request if you get the time...?




Re: This shouldn't happen

2012-04-26 Thread Martin Nowak

We can't just use Typedef!(void*) or Typedef!(int) because -=/+= will
be allowed, which shouldn't be allowed for handles. const(void*) won't
work either, because you should be allowed to assign one handle to
another and const forbids that.


struct None; // undefined struct as bottom type
alias None* HWND;
enum INVALID_HANDLE_VALUE = cast(HWND)-1;

static assert(__traits(compiles, {HWND h; h = INVALID_HANDLE_VALUE;}));
static assert(!__traits(compiles, {None n;}));
static assert(!__traits(compiles, {HWND h; ++h;}));
static assert(!__traits(compiles, {HWND h; h + 1;}));

HWND foo(HWND h)
{
return h;
}

void main()
{
HWND h;
assert(h is null);
h = foo(h);
assert(h is null);
h = foo(INVALID_HANDLE_VALUE);
assert(h is INVALID_HANDLE_VALUE);
h = foo(null);
assert(h is null);
}


Re: ^^ limitation

2012-04-26 Thread Tryo[17]

On Tuesday, 24 April 2012 at 22:45:37 UTC, Marco Leise wrote:

Am Wed, 25 Apr 2012 06:00:31 +0900
schrieb "Tyro[17]" :

I believe the following two lines of code should produce the 
same output. Is there a specific reason why D doesn't allow 
this? Of course, the only way to store the result would be to 
put it into a BigInt variable or convert it to a string, but I 
don't think that should prevent the compiler from producing the 
correct value.


(101^^1000).to!string.writeln;
(BigInt(101)^^1000).writeln;

Regards,
Andrew


Well... what do you want to hear? I like to know that the


Honestly, I just want to hear the rationale for why things are
the way they are. I see things possible in other languages that
I know are not as powerful as D, and I get to wonder why. If
I don't understand enough to make a determination on my
own, I simply ask.

result of mathematical operations doesn't change its type 
depending on the ability to  compile-time evaluate it and the 
magnitude of the result. Imagine the mess when the numbers are 
replaced by constants that are defined else where. This may


D provides an auto type facility that determines the type
that can best accommodate a particular value. What prevents
it from determining that the only type that can accommodate
that value is a BigInt? The same way it decides between int,
long, ulong, etc.

work in languages that are not strongly typed, but we rely on 
the exact data type of an expression. You are calling a 
function called to!string with the overload that takes an int.


Why couldn't to!string be overloaded to take a BigInt?

A BigInt or a string may be handled entirely differently by 
to!string. The compiler doesn't know what either BigInt is or 
what to!string is supposed to do. It cannot make the assumption


The point is this, currently 2^^31 will produce a negative long
value on my system. Not that the value is wrong, the variable
simply cannot support the magnitude of the result for this
calculation so it wraps around and produces a negative value.
However, 2^^n for n>=32 produces a value of 0. Why not
produce the value and let the user choose what to put it into?
Why not make the language BigInt aware? What is the
negative effect of taking BigInt out of the library and making it
an official part of the language?

that passing a string to it will work the same way as passing 
an int. What you would need is that int and BigInt have the 
same semantics everywhere. But once you leave the language by 
calling a C function for example you need an explicit 32-bit 
int again.
If you need this functionality use a programming language that 
has type classes and seamlessly switches between int/BigInt 
types, but drops the systems language attribute. You'll find 
languages that support unlimited integers and floats without 
friction. Or you use BigInt everywhere. Maybe Python or 
Mathematica.


I am not interested in another language (maybe in then future),
simply an understanding why things are the way they are.

Andrew




Re: export extern (C) void Fun Error

2012-04-26 Thread Andrej Mitrovic
On 4/26/12, "拖狗散步"  wrote:
> export c callback fun:
>
> alias void function(int id) ConnectedCallBack;
> alias void function(int id, void* data, int len) ReadCallBack;

Those are D function pointers. Try this:
alias extern(C) void function(int id) ConnectedCallBack;
alias extern(C) void function(int id, void* data, int len) ReadCallBack;


Re: export extern (C) void Fun Error

2012-04-26 Thread 拖狗散步

O God, no one answered...


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread H. S. Teoh
On Thu, Apr 26, 2012 at 06:13:00PM -0400, Nick Sabalausky wrote:
> "H. S. Teoh"  wrote in message 
> news:mailman.2173.1335475413.4860.digitalmar...@puremagic.com...
[...]
> > And don't forget that some code points (notably from the CJK block)
> > are specified as "double-width", so if you're trying to do text
> > layout, you'll want yet a different length (layoutLength?).
> >

Correction: the official term for this is "full-width" (as opposed to
the "half-width" of the typical European scripts).


> Interesting. Kinda makes sence that such thing exists, though: The CJK
> characters (even the relatively simple Japanese *kanas) are detailed
> enough that they need to be larger to achieve the same readability.
> And that's the *non*-double-length ones. So I don't doubt there's ones
> that need to be tagged as "Draw Extra Big!!" :)

Have you seen U+9598? It's an insanely convoluted glyph composed of
*three copies* of an already extremely complex glyph.

http://upload.wikimedia.org/wikipedia/commons/3/3c/U%2B9F98.png

(And yes, that huge thing is supposed to fit inside a SINGLE
character... what *were* those ancient Chinese scribes thinking?!)


> For example, I have my font size in Windows Notepad set to a
> comfortable value. But when I want to use hiragana or katakana, I have
> to go into the settings and increase the font size so I can actually
> read it (Well, to what *little* extent I can even read it in the first
> place ;) ). And those kana's tend to be among the simplest CJK
> characters.
> 
> (Don't worry - I only use Notepad as a quick-n-dirty scrap space,
> never for real coding/writing).

LOL... love the fact that you felt obligated to justify your use of
notepad. :-P


> > So we really need all four lengths. Ain't unicode fun?! :-)
> >
> 
> No kidding. The *one* thing I really, really hate about Unicode is the
> fact that most (if not all) of its complexity actually *is* necessary.

We're lucky the more imaginative scribes of the world have either been
dead for centuries or have restricted themselves to writing fictional
languages. :-) The inventions of the dead ones have been codified and
simplified by the unfortunate people who inherited their overly complex
systems (*cough*CJK glyphs*cough), and the inventions of the living ones
are largely ignored by the world due to the fact that, well, their
scripts are only useful for writing fictional languages. :-)

So despite the fact that there are still some crazy convoluted stuff out
there, such as Arabic or Indic scripts with pair-wise substitution rules
in Unicode, overall things are relatively tame. At least the
subcomponents of CJK glyphs are no longer productive (actively being
used to compose new characters by script users) -- can you imagine the
insanity if Unicode had to support composition by those radicals and
subparts? Or if Unicode had to support a script like this one:

http://www.arthaey.com/conlang/ashaille/writing/sarapin.html

whose components are graphically composed in, shall we say, entirely
non-trivial ways (see the composed samples at the bottom of the page)?


> Unicode *itself* is undisputably necessary, but I do sure miss ASCII.

In an ideal world, where memory is not an issue and bus width is
indefinitely wide, a Unicode string would simply be a sequence of
integers (of arbitrary size). Things like combining diacritics, etc.,
would have dedicated bits/digits for representing them, so there's no
need of the complexity of UTF-8, UTF-16, etc.. Everything fits into a
single character. Every possible combination of diacritics on every
possible character has a unique representation as a single integer.
String length would be equal to glyph count.

In such an ideal world, screens would also be of indefinitely detailed
resolution, so anything can fit inside a single grid cell, so there's no
need of half-width/double-width distinctions.  You could port ancient
ASCII-centric C code just by increasing sizeof(char), and things would
Just Work.

Yeah I know. Totally impossible. But one can dream, right? :-)


[...]
> > I've been thinking about unicode processing recently. Traditionally,
> > we have to decode narrow strings into UTF-32 (aka dchar) then do
> > table lookups and such. But unicode encoding and properties, etc.,
> > are static information (at least within a single unicode release).
> > So why bother with hardcoding tables and stuff at all?
> >
> > What we *really* should be doing, esp. for commonly-used functions
> > like computing various lengths, is to automatically process said
> > tables and encode the computation in finite-state machines that can
> > then be optimized at the FSM level (there are known algos for
> > generating optimal FSMs), codegen'd, and then optimized again at the
> > assembly level by the compiler. These FSMs will operate at the
> > native narrow string char type level, so that there will be no need
> > for explicit decoding.
> >
> > The generation algo can then be run just once per un

Re: Cairo Deimos bindings

2012-04-26 Thread Andrej Mitrovic
On 4/27/12, Trass3r  wrote:
> //! bring named enum members into current scope
> string flattenNamedEnum(EnumType)()
> if (is (EnumType == enum))
> {
>   string s = "";
>   foreach (i, e; __traits(allMembers, EnumType))
>   {
>   s ~= "alias " ~ EnumType.stringof ~ "." ~ __traits(allMembers,
> EnumType)[i] ~ " " ~ __traits(allMembers, EnumType)[i] ~ ";\n";
>   }
>
>   return s;
> }

I used something similar for a custom DLL symbol loader. I defined all
extern(C) function pointers inside of a struct, then mixed in the
loader code for each function by iterating all fields of the struct,
and then used a "flattenName" type template to make all the function
pointers global.
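
A rough sketch of that pattern (all identifiers here are hypothetical, and
loadSym stands in for whatever dlsym/GetProcAddress wrapper the real loader
uses):

struct CairoFuncs
{
    extern(C) void* function(void*) cairo_create;
    extern(C) void function(void*) cairo_destroy;
}

// Generate one assignment per field: look the symbol up by name and cast it
// to the field's function-pointer type.
string makeLoader(T)(string instance)
{
    string code;
    foreach (name; __traits(allMembers, T))
        code ~= instance ~ "." ~ name
              ~ " = cast(typeof(" ~ T.stringof ~ "." ~ name ~ ")) loadSym(\"" ~ name ~ "\");\n";
    return code;
}

// Usage inside some initialization function:
//     CairoFuncs cairo;
//     mixin(makeLoader!CairoFuncs("cairo"));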


Re: Cairo Deimos bindings

2012-04-26 Thread James Miller

On Thursday, 26 April 2012 at 23:28:19 UTC, Trass3r wrote:

enums cause issues because the C enum:

   enum Status {
  STATUS_SUCCESS
   }

has type enum Status and the members are access like 
STATUS_SUCCESS. The same enum in D is


   enum Status {
  STATUS_SUCCESS
   }

has type Status and the members are accessed using 
Status.STATUS_SUCCESS


//! bring named enum members into current scope
string flattenNamedEnum(EnumType)()
if (is (EnumType == enum))
{
string s = "";
foreach (i, e; __traits(allMembers, EnumType))
{
		s ~= "alias " ~ EnumType.stringof ~ "." ~ 
__traits(allMembers, EnumType)[i] ~ " " ~ __traits(allMembers, 
EnumType)[i] ~ ";\n";

}

return s;
}

I proposed 'extern(C) enum' to get rid of all those manual 
aliases but as always nothing happened.


Exactly.


I like that, it's cool, but I figured just doing a minor rewrite
of the enum would suffice. It's not that hard since Vim has a
block select, and cairo has some pretty consistent naming that
makes doing macros easy for them; the last step is just to check
that everything gets renamed properly.

--
James Miller


Re: Cairo Deimos bindings

2012-04-26 Thread Trass3r

enums cause issues because the C enum:

enum Status {
   STATUS_SUCCESS
}

has type enum Status and the members are access like STATUS_SUCCESS. The  
same enum in D is


enum Status {
   STATUS_SUCCESS
}

has type Status and the members are accessed using Status.STATUS_SUCCESS


//! bring named enum members into current scope
string flattenNamedEnum(EnumType)()
if (is (EnumType == enum))
{
string s = "";
foreach (i, e; __traits(allMembers, EnumType))
{
		s ~= "alias " ~ EnumType.stringof ~ "." ~ __traits(allMembers,  
EnumType)[i] ~ " " ~ __traits(allMembers, EnumType)[i] ~ ";\n";

}

return s;
}

I proposed 'extern(C) enum' to get rid of all those manual aliases but as  
always nothing happened.
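
A usage sketch for the helper above (the enum here is a stand-in for the real
cairo one):

enum cairo_status_t { CAIRO_STATUS_SUCCESS, CAIRO_STATUS_NO_MEMORY }

mixin(flattenNamedEnum!cairo_status_t());

// Both spellings now refer to the same member:
static assert(CAIRO_STATUS_NO_MEMORY == cairo_status_t.CAIRO_STATUS_NO_MEMORY);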



htod is not a useful tool, especially if you want to do any sort of  
cross-platform, robust binding, manual binding is really the only way to  
do it properly and properly reflect the original binding and api of the  
library.


It doesn't take that long, I did the binding for the 3000 line cairo.h  
file in about 3 hours, through judicious use of regex replaces and  
macros (I love Vim).


Exactly.


Re: Cairo Deimos bindings

2012-04-26 Thread James Miller

On Thursday, 26 April 2012 at 22:45:01 UTC, Trass3r wrote:
I'd say that usable htod generated headers still are a welcome 
addition to Deimos.


David


Even using some regex's is better than htod. It drops const, 
removes or messes up the comments etc.


There are also many things that should be changed in a binding to 
make it more D compatible, without affecting the C binding. Many 
C libraries define their own bool type, but D has a bool type 
that can be used just as easily, also making it easier to write D 
using native types.


Lots of C code has extraneous typedefs that are only there to 
strip out struct and union keywords, so they can be rewritten. 
enums cause issues because the C enum:


   enum Status {
  STATUS_SUCCESS
   }

has type enum Status and the members are accessed like 
STATUS_SUCCESS. The same enum in D is


   enum Status {
  STATUS_SUCCESS
   }

has type Status and the members are accessed using 
Status.STATUS_SUCCESS, which can be very bloated when converting 
heavily-namespaced code into D, because accessing the member 
CAIRO_STATUS_NO_MEMORY from the enum cairo_status_t is fine in C, 
and necessary because of the lack of modules, but the same in D 
is cairo_status_t.CAIRO_STATUS_NO_MEMORY, which is very verbose. 
Considering that this is one of the shorter enums in cairo, it 
becomes a problem.


Sometimes code will rely on specific extra headers to determine 
what to do, further complicating bindings, especially when you 
need to check for certain functionality.


htod is not a useful tool, especially if you want to do any sort 
of cross-platform, robust binding; manual binding is really the 
only way to do it properly and to faithfully reflect the original 
API of the library.


It doesn't take that long, I did the binding for the 3000 line 
cairo.h file in about 3 hours, through judicious use of regex 
replaces and macros (I love Vim).


--
James Miller


Re: What to do about default function arguments

2012-04-26 Thread Manu
I've actually been making fairly extensive use of function/delegate default
args. I was pleasantly surprised when I realised it was possible.
Funnily enough, it's one of those things that I just expected should work
(as I find most things I just expect should work do in fact tend to work in
D), so it seems it was intuitive too at some level.

It would be a shame to see it go, but I wouldn't say it's critical, just
very handy. In my case, used in shared interface bindings. Occurs more
often than not in some interfaces.

On 26 April 2012 07:00, Jonathan M Davis  wrote:

> On Wednesday, April 25, 2012 20:44:07 Walter Bright wrote:
> > A subtle but nasty problem - are default arguments part of the type, or
> part
> > of the declaration?
> >
> > See http://d.puremagic.com/issues/show_bug.cgi?id=3866
> >
> > Currently, they are both, which leads to the nasty behavior in the bug
> > report.
> >
> > The problem centers around name mangling. If two types mangle the same,
> then
> > they are the same type. But default arguments are not part of the mangled
> > string. Hence the schizophrenic behavior.
> >
> > But if we make default arguments solely a part of the function
> declaration,
> > then function pointers (and delegates) cannot have default arguments.
> (And
> > maybe this isn't a bad thing?)
>
> Can function pointers have default arguments in C? Honestly, it strikes me
> as
> rather bizarre for them to have default arguments. I really don't think
> that
> they buy you much.
>
> If you use the function or delegate immediately after declaring it (as is
> typically the case when they're nested), then you could have just put
> the
> default argument in the function itself and not have it as a parameter.
> And if
> you're holding on to a function or delegate long term, you're almost
> certainly
> going to be using it generically, in which case I wouldn't expect a default
> argument to make sense there often either.
>
> I'd vote to just disallow default arguments for function pointers and
> delegates.
>
> - Jonathan M Davis
>


Re: Cairo Deimos bindings

2012-04-26 Thread Trass3r
I'd say that usable htod generated headers still are a welcome addition  
to Deimos.


David


Even using some regex's is better than htod. It drops const, removes or  
messes up the comments etc.


Re: Cross module version specs

2012-04-26 Thread James Miller
On Thursday, 26 April 2012 at 12:37:44 UTC, Steven Schveighoffer 
wrote:
On Thu, 26 Apr 2012 06:32:58 -0400, James Miller 
 wrote:


On Thursday, 26 April 2012 at 10:20:37 UTC, Jonathan M Davis 
wrote:

On Thursday, April 26, 2012 12:09:19 James Miller wrote:

which pretty much makes them completely useless unless
you only use the built-in versions.


That's not true at all. It just means that versions are 
either useful for
something within a module or they're intended for your 
program as a whole and
passed on the command line (e.g. StdDdoc is used by Phobos, 
and it's not
standard at all; the makefile adds it to the list of compiler 
flags). But yes,
it's true that if you want to define a version in one module 
which affects

another, you can't do it.


Is there any reason for that limitation? Seems like an 
arbitrary limit to me.


The library I am binding to uses ifdefs to let the compiler 
see the appropriate declarations in the header files. It would 
be nice in general for D to be able to mimic that capability, 
as it means you can have a "configuration" file with a list of 
specs that can be generated at build-time by something like 
autoconf.


No, it would not be nice, it would be horrible.  C's 
preprocessor is one of the main reasons I sought out something 
like D.  The fact that you can include files in a different 
order and get a completely different result is not conducive to 
understanding code or keeping code sane.


The correct thing to use for something like this is enums and 
static ifs.  They work because enums are fully qualified within 
the module they are defined, and you can't define and use the 
same unqualified enums in multiple places.



[snip]


These kinds of decoupled effects are what kills me when I ever 
read a heavily #ifdef'd header file.


versions should be defined for the *entire program*, not just 
for certain files.  And if they are defined just for certain 
files, define them in the file itself.


-Steve


I didn't think about it like that, thanks. I'm planning on just 
using enums and static ifs in this case now, since it certainly 
makes more sense and I can achieve the same effect.


Thanks all

--
James Miller


Re: Cairo Deimos bindings

2012-04-26 Thread James Miller

On Thursday, 26 April 2012 at 18:20:01 UTC, David Nadlinger wrote:
On Thursday, 26 April 2012 at 18:15:49 UTC, Andrej Mitrovic 
wrote:
Is there really a need to write it manually? All I had to do 
to use
the C library directly is call HTOD on the headers. Otherwise 
I use

CairoD.


I'd say that usable htod generated headers still are a welcome 
addition to Deimos.


David


On top of that, htod doesn't work on Linux AFAIK. Also, there are 
a lot of header files missing from that, probably due to people 
not realising that not all installations install all the headers.


I have the headers for the following surfaces: beos, cogl, 
directfb, drm, gl, os2, pdf, ps, qt, quartz, quartz-image, 
script, skia, svg, tee, vg, win32, xcb, xlib and xml. There are 
also the extra headers like the core cairo, gobject support, 
hardware-specific definitions and so on.


I am also slightly altering some of the code (in a 
well-documented manner) to reflect the differences in how similar 
constructs are used in C and D. So, un-namespacing enums, because 
in D you access the values as TypeName.Member rather than just 
Member as in C. Also replacing ifdef blocks with conditional 
compilation so I can replicate, in D, error messages similar to 
the C headers'. There is a lot that is difficult to do with 
automated tools, and it would be nice if this was properly 
complete. I plan on actually writing a proper installer for this 
so your installed D bindings reflect the available C functions.
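
For example, a rough sketch of what one of those ifdef replacements could 
look like (the feature name here is illustrative, not the final binding):

// set via -version=CAIRO_HAS_PDF_SURFACE on the command line
version (CAIRO_HAS_PDF_SURFACE)
{
    // PDF surface declarations go here
}
else
{
    static assert(false, "cairo was not built with PDF surface support");
}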


--
James Miller


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Nick Sabalausky
"H. S. Teoh"  wrote in message 
news:mailman.2173.1335475413.4860.digitalmar...@puremagic.com...
> On Thu, Apr 26, 2012 at 01:51:17PM -0400, Nick Sabalausky wrote:
>> "James Miller"  wrote in message
>> news:qdgacdzxkhmhojqce...@forum.dlang.org...
>> > I'm writing an introduction/tutorial to using strings in D, paying
>> > particular attention to the complexities of UTF-8 and 16. I realised
>> > that when you want the number of characters, you normally actually
>> > want to use walkLength, not length. Is it reasonable for the
>> > compiler to pick this up during semantic analysis and point out this
>> > situation?
>> >
>> > It's just a thought because a lot of the time, using length will get
>> > the right answer, but for the wrong reasons, resulting in lurking
>> > bugs. You can always cast to immutable(ubyte)[] or
>> > immutable(short)[] if you want to work with the actual bytes anyway.
>>
>> I find that most of the time I actually *do* want to use length. Don't
>> know if that's common, though, or if it's just a reflection of my
>> particular use-cases.
>>
>> Also, keep in mind that (unless I'm mistaken) walkLength does *not*
>> return the number of "characters" (ie, graphemes), but merely the
>> number of code points - which is not the same thing (due to existence
>> of the [confusingly-named] "combining characters").
> [...]
>
> And don't forget that some code points (notably from the CJK block) are
> specified as "double-width", so if you're trying to do text layout,
> you'll want yet a different length (layoutLength?).
>

Interesting. Kinda makes sense that such a thing exists, though: the CJK 
characters (even the relatively simple Japanese *kanas) are detailed enough 
that they need to be larger to achieve the same readability. And that's the 
*non*-double-length ones. So I don't doubt there's ones that need to be 
tagged as "Draw Extra Big!!" :)

For example, I have my font size in Windows Notepad set to a comfortable 
value. But when I want to use hiragana or katakana, I have to go into the 
settings and increase the font size so I can actually read it (Well, to what 
*little* extent I can even read it in the first place ;) ). And those kana's 
tend to be among the simplest CJK characters.

(Don't worry - I only use Notepad as a quick-n-dirty scrap space, never for 
real coding/writing).

> So we really need all four lengths. Ain't unicode fun?! :-)
>

No kidding. The *one* thing I really, really hate about Unicode is the fact 
that most (if not all) of its complexity actually *is* necessary.

Unicode *itself* is undisputably necessary, but I do sure miss ASCII.

> Array length is simple.  Walklength is already implemented. Grapheme
> length requires recognition of 'combining characters' (or rather,
> ignoring said characters), and layout length requires recognizing
> widthless, single- and double-width characters.
>

Yup.

> I've been thinking about unicode processing recently. Traditionally, we
> have to decode narrow strings into UTF-32 (aka dchar) then do table
> lookups and such. But unicode encoding and properties, etc., are static
> information (at least within a single unicode release). So why bother
> with hardcoding tables and stuff at all?
>
> What we *really* should be doing, esp. for commonly-used functions like
> computing various lengths, is to automatically process said tables and
> encode the computation in finite-state machines that can then be
> optimized at the FSM level (there are known algos for generating optimal
> FSMs), codegen'd, and then optimized again at the assembly level by the
> compiler. These FSMs will operate at the native narrow string char type
> level, so that there will be no need for explicit decoding.
>
> The generation algo can then be run just once per unicode release, and
> everything will Just Work.
>

While I find that very interesting... I'm afraid I don't actually understand 
your suggestion :/ (I do understand FSMs and how they work, though). Could 
you give a little example of what you mean?




Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Nick Sabalausky
"Jonathan M Davis"  wrote in message 
news:mailman.2166.1335463456.4860.digitalmar...@puremagic.com...
> On Thursday, April 26, 2012 13:51:17 Nick Sabalausky wrote:
>> Also, keep in mind that (unless I'm mistaken) walkLength does *not* 
>> return
>> the number of "characters" (ie, graphemes), but merely the number of code
>> points - which is not the same thing (due to existence of the
>> [confusingly-named] "combining characters").
>
> You're not mistaken. Nothing in Phobos (save perhaps some of std.regex's
> internals) deals with graphemes. It all operates on code points, and 
> strings
> are considered to be ranges of code points, not graphemes. So, as far as
> ranges go, walkLength returns the actual length of the range. That's 
> _usually_
> the number of characters/graphemes as well, but it's certainly not 100%
> correct. We'll need further unicode facilities in Phobos to deal with that
> though, and I doubt that strings will ever change to be treated as ranges 
> of
> graphemes, since that would be incredibly expensive computationally. We 
> have
> enough performance problems with strings as it is. What we'll probably get 
> is
> extra functions to deal with normalization (and probably something to 
> count
> the number of graphemes) and probably a wrapper type that does deal in
> graphemes.
>

Yea, I'm not saying that walkLength should deal with graphemes. Just that if 
someone wants the number of "characters", then neither length *nor* 
walkLength are guaranteed to be correct.




Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread H. S. Teoh
On Thu, Apr 26, 2012 at 01:51:17PM -0400, Nick Sabalausky wrote:
> "James Miller"  wrote in message 
> news:qdgacdzxkhmhojqce...@forum.dlang.org...
> > I'm writing an introduction/tutorial to using strings in D, paying
> > particular attention to the complexities of UTF-8 and 16. I realised
> > that when you want the number of characters, you normally actually
> > want to use walkLength, not length. Is it reasonable for the
> > compiler to pick this up during semantic analysis and point out this
> > situation?
> >
> > It's just a thought because a lot of the time, using length will get
> > the right answer, but for the wrong reasons, resulting in lurking
> > bugs. You can always cast to immutable(ubyte)[] or
> > immutable(short)[] if you want to work with the actual bytes anyway.
> 
> I find that most of the time I actually *do* want to use length. Don't
> know if that's common, though, or if it's just a reflection of my
> particular use-cases.
> 
> Also, keep in mind that (unless I'm mistaken) walkLength does *not*
> return the number of "characters" (ie, graphemes), but merely the
> number of code points - which is not the same thing (due to existence
> of the [confusingly-named] "combining characters").
[...]

And don't forget that some code points (notably from the CJK block) are
specified as "double-width", so if you're trying to do text layout,
you'll want yet a different length (layoutLength?).

So we really need all four lengths. Ain't unicode fun?! :-)

Array length is simple.  Walklength is already implemented. Grapheme
length requires recognition of 'combining characters' (or rather,
ignoring said characters), and layout length requires recognizing
widthless, single- and double-width characters.
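
To make the first two concrete (a tiny example, nothing more):

import std.range : walkLength;

void main()
{
    string s = "héllo";              // 'é' is two UTF-8 code units
    assert(s.length == 6);           // array length: code units (bytes)
    assert(walkLength(s) == 5);      // range length: code points (dchars)

    string t = "e\u0301";            // 'e' + combining acute accent
    assert(walkLength(t) == 2);      // two code points, but one grapheme
}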

I've been thinking about unicode processing recently. Traditionally, we
have to decode narrow strings into UTF-32 (aka dchar) then do table
lookups and such. But unicode encoding and properties, etc., are static
information (at least within a single unicode release). So why bother
with hardcoding tables and stuff at all?

What we *really* should be doing, esp. for commonly-used functions like
computing various lengths, is to automatically process said tables and
encode the computation in finite-state machines that can then be
optimized at the FSM level (there are known algos for generating optimal
FSMs), codegen'd, and then optimized again at the assembly level by the
compiler. These FSMs will operate at the native narrow string char type
level, so that there will be no need for explicit decoding.

The generation algo can then be run just once per unicode release, and
everything will Just Work.
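
As a toy illustration of operating directly on the narrow string (a real 
implementation would generate its tables from the Unicode data files), 
counting code points over well-formed UTF-8 needs nothing more than a 
single-state machine that skips continuation bytes:

size_t codePointCount(const(char)[] s) pure nothrow
{
    size_t n = 0;
    foreach (c; s)
    {
        // every byte that is not a continuation byte (10xxxxxx)
        // starts a new code point
        if ((c & 0xC0) != 0x80)
            ++n;
    }
    return n;
}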


T

-- 
Give me some fresh salted fish, please.


Re: What to do about default function arguments

2012-04-26 Thread deadalnix

On 26/04/2012 20:49, Walter Bright wrote:

On 4/26/2012 8:06 AM, Sean Kelly wrote:

Sounds to me like you just answered your own question :-)


Pretty much. I posted it here to see if I missed something major. Not
that that ever happens :-)


Why should default arguments be mangled at all?


Re: How can D become adopted at my company?

2012-04-26 Thread Joseph Rushton Wakeling

On 26/04/12 21:08, Walter Bright wrote:

I have tried, but failed.

I also agree with you that it's moot, as LDC and GDC exist.


I think I should probably add here that I do recognize the amount of effort 
you've put in here, and wasn't intending to be pejorative about DMD.  I just 
think it's a terrible shame that you've been constrained in this way.


Re: How can D become adopted at my company?

2012-04-26 Thread Sean Kelly
On Apr 26, 2012, at 5:58 AM, Joseph Rushton Wakeling wrote:

> The problem is not GPL compatibility but whether sufficient freedoms are 
> granted to distribute and modify sources.  That has a knockon impact on the 
> ability of 3rd parties to package and distribute the software, to patch it 
> without necessarily going via upstream, etc. etc., all of which affects the 
> degree to which others can easily use the language.

While distributing modified sources is certainly one way of dealing with 
changes not represented by the official distribution, I prefer distributing 
patches instead.  It's easier to audit what's being changed, and updating to a 
new release tends to be easier.

Re: What to do about default function arguments

2012-04-26 Thread Timon Gehr

On 04/26/2012 08:56 PM, Walter Bright wrote:

On 4/26/2012 2:21 AM, Timon Gehr wrote:

This is a matter of terminology. For example, for 'equal' just exclude
the
default parameters from the comparison. For 'the same' include default
parameters in the comparison. (therefore, 'the same' implies 'equal')


I think this is torturing the semantics.



The result of ?: is the type of the two arguments if they are the
same, and it
is the equal type without default arguments if they are not the same.


That's the problem - the selection of arbitrary rules.


It is not entirely arbitrary. Anyway, point taken.


Re: Cairo Deimos bindings

2012-04-26 Thread Andrej Mitrovic
On 4/26/12, David Nadlinger  wrote:
> On Thursday, 26 April 2012 at 18:15:49 UTC, Andrej Mitrovic wrote:
>> Is there really a need to write it manually? All I had to do to
>> use
>> the C library directly is call HTOD on the headers. Otherwise I
>> use
>> CairoD.
>
> I'd say that usable htod generated headers still are a welcome
> addition to Deimos.

Somewhat related: Deimos doesn't seem to show up on github search:
https://github.com/search?utf8=%E2%9C%93&q=deimos&type=Everything&repo=&langOverride=&start_value=1

I think the link to it should be put in the Community section, right
below the Github link.

The link is also here but very hard to spot imo:
http://dlang.org/interfaceToC.html


Re: How can D become adopted at my company?

2012-04-26 Thread Walter Bright

On 4/26/2012 2:27 AM, Jonathan M Davis wrote:

I think that the "openness" of dmd being an issue is purely  a matter of
misunderstandings and FUD. And if Walter _could_ make the backend GPL, he may
very well have done so ages ago. But he can't, so there's no point in
complaining about it - especially since it doesn't impede your ability to use
dmd.


I have tried, but failed.

I also agree with you that it's moot, as LDC and GDC exist.



Re: What to do about default function arguments

2012-04-26 Thread Walter Bright

On 4/26/2012 1:54 AM, Jacob Carlborg wrote:

On 2012-04-26 05:44, Walter Bright wrote:


But if we make default arguments solely a part of the function
declaration, then function pointers (and delegates) cannot have default
arguments. (And maybe this isn't a bad thing?)


Why not?



Because a function pointer, and delegate, can exist without a corresponding 
declaration.


Re: What to do about default function arguments

2012-04-26 Thread Walter Bright

On 4/26/2012 2:21 AM, Timon Gehr wrote:

This is a matter of terminology. For example, for 'equal' just exclude the
default parameters from the comparison. For 'the same' include default
parameters in the comparison. (therefore, 'the same' implies 'equal')


I think this is torturing the semantics.



The result of ?: is the type of the two arguments if they are the same, and it
is the equal type without default arguments if they are not the same.


That's the problem - the selection of arbitrary rules.


Re: What to do about default function arguments

2012-04-26 Thread Walter Bright

On 4/26/2012 8:06 AM, Sean Kelly wrote:

Sounds to me like you just answered your own question :-)


Pretty much. I posted it here to see if I missed something major. Not that that 
ever happens :-)


Re: Is it possible to build DMD using Windows SDK?

2012-04-26 Thread Rainer Schuetze



On 4/26/2012 2:40 PM, Andre Tampubolon wrote:

Rainer Schuetze  wrote:

On 4/24/2012 6:43 PM, David Nadlinger wrote:

On Tuesday, 24 April 2012 at 13:47:30 UTC, Andre Tampubolon wrote:

Any suggestions?


In case he doesn't read your message here anyway, you might want to ping
Rainer Schuetze directly, as he is the one who worked on VC support.

David



Unfortunately some changes to the makefile have been reverted, but I
don't know why. This is the version that should work:

https://github.com/D-Programming-Language/dmd/blob/965d831df554fe14c793ce0d6a1dc9f0b2956911/src/win32.mak


But that one is still using dmc, right? I tried to use "CC=cl" (of course
MS' cl), and got a bunch of errors.


You should still use vcbuild\builddmd.bat which replaces dmc with 
dmc_cl, a batch that replaces dmc's options with the respective cl options.


Re: Cairo Deimos bindings

2012-04-26 Thread David Nadlinger

On Thursday, 26 April 2012 at 18:15:49 UTC, Andrej Mitrovic wrote:
Is there really a need to write it manually? All I had to do to 
use
the C library directly is call HTOD on the headers. Otherwise I 
use

CairoD.


I'd say that usable htod generated headers still are a welcome 
addition to Deimos.


David


Re: Cairo Deimos bindings

2012-04-26 Thread Andrej Mitrovic
On 4/26/12, Andrej Mitrovic  wrote:
> Is there really a need to write it manually? All I had to do to use
> the C library directly is call HTOD on the headers. Otherwise I use
> CairoD.

Sorry for the wrong quote and text above quote, it was meant for OP.


Re: Cairo Deimos bindings

2012-04-26 Thread Andrej Mitrovic
Is there really a need to write it manually? All I had to do to use
the C library directly is call HTOD on the headers. Otherwise I use
CairoD.

On 4/26/12, Johannes Pfau  wrote:
> Am Thu, 26 Apr 2012 10:28:52 +0200
> schrieb "James Miller" :
>
>> I am currently writing D bindings for Cairo for submission into
>> Deimos, could somebody please make the repository so I can fork
>> it?
>>
>> Thanks
>>
>> --
>> James Miller
>
> Sounds like you already finished most of the bindings, but this could
> still be useful:
>
> https://github.com/jpf91/cairoD/tree/master/src/cairo/c
>


Re: What to do about default function arguments

2012-04-26 Thread Martin Nowak

   int function(int) opAddrOf() { return &fbody; }


That was wrong, this should be supported through implicit conversion.


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 13:51:17 Nick Sabalausky wrote:
> Also, keep in mind that (unless I'm mistaken) walkLength does *not* return
> the number of "characters" (ie, graphemes), but merely the number of code
> points - which is not the same thing (due to existence of the
> [confusingly-named] "combining characters").

You're not mistaken. Nothing in Phobos (save perhaps some of std.regex's 
internals) deals with graphemes. It all operates on code points, and strings 
are considered to be ranges of code points, not graphemes. So, as far as 
ranges go, walkLength returns the actual length of the range. That's _usually_ 
the number of characters/graphemes as well, but it's certainly not 100% 
correct. We'll need further unicode facilities in Phobos to deal with that 
though, and I doubt that strings will ever change to be treated as ranges of 
graphemes, since that would be incredibly expensive computationally. We have 
enough performance problems with strings as it is. What we'll probably get is 
extra functions to deal with normalization (and probably something to count 
the number of graphemes) and probably a wrapper type that does deal in 
graphemes.

Regardless, you're right about walkLength returning the number of code points 
rather than graphemes, because strings are considered to be ranges of dchar.

- Jonathan M Davis


Re: What to do about default function arguments

2012-04-26 Thread Martin Nowak
On Thu, 26 Apr 2012 06:10:14 +0200, Walter Bright  
 wrote:



On 4/25/2012 8:44 PM, Walter Bright wrote:
The problem centers around name mangling. If two types mangle the same,  
then
they are the same type. But default arguments are not part of the  
mangled

string. Hence the schizophrenic behavior.


One might suggest mangling the default argument into the type. But  
default arguments need not be compile time constants - they are  
evaluated at runtime! Hence the unattractive specter of trying to mangle  
a runtime expression.


import std.stdio;

int readVal()
{
int val;
stdin.readf("%s", &val);
return val;
}

void main()
{
auto dg = (int a=readVal()) => a;
writeln(dg());
}



Stuffing the value into the type is not going to work out when taking the  
address.

I think it would be interesting to transform them to values of a type that
preserves the behavior. This would work for polymorphic lambdas as values  
too.



auto dg = (int a=readVal()) => a;

static struct Lamba
{
  int opCall() { return fbody(readVal()); }
  int opCall(int a) { return fbody(a); }
  int function(int) opAddrOf() { return &fbody; }
  static int fbody(int a) { return a; }
}


auto dg = a => 2 * a;

struct Lambda
{
  auto opCall(Ta)(auto ref Ta a) { return fbody(a); }
  @disable opAddrOf();
  /*static*/ auto fbody(Ta)(Ta a) { return 2 * a; }
}


Re: What to do about default function arguments

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 19:45:55 Joseph Rushton Wakeling wrote:
> On 26/04/12 19:25, Jonathan M Davis wrote:
> > There is an _enormous_ difference between disallowing default arguments in
> > general and disallowing them in function pointers and delegates.
> 
> I think maybe I've misunderstood the problem. Are we talking about default
> values _of the function pointer_ or that get passed to the function pointer?
> 
> i.e. are we talking about,
> 
> int foo(int delegate() dg = &bar) {
> ...
> }
> 
> or about
> 
> int foo(int delegate() dg = &bar) {
> bar(); // assumes default arguments for bar
> }

The second. If you have

int foo(int a = 1)
{
 return a + 3;
}

and you call foo directly without any arguments, then the default argument 
would be inserted. The problem is when you have a pointer to foo. Should calls 
to the pointer use a default argument or not? To do that, they need to be part 
of the type of the function pointer (or delegate).

Personally, I think that it's a very clear and resounding _no_. Default 
arguments are merely syntactic sugar (albeit very useful syntactic sugar) and 
should _not_ affect the type of a function pointer or delegate. A function 
pointer shouldn't care about whatever default arguments its function might 
have, and a pointer to the foo above should have the same type as a pointer to

int bar(int a)
{
 return a - 2;
}
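
To make that concrete, a small sketch of the proposed behavior (not 
necessarily what dmd does today):

int foo(int a = 1) { return a + 3; }
int bar(int a)     { return a - 2; }

void main()
{
    // both pointers would have the identical type int function(int);
    // the "= 1" lives only on foo's declaration and is not available
    // through the pointer
    int function(int) p = &foo;
    int function(int) q = &bar;
    assert(p(1) == 4 && q(1) == -1);
    // p();  // would not compile: the pointer knows nothing about "= 1"
}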

As such, code like

auto foo = (int a = 1) { return a;};

should probably be illegal, since the default argument could never be used. 
Right now, we get very weird behavior due to an attempt to make such things 
work. And I think that it's pretty clear that that was a mistake (though 
obviously this discussion is to gauge what the group as a whole think and 
debate it if necessary).

- Jonathan M Davis


Re: Notice/Warning on narrowStrings .length

2012-04-26 Thread Nick Sabalausky
"James Miller"  wrote in message 
news:qdgacdzxkhmhojqce...@forum.dlang.org...
> I'm writing an introduction/tutorial to using strings in D, paying 
> particular attention to the complexities of UTF-8 and 16. I realised that 
> when you want the number of characters, you normally actually want to use 
> walkLength, not length. Is it reasonable for the compiler to pick this up 
> during semantic analysis and point out this situation?
>
> It's just a thought because a lot of the time, using length will get the 
> right answer, but for the wrong reasons, resulting in lurking bugs. You 
> can always cast to immutable(ubyte)[] or immutable(short)[] if you want to 
> work with the actual bytes anyway.

I find that most of the time I actually *do* want to use length. Don't know 
if that's common, though, or if it's just a reflection of my particular 
use-cases.

Also, keep in mind that (unless I'm mistaken) walkLength does *not* return 
the number of "characters" (ie, graphemes), but merely the number of code 
points - which is not the same thing (due to existence of the 
[confusingly-named] "combining characters").




Re: What to do about default function arguments

2012-04-26 Thread Joseph Rushton Wakeling

On 26/04/12 19:25, Jonathan M Davis wrote:

There is an _enormous_ difference between disallowing default arguments in
general and disallowing them in function pointers and delegates.


I think maybe I've misunderstood the problem.  Are we talking about default 
values _of the function pointer_ or that get passed to the function pointer?


i.e. are we talking about,

int foo(int delegate() dg = &bar) {
...
}

or about

int foo(int delegate() dg = &bar) {
bar();  // assumes default arguments for bar
}

...?


Re: What to do about default function arguments

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 15:09:15 Joseph Rushton Wakeling wrote:
> On 26/04/12 05:44, Walter Bright wrote:
> > But if we make default arguments solely a part of the function
> > declaration, then function pointers (and delegates) cannot have default
> > arguments. (And maybe this isn't a bad thing?)
> 
> I can't see disallowing default arguments as being a good thing. For
> example, instead of,
> 
> void foo(int a, int b = 2)
> {
> ...
> }
> 
> surely I can just put instead
> 
> void foo(int a, int b)
> {
> ...
> }
> 
> void foo(int a)
> {
> foo(a, 2);
> }
> 
> ... and surely I can do something similar for function pointers and
> delegates. So, I can still have default arguments in effect, I just have to
> work more as a programmer, using a less friendly and easy-to-understand
> syntax. That doesn't really seem like a good way to operate unless there's
> an extreme level of complication in getting the compiler to handle the
> situation.

There is an _enormous_ difference between disallowing default arguments in 
general and disallowing them in function pointers and delegates.

Think about how function pointers and delegates get used. They're almost 
always called generically. Something gets passed a function pointer or 
delegate and argument and calls it. The caller isn't going to care about 
default arguments. It expects a specific signature and passes those specific 
arguments to the function/delegate when it calls it. It's not going to be 
calling it with 2 arguments in one case and 3 in another.

No default arguments are involved with function pointers in C/C++, and I've 
never heard of anyone complain about it there. Default arguments are great 
when you're going to be calling a function in a variety of places, and you 
want defaults for some parameters in cases where they're almost always a 
particular value so that you don't have to always type them. But function 
pointers and delegates are generally called in _one_ way, if not in one place, 
so their usage is _very_ different.

- Jonathan M Davis


Re: What to do about default function arguments

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 13:54:47 bearophile wrote:
> The simplest solution is the breaking change of disallowing
> default arguments for function pointers and delegates. This also
> means disallowing taking the pointer/delegate of a function with
> default arguments.

No it doesn't. You could definitely have a pointer to function with default 
arguments. It's just that the pointer doesn't use the default arguments at 
all. The default arguments get inserted at the call site if you explicitly 
call a function and don't pass arguments for those parameters. The function's 
signature is unaffected by the presence of default arguments, so the pointer 
should be unaffected.

What should be disallowed is giving default arguments to function pointers and 
delegate literals. So, stuff like

void main()
{
 int foo(int a = 1)
 {
 return a;
 }
}

should work just fine. The default argument will only get used if foo is called 
directly. However, stuff like

void main()
{
 auto foo = (int a = 1) { return a;};
}

wouldn't work, because you can't call the function except through its pointer. 
As you can never call the function directly, its default argument would never 
be used, and there would be no point to having it.

- Jonathan M Davis


Re: Cairo Deimos bindings

2012-04-26 Thread Johannes Pfau
Am Thu, 26 Apr 2012 10:28:52 +0200
schrieb "James Miller" :

> I am currently writing D bindings for Cairo for submission into 
> Deimos, could somebody please make the repository so I can fork 
> it?
> 
> Thanks
> 
> --
> James Miller

Sounds like you already finished most of the bindings, but this could
still be useful:

https://github.com/jpf91/cairoD/tree/master/src/cairo/c


Re: Random distributions in Phobos

2012-04-26 Thread Jesse Phillips
On Thursday, 26 April 2012 at 11:46:32 UTC, Joseph Rushton 
Wakeling wrote:
OK, thanks.  So to be clear: I should submit my proposed 
changes as a pull request, making sure to include a and should 
expect feedback after about 2 weeks ... ?


Good follow-up; I actually forgot about the three types of 
submissions, yours being the second (bugs being the third).


The review process is really for major additions or changes. 
You are suggesting an expansion of existing functionality.


In which case submitting a pull request before review is 
acceptable; the review is done by the Phobos maintainers instead 
of requiring whole-community input. However, once you have a pull 
request ready, I think it is good to post to the forum to give it 
more visibility.


I also think adding a Bugzilla entry for the enhancement would 
be welcomed (other opinions?). The pull request would then state 
that it closes the bug.


http://d.puremagic.com/issues/

As for response expectations, there can't really be any. There is a 
good chance things will sit in the queue for some time; if it 
takes too long, I don't see asking again as being frowned 
upon.


Re: Random distributions in Phobos

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 13:46:20 Joseph Rushton Wakeling wrote:
> On 26/04/12 06:03, Jesse Phillips wrote:
> > I suppose it would make sense for these to make it into phobos, personally
> > am not familiar with the use case.
> 
> The use case is mostly to do with scientific simulation: if you look at most
> science-oriented languages and libraries (MATLAB/Octave, R, GNU Scientific
> Library, ...) they offer an extensive range of different random number
> distributions.
> 
> SciD would also be an acceptable location for this kind of functionality,
> but going by the example of e.g. Boost.Random it seems appropriate to have
> the basic RNG functionality coupled with extra distributions. It's also
> clear from the std.random documentations that more distributions are
> planned for inclusion.
> > Instead of writing the answer to your question here, I've made changes to
> > the wiki. I think there is a page I'm missing but don't know where it is
> > so maybe someone else will correct it:
> > 
> > http://www.prowiki.org/wiki4d/wiki.cgi?HelpDProgress#ContributingtoPhobos
> 
> OK, thanks. So to be clear: I should submit my proposed changes as a pull
> request, making sure to include a and should expect feedback after about 2
> weeks ... ?
> 
> I ask because I wasn't clear if I'd done the right thing when I submitted a
> pull request on my random sampling functionality. I was expecting to get
> at least an acknowledgement quite quickly, either saying that the code
> would be looked at or highlighting an obvious missing factor (e.g.
> appropriate unittests or benchmarks).

No. The whole "2 weeks" thing is talking about the review process for adding 
major new functionality to Phobos (e.g. adding a new module). Major stuff needs 
to be reviewed and voted in on the newsgroup per the Boost review process (or 
something approximating it anyway). After something has been reviewed and 
voted in, it then goes through the normal pull request process to actually get 
merged into Phobos.

Smaller stuff (e.g. fixing a bug or adding one function) can go through the 
pull 
request process without a review in the newsgroup (the Phobos devs will 
normally point it out if something is major enough to need full review if 
you're not sure).

However, there is no guarantee whatsoever about how quickly a pull request 
will be processed. Sometimes, it's very quick. Other times, a pull request can 
sit there for a while before it gets looked at. The Phobos devs are all 
volunteers working in their own time, and there are only so many of them, so 
when they get to pull requests tends to be very dependent on their personal 
schedules and on what the pull request is for (e.g. if it's for something that 
a particular developer works on regularly or it's very simple, it's much more 
likely to be processed quickly, but if it's more esoteric and/or large and 
complicated, then it's much more likely to take a while).

- Jonathan M Davis


Re: What to do about default function arguments

2012-04-26 Thread Daniel Murphy
"Walter Bright"  wrote in message 
news:jnagar$2d8k$1...@digitalmars.com...
>A subtle but nasty problem - are default arguments part of the type, or 
>part of the declaration?
>
>See http://d.puremagic.com/issues/show_bug.cgi?id=3866
>
> Currently, they are both, which leads to the nasty behavior in the bug 
> report.
>
> The problem centers around name mangling. If two types mangle the same, 
> then they are the same type. But default arguments are not part of the 
> mangled string. Hence the schizophrenic behavior.
>
> But if we make default arguments solely a part of the function 
> declaration, then function pointers (and delegates) cannot have default 
> arguments. (And maybe this isn't a bad thing?)

From what I remember, function pointer parameter names have similar 
problems.  It never made any sense to me to have default parameters or 
parameter names as part of the type. 




Re: Convert a delegate to a function (i.e. make a thunk)

2012-04-26 Thread Daniel Murphy
"Mehrdad"  wrote in message 
news:jn9ops$13bs$1...@digitalmars.com...
> It would be nice if there was a way to convert delegates to 
> functions/thunks, because certain annoying tasks (e.g. 'WndProc's in 
> Windows) would become a heck of a lot easier.
>
> Is there any way to already do this? If not, how about adding a 
> toFunction() method in std.functional?
>

Well, there's this: http://pastebin.com/XwEb2cpm

But it's pre-3797, and probably doesn't work anywhere except win32. 




Re: How can D become adopted at my company?

2012-04-26 Thread Joseph Rushton Wakeling

On 26/04/12 16:59, Don Clugston wrote:

And the only one such limitation of freedom which has ever been identified, in
numerous posts (hundreds!) on this topic, is that the license is not GPL
compatible and therefore cannot be distributed with (say) OS distributions.


Yes, I appreciate I touched on a sore point and one that must have been 
discussed to death.  I wasn't meaning to add to the noise, but your response to 
my original email was so hostile I felt I had to reply at length to clarify.


I personally don't think it's a minor issue that the reference version of D 
can't be included with open source distributions, but I also think there are 
much more pressing immediate issues than this to resolve in the short term.


By the way, there are plenty of non-GPL-compatible licences that have 
traditionally been considered acceptable by open source distributions -- the 
original Mozilla Public Licence and Apache Licence (new versions have since been 
released which ensure compatibility), at least one variant of the permissive 
BSD/MIT licences, and probably others.  It's whether the licence implements the 
"four freedoms" that matters.


Re: What to do about default function arguments

2012-04-26 Thread Sean Kelly
On Apr 25, 2012, at 9:10 PM, Walter Bright  wrote:

> On 4/25/2012 8:44 PM, Walter Bright wrote:
>> The problem centers around name mangling. If two types mangle the same, then
>> they are the same type. But default arguments are not part of the mangled
>> string. Hence the schizophrenic behavior.
> 
> One might suggest mangling the default argument into the type. But default 
> arguments need not be compile time constants - they are evaluated at runtime! 
> Hence the unattractive specter of trying to mangle a runtime expression.

Sounds to me like you just answered your own question :-)

Re: How can D become adopted at my company?

2012-04-26 Thread Don Clugston

On 26/04/12 14:58, Joseph Rushton Wakeling wrote:

On 26/04/12 11:07, Don Clugston wrote:


"open source" is a horrible, duplicitous term. Really what you mean is
"the
license is not GPL compatible".



No, I don't mean "GPL compatible". I'd be perfectly happy for the DMD
backend to be released under a GPL-incompatible free/open source licence
like the CDDL.

The problem is not GPL compatibility but whether sufficient freedoms are
granted to distribute and modify sources.


And the only one such limitation of freedom which has ever been 
identified, in numerous posts (hundreds!) on this topic, is that the 
license is not GPL compatible and therefore cannot be distributed with 
(say) OS distributions.


Everything else is FUD.


Re: Compiling DMD for the iPhone simulator

2012-04-26 Thread Jacob Carlborg

On 2012-04-26 15:23, Michel Fortin wrote:


That might help. Although I'd suspect that all that's really needed is
to specify the simulator's SDK as the system root with a linker flag
(--sysroot=) when linking D code. I'd suggest you try that first.


That's a good idea. Unfortunately I haven't managed to compile druntime 
yet due to the conflicting module names. When I manage to get to the 
linking phase I'll keep this in mind.


--
/Jacob Carlborg


Re: What to do about default function arguments

2012-04-26 Thread H. S. Teoh
On Wed, Apr 25, 2012 at 10:39:01PM -0700, Walter Bright wrote:
> On 4/25/2012 10:29 PM, Ary Manzana wrote:
> >I don't understand the relationship between two delegate types being
> >the same and thus sharing the same implementation for default
> >arguments for *different instances* of a delegate with the same type.
> >
> >Maybe a bug in how it's currently implemented?
> 
> If you call a delegate directly, then the default arguments work. If
> you call it indirectly, the default arguments won't work.

This is even more an argument for *not* including default arguments in
the type.


T

-- 
IBM = I Blame Microsoft


Re: Compiling DMD for the iPhone simulator

2012-04-26 Thread Michel Fortin

On 2012-04-26 11:40:57 +, Jacob Carlborg  said:


On 2012-04-26 12:20, Michel Fortin wrote:


You are assuming those compilers linked to the iOS SDK, but they could
be "cross compilers" in the sense that the compiler is linked to Mac
libraries (just like a normal Mac compiler) but creates executables for
the iOS Simulator platform. (Obviously, the ARM ones are true cross
compilers.)


Yes, exactly. I was hoping I could do the same with DMD.


My suspicion is that you could use the same Mac DMD compiler as long as
all the generated code is linked with the iOS SDK. As far as I know, the
only ABI difference is that the Objective-C runtime for the simulator is
the Modern runtime while the Mac is still using the Legacy runtime for
32-bit. So you can't use the same Objective-C compiler, but beside
Objective-C I'd expect all the generated code to be the same.


I assume I would need change DMD to use the gcc located in the iPhone 
simulator SDK instead of the "regular" one.


That might help. Although I'd suspect that all that's really needed is 
to specify the simulator's SDK as the system root with a linker flag 
(--sysroot=) when linking D code. I'd suggest you try that first.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: What to do about default function arguments

2012-04-26 Thread Steven Schveighoffer
On Thu, 26 Apr 2012 09:08:07 -0400, Steven Schveighoffer  
 wrote:



void main()
{
 auto a = (int x = 1) { return x;};
 pure nothrow @safe int function(int) b = (int x) { return x;};
 pragma(msg, typeof(a).stringof);
 pragma(msg, typeof(b).stringof);
 b = a; // ok
 //a = b; // error

 //b(); // error
}

output:

int function(int x = 1) pure nothrow @safe
int function(int)



Nevermind, I just realized it was ignoring my pure nothrow @safe for the  
declaration.  Moving it after the declaration results in:


void main()
{
auto a = (int x = 1) { return x;};
int function(int) pure nothrow @safe b = (int x) { return x;};
pragma(msg, typeof(a).stringof);
pragma(msg, typeof(b).stringof);
}

output:

int function(int x = 1) pure nothrow @safe
int function(int x = 1) pure nothrow @safe

which clearly mimics the auto behavior.  This is *really* no good, since  
it seems to be ignoring the explicit type that I specified.


IMO, the correct solution is to make the default argument part of the type  
(and don't let it affect things globally!), and make it derived from the  
version without a default arg.  I think Michel Fortin said the same thing.


-Steve


Re: What to do about default function arguments

2012-04-26 Thread Joseph Rushton Wakeling

On 26/04/12 05:44, Walter Bright wrote:

But if we make default arguments solely a part of the function declaration, then
function pointers (and delegates) cannot have default arguments. (And maybe this
isn't a bad thing?)


I can't see disallowing default arguments as being a good thing.  For example, 
instead of,


void foo(int a, int b = 2)
{
...
}

surely I can just put instead

void foo(int a, int b)
{
...
}

void foo(int a)
{
foo(a, 2);
}

... and surely I can do something similar for function pointers and delegates. 
So, I can still have default arguments in effect, I just have to work more as a 
programmer, using a less friendly and easy-to-understand syntax.  That doesn't 
really seem like a good way to operate unless there's an extreme level of 
complication in getting the compiler to handle the situation.


Re: What to do about default function arguments

2012-04-26 Thread Steven Schveighoffer
On Wed, 25 Apr 2012 23:44:07 -0400, Walter Bright  
 wrote:


A subtle but nasty problem - are default arguments part of the type, or  
part of the declaration?


See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both, which leads to the nasty behavior in the bug  
report.


The problem centers around name mangling. If two types mangle the same,  
then they are the same type. But default arguments are not part of the  
mangled string. Hence the schizophrenic behavior.


But if we make default arguments solely a part of the function  
declaration, then function pointers (and delegates) cannot have default  
arguments. (And maybe this isn't a bad thing?)


Some testing (2.059):

void main()
{
auto a = (int x = 1) { return x;};
auto b = (int x) { return x;};
pragma(msg, typeof(a).stringof);
pragma(msg, typeof(b).stringof);
}

output:

int function(int x = 1) pure nothrow @safe
int function(int x = 1) pure nothrow @safe

second pass:

void main()
{
auto a = (int x = 1) { return x;};
pure nothrow @safe int function(int) b = (int x) { return x;};
pragma(msg, typeof(a).stringof);
pragma(msg, typeof(b).stringof);
b = a; // ok
//a = b; // error

//b(); // error
}

output:

int function(int x = 1) pure nothrow @safe
int function(int)


if you ask me, everything looks exactly as I'd expect, except the auto  
type inference of b.  Can this not be fixed?  I don't understand the  
difficulty.


BTW, I didn't know you could have default arguments for  
functions/delegates, it's pretty neat :)


-Steve


Re: What to do about default function arguments

2012-04-26 Thread bearophile

Michel Fortin:

That said, there was some talk about adding support for named 
parameters a year ago.


Good reminder. I think such parts of D shouldn't be designed one
at a time. If you want to face the problem of default arguments,
it's better to think about named arguments too (even if you don't
want to implement them now, to not preclude their good future
implementation).

Bye,
bearophile


export extern (C) void Fun Error

2012-04-26 Thread 拖狗散步

Exported C callback functions:

import core.thread; // needed for Thread.sleep below

alias void function(int id) ConnectedCallBack;
alias void function(int id, void* data, int len) ReadCallBack;

export extern (C) void AddListenersss( ConnectedCallBack 
connectedCallBack=null, ReadCallBack readCallBack=null )

{
int id = 0;
int len = 0;
version(Windows)
{
while(true)
{
Thread.sleep( dur!("seconds")( 2 ) );
id++;
connectedCallBack(id);

len+=id;
Thread.sleep( dur!("seconds")( 1 ) );
readCallBack(id, null, len);
}
}
else
{
//server = new epoll();
}
}

The exported DLL is then called from D, C, and C#.
Everything other than the D caller receives wrong values; for example, in C#:

[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
public delegate void OnConnectedCallBack(int id);

[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
public delegate void OnReceivedCallBack(int sock, IntPtr pData, 
int len);


[DllImport("ShuNetTcp.dll",EntryPoint = "AddListenersss", CharSet 
= CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
public static extern void AddListenersss(OnConnectedCallBack 
connectedCallback, OnReceivedCallBack readCallBack );


using:
m_onConnected = new NetServer.OnConnectedCallBack(OnConnected);
m_onReceived = new NetServer.OnReceivedCallBack(OnRead);
NetServer.AddListenersss( m_onConnected, m_onReceived );

call fun:
private void OnConnected(int id)
{
Console.WriteLine ("连接完成 Id=>" + id.ToString());
}

private void OnRead(int id, IntPtr pdata, int len)
{
}

The received id and len are wrong! :(
Can any gurus help me?


Re: How can D become adopted at my company?

2012-04-26 Thread Joseph Rushton Wakeling

On 26/04/12 11:07, Don Clugston wrote:


"open source" is a horrible, duplicitous term. Really what you mean is "the
license is not GPL compatible".



No, I don't mean "GPL compatible".  I'd be perfectly happy for the DMD backend 
to be released under a GPL-incompatible free/open source licence like the CDDL.


The problem is not GPL compatibility but whether sufficient freedoms are granted 
to distribute and modify sources.  That has a knockon impact on the ability of 
3rd parties to package and distribute the software, to patch it without 
necessarily going via upstream, etc. etc., all of which affects the degree to 
which others can easily use the language.



Based on my understanding of the legal situation with Symantec, the backend
CANNOT become GPL compatible. Stop using the word "still", it will NEVER happen.


Please understand that I'm not suggesting any bad faith on the part of D's 
developers.  Walter's good intentions are clear in the strong support he's given 
to GDC and other freely-licensed compilers.


All I'm suggesting is that being free software (a somewhat better-defined term) 
was a key factor in some languages gaining popularity without corporate backing, 
and that the non-free nature of the DMD backend may have prevented D from 
enjoying this potential source of support.


On 26/04/12 11:27, Jonathan M Davis wrote:

And it really doesn't need to. I honestly don't understand why it's an issue
at all other than people completely misunderstanding the situation or being
the types of folks who think that anything which isn't completely and totally
open is evil.

Whether the backend is open or not has _zero_ impact on your ability to use
it. The source is freely available, so you can look at and see what it does.
You can even submit pull requests for it. Yes, there are some limitations on
you going and doing  whatever you want with the source, but so what? There's
_nothing_ impeding your ability to use it to compile programs. And the front-
end - which is really where D itself is - _is_ under the GPL.


You misunderstand my point.  I'm not saying anyone is evil; I'm simply pointing 
out that the licensing constraints prevent various kinds of 3rd party 
distribution and engagement that could be useful in spreading awareness and use 
of the language.  That _does_ have an impact on use, in terms of constraining 
the development of 3rd-party support and infrastructure.



Not to mention, if really want a "fully open" D compiler, there's always gdc
and ldc, so you there _are_ alternatives. The fact that dmd isn't really
doesn't affect much except for the people whom are overzealous about "free
software."


Yes, but GDC and LDC both (for now) lag behind DMD in terms of functionality -- 
I was not able to compile my updates to Phobos using GDC -- and it's almost 
inevitable that they will always have to play catch-up, even though the impact 
of that will lessen over time.  That's why I spoke about the "reference 
implementation" of the language: D2 has been available for quite some time now, 
but it's only last Autumn that a D2 compiler landed in my Linux distro.



I think that the "openness" of dmd being an issue is purely  a matter of
misunderstandings and FUD. And if Walter _could_ make the backend GPL, he may
very well have done so ages ago. But he can't, so there's no point in
complaining about it - especially since it doesn't impede your ability to use
dmd.


To an extent I agree with you.  The good intentions of Walter and the other D 
developers are clear, it's always been apparent that there will be fully open 
source compilers for the language, etc. etc.; I wouldn't be here if I wasn't 
happy to work with DMD under its given licence terms.  But it's not FUD to say 
that the licensing does make more difficult certain kinds of engagement that 
have been very helpful for other languages, such as inclusion in Linux distros 
and BSD's or other software collections -- and that has a further impact in 
those suppliers' willingness or ability to ship other software written in D.


It's also fair to say that if the licensing was different, that would remove an 
entire source of potential FUD.


Again, I'm not saying that anyone is evil, that I find the situation personally 
unacceptable or that I don't understand the reasons why things are as they are. 
 I just made the point that _being_ free/open source software was probably an 
important factor in the success of a number of now-popular languages that didn't 
originally enjoy corporate support, and that the licensing of the DMD backend 
prevents it from enjoying some of those avenues to success.


 and I _want_ to see that success, because I think D deserves it.

Best wishes,

-- Joe


Re: Is it possible to build DMD using Windows SDK?

2012-04-26 Thread Andre Tampubolon
Rainer Schuetze  wrote:
> On 4/24/2012 6:43 PM, David Nadlinger wrote:
>> On Tuesday, 24 April 2012 at 13:47:30 UTC, Andre Tampubolon wrote:
>>> Any suggestions?
>> 
>> In case he doesn't read your message here anyway, you might want to ping
>> Rainer Schuetze directly, as he is the one who worked on VC support.
>> 
>> David
> 
> 
> Unfortunately some changes to the makefile have been reverted, but I
> don't know why. This is the version that should work:
> 
> https://github.com/D-Programming-Language/dmd/blob/965d831df554fe14c793ce0d6a1dc9f0b2956911/src/win32.mak

But that one is still using dmc, right? I tried to use "CC=cl" (of course
MS' cl), and got a bunch of errors.


Re: Cross module version specs

2012-04-26 Thread Steven Schveighoffer

On Thu, 26 Apr 2012 06:32:58 -0400, James Miller  wrote:


On Thursday, 26 April 2012 at 10:20:37 UTC, Jonathan M Davis wrote:

On Thursday, April 26, 2012 12:09:19 James Miller wrote:

which pretty much makes them completely useless unless
you only use the built-in versions.


That's not true at all. It just means that versions are either useful  
for
something within a module or they're intended for your program as a  
whole and

passed on the command line (e.g. StdDdoc is used by Phobos, and it's not
standard at all; the makefile adds it to the list of compiler flags).  
But yes,
it's true that if you want to define a version in one module which  
affects

another, you can't do it.


Is there any reason for that limitation? Seems like an arbitrary limit  
to me.


The library I am binding to uses ifdefs to let the compiler see the  
appropriate declarations in the header files. It would be nice in  
general for D to be able to mimic that capability, as it means you can  
have a "configuration" file with a list of specs that can be generated  
at build-time by something like autoconf.


No, it would not be nice, it would be horrible.  C's preprocessor is one  
of the main reasons I sought out something like D.  The fact that you can  
include files in a different order and get a completely different result  
is not conducive to understanding code or keeping code sane.


The correct thing to use for something like this is enums and static ifs.  
They work because enums are fully qualified within the module in which they  
are defined, and you can't define and use the same unqualified enums in  
multiple places.
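
For instance, a build-generated configuration module can expose enums that  
other modules test with static if. A minimal sketch (the file, module and  
enum names are hypothetical):

config.d:
// could be generated at build time, e.g. by a configure step
enum bool hasFeatureX = true;

binding.d:
import config;

static if (hasFeatureX)
    enum featureDescription = "built with feature X";
else
    enum featureDescription = "built without feature X";

Because hasFeatureX is an ordinary symbol owned by config, a clash with  
another module's definition is a compile error rather than silent cross-talk.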


I'll give you some examples of why letting versions cross module boundaries  
would be a mess:

module1.d:
version = abc;

module2.d:
version(abc) int x;
else  double x;

module3.d:
import module1;
import module2;

pragma(msg, typeof(x).stringof); // int or double?

module4.d:
import module3;

version(abc) int y;
else double y;

pragma(msg, typeof(y).stringof); // int or double?


Now, what happens?  Should module2 be affected by module1's versions?   
What about module4?  What if module4 doesn't know that module3 indirectly  
declares abc as a version?  What if module3 didn't import module1 at the  
time module4 was written, but added it later?


These kinds of decoupled effects are what kills me when I ever read a  
heavily #ifdef'd header file.


versions should be defined for the *entire program*, not just for certain  
files.  And if they are defined just for certain files, define them in the  
file itself.


-Steve


Re: What to do about default function arguments

2012-04-26 Thread bearophile

Walter:
A subtle but nasty problem - are default arguments part of the 
type, or part of the declaration?


I've been waiting for years to see you finally face this problem 
too :-) The current situation is not acceptable, so some change 
is required, probably a small breaking change.


The simplest solution is the breaking change of disallowing 
default arguments for function pointers and delegates. This also 
means disallowing taking the pointer/delegate of a function with 
default arguments. The advantage of this solution is its 
simplicity, for the compiler, the D programmer, and the D 
newbie who has to learn D.
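
For instance, under that first option the following kind of code -- taking 
the address of a function that declares a default argument -- would become 
a compile error (a hedged sketch; the function name is hypothetical):

int f(int x, int y = 5) { return x + y; }

void main()
{
    auto fp = &f;   // option 1 would reject this line, because f
                    // declares a default argument
    fp(1, 5);       // fine either way: all arguments supplied explicitly
}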


A second alternative is to put the default arguments inside the 
function's mangled name and, to avoid troubles, allow this only 
for POD data known at compile time (throwing a compile-time 
error in all other cases). This has the advantage that common 
default arguments like true, 15 or 2.5 are accepted, so it gives 
some flexibility. One disadvantage is that you need to add a 
special case to D, but it's not a nasty special case; it's a 
clean one.


A third possible way is to put the default arguments inside the 
delegate/function itself, as in Python, so default arguments for 
delegates/functions have a different semantics. Then add a hidden 
extra field to such delegates/functions: a ubyte that tells the 
function how many arguments were actually supplied by the caller 
(so the function can later use this field to know how many 
arguments to replace with the default ones). This has some 
disadvantages, because you can't retrofit already compiled 
modules, so you can't get the function pointer of a function that 
doesn't have this hidden argument. The advantage is that the 
semantics is clean, and it's quite flexible.
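
Roughly, the third idea corresponds to something like this hand-written 
sketch (not an actual language feature, and not how D lowers calls today; 
the names are hypothetical):

// the callee is told how many arguments the caller really supplied
// and fills in defaults for the rest
int add(ubyte argsGiven, int a, int b)
{
    if (argsGiven < 2)
        b = 5;                   // default value for b
    return a + b;
}

void main()
{
    assert(add(2, 1, 9) == 10);  // caller supplied both a and b
    assert(add(1, 1, 0) == 6);   // caller supplied only a; b falls back to 5
}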


Bye,
bearophile


Re: Random distributions in Phobos

2012-04-26 Thread Joseph Rushton Wakeling

On 26/04/12 06:03, Jesse Phillips wrote:

I suppose it would make sense for these to make it into phobos; personally I am
not familiar with the use case.


The use case is mostly to do with scientific simulation: if you look at most 
science-oriented languages and libraries (MATLAB/Octave, R, GNU Scientific 
Library, ...) they offer an extensive range of different random number 
distributions.


SciD would also be an acceptable location for this kind of functionality, but 
going by the example of e.g. Boost.Random it seems appropriate to have the basic 
RNG functionality coupled with extra distributions.  It's also clear from the 
std.random documentation that more distributions are planned for inclusion.



Instead of writing the answer to your question here, I've made changes to the
wiki. I think there is a page I'm missing but don't know where it is so maybe
someone else will correct it:

http://www.prowiki.org/wiki4d/wiki.cgi?HelpDProgress#ContributingtoPhobos


OK, thanks.  So to be clear: I should submit my proposed changes as a pull 
request, making sure to include the appropriate supporting material, and should 
expect feedback after about 2 weeks ... ?


I ask because I wasn't clear if I'd done the right thing when I submitted a pull 
request on my random sampling functionality.  I was expecting to get at least an 
acknowledgement quite quickly, either saying that the code would be looked at or 
highlighting an obvious missing factor (e.g. appropriate unittests or benchmarks).


Thanks & best wishes,

-- Joe


Re: Can we kill the D calling convention already?

2012-04-26 Thread Alex Rønne Petersen

On 26-04-2012 13:37, Kagamin wrote:

On Wednesday, 25 April 2012 at 14:32:13 UTC, Alex Rønne Petersen wrote:

You're missing the point. D is providing (or trying to provide) a
standard inline assembler, but calling conventions are not
standardized enough for it to be useful across compilers. If you're
writing inline assembly because you *have* to, you don't just "version
it out", you have to write different logic for different compilers,
which is a maintenance nightmare.


Don't implement complex logic in assembly, extract it to D or C code.


Look, your argument is just plain invalid. Please reread what I said: 
"[...] If you're writing inline assembly because you *have* to [...]". 
Notice the emphasis. And even if we disregarded that, the language 
boasts a standardized inline assembler, and therefore needs to actually 
make it sane to use it. As it stands, we're no better than C and C++, 
yet we claim we are.


--
- Alex


Re: Compiling DMD for the iPhone simulator

2012-04-26 Thread Jacob Carlborg

On 2012-04-26 12:20, Michel Fortin wrote:


You are assuming those compilers linked to the iOS SDK, but they could
be "cross compilers" in the sense that the compiler is linked to Mac
libraries (just like a normal Mac compiler) but creates executables for
the iOS Simulator platform. (Obviously, the ARM ones are true cross
compilers.)


Yes, exactly. I was hoping I could do the same with DMD.


My suspicion is that you could use the same Mac DMD compiler as long as
all the generated code is linked with the iOS SDK. As far as I know, the
only ABI difference is that the Objective-C runtime for the simulator is
the Modern runtime while the Mac is still using the Legacy runtime for
32-bit. So you can't use the same Objective-C compiler, but beside
Objective-C I'd expect all the generated code to be the same.


I assume I would need to change DMD to use the gcc located in the iPhone 
simulator SDK instead of the "regular" one.


--
/Jacob Carlborg


Re: Can we kill the D calling convention already?

2012-04-26 Thread Kagamin
On Wednesday, 25 April 2012 at 14:32:13 UTC, Alex Rønne Petersen 
wrote:
You're missing the point. D is providing (or trying to provide) 
a standard inline assembler, but calling conventions are not 
standardized enough for it to be useful across compilers. If 
you're writing inline assembly because you *have* to, you don't 
just "version it out", you have to write different logic for 
different compilers, which is a maintenance nightmare.


Don't implement complex logic in assembly, extract it to D or C 
code.


Re: MPI bindings revisited

2012-04-26 Thread Dejan Lekic

On Thursday, 26 April 2012 at 11:19:50 UTC, dominic jones wrote:

Hello,

A while ago a thread was started on implementing MPI bindings 
for D (see 
http://forum.dlang.org/thread/dnjm6k$145u$1...@digitaldaemon.com | 
Stewart Gordon; December 12, 2005; Partial translation of MPI 
headers; digitalmars.D.announce)


I downloaded the bindings (mpi.tar.gz) and tried to compile it, 
but I had no success. I am too incompetent to get it working. 
May someone have a look into it?


I (and probably many others involved in massive numerical 
computation) would find this binding very useful. Once working, 
it seems like it would fit well in Deimos.


Thank you,
Dominic Jones


Judging by the year (2005), my first guess is that the binding 
was written for D1. Many things have changed in the last 7 
years; D is very different from back then... That code needs to 
be changed quite a bit to work.


MPI bindings revisited

2012-04-26 Thread dominic jones

Hello,

A while ago a thread was started on implementing MPI bindings for 
D (see 
http://forum.dlang.org/thread/dnjm6k$145u$1...@digitaldaemon.com | 
Stewart Gordon; December 12, 2005; Partial translation of MPI 
headers; digitalmars.D.announce)


I downloaded the bindings (mpi.tar.gz) and tried to compile it, 
but I had no success. I am too incompetent to get it working. May 
someone have a look into it?


I (and probably many others involved in massive numerical 
computation) would find this binding very useful. Once working, 
it seems like it would fit well in Deimos.


Thank you,
Dominic Jones


Re: This shouldn't happen

2012-04-26 Thread bearophile

Nick Sabalausky:

Yea, I've got no problem with both (other than sometimes the 
fully-unaliased
one can be really, really long.) But at the very least, the 
type *as used*
needs to be shown. The unaliased form isn't bad to have too, 
but it's

typically of lesser importance..


See:
http://d.puremagic.com/issues/show_bug.cgi?id=5004

Bye,
bearophile


Re: What to do about default function arguments

2012-04-26 Thread Michel Fortin

On 2012-04-26 03:44:07 +, Walter Bright  said:

A subtle but nasty problem - are default arguments part of the type, or 
part of the declaration?


See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both, which leads to the nasty behavior in the bug report.

The problem centers around name mangling. If two types mangle the same, 
then they are the same type. But default arguments are not part of the 
mangled string. Hence the schizophrenic behavior.


But if we make default arguments solely a part of the function 
declaration, then function pointers (and delegates) cannot have default 
arguments. (And maybe this isn't a bad thing?)


Assuming we want to allow it, all you need is to treat the type having 
the default parameters as a specialization of the type that has none. 
In other words, the type with the default arguments is implicitly 
castable to the type without.


Should it affect name mangling? Yes and no. It should affect name 
mangling in the compiler since you're using the mangled name to check 
for equality between types. But I think it should not affect name 
mangling for the generated code: the base type without the default 
arguments should give its name to the emitted symbols so that changing 
the default argument does not change the ABI.


But is it desirable? I'm not convinced. I don't really see the point. 
It looks like a poor substitute for overloading to me.


That said, there was some talk about adding support for named 
parameters a year ago. For named parameters, I think it'd make sense to 
have parameter names be part of the type, and little harm would result 
from adding default parameters into the mix as well. As suggested above, it 
should be implicitly castable to a base type without parameter names or 
default values. But for default parameters alone, I don't feel the 
complication is justified.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: What to do about default function arguments

2012-04-26 Thread deadalnix

Le 26/04/2012 05:44, Walter Bright a écrit :

A subtle but nasty problem - are default arguments part of the type, or
part of the declaration?

See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both, which leads to the nasty behavior in the bug
report.

The problem centers around name mangling. If two types mangle the same,
then they are the same type. But default arguments are not part of the
mangled string. Hence the schizophrenic behavior.

But if we make default arguments solely a part of the function
declaration, then function pointers (and delegates) cannot have default
arguments. (And maybe this isn't a bad thing?)


Maybe the problem is that types are considered to be the same when the 
mangling is the same. The default parameter is part of the type, isn't 
mangled (or it would be hell on earth), and is covariant with the type 
with no default.


Re: What to do about default function arguments

2012-04-26 Thread Trass3r
I've always thought of default arguments as plain syntactic sugar, so  
for void f(int i, int j=5), f(1) is simply transformed to f(1,5) and the  
rest is the same.


Re: What to do about default function arguments

2012-04-26 Thread Don Clugston

On 26/04/12 12:11, Timon Gehr wrote:

On 04/26/2012 11:46 AM, Don Clugston wrote:

On 26/04/12 11:28, Timon Gehr wrote:

On 04/26/2012 10:51 AM, Don Clugston wrote:

On 26/04/12 05:44, Walter Bright wrote:

A subtle but nasty problem - are default arguments part of the
type, or
part of the declaration?

See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both, which leads to the nasty behavior in the bug
report.

The problem centers around name mangling. If two types mangle the
same,
then they are the same type. But default arguments are not part of the
mangled string. Hence the schizophrenic behavior.

But if we make default arguments solely a part of the function
declaration, then function pointers (and delegates) cannot have
default
arguments. (And maybe this isn't a bad thing?)


I think it is a mistake to allow default arguments in function pointers
and delegates (it's OK for delegate literals, there you have the
declaration).


The parenthesised part is in conflict with your other statement.


No it doesn't. A default argument in a delegate literal is part of the
declaration, not part of the type.


If types cannot specify default arguments, then those will be thrown
away right away, because what is later called is based on the type of
the delegate and not on the implicit function declaration that has its
address taken. What is the point of allowing it if it cannot be used?


Fair point. It could be used in the case where it is called at the point 
of declaration (I do that a fair bit), but it's pretty much useless 
because it is clearer code to put the default parameter in the call.


int m = (int a, int b = 3){ return a+b;}(7);
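// evaluates to 10: b falls back to its default of 3 because the
// literal is called right where it is declared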

Point conceded. So default arguments should be disallowed in delegate 
literals as well.


Re: Cross module version specs

2012-04-26 Thread James Miller
On Thursday, 26 April 2012 at 10:20:37 UTC, Jonathan M Davis 
wrote:

On Thursday, April 26, 2012 12:09:19 James Miller wrote:

which pretty much makes them completely useless unless
you only use the built-in versions.


That's not true at all. It just means that versions are either 
useful for something within a module or they're intended for 
your program as a whole and passed on the command line (e.g. 
StdDdoc is used by Phobos, and it's not standard at all; the 
makefile adds it to the list of compiler flags). But yes, it's 
true that if you want to define a version in one module which 
affects another, you can't do it.


Is there any reason for that limitation? Seems like an arbitrary 
limit to me.


The library I am binding to uses ifdefs to let the compiler see 
the appropriate declarations in the header files. It would be 
nice in general for D to be able to mimic that capability, as it 
means you can have a "configuration" file with a list of specs 
that can be generated at build-time by something like autoconf.


--
James Miller


Re: Compiling DMD for the iPhone simulator

2012-04-26 Thread Michel Fortin

On 2012-04-26 06:56:02 +, Jacob Carlborg  said:

It turned out to be a problem with DMD. It had declared a type as 
"unsigned int" instead of "size_t". stat.st_size appears to be 64bit in 
the iPhone simulator SDK.


:-)

Then I got a new problem. When I compile druntime it complains about 
conflicting module names. Somehow it seems the package name disappears.


:-(


Are you running it straight from the command line? I suspect libraries
in the simulator SDK need the simulator's environment to work, which is
a pile of undocumented things.


Yes, just as you can, I assume, with the compilers already present in 
/iPhoneSimulator.platform.


You are assuming those compilers linked to the iOS SDK, but they could 
be "cross compilers" in the sense that the compiler is linked to Mac 
libraries (just like a normal Mac compiler) but creates executables for 
the iOS Simulator platform. (Obviously, the ARM ones are true cross 
compilers.)


My suspicion is that you could use the same Mac DMD compiler as long as 
all the generated code is linked with the iOS SDK. As far as I know, 
the only ABI difference is that the Objective-C runtime for the 
simulator is the Modern runtime while the Mac is still using the Legacy 
runtime for 32-bit. So you can't use the same Objective-C compiler, but 
beside Objective-C I'd expect all the generated code to be the same.



I'm also quite curious about what you're trying to achieve.


I was planning to try and run a D program in the iPhone simulator. As a 
first step, I thought it would be much easier than running it on the 
real device. The simulator runs 32-bit code, not ARM.


If we eventually can run D programs on iOS devices, I'm pretty sure we'll 
also want to run them on the simulator.


Can't hurt to try and see what it takes :)


Indeed, it's the first logical step.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Cross module version specs

2012-04-26 Thread Jonathan M Davis
On Thursday, April 26, 2012 12:09:19 James Miller wrote:
> All I can think is that version specifiers aren't carried across
> modules

They can't be. The only time that versions apply to your entire program is if 
they're built-in or they're specified on the command line.

> which pretty much makes them completely useless unless
> you only use the built-in versions.

That's not true at all. It just means that versions are either useful for 
something within a module or they're intended for your program as a whole and 
passed on the command line (e.g. StdDdoc is used by Phobos, and it's not 
standard at all; the makefile adds it to the list of compiler flags). But yes, 
it's true that if you want to define a version in one module which affects 
another, you can't do it.

The closest that you would be able to do would be something along the lines of 
having a function in the imported module which returned the version statements 
as a string which the module doing the importing mixed in. Another option 
would be to just use static ifs, since they'd be affected by whatever variables 
or enums were defined in the imported modules. e.g. 

static if(is(myVersionEnum1))
{
}
else static if(is(myVersionEnum2))
{
}
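
A minimal, untested sketch of the first (string mixin) idea, with 
hypothetical module, function, and version names:

config.d:
version = UseFeatureX;

// CTFE-able function that hands the version specifications to importers
string versionSpecs()
{
    string s;
    version(UseFeatureX) s ~= "version = UseFeatureX;\n";
    return s;
}

client.d:
import config;
mixin(versionSpecs());   // re-declares the versions in this module

version(UseFeatureX)
    enum featureEnabled = true;
else
    enum featureEnabled = false;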

- Jonathan M Davis


Re: What to do about default function arguments

2012-04-26 Thread Timon Gehr

On 04/26/2012 11:54 AM, Don Clugston wrote:

On 26/04/12 11:21, Timon Gehr wrote:

On 04/26/2012 09:54 AM, Walter Bright wrote:

On 4/26/2012 12:47 AM, Timon Gehr wrote:

On 04/26/2012 05:44 AM, Walter Bright wrote:

A subtle but nasty problem - are default arguments part of the
type, or
part of the declaration?

See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both,


That is how it should be.


which leads to the nasty behavior in the bug report.



It contributes, but it is not the main cause.


The problem centers around name mangling. If two types mangle the
same,
then they are the same type.


Then they are equal types.


This is simply not tenable. What defines when they are "equal" types and
when they are "not equal"?


This is a matter of terminology. For example, for 'equal' just exclude
the default parameters from the comparison. For 'the same' include
default parameters in the comparison. (therefore, 'the same' implies
'equal')


The language doesn't have the concepts of "same" and "equal" with
respect to types. There is "equal" and "implicitly converts to", but
that's not quite the same.


The schizophrenic behavior occurs because the types cross-talk. Are
mangled
names kept unique in the compiler or what is the implementation issue
exactly?


It's a conceptual issue. When is one type the same as another, and when
is it not?



void function(int) is the same as void function(int) and both are equal
void function(int=2) is not the same as void function(int=3), but both
are equal.


The question was *when* are they same, not how you name them.


I think I have answered that. Anyway, probably it is indeed a good idea 
to get rid of default parameters for delegates and function pointers.
The issues are certainly resolvable but the complexity of the resulting 
feature could not be justified.


Cross module version specs

2012-04-26 Thread James Miller
I'm trying to write a binding that has conditional sections where 
some features have to be enabled. I am using version statements 
for this.


I have a list of version specs in a module by themselves. When I 
try to compile another module that imports this module, it acts 
as if the version was never specified. I have tried wrapping the 
specs inside a version block, then setting that from the command 
line, but that doesn't work. Setting the version manually works 
as expected. I have also tried including the versions file on 
the command line.
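
In other words, something like this minimal sketch (the file and version 
names are hypothetical) never enables the conditional declarations:

versions.d:
version = HasFoo;

binding.d:
import versions;

// The version set in versions.d is not visible here, so the else
// branch is what actually gets compiled.
version(HasFoo)
    enum hasFoo = true;
else
    enum hasFoo = false;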


All I can think is that version specifiers aren't carried across 
modules, which pretty much makes them completely useless unless 
you only use the built-in versions.


--
James Miller


Re: What to do about default function arguments

2012-04-26 Thread Timon Gehr

On 04/26/2012 11:46 AM, Don Clugston wrote:

On 26/04/12 11:28, Timon Gehr wrote:

On 04/26/2012 10:51 AM, Don Clugston wrote:

On 26/04/12 05:44, Walter Bright wrote:

A subtle but nasty problem - are default arguments part of the type, or
part of the declaration?

See http://d.puremagic.com/issues/show_bug.cgi?id=3866

Currently, they are both, which leads to the nasty behavior in the bug
report.

The problem centers around name mangling. If two types mangle the same,
then they are the same type. But default arguments are not part of the
mangled string. Hence the schizophrenic behavior.

But if we make default arguments solely a part of the function
declaration, then function pointers (and delegates) cannot have default
arguments. (And maybe this isn't a bad thing?)


I think it is a mistake to allow default arguments in function pointers
and delegates (it's OK for delegate literals, there you have the
declaration).


The parenthesised part is in conflict with your other statement.


No it doesn't. A default argument in a delegate literal is part of the
declaration, not part of the type.


If types cannot specify default arguments, then those will be thrown 
away right away, because what is later called is based on the type of 
the delegate and not on the implicit function declaration that has its 
address taken. What is the point of allowing it if it cannot be used?


Re: How can D become adopted at my company?

2012-04-26 Thread Jacob Carlborg

On 2012-04-26 11:07, Don Clugston wrote:


Based on my understanding of the legal situation with Symantec, the
backend CANNOT become GPL compatible. Stop using the word "still", it
will NEVER happen.


Theoretically someone could:

A. Replace all parts of the backend that Symantec can't/won't license as 
GPL (don't know if that is the whole backend or not)


B. Buy the backend from Symantec

--
/Jacob Carlborg

