Re: Free?

2011-10-26 Thread Russel Winder
On Wed, 2011-10-26 at 01:32 -0400, Nick Sabalausky wrote:
[ . . . ]
> I think you misunderstood my point. What I was trying to say is: If 
> someone's going to worry about others profiting from their free code, why'd 
> they even make it free in the first place?

I have no problem with companies making money from FOSS by providing
value added service or proprietary components, as long as they are
clearly seen to be "giving back" to the FOSS community as a "quid pro
quo".  What I object to is companies taking FOSS software, making a
proprietary product from it, getting updates from the FOSS community for
free, restricting product purchasers regarding the FOSS software and
generally treating the FOSS community as a pool of slave labour to be
milked.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Is This a Bug

2011-10-26 Thread Gor Gyolchanyan
I see. But is there any practical advantage of a function being pure?
I mean, besides an optimization hint for the compiler, of course.

On Tue, Oct 25, 2011 at 10:41 PM, Jonathan M Davis  wrote:
> On Tuesday, October 25, 2011 10:34 Gor Gyolchanyan wrote:
>> I thought pure functions can't modify their parameters (along with any
>> other non-local variables).
>> if the pure function can't modify anything non-local, why should it
>> care whether it's getting called from a shared context or not?
>
> No. The _only_ direct effects that pure has on a function are that it can't
> access any module-level or static variables which can ever be mutated over the
> course of the program (so, if they're immutable or if they're const value
> types, then they can be used, but otherwise not) and that it can't call impure
> functions. Previously, it was required that all of the parameters to a pure
> function be either immutable or implicitly convertible to immutable (and I
> believe that that's what TDPL states), which would make it impossible to alter
> the function's arguments, but the result was that pure was too restrictive to
> be of much use. So, it was changed.
>
> Now, any function which doesn't access any module-level or static variables
> which can be modified during the course of the program and which doesn't call
> any impure functions can be pure, but unlike before, purity is not enough for
> a function call to be optimized out when it's called multiple times within an
> expression. It can only be optimized out when the compiler can guarantee that
> none of the function's arguments can be altered by that pure function, which
> currently means that only functions whose arguments are all either immutable
> or implicitly convertible to immutable can be optimized in that manner - just
> like it was before. However, this makes it possible to have far more functions
> be pure, so pure functions which do have parameters which are immutable or
> implicitly convertible to immutable can call many more functions and do much
> more. Optimizing out calls based on purity is therefore purely a compiler
> optimization, applied when the compiler can prove it safe, rather than
> something guaranteed for every pure function.
>
> - Jonathan M Davis
>


Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
The best implementation of queue IMO is using a doubly-linked list,
which is missing from Phobos.
I wanted to implement a thread-safe queue too, but I failed due to the
lack of appropriate lists.

On Wed, Oct 26, 2011 at 6:38 AM, J Arrizza  wrote:
> I need a Queue (put on the front and pop from the back).  The best I can
> come up with so far is:
>     Array!string list;
>     list.insertBack("a1");
>     list.insertBefore(list.opSlice(0,1), "a2");
>     list.insertBefore(list.opSlice(0,1), "a3");
>     list.insertBefore(list.opSlice(0,1), "a4");
>     list.insertBefore(list.opSlice(0,1), "a5");
>     while (!list.empty())
>     {
>         string s = list.back();
>         list.removeBack();
>         writeln("x: ", s);
>     }
> Having to insertBack when the list is empty and insertBefore with the range
> seems strange to me. Is there a better way?
> John
>


Re: Own type for null?

2011-10-26 Thread Gor Gyolchanyan
I agree. Null is a very common special-case value and overloading may
be necessary based on that special-case value. Currently doing so
means accepting any kind of typeless pointer (which is certainly not
desirable).

On Wed, Oct 26, 2011 at 9:49 AM, Benjamin Thaut  wrote:
> I recently tried to replace the deprecated overloading of new and delete and
> came across a serious issue. You cannot use std.conv.emplace with null. If
> you pass null to it, null loses its implicit casting capabilities and just
> becomes a void*.
> This issue pretty much exists with every template. As soon as you pass null
> to a template (compile time) information gets lost.
>
> Besides fixing std.conv.emplace it could be really handy to be able to check
> for a null-type at compile time for example with non-nullable types.
>
> There is already an enhancement request in bugzilla from January, but it
> didn't get much attention yet:
> http://d.puremagic.com/issues/show_bug.cgi?id=5416
>
> Would this be a worthwhile improvement for the language?
> --
> Kind Regards
> Benjamin Thaut
>
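A minimal sketch of the problem being discussed (my illustration; the behavior is as the thread describes it, from the era when null's type was effectively void*):

```d
class C {}

void direct(C c) {}            // fine to call with a plain null

void viaTemplate(T)(T value)
{
    // Once null's type is deduced as a typeless pointer, `value` can no
    // longer implicitly convert back to C (or any other reference type).
    static if (is(T == C)) {}  // never true when called with a bare null
}

void main()
{
    direct(null);       // OK: null implicitly converts to C
    viaTemplate(null);  // T becomes the typeless pointer complained about above
}
```

(Later versions of D did add a distinct `typeof(null)` type, which is essentially what the enhancement request asks for.)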


Re: Is This a Bug

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 11:15:20 Gor Gyolchanyan wrote:
> I see. But is there any practical advantage of a function being pure?
> I mean, besides an optimization hint for the compiler, of course.

1. You know that it doesn't access global variables, which is at minimum an 
advantage as far as understanding the code goes.

2. Under some situations, pure allows the compiler to know that the return 
value has no aliasing with the function's arguments, so the return value of 
the function can be implicitly converted to immutable.

3. Aside from the potential optimization of removing additional calls to the 
same function with the same arguments within a statement, the combination of 
pure and const on member variables gives stronger guarantees than either alone 
could, potentially further aiding compiler optimizations.

There are probably others, but that's what comes to mind at the moment. 
Primarily though, it comes down to being able to guarantee a certain level of 
encapsulation (which makes it easier to reason and make guarantees about your 
program) and enables the compiler to better optimize code.

- Jonathan M Davis
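For point 2, a small sketch (mine, not from the thread): a strongly pure function's freshly allocated result cannot alias its arguments, so it can be converted to immutable implicitly.

```d
int[] triple(int x) pure
{
    return [x, x, x]; // freshly allocated, so the result is provably unique
}

void main()
{
    immutable int[] a = triple(14); // OK: no mutable aliasing can exist
    assert(a == [14, 14, 14]);
}
```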


Re: Free?

2011-10-26 Thread Kagamin
Nick Sabalausky Wrote:

> There's already lots of 
> zlib/MIT/etc software authors out there, how many of them have gotten ripped 
> off from money that would have otherwise ended up in their pocket?

Oracle sells zlib database compression for $11500 per processor.


Re: Free?

2011-10-26 Thread Kagamin
Nick Sabalausky Wrote:

> I think you misunderstood my point. What I was trying to say is: If 
> someone's going to worry about others profiting from their free code, why'd 
> they even make it free in the first place?

Maybe they're commies. Ask Stallman.


Re: Early std.crypto

2011-10-26 Thread Dmitry Olshansky

On 26.10.2011 10:28, Steve Teale wrote:


An easy test is that if the interface takes a T[] as input, consider a
range instead. Ditto for output. If an interface takes a File as input,
it's a red flag that something is wrong.


I have a question about ranges that occurred to me when I was composing a
MySQL result set into a random access range.

To do that I have to provide the save capability defined by

R r1;
R r2 = r1.save;

Would it harm the use of ranges as elements of operation sequences if the
type of the entity that got saved were not the same as the original range
provider type? Something like:

R r1;
S r2 = r1.save;
r1.restore(r2);


Yay! Range with 'restore' strikes again.
Seriously, the last time ranges were discussed I noticed this very same 
thing: a full copy is costly, and there are ranges that could restore 
their state from some special tiny savepoint object.
Currently, though, this kind of thing is neither supported nor expected, 
and there are cases where a full copy does need to be made.




For entities with a good deal of state, that implement a range, storing
the whole state, and more significantly, restoring it, may be non-
trivial, but saving and restoring the state of the range may be quite a
lightweight affair.

I'm also puzzled by the semantics of the random access range.

If I restrict myself to indexing into the range, then things are fine.

auto x = r[0];
x = r[10];
x = r[0];

But if I slip in a few popFront's, then presumably x = r[0] will give me
a different result.

This just makes me slightly uneasy.

Steve



--
Dmitry Olshansky
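To make the trade-off concrete, here is a rough sketch (my own, with a stand-in for the result set) of how `save` can stay cheap even when the data source is heavy: the range stores only a view.

```d
// Sketch: a forward range over rows held elsewhere. `save` copies just the
// view (a slice reference and an index), not the underlying result set.
struct RowRange
{
    string[] rows; // stands in for the heavyweight MySQL result set
    size_t i;

    @property bool empty() const { return i >= rows.length; }
    @property string front() const { return rows[i]; }
    void popFront() { ++i; }
    @property RowRange save() { return this; } // cheap bitwise copy of the view
}
```

When the range is a pure view like this, `save` returning the same type costs no more than the savepoint object would.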


Re: Compiler patch for runtime reflection

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 07:44, Robert Jacques wrote:

On Tue, 25 Oct 2011 12:38:18 -0400, Jacob Carlborg  wrote:

On 2011-10-25 16:21, Robert Jacques wrote:

[snip]

Both are protection attributes. The problem of taking the delegate of a
mutable member on a const object is the exact same problem as taking
the delegate of a private member.


I don't agree.


[snip]


Bypassing the protection type system and delegate escape are very
different things. The escape of a delegate is fully controlled by the
coder of the class/module; with bypass mechanisms, the library writer
has no control.


So how should this be fixed? Make it a compile error to create a
delegate from a private method?


I think we are talking about different things. Today in D, due to a bug,
for example

foo.bar()

won't compile because it's private/not-const/etc. But

(&foo.bar)()

will compile. That 'feature' is what I thought you were referring to. If
the writer of a class wants to let a delegate of a private member
function escape, I don't have a fundamental problem with it. But I do
have a problem with someone who isn't the writer of the class doing so.


I'm referring to the above but, as I've already written and you haven't 
replied to, I don't understand how it can be prevented.



[snip]


Still, __traits is still the full fledged compile-time reflection
feature and doesn't bypass protection attributes.


Yeah, but for a serialization library I want to be able to serialize
private fields. It's a known issue that this breaks encapsulation.


I understand that at times serialization has to break
encapsulation. My opinion, though, is that classes should have to opt in,
before the serializer goes ahead and breaks their carefully designed
encapsulation. i.e. someone should have to code review the class to make
sure everything is kosher. For code you are developing, this is as
simple and straightforward as adding a single mixin statement to the class.


It's not straightforward when serializing third party types. One of my 
goals when I started writing my serialization library was to, somewhere 
down the road, create a window/interface builder for DWT. The builder 
would (de)serialize the widgets available in DWT and save it as an XML 
file. Since I'm the current maintainer of DWT I could go and modify DWT 
to fit my needs but I don't think that's the right approach and will 
make merging future versions of SWT more difficult.


--
/Jacob Carlborg


Re: Compiler patch for runtime reflection

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 07:59, Kapps wrote:

I really like Gor's idea of @noreflect for this. You have a network
class you don't want to be easily reversible, or a security class, or
private fields such as password. You just mark the module, type,
methods, or fields, as @noreflect. It could even disable compile-time
reflection if desired.

It would be important, however, to be able to pass a flag to the
compiler that marks everything as @noreflect. The idea of opt-in
reflection is just silly. Standard example is a serialization library.
Oh hey, you wanna serialize a field of type Vector3? Sorry, the person
forgot to mark it as @reflected. Oh, you wanna serialize these
properties of some class? Sorry, not reflected. Opt-out is definitely
the way to go, as it gives the best of all worlds (and removes all
overhead if you just pass in something like --version=noreflect to dmd).


I agree, that can be SO annoying just because someone forgot to think 
about reflection/serialization.


--
/Jacob Carlborg


Re: Own type for null?

2011-10-26 Thread Timon Gehr

On 10/26/2011 07:49 AM, Benjamin Thaut wrote:

I recently tried to replace the deprecated overloading of new and delete
and came across a serious issue. You cannot use std.conv.emplace with
null. If you pass null to it, null loses its implicit casting
capabilities and just becomes a void*.
This issue pretty much exists with every template. As soon as you pass
null to a template (compile time) information gets lost.

Besides fixing std.conv.emplace it could be really handy to be able to
check for a null-type at compile time for example with non-nullable types.

There is already an enhancement request in bugzilla from January, but it
didn't get much attention yet:
http://d.puremagic.com/issues/show_bug.cgi?id=5416

Would this be a worthwhile improvement for the language?


++vote.

We also need a dedicated type for the empty array literal '[]', btw.


Re: Is This a Bug

2011-10-26 Thread Timon Gehr

On 10/26/2011 09:15 AM, Gor Gyolchanyan wrote:

I see. But is there any practical advantage of a function being pure?
I mean, besides an optimization hint for the compiler, of course.



It is very useful for reasoning about code, because a pure function will 
always behave the same when called with the same arguments.


Re: Free?

2011-10-26 Thread Don

On 23.10.2011 22:59, Chante wrote:

"Jeff Nowakowski"  wrote in message
news:j81rap$1f50$1...@digitalmars.com...

On 10/22/2011 01:56 PM, Steve Teale wrote:

I'd never seen it before - maybe I lead a sheltered life.

GPL: "Free as in Herpes"

Doesn't that just hit the nail on the head.


No, it doesn't. It's pure flamebait. Nobody wants to get herpes and it
serves no useful purpose. On the other hand, many people happily use
GPL software and like the fact that the source is available and will
remain available with further distributions.

If you don't like GPL then don't use it. It's not hidden and going to
infect you without your consent.


I don't think that's completely true. You can look at code (and download 
it) before you discover that it's GPLed. It's happened to me a few 
times. Mostly when there was code included in an article; but sometimes 
when the license was changed from BSD to GPL when the code was updated.


Re: Free?

2011-10-26 Thread Timon Gehr

On 10/26/2011 07:32 AM, Nick Sabalausky wrote:

"Kagamin"  wrote in message
news:j884ui$cra$1...@digitalmars.com...

Nick Sabalausky Wrote:


Licenses like boost exist to allow corporations to make money on free
code
while restricting users. Of course GPL prohibits this.


It's free code. The whole point is to let people go ahead and use it.


Anyone can use GPL, it has no problem with it.


I think you misunderstood my point. What I was trying to say is: If
someone's going to worry about others profiting from their free code, why'd
they even make it free in the first place?




Free software is not software that is gratis. It is software that 
respects the freedom of its end users. The problem is not that others 
profit from the software. That is okay. The problem is that proprietary 
developers may make the end users dependent on their own 
proprietary/non-free version, at which point the end users lose control 
over their computing. The GPL prohibits that.


Re: queue container?

2011-10-26 Thread bearophile
Gor Gyolchanyan:

> The best implementation of queue IMO is using a doubly-linked list,

On modern CPUs there is a better implementation:
http://pages.cpsc.ucalgary.ca/~kremer/STL/1024x768/deque.html
http://www.martinbroadhurst.com/articles/deque.html

Essentially a dynamic array (growable on the right) of fixed-size arrays, each 
managed by pointer (in D you need to wrap the small arrays in a struct, I presume, 
because of the curious way D arrays are designed).

Bye,
bearophile


Re: Own type for null?

2011-10-26 Thread Gor Gyolchanyan
Yes! Empty array is also a special-case value, which can be
interpreted in a special way, which is completely different from any
other array.

On Wed, Oct 26, 2011 at 12:56 PM, Timon Gehr  wrote:
> On 10/26/2011 07:49 AM, Benjamin Thaut wrote:
>>
>> I recently tried to replace the deprecated overloading of new and delete
>> and came across a serious issue. You cannot use std.conv.emplace with
>> null. If you pass null to it, null loses its implicit casting
>> capabilities and just becomes a void*.
>> This issue pretty much exists with every template. As soon as you pass
>> null to a template (compile time) information gets lost.
>>
>> Besides fixing std.conv.emplace it could be really handy to be able to
>> check for a null-type at compile time for example with non-nullable types.
>>
>> There is already an enhancement request in bugzilla from January, but it
>> didn't get much attention yet:
>> http://d.puremagic.com/issues/show_bug.cgi?id=5416
>>
>> Would this be a worthwhile improvement for the language?
>
> ++vote.
>
> We also need a dedicated type for the empty array literal '[]', btw.
>


Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
I don't understand the advantage of having a dynamic array of static arrays.
How is that gonna improve add/remove performance at the ends?

On Wed, Oct 26, 2011 at 1:28 PM, bearophile  wrote:
> Gor Gyolchanyan:
>
>> The best implementation of queue IMO is using a doubly-linked list,
>
> On modern CPUs there is a better implementation:
> http://pages.cpsc.ucalgary.ca/~kremer/STL/1024x768/deque.html
> http://www.martinbroadhurst.com/articles/deque.html
>
> Essentially a dynamic (on the right) array of fixed-size arrays managed by 
> pointer (in D you need to wrap the small arrays in a struct, I presume, 
> because of the curious way D arrays are designed).
>
> Bye,
> bearophile
>


Re: Is This a Bug

2011-10-26 Thread bearophile
Jonathan M Davis:

> On Wednesday, October 26, 2011 11:15:20 Gor Gyolchanyan wrote:
> > I see. But is there any practical advantage of a function being pure?
> > I mean, besides an optimization hint for the compiler, of course.
> 
> 1. You know that it doesn't access global variables, which is at minimum an 
> advantage as far as understanding the code goes.

The lack of side effects makes code (maybe) easier to understand, makes 
testing and unit testing simpler, and allows some optimizations, like replacing 
filter(map()) with map(filter())...

In DMD 2.056 several functions and higher order functions like array(), map(), 
filter(), etc, aren't (always) pure, so I think D/Phobos purity needs a bit of 
improvement. This compiles:

import std.algorithm;
void main() pure {
    int[] a;
    map!((int x){ return x; })(a);
    map!((x){ return x; })(a);
}


This doesn't:

import std.algorithm, std.array;
void main() pure {
    int[] a;
    map!q{ a }(a);
    filter!q{ a }(a);
    array(a);
    int[int] aa;
    aa.byKey();
    aa.byValue();
    aa.keys;
    aa.values;
    aa.get(0, 0);
    aa.rehash;
}

Bye,
bearophile


Re: queue container?

2011-10-26 Thread Timon Gehr

On 10/26/2011 11:28 AM, bearophile wrote:

Gor Gyolchanyan:


The best implementation of queue IMO is using a doubly-linked list,


On modern CPUs there is a better implementation:
http://pages.cpsc.ucalgary.ca/~kremer/STL/1024x768/deque.html
http://www.martinbroadhurst.com/articles/deque.html

Essentially a dynamic (on the right) array of fixed-size arrays managed by 
pointer (in D you need to wrap the small arrays in a struct, I presume, because 
of the curious way D arrays are designed).

Bye,
bearophile


A dynamic array of pointers to fixed size arrays is this:

T[N]*[] arr;

What would you wrap exactly?


Do you know how this implementation compares to a circular buffer 
implementation? I'd have expected that to be faster.


Re: Early std.crypto

2011-10-26 Thread Steve Teale
On Tue, 25 Oct 2011 23:45:58 -0700, Jonathan M Davis wrote:

> On Wednesday, October 26, 2011 06:28:52 Steve Teale wrote:
>> > An easy test is that if the interface takes a T[] as input, consider
>> > a range instead. Ditto for output. If an interface takes a File as
>> > input, it's a red flag that something is wrong.
>> 
>> I have a question about ranges that occurred to me when I was composing
>> a MySQL result set into a random access range.
>> 
>> To do that I have to provide the save capability defined by
>> 
>> R r1;
>> R r2 = r1.save;
>> 
>> Would it harm the use of ranges as elements of operation sequences if
>> the type of the entity that got saved was not the same as the original
>> range provider type. Something like:
>> 
>> R r1;
>> S r2 = r1.save;
>> r1.restore(r2);
>> 
>> For entities with a good deal of state, that implement a range, storing
>> the whole state, and more significantly, restoring it, may be non-
>> trivial, but saving and restoring the state of the range may be quite a
>> lightweight affair.
> 
> It's likely to cause issues if the type returned by save differs from
> the original type. From your description, it sounds like the range is
> over something, and it's that something which you don't want to copy. If
> that's the case, then the range just doesn't contain that data. Rather,
> it's a view into the data, so saving it just save that view.

Well, yes Jonathan, that's just what I'm getting at. I want to save just 
those things that are pertinent to the range/view, not the whole panoply 
of the result set representation or whatever other object is providing 
the data that the range is a view of.

I could do it by saving an instance of the object representing the result 
set containing only the relevant data, but that would be misleading for 
the user, because it would not match the object it was cloned from in any 
respect other than representing a range.

Should not the requirement for the thing provided by save be only that it 
should provide the same view as that provided by the source range at the 
time of the save, and the same range 'interface'?

Is there an accepted term for the way ranges are defined, i.e. as an 
entity that satisfies some template evaluating roughly to a bool?

Steve



Re: queue container?

2011-10-26 Thread bearophile
Gor Gyolchanyan:

> I don't understand the advantage of having a dynamic array of static arrays.
> How is that gonna improve add/remove performance at the ends?

You also keep a length that tells you how many items you are using in the last 
fixed-size array. Removing an item means just decreasing this length. Appending 
an item often just means writing the item and then increasing this length. Once 
in a while you add or remove a chunk (keeping one fully empty chunk at the end 
saves you busywork when the queue item that gets added and removed many 
times is the first one of a chunk).

In a linked list you have to do something with the memory used by the 
added/removed item every time you add and remove an item (like removing and 
putting it into a freelist).

Working on the chunk array gives more CPU cache locality and probably requires 
less management code.

With a small change the dynamic array of chunk pointers is usable to implement 
a deque too: you have to manage the dynamic array as a growable circular queue. 
Like a hash, when this array is full you allocate a new one twice larger, and 
copy the chunk pointers into it.



Timon Gehr:

> A dynamic array of pointers to fixed size arrays is this:
> T[N]*[] arr;
> What would you wrap exactly?

How do you allocate those T[N] in D?


> Do you know how this implementation compares to a circular buffer
> implementation? I'd have expected that to be faster.

A circular buffer is faster in some situations, when you on average need a 
constant number of items in your data structure. But a circular buffer needs 
a lot of time to change its size, because you have to halve or double it 
once in a while. Generally, if you don't know much about how your queue will be 
used, a dynamic array of chunks is better (some people use a linked list of 
chunks, but this makes it worse to access items randomly, and the time saved 
not managing a growable circular queue of the chunk pointers is not much, 
because the chunks usually contain hundreds of items or more).

Bye,
bearophile
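Pulling the description above together, here is a rough sketch (mine, heavily simplified: no chunk reclamation or spare-chunk caching) of the chunked queue:

```d
// Sketch: a queue as a dynamic array of fixed-size chunks. Pushes and pops
// mostly just bump an index into contiguous storage.
struct ChunkedQueue(T, size_t chunkSize = 256)
{
    T[chunkSize][] chunks; // dynamic array of fixed-size chunks
    size_t head;           // index of the first live element
    size_t tail;           // one past the last live element

    void push(T value)
    {
        if (tail == chunks.length * chunkSize)
            chunks.length += 1; // grow by one chunk
        chunks[tail / chunkSize][tail % chunkSize] = value;
        ++tail;
    }

    bool empty() const { return head == tail; }

    T pop()
    {
        assert(!empty);
        auto value = chunks[head / chunkSize][head % chunkSize];
        ++head; // front chunks are not reclaimed here, for brevity
        return value;
    }
}
```

Because each chunk is a contiguous fixed-size array, this matches the cache-locality argument: most operations touch only the current chunk.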


build system

2011-10-26 Thread Gor Gyolchanyan
I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and
link in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd
just feed an arbitrary module to the compiler and the compiler would
build the target, to which that module belongs (bu parsing build
comments and package hierarchies).
Wouldn't this be a good thing to have?
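Purely to illustrate the proposal (this syntax is invented for the sketch; no compiler supports anything like it), such a build comment might look like:

```d
/++ build:                     (hypothetical syntax)
 +  target: executable
 +  libs:   mysql, ssl
 +  flags:  -O -release
 +/
module myapp.main;
```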


Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
I see. Makes sense. :-)
Although you could just as well have pre-allocated and cached links
in a doubly-linked list, but I think this one has a greater potential
for optimization.
D needs a few of those ultra-fast container types. Like the Judy array.
Oh, the Judy array is by far the sexiest container I ever saw.
http://en.wikipedia.org/wiki/Judy_array

On Wed, Oct 26, 2011 at 2:25 PM, bearophile  wrote:
> Gor Gyolchanyan:
>
>> I don't understand the advantage of having a dynamic array of static arrays.
>> How is that gonna increase add/remove performance to the ends?
>
> You also keep a length that tells you how many items you are using in the 
> last fixed-size array. Removing an item means just decreasing this length. 
> Appending an item often just means writing the item and then increasing this 
> length. Once in a while you add or remove a chunk (keeping one fully empty 
> chunk at the end saves you busywork when the queue item that gets added 
> and removed many times is the first one of a chunk).
>
> In a linked list you have to do something with the memory used by the 
> added/removed item every time you add and remove an item (like removing and 
> putting it into a freelist).
>
> Working on the chunk array gives more CPU cache locality and probably 
> requires less management code.
>
> With a small change the dynamic array of chunk pointers is usable to 
> implement a deque too: you have to manage the dynamic array as a growable 
> circular queue. Like a hash, when this array is full you allocate a new one 
> twice larger, and copy the chunk pointers into it.
>
> 
>
> Timon Gehr:
>
>> A dynamic array of pointers to fixed size arrays is this:
>> T[N]*[] arr;
>> What would you wrap exactly?
>
> How do you allocate those T[N] in D?
>
>
>> Do you know how this implementation compares to a circular buffer
>> implementation? I'd have expected that to be faster.
>
> A circular buffer is faster in some situations, when you on average need a 
> constant number of items in your data structure. But a circular buffer needs 
> a lot of time to change its size, because you have to halve or double it 
> once in a while. Generally, if you don't know much about how your queue will 
> be used, a dynamic array of chunks is better (some people use a linked list 
> of chunks, but this makes it worse to access items randomly, and the time 
> saved not managing a growable circular queue of the chunk pointers is not 
> much, because the chunks usually contain hundreds of items or more).
>
> Bye,
> bearophile
>


Re: build system

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 12:26, Gor Gyolchanyan wrote:

I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and
link in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd
just feed an arbitrary module to the compiler and the compiler would
build the target, to which that module belongs (bu parsing build
comments and package hierarchies).
Wouldn't this be a good thing to have?


Sounds horrible to have a build script in comments in the source code. I 
think the right approach is to have a separate build tool, with a proper 
DSL built on an existing language, e.g. Ruby, D, something else.


There are already a couple of build tools available:

DSSS - D1 only, not maintained any more
xfbuild - Probably not maintained any more. No build script
rdmd - Cannot build libraries, no build script, cannot do much besides 
building executables


I've been working on a build tool, https://bitbucket.org/doob/dake . 
I've started rewriting the tool in D, been a while since I worked on it. 
Currently it's not a prioritized project for me.


I also know others are working on build tools for D.

--
/Jacob Carlborg


Re: build system

2011-10-26 Thread Gor Gyolchanyan
That's the problem - no working build tool exists for D.
Why don't you like the idea of integrating build information in source
code? I mean, that information does not change for a given source
file.

On Wed, Oct 26, 2011 at 3:37 PM, Jacob Carlborg  wrote:
> On 2011-10-26 12:26, Gor Gyolchanyan wrote:
>>
>> I had a few thoughts about integrating build awareness into DMD.
>> It would be really cool to add a flag to DMD to make it compile and
>> link in all import-referenced modules.
>> Also, it would be awesome to store basic build information in modules
>> themselves in the form of special comments (much like documentation
>> comments), where one could specify external build dependencies, output
>> type, etc.
>> There would be no need for makefiles and extra build systems. You'd
>> just feed an arbitrary module to the compiler and the compiler would
>> build the target to which that module belongs (by parsing build
>> comments and package hierarchies).
>> Wouldn't this be a good thing to have?
>
> Sounds horrible to have a build script in comments in the source code. I
> think the right approach is to have a separate build tool, with a proper DSL
> built on an existing language, e.g. Ruby, D, something else.
>
> There are already a couple of build tools available:
>
> DSSS - D1 only, not maintained any more
> xfbuild - Probably not maintained any more. No build script
> rdmd - Cannot build libraries, no build script, cannot do much besides
> building executables
>
> I've been working on a build tool, https://bitbucket.org/doob/dake . I've
> started rewriting the tool in D, been a while since I worked on it.
> Currently it's not a prioritized project for me.
>
> I also know others are working on build tools for D.
>
> --
> /Jacob Carlborg
>


Re: queue container?

2011-10-26 Thread Timon Gehr

On 10/26/2011 12:25 PM, bearophile wrote:

Timon Gehr:


A dynamic array of pointers to fixed size arrays is this:
T[N]*[] arr;
What would you wrap exactly?


How do you allocate those T[N] in D?


int[N]* x = (new int[N][1]).ptr;


I don't like that either.





Do you know how this implementation compares to a circular buffer
implementation? I'd have expected that to be faster.


A circular buffer is faster in some situations, when you on average need a 
constant number of items in your data structure. But a circular buffer needs 
a lot of time to change its size, because you have to halve or double it 
once in a while. Generally, if you don't know much about how your queue will be 
used, a dynamic array of chunks is better (some people use a linked list of 
chunks, but this makes it worse to access items randomly, and the time saved 
not managing a growable circular queue of the chunk pointers is not much, 
because the chunks usually contain hundreds of items or more).
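A skeletal version of the chunked queue bearophile describes might look like this in D (the names and chunk size are illustrative; shrinking and recycling of consumed chunks are omitted):

```d
// Sketch of a queue backed by a dynamic array of chunk pointers, as
// described above. Grow-only: freeing consumed chunks is omitted.
struct ChunkedQueue(T, size_t chunkSize = 256)
{
    T[chunkSize]*[] chunks;
    size_t head, tail;  // logical indices into the virtual item sequence

    void push(T value)
    {
        if (tail == chunks.length * chunkSize)
            chunks ~= &(new T[chunkSize][](1))[0]; // grow by one chunk
        (*chunks[tail / chunkSize])[tail % chunkSize] = value;
        ++tail;
    }

    T pop()
    {
        assert(head < tail, "queue is empty");
        T value = (*chunks[head / chunkSize])[head % chunkSize];
        ++head;
        return value;
    }

    @property bool empty() const { return head == tail; }
}

void main()
{
    ChunkedQueue!int q;
    foreach (i; 0 .. 1000) q.push(i);
    foreach (i; 0 .. 1000) assert(q.pop() == i);
    assert(q.empty);
}
```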



Ok, I see, thanks. In my specific application of a queue buffer, the 
number of items is constant on average, except for some extreme border 
cases.





Re: Why the hell doesn't foreach decode strings

2011-10-26 Thread Steven Schveighoffer
On Mon, 24 Oct 2011 19:58:59 -0400, Michel Fortin  
 wrote:


On 2011-10-24 21:47:15 +, "Steven Schveighoffer"  
 said:



What if the source character is encoded differently than the search
string?  This is basic unicode stuff.  See my example with fiancé.


The more I think about it, the more I think it should work like this:  
just like we assume they contain well-formed UTF sequences, char[],  
wchar[], and dchar[] should also be assumed to contain **normalized**  
Unicode strings. Which normalization form to use? No idea. Just pick a  
sensible one of the four.



Once we know all strings are normalized in the same way, we can then  
compare two strings bitwise to check if they're the same. And we can  
check for a substring in the same manner except we need to insert a  
simple check after the match to verify that it isn't surrounded by  
combining marks applying to the first or last character. And it'll also  
deeply simplify any proper Unicode string handling code we'll add in the  
future.


 - - -

That said, I fear that forcing a specific normalization might be  
problematic. You don't always want to have to normalize everything...



So perhaps we could simplify things a bit more: don't pick a standard  
normalization form. Just assume that both strings being used are in the  
same normalization form. Comparison will work, searching for substring  
in the way specified above will work, and other functions could document  
which normalization form they accept. Problem solved... somewhat.
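As it happens, Phobos later grew exactly this building block: std.uni.normalize (added in releases well after this thread). A hedged sketch of the bitwise-compare-after-normalizing idea:

```d
// Comparing two strings bitwise after bringing both into the same
// normalization form (NFC here). std.uni.normalize postdates this
// discussion; it is used purely to illustrate the idea.
import std.uni : normalize, NFC;

bool sameText(string a, string b)
{
    // Once both sides are NFC-normalized, plain array equality suffices.
    return normalize!NFC(a) == normalize!NFC(b);
}

void main()
{
    // Precomposed U+00E9 vs 'e' followed by combining acute U+0301
    assert(sameText("fianc\u00E9", "fiance\u0301"));
    assert(!sameText("fiance", "fianc\u00E9"));
}
```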


It's even easier than this:

a) you want to do a proper string comparison not knowing what state the  
unicode strings are in, use the full-fledged decode-when-needed string  
type, and its associated str.find method.
b) you know they are both the same normalized form and want to optimize,  
use std.algorithm.find(haystack.asArray, needle.asArray).


-Steve


Re: Vote on std.regex (FReD)

2011-10-26 Thread Stephan

yes


Re: Why the hell doesn't foreach decode strings

2011-10-26 Thread Steven Schveighoffer
On Mon, 24 Oct 2011 18:25:25 -0400, Dmitry Olshansky  
 wrote:



On 25.10.2011 0:52, Steven Schveighoffer wrote:

On Mon, 24 Oct 2011 16:18:57 -0400, Dmitry Olshansky
 wrote:


On 24.10.2011 23:41, Steven Schveighoffer wrote:

On Mon, 24 Oct 2011 11:58:15 -0400, Simen Kjaeraas
 wrote:


On Mon, 24 Oct 2011 16:02:24 +0200, Steven Schveighoffer
 wrote:


On Sat, 22 Oct 2011 05:20:41 -0400, Walter Bright
 wrote:


On 10/22/2011 2:21 AM, Peter Alexander wrote:

Which operations do you believe would be less efficient?


All of the ones that don't require decoding, such as searching,
would be less efficient if decoding was done.


Searching that does not do decoding is fundamentally incorrect. That
is, if you want to find a substring in a string, you cannot just
compare chars.


Assuming both string are valid UTF-8, you can. Continuation bytes can
never
be confused with the first byte of a code point, and the first byte
always
identifies how many continuation bytes there should be.
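To make the quoted property concrete: in UTF-8, lead bytes and continuation bytes occupy disjoint bit patterns, which is what makes byte-wise search safe when both sides use identical code-point sequences:

```d
// UTF-8 continuation bytes are 0b10xxxxxx and can never be confused
// with a lead byte, so a byte-level match always starts on a code
// point boundary.
void main()
{
    string s = "héllo";            // 'é' encodes as 0xC3 0xA9
    assert(s.length == 6);         // 6 code units for 5 code points
    assert((s[1] & 0xE0) == 0xC0); // 0xC3: lead byte of 2-byte sequence
    assert((s[2] & 0xC0) == 0x80); // 0xA9: continuation byte
}
```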



As others have pointed out in the past to me (and I thought as you did
once), the same characters can be encoded in *different ways*. They  
must

be normalized to accurately compare.



Assuming language support stays at the stage of "a code point is a character",
it's totally expected to ignore modifiers and compare identically
normalized UTF without decoding. Yes, it risks hitting certain issues.


Again, the "risk" is that it fails to achieve the goal you ask of it!



Last time I checked, the legion of "everything is ASCII" zealots was  
pretty large ;) So the "goal" is a blurry line, meaning that "search for  
a substring in this chunk of Unicode text" can mean very different  
things, and be correct in its own sense on about 3 different levels.
There is no default way, so I'm not sure we have to embed all of this  
complexity into the language right now. Phobos is where we must first  
implement this; once it works, we may look at revisiting our pick of  
default comparison method, etc.


If we say D obeys the subset of unicode that only works on ascii  
characters, well, then I think we do not support unicode at all ;)


I agree the string type needs to be fleshed out before the language adopts  
it.  I think we could legitimately define a string type that auto-assigns  
from its respective char[] array type.  Once it's shown that the type is  
nearly as fast as a char[] array, it can be used as the default type for  
string literals (the only real place where the language gets in the way).



D-language: Here, use this search algorithm, it works most of the time,
but may not work correctly in some cases. If you run into one of those
cases, you'll have to run a specialized search algorithm for strings.
User: How do I know I hit one of those cases?
D-language: You'll have to run the specialized version to find out.
User: Why wouldn't I just run the specialized version first?
D-language: Well, because it's slower!
User: But don't I have to use both algorithms to make sure I find the  
data?

D-language: Only if you "care" about accuracy!

Call me ludicrous, but is this really what we want to push on someone as
a "unicode-aware" language?



No, but we might want to fix a string search to do a little more -  
namely check if it split a grapheme (assuming normalizations match).


That's a big assumption.  Valid unicode is valid unicode, even if it's not  
normalized.


BTW adding normalization to std.uni is another thing to do right now.  
That should be a good enough start, and looking at unicode standard,  
things are rather fluid there meaning that "unicode-aware" language  
should be suffixed by a version number :)


I agree we need normalization and it is not necessary to involve the  
compiler in this.  I'd suggest a two to three phase approach:


1. Leave phobos' schizo view of "char arrays are not arrays" for now, and  
build the necessary framework to get a string type that actually works.

2. Remove schizo view.
3. (optional) make compiler use library-defined string type for string  
literals.







Plus, a combining character (such as an umlaut or accent) is part of a
character, but may be a separate code point. If that's on the last
character in the word such as fiancé, then searching for fiance will
result in a match without proper decoding!


Now if you are going to do real characters... If source/needle are
normalized you still can avoid lots of work by searching without
decoding. All you need to decode is one codepoint on each successful
match to see if there is a modifier at end of matched portion.
But it depends on how you want to match if it's case-insensitive
search it will be a lot more complicated, but anyway it boils down to
this:
1) do inexact search, get likely match ( false positives are OK,
negatives not) no decoding here
2) once found check it (or parts of it) with proper decoding

There are cultural subtleties, that complicate these steps if you take
them into account, but it's doable.
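Dmitry's two-step scheme — a cheap inexact scan followed by a decoded check at each candidate — can be sketched like this (only the trailing-combining-mark check from step 2 is shown; checks before and inside the match are left out, and std.uni.isMark is assumed to be the mark test):

```d
// Step 1: byte-level substring scan, no decoding.
// Step 2: decode one code point after each hit to reject matches that
// are immediately followed by a combining mark, e.g. "fiance" inside
// "fiance\u0301". Checks before/inside the match are omitted.
import std.string : indexOf;
import std.uni : isMark;
import std.utf : decode, stride;

ptrdiff_t findChecked(string haystack, string needle)
{
    size_t start = 0;
    while (start < haystack.length)
    {
        auto rel = indexOf(haystack[start .. $], needle);
        if (rel < 0)
            return -1;
        size_t hit = start + rel;
        size_t after = hit + needle.length;
        if (after >= haystack.length || !isMark(decode(haystack, after)))
            return hit;                      // verified match
        start = hit + stride(haystack, hit); // false positive, keep going
    }
    return -1;
}

void main()
{
    assert(findChecked("my fianc\u00E9", "fianc\u00E9") == 3);
    assert(findChecked("fiance\u0301", "fiance") == -1);
}
```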


I agree with you that si

Re: Why the hell doesn't foreach decode strings

2011-10-26 Thread Steven Schveighoffer
On Mon, 24 Oct 2011 19:49:43 -0400, Simen Kjaeraas  
 wrote:


On Mon, 24 Oct 2011 21:41:57 +0200, Steven Schveighoffer  
 wrote:



Plus, a combining character (such as an umlaut or accent) is part of a
character, but may be a separate code point.


If this is correct (and it is), then decoding to dchar is simply not  
enough.
You seem to advocate decoding to graphemes, which is a whole different  
matter.


I am advocating that.  And it's a matter of perception.  D can say "we  
only support code-point decoding" and what that means to a user is, "we  
don't support language as you know it."  Sure it's a part of unicode, but  
it takes that extra piece to make it actually usable to people who require  
unicode.


Even in English, fiancé has an accent.  To say D supports unicode, but  
then won't do a simple search on a file which contains a certain *valid*  
encoding of that word is disingenuous to say the least.


D needs a fully unicode-aware string type.  I advocate D should use it as  
the default string type, but it needs one whether it's the default or not  
in order to say it supports unicode.


-Steve


Re: Free?

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 03:09:48 -0400, Russel Winder   
wrote:



On Wed, 2011-10-26 at 01:32 -0400, Nick Sabalausky wrote:
[ . . . ]

I think you misunderstood my point. What I was trying to say is: If
someone's going to worry about others profiting from their free code,  
why'd

they even make it free in the first place?


I have no problem with companies making money from FOSS by providing
value added service or proprietary components, as long as they are
clearly seen to be "giving back" to the FOSS community as a "quid pro
quo".  What I object to is companies taking FOSS software, making a
proprietary product from it, getting updates from the FOSS community for
free, restricting product purchasers regarding the FOSS software and
generally treating the FOSS community as a pool of slave labour to be
milked.


So you're saying the code you write as FOSS should cost something (i.e.  
you want something in return)?  Interesting...


-Steve


Re: Free?

2011-10-26 Thread Steven Schveighoffer
On Tue, 25 Oct 2011 00:04:18 -0400, Chante   
wrote:




"Steven Schveighoffer"  wrote in message
news:op.v3u2chz6eav7ka@localhost.localdomain...

On Mon, 24 Oct 2011 10:39:54 -0400, Kagamin  wrote:


Chante Wrote:


While I haven't thought it through (and maybe don't have the
knowledge  to
do so), elimination of software patents was something I had in mind
as a
potential cure for the current state of affairs (not a cure for viral
source code though). Of course, noting that first-to-file is now the
thing, it appears (to me) that Big Software Corp and Big Government
are
on one side, humanity on the other.


Patents are seen to exist for humanity. Elimination of patents is
equivalent to elimination of intellectual property. You're not going
to  succeed on that. But GPL3 at least protects you from patent claims
from  the author, so you'd better use it. You're afraid of others, but
GPL can  also protect *your* code.


Patents are to foster innovation.  Software innovation needs no patent
system to foster it.  Nobody writes a piece of software because they
were  able to get a patent for it.

I feel software patents are a completely different entity than material
patents.  For several reasons:

1. Software is already well-covered by copyright.


Software, though, is not like a book: it's not just text. There is
inherent design, architecture, engineering represented by source code.


Books require design, sometimes elaborate design, and engineering of  
sorts.  What an author puts into writing a book is not unlike what an  
entity puts into writing software.



2. With few exceptions, the lifetime of utility of a piece of software
is  well below the lifetime of a patent (currently 17 years).
3. It is a very slippery slope to go down.  Software is a purely
*abstract* thing, it's not a machine.


Maybe literally "abstract", but those flow charts, layers,
boxes-and-arrows actually become realized (rendered, if you will) by the
source code. The text really isn't important. The "abstraction" is.


Software is not unlike math.  It achieves something based on an abstract  
concept of the world.  It has practical uses.  But math is not patentable.





It can be produced en masse with near-zero cost.  It can be expressed
via source code, which is *not* a  piece of software.  There is a very
good reason things like music, art,  and written works are not
patentable.


Music and art don't "do" anything except titilate the senses. Software,
OTOH, does do things of practical utility.


Music and art are both different from software and the same.  They are  
different because there are no rules for creating valid music or art.  I  
could bang on the wall randomly with a pipe, and try to sell that as music  
(and ironically, I might succeed).  But they are the same because writing  
music and creating art that *is good* is a difficult thing that requires  
careful thought, planning, and execution.


-Steve


Re: build system

2011-10-26 Thread Martin Nowak
On Wed, 26 Oct 2011 12:26:56 +0200, Gor Gyolchanyan  
 wrote:



I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and
link in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd
just feed an arbitrary module to the compiler and the compiler would
build the target, to which that module belongs (by parsing build
comments and package hierarchies).
Wouldn't this be a good thing to have?


Some work in this direction has been done.
This proposal works by communicating URL paths of sources to an external
tool, which is a flexible and good approach.
It allows things as 'http://path/to/repo' for simple source file from
web fetching as well as 'pkg://experimental' for more sophisticated package
manager interaction.
http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP11

I've written a prototype implementation some weeks ago.

This still needs a way to communicate modules to build or link.
For now I've simply added a pragma(build, ) which builds
every import from this package (globally, it's a prototype).

Altogether, it can be used like this.
https://github.com/dawgfoto/graphics/blob/master/src/graphics/_.d

There are some issues though that would benefit from good ideas.
Especially a better solution for import path and build declarations
would be great.

DIP14 was not so well received.
http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP14.

martin


Re: Free?

2011-10-26 Thread Kagamin
Steven Schveighoffer Wrote:

> So you're saying the code you write as FOSS should cost something (i.e.  
> you want something in return)?  Interesting...

It's just two paradigms: if you choose freedom, GPL ensures and protects the 
freedom. You can also provide your efforts to corporations - why not? - 
proprietary licenses and patents are *adequate* means to do it.


Re: Free?

2011-10-26 Thread Steven Schveighoffer

On Tue, 25 Oct 2011 17:37:02 -0400, Kagamin  wrote:


Steven Schveighoffer Wrote:


1. Software is already well-covered by copyright.


You can't write software out of thin air. Let's suppose ranges increase  
usability of a collections library. Can you write a collections library  
without knowing about the ranges concept? That's what patents are for.


Patents exist to give an *incentive* to give away trade secrets that would  
otherwise die with the inventor.  The idea is, if you patent something,  
you enjoy a period of monopoly, where you can profit from the fruits of  
your invention.  In return, you bestow upon the world the secret behind  
your idea.  This allows people to build on your idea in the future,  
instead of nobody ever being able to discover what your invention was.


Not to mention that no other IP protection exists for machine design --  
you cannot copyright a car.


Given that any item of software can be reverse engineered and studied,  
this can never happen with software.  Add that to the fact that software  
patents are *rarely* beneficial to the community.  They are mostly used as  
weapons to stifle innovation from others.  In essence, software patents  
have had an *opposite* effect on the industry compared to something like  
building cars.  In other words, there's no need for patents to allow  
software ideas to be seen by others, it's possible to extract the ideas  
from the code.



3. It is a very slippery slope to go down.  Software is a purely
*abstract* thing, it's not a machine.


Software is a machine: concrete thing doing concrete job. Patent doesn't  
protect the machine itself, it protects concrete design work put into  
it. Design is a high-profile work, a good design has a good chance to be  
more expensive than the actual implementation. So it's perfectly valid  
to claim ownership for a design work and charge fees for it.


And why wouldn't you be able to do this without patents?  Again, copyright  
already covers software.  Plenty of software companies have large amounts  
of IP and are successful without having any software patents.





It can be produced en masse with near-zero cost.


Dead software is seen as unusable. So - no, to produce software you need  
continuous maintenance and development which is as expensive as any  
other labor.


What I mean is, with a traditional machine, there is a cost to recreating  
the machine.  Such manufacturing requires up-front investment that can  
possibly outweigh the cost of implementing the design.  Patents protect  
the entity putting their product out there from having a larger company  
who can throw money around beat you using your idea.  In software, since  
the software is protected by copyright, the competition must build their  
own version of your software ideas first, and the distribution is  
relatively insignificant.  In other words, once you release your idea to  
the world, it can be sold and installed for millions in a matter of days,  
giving you the lion's share of the market.


Maintenance costs are not part of distribution, they are part of  
development.  Of course maintenance is required, but maintenance does not  
hinder you from making a profit like manufacturing ramp-up does.


And again, the software you write is already protected IP -- copyright.


4. Unlike a physical entity, it is very likely a simple individual,
working on his own time with his own ideas, can create software that
inadvertently violates a "patent" with low cost.


I don't see how this doesn't apply to physical machines.


When you are talking about patents for a machine or physical entity, there  
is a large investment and cost in just designing the item, or the means to  
manufacture it.  It's less likely that a simple individual has the capital  
necessary to create it, and if he does, or can raise it, a patent search  
is usually done to avoid complications.  He might also look at expired  
patents to get ideas on how to do things.


However, working software can be written by one guy in his apartment in a  
couple weeks.  He's not going to do patent searches when it costs him just  
2 weeks time to create the software.  Here, the patent system is just  
getting in the way of innovation.  It's having the opposite effect by  
instilling fear in anyone writing software that some patent-holding  
company is going to squash him out of business.


When was the last time you did anything with a patented software  
technology except *avoid it like the plague*?


How to improve patent system is another question. GPL3 can actually play  
some role here: there's no mercantile reason to restrict use of a  
patented technology in GPL3 software.


IMO, there's no reason to ever use any form of the GPL anymore.  Its work is  
done.


5. The patent office does *NOT UNDERSTAND* software, so they are more  
apt

to grant trivial patents (e.g. one-click).


http://www.newscientist.com/article/dn965-wheel-patented-in-australia.html


I don'

Re: build system

2011-10-26 Thread Gor Gyolchanyan
This is so cool!!! Package modules have two important roles now:
* Defining the proper imports for the package.
* Defining building rules for the package.

I don't have any particular thoughts about import path and build info
definitions, but I'm certain, that the build rules of parent package
should be overridden by child package (hierarchical build info).
This would reduce building efforts of any D code to zero, as all
necessary info is in the code and all necessary tools are a single
compiler.
Talk about work productivity! :-)

On Wed, Oct 26, 2011 at 5:39 PM, Martin Nowak  wrote:
> On Wed, 26 Oct 2011 12:26:56 +0200, Gor Gyolchanyan
>  wrote:
>
>> I had a few thoughts about integrating build awareness into DMD.
>> It would be really cool to add a flag to DMD to make it compile and
>> link in all import-referenced modules.
>> Also, it would be awesome to store basic build information in modules
>> themselves in the form of special comments (much like documentation
>> comments), where one could specify external build dependencies, output
>> type, etc.
>> There would be no need for makefiles and extra build systems. You'd
>> just feed an arbitrary module to the compiler and the compiler would
>> build the target, to which that module belongs (by parsing build
>> comments and package hierarchies).
>> Wouldn't this be a good thing to have?
>
> Some work in this direction has been done.
> This proposal works by communicating URL paths of sources to an external
> tool, which is a flexible and good approach.
> It allows things as 'http://path/to/repo' for simple source file from
> web fetching as well as 'pkg://experimental' for more sophisticated package
> manager interaction.
> http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP11
>
> I've written a prototype implementation some weeks ago.
>
> This still needs a way to communicate modules to build or link.
> For now I've simply added a pragma(build, ) which builds
> every import from this package (globally, it's a prototype).
>
> Altogether, it can be used like this.
> https://github.com/dawgfoto/graphics/blob/master/src/graphics/_.d
>
> There are some issues though that would benefit from good ideas.
> Especially a better solution for import path and build declarations
> would be great.
>
> DIP14 was not so well received.
> http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP14.
>
> martin
>


Re: queue container?

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 03:18:27 -0400, Gor Gyolchanyan  
 wrote:



The best implementation of queue IMO is using a doubly-linked list,
which is missing from Phobos.
I wanted to implement a thread-safe queue too, but I failed due to the
lack of appropriate lists.


www.dsource.org/projects/dcollections

Has both a doubly-linked list and deque implementation.

Not thread-safe however...

-Steve


Re: Phobos 'collections' question

2011-10-26 Thread Steven Schveighoffer

On Mon, 24 Oct 2011 22:50:33 -0400, Marco Leise  wrote:

Am 14.09.2011, 18:57 Uhr, schrieb Steven Schveighoffer  
:


On Wed, 14 Sep 2011 12:50:25 -0400, Timon Gehr   
wrote:



On 09/14/2011 04:08 PM, Robert McGinley wrote:

Hey all,
Mostly as an exercise I'm considering writing an ArrayList, AVL tree,  
and possible other standard data structures in D.  I have two  
questions.
1.) If completed should I send these around for review and inclusion  
or do they not belong in phobos?
2.) If I'm working on including these in phobos should I put them in  
container.d (that has RedBlack Trees and a Singlelinked List) or is  
there a better location?

Rob


As far as I know, the reason why std.container is not under active  
development, is that phobos does not have an allocator abstraction  
yet. As soon as there is one, the module will probably undergo some  
breaking changes. But I think the more well implemented standard data  
structures there are in Phobos, the better. I think as soon as the  
standard allocator interface is settled on, your efforts will be  
welcome. Steve can probably answer your question better though.


Certainly more containers are welcome.

The review for getting things into phobos is done via github.  You do  
not need write permission to generate a pull request.  Yes, they should  
all be put into std.container for now.


I'd recommend doing one pull request per container, that way one  
container type does not detract from the inclusion of another.


I don't think that lack of allocators should prevent implementing  
containers.  My collection package  
(www.dsource.org/projects/dcollections) uses allocators, and they're  
pretty orthogonal to the operation of the container.


BTW, feel free to use any ideas/code from dcollections, it's also boost  
licensed.  Note that the red black tree implementation in phobos is  
copied verbatim from dcollections.  If you implement a good AVL tree, I  
might even steal it for dcollections ;)  (with attribution, of course!)


-Steve


I recently had the need for a priority queue and your library was the  
obvious choice. But it did the same that my code did when I ported it  
from 32-bit to 64-bit: array.length is no longer a uint, but a ulong, so  
the code breaks. So my advice is to use size_t when you deal with a  
natural number that can be up to the amount of addressable memory.


The latest (unreleased) version of dcollections uses size_t and ptrdiff_t  
everywhere instead of uint and int.  See here:  
http://www.dsource.org/projects/dcollections/ticket/14


I have to release a new beta soon, especially when inout works in the  
latest impending compiler release.


-Steve


Re: Why the hell doesn't foreach decode strings

2011-10-26 Thread Michel Fortin
On 2011-10-26 11:50:32 +, "Steven Schveighoffer" 
 said:



It's even easier than this:

a) you want to do a proper string comparison not knowing what state the
unicode strings are in, use the full-fledged decode-when-needed string
type, and its associated str.find method.
b) you know they are both the same normalized form and want to optimize,
use std.algorithm.find(haystack.asArray, needle.asArray).


Well, treating the string as an array of dchar doesn't work in the 
general case; even with strings normalized the same way, your fiancé 
example can break. So I should never treat them as plain arrays unless 
I'm sure I have no combining marks in the string.


I'm not opposed to having a new string type being developed, but I'm 
skeptical about its inclusion in the language. We already have three 
string types which can be assumed to contain valid UTF sequences. I 
think the first thing to do is not to develop a new string type, but to 
develop the normalization and grapheme splitting algorithms, and those 
to find a substring, using the existing char[], wchar[] and dchar[] 
types.  Then write a program with proper handling of Unicode using 
those and hand-optimize it. If that proves to be a pain (it might well 
be), write a new string type, rewrite the program using it, do some 
benchmarks and then we'll know if it's a good idea, and will be able to 
quantify the drawbacks.


But right now, all this arguing for or against a new string type is 
stacking hypothesis against other hypothesis, it won't lead anywhere.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Phobos 'collections' question

2011-10-26 Thread Steven Schveighoffer
On Tue, 25 Oct 2011 01:00:51 -0400, Andrew Wiley  
 wrote:



On Mon, Oct 24, 2011 at 9:50 PM, Marco Leise  wrote:


Am 14.09.2011, 18:57 Uhr, schrieb Steven Schveighoffer <
schvei...@yahoo.com>:


 On Wed, 14 Sep 2011 12:50:25 -0400, Timon Gehr   
wrote:


 On 09/14/2011 04:08 PM, Robert McGinley wrote:



Hey all,
Mostly as an exercise I'm considering writing an ArrayList, AVL tree,
and possible other standard data structures in D.  I have two  
questions.
1.) If completed should I send these around for review and inclusion  
or

do they not belong in phobos?
2.) If I'm working on including these in phobos should I put them in
container.d (that has RedBlack Trees and a Singlelinked List) or is  
there a

better location?
Rob



As far as I know, the reason why std.container is not under active
development, is that phobos does not have an allocator abstraction  
yet. As

soon as there is one, the module will probably undergo some breaking
changes. But I think the more well implemented standard data  
structures
there are in Phobos, the better. I think as soon as the standard  
allocator
interface is settled on, your efforts will be welcome. Steve can  
probably

answer your question better though.



Certainly more containers are welcome.

The review for getting things into phobos is done via github.  You do  
not
need write permission to generate a pull request.  Yes, they should  
all be

put into std.container for now.

I'd recommend doing one pull request per container, that way one  
container

type does not detract from the inclusion of another.

I don't think that lack of allocators should prevent implementing
containers.  My collection package (www.dsource.org/projects/dcollections)
uses
allocators, and they're pretty orthogonal to the operation of the  
container.


BTW, feel free to use any ideas/code from dcollections, it's also boost
licensed.  Note that the red black tree implementation in phobos is  
copied
verbatim from dcollections.  If you implement a good AVL tree, I might  
even

steal it for dcollections ;)  (with attribution, of course!)

-Steve



I recently had the need for a priority queue and your library was the
obvious choice. But it did the same that my code did when I ported it  
from
32-bit to 64-bit: array.length is no longer a uint, but a ulong, so the  
code
breaks. So my advice is to use size_t when you deal with a natural  
number

that can be up to the amount of addressable memory.



Wait, dcollections has a PriorityQueue?


No.  Not yet.

You could use a tree for that, but my understanding is that a heap is  
much

more efficient?


Yes.  I have a feeling dcollections will use heap from phobos (to avoid  
duplication) for a full priority queue.


-Steve


Re: Free?

2011-10-26 Thread Steven Schveighoffer

On Wed, 26 Oct 2011 09:38:31 -0400, Kagamin  wrote:


Steven Schveighoffer Wrote:


So you're saying the code you write as FOSS should cost something (i.e.
you want something in return)?  Interesting...


It's just two paradigms: if you choose freedom, GPL ensures and protects  
the freedom. You can also provide your efforts to corporations - why  
not? - proprietary licenses and patents are *adequate* means to do it.


I choose to provide my efforts to *anyone* without discrimination.  I.e.  
freedom without strings attached.


-Steve


Re: Dynamic alter-ego of D.

2011-10-26 Thread Steven Schveighoffer
On Tue, 25 Oct 2011 06:00:28 -0400, Gor Gyolchanyan  
 wrote:



I think adding more dynamic typing to D would be a splendid idea to
further widen the variety of solutions for different problems.
Modular app development is a very good practice and modularity means
dynamicity, which in turn means that one needs to give up on lots of
sweet stuff like templates, overloading and string mixins.
Variant is the first step towards dynamic alter-ego of D, which is
completely undeveloped currently.
Although my benchmarks show that Variant is quite slow, and I'd really
like a version of Variant optimized for ultimate performance.
Also, there is lots of stuff that I really need at run-time, which I
only have at compile-time:
* Interfaces. dynamic interfaces are very important to highly modular
applications, heavily based on plugins. By dynamic interfaces i mean
the set of attributes and methods of an object, which is only known at
run-time.
* Overloading. A.K.A multimethods. required by dynamic interfaces.
* Dynamic templates. In other words, value-based overloading (since
type is yet another attribute of dynamically typed data).

Dynamic interfaces are also different from static ones because the
interface isn't _implemented_ by a type, it's _matched_ by a type,
which means, that if a type fits into an interface at run-time, i can
safely cast it to that interface type and work with it.

Being able to obtain the dynamic version of a delegate would be great
to pass around generic callbacks.


Have you looked into opDispatch?

I think with opDispatch and some Variant[string] member, you should be  
able to implement fully dynamic types.  I think even some have done this  
in the past (even if only for proof of concept).
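A minimal sketch of that idea (the name DynObject and its members are illustrative, not an existing library):

```d
import std.variant;

// A minimal dynamic type: unknown member accesses are forwarded by
// opDispatch to a runtime table of Variant properties.
struct DynObject
{
    Variant[string] props;

    // getter: obj.foo
    Variant opDispatch(string name)()
    {
        return props[name];
    }

    // setter: obj.foo = 42
    void opDispatch(string name, T)(T value)
    {
        props[name] = Variant(value);
    }
}

void main()
{
    DynObject obj;
    obj.answer = 42;          // stored under the key "answer" at run-time
    assert(obj.answer == 42); // looked up at run-time
}
```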


-Steve


Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
It should have both shared and unshared implementations of methods to
be a full-fledged container.
Also, this kind of thing is too commonly used to be outside Phobos.

On Wed, Oct 26, 2011 at 5:55 PM, Steven Schveighoffer
 wrote:
> On Wed, 26 Oct 2011 03:18:27 -0400, Gor Gyolchanyan
>  wrote:
>
>> The best implementation of queue IMO is using a doubly-linked list,
>> which is missing from Phobos.
>> I wanted to implement a thread-safe queue too, but I failed due to the
>> lack of appropriate lists.
>
> www.dsource.org/projects/dcollections
>
> Has both a doubly-linked list and deque implementation.
>
> Not thread-safe however...
>
> -Steve
>


Re: Dynamic alter-ego of D.

2011-10-26 Thread Gor Gyolchanyan
1. opDispatch is no good for overloading when the set of methods is
defined at run-time.
2. opDispatch doesn't cover operators (why?).

On Wed, Oct 26, 2011 at 6:09 PM, Steven Schveighoffer
 wrote:
> On Tue, 25 Oct 2011 06:00:28 -0400, Gor Gyolchanyan
>  wrote:
>
>> I think adding more dynamic typing to D would be a splendid idea to
>> further widen the variety of solutions for different problems.
>> Modular app development is a very good practice, and modularity means
>> dynamism, which in turn means that one needs to give up on lots of
>> sweet stuff like templates, overloading and string mixins.
>> Variant is the first step towards the dynamic alter-ego of D, which is
>> completely undeveloped currently.
>> My benchmarks show, though, that Variant is quite slow, and I'd really
>> like a version of Variant optimized for ultimate performance.
>> Also, there is a lot of stuff that I really need at run-time, which I
>> only have at compile-time:
>> * Interfaces. Dynamic interfaces are very important to highly modular
>> applications, heavily based on plugins. By dynamic interfaces I mean
>> the set of attributes and methods of an object, which is only known at
>> run-time.
>> * Overloading, a.k.a. multimethods. Required by dynamic interfaces.
>> * Dynamic templates. In other words, value-based overloading (since
>> type is yet another attribute of dynamically typed data).
>>
>> Dynamic interfaces are also different from static ones because the
>> interface isn't _implemented_ by a type, it's _matched_ by a type,
>> which means that if a type fits into an interface at run-time, I can
>> safely cast it to that interface type and work with it.
>>
>> Being able to obtain the dynamic version of a delegate would be great
>> to pass around generic callbacks.
>
> Have you looked into opDispatch?
>
> I think with opDispatch and some Variant[string] member, you should be able
> to implement fully dynamic types.  I think even some have done this in the
> past (even if only for proof of concept).
>
> -Steve
>


Re: Dynamic alter-ego of D.

2011-10-26 Thread deadalnix

Le 25/10/2011 17:39, Gor Gyolchanyan a écrit :

That's the point. You don't always need to carry around your type.
Also because you might do it in a specific and very efficient way.
Imposing a single way of storing the type is a very inflexible decision.
The type check may also be done at different points, after which it's
not necessary to carry around.
Also, as i said before, type checking WILL be done in debug mode and
dropped in release mode.
Variant could be built on that typeless value concept to ensure type
safety at all times.



Well, I don't know how you can handle type safety on an untyped 
variable. That doesn't make sense to me.


What you want to do will end up being more complicated than Variant, but 
without the guarantee that strong typing gives you. You'll lose on every 
facet of the problem.


Re: Why the hell doesn't foreach decode strings

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 10:04:08 -0400, Michel Fortin  
 wrote:


On 2011-10-26 11:50:32 +, "Steven Schveighoffer"  
 said:



It's even easier than this:
 a) you want to do a proper string comparison not knowing what state the
unicode strings are in, use the full-fledged decode-when-needed string
type, and its associated str.find method.
b) you know they are both the same normalized form and want to optimize,
use std.algorithm.find(haystack.asArray, needle.asArray).


Well, treating the string as an array of dchar doesn't work in the  
general case: even with strings normalized the same way, your fiancé  
example can break. So I should never treat them as plain arrays unless I'm  
sure I have no combining marks in the string.


If you have the same normalization, you would at least find all the valid  
instances.  You'd have to eliminate false positives, by decoding the next  
dchar after the match to see if it's a combining character.  This would be  
a simple function to write.
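A sketch of that check (hedged: `combiningClass` is from the current std.uni; at the time of this thread the helper would have had to be hand-written):

```d
import std.uni : combiningClass;
import std.utf : decode;

// After finding `needle` at index `idx` in `haystack`, reject the match
// if the code point right after it is a combining mark, since that mark
// would alter the last character of the match.
bool isRealMatch(string haystack, string needle, size_t idx)
{
    size_t end = idx + needle.length;
    if (end >= haystack.length)
        return true;                    // nothing follows the match
    dchar next = decode(haystack, end); // decode advances `end`
    return combiningClass(next) == 0;   // 0 means not a combining mark
}
```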


All I'm saying is, doing a search on the array should be available.



I'm not opposed to having a new string type being developed, but I'm  
skeptical about its inclusion in the language. We already have three  
string types which can be assumed to contain valid UTF sequences. I  
think the first thing to do is not to develop a new string type, but to  
develop the normalization and grapheme splitting algorithms, and those  
to find a substring, using the existing char[], wchar[] and dchar[]  
types.  Then write a program with proper handling of Unicode using those  
and hand-optimize it. If that proves to be a pain (it might well be),  
write a new string type, rewrite the program using it, do some  
benchmarks and then we'll know if it's a good idea, and will be able to  
quantify the drawbacks.


But right now, all this arguing for or against a new string type is  
stacking hypothesis against other hypothesis, it won't lead anywhere.




I have a half-implemented string type which does just this.  I need to  
finish it.  There are some language barriers that need to be sorted out  
(such as how to deal with tail-const for arbitrary types).


I think a user who no longer posts here (spir) had some full  
implementation of unicode strings using classes, though I don't see that  
being comparable in performance.


-Steve


Re: Dynamic alter-ego of D.

2011-10-26 Thread Adam Ruppe
You can use opDispatch to make runtime methods and properties
by having it forward to a function to do the lookup.

Something along these lines:

// A table of dynamic methods, indexed by name.
DynamicObject delegate(DynamicObject[] args)[string] dynamicFunctions;

DynamicObject opDispatch(string name, T...)(T t) {
  if(name !in dynamicFunctions) throw new MethodNotFoundException(name);
  DynamicObject[] args;
  foreach(arg; t)
   args ~= new DynamicObject(arg);

  return dynamicFunctions[name](args);
}


I've done a more complete implementation before, but don't have
it on hand right now.


Re: Dynamic alter-ego of D.

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 10:15:33 -0400, Gor Gyolchanyan  
 wrote:



1. opDispatch is no good for overloading when the set of methods is
defined at run-time.


For those cases, you must use the runtime interface that opDispatch will  
use directly.  D does not have a way to call a function by string at  
runtime.


call("functionName", arg1, arg2, arg3)

opDispatch simply provides a way to write cleaner code when you know the  
name of the function.


Even in dynamic languages, it's uncommon to call functions based on  
runtime strings.



2. opDispatch doesn't cover operators (why?).


operators are not function symbols, and are already covered by templates.   
It should be as simple as:


auto opBinary(string s, T...)(T args) if(isValidOperator!s)
{
   return call(s, args);
}

-Steve


Re: build system

2011-10-26 Thread Jesse Phillips
On Wed, 26 Oct 2011 14:26:56 +0400, Gor Gyolchanyan wrote:

> I had a few thoughts about integrating build awareness into DMD.
> It would be really cool to add a flag to DMD to make it compile and link
> in all import-referenced modules.
> Also, it would be awesome to store basic build information in modules
> themselves in the form of special comments (much like documentation
> comments), where one could specify external build dependencies, output
> type, etc.
> There would be no need for makefiles and extra build systems. You'd just
> feed an arbitrary module to the compiler and the compiler would build
> the target, to which that module belongs (by parsing build comments and
> package hierarchies).
> Wouldn't this be a good thing to have?

Do you know about rdmd and pragma(lib,...) ?


Re: queue container?

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan  
 wrote:



It should have both shared and unshared implementations of methods to
be a full-fledged container.


Maybe, maybe not.

I'm reluctant to add a copy of all functions to the containers just to  
support shared.  We don't have a const or inout equivalent for shared.


I have a feeling this problem will come to a head eventually and be solved  
by someone smarter than me.



Also, this kind of thing is too commonly used to be outside Phobos.


I would agree with you.  dcollections was proposed as the container  
library for Phobos, but Andrei and I could not come to a compromise about  
the design, and so he created std.container instead.  There should be some  
history on the newsgroups; look for my post on d.announce when  
dcollections 2.0 was first announced.


Feel free to steal anything from dcollections to put into std.container,  
std.container.RedBlackTree is already a verbatim copy of dcollections'  
version, I put it in there.  Both have the same license, so there should  
be no issue.  I just happen to prefer the design and philosophies of  
dcollections instead of std.container, so I have little incentive to add  
to std.container.


-Steve


Re: Dynamic alter-ego of D.

2011-10-26 Thread Gor Gyolchanyan
I know how opDispatch works. opDispatch is purely syntactic sugar (a
really neat kind, IMO).
What I'm talking about is a way to choose between a set of functions,
based on the parameters.
It's basically dynamic overloading, but with the additional ability to
overload based on values (much like template constraints).
opDispatch, then, could be a good way of hiding ugly code like
call("methodName", ...);

I never said that I want this stuff to be built-in. This is one of
those cool things that many people could use as library solutions.
Such dynamic functions with powerful dynamic overloading and the help
of opDispatch would be an unprecedented, awesome solution for event
handlers and callbacks.

On Wed, Oct 26, 2011 at 6:26 PM, Steven Schveighoffer
 wrote:
> On Wed, 26 Oct 2011 10:15:33 -0400, Gor Gyolchanyan
>  wrote:
>
>> 1. opDispatch is no good for overloading when the set of methods is
>> defined at run-time.
>
> For those cases, you must use the runtime interface that opDispatch will use
> directly.  D does not have a way to call a function by string at runtime.
>
> call("functionName", arg1, arg2, arg3)
>
> opDispatch simply provides a way to write cleaner code when you know the
> name of the function.
>
> Even in dynamic languages, it's uncommon to call functions based on runtime
> strings.
>
>> 2. opDispatch doesn't cover operators (why?).
>
> operators are not function symbols, and are already covered by templates.
>  It should be as simple as:
>
> auto opBinary(string s, T...)(T args) if(isValidOperator!s)
> {
>   return call(s, args);
> }
>
> -Steve
>


Re: build system

2011-10-26 Thread Gor Gyolchanyan
Obviously. I'm using rdmd currently, because my work primarily
consists of research and I haven't collected a large enough code base.
It's not fit for building lots of targets from a big code base.
pragma(lib, ...) is just the tip of the iceberg I'm talking about.

On Wed, Oct 26, 2011 at 6:27 PM, Jesse Phillips
 wrote:
> On Wed, 26 Oct 2011 14:26:56 +0400, Gor Gyolchanyan wrote:
>
>> I had a few thoughts about integrating build awareness into DMD.
>> It would be really cool to add a flag to DMD to make it compile and link
>> in all import-referenced modules.
>> Also, it would be awesome to store basic build information in modules
>> themselves in the form of special comments (much like documentation
>> comments), where one could specify external build dependencies, output
>> type, etc.
>> There would be no need for makefiles and extra build systems. You'd just
>> feed an arbitrary module to the compiler and the compiler would build
>> the target, to which that module belongs (by parsing build comments and
>> package hierarchies).
>> Wouldn't this be a good thing to have?
>
> Do you know about rdmd and pragma(lib,...) ?
>


Re: Why the hell doesn't foreach decode strings

2011-10-26 Thread Michel Fortin
On 2011-10-26 14:20:54 +, "Steven Schveighoffer" 
 said:



If you have the same normalization, you would at least find all the valid
instances.  You'd have to eliminate false positives, by decoding the next
dchar after the match to see if it's a combining character.  This would be
a simple function to write.

All I'm saying is, doing a search on the array should be available.


And all I'm saying is that such a standalone function that eliminates 
the false positives should be available outside of a string type too. 
You'll likely need it for your string type anyway, but there's no 
reason you shouldn't be able to use it standalone on a 
char[]/wchar[]/dchar[].


I think most string algorithms should be able to work with normalized 
character arrays; a new string type should just be a shell around these 
that makes things easier to the user.




I have a half-implemented string type which does just this.  I need to
finish it.


That should be interesting.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
Thread-awareness of D opens up a large heap of opportunities! They are
not utilized because of poor support. As soon as you start sharing
something, the libraries you use start to throw tons of errors.
I think the infamous __gshared is the "const" of shared.

On Wed, Oct 26, 2011 at 6:36 PM, Steven Schveighoffer
 wrote:
> On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan
>  wrote:
>
>> It should have both shared and unshared implementations of methods to
>> be a full-fledged container.
>
> Maybe, maybe not.
>
> I'm reluctant to add a copy of all functions to the containers just to
> support shared.  We don't have a const or inout equivalent for shared.
>
> I have a feeling this problem will come to a head eventually and be solved
> by someone smarter than me.
>
>> Also, this kind of thing is too commonly used to be outside Phobos.
>
> I would agree with you.  dcollections was proposed as the container library
> for Phobos, Andrei and I could not come to a compromise about the design,
> and so he created std.container instead.  There should be some history on
> the newsgroups, look for my post on d.announce when dcollections 2.0 was
> first announced.
>
> Feel free to steal anything from dcollections to put into std.container,
> std.container.RedBlackTree is already a verbatim copy of dcollections'
> version, I put it in there.  Both have the same license, so there should be
> no issue.  I just happen to prefer the design and philosophies of
> dcollections instead of std.container, so I have little incentive to add to
> std.container.
>
> -Steve
>


dynamic overloading with constraints for callbacks

2011-10-26 Thread Gor Gyolchanyan
Does anybody know a good implementation of dynamic overloading with
constraints that I could use?
Maybe the one used in D for templates?


Re: Free?

2011-10-26 Thread Jeff Nowakowski

On 10/26/2011 12:51 AM, Nick Sabalausky wrote:


"Jeff Nowakowski"  wrote in message

Nitpicking? Are you serious? GPL has provided immense benefits and
has been voluntarily adopted around the world,



So have the non-viral free licenses.


And if I said they were "Free as in dogshit", would this also be "true"
and not mudslinging?


A proposal for better template constraint error reporting.

2011-10-26 Thread Gor Gyolchanyan
This idea is still raw, so don't expect me to be able to answer all
your questions. This is just a general direction in which we could move
to improve the error reports that occur when templates can't be
instantiated.
What if one could add ddoc comments to parts of the constraint, so
that the ddoc generator could output them in a neat way and the
compiler would use them to report errors:

/// Defines a state machine transition type.
template transition(ActionType)
if (
    /// Must be a callable type.
    isCallable!ActionType &&
    /// Must return a state.
    is(ReturnType!ActionType : State) &&
    /// Must take exactly two parameters.
    ParameterTypeTuple!(ActionType).length == 2 &&
    /// The first parameter must be an event.
    is(ParameterTypeTuple!(ActionType)[0] : Event) &&
    /// The second parameter must be a state.
    is(ParameterTypeTuple!(ActionType)[1] : State)
)
{
    alias ParameterTypeTuple!(ActionType)[1] FromState;
    alias ReturnType!ActionType ToState;
    alias ParameterTypeTuple!(ActionType)[0] Event;
}

The ddoc comments preceding parts of the template constraint would be
used to specify why exactly the template failed to instantiate, for
example:

Error: Cannot instantiate template demo01.transition(ActionType)
because the "Must return a state." constraint is not satisfied.

Writing specializations of the template for every different combination
of failed constraints just to static assert(0) in them is too tedious to
do every time.
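For reference, the tedious workaround being described looks roughly like this (a sketch; the Event/State classes and the isTransition helper are stand-ins for illustration):

```d
import std.traits : isCallable, ReturnType, ParameterTypeTuple;

class Event {}
class State {}

// Combined constraint, guarded with static if so that ReturnType and
// ParameterTypeTuple are never instantiated for non-callable types.
template isTransition(ActionType)
{
    static if (!isCallable!ActionType)
        enum isTransition = false;
    else
        enum isTransition =
            is(ReturnType!ActionType : State) &&
            ParameterTypeTuple!ActionType.length == 2 &&
            is(ParameterTypeTuple!ActionType[0] : Event) &&
            is(ParameterTypeTuple!ActionType[1] : State);
}

// The real template, guarded by the combined constraint.
template transition(ActionType)
    if (isTransition!ActionType)
{
    alias ReturnType!ActionType ToState;
}

// Fallback specialization: instantiated only when the constraint fails;
// it re-checks each sub-condition to emit a specific message.
template transition(ActionType)
    if (!isTransition!ActionType)
{
    static if (!isCallable!ActionType)
        static assert(0, "Must be a callable type.");
    else static if (!is(ReturnType!ActionType : State))
        static assert(0, "Must return a state.");
    else static if (ParameterTypeTuple!ActionType.length != 2)
        static assert(0, "Must take exactly two parameters.");
    else static if (!is(ParameterTypeTuple!ActionType[0] : Event))
        static assert(0, "The first parameter must be an event.");
    else
        static assert(0, "The second parameter must be a state.");
}
```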


Re: Counting passed/failed unit tests

2011-10-26 Thread David Gileadi

On 10/25/11 4:04 AM, Jacob Carlborg wrote:

On 2011-10-24 22:08, Jonathan M Davis wrote:

On Monday, October 24, 2011 11:23 Andrej Mitrovic wrote:

I'm not sure why it just stops after the first failing unittest
though. What is the point of that 'failed' counter?


It's a long standing issue that when one unit test fails within a
module, no
more within that module are run (though fortunately, a while back it
was fixed
so that other modules' unit tests will still run). As I recall, there
had to
be a change to the compiler to fix it, but I don't known/remember the
details.
Certainly, the issue still stands.

- Jonathan M Davis


A workaround is to catch AssertErrors, hook it up with some library code
and you get a minimal unit test framework:
https://github.com/jacob-carlborg/orange/blob/master/orange/test/UnitTester.d


Example of usage:

https://github.com/jacob-carlborg/orange/blob/master/tests/Object.d


As an argument for continuing to run tests after one fails, I'm taking a 
TDD class and the instructor asserted that for unit tests you should 
generally only have one or two assertions per test method.  His 
reasoning is that when something breaks you immediately know the extent 
of your breakage by counting the number of failed methods.  This 
argument is pretty convincing to me.


Re: Own type for null?

2011-10-26 Thread Benjamin Thaut

Am 26.10.2011 10:56, schrieb Timon Gehr:

On 10/26/2011 07:49 AM, Benjamin Thaut wrote:

I recently tried to replace the deprecated overloading of new and delete
and came across a serious issue. You can not use std.conv.emplace with
null. If you pass null to it, null loses its implicit casting
capabilities and just becomes a void*.
This issue pretty much exists with every template. As soon as you pass
null to a template, compile-time information gets lost.

Besides fixing std.conv.emplace it could be really handy to be able to
check for a null-type at compile time for example with non-nullable
types.

There is already an enhancement request in Bugzilla since January, but it
didn't get much attention yet:
http://d.puremagic.com/issues/show_bug.cgi?id=5416

Would this be a worthwhile improvement for the language?


++vote.

We also need a dedicated type for the empty array literal '[]', btw.


Do you have an example where a dedicated type for [] would be useful?

--
Kind Regards
Benjamin Thaut


Re: build system

2011-10-26 Thread jsternberg
Gor Gyolchanyan Wrote:

> I had a few thoughts about integrating build awareness into DMD.
> It would be really cool to add a flag to DMD to make it compile and
> link in all import-referenced modules.
> Also, it would be awesome to store basic build information in modules
> themselves in the form of special comments (much like documentation
> comments), where one could specify external build dependencies, output
> type, etc.
> There would be no need for makefiles and extra build systems. You'd
> just feed an arbitrary module to the compiler and the compiler would
> build the target, to which that module belongs (by parsing build
> comments and package hierarchies).
> Wouldn't this be a good thing to have?

Take a look at Cabal and ghc-pkg/ghc --make from Haskell. I've been thinking 
about starting to put together something like this, but I haven't found the 
motivation to do it. This type of package management and build integration 
would go a long way to making D less decentralized.

One thing that would be needed to complete such a system is proper support for 
linking shared libraries (at least on Linux).

There are a bunch of things the build system could do that would make 
development in D much easier.

If you're interested in doing this, you should contact me on the IRC channel.


Re: queue container?

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 10:47:10 -0400, Gor Gyolchanyan  
 wrote:



Thread-awareness of D opens up a large heap of opportunities! They are
not utilized because of poor support. As soon as you start sharing
something, the libraries you use start to throw tons of errors.


I agree shared is too rough to use right now on existing libraries.  But  
I'm not about to worry about shared natively in collections.


I'd hazard to guess that something could be cobbled together with a  
wrapper class that synchronized a collection of shared types.  Then you  
create this wrapper to be able to use a shared collection.


But that doesn't make a collection "shared-aware", it just makes it thread  
safe.
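A rough sketch of such a wrapper (the name SynchronizedQueue and the slice-backed storage are illustrative; a real version would wrap an actual container):

```d
import core.sync.mutex : Mutex;

// Serialize access to an unshared queue behind a mutex. This makes the
// operations thread safe without the container itself being shared-aware.
class SynchronizedQueue(T)
{
    private T[] data;   // stand-in for a real container
    private Mutex lock;

    this() { lock = new Mutex; }

    void enqueue(T value)
    {
        lock.lock();
        scope(exit) lock.unlock();
        data ~= value;
    }

    T dequeue()
    {
        lock.lock();
        scope(exit) lock.unlock();
        auto front = data[0];
        data = data[1 .. $];
        return front;
    }
}
```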



I think the infamous __gshared is the "const" of shared.


Most definitely __gshared is not the const of shared.  It just means store  
a type in the global namespace without having it be shared.  It does not  
respect shared, nor is it a type modifier (as const is).


-Steve


Re: Own type for null?

2011-10-26 Thread Gor Gyolchanyan
Functions overloaded on different types of arrays will always fail
with empty arrays unless empty arrays have their own type.

On Wed, Oct 26, 2011 at 7:55 PM, Benjamin Thaut  wrote:
> Am 26.10.2011 10:56, schrieb Timon Gehr:
>>
>> On 10/26/2011 07:49 AM, Benjamin Thaut wrote:
>>>
>>> I recently tried to replace the deprecated overloading of new and delete
>>> and came across a serious issue. You can not use std.conv.emplace with
>>> null. If you pass null to it, null loses its implicit casting
>>> capabilities and just becomes a void*.
>>> This issue pretty much exists with every template. As soon as you pass
>>> null to a template, compile-time information gets lost.
>>>
>>> Besides fixing std.conv.emplace it could be really handy to be able to
>>> check for a null-type at compile time for example with non-nullable
>>> types.
>>>
>>> There is already an enhancement request in Bugzilla since January, but it
>>> didn't get much attention yet:
>>> http://d.puremagic.com/issues/show_bug.cgi?id=5416
>>>
>>> Would this be a worthwhile improvement for the language?
>>
>> ++vote.
>>
>> We also need a dedicated type for the empty array literal '[]', btw.
>
> Do you have an example where a dedicated type for [] would be useful?
>
> --
> Kind Regards
> Benjamin Thaut
>


Re: build system

2011-10-26 Thread Gor Gyolchanyan
I'm certainly interested in doing it, but I'm not familiar with that
IRC channel of yours.
If you need me, here's my skype: gor_f_gyolchanyan. Please mention
digitalmatr...@puremagic.com if you decide to send an authorization
request.
I'm almost always available.

On Wed, Oct 26, 2011 at 8:01 PM, jsternberg  wrote:
> Gor Gyolchanyan Wrote:
>
>> I had a few thoughts about integrating build awareness into DMD.
>> It would be really cool to add a flag to DMD to make it compile and
>> link in all import-referenced modules.
>> Also, it would be awesome to store basic build information in modules
>> themselves in the form of special comments (much like documentation
>> comments), where one could specify external build dependencies, output
>> type, etc.
>> There would be no need for makefiles and extra build systems. You'd
>> just feed an arbitrary module to the compiler and the compiler would
>> build the target, to which that module belongs (by parsing build
>> comments and package hierarchies).
>> Wouldn't this be a good thing to have?
>
> Take a look at Cabal and ghc-pkg/ghc --make from Haskell. I've been thinking 
> about starting to put together something like this, but I haven't found the 
> motivation to do it. This type of package management and build integration 
> would go a long way to making D less decentralized.
>
> One thing that would be needed to complete such a system is proper support 
> for linking shared libraries (at least on Linux).
>
> There are a bunch of things the build system could do that would make 
> development in D much easier.
>
> If you're interested in doing this, you should contact me on the IRC channel.
>


Re: Own type for null?

2011-10-26 Thread Timon Gehr

On 10/26/2011 05:55 PM, Benjamin Thaut wrote:

Am 26.10.2011 10:56, schrieb Timon Gehr:

On 10/26/2011 07:49 AM, Benjamin Thaut wrote:

I recently tried to replace the deprecated overloading of new and delete
and came across a serious issue. You can not use std.conv.emplace with
null. If you pass null to it, null loses its implicit casting
capabilities and just becomes a void*.
This issue pretty much exists with every template. As soon as you pass
null to a template, compile-time information gets lost.

Besides fixing std.conv.emplace it could be really handy to be able to
check for a null-type at compile time for example with non-nullable
types.

There is already an enhancement request in Bugzilla since January, but it
didn't get much attention yet:
http://d.puremagic.com/issues/show_bug.cgi?id=5416

Would this be a worthwhile improvement for the language?


++vote.

We also need a dedicated type for the empty array literal '[]', btw.


Do you have an example where a dedicated type for [] would be useful?



It is the same rationale as the one for null.

E.g.:

class Class{
this(int,string,double[]) {}
}

auto New(T,A...)(A args) {
return new T(args);
}

void main(){
auto a = new Class(1,"hi",[]); // fine
auto b = New!Class(1,"hi",[]); // would require [] to have its own type
}
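Worth noting: later D compilers do give null a distinct type, `typeof(null)`, which makes exactly this kind of overloading expressible. A minimal sketch (assuming a compiler with that feature):

```d
import std.stdio;

// Overloads distinguished by the null literal's own type versus a
// concrete pointer type.
void f(typeof(null) p) { writeln("null literal"); }
void f(int* p)         { writeln("int pointer"); }

void main()
{
    f(null);   // exact match: resolves to the typeof(null) overload
    int x;
    f(&x);     // resolves to the int* overload
}
```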


Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
I don't get why you don't like the idea of having thread-safe
versions of the containers' methods, so they would be usable as shared
objects.

On Wed, Oct 26, 2011 at 8:08 PM, Steven Schveighoffer
 wrote:
> On Wed, 26 Oct 2011 10:47:10 -0400, Gor Gyolchanyan
>  wrote:
>
>> Thread-awareness of D opens up a large heap of opportunities! They are
>> not utilized because of poor support. As soon as you start sharing
>> something, the libraries you use start to throw tons of errors.
>
> I agree shared is too rough to use right now on existing libraries.  But I'm
> not about to worry about shared natively in collections.
>
> I'd hazard to guess that something could be cobbled together with a wrapper
> class that synchronized a collection of shared types.  Then you create this
> wrapper to be able to use a shared collection.
>
> But that doesn't make a collection "shared-aware", it just makes it thread
> safe.
>
>> I think the infamous __gshared is the "const" of shared.
>
> Most definitely __gshared is not the const of shared.  It just means store a
> type in the global namespace without having it be shared.  It does not
> respect shared, nor is it a type modifier (as const is).
>
> -Steve
>


Re: queue container?

2011-10-26 Thread Timon Gehr

On 10/26/2011 04:36 PM, Steven Schveighoffer wrote:

On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan
 wrote:


It should have both shared and unshared implementations of methods to
be a full-fledged container.


Maybe, maybe not.

I'm reluctant to add a copy of all functions to the containers just to
support shared. We don't have a const or inout equivalent for shared.



Copying all the methods and marking them with shared does not make the 
code thread safe. The shared and unshared versions would have to be 
different anyway. There is no way to get something that relates to 
shared/unshared as const does to mutable/immutable.


Re: Own type for null?

2011-10-26 Thread deadalnix

Le 26/10/2011 09:20, Gor Gyolchanyan a écrit :

I agree. Null is a very common special-case value and overloading may
be necessary based on that special-case value. Currently doing so
means accepting any kind of typeless pointer (which is certainly not
desirable).



+1 for me! And +1 for non-nullable types.


Re: build system

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 04:44 Gor Gyolchanyan wrote:
> That's the problem - no working build tool exists for D.
> Why don't you like the idea of integrating build information in source
> code? I mean, that information does not change for a given source
> file.

That's just plain messy. And the code that's actually imported could depend on 
compiler flags (such as which import directories are given), allowing you to 
swap out implementations and the like. Hard coding stuff in the source file 
would harm that and IMHO would just clutter the source file.

- Jonathan M Davis


Re: Counting passed/failed unit tests

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 08:40 David Gileadi wrote:
> On 10/25/11 4:04 AM, Jacob Carlborg wrote:
> > On 2011-10-24 22:08, Jonathan M Davis wrote:
> >> On Monday, October 24, 2011 11:23 Andrej Mitrovic wrote:
> >>> I'm not sure why it just stops after the first failing unittest
> >>> though. What is the point of that 'failed' counter?
> >> 
> >> It's a long standing issue that when one unit test fails within a
> >> module, no
> >> more within that module are run (though fortunately, a while back it
> >> was fixed
> >> so that other modules' unit tests will still run). As I recall, there
> >> had to
> >> be a change to the compiler to fix it, but I don't known/remember the
> >> details.
> >> Certainly, the issue still stands.
> >> 
> >> - Jonathan M Davis
> > 
> > A workaround is to catch AssertErrors, hook it up with some library code
> > and you get a minimal unit test framework:
> > https://github.com/jacob-carlborg/orange/blob/master/orange/test/UnitTest
> > er.d
> > 
> > 
> > Example of usage:
> > 
> > https://github.com/jacob-carlborg/orange/blob/master/tests/Object.d
> 
> As an argument for continuing to run tests after one fails, I'm taking a
> TDD class and the instructor asserted that for unit tests you should
> generally only have one or two assertions per test method. His
> reasoning is that when something breaks you immediately know the extent
> of your breakage by counting the number of failed methods. This
> argument is pretty convincing to me.

The value of that would depend on what exactly the tests are doing - 
particularly if tests require some setup - but I could see how it would be 
valuable to essentially list all of the failed assertions rather than just 
failed unittest blocks. However, I don't think that I'd ever do it that way 
simply because the code clutter would be far too great. Even if your unit 
tests consist simply of a bunch of one-liners which are just assertions 
without any setup, having a unittest block per assertion is going to create a 
lot of extra code, which will seriously clutter the file. Now, you have the 
choice to do it either way, which is great, but I don't think that I'd want to 
be leaning towards one assertion per unittest block. I'd generally favor one 
unittest block per function, or at least group of related tests.
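The AssertError-catching workaround Jacob links above boils down to something like this hypothetical minimal runner (all names are illustrative, not Orange's actual API):

```d
import core.exception : AssertError;
import std.stdio;

// Run each test delegate, catch AssertError per test so one failure
// doesn't abort the remaining tests, and count the results.
void runTests(void delegate()[string] tests)
{
    size_t failed = 0;
    foreach (name, test; tests)
    {
        try
            test();
        catch (AssertError e)
        {
            writeln("FAILED: ", name);
            ++failed;
        }
    }
    writefln("%s of %s tests failed", failed, tests.length);
}

void main()
{
    void delegate()[string] tests;
    tests["passes"] = { assert(1 + 1 == 2); };
    tests["fails"]  = { assert(false, "boom"); };
    runTests(tests);
}
```

With something like this, a failed assertion only marks its own named test as failed instead of silencing the rest of the module.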

- Jonathan M Davis


Re: Is This a Bug

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 03:04 bearophile wrote:
> Jonathan M Davis:
> > On Wednesday, October 26, 2011 11:15:20 Gor Gyolchanyan wrote:
> > > I see. But is there any practical advantage of a function being pure?
> > > I mean, besides an optimization hint for the compiler, of course.
> > 
> > 1. You know that it doesn't access global variables, which is at minimum
> > an advantage as far as understanding the code goes.
> 
> The lack of side effects makes it (maybe) less hard to understand code,
> makes testing and unit testing simpler, and allows some optimizations,
> like replacing filter(map()) with a map(filter())...
> 
> In DMD 2.056 several functions and higher order functions like array(),
> map(), filter(), etc, aren't (always) pure, so I think D/Phobos purity
> needs a bit of improvement. This compiles:
> 
> import std.algorithm;
> void main() pure {
>     int[] a;
>     map!((int x){ return x; })(a);
>     map!((x){ return x; })(a);
> }
> 
> 
> This doesn't:
> 
> import std.algorithm, std.array;
> void main() pure {
>     int[] a;
>     map!q{ a }(a);
>     filter!q{ a }(a);
>     array(a);
>     int[int] aa;
>     aa.byKey();
>     aa.byValue();
>     aa.keys;
>     aa.values;
>     aa.get(0, 0);
>     aa.rehash;
> }

That's because the functions for associative arrays haven't been fixed up for 
purity yet. It'll happen. The situation with pure has already improved 
drastically over the last few releases.
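As a rough sketch of the rule described above (pure functions may read immutable module-level state but cannot touch mutable module-level state):

```d
immutable int limit = 100;  // readable from pure code: it can never change
int counter;                // mutable module-level state: off-limits to pure

int clamp(int x) pure
{
    // ++counter;  // would not compile: pure functions cannot mutate
    //             // module-level state
    return x > limit ? limit : x;  // reading `limit` is fine: it's immutable
}

void main()
{
    assert(clamp(150) == 100);
    assert(clamp(42) == 42);
}
```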

- Jonathan M Davis


Re: Early std.crypto

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 03:17 Steve Teale wrote:
> On Tue, 25 Oct 2011 23:45:58 -0700, Jonathan M Davis wrote:
> > On Wednesday, October 26, 2011 06:28:52 Steve Teale wrote:
> >> > An easy test is that if the interface takes a T[] as input, consider
> >> > a range instead. Ditto for output. If an interface takes a File as
> >> > input, it's a red flag that something is wrong.
> >> 
> >> I have a question about ranges that occurred to me when I was composing
> >> a MySQL result set into a random access range.
> >> 
> >> To do that I have to provide the save capability defined by
> >> 
> >> R r1;
> >> R r2 = r1.save;
> >> 
> >> Would it harm the use of ranges as elements of operation sequences if
> >> the type of the entity that got saved was not the same as the original
> >> range provider type. Something like:
> >> 
> >> R r1;
> >> S r2 = r1.save;
> >> r1.restore(r2);
> >> 
> >> For entities with a good deal of state, that implement a range, storing
> >> the whole state, and more significantly, restoring it, may be non-
> >> trivial, but saving and restoring the state of the range may be quite a
> >> lightweight affair.
> > 
> > It's likely to cause issues if the type returned by save differs from
> > the original type. From your description, it sounds like the range is
> > over something, and it's that something which you don't want to copy. If
> > that's the case, then the range just doesn't contain that data. Rather,
> > it's a view into the data, so saving it just save that view.
> 
> Well, yes Jonathan, that's just what I'm getting at. I want to save just
> those things that are pertinent to the range/view, not the whole panoply
> of the result set representation or whatever other object is providing
> the data that the range is a view of.
> 
> I could do it by saving an instance of the object representing the result
> set containing only the relevant data, but that would be misleading for
> the user, because it would not match the object it was cloned from in any
> respect other than representing a range.
> 
> Should not the requirement for the thing provided by save be only that it
> should provide the same view as that provided by the source range at the
> time of the save, and the same range 'interface'.
> 
> Is there an accepted term for the way ranges are defined, i.e. as an
> entity that satisfies some template evaluating roughly to a bool?

Ranges are defined per templates in std.range. isForwardRange, isInputRange, 
isRandomAccessRange, etc. And it looks like isForwardRange specifically checks 
that the return type of save is the same type as the range itself. I was 
thinking that it didn't necessarily require that but that you'd have definite 
issues having save return a different type, because it's generally assumed 
that it's the same type and code will reflect that. However, it looks like 
it's actually required.

What you should probably do is have the result set be an object holding the 
data but not be a range itself and then overload opSlice to give you a range 
over that data (as would be done with a container). Then the range isn't in a 
position where it's trying to own the data, and it's clear to the programmer 
where the data is stored.
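The container-plus-opSlice split suggested above could look roughly like this (all names hypothetical, standing in for the MySQL result set):

```d
import std.range : isForwardRange;

// Sketch: the heavyweight result set owns the data; opSlice hands out a
// cheap view whose save returns the same range type, as isForwardRange
// requires.
struct ResultSet
{
    string[] rows;  // stands in for the expensive result-set state

    static struct Range
    {
        string[] data;  // a view into the rows, not a copy of the result set
        @property bool empty() const { return data.length == 0; }
        @property string front() const { return data[0]; }
        void popFront() { data = data[1 .. $]; }
        @property Range save() { return this; }  // cheap: copies only the view
    }

    Range opSlice() { return Range(rows); }
}

static assert(isForwardRange!(ResultSet.Range));

void main()
{
    auto rs = ResultSet(["a", "b", "c"]);
    auto r = rs[];
    auto saved = r.save;
    r.popFront();
    assert(r.front == "b");
    assert(saved.front == "a");  // the saved view is unaffected
}
```

Because save only copies the view (a slice), it stays lightweight no matter how much state the result set itself carries.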

- Jonathan M Davis


Re: build system

2011-10-26 Thread jsternberg
It's the #d channel on irc.freenode.net. I should have been more specific.

Gor Gyolchanyan Wrote:

> I'm certainly interested in doing it, but I'm not familiar with that
> IRC channel of yours.
> If you need me, here's my skype: gor_f_gyolchanyan. Please mention
> digitalmatr...@puremagic.com if you decide to sent an authorization
> request.
> I'm almost always available.
> 
> On Wed, Oct 26, 2011 at 8:01 PM, jsternberg  
> wrote:
> > Gor Gyolchanyan Wrote:
> >
> >> I had a few thoughts about integrating build awareness into DMD.
> >> It would be really cool to add a flag to DMD to make it compile and
> >> link in all import-referenced modules.
> >> Also, it would be awesome to store basic build information in modules
> >> themselves in the form of special comments (much like documentation
> >> comments), where one could specify external build dependencies, output
> >> type, etc.
> >> There would be no need for makefiles and extra build systems. You'd
> >> just feed an arbitrary module to the compiler and the compiler would
> >> build the target, to which that module belongs (by parsing build
> >> comments and package hierarchies).
> >> Wouldn't this be a good thing to have?
> >
> > Take a look at Cabal and ghc-pkg/ghc --make from Haskell. I've been 
> > thinking about starting to put together something like this, but I haven't 
> > found the motivation to do it. This type of package management and build 
> > integration would go a long way to making D less decentralized.
> >
> > One thing that would be needed to complete such a system is proper support 
> > for linking shared libraries (at least on Linux).
> >
> > There are a bunch of things the build system could do that would make 
> > development in D much easier.
> >
> > If you're interested in doing this, you should contact me on the IRC 
> > channel.
> >



Re: A proposal for better template constraint error reporting.

2011-10-26 Thread bearophile
Gor Gyolchanyan:

> The ddoc comments, preceding parts of template constraints would be
> used to specify why exactly did the template fail to instantiate, for
> example:

Good. Is it possible to do the same thing with unittests?

/// foo
unittest {
  assert(false);
}

==>
foo unittest failed


But maybe this syntax is better:

unittest(foo) {
  assert(false);
}

Bye,
bearophile


Re: queue container?

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 09:35 Timon Gehr wrote:
> On 10/26/2011 04:36 PM, Steven Schveighoffer wrote:
> > On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan
> > 
> >  wrote:
> >> It should have both shared and unshared implementations of methods to
> >> be a full-fledged container.
> > 
> > Maybe, maybe not.
> > 
> > I'm reluctant to add a copy of all functions to the containers just to
> > support shared. We don't have a const or inout equivalent for shared.
> 
> Copying all the methods and marking them with shared does not make the
> code thread safe. The shared and unshared versions would have to be
> different anyways. There is no way to get something that relates to
> shared/unshared as const does to mutable/immutable.

If you really want it thread-safe, you have to mark the class/struct as 
synchronized, which introduces overhead and would be unacceptable in the 
general case. A simple wrapper type would probably suffice for that though. 
The main problem then is being able to have the wrapped instance be shared, 
and if you have to overload all of the functions for that, that's pretty bad 
and would tend to lean toward making shared unusable. Certainly, a large 
amount of code which might otherwise be valuable in some situations as shared 
wouldn't work with it simply because it wasn't worth the time and effort to 
make it possible when it was written.

- Jonathan M Davis


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Gor Gyolchanyan
The first version looks better and the compiler should output the
comment when the unittest fails.
I think all error comments should be marked in a special way to avoid
undesired effects.

On Wed, Oct 26, 2011 at 9:10 PM, bearophile  wrote:
> Gor Gyolchanyan:
>
>> The ddoc comments, preceding parts of template constraints would be
>> used to specify why exactly did the template fail to instantiate, for
>> example:
>
> Good. Is it possible to do the same thing with unittests?
>
> /// foo
> unittest {
>  assert(false);
> }
>
> ==>
> foo unittest failed
>
>
> But maybe this syntax is better:
>
> unittest(foo) {
>  assert(false);
> }
>
> Bye,
> bearophile
>


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 10:10 bearophile wrote:
> Gor Gyolchanyan:
> > The ddoc comments, preceding parts of template constraints would be
> > used to specify why exactly did the template fail to instantiate, for
> 
> > example:
> Good. Is it possible to do the same thing with unittests?
> 
> /// foo
> unittest {
> assert(false);
> }
> 
> ==>
> foo unittest failed
> 
> 
> But maybe this syntax is better:
> 
> unittest(foo) {
> assert(false);
> }

I'd definitely favor unittest(foo) or unittest("foo") for naming unittest 
blocks (and that's how it's been proposed previously). And since it would 
presumably actually affect the name of the function that the unittest block 
generates, I think that it makes more sense that way rather than making it a 
comment (it also takes up less vertical space).

- Jonathan M Davis


Re: queue container?

2011-10-26 Thread Steven Schveighoffer

On Wed, 26 Oct 2011 12:35:32 -0400, Timon Gehr  wrote:


On 10/26/2011 04:36 PM, Steven Schveighoffer wrote:

On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan
 wrote:


It should have both shared and unshared implementations of methods to
be a full-fledged container.


Maybe, maybe not.

I'm reluctant to add a copy of all functions to the containers just to
support shared. We don't have a const or inout equivalent for shared.



Copying all the methods and marking them with shared does not make the  
code thread safe. The shared and unshared versions would have to be  
different anyways. There is no way to get something that relates to  
shared/unshared as const does to mutable/immutable.


You would mark them shared and synchronized.  The shared modifier  
guarantees you don't put unshared data into the container, and the  
synchronized modifier guarantees you don't violate thread safety (as far  
as container topology is concerned at least).


But the method content would be identical.

-Steve


Re: queue container?

2011-10-26 Thread Steven Schveighoffer
On Wed, 26 Oct 2011 12:21:56 -0400, Gor Gyolchanyan  
 wrote:



I don't get why don't you like the idea of having a thread-safe
version of methods of containers, so they would be usable as shared
objects?


Because it's vast copy-pasting of code.  I'd rather wait for a better  
solution or use a wrapper that enables all containers to be thread-safe.


-Steve


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Gor Gyolchanyan
I agree. But how to address the template constraint problem then?

On Wed, Oct 26, 2011 at 9:16 PM, Jonathan M Davis  wrote:
> On Wednesday, October 26, 2011 10:10 bearophile wrote:
>> Gor Gyolchanyan:
>> > The ddoc comments, preceding parts of template constraints would be
>> > used to specify why exactly did the template fail to instantiate, for
>>
>> > example:
>> Good. Is it possible to do the same thing with unittests?
>>
>> /// foo
>> unittest {
>> assert(false);
>> }
>>
>> ==>
>> foo unittest failed
>>
>>
>> But maybe this syntax is better:
>>
>> unittest(foo) {
>> assert(false);
>> }
>
> I'd definitely favor unittest(foo) or unittest("foo") for naming unittest
> blocks (and that's how it's been proposed previously). And since it would
> presumably actually affect the name of the function that the unittest block
> generates, I think that it makes more sense that way rather than making it a
> comment (it also takes up less vertical space).
>
> - Jonathan M Davis
>


Re: Compiler patch for runtime reflection

2011-10-26 Thread Jonny Dee

Hello Robert,

Am 26.10.11 07:16, schrieb Robert Jacques:

On Tue, 25 Oct 2011 19:35:52 -0400, Jonny Dee  wrote:

Am 25.10.11 16:41, schrieb Robert Jacques:

On Tue, 25 Oct 2011 09:40:47 -0400, Jonny Dee  wrote:

[...]

Hi Robert,

Well, before I tell you what I would like to see I'll cite Wikipedia [1]:
"
[...]
- Discover and modify source code constructions (such as code blocks,
classes, methods, protocols, etc.) as a first-class object at runtime.
- Convert a string matching the symbolic name of a class or function
into a reference to or invocation of that class or function.
[...]
"

Here is what I would dream of for arbitrary objects/classes (not
necessarily known at compile-time):
- Query an object for its list of methods together with their
signatures. Select a method, bind some values to its arguments, call it,
and retrieve the return type (if any).
- Query an object for its public fields (at least), and provide a way to
get/set their values.
- Query an object's class for all implemented interfaces and its base
class.
- Query a module for all type definitions and provide a way to
introspect these types in more detail. For instance, it would be really
cool if I could find a class with name "Car" in module "cars", get a
list of all defined constructors, select one, bind values to the
constructor's parameters, and create a corresponding object.

[...]

Implementing such a DI container heavily depends on reflection, because
the DI container component doesn't know anything about the objects to be
created during runtime.

Qt also extends C++ with a reflection mechanism through the help of its
meta object compiler (moc). It analyses the C++ source code, generates
meta class definitions [6,7] and weaves them into your Qt class. Hence,
in Qt, you can query an object for fields, methods, interfaces, etc. and
you can call methods with arbitrary parameters, or you can instantiate a
class using an arbitrary constructor. Consequently, someone implemented a
DI container for C++ which is based on Qt and works more or less the
same way the Spring DI container does. You can build up object trees
simply by specifying such trees in an XML file.

I don't go into why dependency injection is a very powerful feature.
This is Martin Fowler's [3] job ;) But when I program with C++ I miss
such a flexible dependency injection mechanism a lot. And I hope this
will eventually be available for D.

Cheers,
Jonny

[1] http://en.wikipedia.org/wiki/Reflection_%28computer_programming%29
[2] http://en.wikipedia.org/wiki/Dependency_injection
[3] http://martinfowler.com/articles/injection.html
[4]
http://en.wikipedia.org/wiki/Spring_Framework#Inversion_of_Control_container_.28Dependency_injection.29

[5] http://qtioccontainer.sourceforge.net/
[6] http://doc.qt.nokia.com/stable/qmetaobject.html
[7]
http://blogs.msdn.com/b/willy-peter_schaub/archive/2010/06/03/unisa-chatter-reflection-using-qt.aspx



Hi Jonny,
Thank you for your informative (and well cited) post. It has provided me
with a new take on an old design pattern and some enjoyable reading. In
return, let me outline my opinion of reflection in D today, and
tomorrow, as it pertains to your wish list.


Many thanks to you, too, for your very elaborate answer :)


Reflection in D today is very different from the host of VM languages
that have popularized the concept. Being a compiled systems language,
actual runtime self-modification is too virus like to become at a
language level feature. However, given the compilation speed of D,
people have made proof of concept libraries that essentially wrapped the
compiler and dynamically loaded the result. As LDC uses LLVM, which has
a jit backend, I'd expect to see something get into and D 'eval' library
into etc eventually. (phobos uses the BOOST license, which isn't
compatible with LLVM).


I know that "runtime self-modification" and runtime code generation is 
a "dangerous" feature, and there really are only rare cases where such 
an approach justifies the risk of using it. Although this feature is 
not on my wish list, it might be good for generating dynamic proxies to 
arbitrary object instances like those used by some ORMs. See 
Hibernate/NHibernate, for example [1,2]. Another example is 
aspect-oriented programming. But while I can't see the exact reason for 
it, such a feature might indeed be one that is more appropriate 
for VM languages.



Compile-time reflection and generation of code, on the other hand, is
something D does in spades. It fulfills your dream list, although I
think module level reflection might only be available in the github
version. The API design is still in flux and we are actively iterating /
improving it as find new uses cases and bugs. The current plan is to
migrate all the traits functions over to a special 'meta' namespace
(i.e. __traits(allMembers, T) => meta.allMembers(T)). Good solid
libraries for each of the concepts I listed, (prototype objects,
duck-typing/casting or serialization), 
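The compile-time member enumeration referred to above can be sketched like this (the `Car` type is hypothetical, echoing the earlier wish-list example):

```d
import std.algorithm;

struct Car
{
    int wheels = 4;
    void honk() {}
}

void main()
{
    // __traits(allMembers, T) yields the member names as compile-time
    // strings, which can then be inspected or used to generate code.
    enum members = [__traits(allMembers, Car)];
    assert(members.canFind("wheels"));
    assert(members.canFind("honk"));
}
```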

Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Simen Kjaeraas
On Wed, 26 Oct 2011 17:34:09 +0200, Gor Gyolchanyan  
 wrote:



This idea is still raw, so don't expect me to be able to answer to all
your question. This is just a general direction in which we could move
to improve the error reports, that occur when templates can't be
instantiated.
What if one could add ddoc comments to parts of the constraint, so
that both ddoc generator could output them in a neat way and the
compiler would use them to report errors:

/// Define a state machine transition type.
template transition(ActionType)
    if (
        /// Must be a callable type.
        isCallable!ActionType &&
        /// Must return a state.
        is(ReturnType!ActionType : State) &&
        /// Must take exactly two parameters.
        ParameterTypeTuple!(ActionType).length == 2 &&
        /// The first parameter must be an event.
        is(ParameterTypeTuple!(ActionType)[0] : Event) &&
        /// The second parameter must be a state.
        is(ParameterTypeTuple!(ActionType)[1] : State)
    )
{
    alias ParameterTypeTuple!(ActionType)[1] FromState;
    alias ReturnType!ActionType ToState;
    alias ParameterTypeTuple!(ActionType)[0] Event;
}

The ddoc comments, preceding parts of template constraints would be
used to specify why exactly did the template fail to instantiate, for
example:

Error: Cannot instantiate template demo01.transition(ActionType)
because "Must return a state." constraint is not satisfied.
Writing specializations of the template for all the different versions of
incorrect constraints just to static assert(0) in them is too tedious to
do every time.


As always with this type of proposal, the case of multiple non-matching
templates seems not to have been considered. Given:

template foo(T) if (
/// T must be an integer.
is(T : long)) {}
template foo(T) if (
/// T must be a string of some kind.
isSomeString!T) {}
template foo(T) if (
/// T must be an input range.
isInputRange!T) {}

Which template's error messages should be shown? All of them, preceded
by the template definition? e.g:

bar.d(17): Error: template instance foo!(float) does not match any
template declaration
  bar.d(5): template instance foo(T) if (is(T : long)) error instantiating
    bar.d(6): T must be an integer.
  bar.d(7): template instance foo(T) if (isSomeString!T) error instantiating
    bar.d(8): T must be a string of some kind.
  bar.d(9): template instance foo(T) if (isInputRange!T) error instantiating
    bar.d(10): T must be an input range.


I was gonna say this would bring problems with more complex constraints,
but after a brief look through Phobos, I have seen that most (~90%) of
constraints seem to be a single condition, with 2 making up the
majority of the remaining, and only one place have I found a constraint
with 3 conditions.

The reason for this is that it's often factored out to a separate
template or function, which helps some, but would perhaps reduce the
helpfulness of the error messages here proposed.

--
  Simen


Re: build system

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 13:44, Gor Gyolchanyan wrote:

That's the problem - no working build tool exists for D.
Why don't you like the idea of integrating build information in source
code? I mean, that information does not change for a given source
file.


* The compiler should only do one thing: compile code

* I think the best approach would be to have a complete language for the 
build scripts. Should the compiler now start to parse comments as well 
for finding code? How would it tell the difference between a build 
script and some arbitrary code that is commented out?


* I think that it would be good if you could invoke arbitrary 
tasks/actions (think make, rake) using the build tool and that's not 
what a compiler should do


* If the build script adds import paths you have to make sure that the 
source file containing the build script always is handle first by the 
compiler


* It's just ugly

--
/Jacob Carlborg


Re: build system

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 15:39, Martin Nowak wrote:

On Wed, 26 Oct 2011 12:26:56 +0200, Gor Gyolchanyan
 wrote:


I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and
link in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd
just feed an arbitrary module to the compiler and the compiler would
build the target, to which that module belongs (by parsing build
comments and package hierarchies).
Wouldn't this be a good thing to have?


Some work in this direction has been done.
This proposal works by communicating URL paths of sources to an external
tool, which is a flexible and good proposal.
It allows things such as 'http://path/to/repo' for fetching a simple
source file from the web, as well as 'pkg://experimental' for more
sophisticated package manager interaction.
http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP11

I've written a prototype implementation some weeks ago.

This still needs a way to communicate modules to build or link.
For now I've simply added a pragma(build, ) which builds
every import from this package (globally, it's a prototype).

Altogether it can be used like this.
https://github.com/dawgfoto/graphics/blob/master/src/graphics/_.d

There are some issues though that would benefit from good ideas.
Especially a better solution for import path and build declarations
would be great.

DIP14 was not so well received.
http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP14.

martin


Weren't these DIPs about a package manager?

--
/Jacob Carlborg


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Jonny Dee

Am 26.10.11 19:16, schrieb Jonathan M Davis:

On Wednesday, October 26, 2011 10:10 bearophile wrote:

Gor Gyolchanyan:

The ddoc comments, preceding parts of template constraints would be
used to specify why exactly did the template fail to instantiate, for



example:

Good. Is it possible to do the same thing with unittests?

/// foo
unittest {
assert(false);
}

==>
foo unittest failed


But maybe this syntax is better:

unittest(foo) {
assert(false);
}


I'd definitely favor unittest(foo) or unittest("foo") for naming unittest
blocks (and that's how it's been proposed previously). And since it would
presumably actually affect the name of the function that the unittest block
generates, I think that it makes more sense that way rather than making it a
comment (it also takes up less vertical space).

- Jonathan M Davis


I'd prefer this approach, too, for the following reason. I don't know if 
you know the GoogleTest library [1] for C++. There you define unit tests 
with symbols as names, too. And if you want to run your test suite, you can 
filter the set of test cases to be executed by providing wildcard 
filters as command-line parameters. It's really a nice feature. But OTOH 
having a more explanatory comment is also a nice thing. Maybe one could 
consider both approaches?


Jonny

[1] http://code.google.com/p/googletest/


Re: build system

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 15:39, Martin Nowak wrote:

On Wed, 26 Oct 2011 12:26:56 +0200, Gor Gyolchanyan
 wrote:


I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and
link in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd
just feed an arbitrary module to the compiler and the compiler would
build the target, to which that module belongs (by parsing build
comments and package hierarchies).
Wouldn't this be a good thing to have?


Some work in this direction has been done.
This proposal works by communicating URL paths of sources to an external
tool, which is a flexible and good proposal.
It allows things such as 'http://path/to/repo' for fetching a simple
source file from the web, as well as 'pkg://experimental' for more
sophisticated package manager interaction.
http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP11

I've written a prototype implementation some weeks ago.

This still needs a way to communicate modules to build or link.
For now I've simply added a pragma(build, ) which builds
every import from this package (globally, it's a prototype).

Altogether it can be used like this.
https://github.com/dawgfoto/graphics/blob/master/src/graphics/_.d

There are some issues though that would benefit from good ideas.
Especially a better solution for import path and build declarations
would be great.

DIP14 was not so well received.
http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP14.

martin


BTW, I don't think a build tool and a package manager should be mixed 
together in one tool. Instead they should be very well integrated with 
each other.


--
/Jacob Carlborg


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 10:26 Gor Gyolchanyan wrote:
> I agree. But how to address the template constraint problem then?

It's a completely separate issue. Being able to add comments which appear in 
template constraint messages would definitely be valuable (though using 
functions or eponymous templates with descriptive names - such as 
isForwardRange - mostly fixes the problem) and may very well be worth adding. 
But it would be all or nothing, whereas your proposal seems to assume that 
it's possible to give a message based on which part of the template constraint 
failed. However, at this point, as far as the compiler is concerned either a 
template constraint succeeds or fails. It doesn't try and figure out which 
parts succeeded or failed, and I suspect that for it to do so would definitely 
complicate matters. Still, printing out the whole template constraint with 
comments included could be valuable.

Of greater concern IMHO is the fact that it's always the first template 
constraint which is given. You could have 10 different overloads of the same 
template with you trying to use the 7th one, and the error message prints only 
the first template constraint, even if it really has nothing to do with your 
problem. But I really don't know how to fix that. It can't possibly know which 
overload you were actually targeting. All it knows is that none of the 
template constraints passed. And printing out _all_ of the template 
constraints could get very messy (e.g. think of what you'd get with 
std.conv.to if you printed out _all_ of the template constraints for toImpl). 
The way that I've taken to solving the problem is to create one template which 
covers _all_ of the cases for that template and has an appropriate template 
constraint, and then have a template that _it_ uses which has all of the 
appropriate overloads, and that more or less solves the problem, but it would 
be nice if we could find a better solution for it such that the compiler can 
give better error messages on its own.

Overall, I'd say that your suggestion has merit, but I don't think that it's 
going to work quite as you envisioned. It would probably also be simpler if it 
were a single comment for the whole constraint (particularly given that it 
can't give you only one message from inside the constraint anyway), but I 
don't know.
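The descriptive-helper workaround mentioned above might look like this (`isTransitionAction` is an illustrative name, not a Phobos template):

```d
import std.traits;

// One well-named helper for the whole constraint: when instantiation fails,
// the error message at least names the requirement rather than dumping a
// raw boolean expression.
template isTransitionAction(ActionType)
{
    // static if keeps ParameterTypeTuple from being instantiated
    // (and erroring) for non-callable types.
    static if (isCallable!ActionType)
        enum isTransitionAction = ParameterTypeTuple!ActionType.length == 2;
    else
        enum isTransitionAction = false;
}

template transition(ActionType)
    if (isTransitionAction!ActionType)
{
    alias ParameterTypeTuple!ActionType Params;
}

void main()
{
    static assert(isTransitionAction!(int function(int, int)));
    static assert(!isTransitionAction!int);
    alias transition!(int function(int, int)).Params P;
    static assert(P.length == 2);
}
```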

- Jonathan M Davis


Re: queue container?

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 10:21 Steven Schveighoffer wrote:
> On Wed, 26 Oct 2011 12:35:32 -0400, Timon Gehr  wrote:
> > On 10/26/2011 04:36 PM, Steven Schveighoffer wrote:
> >> On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan
> >> 
> >>  wrote:
> >>> It should have both shared and unshared implementations of methods to
> >>> be a full-fledged container.
> >> 
> >> Maybe, maybe not.
> >> 
> >> I'm reluctant to add a copy of all functions to the containers just to
> >> support shared. We don't have a const or inout equivalent for shared.
> > 
> > Copying all the methods and marking them with shared does not make the
> > code thread safe. The shared and unshared versions would have to be
> > different anyways. There is no way to get something that relates to
> > shared/unshared as const does to mutable/immutable.
> 
> You would mark them shared and synchronized. The shared modifier
> guarantees you don't put unshared data into the container, and the
> synchronized modifier guarantees you don't violate thread safety (as far
> as container topology is concerned at least).
> 
> But the method content would be identical.

That doesn't quite work, because synchronized is supposed to be on the class, 
not the function (at least per TDPL; I don't know what the current behavior 
is). Either everything is synchronized or nothing is. So, I believe that 
unless you want to make the whole thing always synchronized, you need to 
create a wrapper type which is synchronized. Of course, if the wrapper could 
then make the wrapped type's functions shared, then that would be great, but 
that probably requires casting.

- Jonathan M Davis


Re: build system

2011-10-26 Thread Gor Gyolchanyan
I agree.

On Wed, Oct 26, 2011 at 10:35 PM, Jacob Carlborg  wrote:
> On 2011-10-26 15:39, Martin Nowak wrote:
>>
>> On Wed, 26 Oct 2011 12:26:56 +0200, Gor Gyolchanyan
>>  wrote:
>>
>>> I had a few thoughts about integrating build awareness into DMD.
>>> It would be really cool to add a flag to DMD to make it compile and
>>> link in all import-referenced modules.
>>> Also, it would be awesome to store basic build information in modules
>>> themselves in the form of special comments (much like documentation
>>> comments), where one could specify external build dependencies, output
>>> type, etc.
>>> There would be no need for makefiles and extra build systems. You'd
>>> just feed an arbitrary module to the compiler and the compiler would
>>> build the target, to which that module belongs (by parsing build
>>> comments and package hierarchies).
>>> Wouldn't this be a good thing to have?
>>
>> Some work in this direction has been done.
>> The proposal works by communicating URL paths of sources to an external
>> tool, which is flexible and powerful.
>> It allows things such as 'http://path/to/repo' for fetching a simple
>> source file from the web, as well as 'pkg://experimental' for more
>> sophisticated package manager interaction.
>> http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP11
>>
>> I've written a prototype implementation some weeks ago.
>>
>> This still needs a way to communicate modules to build or link.
>> For now I've simply added a pragma(build, ) which builds
>> every import from this package (globally, it's a prototype).
>>
>> Altogether, it can be used like this.
>> https://github.com/dawgfoto/graphics/blob/master/src/graphics/_.d
>>
>> There are some issues though that would benefit from good ideas.
>> Especially a better solution for import path and build declarations
>> would be great.
>>
>> DIP14 was not so well received.
>> http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP14.
>>
>> martin
>
> BTW, I don't think a build tool and a package manager should be mixed
> together in one tool. Instead they should be very well integrated with each
> other.
>
> --
> /Jacob Carlborg
>


Re: queue container?

2011-10-26 Thread Gor Gyolchanyan
Not necessarily. You don't even need to have the entire function
synchronized. You can define your own synchronization blocks, using
the object's or class's monitor.

On Wed, Oct 26, 2011 at 10:41 PM, Jonathan M Davis  wrote:
> On Wednesday, October 26, 2011 10:21 Steven Schveighoffer wrote:
>> On Wed, 26 Oct 2011 12:35:32 -0400, Timon Gehr  wrote:
>> > On 10/26/2011 04:36 PM, Steven Schveighoffer wrote:
>> >> On Wed, 26 Oct 2011 10:13:06 -0400, Gor Gyolchanyan
>> >>
>> >>  wrote:
>> >>> It should have both shared and unshared implementations of methods to
>> >>> be a full-fledged container.
>> >>
>> >> Maybe, maybe not.
>> >>
>> >> I'm reluctant to add a copy of all functions to the containers just to
>> >> support shared. We don't have a const or inout equivalent for shared.
>> >
>> > Copying all the methods and marking them with shared does not make the
>> > code thread safe. The shared and unshared versions would have to be
> >> > different anyways. There is no way to get something that relates to
>> > shared/unshared as const does to mutable/immutable.
>>
>> You would mark them shared and synchronized. The shared modifier
>> guarantees you don't put unshared data into the container, and the
>> synchronized modifier guarantees you don't violate thread safety (as far
>> as container topology is concerned at least).
>>
>> But the method content would be identical.
>
> That doesn't quite work, because synchronized is supposed to be on the class,
> not the function (at least per TDPL; I don't know what the current behavior
> is). Either everything is synchronized or nothing is. So, I believe that
> unless you want to make the whole thing always synchronized, you need to
> create a wrapper type which is synchronized. Of course, if the wrapper could
> then make the wrapped type's functions shared, then that would be great, but
> that probably requires casting.
>
> - Jonathan M Davis
>


Re: Using of core.sync.condition.Condition

2011-10-26 Thread Alexander

On 24.10.2011 16:12, Steven Schveighoffer wrote:


The key here is, waiting on a condition atomically unlocks the mutex. If 
waiting on a condition were allowed without locking the mutex, thread2 
could potentially miss thread1's signal, and you'd encounter a deadlock!


  OK, thanks. I haven't touched pthreads stuff in ages, so I'd completely 
forgotten these semantics :)
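A minimal sketch of the protocol Steven describes, using druntime's primitives (the `ready` flag guards against both lost and spurious wakeups):

```d
import core.sync.mutex;
import core.sync.condition;

__gshared Mutex mtx;
__gshared Condition cond;
__gshared bool ready;

shared static this()
{
    mtx  = new Mutex;
    cond = new Condition(mtx);  // condition is bound to its mutex
}

void waiter()
{
    synchronized (mtx)
    {
        // wait() atomically releases mtx and blocks; it reacquires
        // mtx before returning, so re-checking `ready` here is safe.
        while (!ready)
            cond.wait();
    }
}

void signaler()
{
    synchronized (mtx)
    {
        ready = true;   // set the flag while holding the lock...
        cond.notify();  // ...so the waiter cannot miss the signal
    }
}
```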

--
/Alexander


Re: build system

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 16:27, Jesse Phillips wrote:

On Wed, 26 Oct 2011 14:26:56 +0400, Gor Gyolchanyan wrote:


I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and link
in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd just
feed an arbitrary module to the compiler and the compiler would build
the target, to which that module belongs (by parsing build comments and
package hierarchies).
Wouldn't this be a good thing to have?


Do you know about rdmd and pragma(lib,...) ?


RDMD can only build binaries and... well, that's about it.

--
/Jacob Carlborg


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 11:38 Jonny Dee wrote:
> I'd prefer this approach, too. For the following reason. I don't know if
> you know GoogleTest library [1] for C++. There you define unittests with
> symbols as names, too. And if you want to run your test suite you can
> filter the set of test cases to be executed by providing wildcard
> filters as command line parameters. It's really a nice feature. But OTOH
> having a more explanatory comment is also a nice thing. Maybe one could
> consider both approaches?

Having named unittest blocks opens up a number of possibilities for improving 
how unit tests are run, though obviously other improvements would have to be 
made to druntime to enable them. The use case that always comes to my mind is 
how you can tell Eclipse to run a specific unit test with JUnit rather than 
telling it to run all of them (though I think that it may still run all of the 
prior tests in the file in case that affects the result). For large test 
suites, that could be quite valuable. So, a number of improvements like that 
could theoretically be done once we have named unittest blocks.

The biggest benefit though IMHO is simply that stack traces for exceptions 
that escape unittest blocks would clearly indicate _which_ unittest block the 
exception escaped from.
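One workaround available today (named unittest blocks are not in the language yet): tag each block manually so an escaping assertion at least identifies its origin.

```d
import std.stdio;

unittest
{
    // scope(failure) runs only if the block throws, so a failing
    // assertion below will print which unittest it came from.
    // "emptyRangePopFront" is an invented example name.
    scope(failure) stderr.writeln("in unittest: emptyRangePopFront");

    assert(1 + 1 == 2);
}
```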

- Jonathan M Davis


Re: A proposal for better template constraint error reporting.

2011-10-26 Thread Gor Gyolchanyan
Good points. I think if we add the notion of a predicate to D, things
would change dramatically.
Predicates would become the primary subject of such features, as:
* template constraints
* invariants
* in and out contracts
* unittests
* any other assertions and checks
* overloading
* conditional statements

Each predicate would have a failure message integrated, so any time a
predicate fails, a message is available to throw as an exception,
display as a compile-time error, or log to a file.
The key point is that anything that ends up as a bool is a predicate.
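A sketch of what such a first-class predicate might look like as a plain library type today (purely illustrative; `Predicate` and `isPositive` are invented names, not an existing feature):

```d
// A predicate bundles its condition with the message to report on
// failure, so the same message can serve asserts, contracts, or
// compile-time errors.
struct Predicate(alias cond, string msg)
{
    enum message = msg;

    static bool check(Args...)(Args args)
    {
        return cond(args);
    }
}

alias isPositive = Predicate!((int x) => x > 0, "value must be positive");

void use(int x)
{
    // The failure message travels with the predicate automatically.
    assert(isPositive.check(x), isPositive.message);
}
```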

On Wed, Oct 26, 2011 at 10:41 PM, Jonathan M Davis  wrote:
> On Wednesday, October 26, 2011 10:26 Gor Gyolchanyan wrote:
>> I agree. But how to address the template constraint problem then?
>
> It's a completely separate issue. Being able to add comments which appear in
> template constraint messages would definitely be valuable (though using
> functions or eponymous templates with descriptive names - such as
> isForwardRange - mostly fixes the problem) and may very well be worth adding.
> But it would be all or nothing, whereas your proposal seems to assume that
> it's possible to give a message based on which part of the template constraint
> failed. However, at this point, as far as the compiler is concerned either a
> template constraint succeeds or fails. It doesn't try and figure out which
> parts succeeded or failed, and I suspect that for it to do so would definitely
> complicate matters. Still, printing out the whole template constraint with
> comments included could be valuable.
>
> Of greater concern IMHO is the fact that it's always the first template
> constraint which is given. You could have 10 different overloads of the same
> template with you trying to use the 7th one, and the error message prints only
> the first template constraint, even if it really has nothing to do with your
> problem. But I really don't know how to fix that. It can't possibly know which
> overload you were actually targeting. All it knows is that none of the
> template constraints passed. And printing out _all_ of the template
> constraints could get very messy (e.g. think of what you'd get with
> std.conv.to if you printed out _all_ of the template constraints for toImpl).
> The way that I've taken to solving the problem is to create one template which
> covers _all_ of the cases for that template and has an appropriate template
> constraint, and then have a template that _it_ uses which has all of the
> appropriate overloads, and that more or less solves the problem, but it would
> be nice if we could find a better solution for it such that the compiler can
> give better error messages on its own.
>
> Overall, I'd say that your suggestion has merit, but I don't think that it's
> going to work quite as you envisioned. It would probably also be simpler if it
> were a single comment for the whole constraint (particularly given that it
> can't give you only one message from inside the constraint anyway), but I
> don't know.
>
> - Jonathan M Davis
>
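The "one covering template" workaround quoted above can be sketched roughly like this (`frob`/`frobImpl` are invented names; std.conv.to/toImpl follow the same pattern):

```d
import std.traits;

// One public template whose constraint covers every supported case;
// a failed call always reports against this single constraint.
auto frob(T)(T value)
    if (isIntegral!T || isFloatingPoint!T)
{
    return frobImpl(value);
}

// The real work is dispatched to overloads of a private impl, whose
// constraints never appear in user-facing error messages.
private auto frobImpl(T)(T value)
    if (isIntegral!T)
{
    return value + 1;
}

private auto frobImpl(T)(T value)
    if (isFloatingPoint!T)
{
    return value * 2;
}
```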


Re: queue container?

2011-10-26 Thread Jonathan M Davis
On Wednesday, October 26, 2011 11:44 Gor Gyolchanyan wrote:
> Not necessarily. You don't even need to have the entire function
> synchronized. You can define your own synchronization blocks, using
> the object's or class's monitor.

True. If you take that approach, then you can synchronize portions of 
functions, but you can't mark individual functions as synchronized. You 
synchronize the whole class or struct at once, not just individual functions - 
unlike Java or C#.
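The per-block locking Gor describes can be sketched like this (illustrative only):

```d
class Container(T)
{
    private T[] data;

    void push(T value)
    {
        // Lock only the critical section, not the whole class.
        synchronized (this)   // uses this object's monitor
        {
            data ~= value;
        }
    }

    T pop()
    {
        synchronized (this)
        {
            auto value = data[$ - 1];
            data = data[0 .. $ - 1];
            return value;
        }
    }
}
```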

- Jonathan M Davis


Re: build system

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 18:01, jsternberg wrote:

Gor Gyolchanyan Wrote:


I had a few thoughts about integrating build awareness into DMD.
It would be really cool to add a flag to DMD to make it compile and
link in all import-referenced modules.
Also, it would be awesome to store basic build information in modules
themselves in the form of special comments (much like documentation
comments), where one could specify external build dependencies, output
type, etc.
There would be no need for makefiles and extra build systems. You'd
just feed an arbitrary module to the compiler and the compiler would
build the target, to which that module belongs (by parsing build
comments and package hierarchies).
Wouldn't this be a good thing to have?


Take a look at Cabal and ghc-pkg/ghc --make from Haskell. I've been thinking 
about starting to put together something like this, but I haven't found the 
motivation to do it. This type of package management and build integration 
would go a long way to making D less decentralized.

One thing that would be needed to complete such a system is proper support for 
linking shared libraries (at least on Linux).

There are a bunch of things the build system could do that would make 
development in D much easier.

If you're interested in doing this, you should contact me on the IRC channel.


I'm working on a package manager:

https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
https://github.com/jacob-carlborg/orbit

--
/Jacob Carlborg


Re: build system

2011-10-26 Thread Eric Poggel (JoeCoder)

On 10/26/2011 2:30 PM, Jacob Carlborg wrote:

I think the best approach would be to have a complete language for the
build scripts.


This is the approach I've taken with dsource.org/projects/cdc.  That 
language is D.  It provides a library of common compilation tasks and 
then you fill in the main() with what you want it to do.  Then you can 
simply invoke dmd -run buildscript.d to create your project.


At one point it worked with d1 and d2 with ldc, gdc, and dmd, phobos or 
tango.  But it's been a year or so since I've tested.


It can also be used as a pass-through tool to dmd, gdc, or ldc, except 
it accepts source paths as well as source files (adding all files in the 
path to the build).  But rdmd may already do this better, since cdc 
currently lacks any concept of an incremental build.
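In that spirit, a minimal "dmd -run buildscript.d" style script might look like this (a sketch using today's std.process, not CDC's actual API; paths and output name are invented):

```d
// The build script is plain D, so all of Phobos is available for
// build logic: globbing, downloading, parsing config, etc.
import std.process : spawnProcess, wait;
import std.file : dirEntries, SpanMode;
import std.algorithm : filter, map, endsWith;
import std.array : array;

int main()
{
    // Collect every .d file under src/ and hand them all to dmd.
    auto sources = dirEntries("src", SpanMode.depth)
        .filter!(e => e.name.endsWith(".d"))
        .map!(e => e.name)
        .array;

    auto cmd = ["dmd", "-ofapp"] ~ sources;
    return wait(spawnProcess(cmd));  // propagate dmd's exit status
}
```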


Re: build system

2011-10-26 Thread Eric Poggel (JoeCoder)

On 10/26/2011 2:51 PM, Eric Poggel (JoeCoder) wrote:

On 10/26/2011 2:30 PM, Jacob Carlborg wrote:

I think the best approach would be to have a complete language for the
build scripts.


This is the approach I've taken with dsource.org/projects/cdc. That
language is D. It provides a library of common compilation tasks and
then you fill in the main() with what you want it to do. Then you can
simply invoke dmd -run buildscript.d to create your project.

At one point it worked with d1 and d2 with ldc, gdc, and dmd, phobos or
tango. But it's been a year or so since I've tested.

It can also be used as a pass-through tool to dmd, gdc, or ldc, except
it accepts source paths as well as source files (adding all files in the
path to the build). But rdmd may already do this better, since cdc
currently lacks any concept of an incremental build.



But my point here isn't so much to promote CDC, but rather to insist 
that we should use D instead of a custom language invented for the task. 
 Reasons:


1.  D will always be more powerful.  And you will have all of Phobos at 
your disposal.  You can parse XML, FTP files, etc.
2.  Anyone writing this script will already know D.  They won't have to 
learn another language.
3.  We'll truly be eating our own dog food.  Although I wouldn't call it 
dog food.


Re: Early std.crypto

2011-10-26 Thread Steve Teale
On Wed, 26 Oct 2011 13:01:14 -0400, Jonathan M Davis wrote:

> On Wednesday, October 26, 2011 03:17 Steve Teale wrote:
>> On Tue, 25 Oct 2011 23:45:58 -0700, Jonathan M Davis wrote:
>> > On Wednesday, October 26, 2011 06:28:52 Steve Teale wrote:
>> >> > An easy test is that if the interface takes a T[] as input,
>> >> > consider a range instead. Ditto for output. If an interface takes
>> >> > a File as input, it's a red flag that something is wrong.
>> >> 
>> >> I have a question about ranges that occurred to me when I was
>> >> composing a MySQL result set into a random access range.
>> >> 
>> >> To do that I have to provide the save capability defined by
>> >> 
>> >> R r1;
>> >> R r2 = r1.save;
>> >> 
>> >> Would it harm the use of ranges as elements of operation sequences
>> >> if the type of the entity that got saved was not the same as the
>> >> original range provider type. Something like:
>> >> 
>> >> R r1;
>> >> S r2 = r1.save;
>> >> r1.restore(r2);
>> >> 
>> >> For entities with a good deal of state, that implement a range,
>> >> storing the whole state, and more significantly, restoring it, may
>> >> be non- trivial, but saving and restoring the state of the range may
>> >> be quite a lightweight affair.
>> > 
>> > It's likely to cause issues if the type returned by save differs from
>> > the original type. From your description, it sounds like the range is
>> > over something, and it's that something which you don't want to copy.
>> > If that's the case, then the range just doesn't contain that data.
>> > Rather, it's a view into the data, so saving it just save that view.
>> 
>> Well, yes Jonathan, that's just what I'm getting at. I want to save
>> just those things that are pertinent to the range/view, not the whole
>> panoply of the result set representation or whatever other object is
>> providing the data that the range is a view of.
>> 
>> I could do it by saving an instance of the object representing the
>> result set containing only the relevant data, but that would be
>> misleading for the user, because it would not match the object it was
>> cloned from in any respect other than representing a range.
>> 
>> Should not the requirement for the thing provided by save be only that
>> it should provide the same view as that provided by the source range at
>> the time of the save, and the same range 'interface'.
>> 
>> Is there an accepted term for the way ranges are defined, i.e. as an
>> entity that satisfies some template evaluating roughly to a bool?
> 
> Ranges are defined per templates in std.range. isForwardRange,
> isInputRange, isRandomAccessRange, etc. And it looks like isForwardRange
> specifically checks that the return type of save is the same type as the
> range itself. I was thinking that it didn't necessarily require that but
> that you'd have definite issues having save return a different type,
> because it's generally assumed that it's the same type and code will
> reflect that. However, it looks like it's actually required.

But is it just required by std.range, or is it required at some 
theoretical level by algorithms that can work with ranges as inputs and 
outputs?
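std.range's check can be demonstrated directly (`MyRange`/`BadRange` are invented examples): the forward-range test fails as soon as save returns anything other than the range's own type.

```d
import std.range;

struct MyRange
{
    int[] data;
    @property bool empty() const { return data.length == 0; }
    @property int front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }
    @property MyRange save() { return this; }  // must return MyRange itself
}

static assert(isForwardRange!MyRange);

struct BadRange
{
    int[] data;
    @property bool empty() const { return data.length == 0; }
    @property int front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }
    @property int[] save() { return data; }  // wrong type: not BadRange
}

// Still an input range, but the save type mismatch disqualifies it.
static assert(isInputRange!BadRange);
static assert(!isForwardRange!BadRange);
```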

> What you should probably do is have the result set be an object holding
> the data but not be a range itself and then overload opSlice to give you
> a range over that data (as would be done with a container). Then the
> range isn't in a position where it's trying to own the data, and it's
> clear to the programmer where the data is stored.
> 

But in my mind, just from a quick reading of the language definition, 
slices are closely tied to arrays, which as you have already noted, are 
not the best example of a range.

Are ranges just a library artefact, or are they supported by the 
language? They appear to be recognized by foreach, so if they are 
recognized by D, then we should presumably have operator functions 
opInputRange, opForwardRange, and so on, whatever those terms might mean.

If not, then facilities like std.algorithm should have a warning notice 
like cigarettes that says "this facility may not be usable with POD".
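For what it's worth, the foreach support is structural rather than operator-based: the compiler lowers foreach onto any type that provides the three input-range primitives, with no opXxxRange functions involved (`Countdown` is an invented example):

```d
import std.stdio;

struct Countdown
{
    int n;
    @property bool empty() const { return n <= 0; }
    @property int front() const { return n; }
    void popFront() { --n; }
}

void main()
{
    // Lowered by the compiler to calls to empty/front/popFront.
    foreach (i; Countdown(3))
        write(i, ' ');  // prints: 3 2 1
}
```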

Steve




Re: build system

2011-10-26 Thread Eric Poggel (JoeCoder)

On 10/26/2011 2:55 PM, Eric Poggel (JoeCoder) wrote:

On 10/26/2011 2:51 PM, Eric Poggel (JoeCoder) wrote:

On 10/26/2011 2:30 PM, Jacob Carlborg wrote:

I think the best approach would be to have a complete language for the
build scripts.


This is the approach I've taken with dsource.org/projects/cdc. That
language is D. It provides a library of common compilation tasks and
then you fill in the main() with what you want it to do. Then you can
simply invoke dmd -run buildscript.d to create your project.

At one point it worked with d1 and d2 with ldc, gdc, and dmd, phobos or
tango. But it's been a year or so since I've tested.

It can also be used as a pass-through tool to dmd, gdc, or ldc, except
it accepts source paths as well as source files (adding all files in the
path to the build). But rdmd may already do this better, since cdc
currently lacks any concept of an incremental build.



But my point here isn't so much to promote CDC, but rather to insist
that we should use D instead of a custom language invented for the task.
Reasons:

1. D will always be more powerful. And you will have all of phobos at
your disposal. You can parse xml, ftp files, etc.
2. Anyone writing this script will already know D. They won't have to
learn another language.
3. We'll truly be eating our own dog food. Although I wouldn't call it
dog food.


4.  It will be easier to implement.  A library of functions instead of a 
complete parser/interpreter.  CDC is Boost-licensed if anyone wants to 
borrow from it.


Re: Compiler patch for runtime reflection

2011-10-26 Thread Jacob Carlborg

On 2011-10-26 20:20, Jonny Dee wrote:

One more use case for reflection is data binding with GUI components.
This approach is heavily used in Windows Presentation Foundation library
[3,4]. GUI components can update an object's properties by using
reflection. You don't need to register listeners for this purpose anymore.


Runtime reflection is used quite a lot in GUI programming on Mac OS X 
(at least the libraries/tools under the hood).


--
/Jacob Carlborg

