Re: Exceptions in @nogc code

2017-04-08 Thread Christophe via Digitalmars-d

On Saturday, 8 April 2017 at 20:09:49 UTC, Walter Bright wrote:

On 4/7/2017 9:30 AM, deadalnix wrote:

On Thursday, 6 April 2017 at 22:11:55 UTC, Walter Bright wrote:

On 4/6/2017 2:18 PM, H. S. Teoh via Digitalmars-d wrote:
You were asking for a link to deadalnix's original 
discussion, and
that's the link I found (somebody else also posted a link to 
the same

discussion).


Only deadalnix can confirm that's what he's talking about.


Yes this: 
https://forum.dlang.org/thread/kpgilxyyrrluxpepe...@forum.dlang.org
Also this: 
https://forum.dlang.org/post/kluaojijixhwigouj...@forum.dlang.org


Some convenient single page links:

http://www.digitalmars.com/d/archives/digitalmars/D/On_heap_segregation_GC_optimization_and_nogc_relaxing_247498.html

http://www.digitalmars.com/d/archives/digitalmars/D/isolated_owned_would_solve_many_problem_we_face_right_now._212165.html


I also produced a fairly detailed spec of how lifetime can be 
tracked, in the lifetime ML. It addresses scope and does not 
require owned by itself. Considering the compiler already infers 
what it calls "unique", it could solve the @nogc Exception 
problem to some extent without the owned part. Because it is in 
a ML, I cannot post a link.


Please repost it somewhere and post a link. It's not very 
practical to refer to documents nobody is able to read.


My dog ate my homework.

deadalnix is a genius of marketing. He has no product; all he 
invested was a couple of afternoons in newsgroup posts verging 
on rants. It's impossible to convert the incomplete ideas that 
he throws over the fence into spec and code, but he sells them 
as the perfect product. That the posts are incomplete and 
unclear helps, because whatever problem arises has a solution in 
the future. What amazes me is that he still grabs the attention 
of newbies in the forum.


Re: Worst-case performance of quickSort / getPivot

2013-11-16 Thread Jean Christophe


BTW I'm very interested in finding a library which could 
Quicksort an array of pointers, where each pointer points to a 
class object (or a structure). The library would make it 
possible, for example, to sort the `class objects` using one 
of their members as the key. Because the swaps are done on the 
actual pointers (and not the objects pointed to), the Quicksort 
should be very efficient. However, the algorithm wouldn't be 
so trivial to implement, because each comparison must be done 
using the object's member's value:


eg. Obj1.foo < Obj2.foo.

Of course, the programmer can choose any relevant member 
property to sort the collection. And it should be as easy to 
use as:


class SomeClass { string foo; double bar;}
SomeClass[] a;
// put 100 000 objects into a
a.sort("foo");

Do we already have something like that available somewhere or 
is it possible to make one eventually?


You mean, sort!`a.foo < b.foo` ?


Yes.

An indirect sorting, assuming a and b to be objects of class 
SomePotentialyLargeClass.


Because the array to sort contains pointers only, all the data 
movement is essentially the same as if we were sorting integers.
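Such an indirect sort can be sketched today with std.algorithm's sort and a predicate on the member (a sketch only; SomeClass and the sample data are made up):

```d
import std.algorithm : sort;

class SomeClass
{
    string foo;
    double bar;
    this(string f, double b) { foo = f; bar = b; }
}

void main()
{
    auto a = [new SomeClass("beta", 2.0),
              new SomeClass("alpha", 1.0),
              new SomeClass("gamma", 3.0)];

    // Class variables are references, so sort only swaps word-sized
    // references; the objects themselves never move in memory.
    a.sort!((x, y) => x.foo < y.foo);

    assert(a[0].foo == "alpha" && a[2].foo == "gamma");
}
```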


-- JC


Re: Worst-case performance of quickSort / getPivot

2013-11-16 Thread Jean Christophe
On Saturday, 16 November 2013 at 16:10:56 UTC, Andrei 
Alexandrescu wrote:

On 11/16/13 6:20 AM, Jean Christophe wrote:
On Friday, 15 November 2013 at 21:46:26 UTC, Vladimir 
Panteleev wrote:



getPivot(0..10)
8,7,6,5,4,3,2,1,0,9 <- getPivot - before swap
9,7,6,5,4,8,2,1,0,3 <- getPivot - after swap
9,7,6,5,4,3,2,1,0,8 <- quickSortImpl - after swap
9,8,6,5,4,3,2,1,0,7 <- quickSortImpl - after partition
getPivot(2..10)
6,5,4,3,2,1,0,7 <- getPivot - before swap
7,5,4,3,6,1,0,2 <- getPivot - after swap
7,5,4,3,2,1,0,6 <- quickSortImpl - after swap
7,6,4,3,2,1,0,5 <- quickSortImpl - after partition
(...)


One possible implementation suggests swapping Left and Right 
immediately after choosing the Pivot (if Left > Right), then 
placing the Pivot at Right-1. It seems that this option was not 
taken. Any reason?


That may help this particular situation, but does it do 
anything interesting in general?


Yes. This has the extra advantage that the smallest of the three 
winds up in A[left], which is where the partitioning routine 
would put it anyway. The largest winds up at A[right] which is 
also the correct place, since it is larger than the Pivot. 
Therefore you can place the Pivot in A[right -1] and initialize i 
and j to (left+1) and (right-2) in the partition phase. Another 
benefit is that because A[left] is smaller than the Pivot, it 
will act as a sentinel for j. We do not need to worry about j 
running past the end. Same for i. Thus you assert:


A[left] <= A[center] <= A[right]

even before you hide the Pivot.
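The median-of-three scheme described above can be sketched in D (a sketch over a plain array, assuming elements comparable with `<`; names are made up):

```d
import std.algorithm : swap;

// Orders a[left] <= a[center] <= a[right], then parks the pivot at
// a[right - 1] so the partition loop can run on left+1 .. right-2,
// with a[left] and a[right] acting as sentinels for i and j.
T medianOfThree(T)(T[] a, size_t left, size_t right)
{
    auto center = (left + right) / 2;
    if (a[center] < a[left])   swap(a[left],   a[center]);
    if (a[right]  < a[left])   swap(a[left],   a[right]);
    if (a[right]  < a[center]) swap(a[center], a[right]);
    swap(a[center], a[right - 1]); // hide the pivot
    return a[right - 1];
}
```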

-- JC




Re: Worst-case performance of quickSort / getPivot

2013-11-16 Thread Jean Christophe
On Friday, 15 November 2013 at 21:46:26 UTC, Vladimir Panteleev 
wrote:



getPivot(0..10)
8,7,6,5,4,3,2,1,0,9 <- getPivot - before swap
9,7,6,5,4,8,2,1,0,3 <- getPivot - after swap
9,7,6,5,4,3,2,1,0,8 <- quickSortImpl - after swap
9,8,6,5,4,3,2,1,0,7 <- quickSortImpl - after partition
getPivot(2..10)
6,5,4,3,2,1,0,7 <- getPivot - before swap
7,5,4,3,6,1,0,2 <- getPivot - after swap
7,5,4,3,2,1,0,6 <- quickSortImpl - after swap
7,6,4,3,2,1,0,5 <- quickSortImpl - after partition
(...)


One possible implementation suggests swapping Left and Right 
immediately after choosing the Pivot (if Left > Right), then 
placing the Pivot at Right-1. It seems that this option was not 
taken. Any reason?


As the efficiency of Quicksort is known to be bad when sorting a 
small number of elements, i.e. < 10, it might be nice to 
implement an option to automatically switch to a more 
appropriate algorithm if it's relevant to do so.
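A sketch of such a hybrid (the cutoff value and the classic two-scan partition are illustrative; this is not what std.algorithm's sort actually does):

```d
import std.algorithm : swap;

enum cutoff = 10; // below this size, fall back to insertion sort

void insertionSort(T)(T[] a)
{
    foreach (i; 1 .. a.length)
    {
        auto key = a[i];
        size_t j = i;
        for (; j > 0 && key < a[j - 1]; --j)
            a[j] = a[j - 1];
        a[j] = key;
    }
}

void hybridQuickSort(T)(T[] a)
{
    if (a.length <= cutoff)
    {
        insertionSort(a); // cheap and cache-friendly on tiny slices
        return;
    }
    auto pivot = a[a.length / 2];
    ptrdiff_t i = 0, j = a.length - 1;
    while (i <= j) // classic Hoare-style crossing scans
    {
        while (a[i] < pivot) ++i;
        while (pivot < a[j]) --j;
        if (i <= j)
        {
            swap(a[i], a[j]);
            ++i;
            --j;
        }
    }
    hybridQuickSort(a[0 .. j + 1]);
    hybridQuickSort(a[i .. $]);
}
```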


* Many sources recommend using a random element as a pivot. 
According to [2], "Randomized quicksort, for any input, it 
requires only O(n log n) expected time (averaged over all 
choices of pivots)".


IMO it would be costly and not so relevant if the goal is to be 
fast.


Also, if it is not possible to predict the pivot choice, it is 
impossible to craft worst-case input, which is a plus from a 
security point[3]. However, I'm not sure if making the behavior 
of std.algorithm's sort nondeterministic is desirable.


I think it's not desirable.

--

Quicksorting a collection of Objects?

BTW I'm very interested in finding a library which could 
Quicksort an array of pointers, where each pointer points to a 
class object (or a structure). The library would make it 
possible, for example, to sort the `class objects` using one of 
their members as the key. Because the swaps are done on the 
actual pointers (and not the objects pointed to), the Quicksort 
should be very efficient. However, the algorithm wouldn't be so 
trivial to implement, because each comparison must be done using 
the object's member's value:


eg. Obj1.foo < Obj2.foo.

Of course, the programmer can choose any relevant member property 
to sort the collection. And it should be as easy to use as:


class SomeClass { string foo; double bar;}
SomeClass[] a;
// put 100 000 objects into a
a.sort("foo");

Do we already have something like that available somewhere or is 
it possible to make one eventually?


-- JC


Re: Gtkd-2

2013-11-13 Thread jean christophe


+1

I'm very happy with Gtkd.
My config:

Debian 7
dmd v2.064
GtkD 2.3.0

On Wednesday, 13 November 2013 at 07:35:41 UTC, Steve Teale wrote:
I'd like to publicly thank and commend Mike Wey for his hard 
work and perseverance on Gtkd.


It is now fully up-to-date with GTK3, and with it and D, 
writing GUI programs has rarely if ever been easier.


If you have not been there recently - http://gtkd.org.

Thanks Mike.




Re: Shall I use std.json at my own risks ?

2013-11-13 Thread jean christophe

On Wednesday, 13 November 2013 at 06:16:07 UTC, Rob T wrote:

I need my Gtkd application to maintain a (possibly big) 
archive database of financial records downloaded daily from 
the server application. In my case JSON seems to be the most 
convenient format. Please let me know if, according to you, 
std.json will be cast aside like std.xml.


...

I guess what I'm saying is that while std.json is rock solid 
and very fast, you may want to consider better alternatives to 
the JSON format unless there's a technical reason why JSON must 
be used.


Have fun :)

--rt


Well first thank you for sharing your experiences.

You mentioned that a) std.json is solid and fast and b) it's not 
due to be deprecated. You've really helped me to make my choice. 
I'm going to use that module. It'd be easier to implement the 
retrieval of data from the server application side, which is 
written in PHP. For example, a simple 
`json_encode($bigDataObject)` would be fair enough to send data 
to the desktop application.


I agree that the std.json API is not sexy. But if it is reputed 
solid and fast, why not just keep it, gently fix possible bugs, 
and, for those who need fancier access to the JSON data, wrap it 
in some kind of std.json.helper or similar extension. Jonathan 
mentioned above that it is not range-based, which is a 
shortcoming, as ranges are one of the paradigms of D. IMO, it's 
important to have a stable standard library onto which one can 
build real application programs in D. Too much forking is bad.


BTW I've tested the use of std.json with import std.parallelism 
and it works. That's pretty good news. The example below is 
borrowed from Ali.


import std.stdio;
import std.json;
import std.conv;
import std.file;
import std.random;
import std.parallelism;
import core.thread;
import core.time;

struct Employee
{
  int id;
  string firstName;
  string lastName;

  void initStuff(JSONValue employeeJson)
  {
    writefln("Start long task for Employee %s", id);
    JSONValue[string] employee = employeeJson.object;
    firstName = employee["firstName"].str;
    lastName  = employee["lastName"].str;
    Thread.sleep(uniform(1, 3).seconds); // wait for a while
    writefln("Done long task for Employee %s", id);
  }
}

void main()
{
  auto content =
    `{"employees": [
       { "firstName":"John",   "lastName":"Doe"},
       { "firstName":"Anna",   "lastName":"Smith"},
       { "firstName":"Peter",  "lastName":"Jones"},
       { "firstName":"Kim",    "lastName":"Karne"},
       { "firstName":"Yngwee", "lastName":"Malmsteen"},
       { "firstName":"Pablo",  "lastName":"Moses"} ]
    }`;

  JSONValue[string] document = parseJSON(content).object;
  JSONValue[] employees = document["employees"].array;

  // Use the loop index as the id: incrementing a shared counter
  // from inside a parallel foreach would be a data race.
  foreach (i, employeeJson; parallel(employees))
  {
    auto e = Employee(cast(int) i);
    e.initStuff(employeeJson);
  }
}

Gives :

Start long task for Employee 0
Start long task for Employee 4
Start long task for Employee 5
Start long task for Employee 1
Start long task for Employee 3
Start long task for Employee 2
Done long task for Employee 4
Done long task for Employee 5
Done long task for Employee 1
Done long task for Employee 3
Done long task for Employee 0
Done long task for Employee 2










Shall I use std.json at my own risks ?

2013-11-12 Thread jean christophe


Hello

would you guys say that std.json is a good or bad choice for a 
desktop application? I've read many threads about it on the 
forum and finally I don't really know what to do Oo`


I need my Gtkd application to maintain a (possibly big) archive 
database of financial records downloaded daily from the server 
application. In my case JSON seems to be the most convenient 
format. Please let me know if, according to you, std.json will 
be cast aside like std.xml.


Thanks.

PS: As I'm new to the forum, I'd like to thank the D core 
community for such a GREAT language. I shall admit that it was 
difficult to abandon Emacs :s Anyway I've not been so positively 
impressed by a new language since Java 0.




Re: Access template parameters at runtime

2012-08-10 Thread Christophe Travert
"Henning Pohl" , dans le message (digitalmars.D:174572), a écrit :
> That is what I was trying first, but I could not make it work. 
> Maybe you can show me how it's done?

For example:

import std.stdio;

template TupleToArray(T...)
{
  static if (T.length == 1)
  {
enum TupleToArray = [T[0]];
  }
  else
  {
enum TupleToArray = TupleToArray!(T[0..$-1]) ~ T[$-1];
  }
}

void main()
{
  alias TupleToArray!(1, 2, 3) oneTwoThree;
  foreach (i; 0..3)
writeln(oneTwoThree[i]);
}

output:
1
2
3

TupleToArray should have proper checks to make the code clean.
There must be something like that somewhere in Phobos, or it 
should be added.
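A variant with such checks could look like this (a sketch; the template constraint rejects empty tuples and tuples whose elements cannot form an array literal, and the name is made up):

```d
// [T] on a value tuple is already an array literal, so the body is a
// one-liner; the constraint does the checking.
template TupleToArrayChecked(T...)
if (T.length > 0 && is(typeof([T])))
{
    enum TupleToArrayChecked = [T];
}
```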

-- 
Christophe


Re: Access template parameters at runtime

2012-08-10 Thread Christophe Travert
"Henning Pohl" , dans le message (digitalmars.D:174569), a écrit :
> On Friday, 10 August 2012 at 14:10:38 UTC, Vladimir Panteleev 
> wrote:
>> On Friday, 10 August 2012 at 14:10:02 UTC, Vladimir Panteleev
>> wrote:
>>> On Friday, 10 August 2012 at 14:05:16 UTC, Henning Pohl wrote:
 Oups, sorry, imagine there isn't one.

 So the error is: variable idx cannot be read at compile time.
>>>
>>> You can't index a tuple during compilation.
>>
>> Sorry, meant to say - during runtime.
> 
> Thats it, thank you :]

Note that if your design requires you to have a tuple, you may build 
the array at compile time, so that you can index it at run time.


Re: The review of std.hash package

2012-08-09 Thread Christophe Travert
If a hash is a range, it's an output range, because it's something 
you feed data to. Output ranges have only one method: put. Johannes 
used this method. But it's not sufficient: you need something to 
start and to finish the hash.

To bring consistency to the library, we should not remove these 
start and finish methods. We should make all output ranges of the 
library use the same functions.

In the library, we have really few output ranges. We have writable 
input ranges and we have Appender. Is there more? There should be 
files, sockets, maybe even signals, but IIRC these don't implement 
output ranges at the moment. What did I miss?

Appender doesn't use a finish method, but we have to 'get the 
result' of the appender, and for this we use appender.data. This 
name is not appropriate for generically getting a result or 
terminating an output range.

So we need a name that fits most output range uses. start/finish 
doesn't sound bad. open/close fits files and sockets, but maybe 
not all output ranges. Relying solely on constructors, opCall or 
alias this seems dangerous to me.
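For illustration, here is what a "finishable" output range could look like under a put/finish convention (a sketch; the Summer type and the convention itself are hypothetical):

```d
import std.range.primitives : isOutputRange;

// A trivial output range that accumulates what it is fed; finish()
// returns the result and resets the accumulator, playing the role
// that appender.data plays for Appender.
struct Summer
{
    private int total;
    void put(int x) { total += x; }
    int finish() { auto r = total; total = 0; return r; }
}

// put alone already satisfies the output range concept.
static assert(isOutputRange!(Summer, int));
```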

-- 
Christophe


Re: The review of std.hash package

2012-08-08 Thread Christophe Travert
Johannes Pfau , dans le message (digitalmars.D:174478), a écrit :
> but I don't know how make it an overload. See thread "overloading a
> function taking a void[][]" in D.learn for details.

Don't overload the function taking a void[][]. Replace it. 
void[][] is a range of void[].


Re: The review of std.hash package

2012-08-08 Thread Christophe Travert
"Chris Cain" , dans le message (digitalmars.D:174477), a écrit :

I think you misunderstood me (and it's probably my fault, since I 
don't know much about hash functions). I wanted to compare two 
kinds of concepts:

1/ message digest functions, like md5 or sha1, used on large files,
which is what is covered by this std.hash proposal.
2/ small hash functions, like what is used in an associative array, 
called toHash when used as a member function.

And I didn't think of:
3/ cryptographic hash functions

My opinion was that in a module or package called hash, I expect 
tools concerning #2. But #1 and #2 can coexist in the same package. 
The proposed std.hash.hash defines a digest concept for #1. That's 
why I would rather have it named std.hash.digest, leaving room in 
the hash package for other concepts, like small hash functions that 
can be used in associative arrays (#2).

I don't know the difference between #1 and #3, so I can't tell if 
they should share a common package. In any case, I think putting #3 
in a crypto package makes sense.

Having 3 different packages seems too much to me. #1 is too 
restricted to be a whole package IMHO, and should go along with #2 
or #3.

-- 
Christophe


Re: The review of std.hash package

2012-08-08 Thread Christophe Travert
"Chris Cain" , dans le message (digitalmars.D:174466), a écrit :
> On Wednesday, 8 August 2012 at 13:38:26 UTC, 
> trav...@phare.normalesup.org (Christophe Travert) wrote:
>> I think the question is: is std.hash going to contain only
>> message-digest algorithm, or could it also contain other hash 
>> functions?
>> I think there is enough room in a package to have both 
>> message-digest
>> algorithm and other kinds of hash functions.
> 
> Even if that were the case, I'd say they should be kept separate. 
> Cryptographic hash functions serve extremely different purposes 
> from regular hash functions. There is no reason they should be 
> categorized the same.

They should not be categorized the same. I don't expect a regular 
hash function to pass the isDigest predicate. But they have many 
similarities, which explains why they are all called hash 
functions. There is enough room in a package to put several 
related concepts!

Here, we have a package of 4 files, with a total number of lines 
that is about one third of the single std.algorithm file (which is 
probably too big, I concede). There aren't hundreds of 
message-digest functions to add here.

If it were me, I would have the presently reviewed module 
std.hash.hash be called std.hash.digest, and leave room there for 
regular hash functions. In any case, I think regular hashes HAVE 
to be in a std.hash module or package, because people looking for 
a regular hash function will look there first.




Re: The review of std.hash package

2012-08-08 Thread Christophe Travert
"Regan Heath" , dans le message (digitalmars.D:174462), a écrit :
> "Message-Digest Algorithm" is the proper term, "hash" is another, correct,  
> more general term.
> 
> "hash" has other meanings, "Message-Digest Algorithm" does not.

I think the question is: is std.hash going to contain only 
message-digest algorithm, or could it also contain other hash functions?
I think there is enough room in a package to have both message-digest 
algorithm and other kinds of hash functions.



Re: The review of std.hash package

2012-08-08 Thread Christophe Travert
I'm not familiar with hash functions in general.

I think the core of std.hash is the digest function:

digestType!Hash digest(Hash)(scope const(void[])[] data...)
if (isDigest!Hash)
{
    Hash hash;
    hash.start();
    foreach (datum; data)
        hash.put(cast(const(ubyte[])) datum);
    return hash.finish();
}

That seems to be too restrictive: you can only provide a void[][], 
or one or several void[], but you should be able to give it any 
range of void[] or of ubyte[], like:

auto dig = file.byChunk.digest!MD5;

That's the point of the range interface.

This can be done by templatizing the function, something like 
(untested):

template digest(Hash) if (isDigest!Hash)
{
    auto digest(R)(R data)
    if (isInputRange!R && is(ElementType!R : void[]))
    {
        Hash hash;
        hash.start();
        data.copy(hash);
        return hash.finish();
    }
}

An interesting overload for a range of single ubytes could be 
provided. This overload would fill a buffer with data from this 
range, feed the hash, and start again.
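A sketch of that ubyte-range overload, assuming the proposal's start/put/finish digest interface (the function name and buffer size are made up):

```d
import std.range.primitives : isInputRange, ElementType;

// Buffer bytes from the range and feed the hash one chunk at a
// time, so put is not called once per byte.
auto digestByByte(Hash, R)(R data)
if (isInputRange!R && is(ElementType!R : ubyte))
{
    Hash hash;
    hash.start();
    ubyte[1024] buf;
    size_t n = 0;
    foreach (ubyte b; data)
    {
        buf[n++] = b;
        if (n == buf.length)
        {
            hash.put(buf[]); // flush a full chunk
            n = 0;
        }
    }
    if (n)
        hash.put(buf[0 .. n]); // flush the tail
    return hash.finish();
}
```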




Re: std.d.lexer requirements

2012-08-07 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:174421), a écrit :
> On 2012-08-07 12:06, Jonathan M Davis wrote:
> 
>> It's easier to see where in the range of tokens the errors occur. A delegate
>> is disconnected from the point where the range is being consumed, whereas if
>> tokens are used for errors, then the function consuming the range can see
>> exactly where in the range of tokens the error is (and potentially handle it
>> differently based on that information).
> 
> Just pass the same token to the delegate that you would have returned 
> otherwise?

That's what I would do. If you have to define a way to return error 
information as a token, better use it again when using delegates.
Personally, I would have the delegate be:
int delegate(Token);
A return value of 0 means: continue parsing. Any other value is an 
error number and stops the parser (makes it empty). The error 
number can be retrieved from the empty parser with a specific 
function.
If you want to throw, just throw in the delegate. No need to return a 
specific value for that.

But a bool return value may be enough too...
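The protocol sketched above, with hypothetical Token and driver names (a sketch; a real lexer would produce tokens lazily rather than take an array):

```d
struct Token { string text; bool isError; }

// Drives lexing with the int-returning error delegate described
// above: 0 from the delegate means skip the error and continue,
// any other value stops and is returned as the error code.
int lexAll(Token[] input,
           int delegate(Token) onError,
           void delegate(Token) sink)
{
    foreach (t; input)
    {
        if (t.isError)
        {
            auto rc = onError(t);
            if (rc != 0)
                return rc; // "makes it empty": stop lexing
            continue;      // ignore this error token
        }
        sink(t);
    }
    return 0;
}
```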


Re: std.d.lexer requirements

2012-08-07 Thread Christophe Travert
Walter Bright , dans le message (digitalmars.D:174394), a écrit :
> On 8/7/2012 1:14 AM, Jonathan M Davis wrote:
>> But you can also configure the lexer to return an error token instead of 
>> using
>> the delegate if that's what you prefer. But Walter is right in that if you
>> have to check every token for whether it's an error, that will incur 
>> overhead.
>> So, depending on your use case, that could be unacceptable.
> 
> It's not just overhead - it's just plain ugly to constantly check for error 
> tokens. It's also tedious and error prone to insert those checks.

It's not necessarily ugly, because of the powerful range design. 
You can branch the lexer into a range adapter that just ignores 
error tokens, or throws when it meets an error token.

For example, just use:
auto tokens = data.lexer.throwOnErrorToken;

I don't think this is more ugly than:
auto tokens = data.lexer!(complex signature) { throw LexException; };

But yes, there is overhead, so I understand returning error tokens is 
not satisfactory for everyone.
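Such an adapter could look roughly like this (all names hypothetical, including the Token type; it assumes the wrapped range yields tokens with an isError flag):

```d
import std.range.primitives : empty, front, popFront;

struct Token { string text; bool isError; }

// Forwards the wrapped token range unchanged, but throws as soon as
// an error token reaches front.
struct ThrowOnErrorToken(R)
{
    R source;
    bool empty() { return source.empty; }
    Token front()
    {
        auto t = source.front;
        if (t.isError)
            throw new Exception("lex error near '" ~ t.text ~ "'");
        return t;
    }
    void popFront() { source.popFront(); }
}

auto throwOnErrorToken(R)(R source)
{
    return ThrowOnErrorToken!R(source);
}
```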

> I don't see any advantage to it.

Storing the error somewhere can be of use.
For example, you may want to lex a whole file into an array of 
tokens, and then deal with your errors as you analyse the array of 
tokens. Of course, you can always make a delegate to store the 
error somewhere, but it is easier if this somewhere is in your 
token pile.

What I don't see any advantage in is using a delegate that can 
only return or throw. A policy does the job:
auto tokens = data.lexer!(ExceptionPolicy.throwException);
That's clean too.

If you want the delegate to be of any use, then it must have 
data to process. That's why I said we have to worry about the 
signature of the delegate.

-- 
Christophe



Re: std.d.lexer requirements

2012-08-07 Thread Christophe Travert
Walter Bright , dans le message (digitalmars.D:174393), a écrit :
> If the delegate returns, then the lexer recovers.

That's an option, if there is only one way to recover (which is a 
reasonable assumption).

You wanted the delegate to "decide what to do with the errors (ignore, 
throw exception, or quit)".

Throwing is handled, but not ignore/quit. Jonathan's solution (a 
delegate returning a bool) is good. It could also be a delegate 
returning an int, 0 meaning continue, and any other value being an 
error code that can be retrieved later. It could also be a number 
of characters to skip (0 meaning break).



Re: std.d.lexer requirements

2012-08-07 Thread Christophe Travert
Walter Bright , dans le message (digitalmars.D:174360), a écrit :
> On 8/6/2012 12:00 PM, Philippe Sigaud wrote:
>> Yes, well we don't have a condition system. And using exceptions
>> during lexing would most probably kill its efficiency.
>> Errors in lexing are not uncommon. The usual D idiom of having an enum
>> StopOnError { no, yes } should be enough.
> 
> 
> That's why I suggested supplying a callback delegate to decide what to do 
> with 
> errors (ignore, throw exception, or quit) and have the delegate itself do 
> that. 
> That way, there is no customization of the Lexer required.

It may be easier to take into account a few cases (returning an 
error token and throwing is enough, so that is a basic static if) 
than to define a way to integrate a delegate (what should the 
delegate's signature be, what value to return to stop, how to 
provide ways to recover, etc).


Re: Functional programming in D and some reflexion on the () optionality.

2012-08-07 Thread Christophe Travert
Timon Gehr , dans le message (digitalmars.D:174361), a écrit :
> On 08/06/2012 09:42 PM, Christophe Travert wrote:
>> Timon Gehr , dans le message (digitalmars.D:174329), a écrit :
>>> On 08/06/2012 07:20 PM, Christophe Travert wrote:
>>>>
>>>> What do you think?
>>>
>>> Creating byzantine language rules to cater to unimportant or
>>> non-existent use cases will slaughter the language.
>>
>> What exactly do you consider byzantine here, whatever that means?
> 
> byzantine means involved. Why deliberately make the language more
> complicated for no gain whatsoever?
> 
>> Implicit cast is an already defined feature. Clarifying the way
>> parenthesis-less function calls exist by adding a casting rule is making
>> the langage more simple IMHO,
> 
> I don't know what to respond to this. Are you serious?
> 
>> and might improve the current position. Of
>> course, if your point is that parenthesis-less function calls are
>> unimportant or non-existent,
> 
> It isn't. My point is that there actually is no issue that would
> require some complex solution.

Misha's post reminded me that parenthesis-less function call rules 
are not that complicated, although I think he omitted one or a few 
things, like assigning a function to an auto parameter or passing 
a function to a template. I agree that the situation is sufficient 
and can be kept that way, and that there is no big issue.

However, I had the impression from deadalnix's post that what he 
wanted to discuss was something that would make functions real 
first-class types, that you could use, assign, etc, without using 
function pointers (although there have to be function pointers 
internally, of course). If foo is a function, I think making 
"auto a = foo;" be a function is something that is expected in a 
language where functions are first-class types. Thus, making 
parenthesis-less function expressions be functions. I maintain 
that one implicit cast rule is not more complicated than the lines 
Misha used to describe parenthesis-less function calls, plus some 
probably missing cases.

I understand the D language does not like implicit casts. The 
approach was to make as few implicit casts as possible and loosen 
the rule parsimoniously, to avoid creating a mess. That rule may 
be more important than making functions first-class types in D, or 
defining parenthesis-less function calls as an implicit cast.

Now, a little off-topic: implicit casts and rewrites are already 
everywhere in the language. Many features already work as "attempt 
something; if it does not work, rewrite and attempt again". That's 
how the compiler works in many cases, from what I briefly caught. 
Describing as many language features as possible as rewrites is a 
way that may make the language easier, and provides a way to 
describe feature interactions too, by prioritizing or restricting 
combinations of rewrites. Independently of the parenthesis-less 
function call issue, I think that this may be a way to redefine 
the language, and open doors to clarify a few tricky issues. I may 
very well be mistaken, but I think this approach should be 
considered when discussing language features.

-- 
Christophe



Re: Functional programming in D and some reflexion on the () optionality.

2012-08-06 Thread Christophe Travert
Timon Gehr , dans le message (digitalmars.D:174329), a écrit :
> On 08/06/2012 07:20 PM, Christophe Travert wrote:
>>
>> What do you think?
> 
> Creating byzantine language rules to cater to unimportant or
> non-existent use cases will slaughter the language.

What exactly do you consider byzantine here, whatever that means? 
Implicit cast is an already defined feature. Clarifying the way 
parenthesis-less function calls exist by adding a casting rule is 
making the language more simple IMHO, and might improve the 
current position. Of course, if your point is that 
parenthesis-less function calls are unimportant or non-existent, 
then I understand your point of view, but other people seem to 
think differently.

-- 
Christophe


Re: Functional programming in D and some reflexion on the () optionality.

2012-08-06 Thread Christophe Travert
deadalnix:
> The same way, the difference between a delegate and an expression don't 
> exist anymore.

int fun();
int fun(int t);

One solution would be to find a way that would enable fun to be 
both a function and its return type, and that would enable 1.fun 
to be both a delegate and its return type.

This can be achieved by implicit casting. The type of the 
expression fun is int function(), but if an int is expected, 
execute fun and use its return value. Like:

struct Function
{
  int fun();
  alias fun this;
}

One difference with current behavior would be that "fun;" alone 
doesn't actually execute the function. Statements consisting of a 
single parameterless function expression could be made to call the 
function, although I don't think this is a good idea, because it 
would be an exception to the previous rule, and a parenthesis-less 
call is more a functional programming feature, and should IMHO not 
be used when a side effect is desired.

Also, in:
auto bar = fun;
bar is of type int function(), and not of type int. If the 
function is pure, it doesn't make a great difference, but if the 
function is impure, then it is called each time bar is present in 
an expression that requires an int.
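The struct Function idea above can be tried out concretely today with alias this (a sketch; the names and the wrapped function are made up):

```d
// Wraps a function pointer; alias this to the member function makes
// the wrapper implicitly convert to int by calling fun, which calls
// the wrapped function.
struct Function
{
    int function() fp;
    int fun() { return fp(); }
    alias fun this;
}

int answer() { return 42; }

void main()
{
    auto f = Function(&answer);
    int x = f; // implicit conversion: calls fun(), hence answer()
    assert(x == 42);
}
```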

The same way, the dot operator on a single-parameter free or 
member function would return a delegate, that is implicitly cast 
to its return value if the return type is expected.

This would break code where the use of parenthesis-less function 
calls is hazardous, but hopefully not so much where it is 
legitimate [1].

What do you think?

-- 
Christophe

[1] templates will have to be taken care of: "a.map!(x=>x+1).array" 
would try to instantiate array with a delegate type, whereas it 
should be instantiated with a range type. From the discussion about 
templates on enum types in this newsgroup, there is some opposition 
to making templates instantiate after an implicit cast, because of 
the mess that can arise and the technical difficulties for the 
compiler. However, if this feature is limited and controlled, I 
think it can be worth it.


Re: enums and std.traits

2012-08-06 Thread Christophe Travert
Jonathan M Davis , dans le message (digitalmars.D:174310), a écrit :
>> IMO, the behavior should be this: when trying to call the template with
>> an argument that is an enum type based on string, the compiler should try
>> to instantiate the template for this enum type, and isSomeString should
>> fail. Then, the compiler will try to instantiate the template for
>> strings, which works. Thus, the template is called with a string
>> argument, which is the enum converted to a string.
> 
> I don't believe that the compiler ever tries twice to instantiate _any_ 
> template. It has a specific type that it uses to instantiate the template 
> (usually the exact type passed or the exact type of the value passed in the 
> case of IFTI - but in the case of IFTI and arrays, it's the tail const 
> version 
> of the type that's used rather than the exact type). If it works, then the 
> template is instantiated. If it fails, then it doesn't. There are no second 
> tries with variations on the type which it could be implicitly converted to.
> 
> And honestly, I think that doing that would just make it harder to figure out 
> what's going on when things go wrong with template instantiations. It would 
> be 
> like how C++ will do up to 3 implicit conversions when a function is called, 
> so you don't necessarily know what type is actually being passed to the 
> function ultimately. It can be useful at times, but it can also be very 
> annoying. D explicitly did not adopt that sort of behavior, and trying 
> multiple types when instantiating a template would not be in line with that 
> decision.

If someone implements a library function taking a string, people 
start to use that function with an enum based on string, which is 
fine, since an enum implicitly casts to its base type. Now the 
library writer finds a way to make his library more generic, and 
templatises his function to take any dchar range. All code using 
enums instead of strings breaks. Or there may be pressure on the 
library implementer to add loads of template specializations to 
make the template work with enums. There is something wrong here: 
enums work for string functions, but not for ones that are 
templates. It forces the user to check if the function he wants 
to use is a template before trying to use it with something that 
implicitly casts to the function's argument type. This is a 
problem that can be avoided by trying to instantiate the template 
with types that the argument implicitly casts to.

Of course, as you stated, mess can arise, because you don't know right 
away which template instantiation is going to be used. But there would be 
much less mess than in C++. First, D has a more conservative approach to 
implicit conversion than C++: if an implicit conversion is used, it will be 
one that is visible in the type's declaration, and that the type's 
implementer wanted. The problems would be much more controlled than in 
C++. Second, D has powerful template constraints. You can make sure the 
argument type given to the template is of a kind that will work 
correctly for this function.

I don't think the mess would be huge, particularly for enums, which are 
more manifest constants than distinct types in D.
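For illustration, a minimal sketch of the breakage being discussed (the enum and function names here are hypothetical, not from the thread; this assumes current Phobos behaviour, where isSomeString rejects enums):

```d
import std.traits : isSomeString;

enum Color : string { red = "red", blue = "blue" }

// Template constrained on strings: rejects the enum,
// because isSomeString is false for an enum based on string.
string quoted(T)(T s) if (isSomeString!T)
{
    return `"` ~ s ~ `"`;
}

// Plain function: the enum implicitly converts to string and works.
string quotedPlain(string s)
{
    return `"` ~ s ~ `"`;
}

void main()
{
    static assert(!isSomeString!Color);
    static assert(!__traits(compiles, quoted(Color.red)));
    assert(quotedPlain(Color.red) == `"red"`);
}
```

So the same call site works with the non-template function but breaks when the function is templatized, which is exactly the asymmetry described above.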

-- 
Christophe


Re: Why no implicit cast operators?

2012-08-06 Thread Christophe Travert
"Tommi" , dans le message (digitalmars.D:174314), a écrit :
> In D it's not possible to make opCast operators implicit.
> Therefore I see no way of making "transparent wrappers"; like
> structs which could be used as a drop-in replacement for plain
> old data types.
> 
> E.g if I wanted to make a SafeInt struct, that behaves otherwise
> just like an int, but when operators like +=, *=, ++, -- etc are
> used with it, my custom SafeInt operators would check that
> there's no integer overflow. If you use alias this to _reveal_
> the internal int value of SafeInt, then that int value's default
> operators are used, and thus no overflow checking.
> 
> I find the lack of implicit casting a defect of the language.
> I hope that I'm wrong.

Does alias this not fulfill your goal?
http://dlang.org/class.html#AliasThis
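For reference, a sketch of the alias this approach and of the limit Tommi describes (SafeInt here is a toy, not a real library type): operators overloaded on the struct are checked, but once the value has implicitly converted to int, plain int semantics apply and the check is gone.

```d
import core.checkedint : adds;

struct SafeInt
{
    int value;
    alias value this;  // implicit conversion to int

    // Checked addition; member operators take precedence over alias this.
    SafeInt opBinary(string op : "+")(int rhs)
    {
        bool overflow;
        int r = adds(value, rhs, overflow);
        assert(!overflow, "integer overflow");
        return SafeInt(r);
    }
}

void main()
{
    auto a = SafeInt(1);
    int b = a;        // alias this: implicit conversion out
    auto c = a + 2;   // checked operator on the wrapper
    assert(b == 1 && c.value == 3);
    // but: arithmetic on b now wraps around silently -- the check is lost
}
```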


Re: enums and std.traits

2012-08-05 Thread Christophe Travert
Jonathan M Davis , dans le message (digitalmars.D:174267), a écrit :
> On Saturday, August 04, 2012 15:22:34 Jonathan M Davis wrote:
>> On Sunday, August 05, 2012 00:15:02 Timon Gehr wrote:
>> > T fun(T)(T arg) if(isSomeString!arg){
>> > 
>> >  return arg~arg[0];
>> > 
>> > }

IMO, the behavior should be this: when trying to call the template with 
an argument whose type is an enum based on string, the compiler should try 
to instantiate the template for this enum type, and isSomeString should 
fail. Then, the compiler will try to instantiate the template for 
strings, which works. Thus, the template is called with a string 
argument, which is the enum converted to a string. 



Re: enums and std.traits

2012-08-05 Thread Christophe Travert
Andrej Mitrovic , dans le message (digitalmars.D:174259), a écrit :
> On 8/4/12, Jonathan M Davis  wrote:
>> snip
> 
> I agree with you. isSomeString!T predicate failing maybe isn't as
> serious as "!isSomeString!T" passing and ending up with wrong results.
> At the very least this should have been discussed about before the
> change.
> 
> And why should a template care whether what's passed is an enum or
> not? This complicates code by having to write multiple constraints or
> multiple templates just to handle enums as a special case.

Someone might want to specialize a template for some kinds of enums. I 
think the right approach is to try to instantiate the template with the 
enum type and, on failure, to try to instantiate it with the underlying 
type, just as fun(foo) tries to call fun with foo's type first, then with 
something foo implicitly converts to.

-- 
Christophe


Re: std.d.lexer requirements

2012-08-04 Thread Christophe Travert
Jonathan M Davis , dans le message (digitalmars.D:174223), a écrit :
> On Saturday, August 04, 2012 15:32:22 Dmitry Olshansky wrote:
>> I see it as a compile-time policy, that will fit nicely and solve both
>> issues. Just provide a templates with a few hooks, and add a Noop policy
>> that does nothing.
> 
> It's starting to look like figuring out what should and shouldn't be 
> configurable and how to handle it is going to be the largest problem in the 
> lexer...

Yes, I figured out a policy could be used too, but since the beginning of 
the thread, that makes a lot of things to configure! Jonathan would 
have trouble trying to implement them all. Choices have to be made. That's 
why I proposed using adapter ranges to do the buffering 
instead of slicing, and to build the lookup table. Done correctly, this 
can keep the core of the lexer implementation clean without losing 
efficiency (I hope). If this policy for parsing literals is the only 
thing that remains to be configured directly in the core of the lexer 
with static if, then it's reasonable.


Re: std.d.lexer requirements

2012-08-04 Thread Christophe Travert
Dmitry Olshansky , dans le message (digitalmars.D:174214), a écrit :
> Most likely - since you re-read the same memory twice to do it.

You're probably right, but if you do this right after the token is 
generated, the memory should still be in the processor's cache. And the 
operation on the first read should be very basic: just check that nothing 
illegal appears, and check for the end of the token. The cost is not 
negligible, but what you do with literal tokens can vary a lot, and what 
the lexer proposes may not be what the user wants. So the user may 
suffer the cost of literal decoding (including allocation of the 
decoded string, the copy of the characters, etc.) that he doesn't want, 
or will have to redo it his own way...

-- 
Christophe


Re: Let's not make invariants const

2012-08-04 Thread Christophe Travert
"Era Scarecrow" , dans le message (digitalmars.D:174206), a écrit :
>   I would think it does however during verbose output specifying 
> if an invariant or contract is changing data and that may alter 
> behavior.

Signatures in some places should be const, pure, and nothrow by default. This 
is the case for invariant() (if you consider it as a function) [1]. 
However, it is only possible to have defaults other than non-const, 
non-pure, throwing, if the language supports a way to remove those default 
attributes. Maybe this should be included in the language.

-- 
Christophe

[1] Actually, I would rather have a language where all functions are by 
default const (wrt all parameters, except this), pure, nothrow... But 
it seems D is not that language, and it's not going to be.


Re: std.d.lexer requirements

2012-08-04 Thread Christophe Travert
Jonathan M Davis , dans le message (digitalmars.D:174191), a écrit :
> On Thursday, August 02, 2012 11:08:23 Walter Bright wrote:
>> The tokens are not kept, correct. But the identifier strings, and the string
>> literals, are kept, and if they are slices into the input buffer, then
>> everything I said applies.
> 
> String literals often _can't_ be slices unless you leave them in their 
> original state rather than giving the version that they translate to (e.g. 
> leaving \© in the string rather than replacing it with its actual, 
> unicode value). And since you're not going to be able to create the literal 
> using whatever type the range is unless it's a string of some variety, that 
> means that the literals often can't be slices, which - depending on the 
> implementation - would make it so that that they can't _ever_ be slices.
> 
> Identifiers are a different story, since they don't have to be translated at 
> all, but regardless of whether keeping a slice would be better than creating 
> a 
> new string, the identifier table will be far superior, since then you only 
> need 
> one copy of each identifier. So, it ultimately doesn't make sense to use 
> slices 
> in either case even without considering issues like them being spread across 
> memory.
> 
> The only place that I'd expect a slice in a token is in the string which 
> represents the text which was lexed, and that won't normally be kept around.
> 
> - Jonathan M Davis

I thought it was not the lexer's job to process literals. Just split 
the input into tokens, and provide minimal info: TokenType, line, and column, 
along with the representation from the input. That's enough for a syntax 
highlighting tool, for example. Otherwise you'll end up doing complex 
interpretation, and the lexer will not be that efficient. Literal 
interpretation can be done in a second step. Do you think doing literal 
interpretation separately, when you need it, would be less efficient?

-- 
Christophe


Re: std.d.lexer requirements

2012-08-03 Thread Christophe Travert
deadalnix , dans le message (digitalmars.D:174155), a écrit :
>> The tokens are not kept, correct. But the identifier strings, and the
>> string literals, are kept, and if they are slices into the input buffer,
>> then everything I said applies.
>>
> 
> Ok, what do you think of that :
> 
> lexer can have a parameter that tell if it should build a table of token 
> or slice the input. The second is important, for instance for an IDE : 
> lexing will occur often, and you prefer slicing here because you already 
> have the source file in memory anyway.
> 
> The token always contains as a member a slice. The slice come either 
> from the source or from a memory chunk allocated by the lexer.

If I may add, there are several possibilities here:
 1- a real slice of the input range
 2- a slice of the input range created with .save and takeExactly
 3- a slice allocated in GC memory by the lexer
 4- a slice of memory owned by the lexer, which is reused for the next 
token (thus, the next call to popFront invalidates the token).
 5- a slice of memory from a lookup table.

All are useful in certain situations.
#1 is usable for sliceable ranges, and is definitely efficient when you 
don't have a huge amount of code to parse.
#2 is usable for forward ranges.
#3 is usable for any range, but I would not recommend it...
#4 is usable for any range.
#5 is best if you perform complicated operations with the tokens.

#1/#2 should not be very hard to code: when you start to lex a new 
token, you save the range, and when you find the end of the token, you 
just use takeExactly on the saved range.
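A sketch of how #1/#2 can look (nextToken is a hypothetical helper; a real lexer would be more involved):

```d
import std.algorithm : equal;
import std.ascii : isAlphaNum;
import std.range : takeExactly;
import std.range.primitives;  // save/empty/front/popFront for arrays

// Lex one identifier-like token: save the range at the token start,
// advance to the token's end while counting, then slice with
// takeExactly on the saved copy -- no per-token allocation.
auto nextToken(R)(ref R input)
{
    auto start = input.save;
    size_t len;
    while (!input.empty && isAlphaNum(input.front))
    {
        input.popFront();
        ++len;
    }
    return takeExactly(start, len);
}

void main()
{
    string src = "foo bar";
    assert(equal(nextToken(src), "foo"));
    assert(src == " bar");  // the rest of the input remains
}
```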

#4 requires an internal buffer. That's more code, but it has to be 
done at some point if you want to be able to use input ranges 
(which you have to). Actually, the buffer may be external, if you use a 
buffered-range adapter to make a forward range out of an input range. 
Having an internal buffer may be more efficient; that's something that has 
to be profiled.

#3 can be obtained from #4 by map!(x => x.dup).

#5 requires one of the previous options to be implemented: you need a 
slice saved somewhere before you can look into the lookup table. 
Therefore, I think #5 can be obtained, without a big loss of 
efficiency, by an algorithm external to the lexer. This would probably 
allow many ways to use the lexer. For example, you can filter out 
tokens that you don't want before building the table, which avoids an 
overfull lookup table if you are only interested in a subset of 
tokens.

#1/#2 with adapter ranges might be the only thing that is required to 
implement, although the API should allow #4 and #5 to be defined, either 
for the user to use the adapters blindly, or in case an internal 
implementation proves to be significantly more efficient.

-- 
Christophe


Re: std.d.lexer requirements

2012-08-03 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:174131), a écrit :
> static if(isNarrowString!R)
>  Unqual!(ElementEncodingType!R) first = range[0];
> else
>  dchar first = range.front;

I find it more comfortable to just use
first = range.front, with a range of char or ubyte.

This range does not have to be a string; it can be something over a 
file, stream, or socket. It can also be the result of an algorithm, because 
you *can* use algorithms on ranges of char, and it makes sense if you 
know what you are doing.

If Walter discovers the lexer does not work with a socket or a 
"file.byChunk.join", and that it has to do expensive UTF-8 decoding 
because it can only use ranges of dchar and not ranges of 
char (except for special-cased strings), he may not be happy.

If the range happens to be a string, I would use an adapter to make it 
appear as a range of char, not of dchar as the library likes to do. I 
think Andrei suggested that already.

-- 
Christophe


Re: std.d.lexer requirements

2012-08-02 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:174069), a écrit :
> On 2012-08-02 10:15, Walter Bright wrote:
> 
>> Worst case use an adapter range.
> 
> And that is better than a plain string?
> 
Because its front method does not do any decoding.


Re: std.d.lexer requirements

2012-08-02 Thread Christophe Travert
Andrei Alexandrescu , dans le message (digitalmars.D:174060), a écrit :
> I agree frontUnit and popFrontUnit are more generic because they allow 
> other ranges to define them.

Any range of dchar could have a representation (or you may want to call 
it something else) that returns a range of char (or ubyte). And I think 
that is more generic, because it uses a generic API (i.e. a range), which is 
very powerful: the representation can provide length, slicing, etc. 
that differ from the dchar length and so on. You don't want to 
duplicate all range methods by postfixing Unit...

>> I wonder how your call with Walter will turn out.
> 
> What call?

You proposed to Jonathan that he call Walter in an earlier post. I believe 
there is a misunderstanding.


Re: std.d.lexer requirements

2012-08-02 Thread Christophe Travert
"Jonathan M Davis" , dans le message (digitalmars.D:174059), a écrit :
> In either case, because the consumer must do something other than simply 
> operate on front, popFront, empty, etc., you're _not_ dealing with the range 
> API but rather working around it.

In some cases a range of dchar is useful. In some cases a range of char is 
sufficient, and much more efficient. And for the UTF-aware programmer, it 
makes a lot of sense.

The fact that you sometimes have to buffer some information, because the 
meaning of one element is affected by the previous element, is a normal 
issue in many algorithms; it's not working around anything. Your lexer 
uses the range API; would you say it is just working around ranges because 
you have to take several characters (be they dchars) into account at 
the same time to know what they mean?



Re: std.d.lexer requirements

2012-08-02 Thread Christophe Travert
Walter Bright , dans le message (digitalmars.D:174015), a écrit :
> On 8/2/2012 12:49 AM, Jacob Carlborg wrote:
>> But what I still don't understand is how a UTF-8 range is going to be usable 
>> by
>> other range based functions in Phobos.
> 
> Worst case use an adapter range.
> 
> 

Yes

auto r = myString.byChar();

after implementing a byChar adapter range or just

auto r = cast(const(ubyte)[]) myString;

And it's a range of code units, not code points.
And it's usable in Phobos.
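To illustrate the difference the cast makes (assuming standard Phobos behaviour: iterating a string auto-decodes to code points, while the ubyte slice does not decode at all):

```d
import std.range : walkLength;

void main()
{
    string s = "héllo";               // 'é' takes two UTF-8 code units
    auto r = cast(const(ubyte)[]) s;  // range of code units; front does no decoding

    assert(s.walkLength == 5);        // 5 code points, decoded one by one
    assert(r.length == 6);            // 6 code units, no decoding
}
```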


Re: Let's stop parser Hell

2012-08-02 Thread Christophe Travert
"Jonathan M Davis" , dans le message (digitalmars.D:173942), a écrit :
> It may very well be a good idea to templatize Token on range type. It would 
> be 
> nice not to have to templatize it, but that may be the best route to go. The 
> main question is whether str is _always_ a slice (or the result of 
> takeExactly) of the orignal range. I _think_ that it is, but I'd have to make 
> sure of that. If it's not and can't be for whatever reason, then that poses a 
> problem.

It can't if it is a simple input range, like a file read with most 
'lazy' methods! Then you need either to transform the input range into a 
forward range, using a range adapter that performs buffering, or to perform 
your own buffering internally. You also have to decide how long the 
token will stay valid (I believe that if you want lexing to be blazing fast, 
you don't want to allocate for each token).

Maybe you want your lexer to work with ranges of strings too, like 
File.byLine or File.byChunk (the latter requires buffering if you split 
in the middle of a token...). But that may wait until a nice API for 
files, streams, etc. is found.

> If Token _does_ get templatized, then I believe that R will end up 
> being the original type in the case of the various string types or a range 
> which has slicing, but it'll be the result of takeExactly(range, len) for 
> everything else.

A range which has slicing doesn't necessarily return its own type when 
opSlice is used, according to hasSlicing. I'm pretty sure parts of 
Phobos don't take that into account. However, the result of 
takeExactly will always be the right type, since it uses opSlice when it 
can, so you can just use that.

Making a generic lexer that works with any forward range of dchar and 
returns a range of tokens, without performing decoding of literals, seems 
to be a good first step.

> I just made str a string to begin with, since it was simple, and I was still 
> working on a lot of the initial design and how I was going to go about 
> things. 
> If it makes more sense for it to be templated, then it'll be changed so that
> it's templated.

string may not be where you want to start, because it is a 
specialization for which you need to optimize UTF-8 decoding.

Also, you said in this thread that you only need to consider ASCII 
characters in the lexer, because non-ASCII characters are only used in 
non-keyword identifiers. That is not entirely true: EndOfLine defines 2 
non-ASCII characters, namely LINE SEPARATOR and PARAGRAPH SEPARATOR. 
  http://dlang.org/lex.html#EndOfLine
Maybe they should be dropped, since other non-ASCII whitespace is not 
supported. You may want the line count to be consistent with other 
programs. I don't know what text-processing programs usually consider an 
end of line.

-- 
Christophe


Re: Let's stop parser Hell

2012-08-01 Thread Christophe Travert
"Jonathan M Davis" , dans le message (digitalmars.D:173860), a écrit :
> struct Token
> {
>  TokenType type;
>  string str;
>  LiteralValue value;
>  SourcePos pos;
> }
> 
> struct SourcePos
> {
>  size_t line;
>  size_t col;
>  size_t tabWidth = 8;
> }

The occurrence of tabWidth surprises me.
What is col supposed to be? An index (code unit), a character number 
(code point), or an estimate of where the character is supposed to be 
printed on the line, given the provided tab width?

I don't think the lexer can really try to calculate at what column the 
character is printed, since that depends on the editor (if you want to use 
the lexer for syntax highlighting, for example) and on how it supports 
combining characters, zero-width or multi-column characters, etc. (which 
you may not want to have to decode).

You may want to provide the number of tabs met so far. Note that there 
is other whitespace that you may want to count, but you shouldn't have 
a very complicated SourcePos structure. It might be easier to have 
whitespace, end-of-line, and end-of-file tokens, and let the user filter out 
or take into account what he wants. Or just let the 
user look into the original string...



Re: yield iteration

2012-07-31 Thread Christophe Travert
Christophe Travert, dans le message (digitalmars.D:173787), a écrit :
> "bearophile" , dans le message (digitalmars.D:173647), a écrit :
>> Turning that in D code that uses opApply is not hard, but the 
>> code inflates 3X, and you can't use most std.algorithm on it.
> 
> I believe most std.algorithm that work on input range could be made to 
> work with opApply, or opApply-like delegates. It just wouldn't be 
> particularly efficient unless agressive inlining is used.
> 
> For example, filter could work like this for opApply-like delegates:
> 
> template filter(pred)
> {
>   auto filter(T)(int delegate(int delegate(ref T)) apply)
>   {
> return (int delegate(ref T) dg)
> {
>   return apply( (ref T t) { return pred(t)? dg(t): 1; });
     ^should read 0
> }
>   }
> }
> 
> -- 
> Christophe



Re: yield iteration

2012-07-31 Thread Christophe Travert
"bearophile" , dans le message (digitalmars.D:173647), a écrit :
> Turning that in D code that uses opApply is not hard, but the 
> code inflates 3X, and you can't use most std.algorithm on it.

I believe most std.algorithm functions that work on input ranges could be 
made to work with opApply, or opApply-like delegates. It just wouldn't be 
particularly efficient unless aggressive inlining is used.

For example, filter could work like this for opApply-like delegates:

template filter(pred)
{
  auto filter(T)(int delegate(int delegate(ref T)) apply)
  {
return (int delegate(ref T) dg)
{
  return apply( (ref T t) { return pred(t)? dg(t): 1; });
    }
  }
}

-- 
Christophe
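Putting the corrected version together (filterApply is a hypothetical name; this is a sketch of the idea above, with 0 meaning "continue iterating" as in the opApply protocol, applying the "^should read 0" correction from the follow-up post):

```d
template filterApply(alias pred)
{
    auto filterApply(T)(int delegate(int delegate(ref T)) apply)
    {
        return (int delegate(ref T) dg)
        {
            // returning 0 from the wrapped delegate means "keep iterating"
            return apply((ref T t) { return pred(t) ? dg(t) : 0; });
        };
    }
}

void main()
{
    int[] data = [1, 2, 3, 4];

    // An opApply-like iteration function over data.
    int apply(int delegate(ref int) dg)
    {
        foreach (ref x; data)
            if (auto r = dg(x)) return r;
        return 0;
    }

    int[] evens;
    auto filtered = filterApply!(x => x % 2 == 0)(&apply);
    filtered((ref int x) { evens ~= x; return 0; });
    assert(evens == [2, 4]);
}
```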


Re: Impressed

2012-07-30 Thread Christophe Travert
Jonathan M Davis , dans le message (digitalmars.D:173382), a écrit :
> scope on local variables is going away for pretty much the same reason that 
> delete is. They're unsafe, and the fact that they're in the core language 
> encourages their use. So, they're being removed and put into the standard 
> library instead.
> 

I don't mind scope going away, since it can be replaced with a library 
solution. But scope is not more dangerous than a static array, or simple 
function variables. Slice them, or take their reference, and you're in 
for trouble. Do you think they should be removed as core features of 
the language?


Re: Can you do this in D?

2012-07-26 Thread Christophe Travert
"bearophile" , dans le message (digitalmars.D:173297), a écrit :
>> I'm not sure what you mean. Do you mean I can go edit the open 
>> source compiler and add in my own language feature? Or does the 
>> ability to add a $/@ operator already exist?
> 
> I mean that D compiler writers don't need to introduce new syntax 
> to add that feature. But I don't see lot of people asking for it.

The $ prefix operator *is* new syntax. It impacts the lexer and the parser, 
and new precedence rules would have to be created. The only thing that 
already exists is the syntax to overload such an operator, if it existed.

I don't see anything like new operators coming into the language any time 
soon.

-- 
Christophe


Re: Semantics of postfix ops for classes

2012-07-25 Thread Christophe Travert
Don Clugston , dans le message (digitalmars.D:173192), a écrit :
> The question really is, do postfix ++ and -- make sense for reference 
> types? Arguably not. From a theoretical sense, the existing behaviour 
> does make sense, but in practice, every time it is used, it is probably 
> a bug.
> 
> The only other reasonable option I can think of would be to make class++ 
> be of type void, so that you could still write
> bar1++;
> but not bar2 = bar1++;
> since the existing behaviour can be achieved by writing bar2 = ++ bar1;

Similarly, the language should provide a way to disable postfix ++ on 
a struct, since a struct can be a reference type.


Re: Take and website

2012-07-25 Thread Christophe Travert
Russel Winder , dans le message (digitalmars.D:173102), a écrit :
> 
> On Tue, 2012-07-24 at 13:56 -0400, Andrei Alexandrescu wrote:
> […]
>> The example is:
>>
>> int[] arr1 = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
>> auto s = take(arr1, 5);
>> assert(s.length == 5);
>> assert(s[4] == 5);
>> assert(equal(s, [ 1, 2, 3, 4, 5 ][]));
>>
>> Were you referring to this? Example code does not need to be generic,
>> and in this case it's fine if the code relies on random access because
>> it uses an array of integers.
> 
> That's the one.
> 
> s[4] relies on the fact that arr1 is an array:
> 
>  (takeExactly(recurrence!("a[n-1] + a[n-2]")(0L, 1L), cast(size_t)(n + 1)))[n]
> 
> fails with operator [] not defined, I find I have to:

This is expressed in the doc, not in the example:
> If the range offers random access and length, Take offers them as 
> well. 

recurrence does not offer random access, so take over a recurrence does not 
either. You may try to make this sentence clearer, but it's pretty clear to me.

-- 
Christophe


Re: Formal request to remove "put(OutRange, RangeOfElements)"

2012-07-23 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:173005), a écrit :
> Maybe I should put this request elsewhere? I'm not sure if there 
> is a place where I should put this?
> 
> I know this is not a very exciting issue, but I think it is a 
> very important to resolve, preferably sooner than later.
> 
> Maybe I'll try to help kick start this by listing all the modules 
> that would need to be changed?

You can file a bug report.

static assert(isOutputRange!(typeof(output), typeof(e)));
output.put(e);

Having the second line fail while the assertion passes is worthy of 
attention. And it makes fill fail, IIRC.

-- 
Christophe


Re: Random sampling next steps

2012-07-23 Thread Christophe Travert
Joseph Rushton Wakeling , dans le message (digitalmars.D:172997), a
 écrit :
> In other words, either your input range needs hasLength!R == true or 
> you need to manually specify the total number of items when calling 
> randomSample:
> 
> But what if the total number of items is not known in advance?  E.g. if you 
> are 
> reading a file, line by line, or reading records from a tape; you may know 
> the 
> total is finite, but you don't know what it actually _is_.
[snip]
> ... but doing something similar within RandomSample doesn't seem so easy.  
> Why? 
>   Because the static if()s that you'd require within the struct would not 
> depend 
> just on whether hasLength!R == true, but also on whether you'd passed a 
> size_t 
> total to the constructor.

Why not use takeExactly? This is the standard way to select a 
subset of the original range. I wouldn't even have provided the overload 
with 3 arguments, since the user is able to use takeExactly when necessary 
(which could be advised in the doc, in case the user doesn't know).

struct RandomSample(R) if (isInputRange!R && hasLength!R)
{
 ...// always use r.length, never total/available
}

auto randomSample(R)(R r, size_t n, size_t total)
if(isInputRange!R)
{
 return randomSample!(R, void)(takeExactly(r, total), n);
}

struct RandomSample(R) if(isInputRange!R && !hasLength!R)
{
...// always reservoir random sample
}

There is no more issue here.

> I also think it would be a good idea for the reservoir sampling technique to 
> emit a warning when in debug mode, to prompt the user to be _sure_ that they 
> can't specify the total number of points to sample from.  Is there a 
> recommended 
> method of doing something like this?

I don't think having a library pollute the build with compiler warnings is recommended.
 
> Alternatively, would people prefer to entirely separate the known-total and 
> unknown-total sampling methods entirely, so the choice is always manual?

RandomSample is a lazy range. RandomReservoirSample is not, and has a 
completely different implementation. IMHO, there is a fundamental 
difference that justifies having a separate function with a different 
name.

> Finally, if hasLength!R == false, is there any way of guaranteeing that the 
> input range is still going to be ultimately finite?  There could be some very 
> nasty worst-case behaviour in the case of infinite ranges.

isInfinite!Range.

However, a range could return false from empty indefinitely, should 
the implementer of the range forget to make empty an enum, or should the 
user hit a corner case (e.g. repeat(1).until(2)). But that's a general 
problem, one that would make most eager algorithms run into an infinite 
loop, starting with array and copy...

-- 
Christophe


Re: Just where has this language gone wrong?

2012-07-19 Thread Christophe Travert
Alex Rønne Petersen , dans le message (digitalmars.D:172728), a écrit :
> On 19-07-2012 16:36, Christophe Travert wrote:
>> "Petr Janda" , dans le message (digitalmars.D:172719), a écrit :
>>>> Array gets sorted, then doubles are removed (uniq) and then
>>>> everything is converted to a string (map).
>>>>
>>>> Everything was recently introduced around 2.059.
>>>
>>> Ok, but what is map!(). What's the point of the exclamation mark,
>>> is it a template specialization?
>>
>> Yes, !(...) is template specialization.
>> It is the equivalent of <...> in c++.
>> The parentheses can be omited if only one argument is passed after the
>> exclamation mark.
>>
>> map is a template of the std.algorithm module.
>> http://dlang.org/phobos/std_algorithm.html#map
>>
>> This kind of questions should go in digitalmars.D.learn.
>>
> 
> No, please, template instantiation. Specialization is something 
> completely different, and doesn't happen at the call site.
> 
> I don't mean to be overly pedantic, but I think OP has a C++ background 
> or similar, so wrong terminology is not going to be helpful.

You are right, it's my mistake (well, I can still send the mistake back 
to Petr...).


Re: Just where has this language gone wrong?

2012-07-19 Thread Christophe Travert
"Petr Janda" , dans le message (digitalmars.D:172727), a écrit :
> On Thursday, 19 July 2012 at 14:31:53 UTC, 
> trav...@phare.normalesup.org (Christophe Travert) wrote:
>> "q66" , dans le message (digitalmars.D:172716), a écrit :
>>> (so instead of calling a(b(c(d(e(f) you can just call 
>>> a.b.c.d.e.f())
>>
>> rather f.e.d.c.b.a, if you omit the empty parenthesis after 
>> each letter
>> (but f).
> 
> Ok, but the empty parenthesis is is important, it tells you about 
> whether it's a an object or a function.
> 
> It's another thing I hate about Ruby is that a parenthesis 
> enforcement is weak.

Properties (functions that behave like fields) don't require 
empty parentheses. This feature has been extended to all functions, 
leading to the current situation. Some people would like this to 
disappear, and to enforce strict @property. To get the function object, and 
not its result, take its address:
f == f() : the result
&f : the function.

Indeed, by looking at f, you can't tell if it is a function or an 
object. You can never tell much when you see an isolated symbol...
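A small sketch of the distinction (hypothetical example):

```d
int f() { return 42; }

void main()
{
    auto r = f;    // optional parentheses: this *calls* f
    auto p = &f;   // this takes the function itself
    static assert(is(typeof(r) == int));
    static assert(is(typeof(p) == int function()));
    assert(r == 42 && p() == 42);
}
```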


Re: Just where has this language gone wrong?

2012-07-19 Thread Christophe Travert
"Robik" , dans le message (digitalmars.D:172718), a écrit :
> On Thursday, 19 July 2012 at 14:21:47 UTC, Petr Janda wrote:
>> Hi,
> 
> Hi
> 
>> I'm an occasional lurker on the D forums just to see where the 
>> language is going,but I'm a little puzzled. In another thread I 
>> found this code
>>
>> auto r = [5, 3, 5, 6, 8].sort.uniq.map!(x => x.to!string);
> 
> Here's list what happens:
> 
>   1) Array gets sorted
>   2) Duplicate elements gets removed (only unique stay)
>   3) Then it get's maped by delegate. It converts numbers into 
> strings.
>  `r` variable will be ["3", "5", "6", "8"]

To be more precise, the `r` variable is a lazy range equivalent to this 
array; r.array would be this array.




Re: Just where has this language gone wrong?

2012-07-19 Thread Christophe Travert
"Petr Janda" , dans le message (digitalmars.D:172719), a écrit :
>> Array gets sorted, then doubles are removed (uniq) and then 
>> everything is converted to a string (map).
>>
>> Everything was recently introduced around 2.059.
> 
> Ok, but what is map!(). What's the point of the exclamation mark, 
> is it a template specialization?

Yes, !(...) is template specialization.
It is the equivalent of <...> in C++.
The parentheses can be omitted if only one argument is passed after the 
exclamation mark.

map is a template of the std.algorithm module.
http://dlang.org/phobos/std_algorithm.html#map

This kind of question should go in digitalmars.D.learn.

-- 
Christophe


Re: Just where has this language gone wrong?

2012-07-19 Thread Christophe Travert
"q66" , dans le message (digitalmars.D:172716), a écrit :
> (so instead of calling a(b(c(d(e(f))))) you can just call 
> a.b.c.d.e.f())

rather f.e.d.c.b.a, if you omit the empty parentheses after each letter 
(except f).


Re: Initialization of std.typecons.RefCounted objects

2012-07-19 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:172710), a écrit :
> One of the reason the implementation doesn't let you escape a 
> reference is that that reference may become (_unverifiably_) 
> invalid.

The same applies to a dynamic array: it is indistinguishable from a 
sliced static array. More generally, as long as you allow variables on 
the stack with no escaped-reference tracking, you can't ensure 
references remain valid. Even in @safe code.

If I want my references to remain valid, I use dynamic arrays and garbage 
collection. If I use Array, I accept that my references may die. Arrays 
that protect the validity of their references are awesome. But, IMHO, 
not at that cost.

> ...That said, I see no reason for the other containers (SList, 
> I'm looking at you), not to expose references.

I'm against not exposing references; besides, all containers will be 
implemented with custom allocators someday.

> The current work around? Copy-Extract, manipulate, re-insert. 
> Sucks.

IMO, what sucks even more is that arr[0].insert(foo) compiles while 
having no effect. arr[0] is an R-value, but calling a method on an 
R-value is allowed. I don't know the state of the debate about 
forbidding calls to non-const methods on R-values. I think this would 
break too much code.



Re: Initialization of std.typecons.RefCounted objects

2012-07-19 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:172700), a écrit :
> I think it would be better to "initialize on copy", rather than 
> default initialize. There are too many cases an empty array is 
> created, then initialized on the next line, or passed to 
> something else that does the initialization proper.

Not default-initializing Array has a cost for every legitimate use of an 
Array. I think people use Array more often than they create 
uninitialized ones that go unused until another Array instance is 
assigned to them, so Array would be more efficient if it were default 
initialized and never had to check for initialization again. But that's 
just speculation.

> You'd get the correct behavior, and everything else (except dupe) 
> works fine anyways.

Keeping the address of the content secret may be a valuable intention, 
but as long as properties and opIndex do not allow methods to be 
correctly forwarded, this is completely broken. Is there even the 
beginning of a plan to implement this? I don't see how properties or 
opIndex could safely forward methods that use references we do not 
control without escaping the references to them. That's not possible 
until D has complete control of escaping references, which is not 
planned for the near or distant future. Not to mention that complete 
control of escaping references breaks lots of code anyway, and might 
considerably decrease the need for ref-counted utilities like... Array.

Please, at least give me hope that there is light at the end of the 
tunnel.

-- 
Christophe


Re: Formal request to remove "put(OutRange, RangeOfElements)"

2012-07-18 Thread Christophe Travert
That sounds reasonable and justified. Let's wait and see whether people 
maintaining legacy code strongly oppose this.


Re: Octal Literals

2012-07-18 Thread Christophe Travert
"Dave X." , dans le message (digitalmars.D:172680), a écrit :
> Not that this really matters, but out of curiosity, how does this 
> template work?


Looking at the sources: if the template argument is a string, the 
program just computes the octal value as a human would, that is, it 
sums the digits multiplied by their respective powers of 8. 
If the template argument is not a string, it is converted to a 
string and the previous algorithm is applied.
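A minimal sketch of the idea (not the actual Phobos source; names are 
illustrative):

```d
// Compute the value of an octal literal given as a string, digit by
// digit (an ordinary function, so it is usable during CTFE).
int octalValue(string s)
{
    int result = 0;
    foreach (c; s)
    {
        assert(c >= '0' && c <= '7', "invalid octal digit");
        result = result * 8 + (c - '0');
    }
    return result;
}

// String overload: sum of digits times powers of 8.
template octal(string s)
{
    enum octal = octalValue(s);
}

// Non-string overload: convert to a string, then reuse the above.
template octal(alias n) if (is(typeof(n) : long))
{
    import std.conv : to;
    enum octal = octal!(n.to!string);
}

static assert(octal!"777" == 511);
static assert(octal!644 == 420);
```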

-- 
Christophe


Re: Initialization of std.typecons.RefCounted objects

2012-07-18 Thread Christophe Travert
I see you found the appropriate entry to discuss this bug:

http://d.puremagic.com/issues/show_bug.cgi?id=6153



Re: Initialization of std.typecons.RefCounted objects

2012-07-18 Thread Christophe Travert
Matthias Walter , dans le message (digitalmars.D:172673), a écrit :
> I looked at Bug #6153 (Array!(Array!int) failure) and found that the
>
> This exactly is what makes the following code fail:
> 
> Array!(Array!int) array2d;
> array2d.length = 1;
> array2d[0].insert(1);
> 
> The inner array "array2d[0]" was not initialized and hence the reference
> pointer is null. Since Array.opIndex returns by value, the 'insert'
> method is called on a temporary object and does not affect the inner
> array (still being empty) which is stored in the outer array.
> 
> What do you think about this?
> 
> Must the user ensure that the Array container is always initialized
> explicitly? If yes, how shall this happen since the only constructor
> takes a (non-empty) tuple of new elements. Or shall opIndex return by
> reference?

I think opIndex should return by reference. opIndexAssign is of no help 
when the user wants to use a function that takes a reference (here, 
Array.insert). It is normal for Array to use default construction when 
someone increases the array's length.

Besides that point, I don't see why a default-constructed Array has an 
uninitialised Payload. This makes uninitialised Arrays behave 
unexpectedly, because making a copy and using the copy will not affect 
the original, which is not the intended reference-value behavior.

Correcting the default initialization of Array would fix your bug, but 
would not fix the wrong behavior of Array when the value returned by 
opIndex is used with a non-const method, as happens with other objects 
(dynamic arrays?). So both have to be changed, in my opinion.

-- 
Christophe



Re: Definition of "OutputRange" insuficient

2012-07-17 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:172586), a écrit :
> I was trying to fix a few bugs in algorithm, as well as be more 
> correct in template type specifiers, and I have to say: There is 
> a serious restraint in the definition of an outputRange.
> 
> The current definition of "isOutputRange" is simply that "an 
> output range is defined functionally as a range that supports the 
> operation put".
> 
> The problem with this is this definition is not very useful (at 
> all). Not just because it lacks front/popFront (this is actually 
> OK). It is because it lacks an "empty". This makes it virtually 
> un-useable unless you are blind writing to an infinite output 
> range.
> 
> The proof that it is useless is that NOT A SINGLE ALGORITHM is 
> compatible with output ranges. All algorithms that really only 
> require an "output" systematically actually require 
> "isForwardRange", or, worse yet "isInputRange" (!). This is only 
> because they all make use of range.empty.

OutputRange is designed for infinite output ranges, like output files 
and appenders.

Copy is the only algorithm that uses OutputRange. But still, this is 
enough. It enables a lot of things, since copy can branch to any 
lazy input range performing all the generic operations you want*.

However, output ranges with a limited capacity are not taken into 
account. They are partly covered by the input-range family: most ranges 
that are output ranges with a capacity are also input ranges. Arguably, 
we may want to cover output ranges with capacity that are not input 
ranges. This might make fill cleaner: uninitializedFill would 
be fill with an output range that does not define a front method, which 
would be much cleaner than using emplace arbitrarily on a range that is 
not designed for that. This also opens the door to an algorithm that 
copies an input range into an output range, but stops when the output 
range is full or the input range is empty, and returns both the filled 
output range and the consumed input range (the input range could be 
taken by ref). Copy could be improved with an additional "StopPolicy" 
template argument.

To do this, two methods could be added to output ranges: one telling 
whether the range is full, and one telling the remaining capacity of the 
range (capacity already has a meaning, so some other word should be 
used). These methods are analogous to empty and length.
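A sketch of such a bounded output range (the names full and remaining 
are invented here for illustration):

```d
// An output range writing into a fixed buffer, exposing "full" and
// "remaining" the way "empty" and "length" work for input ranges.
struct BoundedOutput(T)
{
    T[] buffer;
    size_t written;

    void put(T value)
    {
        assert(!full, "output range is full");
        buffer[written++] = value;
    }

    @property bool full() const { return written == buffer.length; }
    @property size_t remaining() const { return buffer.length - written; }
}

void main()
{
    auto sink = BoundedOutput!int(new int[3]);
    foreach (x; [10, 20, 30, 40])
    {
        if (sink.full) break; // a StopPolicy-style check
        sink.put(x);
    }
    assert(sink.buffer == [10, 20, 30] && sink.remaining == 0);
}
```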

-- 
Christophe

Off-topic footnote:
* After further thought, one thing you can't do directly in Phobos is to 
modify an output range to return a new output range. You are obliged 
to apply the operation on the input range.
For example:
input.map!toString().copy(output);
can't be written:
input.copy(output.preMap!toString());

Creating a modified output range like my (badly named) output.preMap can 
be useful, but may be too marginal to be put in Phobos. With a revamped 
stdio using more ranges, this may become less marginal. map, joiner, 
filter, chain, take and friends, zip and chunks may have a meaning for 
output ranges with capacity...
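A sketch of such an adaptor (preMap is the author's placeholder name, 
not a Phobos symbol; this relies on Appender's reference semantics for 
the usage check):

```d
import std.range.primitives : put;

// Wraps an output range, applying fun to each element before
// forwarding it to the wrapped range.
struct PreMap(alias fun, Output)
{
    Output output;
    void put(T)(T value)
    {
        .put(output, fun(value)); // module-scope put, not this member
    }
}

auto preMap(alias fun, Output)(Output output)
{
    return PreMap!(fun, Output)(output);
}

void main()
{
    import std.algorithm : copy;
    import std.array : appender;
    import std.conv : to;

    auto sink = appender!(string[]);
    [1, 2, 3].copy(preMap!(x => x.to!string)(sink));
    assert(sink.data == ["1", "2", "3"]);
}
```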


Re: Why is std.algorithm so complicated to use?

2012-07-17 Thread Christophe Travert
Jonathan M Davis , dans le message (digitalmars.D:172564), a écrit :
> They're likely to contain a lot of stuff negation of other template 
> constraints. For instance,
> 
> auto func(R)(R range)
> if(isForwardRange!R && !isBidirectionalRange!R)
> {}
> 
> auto func(R)(R range)
> if(isBidirectionalRange!R)
> {}
> 
> If you have a function with very many overloads, it can be very easy to end 
> up 
> with a bunch of different template constraints which are all slightly 
> different. 
> std.algorithm.find is a prime example of this.
> 
> But as much as it may be a bit overwhelming to print every failed constraint, 
> without doing that, you _have_ to go to the source code to see what they 
> were, 
> which isn't all that great (especially if it's not in a library that you 
> wrote 
> and don't normally look at the source of - e.g. Phobos).
> 
> On the other hand, a failed instantiation of std.conv.to would print out 
> reams 
> of failed constraints...

The compiler could stop displaying after about 10 failed constraints and 
state that there are more. It would be best if it could figure out which 
are the 10 most interesting constraints, but that may not be easy!

Then it's up to the programmer to use template constraints, static if, 
and possibly pragma, to let the compiler display pleasant error 
messages. The language could help by allowing hierarchical 
template constraints, but static if is a fine solution most of the time.


Re: Why doesn't to!MyEnumType(42) work

2012-07-17 Thread Christophe Travert
"Era Scarecrow" , dans le message (digitalmars.D:172568), a écrit :
> On Monday, 16 July 2012 at 21:59:17 UTC, Tommi wrote:
>> On Monday, 16 July 2012 at 20:22:12 UTC, Era Scarecrow wrote:
>>> MyEnumType y = cast(MyEnumType) 42; //Error: wtf is 42 anyways?
>>
>> Like the previous fellow said, it's not an error.
> 
>   I had the impression it was illegal by the compiler; Logically 
> forcing an enum to an invalid state is probably undefined and 
> unstable (but casting with compile-time constant that's provably 
> correct would be different). Also I've never tried force casting 
> the enums, so.. Hmmm...
> 
>   I suppose 'use at your own risk' applies here.

For what it's worth, I think a cast should be at your own risk, while 
to!MyEnumType should assert that the value is a valid member of the enum.
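A quick illustration of the two behaviors (std.conv.to throws a 
ConvException for a value outside the enum):

```d
import std.conv : to, ConvException;
import std.exception : assertThrown;

enum MyEnumType { a = 1, b = 2 }

void main()
{
    auto x = cast(MyEnumType) 42; // compiles: at your own risk
    assertThrown!ConvException(42.to!MyEnumType); // checked conversion
    assert(1.to!MyEnumType == MyEnumType.a);      // valid value is fine
}
```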


Re: Array index slicing

2012-07-16 Thread Christophe Travert
"bearophile" , dans le message (digitalmars.D:172300), a écrit :
> If enumerate() is well implemented it's one way to avoid that 
> problem (other solutions are possible), now 'c' gets sliced, so 
> it doesn't start from zero:
> 
> import std.stdio;
> void main() {
>  auto M = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]];
>  foreach (r, row; M)
>  foreach (c, ref item; enumerate(row)[1 .. $])
>  item = c * 10 + r;
>  writefln("%(%s\n%)", M);
> }

enumerate could be useful with retro too. You may want to change the 
order of the enumeration, but not the order of the indices.




Re: Making uniform function call syntax more complete a feature

2012-07-16 Thread Christophe Travert
"Simen Kjaeraas" , dans le message (digitalmars.D:172349), a écrit :
> On Thu, 12 Jul 2012 16:31:34 +0200, Christophe Travert  
>  wrote:
> 
>> By the way, would it be possible to implement an opCmp that returns a
>> double, to allow it to return a NaN ? That may allow to create values
>> that are neither superior, nor inferior to other value, like NaNs. It's
>> not possible to implement opCmp for a floating point comparison if opCmp
>> is bound to return an int.
> 
> Why don't you just test it? Not like it'd be many lines of code.
> 
> Anyways, yes this works.

Thanks. I don't always have a D compiler at hand when I read this 
newsgroup. Maybe I should write myself a note to try this kind of 
test back home rather than posting the idea directly.


Re: nested class inheritance

2012-07-13 Thread Christophe Travert
Andrei Alexandrescu , dans le message (digitalmars.D:172280), a écrit :
>> For Fruit.Seed it's Fruit, for AppleSeed it's Apple. This makes sense
>> because the Apple, which AppleSeed sees is the same object, which
>> Fruit.Seed sees as it's base type Fruit.
> 
> That would mean AppleSeed has two outer fields: a Fruit and an Apple.

Only one: Apple. And when AppleSeed.super sees this Apple, it sees a 
Fruit.

AppleSeed a;
assert(is(typeof(a.outer) == Apple));
assert(is(typeof(a.super) == Seed));
assert(is(typeof(a.super.outer) == Fruit));
//but:
assert(a.outer is a.super.outer);

If you can't figure out how a.outer and a.super.outer can have two 
different types yet be the same object, think about covariant return 
types.




Re: nested class inheritance

2012-07-13 Thread Christophe Travert
"Era Scarecrow" , dans le message (digitalmars.D:172272), a écrit :
>   Then perhaps have the inherited class within fruit?
> 
> class Fruit {
>class Seed {}
>class Appleseed : Seed {}
> }

But then AppleSeed doesn't know about Apple.



Re: nested class inheritance

2012-07-13 Thread Christophe Travert
"Era Scarecrow" , dans le message (digitalmars.D:172269), a écrit :
> class Fruit {
>   int x;
>   class Seed {
>void oneMoreToX() {
> x++; //knows about Fruit.x, even if not instantiated
>}
>   }
> 
>   static class Seed2 {
>void oneMoreToX() {
> //  x++; //fails to compile, no knowledge of Fruit
>}
>   }
> }
> 
> class Apple: Fruit {
>   class AppleSeed: Fruit.Seed { } //fails (no outer object 
> (Fruit.x) and makes no sense)
>   class AppleSeed2: Fruit.Seed2 { } //works fine
> }

AppleSeed does have an outer Fruit, and this Fruit happens to be an 
Apple. I don't see what the issue is, or what prevents the language from 
supporting such an AppleSeed. I'm not saying that nothing prevents 
the language from supporting such a pattern; I am not used to inner 
classes. Please enlighten me.


Re: just an idea (!! operator)

2012-07-13 Thread Christophe Travert
"Roman D. Boiko" , dans le message (digitalmars.D:172259), a écrit :
> On Friday, 13 July 2012 at 13:46:10 UTC, David Nadlinger wrote:
>> I guess that this operator is only really worth it in languages
>> where every type is nullable, though.
>>
>> David
> 
> It might mean identity (return the argument unchanged) for value 
> types.

It might mean: give me the default I provide as an extra argument:

Example:
car?.driver?.name ?: "anonymous";

rewrites to:

car ? (car.driver
        ? (car.driver.name ? car.driver.name : "anonymous")
        : "anonymous")
    : "anonymous"



Re: just an idea (!! operator)

2012-07-13 Thread Christophe Travert
"Jonas Drewsen" , dans le message (digitalmars.D:172242), a écrit :
> Can you identify any ambiguity with an ?. operator.

? could be the beginning of a ternary operator, and . the module-scope 
indicator, or the beginning of a (badly) written float literal.

Both cases can be disambiguated by the presence of the ':' of a 
ternary operator.

I don't think '?' currently has any other meaning in D.


Re: Move semantics for D

2012-07-13 Thread Christophe Travert
Benjamin Thaut , dans le message (digitalmars.D:172207), a écrit :
> Move semantics in C++0x are quite nice for optimization purposes. 
> Thinking about it, it should be fairly easy to implement move semantics 
> in D as structs don't have identity. Therefor a move constructor would 
> not be required. You can already move value types for example within an 
> array just by plain moving the data of the value around. With a little 
> new keyword 'mov' or 'move' it would also be possible to move value 
> types into and out of functions, something like this:
> 
> mov Range findNext(mov Range r)
> {
>//do stuff here
> }
> 
> With something like this it would not be neccessary to copy the range 
> twice during the call of this function, the compiler could just plain 
> copy the data and reinitialize the origin in case of the argument.
> In case of the return value to only copying would be neccessary as the 
> data goes out of scope anyway.

If Range is an R-value, it will be moved, not copied.
If it's an L-value, your operation is dangerous, and does not bring you 
much more than using ref (it may be faster to copy the range than to 
take the reference, but that's an optimiser issue).

auto ref seems to be the solution.

> I for example have a range that iterates over a octree and thus needs to 
> internally track which nodes it already visited and which ones are still 
> left. This is done with a stack container. That needs to be copied 
> everytime the range is copied, which causes quite some overhead.

I would share the tracking data between several instances of the range, 
making bitwise copy suitable. The tracking data would be duplicated only 
on a call to save or opSlice(). You'd hit the issue of foreach not 
calling save when it should, but opSlice would solve this, and you could 
still overload opApply if you want to be sure.

-- 
Christophe


Re: Counterproposal for extending static members and constructors

2012-07-13 Thread Christophe Travert
>> In any case, std.container already declares a make which encapsulates
>> constructing an object without caring whether it's a struct or class (since
>> some containers are one and some another), which I intend to move to
>> std.typecons and make work with all types. That seems a lot more useful to me
>> than trying to make a function act like a constructor when it's not - though 
>> I
>> guess that as long as you imported std.typecons, I would just be providing 
>> the
>> free function that your little constructor faking scheme needs.

The same can be said of UFCS: you're just faking member functions with a 
free function. I don't understand why constructors are so different.

A library might write generic code that uses a constructor to perform 
something. I can't use that generic code if I don't own the struct or 
class and therefore cannot write a constructor. You can argue that the 
library should have used make instead of calling the constructor or 
using some cast. But they can't think of every usage, and may not know 
about make. Plus it may make the code uglier.

It's like standard UFCS: library writers should never call a member 
function, and always call a free function that is templated, specialised 
to use the member function if available, to provide workarounds, or 
that can be further specialised if someone wants to extend the class. 
yk...


Re: just an idea (!! operator)

2012-07-13 Thread Christophe Travert
"David Piepgrass" , dans le message (digitalmars.D:172164), a écrit :
>>> Yeah, I've been planning to try and get this into D one day.  
>>> Probably
>>> something like:
>>> (a ?: b) ->  (auto __tmp = a, __tmp ? __tmp : b)
>>
>> gcc used to have that extension and they dropped it...
> 
> But GCC can't control the C++ language spec. Naturally there is a 
> reluctance to add nonstandard features. It's a successful feature 
> in C#, however, and a lot of people (including me) have also been 
> pestering the C# crew for "null dot" (for safely calling methods 
> on object references that might be null.)
> 
> I don't see why you would use ?: instead of ??, though.

Because ?: is the ternary conditional operator with the second operand 
missing.

a ?: b // or maybe a ? : b
is just shorthand for
a ? a : b
(except that a is evaluated only once).


Re: Counterproposal for extending static members and constructors

2012-07-12 Thread Christophe Travert
"Jonathan M Davis" , dans le message (digitalmars.D:172156), a écrit :
> On Thursday, July 12, 2012 18:25:03 David Piepgrass wrote:
>> I'm putting this in a separate thread from
>> http://forum.dlang.org/thread/uufohvapbyceuaylo...@forum.dlang.org
>> because my counterproposal brings up a new issue, which could be
>> summarized as "Constructors Considered Harmful":
>> 
>> http://d.puremagic.com/issues/show_bug.cgi?id=8381
> 
> I think that adding constructors to a type from an external source is 
> downright evil. It breaks encapsulation. I should be able to constrain 
> exactly 
> how you construct my type. If you want to create a free function (e.g. a 
> factory function) which uses my constructors, fine. But I'm completely 
> against 
> adding constructors externally.

The proposal is not to add constructors. It is to create a free 
function (make!Type(args)) that can be called like a constructor, by 
writing Type(args). That does not break encapsulation.


Re: Making uniform function call syntax more complete a feature

2012-07-12 Thread Christophe Travert
"Thiez" , dans le message (digitalmars.D:172060), a écrit :
>>> Have you considered adding operator overloading using UFCS 
>>> while you're at it?
>>
>> I assumed it's already possible to add operators 
>> non-intrusively, because operators are just syntactic sugar for 
>> method calls:
>>
>> ++var;  // actual code
>> var.opUnary!"++"(); // lowered once
>> opUnary!"++"(var);  // lowered twice (if necessary)
>>
>> If you're talking about overloading existing operators (which 
>> have been implemented as member functions) non-intrusively for 
>> other types, then I don't know, doesn't it work?
>
> I actually tried those yesterday (with opEquals and opCmp on 
> structs) and couldn't get it to work. Code still used what 
> appeared to be an automatically generated opEquals (that appears 
> to perform a bitwise comparison) instead of my UFCS opEquals.

This behavior for opEquals is debatable, but makes sense. If the 
designer of a struct did not implement opEquals, it may be that he 
intended opEquals to be the default one. If you overload opEquals for 
such a struct, you may be hijacking its intended behavior: you're not 
just adding a functionality, you're overriding an existing one.

Did you try operators that are not automatically generated?
 
> It's already quite obvious that the compiler does not obey its 
> own rewrite rules (see 
> http://dlang.org/operatoroverloading.html#compare) Consider opCmp:
>   a < b
> is rewritten to
>   a.opCmp(b) < 0
> or
>   b.opCmp(a) > 0
> Let's assume the first rule is always chosen. According to the 
> very rewrite rule we just applied, this must be rewritten to
>   a.opCmp(b).opCmp(0) < 0
> 
> It seems quite obvious the compiler does not rewrite compares on 
> integers or all hell would break loose... The language reference 
> should be more specific about these things.

The rewrite rule obviously applies only if the comparison operator is 
not already defined for those types by the language. That could be made 
explicit on the web site, but it's consistent.

By the way, would it be possible to implement an opCmp that returns a 
double, to allow it to return a NaN? That would allow creating values 
that are neither superior nor inferior to other values, like NaNs. It's 
not possible to implement opCmp for a floating-point comparison if opCmp 
is bound to return an int.

Another reason to ban Object imposing a specific signature for opCmp on 
all classes...
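A sketch of the idea (relying on the rewrite a < b => a.opCmp(b) < 0, 
which accepts any numeric return type, as confirmed earlier in this 
thread):

```d
struct Partial
{
    double value;

    // Returning double lets opCmp yield NaN for incomparable values:
    // every ordering comparison involving NaN is then false.
    double opCmp(Partial other) const
    {
        return value - other.value;
    }
}

void main()
{
    auto a = Partial(1.0), b = Partial(2.0), n = Partial(double.nan);
    assert(a < b);
    assert(!(a < n) && !(n < a)); // n is neither inferior nor superior
}
```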


Re: just an idea (!! operator)

2012-07-12 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:172056), a écrit :
> On 2012-07-12 13:35, Jonas Drewsen wrote:
> 
>> Or the operator?? could be borrowed from c#
>>
>> auto a = foo ?? new Foo();
>>
>> is short for:
>>
>> auto a = foo is null ? new Foo() : foo;
>>
>> /Jonas
>>
> 
> I really like that operator. The existential operator, also known as the 
> Elvis operator :) . It's available in many languages with slightly 
> different semantics.
> 
> -- 
> /Jacob Carlborg

Sweet.

| Elvis Operator (?: )
|
| The "Elvis operator" is a shortening of Java's ternary operator. One 
| instance of where this is handy is for returning a 'sensible default' 
| value if an expression resolves to false or null. A simple example 
| might look like this:
|
| def displayName = user.name ? user.name : "Anonymous" //traditional 
| ternary operator usage
|
| def displayName = user.name ?: "Anonymous"  // more compact Elvis 
| operator - does same as above


(taken from 
http://groovy.codehaus.org/Operators#Operators-ElvisOperator)



Re: just an idea (!! operator)

2012-07-12 Thread Christophe Travert
Christophe Travert, dans le message (digitalmars.D:172047), a écrit :
> "Jonas Drewsen" , dans le message (digitalmars.D:172039), a écrit :
>> On Wednesday, 11 July 2012 at 11:18:21 UTC, akaz wrote:
>>> if needed, the operator !! (double exclamation mark) could be 
>>> defined.
>>>
>>> ...
>> 
>> Or the operator?? could be borrowed from c#
>> 
>> auto a = foo ?? new Foo();
>> 
>> is short for:
>> 
>> auto a = foo is null ? new Foo() : foo;
> 
> or maybe:
> auto a = ! ! foo ? foo : new Foo();

I forgot to mention that foo would be evaluated only once (and the 
second operand would be evaluated lazily). This is the main point of 
this syntax, and it is not easily emulated in a library (as long as lazy 
is not fixed).
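A rough library approximation (orElse is a hypothetical helper, with 
the lazy-parameter overhead alluded to above):

```d
class Foo {}

// value is evaluated once; fallback only if value tests false.
T orElse(T)(T value, lazy T fallback)
{
    return value ? value : fallback;
}

void main()
{
    Foo foo = null;
    auto a = orElse(foo, new Foo());
    assert(a !is null);           // fallback was constructed

    auto existing = new Foo();
    auto b = orElse(existing, new Foo());
    assert(b is existing);        // fallback never evaluated
}
```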



Re: just an idea (!! operator)

2012-07-12 Thread Christophe Travert
"Jonas Drewsen" , dans le message (digitalmars.D:172039), a écrit :
> On Wednesday, 11 July 2012 at 11:18:21 UTC, akaz wrote:
>> if needed, the operator !! (double exclamation mark) could be 
>> defined.
>>
>> ...
> 
> Or the operator?? could be borrowed from c#
> 
> auto a = foo ?? new Foo();
> 
> is short for:
> 
> auto a = foo is null ? new Foo() : foo;

or maybe:
auto a = !!foo ? foo : new Foo();

|| could be redefined to have this behavior, but it would break code.



Re: All right, all right! Interim decision regarding qualified Object methods

2012-07-12 Thread Christophe Travert
"Mehrdad" , dans le message (digitalmars.D:172012), a écrit :
> On Thursday, 12 July 2012 at 04:15:48 UTC, Andrei Alexandrescu 
> wrote:
>> Required reading prior to this: http://goo.gl/eXpuX
> 
> Referenced post (for context):
>>> The problem is not only in the constness of the argument, but 
>>> also in
> its purity, safety, and throwability (although the last two can be
> worked arround easily).
> 
> I think we're looking at the wrong problem here.
> 
> If we're trying to escape problems with 'const' Objects by 
> removing the members form Object entirely, that should be raising 
> a red flag with const, not with Object.

const has no problem. It is bitwise const, and it works like that.
Logical const is not implemented in D, but that is a separate issue.

The problem is forcing people to use const, because bitwise const may 
not be suited to their problems. If opEquals and friends are const, 
then D forces people to use bitwise const; that is the problem, and it 
is largely widened by the fact that bitwise transitive const is 
particularly viral. But if we do not impose implementing any const 
methods, the problem disappears.


Re: All right, all right! Interim decision regarding qualified Object methods

2012-07-12 Thread Christophe Travert
Timon Gehr , dans le message (digitalmars.D:172014), a écrit :
> Thank you for taking the time.
> 
> Removing the default methods completely is actually a lot better than 
> making inheriting from Object optional or tweaking const beyond 
> recognition and/or usefulness.
> I was afraid to suggest this because it breaks all code that assumes
> that the methods are present in object (most code?), but I think it is
> a great way to go forward.

It's not worse than breaking all code that overrides opEquals by 
changing its signature.

> Regarding toString, getting rid of it would imply that the default way
> of creating a textual representation of an object would no longer be
> part of Object, paving the way for the proposal that uses buffers and
> scope delegates - this will be purely a library thing.

I agree. toString should be a purely library solution. The standard 
library could easily use templates trying different ways to print 
the object, depending on which methods are implemented for that 
object: direct conversion to string/wstring/dstring, a standard method 
using delegates, etc.

> Regarding backwards-compatibility, an issue that is trivial to fix is
> the invalidation of 'override' declarations in the child classes.
> They can be allowed with the -d switch for those methods. And if they
> use 'super', the compiler could magically provide the current default
> implementations.

Magic is not good for language consistency. I would rather apply a 
different fix:

Introduce a class in the standard library that is like the current 
Object. To correct broken code, make all classes inheriting from Object 
inherit from this new class, and rewrite opEquals/opCmp to take this new 
class as an argument instead of Object. This can be done automatically.

People may not want to use that fix, but in that case we don't have to 
implement a magical behavior with super. What can be used is 
deprecation: if someone uses super.opEquals (meaning Object.opEquals) 
and the others, he should get a warning saying it is deprecated, with 
explanations on how to solve the issue.

A possible course of action is this:
 - revert the changes in Object (with renewed apologies to the people 
who worked on them)
 - introduce a class implementing the basic hash functions with the 
current signatures (a class with the new signatures could be provided 
too, making use of the recent work on Object, which would not be 
completely wasted after all)
 - introduce a deprecation warning on uses of Object.opEquals and 
friends, informing the programmer of the possibility to derive from 
the new class to solve the issue
 - in the meantime, make the necessary changes to allow classes not to 
have those methods (like structs)
 - after the deprecation period, remove Object.opEquals and friends.


Re: Congratulations to the D Team!

2012-07-12 Thread Christophe Travert
"Jonathan M Davis" , dans le message (digitalmars.D:172005), a écrit :
> On Wednesday, July 11, 2012 13:46:17 Andrei Alexandrescu wrote:
>> I don't think they should be pure. Do you have reasons to think otherwise?
> 
> As I understand it, Walter's current plan is to require that opEquals, opCmp, 
> toString, and toHash be @safe const pure nothrow - for both classes and 
> structs.

And is the plan to add each attribute one by one, breaking code in many 
places each time?




Re: Inherited const when you need to mutate

2012-07-12 Thread Christophe Travert
"David Piepgrass" , dans le message (digitalmars.D:172009), a écrit :
>> Now, I recognize and respect the benefits of transitive 
>> immutability:
>> 1. safe multithreading
>> 2. allowing compiler optimizations that are not possible in C++
>> 3. ability to store compile-time immutable literals in ROM
>>
>> (3) does indeed require mutable state to be stored separately, 
>> but it doesn't seem like a common use case (and there is a 
>> workaround), and I don't see how (1) and (2) are necessarily 
>> broken.
> 
> I must be tired.
> 
> Regarding (1), right after posting this I remembered the 
> difference between caching to a "global" hashtable and storing 
> the cached value directly within the object: the hashtable is 
> thread-local, but the object itself may be shared between 
> threads. So that's a pretty fundamental difference.
> 
> Even so, if Cached!(...) puts mutable state directly in the 
> object, fast synchronization mechanisms could be used to ensure 
> that two threads don't step on each other, if they both compute 
> the cached value at the same time. If the cached value is 
> something simple like a hashcode, an atomic write should suffice. 
> And both threads should compute the same result so it doesn't 
> matter who wins.

Yes. It is possible to write a library solution that computes a cached 
value by casting away const in a safe manner, even in a multithreaded 
environment. The limitation is that, if I'm not mistaken, such a library 
solution cannot ensure the object is not actually immutable (and 
potentially placed in ROM) when it is seen as const, which makes the 
cast undefined behavior.


Re: Congratulations to the D Team!

2012-07-12 Thread Christophe Travert
"David Piepgrass" , dans le message (digitalmars.D:172007), a écrit :
> @mutating class B : A
> {
> private int _x2;
> public @property override x() { return _x2++; }
> }

A fun() pure;

You can't cast the result of fun to immutable, because it may be a B 
instance.


Re: Congratulations to the D Team!

2012-07-11 Thread Christophe Travert
Andrei Alexandrescu , dans le message (digitalmars.D:171945), a écrit :
> On 7/11/12 1:40 PM, Jakob Ovrum wrote:
>> Some classes don't lend themselves to immutability. Let's take something
>> obvious like a class object representing a dataset in a database. How is
>> an immutable instance of such a class useful?
> 
> This is a good point. It seems we're subjecting all classes to certain 
> limitations for the benefit of a subset of those classes.

Does Object really need to implement opEquals/opHash/... ? This is what 
limits all 
classes that do not want to have the same signature for opEquals.

The problem is not only in the constness of the argument, but also in 
its purity, safety, and throwability (although the last two can be 
worked around easily).

You can compare structs, each having a different opEquals/opHash/..., 
and you can put them in AAs too. You could do the same for classes: use 
a templated system, and give each class the opportunity to provide its 
own signatures. There may be code bloat; I mean, classes would suffer 
from the same code bloat as structs, but solutions can be found to 
reduce it.

Did I miss an issue that makes it mandatory for Object to implement 
opEquals and friends ?

-- 
Christophe


Re: opApply not called for foeach(container)

2012-07-11 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:171902), a écrit :
> I just re-read the docs you linked to, and if that was my only 
> source, I'd reach the same conclusion as you.

I think the reference spec for D should be the community driven and 
widely available website, not a commercial book. But that's not the 
issue here.

> however, my "The D 
> Programming Language", states:
> *12: Operator Overloading
> **9: Overloading foreach
> ***1: foreach with Iteration Primitives
> "Last but not least, if the iterated object offers the slice 
> operator with no arguments lst[], __c is initialized with lst[] 
> instead of lst. This is in order to allow ?extracting? the 
> iteration means out of a container without requiring the 
> container to define the three iteration primitives."
> 
> Another thing I find strange about the doc is: "If the foreach 
> range properties do not exist, the opApply method will be used 
> instead." This sounds backwards to me.

Skipping the last paragraph, a reasonable interpretation would be that 
foreach tries to use, in order of preference:
 - foreach over an array
 - opApply
 - the three range primitives (preferably four if we include save)
 - opSlice (iteration on the result of opSlice is determined by the same 
system).

 opApply should come first, since if someone defines opApply, he or she 
obviously wants to override the range-primitive iteration.
 opApply and range primitives may be reached via an alias this.
 opSlice is called only if no other way to iterate the struct/class is found. 
 I would not complain if the fourth rule didn't exist, because it's not 
described on dlang.org, but it is a reasonable feature that has been 
taken from TDPL (but then it should be added to dlang.org).


This way, if arr is a container that defines an opSlice, and that does 
not define an opApply, and does not define range primitives:

foreach (a; arr) ...
and
foreach (a; arr[]) ...
should be strictly equivalent, since the first is translated into the 
second. Both work only if arr[] is iterable.

I think you hit a bug.

-- 
Christophe


Re: opApply not called for foeach(container)

2012-07-11 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:171868), a écrit :
> I'm wondering if this is the correct behavior? In particular, 
> since foreach guarantees a call to opSlice(), so writing "arr[]" 
> *should* be redundant, yet the final behavior is different.
> 
> That said, the "issue" *could* be fixed if the base class defines 
> opApply as: "return opSlice().opApply(dg)" (or more complex). 
> However:
> a) The implementer of class has no obligation to do this, since 
> he has provided a perfectly valid range.
> b) This would force implementers into more generic useless 
> boilerplate code.
> 
> What are your thoughts? Which is the "correct" solution? Is it a 
> bug with foreach, or should the base struct/class provide an 
> opApply?

I think foreach should never call opSlice. That's not in the online 
documentation (http://dlang.org/statement.html#ForeachStatement), unless 
I missed something. If you want to use foreach on a class with an 
opSlice, then yes, you should define opApply. Otherwise, the user have 
to call opSlice himself, which seems reasonable. That's how I understand 
the doc.

-- 
Christophe


Re: Inherited const when you need to mutate

2012-07-11 Thread Christophe Travert
Andrei Alexandrescu , dans le message (digitalmars.D:171828), a écrit :
> On 7/10/12 5:19 PM, H. S. Teoh wrote:
> 
> There is value in immutable objects that has been well discussed, which 
> is incompatible with logical constness. We can change the language such 
> as: a given type X has the option to declare "I want logical const and 
> for that I'm giving up the possibility of creating immutable(X)". That 
> keeps things proper for everybody - immutable still has the strong 
> properties we know and love, and such types can actually use logical const.
> 
> A number of refinements are as always possible.

I think this is a good idea, but for classes, inheritance is an issue. 

Example:

class A
{
  int a;
  int compute() const pure { return a; }
  final int fun() const pure
  {
    compute(); // optimised out by the compiler
    return compute();
  }
}

class B : A
{
  @mutable int b;
  override int compute() const pure
  {
 if(!b) b = longComputation(a);
 // a += 1; // error, a is bitwise-const
 return b; // mutating and returning a mutable part at the 
   // programmer's risk
  }
}

A.compute is bitwise const. However, B.compute is only logically const. 
A.fun is bitwise const and can be optimised, but that is no longer true 
with a B instance. Yet the compiler must remain allowed to make those 
optimizations, otherwise all the power of const for any non-final object 
is lost, because someone may derive a logical-const class. This means 
the programmer is *responsible* for preserving logical-const behavior. 
This is a serious issue.

Given the system-programming aspect of D, I would say the programmer 
should be allowed to do such a thing, taking the risk of undefined 
behavior, but with great caution. At least, it will be less dangerous 
than casting away const. Just providing a way to make it impossible to 
create an immutable instance of some classes would make it less 
dangerous to cast away constness.

-- 
Christophe


Re: Let's stop parser Hell

2012-07-11 Thread Christophe Travert
Timon Gehr , dans le message (digitalmars.D:171814), a écrit :
> On 07/11/2012 01:16 AM, deadalnix wrote:
>> On 09/07/2012 10:14, Christophe Travert wrote:
>>> deadalnix , dans le message (digitalmars.D:171330), a écrit :
>>>> D isn't 100% CFG. But it is close.
>>>
>>> What makes D fail to be a CFG?
>>
>> type[something] <= something can be a type or an expression.
>> typeid(somethning) <= same here
>> identifier!(something) <= again
> 
> 'something' is context-free:
> 
> something ::= type | expression.

Do you have to know whether something is a type or an expression for 
simple parsing? The language had better not require this; otherwise, 
simple parsing is not possible without looking at all forward references 
and imported files.


Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:171769), a écrit :
> On 2012-07-10 20:04, Andrei Alexandrescu wrote:
> 
>> Then store an array. "No one's put a gun to yer head."
>> http://youtu.be/CB1Pij54gTw?t=2m29s
> 
> That's what I'm doing.
> 

And that's what you should do. Algorithms are not made to be stored in 
struct or class instances. You could use InputRange!(E) and friends 
to do that, but that's often not optimal. Algorithms are there to do 
their job and output a non-lazy result in the end.


Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:171725), a écrit :
>> To make the best implementation would require to know how the String
>> context works.
>>
> String is a wrapper around std.array.Appender.

Then, if the purpose is to make the code efficient, I would use the loop 
and append everything to the result without creating the params array, 
and even without creating the string p. Appender is made to append 
everything directly to it efficiently.
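As an illustrative sketch of that advice (the parameter strings are made 
up, and this is not the code from the linked project), appending each 
piece directly to an Appender avoids the intermediate array entirely:

```d
import std.array : appender;

void main()
{
    auto result = appender!string();
    // append each parameter and its separator straight into the
    // appender; no params array, no intermediate string p
    foreach (i, param; ["int a", "float b"])
    {
        if (i > 0) result.put(", ");
        result.put(param);
    }
    assert(result.data == "int a, float b");
}
```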



Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
"Daniel Murphy" , dans le message (digitalmars.D:171741), a écrit :
> "Christophe Travert"  wrote in message 
> news:jthmu8$2s5b$1...@digitalmars.com...
>> "Daniel Murphy" , dans le message (digitalmars.D:171720), a écrit :
>>> Could it be extended to accept multiple values? (sort of like chain)
>>> eg.
>>> foreach(x; makeRange(23, 7, 1990)) // NO allocations!
>>> {
>>> 
>>> }
>>> I would use this in a lot of places I currently jump through hoops to get 
>>> a
>>> static array without allocating.
>>
>> That's a good idea. IMHO, the real solution would be to make an easy way
>> to create static arrays, and slice them when you want a range.
> 
> It's not quite the same thing, static arrays are not ranges and once you 
> slice them you no longer have a value type, and might be referring to stack 
> allocated data.  With... this thing, the length/progress is not encoded in 
> the type (making it rangeable) but the data _is_ contained in the type, 
> making it safe to pass around.  The best of both worlds, in some situations.

OK, I see. This goes against the principle that ranges are small and 
easy to copy around, but it can be useful when you know what you are 
doing, or when the number of items is small.

I don't like makeRange much. Would you have a better name? smallRange? 
rangeOf?
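As a sketch of what such a helper could look like (my own illustration; 
neither the struct nor the function exists in Phobos), storing the 
values in a fixed-size array inside the range itself avoids any heap 
allocation, at the cost of copying the data with the range:

```d
// A range that owns its elements in a stack-allocated static array.
struct ValueRange(E, size_t n)
{
    E[n] data;
    size_t i;
    @property bool empty() const { return i == n; }
    @property ref E front() { return data[i]; }
    void popFront() { ++i; }
}

auto makeRange(E, Args...)(Args args)
    if (Args.length > 0)
{
    ValueRange!(E, Args.length) r;
    // unrolled at compile time over the argument tuple
    foreach (j, a; args) r.data[j] = a;
    return r;
}

void main()
{
    int sum;
    foreach (x; makeRange!int(23, 7, 1990)) // no allocations
        sum += x;
    assert(sum == 2020);
}
```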




Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:171739), a écrit :
> On 2012-07-10 18:42, Daniel Murphy wrote:
>> "Jacob Carlborg"  wrote in message
>> news:jthlpf$2pnb$1...@digitalmars.com...
>>>
>>> Can't "map" and "filter" return a random-access range if that's what they
>>> receive?
>>>
>> map can, and does.
> 
> It doesn't seem to:
> 
> auto a = [3, 4].map!(x => x);
> auto b = a.sort;
> 
> Result in one of the original errors I started this thread with.

Here, map is random-access. But random access is not enough to call 
sort: you need assignable (well, swappable) elements in the range if you 
want to be able to sort it. Values accessed via a map are not always 
assignable, since they are the result of a function.

It seems the map resulting from (x => x) is not assignable. This is 
debatable, but (x => x) is just a stupid test function anyway. 
Otherwise, you could try the following:

auto a = [3, 4].map!(ref int (ref int x) { return x; })();
a.sort;



Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:171725), a écrit :
> On 2012-07-10 17:11, Christophe Travert wrote:
> 
>> What is wrong with foo.chain(["bar"])?
> 
> I think it conceptually wrong for what I want to do. I don't know if I 
> misunderstood ranges completely but I'm seeing them as an abstraction 
> over a collection. With most mutable collection you can add/append an 
> element.

That may be the source of your problem. ranges are not collections. They 
do not own data. They just show data. You can't make them grow. You can 
only consume what you have already read.



Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
"Daniel Murphy" , dans le message (digitalmars.D:171720), a écrit :
> Could it be extended to accept multiple values? (sort of like chain)
> eg.
> foreach(x; makeRange(23, 7, 1990)) // NO allocations!
> {
> 
> }
> I would use this in a lot of places I currently jump through hoops to get a 
> static array without allocating. 

That's a good idea. IMHO, the real solution would be to make an easy way 
to create static arrays, and slice them when you want a range.
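A minimal illustration of that approach with today's language: put the 
values in a fixed-size array (no heap allocation) and slice it when a 
range is needed:

```d
import std.algorithm : equal;

void main()
{
    // a static array lives on the stack: no allocation
    int[3] vals = [23, 7, 1990];

    // slicing with [] yields a range over the stack data;
    // the slice must not outlive this stack frame
    int sum;
    foreach (x; vals[])
        sum += x;

    assert(sum == 2020);
    assert(equal(vals[], [23, 7, 1990]));
}
```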

-- 
Christophe

If it were just me, array literals would be static, and people 
should use .dup when they want a surviving slice.

Well, if it were just me, all function signatures would tell when 
references to data escape the scope of the function, and all data would 
be allocated automatically where it should be by the compiler.


Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Andrei Alexandrescu , dans le message (digitalmars.D:171723), a écrit :
>> auto emptyRange(E)(E value)
>> {
>>return repeat(value).takeNone;
>> }

> That also seems to answer Jonathan's quest about defining emptyRange. 
> Just use takeNone(R.init).

err, that should be more like:

auto singletonRange(E)() // with no parameters
{
  return takeNone!(typeof(repeat(E.init)))();
}

An emptyRange compatible with singletonRange should be callable like 
singletonRange and take no parameter, so that the emptyRange name could 
be reserved for a real statically empty range (which is pretty easy to 
implement).

-- 
Christophe


Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Andrei Alexandrescu , dans le message (digitalmars.D:171717), a écrit :
> On 7/10/12 11:11 AM, Christophe Travert wrote:
>> If you do not want the heap allocation of the array, you can create a
>> one-element range to feed to chain (maybe such a thing could be placed
>> in phobos, next to takeOne).
>>
>> struct OneElementRange(E)
>> {
>>E elem;
>>bool passed;
>>@property ref E front() { return elem; }
>>void popFront() { passed = true; }
>>@property bool empty() { return passed; }
>>@property size_t length() { return 1-passed; }
>>//...
>> }
> 
> Yah, probably we should add something like this:
> 
> auto singletonRange(E)(E value)
> {
>  return repeat(value).takeExactly(1);
> }

It would be much better to use:

auto singletonRange(E)(E value)
{
 return repeat(value).takeOne;
}

as well as:

auto emptyRange(E)(E value)
{
  return repeat(value).takeNone;
}

to have the advantages of takeOne and takeNone over takeExactly.

> I don't think it would be considerably less efficient than a handwritten 
> specialization. But then I've been wrong before in assessing efficiency.

Error messages displaying the type of singletonRange!E will be weird, 
but that's far from being the first place where they will be. Simplicity 
and maintenance of Phobos seem more important to me, at least until 
these algorithms get stable, meaning open bug reports on algorithms and 
ranges are solved, and new bugs appear rarely. Optimisers should have no 
trouble inlining calls to Repeat's methods...




Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:171690), a écrit :
>> int[] arr = [ 1, 2, 2, 2, 2, 3, 4, 4, 4, 5 ];
>> assert(equal(uniq(arr), [ 1, 2, 3, 4, 5 ][]));
> 
> How should I know that from the example?


Maybe there should be an example with an unsorted range, and a better 
explanation:

| auto uniq(...)
|   Iterates unique consecutive elements of a given range (...)
|   Note that equivalent elements are kept if they are not consecutive.
| 
| Example:
|   int[] arr = [ 1, 2, 2, 3, 4, 4, 4, 2, 4, 4];
|   assert(equal(uniq(arr), [ 1, 2, 3, 4, 2, 4][]));


Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
"Simen Kjaeraas" , dans le message (digitalmars.D:171678), a écrit :
>> Well, I haven't been able to use a single function from std.algorithm  
>> without adding a lot of calls to "array" or "to!(string)". I think the  
>> things I'm trying to do seems trivial and quite common. I'm I overrating  
>> std.algorithm or does it not fit my needs?
>>
> 
> bearophile (who else? :p) has suggested the addition of eager and in-place
> versions of some ranges, and I think he has a very good point.

That would have been useful before UFCS.
Now, writing .array() at the end of an algorithm call is not a pain.

string[] arr = [1, 2, 2, 3].uniq().map!(to!string)().array();



Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Dmitry Olshansky , dans le message (digitalmars.D:171679), a écrit :
> Because uniq work only on sorted ranges? Have you tried reading docs?
> "
> Iterates unique consecutive elements of the given range (functionality 
> akin to the uniq system utility). Equivalence of elements is assessed by 
> using the predicate pred, by default "a == b". If the given range is 
> bidirectional, uniq also yields a bidirectional range.
> "

No: as the doc says, uniq works on any range, but removes only 
consecutive duplicates. If you want to remove all duplicates, 
then you need a sorted range.
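A short example of combining the two, sorting first so that equal 
elements become consecutive:

```d
import std.algorithm : equal, sort, uniq;
import std.array : array;

void main()
{
    int[] arr = [4, 1, 2, 4, 2, 3];
    // sort makes duplicates adjacent; uniq then drops them
    auto noDups = arr.sort().uniq().array();
    assert(equal(noDups, [1, 2, 3, 4]));
}
```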




Re: Why is std.algorithm so complicated to use?

2012-07-10 Thread Christophe Travert
Jacob Carlborg , dans le message (digitalmars.D:171685), a écrit :
> I mean, is it possible to have the original code work?
> 
> auto bar = foo.chain("bar");
> 
> Or perhaps more appropriate:
> 
> auto bar = foo.append("bar");

What is wrong with foo.chain(["bar"])?

If you do not want the heap allocation of the array, you can create a 
one-element range to feed to chain (maybe such a thing could be placed 
in phobos, next to takeOne).

struct OneElementRange(E)
{
  E elem;
  bool passed;
  @property ref E front() { return elem; }
  void popFront() { passed = true; }
  @property bool empty() { return passed; }
  @property size_t length() { return 1-passed; }
  //...
}

You can't expect chain to work the same way as run-time append. A 
compile-time append would be very inefficient if misused.
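Completing the struct above into a runnable example (the oneElement 
helper name is made up), it combines with chain without any heap 
allocation for the appended element:

```d
import std.algorithm : equal;
import std.range : chain;

struct OneElementRange(E)
{
    E elem;
    bool passed;
    @property ref E front() { return elem; }
    void popFront() { passed = true; }
    @property bool empty() const { return passed; }
    @property size_t length() const { return passed ? 0 : 1; }
}

// hypothetical convenience wrapper
auto oneElement(E)(E value)
{
    return OneElementRange!E(value);
}

void main()
{
    auto foo = ["a", "b"];
    assert(equal(foo.chain(oneElement("bar")), ["a", "b", "bar"]));
}
```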

> https://github.com/jacob-carlborg/dstep/blob/master/dstep/translator/Translator.d#L217

you might try this (untested)


string function(Parameter) stringify = (x)
{
    return (x.isConst ? "const(" ~ x.type ~ ")" : x.type)
        ~ (x.name.length ? " " ~ translateIdentifier(x.name) : "");
};

auto params = parameters
  .map!stringify()
  .chain(variadic? []: ["..."])
  .joiner(", ");

context ~= params;

I am not sure this will be more efficient. joiner may be slowed down by 
the fact that it is called with a chain result, which is slower on 
front. But at least you save yourself the heap allocation of the params 
array*.

I would use:
context ~= parameters.map!stringify().joiner(",  ");
if (variadic) context ~= ", ...";

To make the best implementation would require to know how the String 
context works.

*Note that here, stringify is not lazy, and thus allocates. It 
could be a chain or a joiner, but I'm not sure the result would really 
be more efficient.


Re: Let's stop parser Hell

2012-07-09 Thread Christophe Travert
deadalnix , dans le message (digitalmars.D:171330), a écrit :
> D isn't 100% CFG. But it is close.

What makes D fail to be a CFG?


Re: Proposal: takeFront and takeBack

2012-07-05 Thread Christophe Travert
"monarch_dodra" , dans le message (digitalmars.D:171175), a écrit :
> For those few algorithms that work on bidirRange, we'd need a 
> garantee that they don't ever front/back the same item twice. We 
> *could* achieve this by defining a bidirectionalInputRange class 
> of range.

filter does that. If you want to call front only once, you have to 
cache the results or... pop as you take the front value.

popFrontN and drop will crash too.
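For reference, the proposed takeFront can be sketched in a few lines (my 
own sketch of the proposal, not an official signature): it reads front 
and pops in one step, so front is guaranteed to be evaluated only once 
even on ranges like filter whose front is costly:

```d
import std.range : isInputRange;

auto takeFront(R)(ref R r) if (isInputRange!R)
{
    auto e = r.front;   // evaluate front exactly once
    r.popFront();
    return e;
}

void main()
{
    import std.algorithm : filter;
    auto r = [1, 2, 3, 4].filter!(x => x % 2 == 0);
    assert(takeFront(r) == 2);
    assert(takeFront(r) == 4);
}
```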


Re: Proposal: takeFront and takeBack

2012-07-05 Thread Christophe Travert
If you really don't need the value, you could devise a "justPop" method 
that does not return it (by the way, overloading by return type would be 
an amazing feature here). The idea is not "we should return a value 
every time we pop", but "we should pop when we return a value".

-- 
Christophe

