Advent of Code

2015-12-01 Thread Regan Heath via Digitalmars-d

Hi all,

Long time since I read/posted here but I saw this and thought it 
might be good PR for D:

http://adventofcode.com/

Should also be fun.

Ciao,
Regan



Re: 'partial' keyword in C# is very good for project , what's the same thing in D?

2014-11-12 Thread Regan Heath via Digitalmars-d

On Mon, 10 Nov 2014 18:09:12 -, deadalnix deadal...@gmail.com wrote:


On Monday, 10 November 2014 at 10:21:34 UTC, Regan Heath wrote:
On Fri, 31 Oct 2014 09:30:25 -, Dejan Lekic dejan.le...@gmail.com  
wrote:
In D apps I work on I prefer all my classes in a single module, as is  
common D way, or shall I call it modular way?


Sure, but that's not the point of partial.  It's almost never used by  
the programmer directly, and when it is used you almost never need to  
look at the generated partial class code as it just works.  So, you  
effectively get what you prefer but you also get clean separation  
between generated and user code, which is very important if the  
generated code needs to be re-generated and it also means the user code  
stays simpler, cleaner and easier to work with.


Basically it's just a good idea(TM).  Unfortunately as many have said,  
it's not something D2.0 is likely to see.  String mixins aren't the  
nicest thing to use, but at least they can achieve the same/similar  
thing.


R


I don't get how the same can't be achieved with mixin template
for instance.


Someone raised concerns.. I haven't looked into it myself.  If it can,  
great :)
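
For reference, roughly what the mixin template route would look like (module and member names are made up; whether it covers everything 'partial' does is exactly the open question):

module generated;
// regenerated by the tool, never edited by hand
mixin template GeneratedFoo()
{
    int generatedField;
    void generatedMethod() { /* machine-generated body */ }
}

module user;
import generated;

class Foo
{
    mixin GeneratedFoo;       // pull the generated members into Foo
    void userMethod() { }     // hand-written code stays in this file
}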


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: 'partial' keyword in C# is very good for project , what's the same thing in D?

2014-11-10 Thread Regan Heath via Digitalmars-d
On Fri, 31 Oct 2014 09:30:25 -, Dejan Lekic dejan.le...@gmail.com  
wrote:
In D apps I work on I prefer all my classes in a single module, as is  
common D way, or shall I call it modular way?


Sure, but that's not the point of partial.  It's almost never used by the  
programmer directly, and when it is used you almost never need to look at  
the generated partial class code as it just works.  So, you effectively  
get what you prefer but you also get clean separation between generated  
and user code, which is very important if the generated code needs to be  
re-generated and it also means the user code stays simpler, cleaner and  
easier to work with.


Basically it's just a good idea(TM).  Unfortunately as many have said,  
it's not something D2.0 is likely to see.  String mixins aren't the nicest  
thing to use, but at least they can achieve the same/similar thing.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: 'partial' keyword in C# is very good for project , what's the same thing in D?

2014-10-29 Thread Regan Heath via Digitalmars-d
On Wed, 29 Oct 2014 07:54:39 -, Paulo  Pinto pj...@progtools.org  
wrote:



On Wednesday, 29 October 2014 at 07:41:41 UTC, FrankLike wrote:

Hello,everyone,
I've written some projects in C# and find the 'partial' keyword very
useful; it lets the auto-generated code live in another single file, so my
code is very easy to update.

But what is the same thing in D?

Thank you,every one.


Maybe mixins might be a possibility.


Something like..

class Foo
{
  mixin(import("auto-generated.d"));
}

where auto-generated.d has class members/methods but no class Foo itself.
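
For illustration, the generated file might look something like this (the file name and members are hypothetical; the file is found via the compiler's -J string-import path):

// auto-generated.d -- regenerated by a tool, never hand-edited
int generatedField;

void generatedMethod()
{
    // machine-generated body
}

The hand-written members of Foo then sit next to the mixin in the user's file, which is roughly the separation partial gives you in C#.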


Partial classes are used in C# wherever you need to combine auto-generated  
code and user code into a single class.  So, the Windows GUI builder does  
it placing all the GUI component construction and property setting in one  
file, and allowing the user to only have to see/edit the application level  
code in another file.  Likewise LINQ to SQL generates a custom DataContext  
child class, and the user can optionally create a 2nd file with the  
partial class to extend it.


C# also has partial methods which are essentially abstract methods with a  
compiler generated empty body.  They are not virtual as you cannot call a  
base.method() from method(), instead you optionally implement the method  
and if you don't it does nothing.  LINQ to SQL uses these for  
insert/update/delete events for each table in your database.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-23 Thread Regan Heath via Digitalmars-d-announce
On Thu, 23 Oct 2014 15:27:50 +0100, Leandro Lucarella l...@llucax.com.ar  
wrote:



Regan Heath, el 22 de October a las 10:41 me escribiste:

NO, this is completely false, and why I think you are not entirely
familiar with env vars in posix. LD_PRELOAD and LD_LIBRARY_PATH affects
ALL, EACH and EVERY program for example. D or not D. Every single
dynamically linked program.

True.  And the reason these behave this way is because we *always*
want them to - the same is NOT true of the proposed vars for D.


No, not at all, you very rarely want to change LD_PRELOAD and
LD_LIBRARY_PATH globaly.


Sure, but when you do change them you will want them to propagate, by  
default, which is why envvars are used for these.


The same is not true of many potential D GC/allocation/debug flags, we do  
not necessarily want them to propagate at all and in fact we may want to  
target a single exe in a process tree i.e.


parent     <- not this
  child1   <- this one
  child2   <- not this


My conclusion is we don't agree mainly on this:

I think there are cases where you want runtime configuration to
propagate or be set more or less globally.


I agree that there are cases we might want it to propagate *from a parent
exe downwards* or similar, but this is not what I would call more or less
globally; it's very much less than globally.  The scope we want is going
to be either a single exe, or that exe and some or all of its children,
and possibly only for a single execution.


Sure, you *could* wrap a single execution in its own session and only set
the envvar within that session, but it's far simpler just to pass a command
line arg.  Likewise, you could set an envvar in a session and run multiple
executions within that session, but again it's simpler just to pass an arg
each time.


Basically, I don't see what positive benefit you get from an envvar over a
command line switch, especially if you assume/agree that the most sensible
default for these switches is 'off' and that they should be enabled
specifically.


I think what we disagree about here is the scope it should apply and  
whether propagation should be the default behaviour.



You think no one will ever want some runtime option to propagate.


Nope, I never said that.


Also, I don't have much of a problem with having command-line options to
configure the runtime too, although I think in linux/unix is much less
natural.


Surely not, unix is the king of command line switches.


Runtime configuration will be most of the time some to be done
either by the developer (in which case it would be nicer to have a
programatic way to configure it)


Agreed.


or on a system level, by a system
administrator / devops (in which case for me environment variables are
superior for me).


Disagree.  It's not something we ever want at a system level, it's  
somewhere within the range of a single session - single execution.



Usually runtime options will be completely meaningless
for a regular user. Also, will you document them when you use --help?


Of course not, just as you would not document the envvar(s).

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-22 Thread Regan Heath via Digitalmars-d-announce
On Tue, 21 Oct 2014 23:52:22 +0100, Leandro Lucarella l...@llucax.com.ar  
wrote:

The runtime is not platform independent AT ALL.

  ^ implementation


Why should you provide a platform agnostic way to configure it?


Because it makes life easier for developers and cross platform  
development, not to mention documentation.  The benefits far outweigh the  
costs.



I can understand it if it's free,
but if you have to sacrifice something just to get a platform agnostic
mechanism, for me it's not worth it at all.


Reasonable people may disagree.


All these fear about how this can obscurely affect programs
is totally unfunded. That just does not happen. Not at least commonly
enough to ignore all the other advantages of it.

Sure, but past/current env vars being used are used *privately* to a
single program.


NO, this is completely false, and why I think you are not entirely
familiar with env vars in posix. LD_PRELOAD and LD_LIBRARY_PATH affects
ALL, EACH and EVERY program for example. D or not D. Every single
dynamically linked program.


True.  And the reason these behave this way is because we *always* want  
them to - the same is NOT true of the proposed vars for D.  Which is my  
point.



This is a super common mechanism. I never ever had problems with this.
Did you? Did honestly you even know they existed?


Yes.  But this is beside the point, which I hope I have clarified now?

Regan

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Destructor order

2014-10-22 Thread Regan Heath via Digitalmars-d-learn

On Wed, 22 Oct 2014 16:49:20 +0100, eles e...@eles.com wrote:


On Wednesday, 22 October 2014 at 15:45:02 UTC, eles wrote:

D version with structs:

{ //display ~C~B~A
A foo;
B bar;
C *caz = new C();
delete caz;
}

as expected.


Structs are special, compare:
http://dlang.org/struct.html#struct-destructor

with:
http://dlang.org/class.html#destructors

Specifically:
The garbage collector is not guaranteed to run the destructor for all  
unreferenced objects. Furthermore, the order in which the garbage  
collector calls destructors for unreferenced objects is not specified. This  
means that when the garbage collector calls a destructor for an object of  
a class that has members that are references to garbage collected objects,  
those references may no longer be valid. This means that destructors  
cannot reference sub objects. This rule does not apply to auto objects or  
objects deleted with the DeleteExpression, as the destructor is not being  
run by the garbage collector, meaning all references are valid.


Regan

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-21 Thread Regan Heath via Digitalmars-d-announce
On Mon, 20 Oct 2014 18:19:33 +0100, Sean Kelly s...@invisibleduck.org  
wrote:



On Monday, 20 October 2014 at 10:39:28 UTC, Regan Heath wrote:


Sure, but past/current env vars being used are used *privately* to a  
single program.  What you're suggesting here are env vars which will  
affect *all* D programs that see them.  If D takes over the world as we  
all hope it will then this will be a significantly different situation  
to what you are used to.


I'm not advocating the approach, but you could create a run_d
app that simply set the relevant environment args and then
executed the specified app as a child process.  The args would be
picked up by the app without touching the system environment.
This would work on Windows as well as on *nix.


Sure, but in this case passing an argument is both simpler and clearer  
(intent).


This is basically trying to shoehorn something in where it was never
intended to be used.  Envvars by design are supposed to affect everything
running in the environment; they're the wrong tool for the job if you
want to target specific processes, which IMO is a requirement we have.


A specific example.  Imagine we have the equivalent of the windows CRT  
debug malloc feature bits, i.e. never free or track all allocations etc.   
These features are very useful, but they are costly.  Turning them on for  
an entire process tree may be unworkable - it may be too slow or consume  
too much memory.  A more targeted approach is required.


There are plenty of options, but a global envvar is not one of them.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-20 Thread Regan Heath via Digitalmars-d-announce
On Fri, 17 Oct 2014 17:54:55 +0100, Leandro Lucarella l...@llucax.com.ar  
wrote:

Regan Heath, el 17 de October a las 15:43 me escribiste:

I think you've mistook my tone.  I am not religious about this.  I
just think it's a bad idea for a program to alter behaviour based on
a largely invisible thing (environment variable).  It's far better
to have a command line switch staring you in the face.


But it's not the same. I don't mean to be rude, but all you (and Walter)
are saying about environment is evidence of not knowing how useful they
are in POSIX OSs


I am aware of how they are used as I have had to deal with them in the  
past. :)



what's the history in those OSs and what people expect from them.


D is not simply for these OSs and should be as platform agnostic as  
possible for core functionality.



All these fear about how this can obscurely affect programs
is totally unfunded. That just does not happen. Not at least commonly
enough to ignore all the other advantages of it.


Sure, but past/current env vars being used are used *privately* to a  
single program.  What you're suggesting here are env vars which will  
affect *all* D programs that see them.  If D takes over the world as we  
all hope it will then this will be a significantly different situation to  
what you are used to.



If you keep denying it usefulness and how they are different from
command-line arguments, we'll keep going in circles.


I am not denying they are useful.  I am denying they are *better* than a  
command line argument *for this specific use case*



Plus as Walter mentioned the environment variable is a bit like a
shotgun, it could potentially affect every program executed from
that context.

We have a product here which uses env vars for trace flags and
(without having separate var for each process) you cannot turn on
trace for a single process in an execution tree, instead each child
inherits the parent environment and starts to trace.


So, your example is a D program, that spawns other D programs, so if you
set an environment variable to affect the behaviour of the starting
program, you affect also the behaviour of the child programs.


Yes.  How do you control which of these programs is affected by your  
global-affects-all-D-programs-env-var?


This is a good example, and I can argue for environment variables for the
same reason. If I want to debug this whole mess, using command-line options
there is no way I can affect the whole execution tree to log useful
debug information.


Sure you can.  You can do whatever you like with an argument, including  
passing a debug argument to sub-processes as required.  Or, you could use  
custom env vars to do the same thing.


What you *do not* want is a global env var that indiscriminately affects  
every program that sees it, this gives you no control.



See, you proved my point, environment variables and
command-line arguments are different and thus, useful for different
situations.


Sure, but the point is that a global env var that silently controls *all*  
D programs is a shotgun blast, and we want a needle.



And.. when some of those flags have different meanings in different
processes it gets even worse.


Why would they? Don't create problems where there are any :)


Sadly it exists .. I inherited it (the source is 20+ years old).


Especially if one of those flags prints
debugging to stdout, and the process is run as a child where
input/output are parsed.. it just plain doesn't work.  It's a mess.


If you write to stdout (without giving the option to write to a log
file) then what you did is just broken. Again, there is no point in
inventing theoretical situations where you can screw anything up. You
can always fabricate those. Let's stay on the domain of reality :)


Sadly not theoretical.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Make const, immutable, inout, and shared illegal as function attributes on the left-hand side of a function

2014-10-20 Thread Regan Heath via Digitalmars-d

On Sun, 19 Oct 2014 10:06:31 +0100, eles eles...@gzk.dot wrote:


On Wednesday, 15 October 2014 at 14:42:30 UTC, Regan Heath wrote:

On Thu, 09 Oct 2014 09:50:44 +0100, Martin Nowak c...@dawg.eu wrote:

Would this affect your code?


Probably, but I have no D code of any size to care about.


Would this change make you to write more code in D?


No.  The blockers for me are:

1- We're not likely to use D here at work any time soon.  We're writing  
new stuff in C#/Java and we maintain legacy C/C++.


2- For my own projects I typically write Windows GUI programs and D is  
nowhere near C# for this.


3- Last time I tried to write anything non-GUI of a substantial nature I  
was annoyed by the fact that I could not mixin virtual methods (which I  
know is a tough problem and waaay down the priority list, if on it at all).  
It's a silly reason to be put off, I know; it was just disappointing,  
enough to put the brakes on, and I just drifted off after that.


4- TBH I don't have enough free time or motivation to do more, it's not  
you (D) it's me :P


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-17 Thread Regan Heath via Digitalmars-d-announce
On Fri, 17 Oct 2014 00:01:39 +0100, Leandro Lucarella l...@llucax.com.ar  
wrote:



Regan Heath, el 14 de October a las 11:11 me escribiste:

I still don't understand why wouldn't we use environment variables for
what they've been created for, it's foolish :-)

As mentioned this is not a very windows friendly/like solution.


As mentioned you don't have to use a unique cross-platform solution, you
can have different solutions for different OSs. No need to lower down to
the worse solution.


You've got it backwards.  I'm looking for a *better* solution than  
environment variables, which are a truly horrid way to control runtime  
behaviour IMHO.  Something built into the language or runtime itself.   
And, better yet would be something that is more generally useful - not  
limited to GC init etc.



Wouldn't it be more generally useful to have another function like
main() called init() which if present (optional) is called
before/during initialisation.  It would be passed the command line
arguments.  Then a program can chose to implement it, and can use it
to configure the GC in any manner it likes.

Seems like this could be generally useful in addition to solving
this issue.


It is nice, but a) a different issue, this doesn't provide
initialization time configuration.


I don't follow.  You want to execute some code A before other code B  
occurs.  This meets that requirement - assuming init() is called at the  
point you need it to be called.



Think of development vs. devops. If
devops needs to debug a problem they could potentially re-run the
application activating GC logging, or GC stomping. No need to recompile,
no need to even have access to the source code.


./application -gclog
./application -gcstomp

..code..

// sketch: the hypothetical init() hook proposed above
void init(string[] args)
{
    import std.algorithm : canFind;
    if (args.canFind("-gclog"))   { /* enable GC logging  */ }
    if (args.canFind("-gcstomp")) { /* enable GC stomping */ }
}

No need to recompile.

Some GC options might make sense for all D applications, in that case the  
compiler default init() could handle those and custom init() functions  
would simply call it, and handle any extra custom options.


Other GC/allocation options might be very application specific i.e.  
perhaps the application code cannot support RC for some reason, etc.



And b) where would this init() function live? You'll have to define it
always


Surely not.


, even if you don't want to customize anything, otherwise the
compiler will have to somehow figure out if one is provided or not and
if is not provided, generate a default one. Not a simple solution to
implement.


Sounds pretty trivial to me.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-16 Thread Regan Heath via Digitalmars-d-announce
On Thu, 16 Oct 2014 09:10:38 +0100, Dylan Knutson tcdknut...@gmail.com  
wrote:






Wouldn't it be more generally useful to have another function like  
main() called init() which if present (optional) is called  
before/during initialisation.  It would be passed the command line  
arguments.  Then a program can choose to implement it, and can use it to  
configure the GC in any manner it likes.


Seems like this could be generally useful in addition to solving this  
issue.


Isn't this what module constructors are for? As for passed in  
parameters, I'm sure there's a cross platform way to retrieve them  
without being passed them directly, ala how Rust does it.


Provided module constructors occur early enough in the process I guess  
this would work.  You would need to ensure the module constructor doing  
the GC configuration occurred first too I guess.
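
For what it's worth, a minimal sketch of where such code would sit (the actual GC configuration call is hypothetical - nothing like it is assumed to exist in druntime's public API here):

module gcconfig;

shared static this()
{
    // Module constructors run during runtime initialisation, before main().
    // Hypothetical: read a config file or global here and configure the GC.
    // Ordering note: constructors run in import-dependency order, so modules
    // that rely on the configured GC would need to import gcconfig.
}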


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Make const, immutable, inout, and shared illegal as function attributes on the left-hand side of a function

2014-10-15 Thread Regan Heath via Digitalmars-d
On Sat, 11 Oct 2014 13:47:55 +0100, Martin Nowak  
code+news.digitalm...@dawg.eu wrote:



https://github.com/D-Programming-Language/dmd/pull/4043#issuecomment-58748353

There has been a broad support for this on the newsgroup discussion  
because this regularly confuses beginners.
There are also some arguments against it (particularly by Walter) saying  
that this change will put too much work on D code owners.


Let's continue with the following steps.
- add RHS/LHS function qualifiers to D's style guide
- change all code formatting (like dmd's headergen and ddoc to use RHS  
qualifiers)
- help Brian to get dfix up and running  
(https://github.com/Hackerpilot/dfix/issues/1)


Then we might revisit the topic in 6 month and see whether we have  
better arguments now.


+1

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Make const, immutable, inout, and shared illegal as function attributes on the left-hand side of a function

2014-10-15 Thread Regan Heath via Digitalmars-d

On Thu, 09 Oct 2014 09:50:44 +0100, Martin Nowak c...@dawg.eu wrote:

Would this affect your code?


Probably, but I have no D code of any size to care about.


Do you think it makes your code better or worse?


Better.


Is this just a pointless style change?


Nope.


Anything else?


Only what you said in summary to this thread (I am waay late to this party)

Regan

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-14 Thread Regan Heath via Digitalmars-d-announce
On Sat, 11 Oct 2014 01:45:48 +0100, Leandro Lucarella l...@llucax.com.ar  
wrote:



Walter Bright, el  9 de October a las 17:28 me escribiste:

On 10/9/2014 7:25 AM, Dicebot wrote:
At the same time I don't see what real benefit such runtime options  
brings to
the table. This is why in my PR garbage collector is currently chosen  
during

compilation time.

Choosing at compile time is probably best.


This is not (only) about picking a GC implementation, but also about GC
*options/configuration*. The fact that right now to select between
concurrent or not would mean using a different GC altogether is just an
implementation detail. As I said, if at some point we can merge both,
this wouldn't be necessary. Right now CDGC can disable the concurrent
scanning, among other cool things (like enabling memory stomping,
enabling logging of allocations to a file, enable logging of collections
to a file, controlling the initial pools of memory when the program
starts, etc.).

This is very convenient to turn on/off not exactly at *runtime* but what
I call *initialization time* or program startup. Because sometimes
recompiling the program with different parameters is quite annoying, and
as said before, for stuff that needs to be initialized *before* any
actual D code is executed, sometimes is not easy to be done *inside* D
code in a way that's not horrible and convoluted.

I still don't understand why wouldn't we use environment variables for
what they've been created for, it's foolish :-)


As mentioned this is not a very windows friendly/like solution.

Wouldn't it be more generally useful to have another function like main()  
called init() which if present (optional) is called before/during  
initialisation.  It would be passed the command line arguments.  Then a  
program can choose to implement it, and can use it to configure the GC in  
any manner it likes.


Seems like this could be generally useful in addition to solving this  
issue.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: scope() statements and return

2014-10-08 Thread Regan Heath via Digitalmars-d
On Tue, 07 Oct 2014 14:39:06 +0100, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 10/7/14, 12:36 AM, monarch_dodra wrote:

Hum... But arguably, that's just exception chaining happening. Do you
have any examples of someone actually dealing with all the exceptions
in a chain in a catch, or actually using the information in a manner
that is more than just printing?


No. But that doesn't mean anything; all uses of exceptions I know of are  
used for just printing. -- Andrei


I have a couple of examples here in front of me.  This is in C#...

[not just for printing]
1. I catch a ChangeConflictException and attempt some basic automatic  
conflict resolution (i.e. if a column has changed in the database but I  
have not changed the local version, then merge in the value from the database).


[examining the chain]
2. I catch Exception then test if ex is TransactionException AND if  
ex.InnerException is TimeoutException (AKA first in chain) then raise a  
different sort of alert (for our GUI to display).


(FYI the reason I don't have a separate catch block for  
TransactionException specifically is that it would involve duplicating all  
the cleanup I am doing in this catch block, all for a one-line "raise a  
different alert" call - it just didn't seem worth it)
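
The rough D equivalent of case 2, walking the chained exceptions via Throwable.next (the exception classes and the transaction function are hypothetical stand-ins):

class TimeoutException : Exception
{
    this(string msg) { super(msg); }
}

class TransactionException : Exception
{
    this(string msg, Throwable next) { super(msg, next); }
}

void doTransaction()
{
    // hypothetical: the timeout surfaces wrapped in a transaction failure
    throw new TransactionException("commit failed", new TimeoutException("timed out"));
}

void main()
{
    try
    {
        doTransaction();
    }
    catch (Exception e)
    {
        // ...common cleanup...
        // then inspect the chain: was the root cause a timeout?
        if (cast(TransactionException)e && cast(TimeoutException)e.next)
        {
            // raise a different sort of alert here
        }
    }
}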


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-06 Thread Regan Heath via Digitalmars-d

On Mon, 06 Oct 2014 15:48:31 +0100, Jacob Carlborg d...@me.com wrote:


On 06/10/14 15:45, Andrei Alexandrescu wrote:


Knowledge doesn't have to be by type; just place data inside the
exception. About the only place where multiple catch statements are
used to make fine distinctions between exception types is in sample code
showing how to use multiple catch statements :o). This whole notion
that different exceptions need different types is as far as I can tell a
red herring.


What do you suggest, error codes? I consider that an ugly hack.


Why?

It gives us the benefits of error code return values:
 - ability to easily/cheaply check for/compare them using switch on code  
value (vs comparing/casting types)

 - ability to pass through OS level codes directly

Without any of the penalties:
 - checking for them after every call.
 - losing the return value slot or having to engineer multiple return  
values in the language.
 - having to mix error codes in with valid return values (for functions  
that return int).


We also get:
 - no type proliferation.
 - no arguments about what exception types are needed, or the hierarchy to  
put them in.


Seems like a win to me.


Of course.. it would be nicer still if there was a list of OS/platform  
agnostic error codes which were used throughout phobos and could be  
re-used by client code.  And.. (for example) it would be nice if there was  
a FileNotFound(string path) function which returned an Exception using the  
correct code allowing:

  throw FileNotFound(path);

and so on.
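
A minimal sketch of that idea (the code enum and the FileNotFound factory are hypothetical - nothing like this exists in Phobos):

enum ErrorCode { none, fileNotFound, accessDenied /* ... */ }

class CodedException : Exception
{
    ErrorCode code;
    this(ErrorCode code, string msg)
    {
        super(msg);
        this.code = code;
    }
}

Exception FileNotFound(string path)
{
    return new CodedException(ErrorCode.fileNotFound, "file not found: " ~ path);
}

// usage, with the switch-style handling described above:
//   try { throw FileNotFound(path); }
//   catch (CodedException e) { switch (e.code) { case ErrorCode.fileNotFound: /*...*/ break; default: break; } }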


I do not know a lot about how exceptions are thrown and caught at the  
compiler/compiled code level, but perhaps there is even a performance  
benefit to be had if you know that only 2 possible types (Exception and  
Error) can/will be thrown..


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: WAT: opCmp and opEquals woes

2014-07-28 Thread Regan Heath via Digitalmars-d
On Sat, 26 Jul 2014 05:22:26 +0100, Walter Bright  
newshou...@digitalmars.com wrote:
If you don't want to accept that equality and comparison are  
fundamentally different operations, I can only repeat saying the same  
things.


For the majority of use cases they are *not* in fact fundamentally  
different.


You're correct, they are *actually* fundamentally different at a  
conceptual/theoretical level, but this difference is irrelevant in the  
majority of cases.


It is true that we need to be able to define/model this difference (which  
is why we have both opCmp and opEquals) but it is *not* true that every  
user, for every object, needs to be aware of and cope with this difference.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: WAT: opCmp and opEquals woes

2014-07-28 Thread Regan Heath via Digitalmars-d
On Fri, 25 Jul 2014 21:38:33 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 7/25/2014 4:10 AM, Regan Heath wrote:
Sure, Andrei makes a valid point .. for a minority of cases.  The  
majority case
will be that opEquals and opCmp==0 will agree.  In those minority cases  
where
they are intended to disagree the user will have intentionally defined  
both, to
be different.  I cannot think of any case where a user will intend for  
these to

be different, then not define both to ensure it.


You've agreed with my point, then, that autogenerating opEquals as  
memberwise equality (not opCmp==0) if one is not supplied will be  
correct unless the user code is already broken.


No, you've misunderstood my point.

My point was that for the vast majority of coders, in the vast majority of  
cases opCmp()==0 will agree with opEquals().  It is only in very niche  
cases i.e. where partial ordering is actually present and important, that  
this assumption should be broken.


Yet, by default, if a user defines opCmp() the compiler generated opEquals  
may well violate that assumption.  This is surprising and will lead to  
subtle bugs.


If someone is intentionally defining an object for partial ordering they  
will expect to have to define both opCmp and opEquals, and not only that,  
if they somehow neglect to do so their first test of partial ordering will  
show they have a bug and they will soon realise their mistake.


The same cannot be said for someone who wants total ordering (the majority  
of users in the majority of cases).  In this case they are unlikely to  
specifically test for ordering bugs, and this mistake will creep in and  
cause trouble down the line.
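
A small example of the surprise in question - a struct that orders on one field only; the compiler-supplied memberwise equality then disagrees with opCmp()==0 (names are made up):

struct Version
{
    int number;
    string label;   // deliberately ignored by opCmp

    int opCmp(const Version rhs) const
    {
        return number < rhs.number ? -1 : (number > rhs.number ? 1 : 0);
    }
    // no opEquals defined: the default compares ALL members
}

void main()
{
    auto a = Version(1, "beta");
    auto b = Version(1, "release");

    assert(a.opCmp(b) == 0);  // equal as far as ordering is concerned
    assert(a != b);           // ...but not equal memberwise
}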


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: WAT: opCmp and opEquals woes

2014-07-25 Thread Regan Heath via Digitalmars-d
On Fri, 25 Jul 2014 09:39:11 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 7/25/2014 1:02 AM, Jacob Carlborg wrote:

3. If opCmp is defined but no opEquals, lhs == rhs will be lowered to
lhs.opCmp(rhs) == 0


This is the sticking point. opCmp and opEquals are separate on purpose,  
see Andrei's posts.


Sure, Andrei makes a valid point .. for a minority of cases.  The majority  
case will be that opEquals and opCmp==0 will agree.  In those minority  
cases where they are intended to disagree the user will have intentionally  
defined both, to be different.  I cannot think of any case where a user  
will intend for these to be different, then not define both to ensure it.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: File needs to be closed on Windows but not on Posix, bug?

2014-07-07 Thread Regan Heath via Digitalmars-d-learn
On Mon, 07 Jul 2014 12:17:34 +0100, Joakim dl...@joakim.airpost.net  
wrote:



On Monday, 7 July 2014 at 10:19:01 UTC, Kagamin wrote:

See if stdio allows you to specify delete sharing when opening the file.


I don't know what delete sharing is exactly, but the File constructor  
simply calls fopen and I don't see any option for the Windows fopen that  
seems to do it:


http://msdn.microsoft.com/en-us/library/yeby3zcb.aspx


The fopen variant that allows you to specify sharing is:
http://msdn.microsoft.com/en-us/library/8f30b0db.aspx

But it does not mention delete sharing there.

CreateFile allows sharing to be specified for delete however:
http://msdn.microsoft.com/en-gb/library/windows/desktop/aa363858(v=vs.85).aspx

So... you could (rough sketch below):
 - Call CreateFile giving you a handle
 - Call _open_osfhandle to get a file descriptor
 - Call _fdopen on the file descriptor to get a FILE* for it
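
A rough sketch of those three steps in D, for illustration only (error handling omitted; _open_osfhandle/_fdopen are MS CRT functions declared by hand here, and the flag value passed to _open_osfhandle is simplified):

version (Windows)
{
    import core.sys.windows.windows;
    import core.stdc.stdio : FILE;

    extern (C) int   _open_osfhandle(size_t osfhandle, int flags);
    extern (C) FILE* _fdopen(int fd, const char* mode);

    FILE* openWithDeleteSharing(string path)
    {
        import std.utf : toUTF16z;

        // 1. CreateFile with FILE_SHARE_DELETE (plus read/write sharing)
        HANDLE h = CreateFileW(path.toUTF16z(), GENERIC_READ,
                FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                null, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, null);
        if (h == INVALID_HANDLE_VALUE)
            return null;

        // 2. wrap the HANDLE in a CRT file descriptor
        int fd = _open_osfhandle(cast(size_t)h, 0);

        // 3. and the descriptor in a FILE*
        return _fdopen(fd, "rb");
    }
}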

But!  I agree with Adam, leave it as a thin wrapper.  Being a windows  
programmer by trade I would expect the remove to fail, I would not expect  
all my files to be opened with delete sharing enabled by default.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: File needs to be closed on Windows but not on Posix, bug?

2014-07-07 Thread Regan Heath via Digitalmars-d-learn
On Mon, 07 Jul 2014 15:18:51 +0100, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Monday, 7 July 2014 at 12:00:48 UTC, Regan Heath wrote:
But!  I agree with Adam, leave it as a thin wrapper.  Being a windows  
programmer by trade I would expect the remove to fail, I would not  
expect all my files to be opened with delete sharing enabled by default.


R


And I believe behavior is still different. In Linux an open file can be  
accessed even after a delete operation (unlink). But in Windows is that  
possible?


Not sure, I've never done this.  It's just not something you would  
typically want/try to do on windows.


If I had to guess, I would say it would still be possible to access the  
file.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Scott Meyers' DConf 2014 keynote The Last Thing D Needs

2014-05-30 Thread Regan Heath via Digitalmars-d-announce
On Tue, 27 May 2014 22:40:00 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 5/27/2014 2:22 PM, w0rp wrote:
I'm actually a native speaker of 25 years and I didn't get it at first.  
Natural

language communicates ideas approximately.


What bugs me is when people say:

I could care less.


I've always assumed some sort of sentence-finishing laziness on their  
part.  As in, "I could care less, but it would be pretty hard to do so" or  
something like that.


R


Re: Scott Meyers' DConf 2014 keynote The Last Thing D Needs

2014-05-30 Thread Regan Heath via Digitalmars-d-announce
On Thu, 29 May 2014 20:40:10 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 5/29/2014 11:25 AM, Dmitry Olshansky wrote:
Agreed. The simple dream of automatically decoding UTF and staying  
Unicode

correct is a failure.


Yes. Attempting to hide the fact that strings are UTF-8 is just doomed.  
It's like trying to pretend that floating point does not do rounding.


It's far more practical to embrace what it is and deal with it. Yes, D  
programmers will need to understand what UTF-8 is. I don't see any way  
around that.


And it's the right choice.  4 of the 7 billion people in the world today  
are in Asia, and by 2100 80% of the world's population will be in Asia and  
Africa.


http://bigthink.com/neurobonkers/it-is-not-about-political-views-or-ideologies-it-is-blunt-facts-which-are-not-known

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: D Users Survey: Primary OS?

2014-05-30 Thread Regan Heath via Digitalmars-d

Windows 7 x64


Re: Allocating a wstring on the stack (no GC)?

2014-05-09 Thread Regan Heath via Digitalmars-d
On Wed, 07 May 2014 19:41:16 +0100, Maxime Chevalier-Boisvert  
maximechevali...@gmail.com wrote:



Unless I'm misunderstanding it should be as simple as:

wchar[100] stackws; // alloca() if you need it to be dynamically sized.

A slice of this static array behaves just like a slice of a dynamic  
array.


I do need it to be dynamically sized. I also want to avoid copying my  
string data if possible. Basically, I just want to create a wstring  
view on an existing raw buffer that exists in memory somewhere,  
based on a pointer to this buffer and its length.


import std.stdio;
import core.stdc.stdlib : malloc;
import core.stdc.wchar_ : wcscpy;

wchar[] toWChar(const void *ptr, int len)
{
    // Cast the pointer to wchar* and slice it - this copies no data
    return (cast(wchar*)ptr)[0..len];
}

void main()
{
    // Pre-existing data
    int len = 12;
    wchar *ptr = cast(wchar*)malloc(len * wchar.sizeof);
    wcscpy(ptr, "Hello World");

    // Create slice of data
    wchar[] slice = toWChar(ptr, len);
    writefln("%s", slice);
}

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: The writeln() function's args can't be [一 ,二]?

2014-05-06 Thread Regan Heath via Digitalmars-d-learn

On Tue, 06 May 2014 15:48:44 +0100, Marc Schütz schue...@gmx.net wrote:


On Tuesday, 6 May 2014 at 13:35:57 UTC, FrankLike wrote:

The problem is that you have a wide-character comma (,) there.

This works:

   void main() {
       writeln(["一", "二"]);
   }


No,I mean the execute result is error.That doesn't get the [一,  
二],but get the [涓C,浜?].


Why?

Thank you.

Frank.


It works for me (Linux). If you're on Windows, it could have something  
to do with Windows' handling of Unicode, but I don't know enough about  
that to help you. There were posts about this in this newsgroup, maybe  
you can find them, or someone else remembers and can tell you directly...


IIRC you need to type "chcp 65001" and set the command prompt to the  
Lucida font...


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP61: redone to do extern(C++,N) syntax

2014-05-02 Thread Regan Heath via Digitalmars-d

On Fri, 02 May 2014 01:22:12 +0100, deadalnix deadal...@gmail.com wrote:


On Thursday, 1 May 2014 at 10:03:21 UTC, Regan Heath wrote:
On Wed, 30 Apr 2014 20:56:15 +0100, Timon Gehr timon.g...@gmx.ch  
wrote:

If this is a problem, I guess the most obvious alternatives are to:

1. Get rid of namespace scopes. Require workarounds in the case of  
conflicting definitions in different namespaces in the same file. (Eg.  
use a mixin template.) I'd presume this would not happen often.


2. Give the global C++ namespace a distinctive name and put all other  
C++ namespaces below it. This way fully qualified name lookup will be  
reliable.


3. Use the C++ namespace for mangling, but not lookup.  C++ symbols  
will belong in the module they are imported into, and be treated  
exactly the same as a D symbol, e.g.




1. The whole point of C++ namespace is to avoid that. That is going to  
happen. Probably less in D as we have module scoping. But that makes it  
impossible to port many C++ headers.


2. Creating a new name lookup mechanism is the kind of idea that sound  
good but ends up horribly backfiring. There is all kind of implications  
and it affect every single identifier resolution. You don't want to mess  
with that (especially since it is already quite badly defined in the  
first place).


3. That makes it impossible to port some C++ headers just as 1.


#1 and #3 are essentially the same thing, and are how C# interfaces with  
.. well C, not C++ granted.  But, how does this make it impossible to port  
some C++ headers?


Were you thinking..

[a.cpp/h]
namespace a {
  void foo();
}

[b.cpp/h]
namespace b {
  void foo();
}

[header.h] - header to import
#include "a.h"
#include "b.h"

[my.d] - our port
extern(C++, a) void foo();
extern(C++, b) void foo(); // oh, oh!

?

Because the solution is..

[a.d]
extern(C++, a) void foo();

[b.d]
extern(C++, b) void foo();

[my.d]
import a;
import b;

// resolve the conflict using the existing D mechanisms, or call them  
using a.foo, b.foo.


In essence we're re-defining the C++ namespace(s) as a D one(s) and we  
have complete flexibility about how we do it.  We can expose C++ symbols  
in any D namespace we like, we can hide/pack others away in a cpp or util  
namespace if we prefer.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP61: redone to do extern(C++,N) syntax

2014-05-01 Thread Regan Heath via Digitalmars-d

On Wed, 30 Apr 2014 20:56:15 +0100, Timon Gehr timon.g...@gmx.ch wrote:

If this is a problem, I guess the most obvious alternatives are to:

1. Get rid of namespace scopes. Require workarounds in the case of  
conflicting definitions in different namespaces in the same file. (Eg.  
use a mixin template.) I'd presume this would not happen often.


2. Give the global C++ namespace a distinctive name and put all other  
C++ namespaces below it. This way fully qualified name lookup will be  
reliable.


3. Use the C++ namespace for mangling, but not lookup.  C++ symbols will  
belong in the module they are imported into, and be treated exactly the  
same as a D symbol, e.g.


module a;
extern(C++, std) ..string..

module b;
extern(C++, std) ..string..

module c;
import a;
import b;

void main() { .. string .. }   // error could be a.string or b.string
void main() { .. a.string .. } // resolved

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP61: redone to do extern(C++,N) syntax

2014-05-01 Thread Regan Heath via Digitalmars-d
On Thu, 01 May 2014 11:03:21 +0100, Regan Heath re...@netmail.co.nz  
wrote:



On Wed, 30 Apr 2014 20:56:15 +0100, Timon Gehr timon.g...@gmx.ch wrote:

If this is a problem, I guess the most obvious alternatives are to:

1. Get rid of namespace scopes. Require workarounds in the case of  
conflicting definitions in different namespaces in the same file. (Eg.  
use a mixin template.) I'd presume this would not happen often.


2. Give the global C++ namespace a distinctive name and put all other  
C++ namespaces below it. This way fully qualified name lookup will be  
reliable.


3. Use the C++ namespace for mangling, but not lookup.  C++ symbols will  
belong in the module they are imported into, and be treated exactly the  
same as a D symbol, e.g.


module a;
extern(C++, std) ..string..

module b;
extern(C++, std) ..string..

module c;
import a;
import b;

void main() { .. string .. }   // error could be a.string or b.string
void main() { .. a.string .. } // resolved


Sorry, #1 is the same suggestion :)

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: A lot of people want to use D,but they only know MS SQL Server,what will help them to Learn D?

2014-05-01 Thread Regan Heath via Digitalmars-d-learn

On Thu, 01 May 2014 09:56:49 +0100, FrankLike 1150015...@qq.com wrote:


On Monday, 14 April 2014 at 17:13:56 UTC, FrankLike wrote:


My advice - use ODBC, it is the fastest way you may connect to the SQL  
server, and you already have everything you need for that. :)


Regards


I have test the d\dmd2\windows\lib\odbc32.lib,the size is 4.5kb,
I test it by test.d(build :dmd test.d)
but find the error:
Error 42:Symbol Undefined _SQLFreeHandle@8
Error 42:Symbol Undefined _SQLSetEnvAttr@16
Error 42:Symbol Undefined _SQLAllocHandle@12
Error 42:Symbol Undefined _SQLGetDiagRec@32
-- errorlevel 4


  I have fixed the errors.
The exe file only 210kb,it works very good.

Where the errors is ?
In the odbc32.def file.
must set the all used function names.
such as:
  _SQLFreeHandle@8  = SQLFreeHandle


That's interesting.

Those functions are _stdcall, so should be exported from the lib as  
_func@N.


How did you declare them in arsd.mssql?

You should use extern(Windows) e.g.

extern(Windows) SQLRETURN SQLFreeHandle(SQLSMALLINT HandleType, SQLHANDLE  
Handle);


The extern(Windows) tells DMD to look for _stdcall.
extern(C) tells it to look for _cdecl.

The difference boils down to who is responsible for cleaning up the stack  
after a function call.  _stdcall assumes the callee will clean up the  
stack; _cdecl assumes the caller will.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP61: redone to do extern(C++,N) syntax

2014-04-30 Thread Regan Heath via Digitalmars-d
On Wed, 30 Apr 2014 05:03:58 +0100, Ola Fosheim Grøstad  
ola.fosheim.grostad+dl...@gmail.com wrote:

Wrong KISS: compiler internals over specification


Indeed.

I've been a C/C++ developer for ~16 years and I was confused several times  
reading this thread.


The mix of D modules and C++ namespaces is the thing that needs to be kept  
simple for us lesser mortals, not the compiler implementation - which  
should, I agree, *ideally* remain simple, but in this case should be  
sacrificed for the other, because compiler writers are good at what they do  
and will be able to cope.


I think it is simpler all round to just invent (and perhaps reserve) a new  
top level module for C++ namespaces (an idea mentioned here already) i.e.  
cpp


Example:

module a;
extern(C++, std) class string {..} // identical to decl in b

module b:
extern(C++, std) class string {..} // identical to decl in a
extern(C++, std) class vector {..} // new decl

module userland;
import a;
import b;

void main()
{
  cpp.std.string x = new cpp.std.string();
  cpp.std.vector y = new cpp.std.vector();
}

Notes:
 - the D modules 'a' and 'b' play no part whatsoever in the lookup of the  
C++ symbol (why the hell should they? I see no benefit to this)

 - the identical declarations in a/b for std.string are treated as one.
 - any *use* (in userland) of a non-identical/ambiguous declaration would  
result in an error.


Link time is where it would actually complain if multiple C++ symbols were  
found.


Special lookup rules would apply to cpp.*

My 2p/c

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP61: redone to do extern(C++,N) syntax

2014-04-30 Thread Regan Heath via Digitalmars-d
On Wed, 30 Apr 2014 10:20:22 +0100, Regan Heath re...@netmail.co.nz  
wrote:


Something else to think about.

C# has the same problem and has solved it the following way..

[main.cs]
using ..
using CSTest_Test1;
using CSTest_Test2;

namespace CSTest
{
class Program
{
static void Main(string[] args)
{
Test1.GetLastError(); // class, not namespace required to call  
method
Test2.GetLastError(); // class, not namespace required to call  
method

}
}
}

[Test1.cs]
using ..
namespace CSTest_Test1
{
public static class Test1
{
        [DllImport("coredll.dll", SetLastError = true)]
public static extern Int32 GetLastError();
}
}

[Test2.cs]
namespace CSTest_Test2
{
public static class Test2
{
        [DllImport("coredll.dll", SetLastError = true)]
public static extern Int32 GetLastError();
}
}

GetLastError() is always going to be unambiguous here because it *must* live  
inside a C# class and that class name is *always* required in the call to  
it.


If D has replaced classes/namespaces with modules, then the answer to our  
problem may be to use the C++ namespace *only* to mangle the symbol, and  
*only* use the D module for lookup resolution.


module a;
extern(C++, std) class string {..}

module b:
extern(C++, std) class string {..}
extern(C++, std) class vector {..}

module userland;
import a;
import b;

void main()
{
  string x = new string(); //error ambiguous (same resolution as for D  
symbols)

  a.string x = new a.string(); //ok
  b.vector y = new b.vector(); //ok
}

Regan


Re: [OT] from YourNameHere via Digitalmars-d

2014-04-22 Thread Regan Heath via Digitalmars-d
On Thu, 17 Apr 2014 22:32:31 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Thu, 17 Apr 2014 17:29:47 -0400, Nick Sabalausky  
seewebsitetocontac...@semitwist.com wrote:



On 4/17/2014 8:51 AM, Steven Schveighoffer wrote:

Every time I open one of these messages I
get a huge pregnant 5-second pause, along with the Mac Beach Ball
(hourglass) while this message is opened in my news reader.



Sounds like something's wrong with your news reader.


But it only happens on these messages that come via Digitamars-d, and  
consistently so. What could be the difference.


I've used this newsreader for years (opera), never had this problem.


Opera on windows does not suffer from the same issue.  I see no delays for  
these messages..


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP60: @nogc attribute

2014-04-17 Thread Regan Heath via Digitalmars-d
On Thu, 17 Apr 2014 14:08:29 +0100, Orvid King via Digitalmars-d  
digitalmars-d@puremagic.com wrote:



I'm just going to put my 2-cents into this discussion, it's my
personal opinion that while _allocations_ should be removed from
phobos wherever possible, replacing GC usage with manual calls to
malloc/free has no place in the standard library, as it's quite simply
a mess that is really not needed, and quite simply, one should be
figuring out how to simply not allocate at all rather than trying do
do manual management.


The standard library is a better place to put manual memory management  
than user space, because there it can be done by experts, peer reviewed,  
and then benefit everyone at no extra cost.


There are likely a number of smaller GC allocations which could be  
replaced by calls to alloca, simultaneously improving performance and  
avoiding GC interaction.
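
For illustration, the kind of substitution meant here - a temporary buffer taken from the stack instead of the GC heap (the function and sizes are made up; alloca memory is only valid until the function returns):

import core.stdc.stdlib : alloca;

void formatInto(scope void delegate(const(char)[]) sink, size_t n)
{
    // stack-allocate a scratch buffer instead of `new char[n]`
    auto buf = (cast(char*)alloca(n))[0 .. n];
    buf[] = '-';          // ...build the output in buf...
    sink(buf);            // hand it off before the frame goes away
}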


These calls could then be marked @nogc and used in the realtime sections  
of applications without fear of collections stopping the world.


Neither ARC nor a super amazing GC would be able to improve upon the  
efficiency of this sort of change.


Seems like win-win-win to me.


It is possible to implement a much better GC than what D currently
has, and I intend to do exactly that when I have the time needed (in
roughly a month).


Excellent :)

R


Re: DIP60: @nogc attribute

2014-04-17 Thread Regan Heath via Digitalmars-d
On Wed, 16 Apr 2014 18:38:23 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 4/16/2014 8:01 AM, qznc wrote:
However, what is still an open issue is that @nogc can be stopped by  
allocations
in another thread. We need threads which are not affected by  
stop-the-world. As
far as I know, creating threads via pthreads C API directly achieves  
that, but
integration with @nogc could provide more type safety. Stuff for  
another DIP?


That's a completely separate issue.


Yep.  I was thinking an attribute like @rt (realtime) would be super cool  
(but, perhaps impossible).  It would be a super-set of things like @nogc,  
and imply those things.  Adding @nogc does not prevent such a thing being  
done in the future.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: std.file.read returns void[] why?

2014-04-17 Thread Regan Heath
On Wed, 16 Apr 2014 14:36:20 +0100, Spacen Jasset  
spacenjas...@mailrazer.com wrote:



Why does the read function return void[] and not byte[]

void[] read(in char[] name, size_t upTo = size_t.max);


On one hand the data is always /actually/ going to be a load of (u)bytes,  
but /conceptually/ it might be structs or something else, and using void[]  
therefore doesn't /imply/ anything about what the data really is.


I also thought that void[] was implicitly cast.. but it seems this either  
has never been the case or was changed at some point:


import std.stdio;

void main(string[] args)
{
    byte[] barr = new byte[10];
    foreach(i, ref b; barr)
        b = cast(byte)('a' + i);

    void[] varr = barr;
    char[] carr;

    //carr = barr; // Error: cannot implicitly convert expression (barr) of type byte[] to char[]
    carr = cast(char[])barr;

    //carr = varr; // Error: cannot implicitly convert expression (varr) of type void[] to char[]
    carr = cast(char[])varr;

    writefln("%d,%s", carr.length, carr);
}

I am curious, was it ever possible, was it changed?  why?  It's always  
safe - as the compiler knows how much data the void[] contains, and  
void[] is untyped so it sorta makes sense to allow it..


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: std.file.read returns void[] why?

2014-04-17 Thread Regan Heath
On Thu, 17 Apr 2014 13:59:20 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:

It was never possible. You must explicitly cast to void[].


to - from?

void[] makes actually little sense as the result of whole-file read that  
allocates. byte[] is at least usable and more accurate. In fact, it's a  
little dangerous to use void[], since you could assign  
pointer-containing values to the void[] and it should be marked as  
NOSCAN (no pointers inside file data).


I see what you're saying, byte[] is what *is* allocated.. but my point is  
that it's not what those bytes actually represent.


Are you saying void[] *is* currently marked NOSCAN?

However, when using the more conventional read(void[]) makes a LOT of  
sense, since any T[] implicitly casts to void[].


Indeed. :)

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Interesting rant about Scala's issues

2014-04-07 Thread Regan Heath
On Mon, 07 Apr 2014 00:17:45 +0100, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 4/6/14, 10:52 AM, Walter Bright wrote:

On 4/6/2014 3:31 AM, Leandro Lucarella wrote:

What I mean is the current semantics of enum are as they are for
historical reasons, not because they make (more) sense (than other
possibilities). You showed a lot of examples that makes sense only
because you are used to the current semantics, not because they are the
only option or the option that makes the most sense.


I use enums a lot in D. I find they work very satisfactorily. The way
they work was deliberately designed, not a historical accident.


Sorry, I think they ought to have been better. -- Andrei


Got a DIP/spec/design to share?

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Interesting rant about Scala's issues

2014-04-07 Thread Regan Heath
On Mon, 07 Apr 2014 16:15:41 +0100, Paulo Pinto pj...@progtools.org  
wrote:



Am 07.04.2014 12:07, schrieb Regan Heath:

On Mon, 07 Apr 2014 00:17:45 +0100, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


On 4/6/14, 10:52 AM, Walter Bright wrote:

On 4/6/2014 3:31 AM, Leandro Lucarella wrote:

What I mean is the current semantics of enum are as they are for
historical reasons, not because they make (more) sense (than other
possibilities). You showed a lot of examples that makes sense only
because you are used to the current semantics, not because they are  
the

only option or the option that makes the most sense.


I use enums a lot in D. I find they work very satisfactorily. The way
they work was deliberately designed, not a historical accident.


Sorry, I think they ought to have been better. -- Andrei


Got a DIP/spec/design to share?

R



How they work in languages like Ada.


Ok, a brief look at those shows me enums can be converted to a Pos index  
but otherwise you cannot associate a numeric value with them, right?


So if we had that in D, Walters examples would look like..

1)

  enum Index { A, B, C }
  T[Index.C.pos + 1] array; // perhaps?
  ...
  array[Index.B.pos] = t;   // yes?

2)

  array[Index.A.pos + 1] = t; // yes?

3)

  enum Mask { A=1,B=4 } // not possible?

  Mask m = A | B;   // Error: incompatible operator | for enum


Have I got that right?

For a proposal like this to even be considered I would imagine it would  
have to be backward compatible with existing uses, so you would have to be  
proposing a new keyword or syntax on enum to trigger typesafe enums,  
perhaps typesafe is a good keyword, e.g.


typesafe enum Index { A, B, C } // requires use of .pos to convert to int 0, 1, or 2

enum Index { A, B, C }          // existing pragmatic behaviour
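
As an aside, something close to this can already be emulated in today's D with a struct wrapper - a rough sketch, not a proposal:

struct Index
{
    int pos;   // explicit conversion point, a bit like Ada's 'Pos
    enum A = Index(0), B = Index(1), C = Index(2);
    // no opBinary defined, so Index.A | Index.B simply does not compile
}

void main()
{
    int[Index.C.pos + 1] array;
    array[Index.B.pos] = 42;
}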

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Poll - How long have you been in D?

2014-04-04 Thread Regan Heath

On Fri, 04 Apr 2014 03:10:14 +0100, dnewbie r...@myopera.com wrote:


Please vote now!
http://www.easypolls.net/poll.html?p=533e10e4e4b0edddf89898c5

See also results from previous years:
- http://d.darktech.org/2012.png
- http://d.darktech.org/2013.png


I think we need a 10+ category now too :p

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-28 Thread Regan Heath
On Fri, 28 Mar 2014 08:59:34 -, Paolo Invernizzi  
paolo.invernizzi@no.address wrote:
For what concern us, everyone here is happy with the fact that empty  
*must* be checked prior to front/popFront.


This is actually not true.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-28 Thread Regan Heath

On Fri, 28 Mar 2014 14:15:10 -, Chris wend...@tcd.ie wrote:


Earlier Walter wrote:

I don't like being in the position of when I need high performance  
code, I have
to implement my own ranges & algorithms, or telling customers they need  
to do so.


I don't think there is a one size fits all. What if customers ask for  
maximum security? In any language, if I want high performance, I have to  
be prepared to walk on thin ice. If I want things to be safe and / or  
generic, I have to accept additional checks (= performance penalties). I  
don't think that a language can solve the fundamental problems  
concerning programming / mathematical logic with all the contradictory  
demands involved. It can give us the tools to cope with those problems,  
but not solve them out of the box.


You can build safety on top of performance.  You cannot do the opposite.   
Meaning, one could wrap an unsafe/fast range with a safe/slower one.
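
For example, something like this (just a sketch - Checked/checked are  
made-up names, not anything in Phobos):

import std.exception : enforce;
import std.range : isInputRange;

struct Checked(R) if (isInputRange!R)
{
  R inner;

  @property bool empty() { return inner.empty; }

  @property auto front()
  {
    enforce(!inner.empty, "front called on an empty range");
    return inner.front;
  }

  void popFront()
  {
    enforce(!inner.empty, "popFront called on an empty range");
    inner.popFront();
  }
}

auto checked(R)(R r) if (isInputRange!R) { return Checked!R(r); }

Wrap a fast range as checked(r) when you want the protocol enforced, and  
use the raw range when you don't.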


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-28 Thread Regan Heath
On Fri, 28 Mar 2014 16:30:36 -, John Stahara  
john.stahara+dl...@gmail.com wrote:



On Fri, 28 Mar 2014 16:23:11 +, Paolo Invernizzi wrote:


On Friday, 28 March 2014 at 09:30:25 UTC, Regan Heath wrote:

On Fri, 28 Mar 2014 08:59:34 -, Paolo Invernizzi
paolo.invernizzi@no.address wrote:

For what concern us, everyone here is happy with the fact that empty
*must* be checked prior to front/popFront.


This is actually not true.

R


What I'm meaning, it's that we don't care: we are always respecting the
sequence empty > front > pop, and everybody here finds it natural.



To clarify for Mr. Invernizzi: the we to which he refers is the group
of people he works with, and /not/ the members of this newsgroup.

--jjs


Thanks, that was confusing me :)

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-28 Thread Regan Heath

On Fri, 28 Mar 2014 16:04:29 -, Chris wend...@tcd.ie wrote:


On Friday, 28 March 2014 at 15:49:06 UTC, Regan Heath wrote:

On Fri, 28 Mar 2014 14:15:10 -, Chris wend...@tcd.ie wrote:


Earlier Walter wrote:

I don't like being in the position of when I need high performance  
code, I have
to implement my own ranges & algorithms, or telling customers they  
need to do so.


I don't think there is a one size fits all. What if customers ask for  
maximum security? In any language, if I want high performance, I have  
to be prepared to walk on thin ice. If I want things to be safe and /  
or generic, I have to accept additional checks (= performance  
penalties). I don't think that a language can solve the fundamental  
problems concerning programming / mathematical logic with all the  
contradictory demands involved. It can give us the tools to cope with  
those problems, but not solve them out of the box.


You can build safety on top of performance.  You cannot do the  
opposite.  Meaning, one could wrap an unsafe/fast range with a  
safe/slower one.


R


But should unsafe+fast be the default or rather an option for cases when  
you really need it?


Pass.  My point was only that it needs to exist.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Iterate over an array while mutating it?

2014-03-28 Thread Regan Heath

On Thu, 27 Mar 2014 22:23:40 -, Anh Nhan anhn...@outlook.com wrote:


Hey guys,

I want to iterate over an array, while adding new entries, and have  
those in the iteration loop.


See here: https://gist.github.com/AnhNhan/9820226

The problem is that the foreach loop seemingly only iterates over the  
original array, not minding the newly added entries.


Does somebody have a solution or approach for the loop to pick up those  
new entries?


Wrap the array in an adapter class/struct which implements opApply for  
foreach...


import std.stdio;
import std.conv;

struct ForAdd(T)
{
  T[] data;

  this(T[] _data)  { data = _data; }

  void opOpAssign(string op : "~")(T rhs) { data ~= rhs; }

  int opApply(int delegate(ref T) dg)
  {
int result = 0;

    for (int i = 0; i < data.length; i++)
{
  result = dg(data[i]);
  if (result)
break;
}

return result;
  }
}

int main(string[] args)
{
  string[] test;

  for(int i = 0; i < 5; i++)
test ~= to!string(i);

  auto adder = ForAdd!string(test);
  foreach(string item; adder)
  {
    writefln("%s", item);
    if (item == "2")
      adder ~= "5";
    if (item == "4")
      adder ~= "6";
    if (item == "5")
      adder ~= "7";
  }

  return 0;
}

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Changing the behavior of the comma operator

2014-03-27 Thread Regan Heath

On Wed, 26 Mar 2014 22:02:44 -, Timon Gehr timon.g...@gmx.ch wrote:


On 03/26/2014 05:19 PM, H. S. Teoh wrote:

int x = 1, 5;   // hands up, how many understand what this does?


Nothing. This fails to parse because at that place ',' is expected to be  
a separator for declarators.


Spectacularly proving his point :p

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-27 Thread Regan Heath
On Thu, 27 Mar 2014 02:44:13 -, Daniel Murphy  
yebbliesnos...@gmail.com wrote:


Regan Heath  wrote in message  
news:op.xdb9a9v354x...@puck.auriga.bhead.co.uk...


What guarantees range2 is longer than range1?  The isArray case checks  
explicitly, but the generic one doesn't.  Is it a property of being an  
output range that it will expand as required, or..


Some ranges will give you their length...


Sure.  And generally you could use it.  copy() doesn't, and I was talking  
specifically about that example.  :)


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-27 Thread Regan Heath
On Thu, 27 Mar 2014 02:19:13 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:

if(!r.empty)
{
auto r2 = map!(x => x * 2)(r);
do
{
   auto x = r2.front;
   ...
} while(!r2.empty);
}


if(r.empty)
  return;

auto r2 = map!(x => x * 2)(r);
while(!r2.empty)
{
   auto x = r2.front;
   ...
   r2.popFront();  //bug fix for your version which I noticed because I  
followed the pattern :D

}

ahh.. much better.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-27 Thread Regan Heath

On Thu, 27 Mar 2014 10:49:42 -, Marc Schütz schue...@gmx.net wrote:


On Thursday, 27 March 2014 at 04:17:16 UTC, Walter Bright wrote:

On 3/26/2014 7:55 PM, Steven Schveighoffer wrote:
OK, but it's logical to assume you *can* avoid a call to empty if you  
know
what's going on under the hood, no? Then at that point, you have lost  
the
requirement -- people will avoid calling empty because they can get  
away with
it, and then altering the under-the-hood requirements cause code  
breakage later.


Case in point, the pull request I referenced, the author originally  
tried to
just use empty to lazily initialize filter, but it failed due to  
existing code
in phobos that did not call empty on filtered data before processing.  
He had to

instrument all 3 calls.


As with *any* API, if you look under the hood and make assumptions  
about the behavior based on a particular implementation, assumptions  
that are not part of the API, the risk of breakage inevitably follows.


If you've identified Phobos code that uses ranges but does not follow  
the protocol, the Phobos code is broken - please file a bugzilla issue  
on it.


I was originally going to do that, but then I took a closer look at the  
documentation, which says ([1] in the documentation of `isInputRange()`):


Calling r.front is allowed only if calling r.empty has, or would have,  
returned false.


(And the same for `popFront()`.)

That is, the documentation more or less explicitly states that you don't  
actually need to call `empty` if you know it returned `true`.


[1] http://dlang.org/phobos/std_range.html


That's because up until now we've made no attempt to set this in stone,  
and as such many interpretations have surfaced.  This documentation would,  
of course, change to match the final decision made.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Changing the behavior of the comma operator

2014-03-27 Thread Regan Heath

On Thu, 27 Mar 2014 11:45:50 -, Kagamin s...@here.lot wrote:


On Thursday, 27 March 2014 at 10:39:58 UTC, Regan Heath wrote:
On Wed, 26 Mar 2014 22:02:44 -, Timon Gehr timon.g...@gmx.ch  
wrote:



On 03/26/2014 05:19 PM, H. S. Teoh wrote:

int x = 1, 5;   // hands up, how many understand what this does?


Nothing. This fails to parse because at that place ',' is expected to  
be a separator for declarators.


Spectacularly proving his point :p


Did he want to kill the declaration statement too?


No.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-26 Thread Regan Heath


On Tue, 25 Mar 2014 23:22:18 -, Walter Bright  
newshou...@digitalmars.com wrote:

On 3/25/2014 2:29 PM, Andrei Alexandrescu wrote:

The range instance gets bigger and
more expensive to copy, and the cost of manipulating the flag and the
buffer is added to every loop iteration. Note that the user of a range
can trivially use:
  auto e = r.front;
  ... using e multiple times ...
instead.


That would pessimize code using arrays of large structs.


You're already requiring copying with the buffering requirement. And  
besides, if I was creating a range of expensive-to-copy objects, I would  
add a layer of indirection to cheapen it.


Surely you'd simply start with a range of pointers to expensive-to-copy  
objects?  Or, return them by reference from the underlying  
range/array/source.  You want to avoid *ever* copying them except  
explicitly where required.



The proposed protocol pessimizes arrays of large structs


Not really more than the existing protocol does (i.e. required  
buffering).


and will trip the unwary if calling r.front again returns something  
else.


I'm not proposing that calling them wrongly would make things unsafe.  
But I don't think it's unreasonable to expect the protocol to be  
followed, just like file open/read/close.


My immediate expectation upon encountering ranges was that r.front would  
return the same item repeatedly until r.popFront was called.  Breaking  
that guarantee will trip a *lot* of people up.


IMO the rules should be something like:
 - r.empty WILL return false if there is more data available in the range.
 - r.empty MUST be called before r.front; r.front WILL succeed if r.empty  
returned false.
 - r.front WILL repeatably return the current element in the range. It MAY  
return by value or by reference.
 - r.empty SHOULD be called before r.popFront, otherwise r.popFront MAY do  
nothing or throw (could also make this one a 'MUST').
 - r.popFront WILL advance to the next element in the range.
 - lazy ranges SHOULD delay initialisation until r.empty is called.
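
In code, the canonical consuming loop those rules assume is simply  
(makeRange is just a stand-in for any range source):

auto r = makeRange();
for (; !r.empty; r.popFront())
{
  auto e = r.front;   // front is stable until the next popFront
  // ... use e, possibly more than once ...
}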

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-26 Thread Regan Heath
On Wed, 26 Mar 2014 12:30:53 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Wed, 26 Mar 2014 08:29:15 -0400, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Wed, 26 Mar 2014 06:46:26 -0400, Regan Heath re...@netmail.co.nz  
wrote:

IMO the rules should be something like:
  - r.empty WILL return false if there is more data available in the  
range.


  - r.empty MUST be called before r.front, r.front WILL succeed if  
r.empty returned false.
  - r.front WILL repeatably return the current element in the range.  
It MAY return by value or by reference.


  - r.empty SHOULD be called before r.popFront, otherwise r.popFront  
MAY do nothing or throw

(could also make this one a 'MUST')
  - r.popFront WILL advance to the next element in the range.


These two rules are not necessary if you know the range is not empty.  
See the conversation inside this pull:  
https://github.com/D-Programming-Language/phobos/pull/1987


Gah, I didn't cut out the right rules. I meant the two rules that empty  
must be called before others. Those are not necessary.


I see.  I was thinking we ought to make empty mandatory to give more  
guaranteed structure for range implementors, so lazy initialisation can be  
done in one place only, etc etc.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-26 Thread Regan Heath
On Wed, 26 Mar 2014 15:37:38 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Wed, 26 Mar 2014 11:09:04 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Wed, 26 Mar 2014 12:30:53 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:


Gah, I didn't cut out the right rules. I meant the two rules that  
empty must be called before others. Those are not necessary.


I see.  I was thinking we ought to make empty mandatory to give more  
guaranteed structure for range implementors, so lazy initialisation can  
be done in one place only, etc etc.


Yes, but when you know that empty is going to return false, there isn't  
any logical reason to call it. It is an awkward requirement.


Sure, it's not required for some algorithms in some situations.

I had the same thinking as you, why pay for an extra check for all 3  
calls? But there was already evidence that people were avoiding empty.


Sure, as above, makes perfect sense.

It seemed from this thread that there was some confusion about how ranges  
should be written and used, and I thought it might help if the  
requirements were more fixed, that's all.


If r.empty was mandatory then every range implementer would have a place  
to lazily initialise, r.front would be simpler, r.popFront too.  Basically  
it would lower the bar for good range implementations.


We might just need better documentation tho.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-26 Thread Regan Heath
On Wed, 26 Mar 2014 16:38:57 -, monarch_dodra monarchdo...@gmail.com  
wrote:



On Wednesday, 26 March 2014 at 15:37:38 UTC, Steven Schveighoffer wrote:
Yes, but when you know that empty is going to return false, there isn't  
any logical reason to call it. It is an awkward requirement.


-Steve


Not only that, but it's also a performance criteria: If you are  
iterating on two ranges at once (think copy), then you *know* range2  
is longer than range1, even if you don't know its length.


What guarantees range2 is longer than range1?  The isArray case checks  
explicitly, but the generic one doesn't.  Is it a property of being an  
output range that it will expand as required, or..


Why pay for range2.empty, when you know it'll always be false? There  
is a noticeable performance difference if you *don't* check.


But aren't you instead paying for 2 checks in front and 2 in popFront, so  
4 checks vs 1?  Or is the argument that these 4 checks cannot be removed  
even if we mandate r.empty is called before r.front/popFront.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: protocol for using InputRanges

2014-03-26 Thread Regan Heath
On Wed, 26 Mar 2014 17:32:30 -, monarch_dodra monarchdo...@gmail.com  
wrote:



On Wednesday, 26 March 2014 at 16:55:48 UTC, Regan Heath wrote:

On Wed, 26 Mar 2014 16:38:57 -, monarch_dodra
Not only that, but it's also a performance criteria: If you are  
iterating on two ranges at once (think copy), then you *know*  
range2 is longer than range1, even if you don't know its length.


What guarantees range2 is longer than range1?  The isArray case checks  
explicitly, but the generic one doesn't.  Is it a property of being an  
output range that it will expand as required, or..


The interface: The target *shall* have enough room to accommodate  
origin. Failure to meet that criteria is an Error.


Ok.  So long as *something* is throwing that Error I am down with this.

Output ranges may or may not auto expand as required. Arguably, it's a  
design flaw I don't want to get started on.


:)

Why pay for range2.empty, when you know it'll always be false? There  
is a noticeable performance difference if you *don't* check.


But aren't you instead paying for 2 checks in front and 2 in popFront,  
so 4 checks vs 1?  Or is the argument that these 4 checks cannot be  
removed even if we mandate r.empty is called before r.front/popFront.


I don't know what checks you are talking about. Most ranges don't check  
anything on front or popFront. They merely assume they are in a state  
that where they can do their job. Failure to meet this condition prior  
to the call (note I didn't say failure to check for), means Error.


Ok.. but let's take a naive range of, say, int with a 1-element cache in the  
member variable `int cache;`.  The simplest possible front would just be  
`return cache;`.  But, if cache hasn't been populated yet it's not going  
to throw an Error, it's just going to be wrong.


So, presumably front has to check another boolean to verify it's been  
populated and throw an Error if not.  That's one of the checks I meant.  A  
typical loop over a range will call front one or more times, so you pay  
for that check 1 or more times per loop.


popFront in this example doesn't need to check anything, it just populates  
cache regardless, as does empty.


But, I imagine there are ranges which need some initial setup, and they  
have to do it somewhere, and they need to check they have done it in  
empty, front and popFront for every call.  It's those checks we'd like to  
avoid if we can.


So.. if we mandate that empty MUST be called, then they could just be done  
in one place, empty.
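
Something like this is what I have in mind - a made-up sketch, where all  
the setup lives in empty and front/popFront stay check-free:

struct LazyLines
{
  private string[] data;
  private bool initialised;

  private void initialise()
  {
    if (!initialised)
    {
      data = ["first", "second", "third"];  // stand-in for the real setup
      initialised = true;
    }
  }

  @property bool empty() { initialise(); return data.length == 0; }

  // front/popFront assume empty() was called and returned false,
  // so they carry no checks of their own.
  @property string front() { return data[0]; }
  void popFront() { data = data[1 .. $]; }
}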


However, in this situation nothing would be enforcing that requirement (in  
release anyway) and things could just go wrong.  So, perhaps the checks  
always need to be there and we gain nothing by mandating empty is called  
first.


idunno.

It's really not much different from when doing an strcpy. ++p and *p  
don't check anything. The fact that ranges *can* check doesn't always  
mean they should.


Sure.  For performance reasons they might not, but isn't this just a tiny  
bit safer if we mandate empty must be called and do one check there..


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Should we deprecate comma?

2014-03-25 Thread Regan Heath

On Tue, 25 Mar 2014 13:15:16 -, Timon Gehr timon.g...@gmx.ch wrote:


On 03/25/2014 02:08 PM, bearophile wrote:

Steve Teale:


The only place I have tended to use the comma operator is in ternary
expressions

bool universal;

atq = whatever? 0: universal = true, 42;


I classify that as quite tricky code, it's a negative example :-(

Bye,
bearophile


It's not tricky code. It is not even valid code. Operator precedence  
from lowest to highest: , = ?.


Fixed:

atq = whatever ? 0 : (universal = true, 42);

Still a bad example.  Horrid code IMO.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Should we deprecate comma?

2014-03-24 Thread Regan Heath
On Mon, 24 Mar 2014 02:50:17 -, Adam D. Ruppe  
destructiona...@gmail.com wrote:

int a = something == 1 ? 1
   : something == 2 ? 2
   : (assert(0), 0);


FWIW I personally find this kind of code horrid.  I would re-write to:

assert  (something == 1 || something == 2);
int a = (something == 1 || something == 2) ? something : 0;

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Should we deprecate comma?

2014-03-24 Thread Regan Heath
On Sun, 23 Mar 2014 20:56:25 -, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:

Discuss: https://github.com/D-Programming-Language/dmd/pull/3399


Would it have any effect on:

int *p, q;

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Should we deprecate comma?

2014-03-24 Thread Regan Heath
On Mon, 24 Mar 2014 11:35:38 -, monarch_dodra monarchdo...@gmail.com  
wrote:



On Monday, 24 March 2014 at 10:57:45 UTC, Regan Heath wrote:
On Sun, 23 Mar 2014 20:56:25 -, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:

Discuss: https://github.com/D-Programming-Language/dmd/pull/3399


Would it have any effect on:

int *p, q;


That's not a comma operator. So no.


That's my Q answered :)


BTW, I'd *STRONGLY* urge you to write that as:
int* p, q;

since in D, both p and q are of type int*, unlike in C.


I am well aware :p

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Handling invalid UTF sequences

2014-03-21 Thread Regan Heath
On Thu, 20 Mar 2014 22:39:50 -, Walter Bright  
newshou...@digitalmars.com wrote:



Currently we do it by throwing a UTFException. This has problems:

1. about anything that deals with UTF cannot be made nothrow

2. turns innocuous errors into major problems, such as DOS attack vectors
http://en.wikipedia.org/wiki/UTF-8#Invalid_byte_sequences

One option to fix this is to treat invalid sequences as:

1. the .init value (0xFF for UTF8, 0xFFFF for UTF16 and UTF32)

2. U+FFFD

I kinda like option 1.

What do you think?


In Windows/Win32..

WideCharToMultiByte has flags for a bunch of similar behaviours and allows  
you to define a default char to use as a replacement in such cases.


swprintf when passed %S will convert a wchar_t UTF-16 argument into ascii,  
and replaces invalid characters with ? as it does so.


swprintf_s (the safe version), IIRC, will invoke the invalid parameter  
handler for sequences which cannot be converted.


I think, ideally, we want some sensible default behaviour but also the  
ability to alter it globally, and even better in specific calls where it  
makes sense to do so (where flags/arguments can be passed to that effect).


So, the default behaviour could be to throw (therefore no breaking change)  
and we provide a function to change this to one of the other options, and  
another to select a replacement character (which would default to .init or  
U+FFFD).
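
For example, the replacement behaviour can already be sketched on top of  
what std.utf gives us today (toUTF32Lossy is a made-up name, just to show  
the shape):

import std.utf : decode, UTFException;

dstring toUTF32Lossy(string s)
{
  dstring result;
  size_t i = 0;
  while (i < s.length)
  {
    immutable before = i;
    dchar c;
    try
      c = decode(s, i);       // advances i on success
    catch (UTFException)
    {
      c = '\uFFFD';           // replacement character
      i = before + 1;         // skip one bad byte and carry on
    }
    result ~= c;
  }
  return result;
}

The global/per-call policy would then just select between throwing, .init  
and U+FFFD instead of hard-coding one of them.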


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Good name for f.byLine.map!(x => x.idup)?

2014-03-19 Thread Regan Heath

On Tue, 18 Mar 2014 15:03:16 -, Dicebot pub...@dicebot.lv wrote:


On Tuesday, 18 March 2014 at 14:57:30 UTC, Regan Heath wrote:

Why this fixation on by?

lines
allLines
eachLine
everyLine

R


range vs container. I expect file.lines to be separate fully allocated  
entity that can be assigned and stored. file.byLines implies iteration  
without any guarantees about collection as a whole.


So by has come to signify range then?

eachLine does not imply a container but an iteration/range..

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Good name for f.byLine.map!(x => x.idup)?

2014-03-18 Thread Regan Heath
On Sun, 16 Mar 2014 16:58:38 -, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:


A classic idiom for reading lines and keeping them is f.byLine.map!(x =>  
x.idup) to get strings instead of the buffer etc.


The current behavior trips new users on occasion, and the idiom solving  
it is very frequent. So what the heck - let's put that in a function,  
expose and document it nicely, and call it a day.


A good name would help a lot. Let's paint that bikeshed!


Why not simply lines.

foreach (line; file.lines)
  ...

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Good name for f.byLine.map!(x => x.idup)?

2014-03-18 Thread Regan Heath

On Tue, 18 Mar 2014 14:09:05 -, Dicebot pub...@dicebot.lv wrote:


On Tuesday, 18 March 2014 at 13:49:45 UTC, Steven Schveighoffer wrote:
On Sun, 16 Mar 2014 12:58:38 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:


A classic idiom for reading lines and keeping them is f.byLine.map!(x  
=> x.idup) to get strings instead of the buffer etc.


The current behavior trips new users on occasion, and the idiom  
solving it is very frequent. So what the heck - let's put that in a  
function, expose and document it nicely, and call it a day.


A good name would help a lot. Let's paint that bikeshed!


byImmutableLines
byStringLines

-Steve


byPersistLines ?


Why this fixation on by?

lines
allLines
eachLine
everyLine

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Good name for f.byLine.map!(x => x.idup)?

2014-03-18 Thread Regan Heath
On Mon, 17 Mar 2014 12:38:23 -, bearophile bearophileh...@lycos.com  
wrote:



Dmitry Olshansky:


f.lines?


There is already a lines in std.stdio (but I don't use it much), search  
for:


foreach (string line; lines(stdin))

Here:
http://dlang.org/phobos/std_stdio.html


Does this do the same as byLine or does it dup the lines?

Can we replace or scrap it?


foreach(string line; f.lines)

is just too nice not to strive for, IMO.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Final by default?

2014-03-14 Thread Regan Heath

On Fri, 14 Mar 2014 08:51:05 -, 1100110 0b1100...@gmail.com wrote:


 version (X86 || X86_64 || PPC || PPC64 || ARM || AArch64)
 {
 enum RTLD_LAZY = 0x1;
 enum RTLD_NOW = 0x2;
 enum RTLD_GLOBAL = 0x00100;
 enum RTLD_LOCAL = 0x0;
 }
 else version (MIPS32)
 {
 enum RTLD_LAZY = 0x0001;
 enum RTLD_NOW = 0x0002;
 enum RTLD_GLOBAL = 0x0004;
 enum RTLD_LOCAL = 0;
 }


Walter's point, I believe, is that you should define a meaningful version  
identifier for each specific case, and that this is better because then  
you're less concerned about where it's supported and more concerned with  
what it is that is/isn't supported.


Maintenance is very slightly better too, IMO, because you add/remove/alter  
a complete line rather than editing a set of || && etc which can in some  
cases be a little confusing.  Basically, the chance of an error is very  
slightly lower.


For example, either this:

version(X86) version = MeaningfulVersion;
version(X86_64) version = MeaningfulVersion;
version(PPC) version = MeaningfulVersion;
version(PPC64) version = MeaningfulVersion;
version(ARM) version = MeaningfulVersion;
version(AArch64) version = MeaningfulVersion;

version(MeaningfulVersion)
{
}
else version (MIPS32)
{
}

or this:

version (X86) version = MeaningfulVersion;
version (X86_64) version = MeaningfulVersion;
version (PPC) version = MeaningfulVersion;
version (PPC64) version = MeaningfulVersion;
version (ARM) version = MeaningfulVersion;
version (AArch64) version = MeaningfulVersion;

version (MIPS32) version = OtherMeaningfulVersion;

version (MeaningfulVersion)
{
}
else version (OtherMeaningfulVersion)
{
}

Regan

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Final by default?

2014-03-14 Thread Regan Heath
On Thu, 13 Mar 2014 21:42:43 -, Walter Bright  
newshou...@digitalmars.com wrote:



On 3/13/2014 1:09 PM, Andrei Alexandrescu wrote:
Also let's not forget that a bunch of people will have not had contact  
with the
group and will not have read the respective thread. For them -- happy  
campers
who get work done in D day in and day out, feeling no speed impact  
whatsoever
from a virtual vs. final decision -- we are simply exercising the brunt  
of a
deprecation cycle with undeniable costs and questionable (in Walter's  
and my

opinion) benefits.


Also,

 class C { final: ... }

achieves final-by-default and it breaks nothing.


Yes.. but doesn't help Manu or any other consumer concerned with speed if  
the library producer neglected to do this.  This is the real issue,  
right?  Not whether a class *can* be made final (trivial), but whether they  
*actually will* *correctly* be marked final/virtual where they ought to be.


Library producers range in experience and expertise and are only human  
so we want the option which makes it more likely they will produce good  
code.  In addition we want the option which means that if they get it  
wrong, less will break if/when they want to correct it.



Final by default requires that you (the library producer) mark as virtual  
the functions you intend to be inherited from.  Lets assume the library  
producer has a test case where s/he does just this, inherits from his/her  
classes and overrides methods as they see consumers doing.  The compiler  
will detect any methods not correctly marked.  So, there is a decent  
chance that producers will get this right w/ final by default.


If they do get it wrong, making the change from final -> virtual does not  
break any consumer code.



Compare that to virtual by default where marking everything virtual means  
it will always work, but there is a subtle and unlikely to be  
detected/tested performance penalty.  There is no compiler support for  
detecting this, and no compiler support for correctly identifying the  
methods which should be marked final.  In fact, you would probably mark  
them all final and then mark individual functions virtual in order to  
solve this.


If they get it wrong, making the change from virtual -> final is more  
likely to break consumer code.



I realise you're already aware of the arguments for final by default, and  
convinced it would have been the best option, but it also seems to me that  
the damage that virtual by default will cause over the future lifetime  
of D is greater than a well controlled deprecation path from virtual ->  
final would be.


Even without a specific tool to aid deprecation, the compiler will output  
clear errors for methods which need to be marked virtual.  Granted, this  
requires you to compile a program which uses the library, but most library  
producers should have such a test case already, and their consumers could  
help out a lot by submitting those errors directly.


Regan.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Final by default?

2014-03-14 Thread Regan Heath
On Fri, 14 Mar 2014 11:37:07 -, Daniel Murphy  
yebbliesnos...@gmail.com wrote:



Walter Bright  wrote in message news:lfu74a$8cr$1...@digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual  
 methods.


That's what the !final storage class is for.


My mistake, I forgot you'd said you were in favor of this.  Being able  
to 'escape' final certainly gets us most of the way there.


!final is really rather hideous though.


+1 eew.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Final by default?

2014-03-14 Thread Regan Heath

On Fri, 14 Mar 2014 10:22:40 -, 1100110 0b1100...@gmail.com wrote:


On 3/14/14, 4:58, Regan Heath wrote:


Maintenance is very slightly better too, IMO, because you
add/remove/alter a complete line rather than editing a set of || && etc
which can in some cases be a little confusing.  Basically, the chance of
an error is very slightly lower.

For example, either this:

version(X86) version = MeaningfulVersion;
version(X86_64) version = MeaningfulVersion;
version(PPC) version = MeaningfulVersion;
version(PPC64) version = MeaningfulVersion;
version(ARM) version = MeaningfulVersion;
version(AArch64) version = MeaningfulVersion;

version(MeaningfulVersion)
{
}
else version (MIPS32)
{
}

or this:

version (X86) version = MeaningfulVersion;
version (X86_64) version = MeaningfulVersion;
version (PPC) version = MeaningfulVersion;
version (PPC64) version = MeaningfulVersion;
version (ARM) version = MeaningfulVersion;
version (AArch64) version = MeaningfulVersion;

version (MIPS32) version = OtherMeaningfulVersion;

version (MeaningfulVersion)
{
}
else version (OtherMeaningfulVersion)
{
}

Regan




...I can't even begin to describe how much more readable the OR'd  
version is.


It's shorter, but shorter does not mean more readable.. if by readable  
you mean it includes the ability to communicate intent etc.  Add to that,  
readable is just one metric.


Walter's point is that the above pattern is better at communicating  
intent, clarifying your logic, and making the resulting version statements  
easier to understand (aka more readable).


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Final by default?

2014-03-14 Thread Regan Heath

On Fri, 14 Mar 2014 14:46:33 -, 1100110 0b1100...@gmail.com wrote:
That's an awful lot of typo opportunities   Quick!  which one did I  
change!?


Copy/paste.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Philosophy of how OS API imports are laid out in druntime

2014-03-06 Thread Regan Heath
On Wed, 05 Mar 2014 17:55:27 -, Iain Buclaw ibuc...@gdcproject.org  
wrote:



On 5 March 2014 17:16, Regan Heath re...@netmail.co.nz wrote:

On Tue, 04 Mar 2014 00:09:46 -, Walter Bright
newshou...@digitalmars.com wrote:


This is an important debate going on here:

https://github.com/D-Programming-Language/druntime/pull/732

It has a wide impact, and so I'm bringing it up here so everyone can
participate.




The disagreement here seems to boil down to two competing goals.

1. Walter wants the C include to map directly to a D import.
2. Sean wants to be able to ensure he does not import and use a platform
specific function/definiton in a cross platform application.

Is that about right?


3. Iain wants to be able to ensure ports of druntime (ARM, MIPS,
SPARC, etc...) are conveniently - as in not complex - split up without
introducing a new namespace.

:o)


Sorry.  Missed that requirement :)

I like your last idea re transitioning away from core.sys.posix.* by using  
version(Posix).  Presumably, in the case of modules which contain POSIX  
and non-POSIX definitions we would wrap those in version blocks also.
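
Something along these lines, I imagine (core.sys.example and the functions  
are made up, just to show the shape):

module core.sys.example;

version (Posix)
{
  // declarations required by POSIX
  extern (C) int example_posix_call(int fd);
}

version (linux)
{
  // Linux-only extensions kept in their own block
  extern (C) int example_linux_extension(int fd, uint flags);
}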


I think if we add the mapping modules as Walter suggested, then to split  
the runtime for a specific platform (which GCC requires?) you would copy  
the modules in core.*, core.sys.*, and the relevant core.sys.<platform>.*  
tree.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Philosophy of how OS API imports are laid out in druntime

2014-03-06 Thread Regan Heath

On Thu, 06 Mar 2014 11:40:55 -, Kagamin s...@here.lot wrote:


It can be a module pragma:

pragma(restrictImportTo,core.sys.posix.ucontext)
module ports.linux.ucontext;


Good idea; then the platform specific modules can only define the platform  
specific things.


But, it means when maintaining them you inherently have to consider posix,  
making the burden a little higher.  Which runs counter to Walter's goal I  
think.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Philosophy of how OS API imports are laid out in druntime

2014-03-06 Thread Regan Heath

On Thu, 06 Mar 2014 11:17:36 -, Kagamin s...@here.lot wrote:


On Wednesday, 5 March 2014 at 17:16:34 UTC, Regan Heath wrote:

It seems this will satisfy Walter without impacting Sean..


As I understand, the idea is that Sean get little trying to fix posix  
standard: the only way to check if the code works on some platform is to  
compile and test it on that platform and posix standard doesn't change  
that. So various platforms ended up adding new functions to posix  
headers. Having straightforward translations of headers takes less  
thinking and probably helps migrate from C and doesn't change posix  
compliance of the headers - it remains conventional.


Sure.

The core.sys.platform.* modules are/will be straight translations.

Walter wants additional core.* and core.sys.* modules which map to  
core.sys.platform.* as appropriate.


Sean wants/uses core.sys.posix.* modules, which are maintained by someone  
and only contain posix definitions for all platforms.


Iain wants to be able to split a single platform easily from the rest;  
taking core.*, core.sys.* and core.sys.<platform>.*.


Right?

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Philosophy of how OS API imports are laid out in druntime

2014-03-05 Thread Regan Heath
On Tue, 04 Mar 2014 00:09:46 -, Walter Bright  
newshou...@digitalmars.com wrote:



This is an important debate going on here:

https://github.com/D-Programming-Language/druntime/pull/732

It has a wide impact, and so I'm bringing it up here so everyone can  
participate.



The disagreement here seems to boil down to two competing goals.

1. Walter wants the C include to map directly to a D import.
2. Sean wants to be able to ensure he does not import and use a platform  
specific function/definition in a cross platform application.


Is that about right?


To clarify some points..

@Walter are you asking for ALL includes even #include <windows.h> to map?   
OR, are you asking for only those headers thought to be cross platform  
headers to map?


For example, sys/ioctl.h is not a windows header and would not be  
considered cross platform.


I have the following include folders in Visual Studio 2008:
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include-  
posix, c[++] std library headers
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\atlmfc\include -  
ATL/MFC headers
C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include  -  
windows specific headers


would you only expect to map headers from the first of those, and not the  
rest?


That first folder contains 184 files which correlate to posix, and c[++]  
standard library headers.



@Sean to achieve your ends you are currently importing core.sys.posix.*  
right?  So, any modules added to core.* would not affect you?  I presume  
the modules in core.sys.* are cut down versions of the headers in  
core.sys.linux etc with any platform specific definitions removed, yes?



So, if we currently have the following layout:

[the root folders]
core\stdc
core\sync
core\sys

[the platform specific tree]
core\sys\freebsd
core\sys\freebsd\sys
core\sys\linux
core\sys\linux\sys
core\sys\osx
core\sys\osx\mach
core\sys\windows

[the posix tree]
core\sys\posix
core\sys\posix\arpa
core\sys\posix\net
core\sys\posix\netinet
core\sys\posix\sys

Note: I think that sys folder in core is unnecessary and may be causing  
some confusion.  Why not have a c folder to separate the C modules from  
other core components?  I mean, why isn't stdc in sys?


So, anyway, could we not simply add modules as Walter described to core.*  
and core.sys.* to map to the specific platform and header for the build  
system?  Likewise we would want to map core.* to core.stdc.* where  
appropriate.


It seems this will satisfy Walter without impacting Sean.. other than  
there being loads of unnecessary modules from your perspective.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: GC for noobs

2014-02-28 Thread Regan Heath
On Thu, 27 Feb 2014 18:29:55 -, Szymon Gatner noem...@gmail.com  
wrote:



On Thursday, 27 February 2014 at 18:06:58 UTC, John Colvin wrote:

On Thursday, 27 February 2014 at 14:52:00 UTC, Szymon Gatner wrote:

On Thursday, 27 February 2014 at 14:42:43 UTC, Dicebot wrote:
There is also one complex and feature-rich implementation of  
uniqueness concept by Sonke Ludwig :  
https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/core/concurrency.d#L281  
(Isolated!T)


Priceless for message passing concurrency.


Tbh it only looks worse and worse to me :(

Another example of code necessary to overcome language limitations.


Or, alternatively:

A language flexible enough to facilitate library solutions for problems  
that would normally require explicit language support.


I dig flexibility, I really do, and I appreciate D's features that  
enable that, but in case of such basic thing as a resource management, I  
just want things to work without surprises by default.


Amen.  (Not used religiously)

I have been around D for a long time, and I have noticed a growing trend  
of solving problems with clever but complicated library solutions when  
in *some* cases a simpler built-in solution was possible.  I realise  
Walter's time is precious and I realise that adding complexity to the  
language itself is something to be generally avoided, but I think  
sometimes we make the wrong choice.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-25 Thread Regan Heath
On Mon, 24 Feb 2014 17:58:51 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Monday, 24 February 2014 at 10:29:46 UTC, Regan Heath wrote:
No, not good enough.  This should just work, there is no good reason  
for it not to.


R


I have long since given up believing this should be in the language, I'm  
satisfied with the reasons I gave for why it is not in the language and  
why it is not needed to be in the language.


You asked for feedback, I've given mine to you. I'm ok with you  
disagreeing with that.


Sure, no worries.  :)

I'd just like to list your objections here and respond to them all, in one  
place, without the distracting issues surrounding the 3 extra schemes I  
mentioned.  Can you please correct me if I misrepresent you in any way.


1. Adding 'i' on ranges is not necessarily an index and people will expect  
an index.
2. You don't need to count iterations very often.
3. Your point about "the range gets a new value and foreach would compile  
but be wrong".
4. This area of D is not important enough to polish.
5. We will have enumerate soon, and won't need it.

I think this is every point you made in opposition to the change I want  
(excluding those in opposition to the 3 additional schemes - which in  
hindsight I should just have left off).


I believe objection #3 is invalid.  The foreach in the example given is a  
flattened tuple foreach, not the range foreach I want to change.  Making  
the change I want will have no effect on the given example.


I think the strongest objection here is #1; #2 and #4 are fairly  
subjective, and #5 just seems a little odd to me: why would you want to  
type more rather than less?


From my point of view, it seems an obvious lack in D that the range foreach  
doesn't have the same basic functionality as the array foreach.  That's  
pretty much my whole argument, it should just work in the same way as  
arrays.


But, as you say we're free to disagree here, I was just about to suggest  
we were at an impasse myself.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-24 Thread Regan Heath
On Fri, 21 Feb 2014 19:42:41 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:

On Friday, 21 February 2014 at 16:41:00 UTC, Regan Heath wrote:

and make this possible too:

foreach([index, ]value; range) { }


I understand the user interface is simple, but you created 3 statements  
about how it could be achieved and work/not work with the existing  
setup. Each have their positives and negatives, it would not make sense  
to just choose one and hope it all works out.


if AA is changed to a double[string], then your value loop iterates on  
keys and your key loop iterates on values.


No, I was actually suggesting a change here, the compiler would use  
type matching not ordering to assign the variables.  So because 'v' is  
a string, it is bound to the value not the key.


And string is the key, double[string] is not the same as string[double].

Also string[string], ambiguous yet common.

There are many things to consider when adding a feature, it is not good  
to ignore what can go wrong.


Yes.. something is not being communicated here.  I addressed all this in  
the OP.



Thanks!  Ok, so how is this working?  ahh, ok I think I get it.
 enumerate returns a range, whose values are Tuples of index/value  
where value is also a tuple so is flattened, and then the whole lot is  
flattened into the foreach.


Sounds like you understand it, seems foreach will flatten all tuples.


I don't think this affects what I actually want to change, we can have:

foreach(index, value; range) { }

and still flatten tuples into value, you would simply have to provide  
one extra variable to get an index.


Make sense?


Yes, but I'm saying we don't need it because

foreach(index, value; range.enumerate) { }

is good enough. Not perfect, but good enough.


No, not good enough.  This should just work, there is no good reason for  
it not to.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-24 Thread Regan Heath
On Fri, 21 Feb 2014 16:59:26 -, Justin Whear  
jus...@economicmodeling.com wrote:



On Fri, 21 Feb 2014 10:02:43 +, Regan Heath wrote:


On Thu, 20 Feb 2014 16:30:42 -, Justin Whear
jus...@economicmodeling.com wrote:


On Thu, 20 Feb 2014 13:04:55 +, w0rp wrote:


More importantly, this gets in the way of behaviour which may be
desirable later, foreach being able to unpack tuples from ranges.
I would like if it was possible to return Tuple!(A, B) from front()
and write foreach(a, b; range) to interate through those thing,
unpacking the values with an alias, so this...

foreach(a, b; range) {
}

... could rewrite to roughly this. (There may be a better way.)

foreach(_someInternalName; range) {
 alias a = _someInternalName[0];
 alias b = _someInternalName[1];
}


Tuple unpacking already works in foreach.  This code has compiled since
at least 2.063.2:

import std.stdio;
import std.range;
void main(string[] args)
{
auto tuples = ["a", "b", "c"].zip(iota(0, 3));

// unpack the string into `s`, the integer into `i`
foreach (s, i; tuples)
writeln(s, ", ", i);
}


Does this work for more than 2 values?  Can the first value be something
other than an integer?

R


Yes to both questions.  In the following example I use a four element
tuple, the first element of which is a string:


import std.stdio;
import std.range;
void main(string[] args)
{
auto tuples = ["a", "b", "c"].zip(iota(0, 3), [1.2, 2.3, 3.4], ['x',
'y', 'z']);
foreach (s, i, f, c; tuples)
writeln(s, ", ", i, ", ", f, ", ", c);
}

Compiles with dmd 2.063.2


Thanks.  I understand this now, I had forgotten about tuple  
unpacking/flattening.  DMD supports at least 4 distinct types of foreach.   
The range foreach is the one which I want an index/count added to, and  
this change will have no effect on the tuple case shown above.


It should just work, and there is no good reason not to make it so.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-21 Thread Regan Heath
On Thu, 20 Feb 2014 16:30:42 -, Justin Whear  
jus...@economicmodeling.com wrote:



On Thu, 20 Feb 2014 13:04:55 +, w0rp wrote:


More importantly, this gets in the way of behaviour which may be
desirable later, foreach being able to unpack tuples from ranges.
I would like if it was possible to return Tuple!(A, B) from front() and
write foreach(a, b; range) to interate through those thing, unpacking
the values with an alias, so this...

foreach(a, b; range) {
}

... could rewrite to roughly this. (There may be a better way.)

foreach(_someInternalName; range) {
 alias a = _someInternalName[0];
 alias b = _someInternalName[1];
}


Tuple unpacking already works in foreach.  This code has compiled since
at least 2.063.2:

import std.stdio;
import std.range;
void main(string[] args)
{
auto tuples = ["a", "b", "c"].zip(iota(0, 3));

// unpack the string into `s`, the integer into `i`
foreach (s, i; tuples)
writeln(s, ", ", i);
}


Does this work for more than 2 values?  Can the first value be something  
other than an integer?


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-21 Thread Regan Heath

My current thinking:

 - I still think adding index to range foreach is a good idea.
 - I realise that scheme #2 isn't workable.
 - I still like scheme #1 over tuple expansion as it avoids all the issues  
which make scheme #2 unworkable.

 - enumerate is not as flexible as many people seem to think.


On Fri, 21 Feb 2014 02:34:28 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Thursday, 20 February 2014 at 11:15:14 UTC, Regan Heath wrote:
I am posting this again because I didn't get any feedback on my idea,  
which may be TL;DR or because people think it's a dumb idea and they  
were politely ignoring it :p


I certainly have wanted counts of the range iteration, but I do believe  
it becomes too complex to support and that even if we state 'i' is to  
represent a count and not an index, people will still want an index and  
expect it to be an index of their original range even though it makes no  
possible sense from the perspective of iterating over a different range  
from the original.


I don't understand how this is complex to support?  It's simple.  It's a  
count, not an index unless the range is indexable.  If people are going to  
expect an index here, they will expect one with enumerate as well - and  
are going to be equally disappointed.  So, they need to be aware of this  
regardless.



I also don't find myself needing to count iterations very often, and I  
believe when I do, it is because I want to use that count as an index  
(possibly needing to add to some global count, but I don't need it  
enough to remember).


The justification for this change is the same as for enumerate.

It is common enough to make it important, and when it happens it's  
frustrating enough that it needs fixing.


My specific example didn't want an index, or rather it wanted an index  
into the result set, which I believe is just as common as, if not more  
common than, wanting an index into the source - especially given that they are  
often the same thing.


For example, I find myself using an index to control loop behaviour, most  
often for detecting the first and last iterations than anything else.  A  
counter will let you do that just as well as an index.




Scheme 1)


As Marc said, ails backwards-compatibility. A change like this will  
never exist if it isn't backwards compatible. There are very few changes  
which will be accepted if backwards compatibility isn't preserved.


Sure.  I personally find this idea compelling enough to warrant some  
breakage, it is simple, powerful and extensible and avoids all the issues  
of optional indexes with tuple expansion.  But, I can see how someone  
might disagree.




Scheme 2)
However, if a type is given and the type can be unambiguously matched  
to a single tuple component then do so.


double[string] AA;
foreach (string k; AA) {} // k is key


While probably not common, what if one needed to switch key/value

string[double] AA;

or something similar, the type system no longer helps. But again, this  
seems pretty much uneventful.


Perhaps I wasn't clear, this would work fine:

string[double] AA;
foreach (string v; AA) {} // v is value
foreach (double k; AA) {} // k is key

or am I missing the point you're making?



foreach (i, k, v; AA.byPairs.enumerate) {}
foreach (i, k, v; AA) {} // better


Bringing this back to range iteration:

 foreach(i, v1, v2; tuple(0,1).repeat(10))
 writeln(i, "\t", v1, "\t", v2);

Later the range gets a new value, the foreach would still compile but be  
wrong:


 foreach(i, v1, v2; tuple(0,1,2).repeat(10))
 writeln(i, "\t", v1, "\t", v2);

With enumerate, there is an error.

 foreach(i, v1, v2; tuple(0,1,2).repeat(10).enumerate)
 writeln(i, "\t", v1, "\t", v2);
Error: cannot infer argument types


Sure, this is an issue with having the optional index/count variable,  
which is not something foreach with enumerate allows.  This is another  
reason I prefer scheme #1, you never have this issue no matter what.




 foreach(i, v1, v2; tuple(0,1).repeat(10).enumerate)
 writeln(i, "\t", v1, "\t", v2);

This works today! And once enumerate is part of Phobos it will just need  
an import std.range to use it.


I don't believe this works today.  My understanding of what is currently  
supported is..


foreach(index, value; array) { }
foreach(value; range) { }// no support for index/count
foreach(key, value; tuple) { }   // no support for index/count

And, my understanding of enumerate is that it simply creates a tuple from  
an index and a range value, taking it from the range foreach case above,  
to the tuple foreach case.
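
i.e. roughly this (a made-up sketch of my understanding, not the real  
implementation):

import std.range : isInputRange;
import std.typecons : tuple;

auto enumerateSketch(R)(R r, size_t start = 0) if (isInputRange!R)
{
  static struct Result
  {
    R source;
    size_t index;

    @property bool empty() { return source.empty; }
    @property auto front() { return tuple(index, source.front); }
    void popFront() { source.popFront(); ++index; }
  }
  return Result(r, start);
}

so foreach (i, v; enumerateSketch(range)) { } works via the tuple  
flattening discussed above.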


This is not extensible to more than 2 values.  In fact, it's pretty  
limited until we get full built-in tuple expansion support.


To test this understanding I pulled down the source for enumerate and  
coded this up:


import std.stdio;
import std.range;
import std.typecons;

..paste enumerate here.. // line 5

void main

Re: Repost: make foreach(i, a; range) just work

2014-02-21 Thread Regan Heath
On Fri, 21 Feb 2014 10:02:43 -, Regan Heath re...@netmail.co.nz  
wrote:


On Thu, 20 Feb 2014 16:30:42 -, Justin Whear  
jus...@economicmodeling.com wrote:



On Thu, 20 Feb 2014 13:04:55 +, w0rp wrote:


More importantly, this gets in the way of behaviour which may be
desirable later, foreach being able to unpack tuples from ranges.
I would like if it was possible to return Tuple!(A, B) from front() and
write foreach(a, b; range) to interate through those thing, unpacking
the values with an alias, so this...

foreach(a, b; range) {
}

... could rewrite to roughly this. (There may be a better way.)

foreach(_someInternalName; range) {
 alias a = _someInternalName[0];
 alias b = _someInternalName[1];
}


Tuple unpacking already works in foreach.  This code has compiled since
at least 2.063.2:

import std.stdio;
import std.range;
void main(string[] args)
{
auto tuples = ["a", "b", "c"].zip(iota(0, 3));

// unpack the string into `s`, the integer into `i`
foreach (s, i; tuples)
writeln(s, ", ", i);
}


Does this work for more than 2 values?  Can the first value be something  
other than an integer?


Answered this myself.  What is supported is:

foreach(key, value; tuple) { }

But, what is not supported is more than 2 values.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-21 Thread Regan Heath
On Thu, 20 Feb 2014 17:09:31 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Thu, 20 Feb 2014 11:07:32 -0500, Regan Heath re...@netmail.co.nz  
wrote:



Only if the compiler prefers opApply to range methods, does it?


It should. If it doesn't, that is a bug.

The sole purpose of opApply is to interact with foreach. If it is masked  
out, then there is no point for having opApply.


Thanks.

So, if we had this support which I am asking for:

foreach(index, value; range) { }

And, if someone adds opApply to that range, with a different type for the  
first variable then an existing foreach (using index, value) is likely to  
stop compiling due to type problems.


This seems acceptable to me.

There is an outside chance it might keep on compiling, like if 'i' is not  
used in a strongly typed way, i.e. passed to a writefln or similar.  In  
this case we have silently changed behaviour.


Is this acceptable?

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-21 Thread Regan Heath
On Fri, 21 Feb 2014 14:29:37 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Fri, 21 Feb 2014 06:21:39 -0500, Regan Heath re...@netmail.co.nz  
wrote:


On Thu, 20 Feb 2014 17:09:31 -, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Thu, 20 Feb 2014 11:07:32 -0500, Regan Heath re...@netmail.co.nz  
wrote:



Only if the compiler prefers opApply to range methods, does it?


It should. If it doesn't, that is a bug.

The sole purpose of opApply is to interact with foreach. If it is  
masked out, then there is no point for having opApply.


Thanks.

So, if we had this support which I am asking for:

foreach(index, value; range) { }

And, if someone adds opApply to that range, with a different type for  
the first variable then an existing foreach (using index, value) is  
likely to stop compiling due to type problems.


This seems acceptable to me.


I think any type that does both opApply and range iteration is asking  
for problems :) D has a nasty way of choosing all or nothing for  
overloads, meaning it may decide this is a range or this is opApply,  
but if you have both, it picks one or the other.


I'd rather see it do:

1. can I satisfy this foreach using opApply? If yes, do it.
2. If not, can I satisfy this foreach using range iteration?

This may be how it works, I honestly don't know.

There is an outside chance it might keep on compiling, like if 'i' is  
not used in a strongly typed way, i.e. passed to a writefln or  
similar.  In this case we have silently changed behaviour.


Is this acceptable?


Adding opApply is changing the API of the range. If the range does  
something different based on whether you use the range interface or  
opApply, then this is a logic error IMO.


The easiest thing is to just not use opApply and range primitives  
together :) One separation I like to use in my code is that you use  
opApply on a container, but range primitives on a range for that  
container. And a container is not a range.


Makes sense to me. :)
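
A tiny sketch of that separation (the types and names here are made up,
just to show the shape of it):

import std.stdio;

// the container: iteration via opApply, but not itself a range
struct IntList
{
    private int[] data;

    int opApply(scope int delegate(ref int) dg)
    {
        foreach (ref v; data)
            if (auto r = dg(v))
                return r;
        return 0;
    }

    // a separate range over the container's contents
    auto opSlice() { return data[]; }
}

void main()
{
    auto list = IntList([1, 2, 3]);

    foreach (v; list)    // container: uses opApply
        writeln(v);

    foreach (v; list[])  // range: uses the slice from opSlice
        writeln(v);
}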

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-21 Thread Regan Heath
On Fri, 21 Feb 2014 15:35:44 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:
You've provided 3 schemes to support this feature. This suggests there
are several right ways to bring this into the language, while you  
prefer 1 someone may prefer 3.


Ignore the 3 schemes; they were just me thinking about how what I actually
want will affect built-in tuple expansion etc.


I want just 1 thing to change (at this time), an index added to foreach  
over ranges so that it matches arrays, e.g.


foreach(index, value; range) { }

The code change is likely quite literally just adding an int to the  
foreach handler for ranges, passing it to the foreach body, and  
incrementing it afterwards.  That's it, well, plus the front end code to  
bind the variable.
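
Roughly, the lowering I have in mind would look something like this
(a hypothetical sketch only, not actual compiler output; iota stands in
for any input range):

import std.range, std.stdio;

void main()
{
    auto range = iota(5, 8);

    // foreach (index, value; range) { writeln(index, ": ", value); }
    // would lower to roughly:
    auto __r = range;
    for (size_t index = 0; !__r.empty; __r.popFront(), ++index)
    {
        auto value = __r.front;
        writeln(index, ": ", value);
    }
}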


All I am suggesting is that we take what we currently have:

foreach([index, ]value; array) { }
foreach(value; range) { }
foreach(key, value; tuple) { }

and make this possible too:

foreach([index, ]value; range) { }



string[double] AA;

or something similar, the type system no longer helps. But again, this  
seems pretty much uneventful.


Perhaps I wasn't clear, this would work fine:

string[double] AA;
foreach (string v; AA) {} // v is value
foreach (double k; AA) {} // k is key

or am I missing the point you're making?


if AA is changed to a double[string], then your value loop iterates on  
keys and your key loop iterates on values.


No, I was actually suggesting a change here: the compiler would use type
matching, not ordering, to assign the variables.  So because 'v' is a
string, it is bound to the value, not the key.




foreach(i, v1, v2; tuple(0,1).repeat(10).enumerate)
writeln(i, "\t", v1, "\t", v2);

This works today! And once enumerate is part of Phobos it will just  
need an import std.range to use it.


I tested all my claims about enumerate. You need it to import std.traits  
or else is(Largest(...)) will always be false.


Thanks!  Ok, so how is this working?  Ahh, ok, I think I get it: enumerate
returns a range whose values are Tuples of index/value, where value is
also a tuple so is flattened, and then the whole lot is flattened into the
foreach.


So, while the range foreach only supports:

foreach(value; range) { }

value in this case is a flattened tuple of (index, v1, v2, ...)

Yes?

I had completely forgotten about tuple flattening.

I don't think this affects what I actually want to change, we can have:

foreach(index, value; range) { }

and still flatten tuples into value, you would simply have to provide one  
extra variable to get an index.


Make sense?

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Repost: make foreach(i, a; range) just work

2014-02-20 Thread Regan Heath
I am posting this again because I didn't get any feedback on my idea,  
which may be TL;DR or because people think it's a dumb idea and they were  
politely ignoring it :p


My original thought was that things like this should just work..

auto range = input.byLine();
while(!range.empty)
{
  range.popFront();
  foreach (i, line; range.take(4))  //Error: cannot infer argument types
  {
..etc..
  }
  range.popFront();
}

The reason it fails was best expressed by Steven:

This is only available using opApply style iteration. Using range  
iteration does not give you this ability.
It's not a permanent limitation per se, but there is no plan at the  
moment to add multiple parameters to range iteration.


One thing that IS a limitation though: we cannot overload on return
values. So the obvious idea of overloading front to return tuples of
various types, would not be feasible. opApply can do that because the
delegate is a parameter.


And Jakob pointed me to this proposed solution:
[1] https://github.com/D-Programming-Language/phobos/pull/1866

Which is a great idea, but, I still feel that this should just work as I  
have written it.  I think this is what people will intuitively expect to  
work, and having it fail and them scrabble around looking for enumerate is  
sub-optimal.  I think we can solve it without negatively impacting future  
plans like what bearophile wants, which is built-in tuples (allowing  
foreach over AA's etc).


So, the solution I propose for my original problem above is:

Currently the 'i' value in a foreach on an array is understood to be an  
index into the array.  But, ranges are not always indexable.  So, for us  
to make this work for all ranges we would have to agree to change the  
meaning of 'i' from being an index to being a counter, which may also  
be an index.  This counter would be an index if the source object was  
indexable.  Another way to look at it is to realise that the counter is  
always an index into the result set itself, and could be used as such if  
you were to store the result set in an indexable object.


To implement this, foreach simply needs to keep a counter and increment it  
after each call to the foreach body - the same way (I assume) it does for  
arrays and objects with opApply.
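
For illustration, here is a minimal library-side sketch of that counter,
done with opApply over any input range (Counted/counted are made-up names
here, not proposed Phobos code):

import std.range, std.stdio;

struct Counted(R) if (isInputRange!R)
{
    R range;

    int opApply(scope int delegate(size_t, ref ElementType!R) dg)
    {
        size_t i = 0;
        for (; !range.empty; range.popFront(), ++i)
        {
            auto e = range.front;
            if (auto r = dg(i, e))   // propagate break/return from the body
                return r;
        }
        return 0;
    }
}

Counted!R counted(R)(R range) { return Counted!R(range); }

void main()
{
    foreach (i, line; ["one", "two", "three"].counted)
        writefln("%s: %s", i, line);
}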


Interestingly, if this had been in place earlier, then the byKey() and  
byValue() members of AA's would not have been necessary.  Instead  
keys/values could simply have changed into indexable ranges, and no code  
breakage would have occurred (AFAICS).


So, to address bearophile's desire for built-in tuples and iteration over
AA's, and how this change might affect those plans: it seems to me we
could do foreach over AAs/tuples in one of 2 ways, or even a combination
of both:


Scheme 1) for AA's/tuples the value given to the foreach body is a  
voldemort (unnamed) type with a public property member for each component  
of the AA/tuple.  In the case of AA's this would then be key and  
value, for tuples it might be a, b, .., z, aa, bb, .. and so on.


foreach(x; AA) {}// Use x.key and x.value
foreach(i, x; AA) {} // Use i, x.key and x.value
foreach(int i, x; AA) {} // Use i, x.key and x.value

Extra/better: For non-AA tuples we could allow the members to be named  
using some sort of syntax, i.e.


foreach(i, (x.bob, x.fred); AA) {} // Use i, x.bob and x.fred
or
foreach(i, x { int bob; string fred }; AA) {} // Use i, x.bob and x.fred
or
foreach(i, new x { int bob; string fred }; AA) {} // Use i, x.bob and  
x.fred



Let's look at bearophile's examples re-written for scheme #1

foreach (v; AA) {}
foreach (x; AA) { .. use x.value .. } // better? worse?

foreach (k, v; AA) {}
foreach (x; AA) { .. use x.key, x.value .. } // better? worse?

foreach (k; AA.byKeys) {}
same // no voldemort reqd

foreach (i, k; AA.byKeys.enumerate) {}
foreach (i, k; AA.byKeys) {}   // better. note, no voldemort reqd

foreach (i, v; AA.byValues.enumerate) {}
foreach (i, v; AA.byValues) {} // better. note, no voldemort reqd

foreach (k, v; AA.byPairs) {}
foreach (x; AA) { .. use x.key, x.value .. } // better

foreach (i, k, v; AA.byPairs.enumerate) {}
foreach (i, x; AA) { .. use i and x.key, x.value .. } // better

This is my preferred approach TBH; you might call it foreach on packed
tuples.
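
As a rough approximation of what scheme #1 would feel like today, a small
library sketch (Pair/pairs are made-up names, and a real version would be
lazy rather than building an array):

import std.stdio;

struct Pair(K, V) { K key; V value; }

// eagerly turn an AA into an array of key/value pairs
auto pairs(K, V)(V[K] aa)
{
    Pair!(K, V)[] result;
    foreach (k, v; aa)
        result ~= Pair!(K, V)(k, v);
    return result;
}

void main()
{
    auto AA = ["one" : 1, "two" : 2];
    foreach (i, x; pairs(AA))    // i is the counter, x has .key and .value
        writeln(i, ": ", x.key, " = ", x.value);
}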



Scheme 2) the tuple is unpacked into separate variables given in the  
foreach.


When no types are given, components are assigned to variables such that
the rightmost is the last AA/tuple component and subsequent untyped
variables get the previous components, up to the (N+1)th variable, which
gets the index/count.


foreach (v; AA) {}// v is value (last tuple component)
foreach (k, v; AA) {} // k is key   (2nd to last tuple component),  
...
foreach (i, k, v; AA) {}  // i is index/count because AA only has 2  
tuple components.


So, if you have N tuple components and you supply N+1 variables you get  
the index/count.  Supplying any more would be an error.


However, if a 

Re: Repost: make foreach(i, a; range) just work

2014-02-20 Thread Regan Heath

On Thu, 20 Feb 2014 12:56:27 -, Marc Schütz schue...@gmx.net wrote:

IMO, any change needs to be both backwards-compatible (i.e., it should  
not only just work, as you phrased, but existing code should just  
keep working), and forward-compatible, so as not to obstruct any  
potential improvements of tuple handling.


Scheme #1 fails backwards-compatibility.


Fair enough.  We can always pack things manually using something like the  
enumerate() method mentioned in the link.


Scheme #2 doesn't, but I feel the matching rules if a type is specified  
are too complicated. Instead, I would suggest just to always assign the  
variables from the right, i.e. you cannot skip variables, and if you  
specify a type, it must match the type of the value in this position.


If you really want to skip a tuple member (in order to avoid an  
expensive copy), a special token _ or $ could be introduced, as has  
also been suggested in one the tuple unpacking/pattern matching DIPs,  
IIRC.


As for unpacking a tuple value (or key), an additional pair of  
parentheses can be used, so such a feature would still be possible in  
the future:


foreach(i, k, (a,b,c); ...)


Cool.

(Scheme #3 seems just too complicated for my taste. It's important to be  
intuitively understandable and predictable.)


Fair enough.

Any comments on the initial solution to my original problem?

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Repost: make foreach(i, a; range) just work

2014-02-20 Thread Regan Heath

On Thu, 20 Feb 2014 13:04:55 -, w0rp devw...@gmail.com wrote:


I don't think this is a good idea.


Which part?  The initial solution to my initial problem, or one of the 3  
schemes mentioned?


Say you have a class with range methods and add opApply later. Only the  
opApply delegate receives a type other than size_t for the first  
argument. Now the foreach line infers a different type for i and code in
the outside world will break.


Only if the compiler prefers opApply to range methods, does it?

And, if it prefers range methods then any existing class with opApply  
(with more than 1 variable) that gets range methods will break also,  
because foreach(more than 1 variable; range) does not (currently) work.


More importantly, this gets in the way of behaviour which may be  
desirable later, foreach being able to unpack tuples from ranges.


snip :)

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-17 Thread Regan Heath
This turned into a bit of a full spec so I would understand if you TL;DR  
but it would be nice to get some feedback if you have the time..


On Fri, 14 Feb 2014 17:34:46 -, bearophile bearophileh...@lycos.com  
wrote:

Regan Heath:


In my case I didn't need any of these.


I don't understand.


What I meant here is that I don't need the advantages provided by  
enumerate like the starting index.


One thing I am unclear about from your response is what you mean by  
implicit in this context?  Do you mean the process of inferring things  
(like the types in foreach)?


(taken from subsequent reply)

Isn't this discussion about adding an index to a range?


No, it's not.  The counter I want would only be an index if the range was  
indexable, otherwise it's a count of foreach iterations (starting from  
0).  This counter is (if you like) an index into the result set which is  
not necessarily also an index into the source range (which may not be  
indexable).


What we currently have with foreach is an index and only for indexable  
things.  I want to instead generalise this to be a counter which is an  
index when the thing being enumerated is indexable, otherwise it is a  
count or index into the result set.


Let's call this change scheme #0.  It solves my issue, and interestingly it
also would have meant we didn't need to add byKey or byValue to AA's;
instead we could have simply made keys/values indexable ranges and not
broken any existing code.


Further details of scheme #0 below.

(taken from subsequent reply)
If you want all those schemes built in a language (and to use them
without adding .enumerate) you risk making a mess. In this case explicit
is better than implicit.


Have a read of what I have below and let me know if you think it's a  
mess.  Scheme #2 has more rules, and might be called a mess perhaps.   
But, scheme #1 is fairly clean and simple and I think better overall.  The  
one downside is that without some additional syntax it cannot put tuple  
components nicely in context with descriptive variable names, so there is  
that.


To be fair to all 3 schemes below, they mostly just work for simple  
cases and/or cases where different types are used for key/values in AA's  
and tuples.  The more complicated rules only kick in to deal with the  
cases where there is ambiguity (AA's with the same type for key and value  
and tuples with multiple components of the same type).


Anyway, on to the details..

***

Scheme 0) So, what I want is for foreach to simply increment a counter  
after each call to the body of the foreach, giving me a counter from 0 to  
N (or infinity/wrap).  It would do this when prompted to do so by a  
variable being supplied in the foreach statement in the usual way (for  
arrays/opApply)


This counter would not necessarily be defined/understood to be an index
into the object being enumerated (as it currently is); instead, if the
object is indexable then it would indeed be an index, otherwise it's a
count (an index into the result set).


I had not been considering associative arrays until now, given current  
support (without built in tuples) they do not seem to be a special case to  
me.  Foreach over byKey() should look/function identically to foreach over  
keys, likewise for byValue().  The only difference is that in the  
byKey()/byValue() case the counter is not necessarily an index into  
anything, though it would be if the underlying byKey() range was indexable.


The syntax for this is the same as we have for arrays/classes with
opApply today.  In other words, it just works and my example would  
compile and run as one might expect.


This seems to me to be intuitive, useful and easy to implement.  Further,  
I believe it leaves the door open to having built in tuples (or using  
library extensions like enumerate()), with similarly clean syntax and no  
mess.


***

So, what if we had built in tuples?  Well, seems to me we could do foreach  
over AAs/tuples in one of 2 ways or even a combination of both:


Scheme 1) for AA's/tuples the value given to the foreach body is a  
voldemort (unnamed) type with a public property member for each component  
of the AA/tuple.  In the case of AA's this would then be key and  
value, for tuples it might be a, b, .., z, aa, bb, .. and so on.


foreach(x; AA) {}// Use x.key and x.value
foreach(i, x; AA) {} // Use i, x.key and x.value
foreach(int i, x; AA) {} // Use i, x.key and x.value

Extra/better: For non-AA tuples we could allow the members to be named  
using some sort of syntax, i.e.


foreach(i, (x.bob, x.fred); AA) {} // Use i, x.bob and x.fred
or
foreach(i, x { int bob; string fred }; AA) {} // Use i, x.bob and x.fred
or
foreach(i, new x { int bob; string fred }; AA) {} // Use i, x.bob and  
x.fred



Let's look at your examples re-written for scheme #1


foreach (v; AA) {}

foreach (x; AA) { .. use x.value .. } // better? worse?


foreach (k, v; AA) {}

foreach (x

Re: Ranges, constantly frustrating

2014-02-14 Thread Regan Heath
On Fri, 14 Feb 2014 02:48:51 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Thursday, 13 February 2014 at 14:30:41 UTC, Regan Heath wrote:
Don't get me wrong, counting the elements as you iterate over them is  
useful, but it isn't the index into the range you're likely after.


Nope, not what I am after.  If I was, I'd iterate over the original  
range instead or keep a line count manually.


Maybe a better way to phrase this is, while counting may be what your
implementation needs, it is not immediately obvious what 'i' should be.
Someone who desires an index into the original array will expect 'i' to  
be that; even though it can be explained that .take() is not the same  
range as the original.


Thus it is better to be explicit with the .enumerate function.


FWIW I disagree.  I think it's immediately and intuitively obvious what  
'i' should be when you're foreaching over X items taken from another  
range, even if you do not know take returns another range.  Compare it to  
calling a function on a range and foreaching on the result, you would  
intuitively and immediately expect 'i' to relate to the result, not the  
input.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-14 Thread Regan Heath
On Fri, 14 Feb 2014 13:14:51 -, bearophile bearophileh...@lycos.com  
wrote:



Regan Heath:

FWIW I disagree.  I think it's immediately and intuitively obvious what  
'i' should be when you're foreaching over X items taken from another  
range, even if you do not know take returns another range.  Compare it  
to calling a function on a range and foreaching on the result, you  
would intuitively and immediately expect 'i' to relate to the result,  
not the input.


Using enumerate has several advantages.


In my case I didn't need any of these.  Simple things should be simple and  
intuitive to write.  Yes, we want enumerate *as well* especially for the  
more complex cases but we also want the basics to be simple, intuitive and  
easy.


That's all I'm saying here.  This seems to me to be very low hanging fruit.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-14 Thread Regan Heath
On Fri, 14 Feb 2014 12:29:49 -, Jakob Ovrum jakobov...@gmail.com  
wrote:



On Friday, 14 February 2014 at 12:10:51 UTC, Regan Heath wrote:
FWIW I disagree.  I think it's immediately and intuitively obvious what  
'i' should be when you're foreaching over X items taken from another  
range, even if you do not know take returns another range.  Compare it  
to calling a function on a range and foreaching on the result, you  
would intuitively and immediately expect 'i' to relate to the result,  
not the input.


R


How should it behave on ranges without length, such as infinite ranges?


In exactly the same way.  It just counts up until you break out of the  
foreach, or the 'i' value wraps around.  In fact the behaviour I want is  
so trivial I think it could be provided by foreach itself, for iterations  
of anything.  In which case whether 'i' was conceptually an index or  
simply a count would depend on whether the range passed to foreach  
(after all skip, take, etc) was itself indexable.
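
Today you can get the same effect by zipping in a counter and breaking out
(a sketch; iota here exists purely to supply the count):

import std.range, std.stdio;

void main()
{
    // repeat(42) is an infinite range; the counter just keeps going
    foreach (i, v; zip(iota(size_t.max), repeat(42)))
    {
        writeln(i, ": ", v);
        if (i == 3)
            break;
    }
}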


Also, `enumerate` has the advantage of the `start` parameter, which  
usefulness is demonstrated in `enumerate`'s example as well as in an  
additional example in the bug report.


Sure, if you need more functionality reach for enumerate.  We can have  
both;  sensible default behaviour AND enumerate for more complicated  
cases.  In my case, enumerate w/ start wouldn't have helped (my file was  
blocks of 6 lines, where I wanted to skip lines 1, 3, and 6 *of each  
block*)


I'm not yet sure whether I think it should be implemented at the  
language or library level, but I think the library approach has some  
advantages.


Certainly, for the more complex usage.  But I reckon we want both  
enumerate and a simple language solution which would do what I've been  
trying to describe.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-13 Thread Regan Heath
On Wed, 12 Feb 2014 11:08:57 -, Jakob Ovrum jakobov...@gmail.com  
wrote:



On Wednesday, 12 February 2014 at 10:44:57 UTC, Regan Heath wrote:
Ahh.. so this is a limitation of the range interface.  Any plans to  
fix this?


R


Did my original reply not arrive? It is the first reply in the thread...


It did, thanks.  It would be better if this was part of the language and  
just worked as expected, but this is just about as good.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-13 Thread Regan Heath
On Wed, 12 Feb 2014 21:01:58 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Wednesday, 12 February 2014 at 10:52:13 UTC, Regan Heath wrote:
On Tue, 11 Feb 2014 19:48:40 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Tuesday, 11 February 2014 at 10:10:27 UTC, Regan Heath wrote:

Things like this should just work..

File input ...

auto range = input.byLine();
while(!range.empty)
{
 range.popFront();
 foreach (i, line; range.take(4))  //Error: cannot infer argument types
 {


It isn't *required* to (input/forward), but it could (random access).   
I think we even have a template to test if it's indexable as we can  
optimise some algorithms based on this.


You chopped off your own comment prompting this response, in which I am
responding to a minor side-point, which I think has confused the actual  
issue.  All I was saying above was that a range might well have an index,  
and we can test for that, but it's not relevant to the foreach issue below.


What do you expect 'i' to be? Is it the line number? Is it the index  
within the line where 'take' begins? Where 'take' stops?


If I say take(5) I expect 0,1,2,3,4.  The index into the take range  
itself.


I don't see how these two replies can coexist. 'range.take(5)' is a  
different range from 'range.'


Yes, exactly, meaning that it can trivially count the items it returns,  
starting from 0, and give those to me as 'i'.  *That's all I want*


'range' may not traverse in index order (personally haven't seen such a
range). But more importantly you're not dealing with random access  
ranges. The index you're receiving from take(5) can't be used on the  
range.


A forward range can do what I am describing above, it's trivial.

Don't get me wrong, counting the elements as you iterate over them is  
useful, but it isn't the index into the range you're likely after.


Nope, not what I am after.  If I was, I'd iterate over the original range  
instead or keep a line count manually.



Maybe the number is needed to correspond to a line number.


Nope.  The file contains records of 5 lines plus a blank line.  I want 0,  
1, 2, 3, 4, 5 so I can skip lines 0, 2, and 5 *of each record*.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-12 Thread Regan Heath

On Tue, 11 Feb 2014 17:11:46 -, Ali Çehreli acehr...@yahoo.com wrote:


On 02/11/2014 06:25 AM, Rene Zwanenburg wrote:

On Tuesday, 11 February 2014 at 10:10:27 UTC, Regan Heath wrote:


  foreach (i, line; range.take(4))  //Error: cannot infer argument types
  {
..etc..
  }



foreach (i, line; iota(size_t.max).zip(range.take(4)))
{

}


There is also the following, relying on tuples' automatic expansion in  
foreach:


 foreach (i, element; zip(sequence!"n"(), range.take(4))) {
 // ...
 }


Thanks for the workarounds.  :)  Both seem needlessly opaque, but I  
realise you're not suggesting these are better than the original, just  
that they actually work today.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-12 Thread Regan Heath
On Tue, 11 Feb 2014 19:48:40 -, Jesse Phillips  
jesse.k.phillip...@gmail.com wrote:



On Tuesday, 11 February 2014 at 10:10:27 UTC, Regan Heath wrote:

Things like this should just work..

File input ...

auto range = input.byLine();
while(!range.empty)
{
  range.popFront();
  foreach (i, line; range.take(4))  //Error: cannot infer argument types
  {
..etc..
  }
  range.popFront();
}

Tried adding 'int' and 'char[]' or 'auto' .. no dice.

Can someone explain why this fails, and if this is a permanent or  
temporary limitation of D/MD.


R


In case the other replies weren't clear enough. A range does not have an  
index.


It isn't *required* to (input/forward), but it could (random access).  I  
think we even have a template to test if it's indexable as we can optimise  
some algorithms based on this.
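
(The trait I am thinking of is presumably isRandomAccessRange, e.g.:)

import std.range, std.stdio;

void main()
{
    static assert(isRandomAccessRange!(int[]));                    // indexable
    static assert(!isRandomAccessRange!(typeof(stdin.byLine())));  // not indexable
    writeln("both checks pass at compile time");
}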


What do you expect 'i' to be? Is it the line number? Is it the index  
within the line where 'take' begins? Where 'take' stops?


If I say take(5) I expect 0,1,2,3,4.  The index into the take range itself.

The reason I wanted it was I was parsing blocks of data over 6 lines - I  
wanted to ignore the first and last and process the middle 4.  In fact I  
wanted to skip the 2nd of those 4 as well, but there was no single
function (I could find) which would do all that, so I coded the while above.
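
For the record, the block structure can also be expressed with chunks
instead of the while loop, something like this sketch (the data is made up,
and it uses the enumerate that later landed in std.range):

import std.range, std.stdio;

void main()
{
    // stand-in for the real file: two 6-line blocks
    auto lines = ["head1", "a", "b", "c", "d", "tail1",
                  "head2", "e", "f", "g", "h", "tail2"];

    foreach (block; lines.chunks(6))
        foreach (i, line; block.dropOne.take(4).enumerate)
            writeln(i, ": ", line);
}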


There is a feature of foreach and tuple() which results in the tuple  
getting expanded automatically.


And also the opApply overload taking a delegate with both parameters.

byLine has its own issues with reuse of the buffer, it isn't inherent to  
ranges. I haven't really used it (needed it from std.process), when I  
wanted to read a large file I went with wrapping std.mmap:


https://github.com/JesseKPhillips/libosm/blob/master/source/util/filerange.d


Cool, thanks.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Ranges, constantly frustrating

2014-02-11 Thread Regan Heath

Things like this should just work..

File input ...

auto range = input.byLine();
while(!range.empty)
{
  range.popFront();
  foreach (i, line; range.take(4))  //Error: cannot infer argument types
  {
..etc..
  }
  range.popFront();
}

Tried adding 'int' and 'char[]' or 'auto' .. no dice.

Can someone explain why this fails, and if this is a permanent or  
temporary limitation of D/MD.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-11 Thread Regan Heath
On Tue, 11 Feb 2014 10:52:39 -, Tobias Pankrath tob...@pankrath.net  
wrote:


Further, the naive solution of adding .array gets you in all sorts of  
trouble :p  (The whole byLine buffer re-use issue).


This should be simple and easy, dare I say it trivial.. or am I just  
being dense here.


R


The second naive solution would be to use readText and splitLines.


The file is huge in my case :)
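
The lazy route does work for a huge file if each kept line is copied out of
byLine's reused buffer, e.g. (a sketch; input.txt is a stand-in name):

import std.stdio;

void main()
{
    auto input = File("input.txt");
    string[] kept;
    foreach (line; input.byLine())
        kept ~= line.idup;   // byLine reuses its buffer, so copy what you keep
    writeln(kept.length, " lines");
}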

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Ranges, constantly frustrating

2014-02-11 Thread Regan Heath
On Tue, 11 Feb 2014 10:58:17 -, Tobias Pankrath tob...@pankrath.net  
wrote:



On Tuesday, 11 February 2014 at 10:10:27 UTC, Regan Heath wrote:

Things like this should just work..

File input ...

auto range = input.byLine();
while(!range.empty)
{
  range.popFront();
  foreach (i, line; range.take(4))  //Error: cannot infer argument types
  {
..etc..
  }
  range.popFront();
}

Tried adding 'int' and 'char[]' or 'auto' .. no dice.

Can someone explain why this fails, and if this is a permanent or  
temporary limitation of D/MD.


R
Is foreach(i, val; aggregate) even defined if aggr is not an array or  
associative array? It is not in the docs:
http://dlang.org/statement#ForeachStatement


import std.stdio;

struct S1 {
    private int[] elements = [9, 8, 7];
    int opApply(int delegate(ref uint, ref int) block) {
        foreach (uint i, int n; this.elements)
            if (auto r = block(i, n))
                return r;   // propagate break/return from the foreach body
        return 0;
    }
}

void main()
{
    S1 range;
    foreach (uint i, int x; range)
    {
        writefln("%d is %d", i, x);
    }
}

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Side-Q: GC/ARC/Allocators - How many GC's

2014-02-04 Thread Regan Heath
How many GC's do we get, if we build a D application linking it to a D  
library statically, or dynamically, or by loading it at runtime?


It seems to me, that one thing people really want in this discussion is to  
be able to select a single allocation strategy for their application,  
regardless of the libraries involved.


Is this at all technically possible?

I realise that if we had ARC, GC, and manual memory management as 3
possible strategies on the table, and given that each would require
different code/annotations, then a library may be written for only, say,
GC and simply won't work in the other modes.


But could we design a system such that DMD can determine which allocation
strategies are valid for a given library at link time?  It would require
that libraries advertise this (in a separate file if necessary), although
presumably a library not built to support ARC or manual free would simply
lack the required exported hook symbols, so the link would fail anyway.


I admit I am not well versed in the details, but conceptually this could  
be powerful and flexible...


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Should this work?

2014-01-29 Thread Regan Heath

On Wed, 29 Jan 2014 09:52:01 -, Dicebot pub...@dicebot.lv wrote:


On Tuesday, 28 January 2014 at 11:26:39 UTC, Regan Heath wrote:

No, you really don't.

If you're writing string code you will intuitively reach for  
substring, contains, etc because you already know these terms and  
what behaviour to expect from them.  In a generic context, or a range  
context you will reach for different generic or range type names.


Trusting intuition is not acceptable.


Sure it is, if we're talking about making life easier for beginners and  
making things more obvious in general.  Of course, not everyone has the  
same idea of obvious, but there is enough overlap and we would *only*  
define aliases for that overlap.  In short, if people expect it to be  
there, let's make sure it's there.


I will go and check the docs in most cases if I have not encountered it
before. Check each time for every new alias. I'd hate to have this
overhead.


Huh?  Assuming you have a decent editor, checking the docs should be as
simple as pressing F1 on the unknown function.  And that's only assuming
it's not immediately obvious what it's doing.  Are you telling me that
you would be confused by seeing...


if (str.contains("hello"))

I seriously doubt that.  And that's all I'm suggesting: adding aliases for
things which are obvious, things which any beginner will expect to be
there, and which currently aren't.
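
For what it's worth, such an alias is a one-liner (a sketch; contains here
is just a made-up alias over std.algorithm's canFind):

import std.algorithm : canFind;
import std.stdio;

alias contains = canFind;   // the "obvious" name, forwarding to canFind

void main()
{
    string str = "well, hello there";
    if (str.contains("hello"))
        writeln("found it");
}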


I am *not* suggesting we add every obscure name for every single function;
that would be complete nonsense.  Let's not get confused about the scope of
what I'm suggesting: I am suggesting a very limited number of new aliases,
and only for cases where there is a clear, obvious, expected name which we
currently lack.


Right now all I need to do is to stop thinking about strings as strings  
- easy and fast.


Sure, once you learn all the generic terms for things.  I *still* have  
trouble finding the LINQ function I need when I want to do something in  
the LINQ generic style .. and I've been using LINQ for at least a year  
now.  The issue is that the generic name just does not naturally occur to  
me in certain contexts, like strings.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/

