Re: Using closure causes GC allocation

2017-09-03 Thread Vino.B via Digitalmars-d-learn

On Saturday, 2 September 2017 at 20:54:03 UTC, Vino.B wrote:
On Saturday, 2 September 2017 at 20:10:58 UTC, Moritz Maxeiner 
wrote:

On Saturday, 2 September 2017 at 18:59:30 UTC, Vino.B wrote:

[...]


Cannot reproduce under Linux with dmd 2.076.0 (with commented 
out Windows-only check). I'll try to see what happens on 
Windows once I have a VM setup.



[...]


You changed the type of dFiles, which you return from 
cleanFiles, without changing the return type of cleanFiles. 
Change the return type of cleanFiles to the type the compiler 
error above tells you it should be (`Tuple!(string, string)[]` 
instead of `string[][]`), or let the compiler infer it via 
auto (`auto cleanFiles(...`).
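A minimal before/after sketch of that change (the function names and bodies below are stand-ins, not the original code):

import std.typecons : Tuple, tuple;

// Before: the declared return type no longer matches what dFiles now holds.
// string[][] cleanFiles(...) { ... }

// After: either spell out the element type the compiler reported ...
Tuple!(string, string)[] cleanFilesExplicit() {
    return [tuple("name", "size")];
}

// ... or let the compiler infer it.
auto cleanFilesInferred() {
    return [tuple("name", "size")];
}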


Hi,

 Thank you very much, I was able to resolve the second code issue 
by changing the return type of the function.


Hi,

  In order to resolve the issue "Using closure causes GC 
allocation" it was stated that we need to use delegates. Can you 
please help me with how to do that, as I have not gotten that far 
in D programming?


import std.stdio : File, writeln;
import std.datetime.systime : Clock, days, SysTime;
import std.file : SpanMode, dirEntries, exists, isFile, mkdir, remove;
import std.typecons : tuple, Tuple;
import std.algorithm : filter, map, each;
import std.array : array;

Tuple!(string)[] logClean(string[] Lglst, int LogAge) {
    if (!Lglst[0].exists) { mkdir(Lglst[0]); }
    auto ct1 = Clock.currTime();
    auto st1 = ct1 + days(-LogAge);
    auto dFiles = dirEntries(Lglst[0], SpanMode.shallow)
        .filter!(a => a.exists && a.isFile && a.timeCreated < st1)
        .map!(a => tuple(a.name))
        .array;

    dFiles.each!(a => a[0].remove);
    return dFiles;
}

void main() {
    string[] LogDir = ["C:\\Users\\bheev1\\Desktop\\Current\\Script\\D\\Logs"];
    int LogAge = 1;
    logClean(LogDir, LogAge);
}
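For reference, a minimal sketch (separate from the code above, names made up) of the difference the "closure causes GC allocation" message is about: a lambda that captures a local variable and can outlive its enclosing function needs a GC-allocated context (a closure), whereas a plain function reading module-level data does not.

import std.stdio : writeln;

int threshold = 10; // module-level state: no closure needed to reach it

bool aboveThreshold(int x) { // plain function: no GC allocation
    return x > threshold;
}

bool delegate(int) makeChecker(int local) {
    // This lambda captures `local` and escapes the function, so the
    // compiler allocates its context (a closure) on the GC heap.
    return (int x) => x > local;
}

void main() {
    writeln(aboveThreshold(12));  // true
    writeln(makeChecker(10)(12)); // true, but a closure was allocated
}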

From,
Vino.B


Re: Crazy compile time errors with DMD 2.075.1 & 2.076.0

2017-09-03 Thread Joel via Digitalmars-d-learn
On Monday, 4 September 2017 at 04:45:27 UTC, Jonathan M Davis 
wrote:
On Sunday, September 03, 2017 21:22:14 Ali Çehreli via 
Digitalmars-d-learn wrote:

[...]


Much as some people have been doing it for some reason, I 
really don't understand why anyone would be unzipping the .zip 
file on top of a previous release and expect it to work. If 
enough stuff stays in the same place, then it can work, but as 
soon as any files are moved or removed, odds are that you're 
screwed.


[...]


Found the problem, this [1] in my dub.json file. Removed it, and 
it now works as expected.


Thanks for the replies.

[1] "dflags-dmd": [
   "-transition=import"],


Re: Crazy compile time errors with DMD 2.075.1 & 2.076.0

2017-09-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Sunday, September 03, 2017 21:22:14 Ali Çehreli via Digitalmars-d-learn 
wrote:
> On 09/03/2017 09:03 PM, Joel wrote:
>  > One of my small programs doesn't compile any more since said DMD
>  > versions.
>  >
>  > I've got other programs that do work, but I can't see what's different
>  > about them?!
>  >
>  > I'm using macOS.
>  >
>  > [1] Here is the program and stuff. It uses DSFML 2.1.1, but I haven't
>  > added the dynamic files for it.
>  >
>  > giver ~master: building configuration "application"...
>  > /usr/local/opt/dmd/include/dlang/dmd/std/datetime/systime.d(7652,52):
>  > Error:
>  > std.datetime.date.splitUnitsFromHNSecs!"days".splitUnitsFromHNSecs at
>  > /usr/local/opt/dmd/include/dlang/dmd/std/datetime/date.d(9997,6)
>  > conflicts with
>  > core.time.splitUnitsFromHNSecs!"days".splitUnitsFromHNSecs at
>  > /usr/local/opt/dmd/include/dlang/dmd/core/time.d(4250,6)
>
> Are you installing from the zip distribution? If so, unfortunately,
> changes made to std.datetime are not compatible for that way of
> installing. (The large datetime file has been split into modules, making
> datetime a package.)
>
> First delete the old dmd directories and unzip again.

Much as some people have been doing it for some reason, I really don't
understand why anyone would be unzipping the .zip file on top of a previous
release and expect it to work. If enough stuff stays in the same place, then
it can work, but as soon as any files are moved or removed, odds are that
you're screwed.

But given the paths to the Phobos source files that are listed there, it
looks like they were either installed with something other than .zip file,
or they were manually installed from the .zip file. So, I'm not sure that
that's the problem here. And the error messages are pretty weird, because
they're complaining about a private function from core.time and a package
one from std.datetime.date conflicting. It isn't mentioning std.datetime as
a module anywhere, and it's not talking about linker problems, which is more
what I'd expect when you end up with both the old module and the new package
together or if you're accidentally using a library that wasn't rebuilt for
the new release. So, I have no idea what's happening here.

It's quite possible that the system in question is not dealing with a clean 2.076
environment, and it's possible that not everything was built from scratch
for the new release like it should have been, but the error messages don't
seem to be complaining about that. To figure it out, we'd probably need a
reproducible example.

- Jonathan M Davis




Re: 24-bit int

2017-09-03 Thread Ilya via Digitalmars-d-learn
On Sunday, 3 September 2017 at 23:30:43 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 04:01:34 UTC, Ilya Yaroshenko 
wrote:
On Saturday, 2 September 2017 at 03:29:20 UTC, EntangledQuanta 
wrote:
On Saturday, 2 September 2017 at 02:49:41 UTC, Ilya 
Yaroshenko wrote:

[...]


Thanks. Seems useful.


Just added `bytegroup` topology. Released in v0.6.12 (will be 
available in DUB after a few minutes.)


http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#bytegroup

It is faster for your task than `bitpack`.

Best regards,
Ilya


Thanks! I might end up using this. Is this basically just a 
logical mapping(cast(int)bytes[i*3]) & 0xFF) type of stuff 
or is there more of a performance hit?


I could do the mapping myself if that is the case as I do not 
need much of a general solution. I'll probably be using in a 
just a few lines of code. It just needs to be nearly as fast as 
direct access.


The implementation can be found here
https://github.com/libmir/mir-algorithm/blob/master/source/mir/ndslice/iterator.d
It uses unions and byte loads. The speed should be almost the 
same as direct access.
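For a rough idea of what such a mapping involves, a hand-rolled sketch (my own code, little-endian assumption, not the library's implementation) of reading the i-th signed 24-bit value out of a raw byte buffer:

int get24(const(ubyte)[] bytes, size_t i) {
    const size_t o = i * 3;
    int v = bytes[o] | (bytes[o + 1] << 8) | (bytes[o + 2] << 16);
    return (v << 8) >> 8; // sign-extend from 24 to 32 bits
}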


Re: Crazy compile time errors with DMD 2.075.1 & 2.076.0

2017-09-03 Thread Ali Çehreli via Digitalmars-d-learn

On 09/03/2017 09:03 PM, Joel wrote:
> One of my small programs doesn't compile any more since said DMD
> versions.
>
> I've got other programs that do work, but I can't see what's different
> about them?!
>
> I'm using macOS.
>
> [1] Here is the program and stuff. It uses DSFML 2.1.1, but I haven't
> added the dynamic files for it.
>
> giver ~master: building configuration "application"...
> /usr/local/opt/dmd/include/dlang/dmd/std/datetime/systime.d(7652,52):
> Error:
> std.datetime.date.splitUnitsFromHNSecs!"days".splitUnitsFromHNSecs at
> /usr/local/opt/dmd/include/dlang/dmd/std/datetime/date.d(9997,6)
> conflicts with
> core.time.splitUnitsFromHNSecs!"days".splitUnitsFromHNSecs at
> /usr/local/opt/dmd/include/dlang/dmd/core/time.d(4250,6)

Are you installing from the zip distribution? If so, unfortunately, 
changes made to std.datetime are not compatible for that way of 
installing. (The large datetime file has been split into modules, making 
datetime a package.)


First delete the old dmd directories and unzip again.

Ali



Crazy compile time errors with DMD 2.075.1 & 2.076.0

2017-09-03 Thread Joel via Digitalmars-d-learn
One of my small programs doesn't compile any more since said 
DMD versions.


I've got other programs that do work, but I can't see what's 
different about them?!


I'm using macOS.

[1] Here is the program and stuff. It uses DSFML 2.1.1, but I 
haven't added the dynamic files for it.


giver ~master: building configuration "application"...
/usr/local/opt/dmd/include/dlang/dmd/std/datetime/systime.d(7652,52): Error: 
std.datetime.date.splitUnitsFromHNSecs!"days".splitUnitsFromHNSecs at 
/usr/local/opt/dmd/include/dlang/dmd/std/datetime/date.d(9997,6) conflicts with 
core.time.splitUnitsFromHNSecs!"days".splitUnitsFromHNSecs at 
/usr/local/opt/dmd/include/dlang/dmd/core/time.d(4250,6)
/usr/local/opt/dmd/include/dlang/dmd/std/datetime/systime.d(7660,58): Error: 
std.datetime.date.splitUnitsFromHNSecs!"hours".splitUnitsFromHNSecs at 
/usr/local/opt/dmd/include/dlang/dmd/std/datetime/date.d(9997,6) conflicts with 
core.time.splitUnitsFromHNSecs!"hours".splitUnitsFromHNSecs at 
/usr/local/opt/dmd/include/dlang/dmd/core/time.d(4250,6)
/usr/local/opt/dmd/include/dlang/dmd/std/datetime/systime.d(7661,62): Error: 
std.datetime.date.splitUnitsFromHNSecs!"minutes".splitUnitsFromHNSecs at 
/usr/local/opt/dmd/include/dlang/dmd/std/datetime/date.d(9997,6) conflicts with 
core.time.splitUnitsFromHNSecs!"minutes".splitUnitsFromHNSecs at 
/usr/local/opt/dmd/include/dlang/dmd/core/time.d(4250,6)
../JMiscLib/source/jmisc/base.d(33,19): Error: template instance 
std.datetime.systime.SysTime.opCast!(DateTime) error instantiating
../JMiscLib/source/jmisc/base.d(23,4): Error: template instance 
jmisc.base.upDateStatus!string error instantiating

../JecLib/source/jec/input.d(357,26): Error: void has no value
../JecLib/source/jec/input.d(357,26): Error: incompatible types 
for ((dateTimeString()) ~ (" ")): 'void' and 'string'
../JecLib/source/jec/input.d(274,17): Error: template instance 
jec.input.InputJex.addToHistory!dstring error instantiating
source/app.d(31,7): Error: no property 'setupBoardPieces' for 
type 'int'

dmd failed with exit code 1.

[1] 
https://www.dropbox.com/s/eofa70od1hlk6ok/ArchiveOddCompileErrors.zip?dl=0




Re: Bug in D!!!

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d-learn
On Monday, 4 September 2017 at 03:08:50 UTC, EntangledQuanta 
wrote:
On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz 
Maxeiner wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, 
EntangledQuanta wrote:

[...]


The contexts being independent of each other doesn't 
change that we would still be overloading the same keyword 
with three vastly different meanings. Two is already bad 
enough imho (and if I had a good idea with what to replace 
the "in" for AA's I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one 
meaning you also have to consider the context it appears in. 
That is intrinsically more work (though the difference may 
be very small) and thus harder.


...


Yes, in an absolute sense, it will take more time to have to 
parse the context. But that sounds like a case of 
"pre-optimization".


I don't agree, because once something is in the language 
syntax, removing it is a long deprecation process (years), so 
these things have to be considered well beforehand.


That's true. But I don't see how it matters too much in the 
current argument. Remember, I'm not advocating using 'in' ;) 
[...]


It matters, because that makes it not be _early_ optimization.






If we are worried about saving time then what about the 
tooling? compiler speed? IDE startup time? etc?
All these take time too and optimizing one single aspect, as 
you know, won't necessarily save much time.


Their speed generally does not affect the time one has to 
spend to understand a piece of code.


Yes, but you are picking and choosing. [...]


I'm not (in this case), as the picking is implied by discussing 
PL syntax.


So, in this case I have to go with the practical of saying 
that it may be theoretically slower, but it is such an 
insignificant cost that it is an over optimization. I think 
you would agree, at least in this case.


Which is why I stated I'm opposing overloading `in` here as a 
matter of principle, because even small costs sum up in the 
long run if we get into the habit of just overloading.




I know, You just haven't convinced me enough to change my 
opinion that it really matters at the end of the day. It's 
going to be hard to convince me since I really don't feel as 
strongly as you do about it. That might seem like a 
contradiction, but


I'm not trying to convince you of anything.



Again, the exact syntax is not important to me. If you really 
think it matters that much to you and it does(you are not 
tricking yourself), then use a different keyword.


My proposal remains to not use a keyword and just upgrade 
existing template specialization.


[...]

You just really haven't stated that principle in any clear way 
for me to understand what you mean until now. i.e., Stating 
something like "... of a matter of principle" without stating 
which principle is ambiguous. Because some principles are not 
real. Some base their principles on fictitious things, some on 
abstract ideals, etc. Basing something on a principle that is 
firmly established is meaningful.


I've stated the principle several times in varied forms of 
"syntax changes need to be worth the cost".


I have a logical argument against your absolute restriction 
though... in that it causes one to have to use more 
symbols. I would imagine you are against stuff like using 
"in1", "in2", etc because they visibly are to close to each 
other.


It's not an absolute restriction, it's an absolute position 
from which I argue against including such overloading on 
principle.
If it can be overcome by demonstrating that it can't 
sensibly be done without more overloading and that it adds 
enough value to be worth the increased overloading, I'd be 
fine with inclusion.


[...]

To simplify it down: Do you have the same problems with all 
the ambiguities that already exist in almost all programming 
languages that everyone is ok with on a practical level on a 
daily basis?


Again, you seem to mix ambiguity and context sensitivity.
W.r.t. the latter: I have a problem with those occurrences 
where I don't think the costs I associate with it are 
outweighed by its benefits (e.g. with the `in` keyword's 
overloaded meaning for AA's).


Not mixing, I exclude real ambiguities because they have no real 
meaning. I thought I mentioned something about that way back 
when, but who knows... Although, I'd be curious whether any 
programming languages exist whose grammar is ambiguous and 
could actually be realized?


Sure, see the dangling else problem I mentioned. It's just that 
people basically all agree on one of the choices and all stick 
with it (despite the grammar being formally 

New programming paradigm

2017-09-03 Thread EntangledQuanta via Digitalmars-d-learn
In coming up with a solution that maps enums to templates, I 
think it might provide a means to allow template-like behavior at 
runtime. That is, the type information is contained within the 
enum, which can then, with the use of compile-time templates, be 
treated as dynamic behavior.


Let me explain:

Take a variant type. It contains the "type" and the data. To 
simplify, we will look at it like


(pseudo-code, use your brain)

enum Type { int, float }

foo(void* Data, Type type);

The normal way to deal with this is a switch:

switch(type)
{
case int: auto val = *(cast(int*)Data);
case float: auto val = *(cast(float*)Data);
}


But what if the switch could be generated for us?

Instead of

foo(void* Data, Type type)
{
  switch(type)
  {
  case int: auto val = *(cast(int*)Data);
  case float: auto val = *(cast(float*)Data);
  }
}

we have

foo(T)(T* Data)
{

}


which, if we need to specialize on a type, we can do

foo(int* Data) { }
foo(float* Data) { }


One may claim that this isn't very useful because it's not much 
different than the switch because we might still have to do 
things like:


foo(T)(T* Data)
{
  static switch(T)
   {
 case int: break;
 case float: break;
   }
}

but note that it is a CT switch.

But, in fact, since we can specialize on the type we don't have 
to use switch and in some cases do not even need to specialize:


for example:

foo(T)(T* Data) { writeln(*Data); }

is a compile time template that is called with the correct type 
value at run-time due to the "magic" which I have yet to 
introduce.


Note that if we just use a standard runtime variant, writeln 
would see a variant, not the correct type that Data really is. 
This is the key difference and what makes this "technique" 
valuable. We can treat our dynamic variables as compile time 
types(use the compile time system) without much hassle. They fit 
naturally in it and we do not clutter our code with switches. We can 
have a true auto/var like C# without the overhead of the IR. The 
cost, of course, is that switches are still used, they are 
generated behind the scenes though and the runtime cost is a few 
instructions that all switches have and that we cannot avoid.



To get a feel for what this new way of dealing with dynamic types 
might look like:


void foo(var y) { writeln(y); }

var x = "3"; // or possibly var!(string, int) for the explicit 
types used

foo(x);
x = 3;
foo(x);

(just pseudo code, don't take the syntax literally, that is not 
what is important)


While this example is trivial, the thing to note is that there is 
one foo declared, but two created at runtime: one for a string and 
one for an int. It is like a variant, yet we don't have to do 
any testing. It is very similar to `dynamic` in C#, but better 
since we actually can "know" the type at compile time, so to speak. 
It's not that we actually know, but that we write code as if we 
knew... it's treated as if it's statically typed.


In fact, we still have to specify the possible types a value can 
take on(just like variant), but once that is done the switch 
statement can be generated and we just have to write our 
templated function to handle this new "type".


You can see some of the code here, which I won't repeat for sake 
of brevity:


https://forum.dlang.org/thread/qtnawzubqocllhacu...@forum.dlang.org

The thing to note, is that by defining foo in a specific way, the 
mixin generates a proxy that creates the switch statement for us. 
This deals with casting the type by using the type specifier and 
calling the template with it.
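A minimal self-contained sketch (my own names, not the linked code) of how such a dispatch switch can be generated from a compile-time list of allowed types:

import std.meta : AliasSeq;
import std.stdio : writeln;

alias Types = AliasSeq!(int, float);

struct Var {
    size_t tag; // index into Types
    void*  data;
}

// Instantiates fun with the variant's current static type and calls it.
void dispatch(alias fun)(Var v) {
    switch (v.tag) {
        static foreach (i, T; Types) {
            case i:
                fun(cast(T*) v.data);
                return;
        }
        default:
            assert(false, "unknown type tag");
    }
}

void foo(T)(T* data) { writeln(*data); }

void main() {
    int x = 3;
    dispatch!foo(Var(0, &x)); // the runtime tag selects the int instantiation of foo
}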


If the compiler were to use such a type as a first class citizen, 
we would have a very easy and natural way for dealing with 
dynamic types that can only have a finite number of type 
specializations. This should be the general case, although  I 
haven't looked at how it would work with oop. The cost is the 
same as any dynamic type... a switch statement, which is just a 
few extra cycles. (a lookup table could be used, of course, but 
not sure the benefit)



As far as I know, no other language actually does this. Those 
with dynamic types have a lot more overhead since they don't 
couple them with templates(since they are not a statically typed 
language).


Anyways, this is not a thoroughly thought out idea, but it actually 
works well (I'm using the code I linked to and it works quite well 
for dealing with buffers that can take several different types: 
one function, no explicit switching in it), and if it could be 
implemented in the compiler, it would probably be a very nice 
feature for D?


One of the downsides is code bloat. Having multiple var's 
increases the size O(n^m) since one has to deal with every 
combination. These result in very large nested switch 
structures... only O(m) to traverse at runtime though, but they 
still take up a lot of bits to represent.







Re: Bug in D!!!

2017-09-03 Thread EntangledQuanta via Digitalmars-d-learn
On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, 
EntangledQuanta wrote:

[...]


The contexts being independent of each other doesn't change 
that we would still be overloading the same keyword with 
three vastly different meanings. Two is already bad enough 
imho (and if I had a good idea with what to replace the 
"in" for AA's I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one 
meaning you also have to consider the context it appears in. 
That is intrinsically more work (though the difference may be 
very small) and thus harder.


...


Yes, in an absolute sense, it will take more time to have to 
parse the context. But that sounds like a case of 
"pre-optimization".


I don't agree, because once something is in the language 
syntax, removing it is a long deprecation process (years), so 
these things have to be considered well beforehand.


That's true. But I don't see how it matters too much in the 
current argument. Remember, I'm not advocating using 'in' ;) I'm 
only saying it doesn't matter in a theoretical sense. If humans 
were as logical as they should be, it would matter less. For 
example, a computer has no issue with using `in`, and it doesn't 
really take any more processing (maybe a cycle, but the context 
makes it clear). But, of course, we are not computers. So, in a 
practical sense, yes, the line has to be drawn somewhere, even if, 
IMO, it is not the best place. You agree with this because you 
say it's ok for parentheses but not for in. You didn't seem to 
answer anything about my statements and question about images 
though. But, I'm ok with people drawing lines in the sand, that 
really isn't what I'm arguing. We have to draw lines. My point is, 
we should know we are drawing lines. You seem to know this on some 
significant level, but I don't think most people do. So, if we 
argued for the next 10 years, we would just come to some 
refinement of our current opinions and experiences about the 
idea. That's a good thing in a sense, but I don't have 10 years 
to waste on such a trivial concept that really doesn't matter 
much ;) (again, remember, I'm not advocating `in`; I'm not 
advocating anything in particular, just against doing nothing.)





If we are worried about saving time then what about the 
tooling? compiler speed? IDE startup time? etc?
All these take time too and optimizing one single aspect, as 
you know, won't necessarily save much time.


Their speed generally does not affect the time one has to spend 
to understand a piece of code.


Yes, but you are picking and choosing. To understand code, you 
have to write code; to write code you need a compiler, IDE, etc. 
You need a book, the internet, or other resources to learn things 
too. It's a much, much bigger can of worms than you realize or 
want to get into. Everything is interdependent. It's nice to 
make believe that we can separate everything into nice little 
quanta, but we can't, and when we ultimately try we get results 
that make no sense. But, of course, it's about the best we can 
do with where humans are at in their evolution currently. The 
ramifications of one minor change can change everything... see 
the butterfly effect. Life is fractal-like, IMO (I can't prove it 
but the evidence is staggering).


I mean, when you say "read code faster" I assume you mean the 
moment you start to read a piece of code with your eyes to the 
end of the code... But do you realize that, in some sense, that 
is meaningless? What about the time it takes to turn on your 
computer? Why are you not including that? Or the time to scroll 
your mouse? These things matter because surely you are trying to 
save time in the "absolute" sense?


e.g., so you have more time to spend with your family at the end 
of the day? Or spend more time hitting a little white ball in a 
hole? or whatever? If all you did was read code and had no other 
factors involved in the absolute time, then you would be 100% 
correct. But all those other factors do add up too.


Of course, the more code you read the more important it becomes 
and the less the other factors become, but then why are you 
reading so much code if you think it's a waste of time? So you 
can save some more time to read more code? If your goal is to 
truly read as much code as you can in your life span, then I 
think your analysis is 99.999...% correct.  If you only code as a 
means to an end for other things, then I think your answer is 
about 10-40% correct(with a high degree of error and dependent on 
context).


For me, and the way I 

Re: Bug in D!!!

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d-learn
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, 
EntangledQuanta wrote:

[...]


The contexts being independent of each other doesn't change 
that we would still be overloading the same keyword with 
three vastly different meanings. Two is already bad enough 
imho (and if I had a good idea with what to replace the "in" 
for AA's I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one 
meaning you also have to consider the context it appears in. 
That is intrinsically more work (though the difference may be 
very small) and thus harder.


...


Yes, in an absolute sense, it will take more time to have to 
parse the context. But that sounds like a case of 
"pre-optimization".


I don't agree, because once something is in the language syntax, 
removing it is a long deprecation process (years), so these 
things have to be considered well beforehand.


If we are worried about saving time then what about the 
tooling? compiler speed? IDE startup time? etc?
All these take time too and optimizing one single aspect, as 
you know, won't necessarily save much time.


Their speed generally does not affect the time one has to spend 
to understand a piece of code.




Maybe the language itself should be designed so there are no 
ambiguities at all? A single symbol for each function? A new 
keyboard design should be implemented(ultimately a direct brain 
to editor interface for the fastest time, excluding the time 
for development and learning)?


I assume you mean "without context sensitive meanings" instead of 
"no ambiguities", because the latter should be the case as a 
matter of course (and mostly is, with few exceptions such as the 
dangling else ambiguity in C and friends).
Assuming the former: As I stated earlier, it needs to be worth 
the cost.




So, in this case I have to go with the practical of saying that 
it may be theoretically slower, but it is such an insignificant 
cost that it is an over optimization. I think you would agree, 
at least in this case.


Which is why I stated I'm opposing overloading `in` here as a 
matter of principle, because even small costs sum up in the long 
run if we get into the habit of just overloading.


Again, the exact syntax is not important to me. If you really 
think it matters that much to you and it does(you are not 
tricking yourself), then use a different keyword.


My proposal remains to not use a keyword and just upgrade 
existing template specialization.




When I see something I try to see it at once rather [...]




To really counter your argument: What about parenthesis? They 
too have the same problem with in. They have perceived 
ambiguity... but they are not ambiguous. So your argument 
should be said about them too and you should be against them 
also, but are you? [To be clear here: foo()() and (3+4) have 3 
different use cases of ()'s... The first is templated 
arguments, the second is function arguments, and the third is 
expression grouping]


That doesn't counter my argument, it just states that parentheses 
have these costs, as well (which they do). The primary question 
would still be if they're worth that cost, which imho they are. 
Regardless of that, though, since they are already part of the 
language syntax (and are not going to be up for change), this is 
not something we could do something about, even if we agreed they 
weren't worth the cost.
New syntax, however, is up for that kind of discussion, because 
once it's in it's essentially set in stone (not quite, but *very* 
slow to remove/change because of backwards compatibility).



[...]


Well, yes, as I wrote, I think it is unambiguous (and can 
thus be used), I just think it shouldn't be used.


Yes, but you have only given the reason that it shouldn't be 
used because you believe that one shouldn't overload keywords 
because it makes it harder to parse the meaning. My rebuttal, 
as I have said, is that it is not harder, so your argument is 
not valid. All you could do is claim that it is hard and we 
would have to find out who is more right.


As I countered that in the above, I don't think your rebuttal 
is valid.


Well, hopefully I countered that in my rebuttal of your 
rebuttal of my rebuttal ;)


Not as far as I see it, though I'm willing to agree to disagree :)


I have a logical argument against your absolute restriction 
though... in that it causes one to have to use more symbols. 
I would imagine you are against stuff like using "in1", 
"in2", etc because they visibly are to close to each other.


It's not an absolute restriction, it's an absolute position 
from which I argue against including such overloading 

Re: Template substitution for function parameters

2017-09-03 Thread Biotronic via Digitalmars-d-learn
On Saturday, 2 September 2017 at 01:41:14 UTC, Nicholas Wilson 
wrote:

On Friday, 1 September 2017 at 11:33:15 UTC, Biotronic wrote:
On Friday, 1 September 2017 at 10:15:09 UTC, Nicholas Wilson 
wrote:

So I have the following types

struct DevicePointer(T) { T* ptr; }

struct Buffer(T)
{
void* driverObject;
T[] hostMemory;
}

and a function

auto enqueue(alias k)(HostArgsOf!k) { ... }

where k would be a function like

void foo( DevicePointer!float a, float b , int c) { ... }

How can I write HostArgsOf such that HostArgsOf!foo yields:
AliasSeq!(Buffer!float, float, int)
preferably in such a way that I can add additional 
transformations to it later on?


i.e. it substitutes the template DevicePointer for the 
template Buffer in Parameters!foo,
The templates can be assumed to not be nested templates, i.e. 
DevicePointer!(DevicePointer!(float)) will never occur, nor 
will Buffer!(Buffer!(float)) or any cross templates.


template k(alias fn) {
    import std.meta, std.traits;
    alias k = staticMap!(ReplaceTemplate!(DevicePointer, Buffer), Parameters!fn);
}

template ReplaceTemplate(alias needle, alias replacement) {
    template ReplaceTemplate(alias T) {
        static if (is(T : needle!Args, Args...)) {
            alias ReplaceTemplate = replacement!Args;
        } else {
            alias ReplaceTemplate = T;
        }
    }
}


Hmm, it seems I oversimplified the example a bit and this 
doesn't quite work for my actual usecase.


struct DevicePointer(int n,T) { T* ptr; }

alias GlobalPointer(T) = DevicePointer!(1,T);

k!foo yields
DevicePointer!(cast(AddrSpace)1u, float), float, int
instead of
Buffer!float, float, int

I think because the is(T : needle!Args, Args...) fails.


It really shouldn't work at all - the is(T : ...) works great, 
but gives Args as (1, float), and fails to instantiate Buffer 
with those arguments. This should give a compilation error.


Anyways, updated code that should work:

template ReplaceTemplate(alias needle, alias replacement) {
    template ReplaceTemplate(alias T) {
        static if (is(T : needle!Args, Args...)) {
            alias ReplaceTemplate = replacement!(Args[1]);
        } else {
            alias ReplaceTemplate = T;
        }
    }
}

If you only ever use this for DevicePointer and Buffer, a less 
generic solution might be more understandable for maintainers:


template ReplaceDevicePointer(alias T) {
    static if (is(T : DevicePointer!(n, T), int n, T)) {
        alias ReplaceDevicePointer = Buffer!T;
    } else {
        alias ReplaceDevicePointer = T;
    }
}
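A hypothetical usage sketch (foo, the struct layouts, and HostArgsOf below are stand-ins I made up to show the plumbing, not the thread's actual code):

import std.meta : staticMap;
import std.traits : Parameters;

struct DevicePointer(int n, T) { T* ptr; }
struct Buffer(T) { void* driverObject; T[] hostMemory; }

template ReplaceDevicePointer(alias T) {
    static if (is(T : DevicePointer!(n, U), int n, U))
        alias ReplaceDevicePointer = Buffer!U;
    else
        alias ReplaceDevicePointer = T;
}

void foo(DevicePointer!(1, float) a, float b, int c) {}

alias HostArgsOf(alias fn) = staticMap!(ReplaceDevicePointer, Parameters!fn);

static assert(is(HostArgsOf!foo[0] == Buffer!float));
static assert(is(HostArgsOf!foo[1] == float));
static assert(is(HostArgsOf!foo[2] == int));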

--
  Biotronic


Re: 24-bit int

2017-09-03 Thread EntangledQuanta via Digitalmars-d-learn
On Sunday, 3 September 2017 at 04:01:34 UTC, Ilya Yaroshenko 
wrote:
On Saturday, 2 September 2017 at 03:29:20 UTC, EntangledQuanta 
wrote:
On Saturday, 2 September 2017 at 02:49:41 UTC, Ilya Yaroshenko 
wrote:
On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta 
wrote:
Is there a way to create a 24-bit int? One that for all 
practical purposes acts as such? This is for 24-bit stuff 
like audio. It would respect endianness, allow for arrays 
int24[] that work properly, etc.


Hi,

Probably you are looking for bitpack ndslice topology:
http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.bitpack

sizediff_t[] data;
// creates a packed signed integer slice with max allowed value equal to `2^^24 - 1`.
auto packs = data[].sliced.bitpack!24;

packs has the same API as D arrays

Package is Mir Algorithm
http://code.dlang.org/packages/mir-algorithm

Best,
Ilya


Thanks. Seems useful.


Just added `bytegroup` topology. Released in v0.6.12 (will be 
available in DUB after a few minutes.)


http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#bytegroup

It is faster for your task than `bitpack`.

Best regards,
Ilya


Thanks! I might end up using this. Is this basically just a 
logical mapping(cast(int)bytes[i*3]) & 0xFF) type of stuff or 
is there more of a performance hit?


I could do the mapping myself if that is the case as I do not 
need much of a general solution. I'll probably be using in a just 
a few lines of code. It just needs to be nearly as fast as direct 
access.







Re: Bug in D!!!

2017-09-03 Thread EntangledQuanta via Digitalmars-d-learn
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, 
EntangledQuanta wrote:

[...]


The contexts being independent of each other doesn't change 
that we would still be overloading the same keyword with 
three vastly different meanings. Two is already bad enough 
imho (and if I had a good idea with what to replace the "in" 
for AA's I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one 
meaning you also have to consider the context it appears in. 
That is intrinsically more work (though the difference may be 
very small) and thus harder.


...


Yes, in an absolute sense, it will take more time to have to 
parse the context. But that sounds like a case of 
"pre-optimization". If we are worried about saving time then what 
about the tooling? compiler speed? IDE startup time? etc? All 
these take time too and optimizing one single aspect, as you 
know, won't necessarily save much time.


Maybe the language itself should be designed so there are no 
ambiguities at all? A single symbol for each function? A new 
keyboard design should be implemented(ultimately a direct brain 
to editor interface for the fastest time, excluding the time for 
development and learning)?


So, in this case I have to go with the practical of saying that 
it may be theoretically slower, but it is such an insignificant 
cost that it is an over optimization. I think you would agree, at 
least in this case. Again, the exact syntax is not important to me. 
If you really think it matters that much to you and it does(you 
are not tricking yourself), then use a different keyword.


When I see something I try to see it at once rather than reading 
it left to right. It is how music is read properly, for example. 
One can't read left to right and process the notes in real time 
fast enough. You must "see at once" a large chunk.


When I see foo(A in B)() I see it at once, not in parts or 
sub-symbols(subconsciously that may be what happens, but it 
either is so quick or my brain has learned to see differently 
that I do not feel it to be any slower).


that is, I do not read it like f, o, o (, A,  , i,...

but just like how one sees an image. Sure, there are clustering 
such as foo and (...), and I do sub-parse those at some point, 
but the context is derived very quickly. Now, of course, I do 
make assumptions to be able to do that. Obviously I have to sorta 
assume I'm reading D code and that the expression is a templated 
function, etc. But that is required regardless.


It's like seeing a picture of an ocean. You can see the global 
characteristics immediately without getting bogged down in the 
details until you need it. You can determine the approximate time 
of day(morning, noon, evening, night) relatively instantaneously 
without even knowing much else.


To really counter your argument: What about parenthesis? They too 
have the same problem with in. They have perceived ambiguity... 
but they are not ambiguous. So your argument should be said about 
them too and you should be against them also, but are you? [To be 
clear here: foo()() and (3+4) have 3 different use cases of 
()'s... The first is templated arguments, the second is function 
arguments, and the third is expression grouping]


If you are, then you are being logical and consistent, If you are 
not, then you are not being logical nor consistent. If you fall 
in the latter case, I suggest you re-evaluate the way you think 
about such things because you are picking and choosing.


Now, if you are just stating the mathematical fact that it takes 
longer, then I can't really deny that, although I can't 
technically prove it either as you can't because we would require 
knowing exactly how the brain processes the information.












[...]


Well, yes, as I wrote, I think it is unambiguous (and can 
thus be used), I just think it shouldn't be used.


Yes, but you have only given the reason that it shouldn't be 
used because you believe that one shouldn't overload keywords 
because it makes it harder to parse the meaning. My rebuttal, 
as I have said, is that it is not harder, so your argument is 
not valid. All you could do is claim that it is hard and we 
would have to find out who is more right.


As I countered that in the above, I don't think your rebuttal 
is valid.


Well, hopefully I countered that in my rebuttal of your rebuttal 
of my rebuttal ;) Again, you don't actually know how the brain 
processes information(no one does, it is all educated guesses). 
You use the concept that the more information one has to process 
the more time it takes... which seems logical, but it is not 
necessarily applicable directly to the interpretation of written 
symbols. Think of an 

Dub documentation with an initial ddoc file

2017-09-03 Thread Conor O'Brien via Digitalmars-d-learn
I've been trying to figure out how to generate documentation for 
my project using dub. I have found this link[1] which told me how 
I could use dub to generate docs:


dub build --build=docs 

However, I wish to have a set of macros that are present on every 
documentation file, that would define how the resultant HTML 
document is rendered. I tried:


dub build --build=docs  html.ddoc

But I got the following error:

Expected one or zero arguments.
Run "dub build -h" for more information about the "build" 
command.


How might I include `html.ddoc` with every file that has 
documentation?


[1]: 
https://stackoverflow.com/questions/29762668/using-dub-to-build-documentation


How to change the file extension of generated doc files

2017-09-03 Thread Jeremy DeHaan via Digitalmars-d-learn
I can't find anywhere describing how to change the extension of 
the generated doc files.


I've tried `-of.php`, but it still generates .html.

I'm probably missing something here that's going to make me feel 
silly.


Re: passing member.member alias to mixin template

2017-09-03 Thread Eric_DD via Digitalmars-d-learn

Clear explanation, thanks!

I think it would avoid a lot of confusion to disallow the alias f 
= c1.field notation and only allow the alias f = C.field 
notation. If necessary one could use alias f = typeof(c1).field


Re: passing member.member alias to mixin template

2017-09-03 Thread ag0aep6g via Digitalmars-d-learn

On 09/03/2017 08:54 PM, Eric_DD wrote:

*** This works:

struct Array {
 void foo() { writeln("foo"); }
}

mixin template arrayOperations(arrays...) {
 void foo() {
 foreach(ref a; arrays) a.foo();
 }
}

class Thing {
 Array data1;
 Array data2;
 mixin arrayOperations!(data1, data2);
}

[...]

***

But if I wrap Array in a S, then I get a "need this for data of type Array"
Is there a way (without an alias this in S) to get the following working?


*** Non working code:

[...]

struct S {
 Array data;
}

[...]

class Thing {
 S s1;
 S s2;
 mixin arrayOperations!(s1.data, s2.data);
}


As far as I understand, the problem is that an alias of a member does 
not carry a `this` reference. It's added only when you use the alias in 
a method of the aggregate.


That means, s1.data is not an alias of s1's `data` field, but an alias 
of `S.data`. And so is `s2.data`. They're effectively the same alias.


It's the same with `data1` and `data2`. But in that case the aliases 
work because `foo` provides the correct `this` reference.


An example of what I mean:


import std.stdio;

class C
{
int field;
void method() { writeln(f); }
}

C c1;
C c2;

alias f = c1.field;

void main()
{
c1 = new C;
c2 = new C;

c1.field = 1;
c2.field = 2;

c1.method(); /* prints "1" */
c2.method(); /* prints "2" */

version (none) writeln(f); /* Error: need 'this' */
}


Note that `c2.method()` prints "2", even though the alias f has been 
made from c1. The alias doesn't refer to c1's specific field, but to the 
generic field of the C class. The alias can only be used in methods of 
C, because they provide the needed `this`.
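A sketch of one possible way around the original problem, following from that explanation (my own variation, not something proposed in the thread): pass Thing's own fields to the mixin and reach into them inside the generated method, so that `this` is supplied where the aliases are actually used:

import std.stdio : writeln;

struct Array { void foo() { writeln("foo"); } }
struct S { Array data; }

mixin template arrayOperations(members...) {
    void foo() {
        // Each m is an alias to a field of the enclosing class, and since
        // this foo becomes a method of that class, `this` is available here.
        foreach (ref m; members) m.data.foo();
    }
}

class Thing {
    S s1;
    S s2;
    mixin arrayOperations!(s1, s2);
}

void main() {
    new Thing().foo(); // prints "foo" twice
}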


passing member.member alias to mixin template

2017-09-03 Thread Eric_DD via Digitalmars-d-learn

I am running into something that seems a bit inconsistent.
When I pass an alias of a member to a mixin it works, but a 
member of a member doesn't.


It seems like the alias is evaluated to the last symbol before 
passing it to the mixin.

If that's true, is there a way to defer the evaluation?

Anyway, better look at this code:


*** This works:

struct Array {
void foo() { writeln("foo"); }
}

mixin template arrayOperations(arrays...) {
void foo() {
foreach(ref a; arrays) a.foo();
}
}

class Thing {
Array data1;
Array data2;
mixin arrayOperations!(data1, data2);
}

int main(string[] argv) {
new Thing().foo();
return 0;
}

***

But if I wrap Array in a S, then I get a "need this for data of 
type Array"
Is there a way (without an alias this in S) to get the following 
working?



*** Non working code:

struct Array {
void foo() { writeln("foo"); }
}

struct S {
Array data;
}

mixin template arrayOperations(arrays...) {
    void foo() {
        foreach(ref a; arrays) a.foo(); // error: "need this for data of type Array"
    }
}

class Thing {
S s1;
S s2;
mixin arrayOperations!(s1.data, s2.data);
}



int main(string[] argv) {
new Thing().foo();
return 0;
}



Re: string to character code hex string

2017-09-03 Thread Ali Çehreli via Digitalmars-d-learn

On 09/03/2017 03:03 AM, ag0aep6g wrote:
> On 09/03/2017 01:39 AM, Ali Çehreli wrote:
>> If we can convert byte-by-byte, we should be able to
>> convert back byte-by-byte, right?
>
> You weren't converting byte-by-byte.

In my mind I was! :o)

> Or maybe just convert everything to UTF-8 first. That also sidesteps any
> endianess issues.

Good point.

> Still fails with UTF-16 and UTF-32 strings:

I think I can make it work with a few more iterations but I'll leave it 
as an exercise for the author.


Ali



Re: Problems with std.experimental.allocator

2017-09-03 Thread Igor via Digitalmars-d-learn

On Saturday, 2 September 2017 at 11:23:00 UTC, Igor wrote:
I realize these are not yet stable but I would like to know if 
I am doing something wrong or is it a lib bug.


My first attempt was to do this:

theAllocator = allocatorObject(Region!MmapAllocator(1024*MB));

If I got it right this doesn't work because it actually does 
this:


1. Create Region struct and allocate 1024MB from MMapAllocator
2. Wrap the struct in IAllocator by copying it because it has 
state

3. Destroy original struct which frees the memory
4. Now the struct copy points to released memory

Am I right here?

Next attempt was this:

theAllocator = allocatorObject(Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB)));


Since I give actual memory instead of the allocator to the 
Region it cannot deallocate that memory, so even the copy will 
still point to valid memory. After looking at what 
allocatorObject will do in this case, my conclusion is that it 
will take the "copyable" static if branch and create an instance 
of CAllocatorImpl which will have a "Region!() impl" field within 
itself, but the given Region!() struct is never copied into that 
field.


Am I right here?

If I am right about both, are these then considered lib bugs?

I finally got it working with:

auto newAlloc = Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB));

theAllocator = allocatorObject(&newAlloc);

Next I tried setting processAllocator instead of theAllocator 
by using:


auto newAlloc = Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB));

processAllocator = sharedAllocatorObject(&newAlloc);

but that complained how it "cannot implicitly convert 
expression `pa` of type `Region!()*` to `shared(Region!()*)`" 
and since Region doesn't define its methods as shared does this 
mean one can not use Region as processAllocator? If that is so, 
what is the reason behind it?


After a lot of reading I learned that I need a separate 
implementation like SharedRegion and I tried implementing one. If 
anyone is interested you can take a look here 
https://github.com/igor84/dngin/blob/master/source/util/allocators.d. It doesn't have expand at the moment but I tried making it work with this:


processAllocator = sharedAllocatorObject(shared 
SharedRegion!MmapAllocator(1024*MB));


I did it by reserving first two bytes of allocated memory for 
counting the references to that memory and then increasing that 
count on postblit and decreasing it in destructor and identity 
assignment operator. I then only release the memory if this count 
gets below 0. The problem is I couldn't get it to compile since I 
would get this error on above line:


Error: shared method 
util.allocators.SharedRegion!(MmapAllocator, 8u, 
cast(Flag)false).SharedRegion.~this is not callable using a 
non-shared object


I couldn't figure out why it is looking for a non-shared 
destructor. If I remove shared from the destructor then I get 
"non-shared method SharedRegion.~this is not callable using a 
shared object" in std/experimental/allocator/package.d(2067).


I currently use it with pointer construction: 
https://github.com/igor84/dngin/blob/master/source/winmain.d#L175


Re: Bug in D!!!

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d-learn
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta 
wrote:

[...]


The contexts being independent of each other doesn't change 
that we would still be overloading the same keyword with three 
vastly different meanings. Two is already bad enough imho (and 
if I had a good idea with what to replace the "in" for AA's 
I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one meaning 
you also have to consider the context it appears in. That is 
intrinsically more work (though the difference may be very small) 
and thus harder.




Again, I'm not necessarily arguing for them, just saying that 
one shouldn't avoid them just to avoid them.






[...]


It's not about ambiguity for me, it's about readability. The 
more significantly different meanings you overload some 
keyword - or symbol, for that matter - with, the harder it 
becomes to read.


I don't think that is true. Everything is hard to read. It's 
about experience. The more you experience something the clearer 
it becomes. Only with true ambiguity is something 
impossible. I realize that one can design a language to be 
hard to parse due to apparent ambiguities, but I am talking 
about cases where they can be resolved immediately (at most a 
few milliseconds).


Experience helps, of course, but it doesn't change that it's 
still just that little bit slower. And every time we encourage 
such overloading it encourages more, which in the end adds up.




You are making general statements, and it is not that I 
disagree, but it depends on context(everything does). In this 
specific case, I think it is extremely clear what in means, so 
it is effectively like using a different token. Again, everyone 
is different though and have different experiences that help 
them parse things more naturally. I'm sure there are things 
that you might find easy that I would find hard. But that 
shouldn't stop me from learning about them. It makes me 
"smarter", to simplify the discussion.


I am, because I believe it to be generally true for "1 keyword 
|-> 1 meaning" to be easier to read than "1 keyword and 1 context 
|-> 1 meaning" as the former inherently takes less time.






[...]


Well, yes, as I wrote, I think it is unambiguous (and can thus 
be used), I just think it shouldn't be used.


Yes, but you have only given the reason that it shouldn't be 
used because you believe that one shouldn't overload keywords 
because it makes it harder to parse the meaning. My rebuttal, 
as I have said, is that it is not harder, so your argument is 
not valid. All you could do is claim that it is hard and we 
would have to find out who is more right.


As I countered that in the above, I don't think your rebuttal is 
valid.




I have a logical argument against your absolute restriction 
though... in that it causes one to have to use more symbols. I 
would imagine you are against stuff like using "in1", "in2", 
etc because they visibly are too close to each other.


It's not an absolute restriction, it's an absolute position from 
which I argue against including such overloading on principle.
If it can be overcome by demonstrating that it can't sensibly be 
done without more overloading and that it adds enough value to be 
worth the increased overloading, I'd be fine with inclusion.





[...]


I would much rather see it as a generalization of existing 
template specialization syntax [1], which this is t.b.h. just 
a superset of (current syntax allows limiting to exactly one, 
you propose limiting to 'n'):


---
foo(T: char)      // Existing syntax: Limit T to the single type `char`
foo(T: (A, B, C)) // New syntax:      Limit T to one of A, B, or C
---


Yes, if this worked, I'd be fine with it. Again, I could care 
less. `:` == `in` for me as long as `:` has the correct meaning 
of "can be one of the following" or whatever.


But AFAIK, : is not "can be one of the following"(which is "in" 
or "element of" in the mathematical sense) but can also mean 
"is a derived type of".


Right, ":" is indeed an overloaded symbol in D (and ironically, 
instead of with "in", I think all its meanings are valuable 
enough to be worth the cost). I don't see how that would 
interfere in this context, though, as we don't actually overload 
a new meaning (it's still "restrict this type to the thing to the 
right").








If that is the case then go for it ;) It is not a concern of 
mine. You tell me the syntax and I will use it. (I'd have no 
choice, of course, but if it's short and sweet then I won't 
have any problem).


I'm discussing this as a matter of theory, I don't have a use for 
it.





[...]


Quoting a certain person (you know who you are) from DConf 
2017: "Write a DIP".
I'm quite happy to discuss this idea, but at the end of 

Re: 24-bit int

2017-09-03 Thread Patrick Schluter via Digitalmars-d-learn

On Friday, 1 September 2017 at 22:10:43 UTC, Biotronic wrote:
On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta 
wrote:
Is there a way to create a 24-bit int? One that for all 
practical purposes acts as such? This is for 24-bit stuff like 
audio. It would respect endianness, allow for arrays int24[] 
that work properly, etc.


I haven't looked at endianness beyond it working on my 
computer. If you have special needs in that regard, consider 
this a starting point:


big endian is indeed problematic.


@property
int value(int x) {
    _payload = (cast(ubyte*)&x)[0..3];
    return value;
}



will not work on a big-endian machine.

version(BigEndian)
    _payload = (cast(ubyte*)&x)[1..4];



Re: string to character code hex string

2017-09-03 Thread ag0aep6g via Digitalmars-d-learn

On 09/03/2017 01:39 AM, Ali Çehreli wrote:
Ok, I see that I made a mistake but I still don't think the conversion 
is one way. If we can convert byte-by-byte, we should be able to convert 
back byte-by-byte, right?


You weren't converting byte-by-byte. You were only converting the 
significant bytes of the code points, throwing away leading zeroes.


What I failed to ensure was to iterate by code 
units.


A UTF-8 code unit is a byte, so "%02x" is enough, yes. But for UTF-16 
and UTF-32 code units, it's not. You need to match the format width to 
the size of the code unit.
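Concretely, that means two hex digits per byte of the code unit type; a small sketch (mine, not part of the code below):

import std.format : format;

void main() {
    char  c8  = 'A';
    wchar c16 = 'A';
    dchar c32 = 'A';

    assert(format("%02x", c8)  == "41");       // UTF-8 code unit: 1 byte
    assert(format("%04x", c16) == "0041");     // UTF-16 code unit: 2 bytes
    assert(format("%08x", c32) == "00000041"); // UTF-32 code unit: 4 bytes
}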


Or maybe just convert everything to UTF-8 first. That also sidesteps any 
endianess issues.



The following is able to get the same string back:

import std.stdio;
import std.string;
import std.algorithm;
import std.range;
import std.utf;
import std.conv;

auto toHex(R)(R input) {
 // As Moritz Maxeiner says, this format is expensive
 return input.byCodeUnit.map!(c => format!"%02x"(c)).joiner;
}

int hexValue(C)(C c) {
 switch (c) {
 case '0': .. case '9':
 return c - '0';
 case 'a': .. case 'f':
 return c - 'a' + 10;
 default:
 assert(false);
 }
}

auto fromHex(R, Dst = char)(R input) {
 return input.chunks(2).map!((ch) {
 auto high = ch.front.hexValue * 16;
 ch.popFront();
 return high + ch.front.hexValue;
 }).map!(value => cast(Dst)value);
}

void main() {
 assert("AAA".toHex.fromHex.equal("AAA"));

 assert("ö…".toHex.fromHex.equal("ö…".byCodeUnit));
 // Alternative check:
 assert("ö…".toHex.fromHex.text.equal("ö…"));
}


Still fails with UTF-16 and UTF-32 strings:


writeln("…"w.toHex.fromHex.text); /* prints " &" */
writeln("…"d.toHex.fromHex.text); /* prints " &" */