with statement not triggering opDispatch?

2015-02-05 Thread Alex Parrill via Digitalmars-d
DMD does not seem to consider `opDispatch` when looking up 
variables in a `with` block. Is this intentional, or a 
bug/oversight?


For example:

import std.typecons;
import std.stdio;

struct MyStruct {
    auto opDispatch(string name)() {
        return name~"!";
    }
}

void main() {
    auto obj = MyStruct();
    with(obj)
        writeln(helloworld());
}

Fails to run with the following error:

$ rdmd test.d
test.d(14): Error: undefined identifier helloworld
Failed: ["dmd", "-v", "-o-", "test.d", "-I."]

Even though `helloworld` should be "defined", by way of the 
`opDispatch` template.


This also occurs when looking up identifiers in methods, when not 
prefixing the identifiers with `this`:


import std.typecons;
import std.stdio;

struct MyStruct {
    auto opDispatch(string name)() {
        return name~"!";
    }

    void run() {
        writeln(helloworld()); // Error: no identifier `helloworld`
    }
}

void main() {
    auto obj = MyStruct();
    obj.run();
}

I can work around it via introducing a wrapper struct that 
contains the wrapped struct and uses `alias this` on it, which 
fixes the identifier resolution with methods, but not with `with`:


import std.typecons;
import std.stdio;

struct MyStruct {
    auto opDispatch(string name)() {
        return name~"!";
    }
}

struct MyStructWrapper {
    MyStruct __mystruct;
    alias __mystruct this;

    void run() {
        writeln(helloworld()); // Prints "helloworld!"
    }
}

void main() {
    auto obj = MyStructWrapper();
    obj.run();
    with(obj) writeln(helloworld()); // Still fails
}

My use case for this is a D implementation of Mustache that 
compiles templates at compile-time, and can be used with 
arbitrary objects. The structure maintains the context stack 
(stored as a tuple), with `opDispatch` forwarding accesses to the 
first object in the stack that contains the named identifier. The 
tag content would be retrieved like `with(context) return 
mixin(tag_content);`, so that `{{helloworld}}` would generate 
`with(context) return helloworld;`.
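
The context type itself boils down to something like this (a 
simplified sketch, not my actual implementation):

struct Context(Stack...) {
    Stack stack; // most recently pushed object first

    // Index of the first object in the stack that has the member, or -1.
    private template indexOfMember(string name, size_t i = 0) {
        static if (i >= Stack.length)
            enum ptrdiff_t indexOfMember = -1;
        else static if (__traits(hasMember, Stack[i], name))
            enum ptrdiff_t indexOfMember = i;
        else
            enum ptrdiff_t indexOfMember = indexOfMember!(name, i + 1);
    }

    auto ref opDispatch(string name)()
        if (indexOfMember!name >= 0)
    {
        return __traits(getMember, stack[indexOfMember!name], name);
    }
}

// usage sketch:
struct Inner { string greeting = "hi"; }
struct Outer { int count = 3; }

void main() {
    auto context = Context!(Inner, Outer)(Inner(), Outer());
    assert(context.greeting == "hi"); // found on Inner
    assert(context.count == 3);       // falls through to Outer
}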


Re: with statement not triggering opDispatch?

2015-02-05 Thread Alex Parrill via Digitalmars-d

On Thursday, 5 February 2015 at 20:45:36 UTC, Meta wrote:


This is probably an oversight as nobody's thought to use 
opDispatch and with in that manner. I don't know what the 
consensus will be on if it's a bug or not, but you can file an 
issue at issues.dlang.org


Looks like two issues have already been opened: 
https://issues.dlang.org/show_bug.cgi?id=6400 and 
https://issues.dlang.org/show_bug.cgi?id=9808. Neither has any 
discussion or resolution, which is a bit disheartening. I'll see 
if I can bump the latest one.


Re: Points of Failure

2015-07-28 Thread Alex Parrill via Digitalmars-d

On Tuesday, 28 July 2015 at 19:11:24 UTC, Walter Bright wrote:

http://spot.livejournal.com/308370.html

Anyone care to total up the fail points for D?


Here's a spreadsheet I've set up, feel free to test and add a 
comment if a particular point passes or fails:


https://docs.google.com/spreadsheets/d/17BlHF_2VAl2UUi6acymeirvVKxVZPsUnnPkLP-vX1Ec/edit?usp=sharing


Re: Points of Failure

2015-07-28 Thread Alex Parrill via Digitalmars-d

On Tuesday, 28 July 2015 at 19:30:45 UTC, Jonathan M Davis wrote:

On Tuesday, 28 July 2015 at 19:11:24 UTC, Walter Bright wrote:

http://spot.livejournal.com/308370.html

Anyone care to total up the fail points for D?


LOL. There's a lot of highly subjective stuff on that list, and 
some of it seems to think that normal practices are bad, like...


- Jonathan M Davis


Yea, I disagree on the "/usr/local is bad" and "anything other 
than GNU make is bad". A few of the items are over the top too 
("Your releases are only in an encapsulation format that you 
invented" Do people really do this?!).


Re: Points of Failure

2015-07-29 Thread Alex Parrill via Digitalmars-d

On Wednesday, 29 July 2015 at 15:21:43 UTC, H. S. Teoh wrote:


Hmm. I would've thought things couldn't possibly get easier 
than `git clone $url; vim $pathname`...


`git clone` can take awhile with large repositories, like mono.




Re: Rant after trying Rust a bit

2015-07-30 Thread Alex Parrill via Digitalmars-d

On Thursday, 30 July 2015 at 11:46:02 UTC, Bruno Medeiros wrote:


Tooling doesn't just matter. Tooling trumps everything else.



I don't agree. IMO reducing the need for tools would be a better 
solution.


For example, there's no need for a memory checker if you're 
writing in Python, but if you're writing in C, you better start 
learning how to use Valgrind, and that takes time.


Also there's Javascript's overabundance of tooling, with varying 
levels of quality, way too many choices (grunt vs gulp vs ..., 
hundreds of transpilers), and incompatibilities (want to use JSX 
and TypeScript together? Good luck).


To take it to the extreme, no matter how much tooling you write 
for BrainFuck, I doubt anyone will use it.


I think D goes in the right track by embedding things like unit 
tests, function contracts, and annotations into the language 
itself, even if the implementations could capitalize on them 
better than they do now.


Re: D for Game Development

2015-07-30 Thread Alex Parrill via Digitalmars-d

On Thursday, 30 July 2015 at 15:10:59 UTC, Brandon Ragland wrote:


It's a dog because Java is a dog. But that's not because of the 
GC.


It's not really that bad either; I can open up Minecraft at any 
time and have it sit in the background quietly using ~800 MB of 
RAM and virtually no CPU time.


It's mostly because, in Java, every one of those tiny immutable 
`(x,y,z)` tuples and vectors has to be allocated on the heap.


D is nice because you can allocate such small things on the 
stack, though it doesn't have a massively optimized collector 
either.


Either your kid has tons of mods in their Minecraft or your 
computer is a bit dated.


Tons of mods is the only way I can (or more accurately, can't) 
play MC anymore.


Re: I'm confused about ranges (input and forward in practice, in algorithms)

2015-08-13 Thread Alex Parrill via Digitalmars-d

On Friday, 14 August 2015 at 00:33:30 UTC, Luís Marques wrote:

...


Yea, it would be nice if all ranges were reference types, and 
calling `save` made a duplicate of the cursor (or whatever the 
range is supposed to be). However, that would mean that they 
would have to be allocated and garbage collected, which is a lot 
of overhead that D is working to avoid. Or perhaps there's another
solution that I can't think of.


Bitwise copying of a struct `rng` isn't necessarily the same as 
calling `rng.save`, though. Just because something
is a struct, doesn't mean it has value semantics. 
`std.stdio.File` is technically a struct, but it's just a 
refcounted wrapper around a pointer to an internal file object, 
so it effectively has reference semantics (which is why 
version(two) is the output in your example). This also applies to wrapper
ranges like map and filter, if they wrap reference-semantic 
objects. Also consider a barebones range that contains only an 
`int` file descriptor.
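
For instance, a minimal (POSIX-only, purely illustrative) byte 
range over a raw file descriptor; its only state is the fd plus a 
one-byte cache, so a bitwise copy shares the same underlying file 
position, and consuming one "copy" consumes the other:

import core.sys.posix.unistd : read;

struct FdByteRange {
    int fd;
    ubyte cur;
    bool atEnd;

    static FdByteRange open(int fd) {
        auto r = FdByteRange(fd);
        r.popFront(); // prime the first byte
        return r;
    }

    @property bool empty() { return atEnd; }
    @property ubyte front() { return cur; }
    void popFront() { atEnd = read(fd, &cur, 1) <= 0; }
}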


Right now, I mostly treat it as if passing a range through a 
non-`ref` parameter consumes the original range.


I don't know if `startsWith` should take the range by reference, 
though. On one hand, it would allow you to use the range 
afterwards if it matches, but it would have weird, unexpected 
effects on code like this:


string mystring = "Hello World";
if(mystring.startsWith("Hello"))
writeln(mystring) // This would print " World" because 
startsWith modified mystring


I do think `startsWith` not popping off the last character is a 
bit of a bug though.


Re: TCP Socket Client Example

2015-08-14 Thread Alex Parrill via Digitalmars-d

On Friday, 14 August 2015 at 14:06:03 UTC, Kingsley wrote:

Hi

Does anyone have some examples of making a client socket 
connection to a host on a port and parsing the incoming data in 
some kind of loop.


--K


auto addresses = getAddress("localhost", 8085);
	auto socket = new Socket(AddressFamily.INET, SocketType.STREAM, 
ProtocolType.TCP);

scope(exit) socket.close();

socket.connect(addresses[0]);

auto buffer = new ubyte[2056];
ptrdiff_t amountRead;
while((amountRead = socket.receive(buffer)) != 0) {
enforce(amountRead > 0, lastSocketError);

// Do stuff with buffer
}


Re: Truly lazy ranges, transient .front, and std.range.Generator

2015-08-16 Thread Alex Parrill via Digitalmars-d
On Saturday, 15 August 2015 at 10:06:13 UTC, Joseph Rushton 
Wakeling wrote:

...


I had this issue recently when reading from a command-line-style 
TCP connection; I needed to read the line up to the \n separator, 
but consuming the separator meant waiting for the next byte that 
would never arrive unless a new command was sent.


So I made a wrapper range that evaluates the wrapped range's 
popFront only when front/empty is first called ("just in time"). 
Source code here: 
https://gist.github.com/ColonelThirtyTwo/0dfe76520efcda02d848


You can throw it in a UFCS chain anywhere except (for some 
reason) after something that takes a delegate template parameter 
like map. For example:


auto reader = SocketReader(socket).joiner.jitRange.map!(byt => cast(char) byt);
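
The wrapper itself boils down to roughly the following 
(simplified sketch; the gist has the real thing):

struct JITRange(R) {
    private R inner;
    private bool popPending; // a deferred popFront we haven't done yet

    private void catchUp() {
        if (popPending) {
            inner.popFront();
            popPending = false;
        }
    }

    @property bool empty() { catchUp(); return inner.empty; }
    @property auto ref front() { catchUp(); return inner.front; }
    void popFront() { catchUp(); popPending = true; }
}

auto jitRange(R)(R r) { return JITRange!R(r); }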




Weird "circular initialization of isInputRange" error

2015-09-16 Thread Alex Parrill via Digitalmars-d
This piece of code (which I reduced with dustmite) gives me the 
following error when I try to compile it:


$ rdmd -main parser.d
parser.d(28): Error: circular initialization of isInputRange
parser.d(31): Error: template instance std.meta.staticMap!(handler, ArrayReader*) error instantiating
parser.d(36):        instantiated from here: unpacker!(RefRange!(immutable(ubyte)[]))
parser.d(40): Error: template instance std.range.primitives.isInputRange!(ArrayReader*) error instantiating
/usr/include/dmd/phobos/std/meta.d(546):        instantiated from here: F!(ArrayReader*)
parser.d(43):        instantiated from here: staticMap!(toTD, ArrayReader*)

Failed: ["dmd", "-main", "-v", "-o-", "parser.d", "-I."]


I'm not really sure what's causing the error; I'm not declaring 
`isInputRange` in my code. Commenting out the definition of `TD` 
(the very last line) removes the error. Am I doing something 
wrong here, or is this a compiler bug?


Tested with dmd v2.068.1 on Linux x64

Code:
-

import std.range;
import std.variant;
import std.typetuple;

///
template unpacker(Range)
{
    /// Element data types. See `unpack` for usage.
    alias MsgPackData = Algebraic!(
        ArrayReader*,
    );

    /// Reader range for arrays.
    struct ArrayReader {
        MsgPackData _front;
        void update() {
            _front.drain;
        }

        void popFront() {
            update;
        }
    }

    void drain(MsgPackData d) {
        static handler(T)(T t) {
            static if(isInputRange!T)
                data;
        }
        d.visit!(staticMap!(handler, MsgPackData.AllowedTypes));
    }
}


alias TestUnpacker = unpacker!(RefRange!(immutable(ubyte)[]));
alias D = TestUnpacker.MsgPackData;

template toTD(T) {
    static if(isInputRange!T)
        alias toTD = This;
}
alias TD = Algebraic!(staticMap!(toTD, D.AllowedTypes)); // test data type




Re: Possible issue with isInputRange

2015-09-24 Thread Alex Parrill via Digitalmars-d
On Wednesday, 23 September 2015 at 23:10:21 UTC, Maor Ben Dayan 
wrote:
isInputRange will always return true for a range returning ref 
to non-copyable type.
This is a problem when trying to work with chain etc. together 
with such ranges.
The problem is that the test in isInputRange should have been 
similar to A below instead of B (no need to try and assign the 
return value of front for the range to be an input range).

Below is a reduced code example.

Am I correct in assuming that this is a phobos bug ?

code example:

void main()
{
    import std.range;
    import std.traits;

    struct Snowflake {
        int x;
        @disable this(this);
    }

    Snowflake[12] flakes;
    foreach(uint i; 0..flakes.length) {
        flakes[i].x = i;
    }
    alias R = Snowflake[];

    foreach(ref s; flakes[0..$]) { /* works just fine, I guess it is a valid input range */
        // do something
    }

    static assert(is(typeof((inout int = 0) { R r = R.init; })));
    static assert(is(typeof((inout int = 0) { R r = R.init; if (r.empty) {} })));
    static assert(is(typeof((inout int = 0) { R r = R.init; r.popFront(); })));
    static assert(is(typeof((inout int = 0) { R r = R.init; r.front; })));     /* A passes */
    static assert(is(typeof((inout int = 0) { R r = R.init; h = r.front; }))); /* B fails */

    static assert(isInputRange!(Snowflake[]));  /* fails */
}


It's because you disabled the copy constructor of `Snowflake`. 
Apparently `isInputRange` requires copyable elements (it does 
`auto h = r.front;` in its check).
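
For reference, the check in std.range.primitives is roughly this 
(paraphrased, not verbatim):

enum bool isInputRange(R) = is(typeof(
    (inout int = 0)
    {
        R r = R.init;     // can define a range object
        if (r.empty) {}   // can test for empty
        r.popFront();     // can invoke popFront()
        auto h = r.front; // can get the front of the range -- needs a copy
    }));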


Also, just because it's compatible with foreach loops doesn't 
mean it's a range; it may be an object with `opApply` (such as 
`std.parallelism.parallel` 
http://dlang.org/phobos/std_parallelism.html#.parallel)


Re: Second CT-Parameter of isRange Predicates

2015-11-02 Thread Alex Parrill via Digitalmars-d

On Monday, 2 November 2015 at 14:16:53 UTC, Nordlöw wrote:

Is there a reason why

isOutputRange(R,E)

takes a second argument `E` but not other range predicates

isInputRange(R)
isForwardRange(R)
...

?

If so, I still think it would be very nice to have a second 
argument `E` for all the other `isXRange` traits to simplify, 
for instance,


if (isInputRange!R && is(int == ElementType!R))

to simpler

if (isInputRange!(R, int))

or even

if (isInputRange!R && is(isSomeChar!(ElementType!R)))

to simpler

if (isInputRange!(R, isSomeChar))

?

What do you think?

I'm planning to add a PR for this and simplify Phobos in all 
places where this pattern is used.


I think it's because output ranges can accept more than one type 
of value. That said, the `isInputRange!(R,E)` shortcut for 
`isInputRange!R && is(E == ElementType!R)` would be nice.
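
Something like this would cover both forms mentioned above 
(hypothetical helpers, not anything in Phobos):

import std.range.primitives : isInputRange, ElementType;

/// isInputRangeOf!(R, int) == isInputRange!R with int elements
enum bool isInputRangeOf(R, E) = isInputRange!R && is(ElementType!R == E);

/// isInputRangeWith!(R, isSomeChar) applies a predicate to the element type
enum bool isInputRangeWith(R, alias pred) = isInputRange!R && pred!(ElementType!R);

unittest {
    import std.traits : isSomeChar;
    static assert(isInputRangeOf!(int[], int));
    static assert(isInputRangeWith!(string, isSomeChar));
}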


Re: Referencer

2015-11-20 Thread Alex Parrill via Digitalmars-d

On Friday, 20 November 2015 at 18:23:57 UTC, HaraldZealot wrote:
All ranges in Phobos pass by value, but if I have output range 
with state like cumulative statistics this is useless.


After discussion with Dicebot I tried this work-around: 
http://dpaste.dzfl.pl/8af8eb8d0007

It is unfinished. But direction is observable.

Is this good solution? And how about to include something like 
this tool in Phobos?


I'm not sure how useful this is as opposed to plain pointers. For 
structs, since `foo.bar` is the same as `(&foo).bar`, you may as 
well use a pointer, and the only thing it saves for numbers is a 
pointer dereference or two.


You say ranges are pass-by-value, but that's not entirely true. 
Ranges themselves can be classes, or be made references via 
std.range.refRange. Range elements can be made lvalues by using 
ref functions [1].
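
For example, a range with a ref front gives you assignable 
(lvalue) elements:

struct Counter {
    int value;
    enum empty = false;                         // infinite range
    @property ref int front() { return value; } // ref => lvalue elements
    void popFront() { ++value; }
}

unittest {
    Counter c;
    c.front = 10;   // assignment works because front returns by ref
    assert(c.value == 10);
    c.popFront();
    assert(c.front == 11);
}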


As for the code:

* Your example usage (x = x += x = (x * 5)) is confusing, due to 
the chained assignments.
* I wouldn't mark this struct as @safe because the passed value 
may leave scope, causing invalid dereferences.
* There's no point in making the Reference struct a template, as 
the function it's defined in is also a template. Just replace 
usages of U with T.


[1]: http://dlang.org/function.html#ref-functions see also 
std.range.primitives.hasLvalueElements


Re: Example: wc

2015-11-23 Thread Alex Parrill via Digitalmars-d

On Monday, 23 November 2015 at 14:10:35 UTC, Chris wrote:


The code doesn't look up to date and maybe it's been replaced 
with a more up to date example (i.e. with range chaining). It 
uses ulong instead of size_t. I dunno, maybe it should be 
dropped completely.


ulong is appropriate here; the maximum word count should not be 
dependent on the system's memory limits, since it's streaming 
from an (arbitrarily large) file.


Re: Vulkan bindings

2016-02-17 Thread Alex Parrill via Digitalmars-d

On Tuesday, 16 February 2016 at 19:01:58 UTC, Satoshi wrote:

Hello Vulkan API 1.0 is here and I just wrapped it into D.

https://github.com/Rikarin/VulkanizeD

Have fun!


Please consider making it a Dub package!

(IMO it would be cool to generate OpenGL and Vulkan bindings 
directly from the XML spec, though std.xml doesn't work in CTFE 
and it would probably take a long time to compile.)


Re: Vulkan bindings

2016-02-18 Thread Alex Parrill via Digitalmars-d

On Thursday, 18 February 2016 at 03:39:30 UTC, Kapps wrote:

On Thursday, 18 February 2016 at 03:38:42 UTC, Kapps wrote:


This is what I did with OpenGL for my own bindings. It had 
some nice benefits like having the documentation be (mostly) 
accessible.


Unfortunately, turns out the spec contains a lot of typos, 
including wrong arguments / function names.


And I should clarify, ahead of time to generate a .d file, not 
at compile-time. :P


Yea, by "directly", I meant using D templates and CTFE, not a 
script that generates a D file.


For my own project, since I just need the function names, I'm 
using a Python script to generate a CSV file from the OpenGL 
spec, then importing/parsing that with D. It's neat, but slows 
down the build a lot. I haven't had any issues with typos, though.




Re: Pseudo-random numbers in [0, n), covering all numbers in n steps?

2016-02-26 Thread Alex Parrill via Digitalmars-d
On Friday, 26 February 2016 at 14:59:43 UTC, Andrei Alexandrescu 
wrote:

On 02/25/2016 06:46 PM, Nicholas Wilson wrote:
The technical name for the property of distribution you 
describe is

  k-Dimensional Equidistribution (in this case k=1).
I would suggest taking a look at http://www.pcg-random.org.
They claim to have both arbitrary period and k-Dimensional 
Equidistribution


Thanks, that's indeed closest! A hefty read. Anyone inclined to 
work on a PCG random implementation? -- Andrei


Beat you to it: http://code.dlang.org/packages/d-pcg

It only has the basic generators at the moment. I'll look into 
the more advanced stuff.


(Also 64 bit outputs aren't implemented yet because they need a 
128 bit uint for state. I noticed D reserves the names cent and 
ucent but hasn't implemented them)


Re: Pseudo-random numbers in [0, n), covering all numbers in n steps?

2016-02-26 Thread Alex Parrill via Digitalmars-d
On Friday, 26 February 2016 at 16:45:53 UTC, Andrei Alexandrescu 
wrote:

On 02/26/2016 10:19 AM, Alex Parrill wrote:
On Friday, 26 February 2016 at 14:59:43 UTC, Andrei 
Alexandrescu wrote:

On 02/25/2016 06:46 PM, Nicholas Wilson wrote:
The technical name for the property of distribution you 
describe is

  k-Dimensional Equidistribution (in this case k=1).
I would suggest taking a look at http://www.pcg-random.org.
They claim to have both arbitrary period and k-Dimensional
Equidistribution


Thanks, that's indeed closest! A hefty read. Anyone inclined 
to work

on a PCG random implementation? -- Andrei


Beat you to it: http://code.dlang.org/packages/d-pcg

It only has the basic generators at the moment. I'll look into 
the more

advanced stuff.

(Also 64 bit outputs aren't implemented yet because they need 
a 128 bit
uint for state. I noticed D reserves the names cent and ucent 
but hasn't

implemented them)


That's pretty darn cool! I don't see a way to create a 
generator given a range expressed as a power of two. Say e.g. a 
client wants to say "give me a generator with a cycle of 
32768". Is this easily doable?


Also: when the generator starts running, does it generate a 
full cycle, or it starts with a shorter cycle and then settle 
into a full cycle?



Thanks,

Andrei


My port at the moment only provides the basic pcg32 generators; 
their behavior should match the pcg32_* classes from the C++ 
library.


I'll look into which of the generators support the 
equidistributed results, though I suspect that they are 
distributed across the entire domain of the result type.




Re: Synchronization on immutable object

2016-03-22 Thread Alex Parrill via Digitalmars-d

On Tuesday, 22 March 2016 at 10:49:01 UTC, Johan Engelen wrote:

Quiz: does this compile or not?
```
class Klass {}
void main() {
immutable Klass klass = new Klass;
synchronized (klass)
{
// do smth
}
}
```

A D object contains two (!) hidden pointers. Two? Yes: the 
vtable pointer __vptr, and a pointer to a Monitor struct which 
contains a synchronization mutex.
The synchronized statement is lowered into druntime calls that 
*write* to __monitor.

Quiz answer: yes it compiles. Oops?

This is related to an earlier discussion on whether TypeInfo 
objects should be immutable or not [1]. Should one be able to 
synchronize on typeid(...) or not?

```
interface Foo {}
void main() {
synchronized(typeid(Foo)) {
   // do smth
}
}
```
Because LDC treats the result of typeid as immutable, the code 
is bugged depending on the optimization level.


[1] 
http://forum.dlang.org/post/entjlarqzpfqohvnn...@forum.dlang.org


As long as there are no race conditions in the initial creation of 
the mutex, it shouldn't matter, even though it does internally 
mutate the object, because it's transparent to developers (unless 
you're going out of your way to access the internal __monitor 
field).


What exactly is bugged about the typeid example under LDC?


Re: Tristate - wanna?

2016-03-26 Thread Alex Parrill via Digitalmars-d

On Saturday, 26 March 2016 at 22:11:53 UTC, Nordlöw wrote:
On Saturday, 26 October 2013 at 15:41:32 UTC, Andrei 
Alexandrescu wrote:
While messing with std.allocator I explored the type below. I 
ended up not using it, but was surprised that implementing it 
was quite nontrivial. Should we add it to stdlib?


I can think of many variants of for this. What about

{ yes, // 1 chance
  no, // 0 chance
  likely, // > 1/2 chance
  unlikely, // < 1/2 chance
  unknown // any chance
}

?

Partial implementation at

https://github.com/nordlow/justd/blob/master/fuzzy.d#L15

:)


If we're going down that route, might as well use state tables. 
With CTFE + templates, you could possibly do something like this:



immutable State[] StateOrTable = ParseStateTable!q{
         | yes | no       | likely | unlikely | unknown
---------+-----+----------+--------+----------+---------
yes      | yes | yes      | yes    | yes      | yes
no       | yes | no       | likely | unlikely | unknown
likely   | yes | likely   | likely | likely   | likely
unlikely | yes | unlikely | likely | unlikely | unknown
unknown  | yes | unknown  | likely | unknown  | unknown
};

State opBinary(string op)(State other)
    if(op == "||") {
    return StateOrTable[this.value*NumStates+other.value];
}

Though I see issues with having a generic n-state value template 
and also rewriting `a != b` to `!(a == b)`; I suspect that there 
may be some class of values where the two are not equivalent.


Re: Tristate - wanna?

2016-03-26 Thread Alex Parrill via Digitalmars-d

On Sunday, 27 March 2016 at 02:19:56 UTC, crimaniak wrote:

On Saturday, 26 March 2016 at 22:39:58 UTC, Alex Parrill wrote:

...

If we're going down that route, might as well use state tables.

...

For Boolean, Ternary, and N-state logic:

a && b == min(a, b)
a || b == max(a, b)
~a == N-1-a

why to optimize it more?


That's incorrect for the `unknown` value.

Let's say you represented true as 1f, false as 0f, and unknown as 
NaN...


std.algorithm.max(0, 0f/0f) = 0, but should be NaN
std.math.fmax(1, 0f/0f) = NaN, but should be 1

N-state logic isn't just about probabilities either. According to 
Wikipedia, Bochvar's three-valued logic has an "internal" state, 
where any operation with `internal` results in `internal` 
(similar to NaN). More broadly, the values and operations between 
them could be whatever the mathematician or developer wants, so a 
truth table is one of the ways to generally specify an operator.


Re: Can we check the arguments to format() at compile time?

2016-04-01 Thread Alex Parrill via Digitalmars-d

On Friday, 1 April 2016 at 21:25:46 UTC, Yuxuan Shui wrote:
Clang has this nice feature that it will warn you when you 
passed wrong arguments to printf:


#include <stdio.h>
int main(){
    long long u = 10;
    printf("%c", u);
}

clang something.c:
something.c:4:15: warning: format specifies type 'int' but the 
argument has type 'long long' [-Wformat]


With the CTFE power of D, we should be able to do the same 
thing when the format string is available at compile time. 
Instead of throwing exceptions at run time.


Not as-is, because the format string is a runtime argument and 
not a compile-time constant.


Consider:

writefln(rand() >= 0.5 ? "%s" : "%d", 123);

It's certainly possible with a new, templated writef function. 
Hypothetically:


writefln_ctfe!"%s"(1234); // would fail


Re: debugger blues

2016-04-01 Thread Alex Parrill via Digitalmars-d
Comparing a logging framework with a basic print function is not 
a fair comparison. I'd like to point out that Python's logging 
module[1] also takes format strings.


So this really is just an argument of D's writeln vs Python's 
print. In which case, this seems like a small thing to get upset 
over. Yes, implicit spacing is convenient, but in some cases it 
isn't. It's a fairly arbitrary choice. I'd argue that D's writeln 
follows Python's philosophy of "Explicit is better than implicit" 
better than Python does.


But it's not overly hard to implement your own print function:

import std.stdio;

void print(Args...)(Args args) {
    foreach(i, arg; args) {
        if(i != 0) write(" ");
        write(arg);
    }
    writeln();
}
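
Usage is then `print("x =", 42);`, which writes `x = 42` followed 
by a newline.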

[1] https://docs.python.org/3/library/logging.html


Re: uniform initialization in D (as in C++11): i{...}

2016-04-05 Thread Alex Parrill via Digitalmars-d

On Tuesday, 5 April 2016 at 05:39:25 UTC, Timothee Cour wrote:

q{...} // comment (existing syntax)


That is syntax for a string literal, not a comment (though unlike 
other string literals, the contents must be valid D tokens and 
editors usually do not highlight them as strings).




Re: The Sparrow language

2016-04-06 Thread Alex Parrill via Digitalmars-d

On Wednesday, 6 April 2016 at 21:35:51 UTC, mate wrote:
On Wednesday, 6 April 2016 at 20:48:20 UTC, Lucian Radu 
Teodorescu wrote:

On Wednesday, 6 April 2016 at 18:27:25 UTC, BLM768 wrote:

On Wednesday, 6 April 2016 at 18:25:11 UTC, BLM768 wrote:


Aside from the explicit annotations, I don't see how their 
solution is more flexible than D's CTFE, but I might be 
missing something.


Never mind. Just saw their language embedding example. Neat!


Compared to CTFE, in Sparrow you can run at compile-time *any* 
algorithm you like. No restrictions apply. Not only you can do 
whatever your run-time code can do, but can also call external 
programs at compile-time.


Imagine that you are calling the D compiler from inside the 
Sparrow compiler to compile some D code that you encounter.


Wow, could be dangerous to compile source code.


Spawning processes during compilation is as dangerous as 
executing the program you just compiled (which you're going to 
do; the entire point of compiling a program is to execute it). I 
wouldn't be too concerned.


If you're hosting an online compiler, then you're (hopefully) 
already sandboxing the compiler (to prevent source code that does 
a lot of CTFE/has large static arrays/etc from eating all your 
cpu+mem) and the compiled program (for obvious reasons) anyway.


(Same argument for D's string import paths not allowing you into 
symlinks/subdirectories; there are more thorough sandboxing 
options for those concerned)


Re: Recursive vs. iterative constraints

2016-04-15 Thread Alex Parrill via Digitalmars-d
On Saturday, 16 April 2016 at 02:42:55 UTC, Andrei Alexandrescu 
wrote:

So the constraint on chain() is:

Ranges.length > 0 &&
allSatisfy!(isInputRange, staticMap!(Unqual, Ranges)) &&
!is(CommonType!(staticMap!(ElementType, staticMap!(Unqual, Ranges))) == void)


Noice. Now, an alternative is to express it as a recursive 
constraint:


(Ranges.length == 1 && isInputRange!(Unqual!(Ranges[0])))
  ||
  (Ranges.length == 2 &&
    isInputRange!(Unqual!(Ranges[0])) &&
    isInputRange!(Unqual!(Ranges[1])) &&
    !is(CommonType!(ElementType!(Ranges[0]), ElementType!(Ranges[1])) == void))

  || is(typeof(chain(rs[0 .. $ / 2], chain(rs[$ / 2 .. $]))))

In the latter case there's no need for additional helpers but 
the constraint is a bit more bulky.


Pros? Cons? Preferences?


Andrei


The former, definitely.

The only helper function you're getting rid of that I see is 
allSatisfy, which describes the constraint very well. The 
recursive constraint obscures what the intended constraint is 
(that the passed types are input ranges with a common type) 
behind the recursion.


std.experimental.allocator.make should throw on out-of-memory

2016-04-19 Thread Alex Parrill via Digitalmars-d
I'm proposing that std.experimental.allocator.make, as well as 
its friends, throw an exception when the allocator cannot satisfy 
a request instead of returning null.
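
Concretely, something along these lines (a rough sketch with 
made-up names, not a worked-out patch):

import std.experimental.allocator : make;

class AllocationFailure : Exception {
    this(string msg = "allocator could not satisfy the request",
         string file = __FILE__, size_t line = __LINE__) {
        super(msg, file, line);
    }
}

// Same interface as make, but never returns null.
auto makeOrThrow(T, Allocator, Args...)(auto ref Allocator alloc, auto ref Args args) {
    auto p = make!T(alloc, args);
    if (p is null)
        throw new AllocationFailure;
    return p;
}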


These are my reasons for doing so:

* It eliminates the incredibly tedious, annoying, and 
easy-to-forget boilerplate after every allocation to check if the 
allocation succeeded.


* Being unable to fulfill an allocation is an exceptional case 
[1], thus exceptions are a good tool for handling it. Performance 
on the out-of-memory case isn't a big issue; 99% of programs, 
when out of memory, either exit immediately or display an "out of 
memory" message to the user and cancel the operation.


* It fails faster and safer. It's better to error out immediately 
with a descriptive "out of memory" message instead of potentially 
continuing with an invalid pointer and potentially causing an 
invalid memory access, or worse, a vulnerability, if the 
developer forgot to check (which is common for boilerplate code).


* Creating a standard out-of-memory exception will make it easier 
to catch, instead of catching each library's own custom exception 
that they will inevitably define.


Hopefully, since std.experimental.allocator is experimental, 
we'll be allowed to make such backwards-incompatible changes.


What are other people's thoughts on this? Or has this been 
brought up before and I missed the discussion?


[1] It may not be very exceptional for "building-block" 
allocators that start with small but fast allocators that may 
fail a lot, in which case returning null is appropriate. However, 
AFAIK allocators internally use the `allocate` method of the 
allocator, not make, et al., so they should be unaffected by this 
change.


Re: std.experimental.allocator.make should throw on out-of-memory

2016-04-20 Thread Alex Parrill via Digitalmars-d
On Wednesday, 20 April 2016 at 01:59:31 UTC, Vladimir Panteleev 
wrote:

On Tuesday, 19 April 2016 at 22:28:27 UTC, Alex Parrill wrote:
* It eliminates the incredibly tedious, annoying, and 
easy-to-forget boilerplate after every allocation to check if 
the allocation succeeded.


FWIW, you can turn a false-ish (!value) function call result 
into an exception by sticking .enforce() at the end. Perhaps 
this is the use case for a Maybe type.


Yes, enforce helps (and I forgot it reruns its argument), but its 
still boilerplate, and it throws a generic "enforcement failed" 
exception instead of a more specific "out of memory" exception 
unless you remember to specify your own exception or message.


Re: std.experimental.allocator.make should throw on out-of-memory

2016-04-20 Thread Alex Parrill via Digitalmars-d

On Wednesday, 20 April 2016 at 18:07:05 UTC, Alex Parrill wrote:
Yes, enforce helps (and I forgot it reruns its argument), but 
its still boilerplate, and it throws a generic "enforcement 
failed" exception instead of a more specific "out of memory" 
exception unless you remember to specify your own exception or 
message.


s/rerun/return/



Re: std.experimental.allocator.make should throw on out-of-memory

2016-04-20 Thread Alex Parrill via Digitalmars-d

On Wednesday, 20 April 2016 at 19:18:58 UTC, Minas Mina wrote:

On Tuesday, 19 April 2016 at 22:28:27 UTC, Alex Parrill wrote:
I'm proposing that std.experimental.allocator.make, as well as 
its friends, throw an exception when the allocator cannot 
satisfy a request instead of returning null.


[...]


I believe it was designed this way so that it can be used in 
@nogc code, although I might be wrong.


This is IMO a separate issue: that you cannot easily throw an 
exception without allocating it on the GC heap, making it too 
painful to use in nogc code.


I've heard mentions of altering exception handling to store the 
exception in a static memory area instead of allocating it on the 
heap; I'd much rather see that implemented than the band-aid 
solution of ignoring exception handling.


Re: std.experimental.allocator.make should throw on out-of-memory

2016-04-20 Thread Alex Parrill via Digitalmars-d

On Wednesday, 20 April 2016 at 20:23:53 UTC, Era Scarecrow wrote:


 The downside though is the requirement to throw may not be 
necessary. Having a failed attempt at getting memory and 
sleeping the program for 1-2 seconds before retrying could 
succeed on a future attempt. For games this would be a failure 
to have the entire game pause and hang until it acquires the 
memory it needs, while non critical applications (say 
compressing data for a backup) having it be willing to wait 
wouldn't be a huge disadvantage (assuming it's not at the start 
and already been busy for a while).


This would be best implemented in a "building block" allocator 
that wraps a different allocator and uses the `allocate` 
function, making it truly optional. It would also need a timeout 
to fail eventually, or else you possibly wait forever.


 This also heavily depends on what type of memory you're 
allocating. A stack based allocator (with fixed memory) 
wouldn't ever be able to get you more memory than it has fixed 
in reserve so immediately throwing makes perfect sense


True, if you are allocating from small pools then OOM becomes 
more likely. But most programs do not directly allocate from 
small pools; rather, they try to allocate from a small pool (ex. 
a freelist) but revert to a larger, slower pool when the smaller 
pool cannot satisfy a request. That is implemented using the 
building block allocators, which use the `allocate` method, not 
`make`.


Although IF the memory could be arranged and a second attempt 
made before deciding to throw could be useful (which assumes 
not having direct pointers to the memory in question and rather 
having an offset which is used. The more I think about it 
though the less likely this would be).


This is the mechanism used for "copying" garbage collectors. They 
can only work if they can know about and alter all references to 
the objects that they have allocated, which makes them hard to 
use for languages with raw pointers like D.


Re: std.experimental.allocator.make should throw on out-of-memory

2016-04-21 Thread Alex Parrill via Digitalmars-d

On Thursday, 21 April 2016 at 13:42:50 UTC, Era Scarecrow wrote:

On Thursday, 21 April 2016 at 09:15:05 UTC, Thiez wrote:
On Thursday, 21 April 2016 at 04:07:52 UTC, Era Scarecrow 
wrote:
 I'd say either you specify the amount of retries, or give 
some amount that would be acceptable for some background 
program to retry for. Say, 30 seconds.


Would that actually be more helpful than simply printing an 
OOM message and shutting down / crashing? Because if the limit 
is 30 seconds *per allocation* then successfully allocating, 
say, 20 individual objects might take anywhere between 0 
seconds and almost (but not *quite*) 10 minutes. In the latter 
case the program is still making progress but for the user it 
would appear frozen.


 Good point. Maybe having a global threshold of 30 seconds 
while it waits and retries every 1/2 second.


 In 30 seconds a lot can change. You can get gigabytes of 
memory freed from other processes and jobs. In the end it 
really depends on the application. A backup utility that you 
run overnight gives you 8+ hours to do the backup that probably 
takes up to 2 hours to actually do. On the other hand no one 
(sane anyways) wants to wait if they are actively using the 
application and would prefer it to die quickly and restart it 
when there's fewer demands on the system.


I'm proposing that make throws an exception if the allocator 
cannot satisfy a request (ie allocate returns null). How the 
allocator tries to allocate is its own business; if it wants to 
sleep (which I don't believe would be helpful outside of 
specialized cases), make doesn't need to care.


Sleeping would be very bad for certain workloads (you mentioned 
games), so having make itself sleep would be inappropriate.


Re: Threads

2016-05-02 Thread Alex Parrill via Digitalmars-d

On Monday, 2 May 2016 at 16:39:13 UTC, vino wrote:

Hi All,

 I am a newbie for D programming and need some help, I am 
trying to write a program using the example given in the book 
The "D Programming Language" written by "Andrei Alexandrescu" 
with few changes such as the example program read the input 
from stdin and prints the data to stdout, but my program reads 
the input from the file(readfile.txt) and writes the output to 
another file(writefile.txt), and I am getting the below errors 
while compiling


Error:

[root@localhost DProjects]# dmd readwriteb.d
readwriteb.d(7): Error: cannot implicitly convert expression 
(__aggr2859.front()) of type ubyte[] to immutable(ubyte)[]
readwriteb.d(15): Error: cannot implicitly convert expression 
(receiveOnly()) of type immutable(ubyte)[] to 
std.outbuffer.OutBuffer

[root@localhost DProjects]#

Version: DMD64 D Compiler v2.071.0

Code:

import std.algorithm, std.concurrency, std.stdio, std.outbuffer, std.file;

void main() {
   enum bufferSize = 1024 * 100;
   auto file = File("readfile.txt", "r");
   auto tid = spawn(&fileWriter);
   foreach (immutable(ubyte)[] buffer; file.byChunk(bufferSize)) {
      send(tid, buffer);
   }
}

void fileWriter() {
   auto wbuf = new OutBuffer();
   for (;;) {
      wbuf = receiveOnly!(immutable(ubyte)[])();
      write("writefile.txt", wbuf);
   }
}

From,
Vino.B


File.byChunk iirc returns a mutable ubyte[] range, not an 
immutable(ubyte)[] one. The easiest way to fix this would be to 
change the foreach variable to ubyte[] and make an immutable 
duplicate of it when sending, via idup.
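
That is, roughly:

foreach (ubyte[] buffer; file.byChunk(bufferSize)) {
    send(tid, buffer.idup); // immutable copy; safe to pass between threads
}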


wbuf is inferred to be an OutBuffer, but then you assign an 
immutable(ubyte)[] to it in your receive loop; that's a type error.


Re: The Case Against Autodecode

2016-05-13 Thread Alex Parrill via Digitalmars-d
On Friday, 13 May 2016 at 16:05:21 UTC, Steven Schveighoffer 
wrote:

On 5/12/16 4:15 PM, Walter Bright wrote:

10. Autodecoded arrays cannot be RandomAccessRanges, losing a 
key

benefit of being arrays in the first place.


I'll repeat what I said in the other thread.

The problem isn't auto-decoding. The problem is hijacking the 
char[] and wchar[] (and variants) array type to mean 
autodecoding non-arrays.


If you think this code makes sense, then my definition of sane 
varies slightly from yours:


static assert(!hasLength!R && is(typeof(R.init.length)));
static assert(!is(ElementType!R == R.init[0]));
static assert(!isRandomAccessRange!R && is(typeof(R.init[0])) && is(typeof(R.init[0 .. $])));


I think D would be fine if string meant some auto-decoding 
struct with an immutable(char)[] array backing. I can accept 
and work with that. I can transform that into a char[] that 
makes sense if I have no use for auto-decoding. As of today, I 
have to use byCodePoint, or .representation, etc. and it's very 
unwieldy.


If I ran D, that's what I would do.

-Steve


Well, the "auto" part of autodecoding means "automatically doing 
it for plain strings", right? If you explicitly do decoding, I 
think it would just be "decoding"; there's no "auto" part.


I doubt anyone is going to complain if you add in a struct 
wrapper around a string that iterates over code units or 
graphemes. The issue most people have, as you say, is the fact 
that the default for strings is to decode.




Re: Discuss vulkan erupted, the other auto-generated vulkan binding

2016-05-18 Thread Alex Parrill via Digitalmars-d

On Monday, 16 May 2016 at 12:10:58 UTC, ParticlePeter wrote:

This is in respect to announce thread:
https://forum.dlang.org/post/mdpjqdkenrnuxvruw...@forum.dlang.org

Please let me know if you had the chance to test the 
functionality as requested in the announce thread.

All other question are welcome here as well of course.

Cheers, ParticlePeter


Apparently GitHub didn't add my own repo to my list of watch 
repos, meaning no notifications for them...


I'll look over the pull request. Let's not split this project.


Re: A technique to mock "static interfaces" (e.g. isInputRange)

2016-05-25 Thread Alex Parrill via Digitalmars-d

On Wednesday, 25 May 2016 at 21:38:23 UTC, Atila Neves wrote:
There was talk in the forum of making it easier to come up 
instantiations of say, an input range for testing purposes. 
That got me thinking of how mocking frameworks make it easy to 
pass in dependencies without having to write a whole new "test 
double" type oneself. How would one do that for what I've 
started calling "static interfaces"? How would one mock an 
input range?


There's no way to inspect the code inside the lambda used in 
isInputRange or any other similar template constraint (.codeof 
would be awesome, but alas it doesn't exist), but a regular OOP 
interface you can reflect on... and there's even one called 
InputRange in Phobos... hmmm.


The result is in the link below. The implementation is a bit 
horrible because I cowboyed it. I should probably figure out 
how to make it more template mixin and less of the string 
variety, but I was on a roll. Anyway, take a look at the unit 
test at the bottom first and complain about my crappy 
implementation later:



https://gist.github.com/atilaneves/b40c4d030c70686ffa3b8543018f6a7e


If you have an interface already I guess you could just mock 
that, but then you wouldn't be able to test templated code with 
it. This technique would fix that problem.



Interesting? Crazy? Worth adding to unit-threaded? Phobos 
(after much cleaning up)?



Atila


Have you looked at std.typecons.AutoImplement at all? 
http://dlang.org/phobos/std_typecons.html#.AutoImplement


It seems to do something similar to what you're doing, though it 
generates a subclass rather than a struct (for the purposes of 
testing contracts and stuff, I don't think it matters too much).


Re: Transient ranges

2016-05-29 Thread Alex Parrill via Digitalmars-d
On Sunday, 29 May 2016 at 17:45:00 UTC, Steven Schveighoffer 
wrote:

On 5/27/16 7:42 PM, Seb wrote:

So what about the convention to explicitely declare a 
`.transient` enum

member on a range, if the front element value can change?


enum isTransient(R) = is(typeof(() {
    static assert(isInputRange!R);
    static assert(hasIndirections!(ElementType!R));
    static assert(!allIndrectionsImmutable!(ElementType!R)); // need to write this
}));

-Steve

-Steve


allIndrectionsImmutable could probably just be is(T : immutable) 
(ie implicitly convertible to immutable). Value types without (or 
with immutable only) indirections should be convertible to 
immutable, since the value is being copied.
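
i.e. roughly (an untested sketch):

import std.range.primitives : isInputRange, ElementType;

// Flag a range as transient unless its element type implicitly
// converts to immutable (no mutable indirections).
enum bool isTransient(R) = isInputRange!R
    && !is(ElementType!R : immutable(ElementType!R));

static assert(!isTransient!(int[])); // int converts to immutable(int)
// a byLine-style range yielding reused char[] buffers would be flagged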




Re: Transient ranges

2016-05-30 Thread Alex Parrill via Digitalmars-d
On Monday, 30 May 2016 at 12:53:07 UTC, Steven Schveighoffer 
wrote:

On 5/30/16 5:35 AM, Dicebot wrote:
On Sunday, 29 May 2016 at 17:25:47 UTC, Steven Schveighoffer 
wrote:
What problems are solvable only by not caching the front 
element? I

can't think of any.


As far as I know, currently it is done mostly for performance 
reasons -
if result is fitting in the register there is no need to 
allocate stack
space for the cache, or something like that. One of most 
annoying

examples is map which calls lambda on each `front` call :
https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d#L587-L590


Maybe my understanding of your post is incorrect. You said "It 
is impossible to correctly define input range without caching 
front which may not be always possible and may have negative 
performance impact."


I'm trying to figure out which cases caching makes the solution 
impossible.


One case is wrapping a network stream: a TCP input range that 
yields ubyte[] chunks of data as they are read from the socket, a 
joiner to join the blocks together, a map that converts the bytes 
to characters, and take to consume each message.


The issue is that, in a naive implementation, creating the TCP 
range requires reading a chunk from the socket to populate front. 
Since I usually set up my stream objects, before using them, the 
receiver range will try to receive a chunk, but I haven't sent a 
request to the server yet, so it would block indefinitely.


Same issue with popFront: I need to pop the bytes I've already 
used for the previous request, but calling popFront would mean 
waiting for the response for the next request which I haven't 
sent out yet.


The solution I used was to delay actually receiving the chunk 
until front was called, which complicates the implementation a 
bit.


Re: Free the DMD backend

2016-05-31 Thread Alex Parrill via Digitalmars-d

On Tuesday, 31 May 2016 at 20:18:34 UTC, default0 wrote:
I have no idea how licensing would work in that regard but 
considering that DMDs backend is actively maintained and may 
eventually even be ported to D, wouldn't it at some point 
differ enough from Symantecs "original" backend to simply call 
the DMD backend its own thing?


The way I understand it is that no matter how different a 
derivative work (such as any modification to DMD) gets, it's 
still a derivative work, and is subject to the terms of the 
license of the original work.


Re: Code security: "auto" / Reason for errors

2016-06-01 Thread Alex Parrill via Digitalmars-d

On Wednesday, 1 June 2016 at 14:52:29 UTC, John Nixon wrote:
On Wednesday, 2 March 2016 at 21:37:56 UTC, Steven 
Schveighoffer wrote:


Pointer copying is inherent in D. Everything is done at the 
"head", deep copies are never implicit. This is a C-like 
language, so one must expect this kind of behavior and plan 
for it.


I sympathise with Ozan. What is the best reference you know 
that explains this fully?


Slices/dynamic arrays are literally just a pointer (arr.ptr) and 
a length (arr.length).


Assigning a slice simply copies the ptr and length fields, 
causing the slice to refer to the entire section of data. Slicing 
(arr[1..2]) returns a new slice with the ptr and length fields 
updated.


(This also means you can slice arbitrary pointers; ex. 
`(cast(ubyte*) malloc(1024))[0..1024]` to get a slice of memory 
backed by C malloc. Very useful.)


The only magic happens when increasing the size of the array, via 
appending or setting length, which usually allocates a new array 
from the GC heap, except when D determines that it can get away 
with not doing so (i.e. when the data points somewhere in the GC 
heap and there's no data in use after the end of the array; 
capacity also looks at GC metadata).
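
A short illustration:

void main() {
    int[] a = [1, 2, 3, 4];
    int[] b = a;         // copies only ptr and length; same underlying data
    int[] c = a[1 .. 3]; // ptr = a.ptr + 1, length = 2

    b[0] = 99;
    assert(a[0] == 99);  // a and b alias the same memory
    assert(c == [2, 3]);

    a = a[0 .. 2];       // shrinking the view never copies
    a ~= 42;             // data past a's end is still in use, so this
                         // append reallocates instead of stomping on it
    assert(b == [99, 2, 3, 4]); // b still refers to the original block
}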


Re: Shared, but synchronisation is not desired for multithreading

2016-06-04 Thread Alex Parrill via Digitalmars-d

On Saturday, 4 June 2016 at 15:11:51 UTC, tcak wrote:
If you ignore the discouraged __gshared keyword, to be able to 
share a variable between threads, you need to be using "shared" 
keyword.


While designing your class with "shared" methods, the compiler 
directly assumes that objects of this class must be protected 
against threading problems.


There can be three USAGEs of a class object:

1. It will be non-shared. So, it is stored in TLS, and only one 
thread can access it.


2. It will be shared. But programmer knows that the object is 
designed as "shared" with the purpose of reading its value from 
multiple threads.


3. It will be shared. But the object must be synchronised. 
Because programmer knows that multiple threads will be reading 
from and writing to object.


Currently, in a normal coding environment (I am not talking 
about using extra parameters, increasing complexity etc), 
distinguishing between 2 and 3 does not seem like possible. You 
prepare your shared class, and its methods are designed to be 
either sycnhronised or not synchronised. There is no middle 
point unless you define the same method with different names, 
or use a flag like "bool run_this_method_synchronised_please".


So, what I did is using UDA for this with the name @ThreadSafe. 
e.g.


@ThreadSafe auto myObject = new shared MyClass();

In a method of the class, I make the declaration as following:

public void foo() shared {
    static if( std.traits.hasUDA!( this, ThreadSafe ) ){
        // lock mutex
        scope(exit){
            // unlock mutex
        }
    }

    // do your normal operations
}

This way, if the object is desired to be doing synchronisation, 
you only add an attribute to it.


There are some small problems here, those are related to D's 
implementation right now:


1. There is no standard way of saying @ThreadSafe. You are 
supposed to be defining it. If language was to be defining a 
standard attribute as @ThreadSafe, it could be used everywhere 
for this purpose.


2. If a method is defined as shared, compiler immediately warns 
the programmer to use core.atomic.atomicOp. If codes are 
already being designed as thread-safe by the programmer, normal 
variable operations could be used without any concern.


3. As far as I remember, there were some talks about 
synchronized keyword not being used much. Maybe its usage could 
be changed to support this @ThreadSafe system.


If the method is thread-safe, it should be marked as shared. 
Otherwise, it should not be shared. I think you're trying to mark 
functions that aren't actually thread safe but you call in a 
`synchronized` context as shared, when they should not be.


If you've made guarantees that the shared object you are 
modifying can only be accessed by one thread, you can cast it to 
unshared, and call its thread-unsafe methods (which are now 
safe). I think there were plans to get `synchronized` to do this 
for you, but it doesn't, which makes it fairly unwieldy.
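
For example (a sketch; the cast is only sound because the lock 
guarantees exclusive access):

final class Counter {
    private int count;
    void increment() { ++count; } // ordinary, thread-unsafe method
}

void bump(shared Counter c) {
    synchronized (c) {
        // We hold the object's monitor, so no other thread can touch it;
        // casting away shared here is the usual (clunky) idiom.
        auto unlocked = cast(Counter) c;
        unlocked.increment();
    }
}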


(It also doesn't help that many "thread-safe" functions in D 
aren't marked as shared where they really ought to be, ex. all 
the functions in core.sync.mutex)


Re: I implemented delegates in D

2016-06-09 Thread Alex Parrill via Digitalmars-d

On Thursday, 9 June 2016 at 21:02:26 UTC, maik klein wrote:

Has this been done before?


Well, yes, the entire point of delegates is to be able to capture 
variables (as opposed to function pointers, which cannot).



auto createADelegate(int captured) {
    return (int a) => captured + a;
}

void main() {
    auto dg1 = createADelegate(5);
    auto dg2 = createADelegate(32);
    assert(dg1(5) == 10);
    assert(dg1(10) == 15);
    assert(dg2(8) == 40);
    assert(dg2(32) == 64);
}

https://dpaste.dzfl.pl/90ebc29651f6

(Unfortunately template delegates, like the ones used with map, 
don't keep their captured variables alive after the captured 
variables go out of scope, but it doesn't sound like you need 
those)


Re: implicit conversions to/from shared

2016-07-10 Thread Alex Parrill via Digitalmars-d

On Sunday, 10 July 2016 at 13:02:17 UTC, ag0aep6g wrote:
While messing with atomicLoad [1], I noticed that dmd lets me 
implicitly convert values to/from shared without restrictions. 
It's in the spec [2]. This seems bad to me.


[...]


Atomic loading and storing, from what I understand, is usually 
limited to about a word on most architectures. I don't think it 
would be good to implicitly define assignment and referencing as 
atomic operations, since it would be limited. IMO concurrent 
access is better off being explicit anyway.


I don't think there is an issue with converting unshared 
reference types to shared (ex. ref T -> ref shared(T) or T* -> 
shared(T)*); worst you get is some extra synchronization.


Re: A few notes on choosing between Go and D for a quick project

2015-03-13 Thread Alex Parrill via Digitalmars-d

On Friday, 13 March 2015 at 21:02:39 UTC, Almighty Bob wrote:

The language reference is pretty abysmal too. EG...

The language "Introduction" spends all it's time talking about 
phases of compilation. That's like introducing someone to 
driving by explaining how the internal combustion engine works.


The page on templates starts with scope and instantiation 
details. The examples at the start have alias parameters which 
aren't explained until half way down the page.


I mean, why not start the template page with something that 
people will find familiar and then build on that?


It has that feel all the way through. You go looking for things 
and they never seem to be where you expect, or they are so 
tersely explained, it feels like it's a reference for people 
already experts in D. Which is fine if that's what it's meant 
to be ... but if you want to attract new people you need a 
"guided tour" rather than a "technical spec".


The language reference is a reference; it's supposed to be a
technical spec, not a starting point. That said, the only links 
to starting points are the book (which isn't free) and the 
"Programming in D" pages at
http://ddili.org/ders/d.en/index.html (which are now buried in 
the Articles tab)


Re: Novel list

2015-03-25 Thread Alex Parrill via Digitalmars-d
On Wednesday, 25 March 2015 at 12:21:32 UTC, Martin Krejcirik 
wrote:

does poorly at annoying syntax => not annoying syntax


Yea, these charts are confusing, with the double negatives and 
the green up arrows next to negative aspects. A pro/con list 
would be much more clear.


Re: Human unreadable documentation - the ugly seam between simple D and complex D

2015-03-26 Thread Alex Parrill via Digitalmars-d

On Thursday, 26 March 2015 at 19:32:53 UTC, Idan Arye wrote:

...snip...


So tl;dr; make the template constraints in ddoc less prominent?

The "new library reference preview" under Resources seems to 
already have this (example: 
http://dlang.org/library/std/algorithm/searching/starts_with.html)


Re: How does the D compiler get updated on travis-ci.org?

2015-03-26 Thread Alex Parrill via Digitalmars-d

On Thursday, 26 March 2015 at 20:40:50 UTC, Gary Willoughby wrote:

On Thursday, 26 March 2015 at 19:37:06 UTC, extrawurst wrote:
i think it is already available on travis. this it what works 
for me:

https://github.com/Extrawurst/unecht/blob/master/.travis.yml

```
language: d

d:
 - dmd-2.067.0
```


I'm just using:

language: d

I hoped this would pick up the latest version.


From the source [1], it looks like it's hardcoded to default to 
2.066.1.


[1] 
https://github.com/travis-ci/travis-build/blob/master/lib/travis/build/script/d.rb


Re: isForwardRange failed to recognise valid forward range

2015-05-04 Thread Alex Parrill via Digitalmars-d

On Monday, 4 May 2015 at 10:25:23 UTC, ketmar wrote:

here is some code for your amusement:

struct Input {
  auto opSlice (size_t start, size_t end) {
static struct InputSlice {
  @property bool empty () { return false; }
  @property char front () { return 0; }
  void popFront () {}
  InputSlice save () { return this; }
}
import std.range.primitives;
static assert(isForwardRange!InputSlice);
return InputSlice();
  }
}

fixing code like this tames `isForwardRange`:

template isForwardRange(R)
{
enum bool isForwardRange = isInputRange!R && is(typeof(
(inout int = 0)
{
R r1 = R.init;
//old: static assert (is(typeof(r1.save) == R));
auto s1 = r1.save;
static assert (is(typeof(s1) == R));
}));
}

i wonder if some other checking primitives has similar bug.


Add @property to save.


Re: Adding a read primitive to ranges

2015-05-04 Thread Alex Parrill via Digitalmars-d

On Monday, 4 May 2015 at 00:07:27 UTC, Freddy wrote:
Would it be a bad idea to add a read primitive to ranges for 
streaming?


struct ReadRange(T){
    size_t read(T[] buffer);
    //and | or
    T[] read(size_t request);

    /+ empty,front,popFront,etc +/
}



It seems redundant to me. It's semantically no different than 
iterating through the range normally with front/popFront. For 
objects where reading large amounts of data is more efficient 
than reading one at a time, you can implement a byChunk function 
like std.stdio.File does.


Re: Adding a read primitive to ranges

2015-05-04 Thread Alex Parrill via Digitalmars-d

On Monday, 4 May 2015 at 19:23:08 UTC, Freddy wrote:

On Monday, 4 May 2015 at 15:16:25 UTC, Alex Parrill wrote:

The problem is that all the functions in std.range, 
std.algorithm, and many other wrappers would ignore byChunk and 
produce much slower code.


How so? `file.byChunk(4096).joiner` is a range that acts as if 
you read each byte out of the file one at a time, but actually 
reads them in 4096-byte buffers. It's still compatible with all 
of the range and algorithm functions.


Re: Adding a read primitive to ranges

2015-05-05 Thread Alex Parrill via Digitalmars-d

On Tuesday, 5 May 2015 at 01:28:03 UTC, Freddy wrote:

Wait, Bad example,

void func(R)(R range){ // expects range of ubyte
    ubyte[] data = range.read(VERY_BIG_NUMBER);
    ubyte[] other_data = range.read(OTHER_VERY_BIG_NUMBER);
}

which would be more optimal for a file but still works for 
other ranges, compared to looping though the ranges read 
appending to data.


How would it be more optimal? As I said, if you pass in 
`file.byChunk(some_amount).joiner`, this will still read the 
file in large chunks. It's less optimal now because `read` has to 
allocate an array on every call (easily avoidable by passing in a 
reusable buffer, but still).


Equivalent code with ranges:

auto range = file.byChunk(4096).joiner;
ubyte[] data = range.take(VERY_BIG_NUMBER).array;
ubyte[] other_data = range.take(OTHER_VERY_BIG_NUMBER).array;
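
For completeness, a compilable version of that could look like 
the sketch below. Since `take` copies a value-type range, I've 
wrapped the range in `std.range.refRange` so the second `take` 
continues where the first one stopped; the file name and sizes 
are made up.

import std.algorithm : joiner;
import std.array : array;
import std.range : refRange, take;
import std.stdio : File;

void main()
{
    enum VERY_BIG_NUMBER = 1024 * 1024;
    enum OTHER_VERY_BIG_NUMBER = 64 * 1024;

    auto file = File("data.bin", "rb");
    auto chunked = file.byChunk(4096).joiner; // reads 4 KiB at a time, yields single ubytes
    auto range = refRange(&chunked);          // so take() below advances `chunked` itself

    ubyte[] data = range.take(VERY_BIG_NUMBER).array;
    ubyte[] other_data = range.take(OTHER_VERY_BIG_NUMBER).array;
}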


Re: Sneak preview into std.allocator's porcelain

2015-05-07 Thread Alex Parrill via Digitalmars-d
On Thursday, 7 May 2015 at 02:28:45 UTC, Andrei Alexandrescu 
wrote:

http://erdani.com/d/phobos-prerelease/std_experimental_allocator_porcelain.html

Andrei


The links for allocator.temp and allocator.typed lead to 404 
pages.


Re: std.xml2 (collecting features)

2015-05-11 Thread Alex Parrill via Digitalmars-d

Can we please not turn this thread into an XML vs JSON flamewar?

XML is one of the most popular data formats (for better or for 
worse), so a parser would be a good addition to the standard 
library.


Re: Type tuple pointers

2015-05-21 Thread Alex Parrill via Digitalmars-d

On Thursday, 21 May 2015 at 14:55:21 UTC, Freddy wrote:

Why don't pointers and .sizeof work with typetuples

import std.typetuple;
void main() {
    TypeTuple!(int,char)* test;
    TypeTuple!(long,int).sizeof;
}

$ rdmd test
test.d(3): Error: can't have pointer to (int, char)
test.d(4): Error: no property 'sizeof' for tuple '(long, int)'
Failed: ["dmd", "-v", "-o-", "test.d", "-I."]

I know they can be wrapped in structs, but shouldn't this work 
in the first place?


'Type' tuples are compile-time tuples of types, values, and 
aliases. They aren't types themselves, so `TypeTuple!(int, char) 
var` doesn't make sense.


I think you want regular tuples from std.typecons.
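
For example, both operations from the snippet above compile fine 
with a std.typecons.Tuple, since it's an ordinary struct type 
(just an illustration):

import std.stdio;
import std.typecons : Tuple;

void main()
{
    Tuple!(int, char)* test;              // a pointer to a tuple is fine
    writeln(Tuple!(long, int).sizeof);    // and so is .sizeof (e.g. 16 on a typical 64-bit target)
}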


Re: Type tuple pointers

2015-05-21 Thread Alex Parrill via Digitalmars-d

On Thursday, 21 May 2015 at 15:37:42 UTC, Dicebot wrote:

On Thursday, 21 May 2015 at 15:30:59 UTC, Alex Parrill wrote:
They aren't types themselves, so `TypeTuple!(int, char) var` 
doesn't make sense.


Sadly, you are wrong on this one - this is actually a valid 
variable declaration which will create two distinct local 
variables and expose their aliases in the resulting symbol list 
named 'var'.


So it creates a variable for each type in the tuple, and stores 
the aliases in `var`? Huh, didn't know that.


But still, `TypeTuple!(int,char)` isn't a real type, so having a 
pointer to one doesn't make sense.
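
For reference, the behavior Dicebot describes looks like this in 
practice (AliasSeq being the std.meta name for what 
std.typetuple calls TypeTuple):

import std.meta : AliasSeq;
import std.stdio;

void main()
{
    AliasSeq!(int, char) var;       // expands into two separate variables
    var[0] = 42;
    var[1] = 'x';
    writeln(var[0], " ", var[1]);   // prints: 42 x
}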


Re: Question about garbage collection specification

2015-06-15 Thread Alex Parrill via Digitalmars-d

On Monday, 15 June 2015 at 15:33:41 UTC, rsw0x wrote:


this doesn't make any sense, it's referring to an object p (of 
void*), not the location of p. It's setting the lsb of p to 1 
and claiming it's undefined behavior, when it's clearly not.


Unless I misunderstand it.


`p` is assumed to be a pointer. `cast(int) p` gets the integer 
representation of the pointer (though it should really use 
`size_t` or `uintptr_t`). It then does an integer bitwise OR and 
converts the result back to a pointer.


It's undefined behavior in D because the spec just said so 
(regardless of its defined behavior in C).
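
For reference, the pattern being discussed is presumably along 
these lines (illustrative only; I'm using size_t for the integer 
round-trip):

void main()
{
    void* p = new int;
    // Stash a flag in the low bit of a GC-managed pointer. The spec calls this
    // undefined behavior, since the GC is free to no longer treat p as a
    // pointer into its heap once the bit is set.
    p = cast(void*)(cast(size_t) p | 1);
}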


Re: What have you done with UDAs?

2015-06-22 Thread Alex Parrill via Digitalmars-d

On Monday, 22 June 2015 at 19:09:40 UTC, weaselcat wrote:
I never seem to use them for anything, has anyone else done 
anything interesting with them?


I'm writing a program that can accept subcommands via either the 
command line (ex. `prog mycmd 1 2`) or via a shell. I represent 
each command as a function, use UDAs to store the help text and 
if they can be ran from the command line or shell. The command 
list, full help text, and command dispatcher is generated at 
compile time via templates, CTFE, and static foreach.


An example command:

@("") // arguments
@("Sets the amount of time to increment the clock on each 
frame.") // description

@ShellOnly // can't be ran from command line
int cmd_set_time_per_frame(string[] args) {
// ...
}
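
For the curious, the general shape of the dispatcher is sketched 
below. It is not the actual code; the ShellOnly type, the cmd_ 
naming convention used for lookup, and the helper names are just 
assumptions for illustration.

module cmdsketch;

import std.algorithm.searching : startsWith;
import std.stdio;
import std.traits : getUDAs, hasUDA;

struct ShellOnly {}   // marker UDA: command cannot be run from the command line

@("")   // arguments
@("Sets the amount of time to increment the clock on each frame.")   // description
@ShellOnly
int cmd_set_time_per_frame(string[] args) {
    // ...
    return 0;
}

@("<a> <b>")   // arguments
@("Adds two integers and prints the result.")   // description
int cmd_add(string[] args) {
    import std.conv : to;
    writeln(args[0].to!int + args[1].to!int);
    return 0;
}

// Dispatches a command given on the command line. The foreach over
// __traits(allMembers, ...) is unrolled at compile time, so the command
// table itself costs nothing at runtime beyond the name comparisons.
int runFromCommandLine(string name, string[] args) {
    foreach (member; __traits(allMembers, mixin(__MODULE__))) {
        static if (member.startsWith("cmd_")) {
            alias fn = __traits(getMember, mixin(__MODULE__), member);
            static if (!hasUDA!(fn, ShellOnly)) {   // shell-only commands are skipped here
                if (name == member[4 .. $])
                    return fn(args);
            }
        }
    }
    stderr.writeln("unknown command: ", name);
    return 1;
}

// Prints the help text for every command, pulled straight from the UDAs.
void printHelp() {
    foreach (member; __traits(allMembers, mixin(__MODULE__))) {
        static if (member.startsWith("cmd_")) {
            alias fn = __traits(getMember, mixin(__MODULE__), member);
            alias udas = getUDAs!(fn, string);   // [0] = arguments, [1] = description
            writeln(member[4 .. $], " ", udas[0], "\n    ", udas[1]);
        }
    }
}

void main(string[] argv) {
    if (argv.length < 2)
        printHelp();
    else
        runFromCommandLine(argv[1], argv[2 .. $]);
}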


Re: What have you done with UDAs?

2015-06-23 Thread Alex Parrill via Digitalmars-d

On Tuesday, 23 June 2015 at 14:52:30 UTC, Dmitry Olshansky wrote:

An example command:

@("") // arguments
@("Sets the amount of time to increment the clock on each 
frame.") //

description
@ShellOnly // can't be ran from command line
int cmd_set_time_per_frame(string[] args) {
 // ...
}


Awesome. Is it open-sourced?
How about handling argument conversion automatically (via to! 
and/or custom functions)?


Say:
@(" ")
@CmdName("plus")
void add(int a, int b)
{
    writeln(a+b);
}


To be automagically callable like:

./prog plus 2 4


Not yet; it's ATM a bit coupled with the application I'm writing.

Specifying types with the arguments is definitely feasible, but I 
haven't gotten around to writing it yet. You could also possibly 
specify flag arguments (--foo) by specifying default parameters.
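
The conversion itself would be roughly along these lines (a 
hypothetical sketch, not code from my program; it leans on 
std.traits.Parameters and std.conv.to):

import std.conv : to;
import std.stdio : writeln;
import std.traits : Parameters;

void add(int a, int b) { writeln(a + b); }

// Converts each string argument to the matching parameter type, then calls fn.
void callWithStrings(alias fn)(string[] args)
{
    Parameters!fn typed;                 // declares one variable per parameter
    foreach (i, T; Parameters!fn)
        typed[i] = args[i].to!T;
    fn(typed);                           // the tuple expands into the argument list
}

void main()
{
    callWithStrings!add(["2", "4"]);     // prints 6
}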


(somewhat related: std.getopt is kinda bad; you can't get help 
text without successfully parsing the arguments, and `required` 
breaks `--help`)


Re: Phobos addition formal review: std.experimental.allocator

2015-06-26 Thread Alex Parrill via Digitalmars-d
The Windows MMap allocator only keeps one HANDLE around, and 
creates a new one on each `allocate`. Thus, `deallocate` closes 
the latest handle, regardless of what it was actually passed, so 
it leaks.


If I'm reading the docs for `CreateFileMapping` right, you should 
be able to close the handle after calling `MapViewOfFile`; the 
internal data will persist until you unmap the memory region.


Re: Phobos addition formal review: std.experimental.allocator

2015-06-26 Thread Alex Parrill via Digitalmars-d

On Friday, 26 June 2015 at 14:56:21 UTC, Dmitry Olshansky wrote:

On 26-Jun-2015 17:51, Alex Parrill wrote:
The Windows MMap allocator only keeps one HANDLE around, and 
creates a
new one on each `allocate`. Thus, `deallocate` closes the 
latest handle,

regardless of what it was actually passed, so it leaks.



Actually I don't see why Windows couldn't just use VirtualAlloc 
w/o messing with files.


Yeah, VirtualAlloc seems like a better fit. (I don't actually 
know the Windows API that well.)


If I'm reading the docs for `CreateFileMapping` right, you 
should be
able to close the handle after calling `MapViewOfFile`; the 
internal

data will persist until you unmap the memory region.


IIRC no you can't. I'd need to double check that though.


Here's the paragraph I'm reading:

Mapped views of a file mapping object maintain internal 
references to the object, and a file mapping object does not 
close until all references to it are released. Therefore, to 
fully close a file mapping object, an application must unmap all 
mapped views of the file mapping object by calling 
UnmapViewOfFile and close the file mapping object handle by 
calling CloseHandle. These functions can be called in any order.
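
In code, that ordering would look roughly like this (Windows-only 
and untested on my end, and it assumes sizes below 4 GiB; a 
sketch of the idea, not the Phobos implementation):

version (Windows)
{
    import core.sys.windows.windows;

    void* mapAnonymous(size_t size)
    {
        auto h = CreateFileMappingW(INVALID_HANDLE_VALUE, null, PAGE_READWRITE,
                                    0, cast(DWORD) size, null);
        if (h is null)
            return null;
        scope (exit) CloseHandle(h);   // safe: the mapped view keeps the mapping object alive
        return MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, size);
    }

    void unmap(void* p)
    {
        UnmapViewOfFile(p);            // releases the last reference to the mapping
    }
}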


Re: Rant after trying Rust a bit

2015-07-22 Thread Alex Parrill via Digitalmars-d
I'm not at all familiar with Rust, so forgive me if I'm 
misinterpreting something.


On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:

Cargo
-
Rust has a default package manager much like Dub. The main 
difference is that Cargo has been endorsed by the Rust team and 
is an official product.


I think I read that this may happen soon.


Traits
--
...


You can make a `conformsToSomeInterface!T` template, and use 
`static assert`. D ranges, and the upcoming std.allocator, 
already use this sort of 'interfaces without polymorphism'.


Ex. `static assert(isInputRange!(MyCoolRange));`
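
A cut-down example of how such a check is typically written 
(made-up names; the real ones in std.range.primitives follow the 
same is(typeof(...)) pattern):

enum isDuck(T) = is(typeof((T t) {
    t.quack();            // must have a callable quack()
    int legs = t.legs;    // must expose an integer leg count
}));

struct Mallard {
    int legs = 2;
    void quack() {}
}

static assert(isDuck!Mallard);
static assert(!isDuck!int);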


Macros
--
...


Most of what macros in C were used for is now done with 
templates, static if, string mixins, and CTFE (I don't know how 
Rust's macros work). Tools could theoretically evaluate 
`mixin`s, but that effectively requires a D interpreter; a 
library for doing that would be really nice.
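
A contrived example: where C might use a macro to stamp out 
boilerplate, D can run an ordinary function at compile time and 
mix the resulting string in (all names made up):

string makeGetter(string name) {
    return "int get_" ~ name ~ "() { return " ~ name ~ "; }";
}

struct Config {
    int width, height;
    mixin(makeGetter("width"));    // generates get_width() at compile time
    mixin(makeGetter("height"));   // generates get_height()
}

void main() {
    auto c = Config(640, 480);
    assert(c.get_width() == 640 && c.get_height() == 480);
}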



Borrowing
-
...


Look into `std.typecons.Unique`, though I've seen people posting 
that they don't like it. (I haven't used it much myself; my one 
use case was sending it through `std.concurrency.send`, which it 
turned out not to work with.)



Yes, D's community is pretty small. Growing it isn't something 
you can just code your way to; you have to market the language. 
And it's the community that creates most of the tools and 
packages.