SPAM Captcha?

2013-08-26 Thread Tyler Jameson Little
I just tried to post a reply to Dicebot on the announce subforum 
through the web ui and I got a reCAPTCHA (first time ever) and I 
could not pass the CAPTCHA, and I tried several times.


Did something change in the spam filter?

Has anyone else had problems with the reCAPTCHA? Is there 
something messed up with my account?


I have tried on Firefox and Chromium (Linux, no flash; FF has JS 
blocked for dlang.org). I don't know if I'll be able to reply to 
this ='(


Re: std.serialization: pre-voting review / discussion

2013-08-23 Thread Tyler Jameson Little

On Friday, 23 August 2013 at 13:39:47 UTC, Dicebot wrote:

On Friday, 23 August 2013 at 13:34:04 UTC, ilya-stromberg wrote:
It's a serious issue. Maybe it's more important than range 
support. For example, I have to change a class (bug fixing, new 
features, etc.), but it is compatible with the previous version 
(example: it's always possible to convert int to long). In 
that case I can't use std.serialization and have to write my own 
solution (for example, saving data in a CSV file).


I don't think it is an issue at all. The behavior you want can't 
be defined in a generic way, at least not without a lot of UDA 
help or a similar declarative approach. In other words, the fact 
that those two classes are interchangeable in the context of 
serialization exists only in the mind of the programmer, not in 
the D type system.


More than that, such behavior goes seriously against the grain of 
D being a strongly typed language. I think the functionality you 
want belongs in a more specialized module, not the generic 
std.serialization - maybe even a format-specific one.


What about adding delegate hooks somewhere? These delegates 
would be called on errors like an invalid type or a missing field.


I'm not saying this needs to be there in order to release, but 
would this be a direction we'd like to go eventually? I've seen 
similar approaches elsewhere (e.g. Node.js's HTTP parser).


Re: std.serialization: pre-voting review / discussion

2013-08-23 Thread Tyler Jameson Little

On Friday, 23 August 2013 at 20:29:40 UTC, Jacob Carlborg wrote:

On 2013-08-23 16:39, Tyler Jameson Little wrote:

What about adding delegate hooks somewhere? These delegates 
would be called on errors like an invalid type or a missing field.

I'm not saying this needs to be there in order to release, but 
would this be a direction we'd like to go eventually? I've seen 
similar approaches elsewhere (e.g. Node.js's HTTP parser).


std.serialization already supports delegate hooks for missing 
values:


https://dl.dropboxusercontent.com/u/18386187/docs/std.serialization/std_serialization_serializer.html#.Serializer.errorCallback


Awesome!
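For anyone else wondering what the hook looks like, here is a minimal sketch based on the linked docs. Note the exact `errorCallback` signature and module path are assumptions taken from that documentation page, not verified against the reviewed code:

```d
import std.serialization; // hypothetical module path from the review

void main()
{
    auto archive = new XmlArchive!(char);
    auto serializer = new Serializer(archive);

    // Assumed signature: a delegate receiving the serialization error.
    // Here we just swallow errors for missing/invalid fields
    // instead of aborting the whole (de)serialization.
    serializer.errorCallback = (SerializationException e) {
        // log and continue
    };
}
```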


Re: Range interface for std.serialization

2013-08-22 Thread Tyler Jameson Little

On Thursday, 22 August 2013 at 07:16:11 UTC, Jacob Carlborg wrote:

On 2013-08-22 05:13, Tyler Jameson Little wrote:

I don't like this because it still caches the whole object 
into memory.

In a memory-restricted application, this is unacceptable.


It needs to store all serialized reference types, otherwise it 
cannot properly serialize a complete object graph. We don't 
want duplicates. Example:


The following code:

auto bar = new Bar;
bar.a = 3;

auto foo = new Foo;
foo.a = bar;
foo.b = bar;

Is serialized as:

<object runtimeType="main.Foo" type="main.Foo" key="0" id="0">
    <object runtimeType="main.Bar" type="main.Bar" key="a" id="1">
        <int key="a" id="2">3</int>
    </object>
    <reference key="b">1</reference>
</object>

Then foo.b is serialized as just a reference, not the complete 
object, because that object has already been serialized. The 
serializer needs to keep track of that.


Right, but it doesn't need to keep the serialized data in memory.

I think one call to popFront should release part of the 
serialized

object. For example:

struct B {
int c, d;
}

struct A {
int a;
B b;
}

The JSON output of this would be:

{
    "a": 0,
    "b": {
        "c": 0,
        "d": 0
    }
}

There's no reason why the serializer can't output this in 
chunks:


Chunk 1:

{
    "a": 0,

Chunk 2:

    "b": {

Etc...


It seems hard to keep track of nesting. I can't see how pretty 
printing using this technique would work.


Can't you just keep a counter? When you enter anything that would 
increase the indentation level, increment the indentation level. 
When leaving, decrement. At each level, insert whitespace equal 
to indentationLevel * whitespacePerLevel. This seems pretty 
trivial, unless I'm missing something.
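For what it's worth, the counter idea fits in a few lines. This is my own sketch (the name `Indenter` is mine, not from Orange):

```d
import std.array : replicate;

// Minimal indentation tracker: enter/leave adjust the level,
// pad() yields the whitespace prefix for the current level.
struct Indenter
{
    size_t level;
    enum whitespacePerLevel = 4;

    void enter() { ++level; }    // e.g. on '{' or an opening tag
    void leave() { --level; }    // e.g. on '}' or a closing tag
    string pad() const { return " ".replicate(level * whitespacePerLevel); }
}

void main()
{
    Indenter ind;
    ind.enter();
    assert(ind.pad() == "    ");     // one level deep
    ind.enter();
    assert(ind.pad() == "        "); // two levels
    ind.leave();
    assert(ind.pad() == "    ");
}
```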


Also, I didn't check, but it turns off pretty-printing by 
default, right?



This is just a read-only property, which arguably doesn't break
misconceptions. There should be no reason to assign directly 
to a range.


How should I set the data used for deserializing?


How about passing it in with a function? Each range passed this 
way would represent a single object, so the current 
deserialize!Foo(InputRange) would work the same way it does now.



I agree that (de)serializing a large list of objects lazily is
important, but I don't think that's the natural interface for a
Serializer. I think that each object should be lazily serialized
instead to maximize throughput.

If a Serializer is defined as only (de)serializing a single
object, then serializing a range of Type would be as simple as
using map() with a Serializer (getting a range of Serialize). If
the allocs are too much, then the same serializer can be used,
but serialize one-at-a-time.

My main point here is that data should be written as it's being
serialized. In a networked application, it may take a few packets
to encode a larger object, so the first packets should be sent
ASAP.


As usual, feel free to destroy =D


Again, how does one keep track of nesting in formats like XML, 
JSON and YAML?


YAML will take a little extra care since whitespace is 
significant, but it should work well enough as I've described 
above.


Re: Range interface for std.serialization

2013-08-22 Thread Tyler Jameson Little

On Thursday, 22 August 2013 at 14:48:57 UTC, Dicebot wrote:
On Thursday, 22 August 2013 at 03:13:46 UTC, Tyler Jameson 
Little wrote:

On Wednesday, 21 August 2013 at 20:21:49 UTC, Dicebot wrote:
It should be a range of strings - one call to popFront should 
serialize one object from the input object range and provide a 
matching string buffer.


I don't like this because it still caches the whole object 
into memory. In a memory-restricted application, this is 
unacceptable.


Well, in memory-restricted applications, having large objects at 
all is unacceptable. The rationale is that you hardly ever want a 
half-deserialized object. If the environment is very restrictive, 
smaller objects will be used anyway (a list of smaller objects).


It seems you and I are trying to solve two very different 
problems. Perhaps if I explain my use-case, it'll make things 
clearer.


I have a server that deserializes data from a socket, processes 
that data, then updates internal state and sends notifications to 
clients (which involves serialization as well).


When new clients connect, they need all of this internal state, 
so the easiest way to do this is to create one large object out 
of all of the smaller objects:


class Widget {
}

class InternalState {
Widget[string] widgets;
... other data here
}

InternalState isn't very big by itself; it just has an 
associative array of Widget pointers with some other rather small 
data. When serialized, however, this can get quite large. Since 
archive formats are orders of magnitude less efficient than 
in-memory stores, caching the archived version of the internal 
state can be prohibitively expensive.


Let's say the serialized form of the internal state is 5MB, and I 
have 128MB available, while 50MB or so is used by the 
application. This leaves about 70MB, so I can only support 14 
connected clients.


With a streaming serializer (per object), I'll get that 5MB down 
to a few hundred KB and I can support many more clients.



...
There's no reason why the serializer can't output this in 
chunks


Outputting on its own is not useful to discuss - in a pipe model, 
output matches input. What is the point in outputting partial 
chunks of a serialized object if you still need to provide it as 
a whole to the input?


This only makes sense if you are deserializing right after 
serializing, which is *not* a common thing to do.


Also, it's much more likely that one needs to serialize a single 
object (as in a REST API, a 3D model parser [think COLLADA], or a 
config parser). Providing a range seems to fit only a small 
niche: people who need to dump the state of the system. With 
single-object serialization and chunked output, you can define 
your own range to get the same effect, but with an API as you 
detailed, you can't avoid memory problems without going outside 
std.


Re: Download page needs a tidy up

2013-08-22 Thread Tyler Jameson Little

On Thursday, 22 August 2013 at 07:34:09 UTC, Jacob Carlborg wrote:

On 2013-08-22 06:28, Tyler Jameson Little wrote:

Why not sniff the platform? I think the Firefox & Dart websites do 
this.

This can be retrieved with navigator.platform:
https://developer.mozilla.org/en-US/docs/Web/API/window.navigator.platform

Of course, the others should be easily accessible.


Or just using the user agent, since it has to work on all major 
browsers.


Right, and doing it server-side would allow users with JS 
disabled to still be supported.


Any support for this would be miles ahead of the current 
situation.


Re: Why I chose D over Ada and Eiffel

2013-08-22 Thread Tyler Jameson Little

On Thursday, 22 August 2013 at 10:34:58 UTC, John Colvin wrote:
On Thursday, 22 August 2013 at 02:06:13 UTC, Tyler Jameson 
Little wrote:

- array operations (int[] a; int[] b; auto c = a * b;)
 - I don't think these are automagically SIMD'd, but there's 
always hope =D


That isn't allowed. The memory for c must be pre-allocated, and 
the expression then becomes c[] = a[] * b[];


Oops, that was what I meant.


Is it SIMD'd?

It depends. There is a whole load of hand-written assembler for 
simple-ish expressions on builtin types, on x86. x86_64 is only 
supported with 32bit integer types because I haven't finished 
writing the rest yet...


However, I'm not inclined to do so at the moment as we need a 
complete overhaul of that system anyway as it's currently a 
monster*.  It needs to be re-implemented as a template 
instantiated by the compiler, using core.simd. Unfortunately 
it's not a priority for anyone right now AFAIK.


That's fine. I was under the impression that it didn't use SIMD 
at all, and that SIMD only works if explicitly requested.


I assume this is something that can be done at runtime:

int[] a = [1, 2, 3];
int[] b = [2, 2, 2];
auto c = a[] * b[]; // dynamically allocates on the stack; 
computes w/SIMD

writeln(c); // prints [2, 4, 6]

I haven't yet needed this, but it would be really nice... btw, it 
seems D does not have dynamic stack allocation. I know C99 does 
(variable-length arrays), so I know this is technically possible. 
Is this something we could get? If so, I'll start a thread about 
it.
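For reference, here is the form that compiles today: the destination must be pre-allocated, and the element-wise expression then works as expected (this is plain current D; whether SIMD kicks in depends on the hand-written druntime loops discussed above):

```d
void main()
{
    int[] a = [1, 2, 3];
    int[] b = [2, 2, 2];

    auto c = new int[a.length]; // destination must be pre-allocated
    c[] = a[] * b[];            // element-wise multiply (may use SIMD)

    assert(c == [2, 4, 6]);
}
```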



*
hand-written asm loops. If fully fleshed out there would be:
  ((aligned + unaligned + legacy mmx) * (x86 + x64) + fallback 
loop)

  * number of supported expressions * number of different types
of them. Then there's unrolling considerations. See 
druntime/src/rt/arrayInt.d




Re: Why I chose D over Ada and Eiffel

2013-08-21 Thread Tyler Jameson Little

On Wednesday, 21 August 2013 at 17:45:29 UTC, Ramon wrote:

On Wednesday, 21 August 2013 at 17:17:52 UTC, deadalnix wrote:
You want no bugs? Go for Haskell. But you'll get no convenience 
or performance. The good thing is that if it does compile, you 
are pretty sure that it does the right thing.


Why should I? Isn't that what D promises, too (and probably is 
right)?


On another perspective: consider the question "Would you be 
willing to have all your software (incl. OS) running 10% or 
even 20% slower but without bugs, leaks, (unintended) backdoors 
and the like?"

My guess: upwards of 80% would happily chime "YES!".


Have you looked at Rust? It promises to solve a few of the 
memory-related problems mentioned:


- no null pointer exceptions
- deterministic free (with owned pointers)
- optional garbage collection

It also has generics, which are runtime generics if I'm not 
mistaken. It doesn't have inheritance in the traditional OO 
sense, so you may not like that. I really like that it's LLVM 
compiled, so performance and cross-compiling should be pretty 
much solved problems.


There are still things that keep me here with D though:

- templates instead of generics (little reason to take a 
performance hit)

- CTFE
- inheritance (though I hardly use classes, they're handy 
sometimes)

- community
- array operations (int[] a; int[] b; auto c = a * b;)
  - I don't think these are automagically SIMD'd, but there's 
always hope =D

- similar to C++, so it's easy to find competent developers


Re: Range interface for std.serialization

2013-08-21 Thread Tyler Jameson Little

On Wednesday, 21 August 2013 at 20:21:49 UTC, Dicebot wrote:

My 5 cents:

On Wednesday, 21 August 2013 at 18:45:48 UTC, Jacob Carlborg 
wrote:
If this alternative is chosen how should the range for the 
XmlArchive work like? Currently the archive returns a string, 
should the range just wrap the string and step through 
character by character? That doesn't sound very effective.


It should be a range of strings - one call to popFront should 
serialize one object from the input object range and provide a 
matching string buffer.


I don't like this because it still caches the whole object into 
memory. In a memory-restricted application, this is unacceptable.


I think one call to popFront should release part of the 
serialized object. For example:


struct B {
int c, d;
}

struct A {
int a;
B b;
}

The JSON output of this would be:

{
    "a": 0,
    "b": {
        "c": 0,
        "d": 0
    }
}

There's no reason why the serializer can't output this in chunks:

Chunk 1:

{
    "a": 0,

Chunk 2:

    "b": {

Etc...

Most archive formats should support chunking. I realize this may 
be a rather large change to Orange, but I think it's a direction 
it should be headed.



Alternative AO2:

Another idea is the archive is an output range, having this 
interface:


auto archive = new XmlArchive!(char);
archive.writeTo(outputRange);

auto serializer = new Serializer(archive);
serializer.serialize(new Object);

Use the output range when the serialization is done.


I can't imagine a use case for this. Adding ranges just because 
you can is not very good :)


I completely agree.

A problem with this, actually I don't know if it's considered 
a problem, is that the following won't be possible:


auto archive = new XmlArchive!(InputRange);
archive.data = archive.data;


What should this snippet do?

What one would usually expect from an OO API. The problem 
here is that the archive is typed for the original input range, 
but the range returned from data is of a different type.


Range-based algorithms don't assign ranges. Transferring data 
from one range to another is done via copy(sourceRange, 
destRange) and similar tools.


This is just a read-only property, which arguably doesn't break 
misconceptions. There should be no reason to assign directly to a 
range.


It looks like the difficulties come from your initial assumption 
that one call to serialize/deserialize implies one object - in 
that model, ranges are hardly useful. I don't think it is a 
reasonable restriction. What is practically useful is lazy 
(de)serialization of a large list of objects - and that is 
a natural job for ranges.


I agree that (de)serializing a large list of objects lazily is 
important, but I don't think that's the natural interface for a 
Serializer. I think that each object should be lazily serialized 
instead to maximize throughput.


If a Serializer is defined as only (de)serializing a single 
object, then serializing a range of Type would be as simple as 
using map() with a Serializer (getting a range of Serialize). If 
the allocs are too much, then the same serializer can be used, 
but serialize one-at-a-time.
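The map() idea can be sketched with a toy single-object serializer standing in for a real Serializer (`Point` and `toJson` here are stand-ins of my own, not the proposed API):

```d
import std.algorithm : map;
import std.format : format;

struct Point { int x, y; }

// Toy stand-in for a single-object serializer.
string toJson(Point p)
{
    return format(`{"x":%s,"y":%s}`, p.x, p.y);
}

void main()
{
    auto points = [Point(1, 2), Point(3, 4)];

    // Serializing a range is just map() over the single-object
    // serializer; each element is serialized lazily as the range
    // is consumed, so nothing is archived up front.
    auto serialized = points.map!toJson;

    assert(serialized.front == `{"x":1,"y":2}`);
}
```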


My main point here is that data should be written as it's being 
serialized. In a networked application, it may take a few packets 
to encode a larger object, so the first packets should be sent 
ASAP.


As usual, feel free to destroy =D


Re: Download page needs a tidy up

2013-08-21 Thread Tyler Jameson Little

On Thursday, 22 August 2013 at 03:07:39 UTC, Manu wrote:

So I'm trying to find windows binaries for GDC and LDC...

First place I look is dlang.org/download. Appears to be for 
DMD... keep

looking.

I look at the GDC/LDC wiki pages. No links to binaries anywhere.
GDC and LDC home pages... no links to binaries.
Github doesn't host binaries anymore...

Where are they?

Turns out there are links to the GDC binaries (hosted on 
bitbucket) on

dlang.org/download.
...I didn't previously notice they were there, never scrolled 
down far

enough. The impression you get from the top of the page is that
dlang.org is just DMD-related, and I quickly dismissed it 
previously.



Also, where are the Linux (FreeBSD?, Mac OS X?) downloads for GDC? 
I get it through my package manager, but it seems there should be 
a download for it as well. If not a binary, then at least a 
tar.gz...



But there's still no LDC binary there... where is it?

This needs to be fixed. You can argue I'm retarded and 
ignorant, but as an
end user, it should take me no more than 5 seconds to find the 
download

button.


Agreed. Some may not be as persistent as you...

I suggest, on the front page of dlang.org, there should be a 
MASSIVE
button: DOWNLOAD D COMPILERS, and the download page should be 
tweaked to

be more obviously compiler agnostic.


I'm not too sure about this. DMD is the reference compiler, so 
people should be using that for learning, then graduate to GDC or 
LDC if they need something faster.


D1 and DMC consume an unreasonable amount of real estate, hiding 
GDC/LDC
(surely basically nobody is looking for those?), perhaps they 
should be
reduced to small test links with the other links down the 
bottom of the

page?
This will allow room to present GDC and LDC without scrolling.


+1

Do we really want D1 compilers that easily accessible? I assume 
everyone relying on them already has a copy, and even if they 
need another, they can click on an Archive link or something...



And why is there no LDC binary?


Re: Download page needs a tidy up

2013-08-21 Thread Tyler Jameson Little

On Thursday, 22 August 2013 at 03:33:38 UTC, Manu wrote:

On 22 August 2013 13:18, Brad Anderson e...@gnuk.net wrote:


On Thursday, 22 August 2013 at 03:07:39 UTC, Manu wrote:


So I'm trying to find windows binaries for GDC and LDC...

First place I look is dlang.org/download. Appears to be for 
DMD... keep

looking.

I look at the GDC/LDC wiki pages. No links to binaries 
anywhere.

GDC and LDC home pages... no links to binaries.
Github doesn't host binaries anymore...

Where are they?

Turns out there are links to the GDC binaries (hosted on 
bitbucket) on

dlang.org/download.
...I didn't previously notice they were there, never scrolled 
down far
enough. The impression you get from the top of the page is 
that
dlang.org is just DMD-related, and I quickly dismissed it 
previously.




But there's still no LDC binary there... where is it?

This needs to be fixed. You can argue I'm retarded and 
ignorant, but as an
end user, it should take me no more than 5 seconds to find 
the download

button.

I suggest, on the front page of dlang.org, there should be a 
MASSIVE
button: DOWNLOAD D COMPILERS, and the download page should 
be tweaked to

be more obviously compiler agnostic.

D1 and DMC consume an unreasonable amount of real estate, 
hiding GDC/LDC
(surely basically nobody is looking for those?), perhaps they 
should be
reduced to small test links with the other links down the 
bottom of the

page?
This will allow room to present GDC and LDC without scrolling.

And why is there no LDC binary?



I tried to fix some of these problems here[1].  Maybe someone 
can take

what I did and fix it up enough to be used.

1. https://github.com/D-Programming-Language/dlang.org/pull/304




Definitely an improvement!
Although if I were to be critical, I'd say when scrolling the 
page, I find
the almost random layout of the bright red buttons scattered 
all over the

place to be rather overwhelming.

I was also briefly confused by the 32bit/64bit scattered 
everywhere. My

initial assumption was that it specified the toolchain's target
architecture :/
But since it's the compiler's host arch, I'd say that for 
Windows where
32bit binaries will run on any version of windows and no 64bit 
binary is
offered, and OSX which has only ever been 64bit, there's no 
need to write

it for those platforms. It's just confusing.


Why not sniff the platform? I think the Firefox & Dart websites do 
this. This can be retrieved with navigator.platform: 
https://developer.mozilla.org/en-US/docs/Web/API/window.navigator.platform


Of course, the others should be easily accessible.


Re: std.serialization: pre-voting review / discussion

2013-08-20 Thread Tyler Jameson Little

On Tuesday, 20 August 2013 at 13:44:01 UTC, Daniel Murphy wrote:

Dicebot pub...@dicebot.lv wrote in message
news:luhuyerzmkebcltxh...@forum.dlang.org...


What I really don't like is the excessive amount of objects in 
the API. For example, I have found no reason why I need to create 
a serializer object to simply dump a struct's state. It is both 
boilerplate and runtime overhead I can't justify. The only state 
a serializer has is its archiver - and that is simply a 
collection of methods on its own. I'd prefer to be able to do 
something like `auto data = serialize!XmlArchiver(value);`




I think this is very important. Simple uses should be as simple 
as possible.


+1

This would enhance the 1-liner: write(file, 
serialize!XmlArchiver(InputRange));


We could even make nearly everything private except an 
isArchiver() template and serialize!().
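A sketch of what that minimal public surface could look like. Everything here (`isArchiver`, `Serializer`, and the archiver's `data` property) is an assumption based on the proposed API; this is not runnable against current Phobos:

```d
// Hypothetical convenience wrapper: the only public entry point.
auto serialize(Archiver, T)(T value)
    if (isArchiver!Archiver) // assumed trait template
{
    auto archive = new Archiver;
    auto serializer = new Serializer(archive);
    serializer.serialize(value);
    return archive.data; // assumed property holding the archived output
}

// Usage, matching Dicebot's one-liner:
// auto data = serialize!XmlArchiver(value);
```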


Re: A Discussion of Tuple Syntax

2013-08-20 Thread Tyler Jameson Little
On Tuesday, 20 August 2013 at 21:25:11 UTC, Andrei Alexandrescu 
wrote:

On 8/20/13 1:24 PM, Dicebot wrote:
That would be problematic to say the least. (There have been 
a few
discussions in this group; I've come to think auto expansion 
is fail.)


:O it is awesome!


Spawn of Satan would be a tad more appropriate.


+1

I can stand explicit expansion though, like Go's variadic 
argument ellipsis syntax: 
http://golang.org/doc/effective_go.html#append


OT: I'm not really a fan of D's variadic function syntax. I'd 
prefer it to be explicit:


int sum(int[] ar ...) {
    int total;
    foreach (x; ar) total += x;
    return total;
}
int[3] foo = [4, 5, 6];
sum(foo...);
sum(3, 4, foo...); // on my wish list...


Stuff like foo(myStructInstance.tupleof) is very
powerful tool for generic interfaces.


One problem with automatic expansion is that now there's a 
whole new kind - a single expression that doesn't quite have 
one value and one type, but instead is an explosion of other 
expressions.


snip...

Alternatively, we could do what Go does and prevent all packing 
altogether (if I understand what Go does correctly). That is, 
if a function returns a tuple you can't even keep that tuple 
together unless you use some handcrafted solution. In that 
case, before long there will be some Pack structure and some 
pack helper function, viz. the converse of .expand.


+1

I've been itching for multiple returns in D for a long time, and 
this seems like a nice way to add them. I think I'd prefer to use 
tuple syntax instead, though, just so there's less magic:

(int, int) doStuff() {
return (1, 2);
}

// syntax error
auto a, b = doStuff();
// ok
auto (a, b) = doStuff();


Clearly both packing and unpacking tuples are necessary.

snip...

Andrei


OT: is the only thing stopping us from using the nice (x,y) 
syntax for tuples the comma operator? If so, can we take that 
mis-feature behind the woodshed and shoot it? I sincerely hope 
nobody is relying on it...


Re: Possible solution to template bloat problem?

2013-08-20 Thread Tyler Jameson Little

On Wednesday, 21 August 2013 at 01:46:37 UTC, Ramon wrote:

On Tuesday, 20 August 2013 at 22:58:24 UTC, John Colvin wrote:

On Tuesday, 20 August 2013 at 22:49:40 UTC, Ramon wrote:
Happily I'm stupid and completely missed the condescending 
tone of an evident genius. Instead I'll just be grateful that 
it pleased one of the D masters to drop some statement down 
at me at all.



Awesome, thank you and keep destroying.


"destroying"??? Which part of "not to bash it", "D means a lot 
to me", and "D is, no doubts, an excellent and modern incarnation 
of C/C++. As far as I'm concerned D is *the* best C/C++ 
incarnation ever, hands down." was too complicated to understand 
for your genius brain?


I knew this would happen at some point:
Andrei uses "destroy" as a positive term to denote a 
well-reasoned, powerful argument/response.


Chill :)


Uhum.

Well, where I live, "to destroy" has a pretty clear and very 
negative meaning.
I took that post (of Mr. Alexandrescu) as very rude and 
condescending, and I do not intend to change my communication 
habits so as to understand "to destroy" as a positive statement 
or even a compliment. While I'm certainly not in a position to 
look down on J. Ichbiah, N. Wirth, and B. Meyer, I have 
certainly not spent the last 25+ years without understanding a 
thing or two about my profession, no matter what Mr. 
Alexandrescu seems to think.


No matter what Mr. Alexandrescu thinks or likes/dislikes or how 
he behaves I recognize (and praise) D as a very major 
improvement on C/C++ and as a very attractive language (by no 
means only for system programming).
Furthermore I recognize and respect Mr. Alexandrescu's profound 
knowledge of D and the (assumed and highly probable) value of 
his book and thank him for his work.


Maybe I'm simply profiting from being a lowly retarded 
creature who, as some kind of generous compensation by 
nature, is capable of recognizing the knowledge and work of 
others irrespective of their being friendly or outright rude 
and condescending.


As for Mr. Alexandrescu's book, I'm glad to report that I will 
no longer need to molest him with my lowly requests. I have 
found a way to buy an epub version (through InformIT/Pearson). 
"The D Programming Language" has been bought and already 
downloaded, and I'm looking forward to learning quite a lot 
about D from it.


Regards - R.


I'm sorry you felt offended by that, but I can assure you, he 
didn't mean anything negative by it. I probably won't convince 
you, but here are a few other times the word "destroy" has been 
used in a similar manner (the first is by Andrei):


http://forum.dlang.org/thread/kooe7p$255m$1...@digitalmars.com
http://forum.dlang.org/thread/iauldfsuxzifzofzm...@forum.dlang.org
http://forum.dlang.org/thread/rhwopozmtodmazcyi...@forum.dlang.org
http://forum.dlang.org/thread/jlbsreudrapysiaet...@forum.dlang.org

I agree though, it isn't the best term, especially for someone 
who isn't accustomed to this community, but it's part of the 
culture.


Cheers!


Re: A possible suggestion for the Foreach loop

2013-08-20 Thread Tyler Jameson Little

On Wednesday, 21 August 2013 at 02:46:06 UTC, Dylan Knutson wrote:

Hello,

I'd like to open up discussion regarding allowing foreach loops 
which iterate over a tuple of types to exist outside of 
function bodies. I think this would allow for templating 
constants and unittests easier. Take, for instance, this 
hypothetical example:


--
T foo(T)(ref T thing)
{
thing++; return thing * 2;
}

foreach(Type; TypeTuple!(int, long, uint))
{
unittest
{
Type tmp = 5;
assert(foo(tmp) == 12);
}

unittest
{
Type tmp = 0;
foo(tmp);
assert(tmp == 1);
}
}
--

Without the ability to wrap all of the unittests in a template, 
one would have to wrap the bodies of each unittest in an 
individual foreach loop. This is not only repetitive and 
tedious, but error prone, as changing the types tested then 
requires the programmer to change *every* instance of the 
foreach(Type; TypeTuple).


A similar pattern already exists in Phobos, for testing all 
variants of strings (string, dstring, and wstring) and char 
types, as eco brought to my attention. After taking a look at 
some of the unittests that employ this pattern, I'm certain 
that code clarity and unittest quality could be improved by 
simply wrapping all of the individual unittests themselves in a 
foreach as described above.


Now, I'm certainly no D expert, but I can't think of any 
breakages this change might impose on the language itself. So, 
I'd like to hear what the benevolent overlords and community 
think of the idea.


Why not just do this?

T foo(T)(ref T thing)
{
thing++; return thing * 2;
}

unittest
{
void test(T)(T thing, T exp) {
assert(foo(thing) == exp);
}

foreach(Type; TypeTuple!(int, long, uint))
{
test!Type(5, 12);
test!Type(0, 1);
}
}

Unless you imagine doing this for something other than unittests.


Re: std.serialization: pre-voting review / discussion

2013-08-19 Thread Tyler Jameson Little

On Monday, 19 August 2013 at 13:31:27 UTC, Jacob Carlborg wrote:

On 2013-08-19 15:03, Dicebot wrote:


Great! Are there any difficulties with the input?


It's just that I don't clearly know how the code will need to 
look, and I'm not particularly familiar with implementing 
range-based code.


Maybe we need some kind of doc explaining the idiomatic usage of 
ranges?


Personally, I'd like to do something like this:

auto archive = new XmlArchive!(char); // create an XML archive
auto serializer = new Serializer(archive); // create the 
serializer

serializer.serialize(foo);

pipe(archive.out, someFile);

Where pipe would read from the left and write to the right. My 
idea for an implementation is through using take():


void pipe(R)(R input, File output) // constraint: isInputRange!R
{
    import std.array : array;

    while (!input.empty) {
        // if Serializer has no data cached, goes through one step
        // and returns what it has
        auto arr = input.take(BUF_SIZE).array; // consumes the input
        output.write(arr);
    }
}

For now, I'd be happy for the serializer to process all data in 
serialize(), but change the behavior later to do step-through 
computation when calling take().


I don't know if this helps, and others are very likely to have 
better ideas.


Re: std.serialization: pre-voting review / discussion

2013-08-19 Thread Tyler Jameson Little

On Monday, 19 August 2013 at 18:06:00 UTC, Johannes Pfau wrote:

Am Mon, 19 Aug 2013 16:21:44 +0200
schrieb Tyler Jameson Little beatgam...@gmail.com:

On Monday, 19 August 2013 at 13:31:27 UTC, Jacob Carlborg 
wrote:

 On 2013-08-19 15:03, Dicebot wrote:

 Great! Are there any difficulties with the input?

 It just that I don't clearly know how the code will need to 
 look like, and I'm not particular familiar with implementing 
 range based code.


Maybe we need some kind of doc explaining the idiomatic usage 
of ranges?


Personally, I'd like to do something like this:

 auto archive = new XmlArchive!(char); // create an XML 
archive
 auto serializer = new Serializer(archive); // create the 
serializer

 serializer.serialize(foo);

 pipe(archive.out, someFile);


Your pipe function is the same as 
std.algorithm.copy(InputRange,

OutputRange) or std.range.put(OutputRange, InputRange);


Right, for some reason I couldn't find it... Moot point though.

An important question regarding ranges for std.serialization is 
whether

we want it to work as an InputRange or if it should _take_ an
OutputRange. So the question is

-
auto archive = new Archive();
Serializer(archive).serialize(object);
//Archive takes OutputRange, writes to it
archive.writeTo(OutputRange);

vs

auto archive = new Archive()
Serializer(archive).serialize(object);
//Archive implements InputRange for ubyte[]
foreach(ubyte[] data; archive) {}
-

I'd use the first approach as it should be simpler to 
implement. The
second approach would be useful if the ubyte[] elements were 
processed

via other ranges (map, take, ...). But as binary data is usually
not processed in this way but just stored to disk or sent over 
network
(basically streaming operations) the first approach should be 
fine.


+1 for the first way.

The first approach has the additional benefit that we can 
easily do

streaming like this:

auto archive = new Archive(OutputRange);
//Immediately write the data to the output range
Serializer(archive).serialize([1,2,3]);



This can make a nice one-liner for the general case:

Serializer(new Archive(OutputRange)).serialize(...);


Another point is that serialize in the above example could be
renamed to put. This way Serializer would itself be an 
OutputRange
which allows stuff like 
[1,2,3,4,5].stride(2).take(2).copy(archive);


Then serialize could also accept InputRanges to allow this:
archive.serialize([1,2,3,4,5].stride(2).take(2));
However, this use case is already covered by using copy so it 
would just

be for convenience.


This is nice, but I think I like serialize() better. I also don't 
think serializing a range is its primary purpose, so it doesn't 
make a lot of sense to optimize for the uncommon case.
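
For what it's worth, the put/OutputRange idea is easy to 
prototype; here is a minimal sketch (Serializer and DummyArchive 
are illustrative stand-ins, not the reviewed std.serialization 
types):

```d
// A Serializer with put() is itself an OutputRange, so range
// algorithms can feed it directly.
import std.range : isOutputRange;

struct Serializer(Archive)
{
    Archive archive;
    void put(T)(T value) { archive.write(value); }  // forward to archive
}

struct DummyArchive
{
    import std.array : Appender;
    Appender!(int[]) sink;
    void write(int v) { sink.put(v); }
}

void main()
{
    import std.algorithm : copy, stride;
    import std.range : take;

    auto s = Serializer!DummyArchive();
    static assert(isOutputRange!(typeof(s), int));

    // stride(2) yields 1, 3, 5; take(2) keeps 1 and 3
    [1, 2, 3, 4, 5].stride(2).take(2).copy(&s);
    assert(s.archive.sink.data == [1, 3]);
}
```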


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Tyler Jameson Little

On Sunday, 18 August 2013 at 14:24:38 UTC, Tobias Pankrath wrote:

On Sunday, 18 August 2013 at 08:38:53 UTC, ilya-stromberg wrote:

As I can see, we have a few options:
- accept std.serialization as is. If users can't use 
std.serialization due memory limitation, they should find 
another way.
- hold std.serialization until we will have new std.xml module 
with support of range/file input/output. Users should use 
Orange if they need std.serialization right now.
- hold std.serialization until we will have binary archive for 
serialization with support of range/file input/output. Users 
should use Orange if they need std.serialization right now.

- use another xml library, for example from Tango.

Ideas?


We should add a suitable range interface, even if it makes no 
sense with current std.xml and include std.serialization now. 
For many use cases it will be sufficient and the improvements 
can come when std.xml2 comes. Holding back std.serialization 
will only mean that we won't see any new backend from users and 
would be quite unfair to Jacob and may keep off other 
contributors.


I completely agree.

I'm the one that brought it up, and I mostly brought it up so the 
API doesn't have to change once std.xml is fixed. I don't think 
changing the return type to a range will be too difficult or 
memory expensive.


Also, since slices *are* ranges, shouldn't this just work?
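
It does work for the basic interface; a quick check that slices 
satisfy the input-range primitives:

```d
// Slices already satisfy the input-range interface, so a
// range-based deserialize() would accept them as-is.
import std.range : isInputRange;

static assert(isInputRange!(ubyte[]));  // byte slices are input ranges
static assert(isInputRange!(string));   // so are strings (over dchar)

void main()
{
    import std.range.primitives : front, popFront;
    ubyte[] data = [1, 2, 3];
    assert(data.front == 1);
    data.popFront();
    assert(data == [2, 3]);
}
```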


Re: Actor model D

2013-08-18 Thread Tyler Jameson Little

On Monday, 19 August 2013 at 03:11:00 UTC, Luís Marques wrote:
Can anyone please explain to me what it means for the D language 
to follow the Actor model, as the relevant Wikipedia page says 
it does? [1]


[1] 
http://en.wikipedia.org/wiki/Actor_model#Later_Actor_programming_languages


I assume this refers to task in std.parallelism and the various 
bits in std.concurrency for message passing.


I'm very surprised that D made the cut but Go didn't. I'm even 
more surprised that Rust was included even though it's not even 
1.0 yet while Go is at 1.1.1 currently.


I wish they had some kind of explanation or code examples to 
justify each one as in other articles, because I'm also very 
interested...


Re: cannot build LDC on OSX

2013-08-18 Thread Tyler Jameson Little

On Sunday, 18 August 2013 at 23:31:58 UTC, Timothee Cour wrote:

I'm bumping up this issue here
https://github.com/ldc-developers/ldc/issues/436 as it's been 
16 days with

no answer ...
am i doing something wrong?
it used to work a while ago, IIRC.


I don't know if you noticed, but there's an LDC-specific mailing 
list, so you may get more help there. I don't run Mac OS X, so I 
can't test, but there's a thread mentioning potential problems 
building on Mac OS X: 
http://forum.dlang.org/thread/mailman.1435.1369083694.4724.digitalmars-d-...@puremagic.com


What version of LLVM are you running? Also, what version of Mac 
OS X? According to the linked thread, only Mac OS 10.7 is 
supported.


Re: Rust vs Dlang

2013-08-17 Thread Tyler Jameson Little

On Saturday, 16 March 2013 at 14:42:58 UTC, Suliman wrote:
Hi folks! I have written a small article about Rust vs D. I hope 
that you will like it!


http://versusit.org/rust-vs-d


I don't know if you still follow this, but there's a typo here:

Now, let’s see how the code outputting the word "Hello" 10 
times will look. In Rust it will look like this:


fn main() {
for 100.times {
...

Should be:

...
for 10.times {
...

Also, the formatting still sucks and imports are missing for the 
D code, whereas imports are verbosely stated in Rust. FWIW, 
std::io::println can be stated as just println, since it's 
available by default. Don't know if this was the case when you 
wrote the post though...


Also, you don't mention the difference between D and Rust in 
switch statements. FWIW, Rust doesn't have switch statements; 
they're now match statements, which are closer to Haskell's 
case statements than to D's switch, because D's switch has 
optional fallthrough (goto case;) and Rust has no fallthrough.
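
For readers unfamiliar with D's explicit fallthrough, a small 
self-contained example:

```d
// In D, fallthrough must be written explicitly with `goto case`;
// silently falling through to the next case is a compile error.
import std.stdio : writeln;

void classify(int n)
{
    switch (n)
    {
        case 0:
            writeln("zero");
            goto case;  // explicit fallthrough into case 1
        case 1:
            writeln("small");
            break;
        default:
            writeln("other");
    }
}

void main()
{
    classify(0);  // prints "zero" then "small"
    classify(5);  // prints "other"
}
```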


Also, I second the objection to your exception example. D does 
have exceptions and there are some very important differences 
between them and Rust's exception handling (which is more like 
Go's recover() than D's catch).


Re: Redundancy in std.stdio name

2013-08-17 Thread Tyler Jameson Little

On Sunday, 18 August 2013 at 02:26:59 UTC, Paul Jurczak wrote:
As a frustrated C++ user, I was sniffing around D for a decade. 
Today, I started reading The D Programming Language. The 
first line of the first code example in this book:


import std.stdio

triggered my redundancy detector. Does it have to be std.stdio? 
How about std.io?


I think there's a replacement in the works, but I'm not sure of 
the status on that:


http://forum.dlang.org/thread/vnpriguleebpbzhkp...@forum.dlang.org#post-mailman.234.1362471736.14496.digitalmars-d:40puremagic.com


Re: D reaches 1000 questions on stackoverflow

2013-08-16 Thread Tyler Jameson Little
On Thursday, 15 August 2013 at 02:30:42 UTC, Jonathan M Davis 
wrote:

On Wednesday, August 14, 2013 22:56:30 Andre Artus wrote:

As with many things it depends on what you want to achieve.
Answering on SO is as much about establishing awareness as it 
is

about answering the question. For a newcomer to D StackOverflow
may be their first port of call, if questions go unanswered, or
are answered after long delays, then the likelihood of the 
person

persisting with D is diminished.


I answer questions on SO all the time, but I rarely ask 
anything there, and I
never ask anything D-related there. Of course, if my question 
is D-related,
I'm much more likely to _have_ to ask my question here to get a 
good answer
anyway just based on how many people would even know the 
answer, simply
because I know enough that anything I asked would be much more 
likely to be
esoteric and/or require in-depth knowledge. The experts are all 
here, and only

a small portion of them are on SO.

In any case, I'd say that in general, asking your question on 
SO gives it more
visibility to those outside of the core D community, but you're 
more likely to
get a good answer here than there, because there are more 
people here, and

this is where the experts are.

- Jonathan M Davis


First off, thank you so much for answering questions on SO. 
Answers there come up higher in Google search results than 
questions here, and several of your answers have been very 
helpful to me. There are others that answer, who I'm also 
grateful for, but your name always sticks out to me when I see an 
answer there.


It's true though that there are much better answers (and 
questions) here than on SO, and I'm beginning to shift my search 
from Google to the forum search, but this isn't something a 
newcomer will know to do, especially since many other languages 
put more emphasis on SO.


Re: Ideas for a brand new widget toolkit

2013-08-14 Thread Tyler Jameson Little

On Wednesday, 14 August 2013 at 02:23:07 UTC, Adam D. Ruppe wrote:

On Tuesday, 13 August 2013 at 20:33:48 UTC, Joakim wrote:
You mentioned X11 to me before, when we talked about this idea 
over email.


Ah yes. I think X's biggest problem though is that it doesn't 
do *enough*. The protocol is fairly efficient for what it does, 
but it just doesn't do enough so the commands aren't 
particularly compact when you start to get fancier.

snip...


I'm in the opposite camp. The server is never going to be able to 
support everything, and people are just going to fall back to 
rendering themselves anyway.


For example, say you need to screw a screw into a piece of wood. 
Let's also say you have the following: hammer, screwdriver 
(doesn't fit the screw), means to fab a new screwdriver. Let's 
also say that the screw needs to be in by tomorrow. Will you:


a) spend all night trying to get the screwdriver to work?
b) design a new screwdriver (also takes all night)
c) pound the screw in with the hammer (< 5 minutes) and promise 
yourself you'll make that screwdriver (which will never get done)


I think most people will go with c. This is exactly what happened 
with X. People didn't care enough to put font-rendering into X, 
so they wrote a rendering library and used it everywhere. This 
way they don't have to force their users to upgrade their X 
server, and they still get pretty fonts (at the risk of slow X 
forwarding, which hardly anyone uses anyway).


Anyway, the other thing too is all events go to the client 
application from the display server, and then the application's 
changes go back to the display server. Since X itself doesn't 
offer any kind of widgets, a text control for instance would 
work like this:


application -> display: draw the box
display -> application: key press event
application -> display: draw the character, advance the 
cursor... and if you have to scroll, btw, it might draw a whole 
lot of stuff (though if you know you're on a potentially 
networked app, you'd do something like XCopyArea and tell the 
display to move a whole block of data up without resending it 
all, but again the leaky abstraction can kill you)



But yeah, the event needs a round trip to react. On a LAN, 
you're probably ok, but what about on the open internet where 
there's some latency? Well, sometimes it is actually quite fine 
there too, I spend a *lot* of time using both X and ssh 
remotely, but sometimes it gets to be really, really annoying.


Just tried to X forward Chrome on a local lan. It worked, but it 
was dog slow. I can't imagine trying this over a dodgy network. 
The problem is likely that Chrome (like most apps) makes 
extensive use of the X frame buffer. This is the way many apps 
are going, and that trend is not likely to change.


So I'd want to do higher level events too, and the application 
can request them. For instance, if all you want is a basic text 
input, output something like <textarea></textarea> and let the 
display do the details.


(Another really nice benefit here, if we do it right, is it 
could use native controls on systems like Windows, or even pipe 
it to an external app on unix, and get a nice customized, 
adaptable system. Though while it sounds simpler in ways, this 
is easier said than done.)


With these higher level events and widgets, unless you need to 
override some event, it can just be handled without the round 
trip and improve response time on slow connections.


Though, if you do need real time event processing, you're back 
to the round trip, but meh, some speedup is better than none.


It just seems simpler to render into buffers on the client then 
upload entire chunks to the server. This will have less 
round-trips at the expense of larger packets each update.


For many high-latency networks, bandwidth is not a big problem. 
This is why websites try to reduce the number of downloads they 
have by increasing sizes of each download. For example, Opera 
Mobile worked well because they would render the page to an image 
before it got to the phone. Phones were on high-latency networks, 
so this meant fewer round-trips.


But, indeed, we don't want to go *too* far either, especially 
since then we'd end up with a web browser situation where 
people write their applications in the scripting language...



What basic widgets do you have in mind, to keep on the 
client-side?  Also, just widgets in the client or some basic 
layout too?


Layout would be nice too. Ideally, I'd love if my apps worked 
on both guis and text mode uis and laying them out would be a 
bit different.


This is nice. I've also thought about how to make this not suck. 
My initial thought was to see how ViM works (gvim runs 
stand-alone, vim runs in console).



For my crappygui.d, I'm aiming to do:

menus, labels, radio box, checkbox, buttons, grid layouts, text 
input, slider, number chooser, list boxes, and basic 2d 
drawing. Pretty much the basic stuff you get on html forms. 
Maybe more later, but 

Re: Future of string lambda functions/string predicate functions

2013-08-14 Thread Tyler Jameson Little

On Wednesday, 14 August 2013 at 05:44:50 UTC, Brad Anderson wrote:

On Wednesday, 14 August 2013 at 02:05:16 UTC, Manu wrote:
Can you give an example where you've actually used a string 
lambda before

where the predicate is more complex than a basic comparison?
Surely the solution to this problem is to offer a bunch of 
templates that

perform the most common predicates in place of unary/binaryFun?

So rather than: func!((a, b) => a < b)(args)
You use: func!binaryLess(args)

Or something like that?



How about just "less"?  It's what C++ STL uses (std::less, 
std::greater, std::negate, etc...). In C++, however, you have 
to do some truly ugly stuff to actually make use of the 
predefined function objects...bind1st...eww (the new C++11 bind 
is only marginally better but C++11 does have lambdas now at 
least).


The thing that annoys me about string vs proper lambda's, is 
that I never
know which one I'm supposed to use. I need to refer to 
documentation every

time.
Also, the syntax highlighting fails.


Or imitate bash:

Binary:
- gt: a > b
- ge: a >= b
- lt: a < b
- le: a <= b
- eq: a == b
- ne: a != b

Unary:
- z: (zero) a == 0 (if range, a.empty?)
- n: (non-zero) a != 0

Perhaps this is *too* terse?
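
A sketch of how such named predicates could be defined (these 
symbols don't exist in Phobos; they're hypothetical):

```d
// An alias to a generic lambda gives each predicate a short,
// highlightable name that works anywhere an alias predicate does.
alias lt = (a, b) => a < b;
alias gt = (a, b) => a > b;
alias eq = (a, b) => a == b;

void main()
{
    import std.algorithm : sort;
    auto arr = [3, 1, 2];
    sort!lt(arr);            // ascending
    assert(arr == [1, 2, 3]);
    sort!gt(arr);            // descending
    assert(arr == [3, 2, 1]);
    assert(eq(2, 2));        // predicates are also directly callable
}
```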


Re: database applications

2013-08-14 Thread Tyler Jameson Little

On Tuesday, 13 August 2013 at 22:42:31 UTC, John Joyus wrote:

On 08/09/2013 07:24 AM, Dejan Lekic wrote:

The answer is NO to all your questions.


I appreciate the straight answer! :)

However, my curiosity for D language has grown recently after I 
read this article,

http://www.drdobbs.com/parallel/the-case-for-d/217801225

So, I will sill learn this language, albeit a little slowly!
I'll start with Ali's book and finish with Andrei's.

Thanks to all,
JJ


I like your spirit.

The main things that led me to like D were:

- GC, but you can go without (a little painful, but not bad)
- simpler templates than C++, but still powerful
- CTFE (not done yet, but usable today)

And I stayed because:

- community driven
  - community intelligent and helpful, albeit a little small
- still in development
  - I can still pitch cool ideas and have a chance at them landing
- /+ /* */ +/
  - seriously, why don't other languages allow this?!?


Re: std.serialization: pre-voting review / discussion

2013-08-14 Thread Tyler Jameson Little

Serious:

- doesn't use ranges
  - does this store the entire serialized output in memory?
  - I would like to serialize to a range (file?) and deserialize from 
a range (file?)


Minor

- Indentation messed up in Serializable example
- Typo: NonSerialized example should read NonSerialized!(b)


Re: std.serialization: pre-voting review / discussion

2013-08-14 Thread Tyler Jameson Little
On Wednesday, 14 August 2013 at 08:48:23 UTC, Jacob Carlborg 
wrote:

On 2013-08-14 10:19, Tyler Jameson Little wrote:

Serious:

- doesn't use ranges
  - does this store the entire serialized output in memory?


That's up to the archive how it chooses to implement it. But 
the current XmlArchive does so, yes. It becomes quite limited 
because of std.xml.


Well, std.xml needs to be replaced anyway, so it's probably not a 
good limitation to have. It may take some work to replace it 
correctly though...


  - I would like to serialize to a range (file?) and deserialize 
from a range (file?)


The serialized data is returned as an array, so that is 
compatible with the range interface, it just won't be lazy.


The input data used for deserializing expects a void[], I don't 
think that's compatible with the range interface.


I'm mostly interested in reducing memory. If I'm (de)serializing 
a large object or lots of objects, this could become an issue.


Related question: Have you looked at how much this relies on the 
GC?



Minor

- Indentation messed up in Serializable example


Right, I'll fix that.


- Typo: NonSerialized example should read NonSerialized!(b)


No, it's not a typo. If you read the documentation you'll see 
that:


If no fields or "this" is specified, it indicates that the 
whole class/struct should not be (de)serialized.


Ah, missed that.


Re: std.serialization: pre-voting review / discussion

2013-08-14 Thread Tyler Jameson Little

On Wednesday, 14 August 2013 at 09:17:44 UTC, Tove wrote:
On Wednesday, 14 August 2013 at 08:48:23 UTC, Jacob Carlborg 
wrote:

On 2013-08-14 10:19, Tyler Jameson Little wrote:

- Typo: NonSerialized example should read NonSerialized!(b)


No, it's not a typo. If you read the documentation you'll see 
that:


If no fields or "this" is specified, it indicates that the 
whole class/struct should not be (de)serialized.


I understand the need for Orange to be backwards compatible, 
but for std.serialization, why isn't the old-style mixin simply 
removed in favor of the UDA.


Furthermore, for "template NonSerialized(Fields...)" there is 
an example, while for the new-style "struct nonSerialized;" 
there isn't!


I find the new style both more intuitive and also more DRY, 
not duplicating the identifier: int b; mixin NonSerialized!(b)


@nonSerialized struct Foo
{
int a;
int b;
int c;
}

struct Bar
{
int a;
int b;
@nonSerialized int c;
}


I like this a lot more. Phobos just needs to be compatible with 
the current release, so backwards compat is a non-issue here.
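
A sketch of how a serializer could honor such a UDA at compile 
time (assuming a marker struct named nonSerialized as in the 
example above):

```d
// std.traits.hasUDA lets the serializer detect the marker per field.
struct nonSerialized {}

struct Bar
{
    int a;
    int b;
    @nonSerialized int c;
}

void main()
{
    import std.traits : hasUDA;
    static assert(!hasUDA!(Bar.a, nonSerialized));
    static assert(hasUDA!(Bar.c, nonSerialized));  // c would be skipped
}
```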


Re: Ideas for a brand new widget toolkit

2013-08-14 Thread Tyler Jameson Little

On Wednesday, 14 August 2013 at 17:35:24 UTC, Joakim wrote:
While remote desktop is decent, it's trying to do too much: 
mirroring an entire desktop is overkill.  Better to use a lean 
client that handles most situations.


Maybe this is because I'm used to Linux, but I generally just 
want to forward an application, not the entire desktop. On Linux, 
this is trivial:


$ ssh -X user@host
$ gui-program args

However, Windows doesn't have this workflow, so mirroring the 
entire desktop became necessary.


Simpler is usually better, as long as simpler doesn't prevent you 
from creating something robust on top.


In a widget toolkit, I think the same applies: make just enough 
controls that most people are satisfied, then make the toolkit 
easy to extend with custom controls.


Re: std.serialization: pre-voting review / discussion

2013-08-14 Thread Tyler Jameson Little
On Wednesday, 14 August 2013 at 19:55:52 UTC, ilya-stromberg 
wrote:
On Wednesday, 14 August 2013 at 19:23:51 UTC, Jacob Carlborg 
wrote:

On 2013-08-14 21:11, Andrei Alexandrescu wrote:

I'm thinking some people may need to stream to/from large 
files and

would find the requirement of in-core representation limiting.


Yes, I understand that. But currently I'm limited by std.xml.


Can you use another serialization format that supports file 
output? For example, can you use JSON, BSON or a binary 
format?


That's often not possible, especially when working with an 
external API.


When working with large files, it's much better to read the file 
in chunks so you can be processing the data while the platters 
are seeking. This isn't as big of a problem with SSDs, but you 
still have to wait for the OS. RAM usage is also an issue, but 
for me it's less of an issue than waiting for I/O.


Even if rotating media were to be phased out, there's still the 
problem of streaming data over a network.


std.xml will be replaced, but it shouldn't require breaking code 
to fix std.serialize.


Re: Designing a consistent language is *very* hard

2013-08-14 Thread Tyler Jameson Little

On Wednesday, 14 August 2013 at 12:09:27 UTC, Dejan Lekic wrote:
Speaking about PHP... I believe we all read that article. I 
could say worse about ASP than what that article says about PHP.


That doesn't mean that ASP is worse than PHP though. PHP is so 
bad that I've actually considered offering up my time pro-bono to 
rewrite sites written in PHP to pretty much anything else.


The only thing that excites me more than seeing PHP die is seeing 
IE6/7/8 die, and that's already happening. =D


Re: Ideas for a brand new widget toolkit

2013-08-13 Thread Tyler Jameson Little

On Tuesday, 13 August 2013 at 13:23:07 UTC, Paul Z. Barsan wrote:

Hello everyone,

These days I've been searching for a cross-platform IDE for D 
and I found out that there aren't any viable standalone 
options. After a few clicks, I ran across this topic: 
http://forum.dlang.org/thread/astrlgbptrlvcdicq...@forum.dlang.org 
and it wasn't a surprise to see there are other people 
searching for the very same thing.One of the reasons for the 
absence of such IDEs is that there are no widget toolkits 
written in D except DWT, but some people are complaining about 
DWT for being a clone of SWT and that clients will want DWT to 
be in sync with SWT since SWT is a marketing paradigm. As 
such, I want to embark on a long journey of writing a new 
widget toolkit from scratch.


I already opened this can of worms: 
http://forum.dlang.org/thread/vtaufckbpdkpuxyzt...@forum.dlang.org?page=1


There's some good feedback there. Not sure if you saw this one.

Here are the ideas that people came up with so far(sorry if I 
omitted something):


snip...

Think of this topic as writing letters to Santa, so: what say 
you?


I'm a web developer, and CSS+HTML works quite well. The DOM 
sucks, but the idea of separating markup, style and code has 
worked out pretty well. QML uses this model pretty well, and I 
think we can do something pretty nice with D's awesome template 
support and CTFE.


My general preferences:

- simple, orthogonal, efficient
- theme-able at runtime
- simple event model

Opinions:

- no XML
- few abstractions (i.e. avoid Java-esque OO obsession)

Features I'd really like:

- direct frame buffer support (like qingy on Linux, but not sucky)
- no GC (in an ideal world)
  - I'm not a big fan of Phobos relying on the GC
  - removes a barrier to writing bare-metal applications (only 
have to implement a new backend, not the whole ecosystem)

  - less expensive to link into a non-D project
- CTFE
- entire API accessible from C
  - so I can reuse it in another language (i.e. Rust, Go, Python, 
etc.)


Overall design:

- simple buffer layering strategy, with bottom up and top-down 
message passing


http://swtch.com/~rsc/thread/cws.pdf

- scene graph (like clutter): 
http://en.wikipedia.org/wiki/Clutter_%28toolkit%29


I'd be interested in helping out, but I can't promise I'll be 
dependable enough to be a major maintainer, since I don't have 
any real projects that use D. Writing a UI toolkit is on my 
already long list of TODOs.


Re: qtD

2013-08-12 Thread Tyler Jameson Little

On Monday, 12 August 2013 at 19:08:14 UTC, David Nadlinger wrote:

On Monday, 12 August 2013 at 15:28:34 UTC, Russel Winder wrote:
https://code.google.com/p/qtd/ (which has a Subversion 
repository)
clearly points to http://www.dsource.org/projects/qtd – which 
I guess has a checkoutable (Subversion) repository.


It's a Mercurial repository. QtD moved to BitBucket because of 
DSource stability problems impairing development. I suggested 
Eldar to nuke the DSource one to avoid confusion – i.e. either 
disable it, or replace it with a single "repo has moved" text 
file in the root directory, or something like that –, but 
somehow this never happened (I don't recall whether there was 
actually disagreement about this or if we just never got around 
to do the change).



But then there is https://bitbucket.org/qtd/repo


As far as I am aware, this is the current repository, i.e. 
the last that Eldar, Max, Alexey and I actually committed to. 
However, I don't think any of us are actually working on QtD 
right now, and even simple patches/pull requests take 
inexcusably long to merge.



and https://github.com/qtd-developers/qtd


This seems to be an attempt to revive QtD, possibly by Michael 
Crompton, who contributed a few patches on BitBucket before. 
The URL is unnecessarily long, though – I just reserved 
github.com/qtd, if somebody wants admin rights for the 
organization, just drop me a line.


That was actually me. I started working on it, and I got a few 
patches in when I got really busy with work. Unfortunately, that 
was just about the same time I started understanding the build 
system...


It is unfortunately long, but I borrowed the naming scheme from 
ldc... Also, if anyone wants admin rights, drop me a line. I 
probably shouldn't be in charge of it since I only really have a 
passing interest (I just wanted to fix the PKGBUILD for Arch 
Linux..., also, if someone else can actually fix it, let me know).


Before any activity gets going on QtD might it be an idea to 
decide with

which VCS and support tools?


Yep. I can't speak for Eldar and Max, who are really the ones 
who own QtD (I only contributed a few smaller fixes), but I'd 
say, if somebody wants to genuinely pick up QtD development, 
they should go ahead and choose whatever they feel most 
comfortable with. Git/GitHub certainly would be a good fit for 
the D ecosystem.


Perhaps more should be done on 
http://www.dsource.org/projects/qtd to

make it clear where action is to happen?


I just tried to; the person behind the GitHub repository 
(Michael?) is welcome to amend that page. Note that the actual 
installation guides linked from that page all referred to the 
proper repository before as well.


David


I unfortunately don't have a dsource account, and I'm not sure 
how to get one.


Please, let me know how I can help out. I'm 100% ok with handing 
over the qtd-developers org (if that's what we want to use).


Re: Future of string lambda functions/string predicate functions

2013-08-11 Thread Tyler Jameson Little
On Sunday, 11 August 2013 at 16:26:16 UTC, Andrei Alexandrescu 
wrote:

On 8/8/13 9:52 AM, Jonathan M Davis wrote:

On Thursday, August 08, 2013 07:29:56 H. S. Teoh wrote:
Seems this thread has quietened down. So, what is the 
conclusion? Seems
like almost everyone concedes that silent deprecation is the 
way to go.
We still support string lambdas in the background, but in 
public docs we

promote the use of the new lambda syntax. Would that be a fair
assessment of this discussion?


I find it interesting that very few Phobos devs have weighed 
in on the matter,
but unfortunately, most of the posters who have weighed in do 
seem to be

against keeping them.


There's a related issue that I think we must solve before 
deciding whether or not we should deprecate string lambdas. 
Consider:


void main() {
import std.range;
SortedRange!(int[], "a < b") a;
SortedRange!(int[], "a < b") b;
b = a;
SortedRange!(int[], (a, b) => a < b) c;
SortedRange!(int[], (a, b) => a < b) d;
d = c;
}

The last line fails to compile because D does not currently 
have a good notion of comparing lambdas for equality. In 
contrast, string comparison is well defined, and although 
string lambdas have clowny issues with e.g. "a<b" being 
different from "a < b", people have a good understanding of 
what to do to get code working.


So I think we should come up with a good definition of what 
comparing two function aliases means.



Andrei


Correct me if I'm wrong, but AFAICT the old behavior was an 
undocumented feature. I couldn't find string lambdas formally 
documented anywhere, but lambdas are.


Comparing function aliases is an optimization, not a feature, so 
I don't feel it's a blocker to deprecating string lambdas. If the 
user needs the old behavior, he/she can do this today with an 
actual function:


bool gt(int a, int b) {
return a > b;
}

void main() {
import std.range;
SortedRange!(int[], "a < b") a;
SortedRange!(int[], "a < b") b;
b = a;
SortedRange!(int[], gt) c;
SortedRange!(int[], gt) d;
d = c;
}

While not as concise, this is safer and does not rely on 
undocumented behavior.


Another consideration, are the following equivalent?

(a, b) => a < b
(b, c) => b < c


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Tyler Jameson Little
On Sunday, 11 August 2013 at 18:25:02 UTC, Andrei Alexandrescu 
wrote:

On 8/11/13 10:20 AM, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 09:28:21 -0700
Andrei Alexandrescu seewebsiteforem...@erdani.org wrote:


On 8/11/13 8:49 AM, monarch_dodra wrote:
On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky 
wrote:

On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright newshou...@digitalmars.com wrote:


http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Holy crap those two-column PDFs are hard to read! Why in 
the world

does academia keep doing that anyway? (Genuine question, not
rhetoric)

But the fact that article even exists is really freaking
awesome. :)


My guess is simply because it takes more space, making a 4 
page

article look like a 7 page ;)


Double columns take less space


Per column yes, but overall, no. The same number of chars + 
same font

== same amount of space no matter how you rearrange them.

If anything, double columns take more space due to the inner 
margin and
increased number of line breaks (triggering more word-wrapping 
and thus
more space wasted due to more wrapped words - and that's just 
as true

with justified text as it is with left/right/center-aligned.


For a column of text to be readable it should have not much 
more than 10 words per line. Going beyond that forces eyes to 
scan too jerkily and causes difficulty in following line 
breaks. Filling an A4 or letter paper with only one column 
would force either (a) an unusually large font, (b) very large 
margins, or (c) too many words per line. Children books choose 
(a), which is why many do come in that format. LaTeX and Word 
choose (b) in single-column documents.



and are more readable.



In *print* double-columns are arguably more readable (although 
I've
honestly never found that to be the case personally, at least 
when

we're talking roughly 8.5 x 11 pages).

But it's certainly not more readable in PDFs, which work like 
this

(need monospaced font):

   Start
     |        |        |
     | Scroll | Scroll |
     |  Down  |  Down  |
     |        |        |
      \__ then scroll all the way
          back Up to the top of
          the next column __/

   (...repeated twice per page,
    for every page...)

   End


Multicolumn is best for screen reading, too. The only problem 
is there's no good flowing - the columns should fit the screen. 
There's work on that, see e.g. 
http://alistapart.com/article/css3multicolumn.



Andrei


I really wish this was more popular:
 ________________
|        |       |
|   1    |   2   |
|        |       |
|--------+-------|
|        |       |
|   3    |   4   |
|        |       |
 ___ page break ___
|        |       |
|   1    |   2   |
|        |       |
|--------+-------|
|        |       |
|   3    |   4   |
|        |       |

This allows a multi-column layout with less scrolling. The aspect 
ratio on my screen is just about perfect to fit half of a page at 
a time. I don't understand why this is rarely taken advantage 
of... For example, I like G+'s layout because posts seem to be 
laid out L-R, T-B like so:


|  1  |  2  |  3  |
|  4  |  2  |  3  |
|  4  |  2  |  5  |
|  6  |  7  |  5  |

Why can't we get the same for academic papers? They're even 
simpler because each section can be forced to be the same size.


Re: Version of implementation for docs

2013-08-11 Thread Tyler Jameson Little

On Sunday, 11 August 2013 at 15:25:27 UTC, JS wrote:

On Sunday, 11 August 2013 at 10:16:47 UTC, bearophile wrote:

JS:

Can we get the version of implementation/addition of a 
feature in the docs. e.g., if X feature/method/library is 
added in DMD version v, then the docs should display that 
version.


Python docs do this, and in my first patch I added such a 
version number.


Bye,
bearophile


Too bad the development team feels this is not important. 
Very bad decision and it will hurt D in the long run. It's not a 
hard thing to do. Seems to be a lot of laziness going around. 
Maybe you can tell us just how hard/time-consuming it was to 
type in 2.063 when you added a method?


Personally I don't like the tone here, but I agree that having 
version numbers would be very nice to have, especially when using 
a pre-packaged DMD+Phobos from a package manager.


Perhaps this could be automated? It'd be a little messy, but it 
could look something like this:


* get list of all exported names changed since last release 
(using diff tool)
* eliminate all names that have the same definition in the last 
release

* mark new names (not in last release) as new in current release
* mark changed names as changed in current release (keep list of 
changes since added)

* document deleted names as having been removed

This would only have to be run once per release, so it's okay if 
it's a little expensive.


This bit me once in Go when a dependency failed to compile 
because of a missing function name. It existed in the official 
docs, but not in my local docs. After updating to the latest 
release, everything worked as expected. There was, however, no 
indication in the docs that anything had been added, only in the 
change logs.


Re: Anything up for formal review?

2013-08-09 Thread Tyler Jameson Little

On Friday, 9 August 2013 at 09:03:34 UTC, Dmitry Olshansky wrote:

On 09-Aug-2013 04:53, Tyler Jameson Little wrote:

According to the review queue, there are 5 items that are
currently ready for review. There was even a thread a while 
back about
starting another formal review, where both Jacob Carlborg and 
Brian

Schott said they're ready for review:
http://forum.dlang.org/thread/gjonxudcdiwrlkgww...@forum.dlang.org 
(it

mostly digressed into bickering about the review process...).



Truth be told the wiki page is a bit misleading:

std.compression.lz77 - might be ready for review (as in code) 
but needs to address fundamental design decisions and get the 
right interface for all streaming (de)compressors.
std.idioms - a great idea but at the moment it hardly pulls its 
weight providing only a couple of helpers


Yeah, I saw that. I was actually part-way through implementing 
the DEFLATE algorithm when I realized the interface should 
probably be a community decision. Then I ran out of time...



I'm particularly
interested in the outcome of the formal review of 
std.serialize, because
I'd like to see a decent replacement for std.json (I'd be 
willing to

contribute as well).


Then you would need to design a new std.json or lend a hand in 
such a project.
std.serialization should simply use it then as a backend not 
the other way around.


I'm willing to contribute code, but I feel any contribution would 
have to wait until std.serialization has gone through review. I'd 
ultimately prefer something simple like my PR: 
https://github.com/D-Programming-Language/phobos/pull/885, but 
I'm hesitant to add yet another item to the review queue, 
especially since std.serialization may obsolete my work.


I haven't seen anything in this mailing list (except the above 
and one by
Walter Bright) for a while, and I haven't seen any pull 
requests for any

of the items in the review queue.


Well previously reviewed std.uni got pulled recently. Things 
are moving but slooowly.


Is this due to the review process review? Who do I bug to get 
things underway? I'd offer to act as review manager, but I don't 
feel I have enough clout in the community to do so.


Re: UFCS for templates

2013-08-09 Thread Tyler Jameson Little

On Friday, 9 August 2013 at 07:22:12 UTC, barryharris wrote:

On Friday, 9 August 2013 at 04:33:52 UTC, barryharris wrote:


  auto test2 = New!AnotherTest(test2, 20);

oops, should read:

auto test2 = New!AnotherTest(20);


-1 for me anyway for the following reason:

A.function(args)// I know A is a function value parameter
function!A(args)// I know A is a template type parameter
function!10(args)   // I know 10 is a template type parameter
10.function(args)   // I know 10 is a function value parameter

So I don't like it...


To clarify, that above is what we have now but with the OP 
suggestion


A.function(args)

becomes too ambiguous as to whether A is a template parameter 
or function parameter (i.e. refers to type or value)


I agree. I briefly considered the following syntax to help 
disambiguate it:


A!function(args)

But then this code would be confusing:

void func(alias pred)(int x) {}
func!(func)(3);

Which function is the template argument? The same goes for the 
dot operator, if A happens to be an alias to a function.


I don't think this is inline (sorry for the pun...) with the 
intent of UFCS, which IMHO was to make methods less special:


class A {
void foo();
}
void bar(A a);

A a;
a.foo();
a.bar();

This way you can make a function call look like a method call. 
This is what the compiler does internally anyway (if I'm not 
mistaken), so it makes sense to allow the programmer to do this, 
which can be very useful in extending types.


Re: Anything up for formal review?

2013-08-09 Thread Tyler Jameson Little

On Friday, 9 August 2013 at 14:03:56 UTC, Dicebot wrote:

On Friday, 9 August 2013 at 06:41:21 UTC, Jesse Phillips wrote:
So please, if someone is willing to take std.serialize or even 
another item from the review queue, do so. I will be happy to 
assist, jesse.k.phillip...@gmail.com It isn't very hard or 
even that time consuming. (One of the reasons I've put off 
starting std.serialize is because I want to dig in and provide 
a review for the code and haven't become interested again 
since the review process distraction)


I'll have a look at review process definition and summary of 
last review tomorrow. May initiate a new one if it will feel 
appropriate.


Awesome! Thanks for looking into this.

Once std.serialize goes up for review, I'll be tracking the 
progress closely.


Re: database applications

2013-08-09 Thread Tyler Jameson Little

On Friday, 9 August 2013 at 11:25:01 UTC, Dejan Lekic wrote:

On Tuesday, 6 August 2013 at 18:05:22 UTC, John Joyus wrote:
I have looked at the D language and liked its syntax. The 
code looks neat and clean.


But before I try to learn it thoroughly, I want to know if D 
is suitable to develop high level database GUI applications 
with drag and drop components like Delphi or Lazarus.


Do we have something like that already? If not, are there any 
plans to develop a component based rad tool in future?


Thanks,
JJ


The answer is NO to all your questions.


Well, the *language* is suitable, but the available libraries are 
not.


Re: A suggestion on keyword for_each

2013-08-09 Thread Tyler Jameson Little

On Friday, 9 August 2013 at 03:30:16 UTC, SteveGuo wrote:
I suggest that change keyword *for_each* to *for* since *for* 
is clear enough and

less letters, like C++11 does.


Decent suggestion, but I don't think Walter, Andrei or any of the 
committers want to break everyone's code just to shorten some 
syntax.


As a proposal, however, it's not bad, and like you say it exists 
in other languages:


// Go
for key, val := range x {} // where x is an array, slice or 
map

// Java
for (int x : y) {}
// C++11
for (int x : y) {}
// Python (or use iter(), range(), xrange(), etc)
for x in y: pass

Which are similar to D's foreach:

foreach (key, value; x) {} // where x is a suitable range

I agree that the following would be unambiguous (whichever syntax 
we decide is better):


for (key, value; x) {}
for (key, value : x) {}

But it doesn't offer enough benefit to make a breaking change or 
introduce redundant syntax. Saving 4 characters isn't enough 
justification.


Also, I don't particularly like for_reverse, since you can't use 
a traditional for-loop syntax with for_reverse:


// would be syntax error
for_reverse (i = 0; i < 5; i++) {}
// but this works just fine
foreach_reverse(i; 0 .. 5);

And D doesn't stand alone in the 'foreach' arena:

// C#
foreach(int x in y) {}
// Perl
foreach (@arr) {
print $_;
}
// PHP
foreach($set as $value) {}
foreach ($set as $key => $value) {}
// Scala
some_items foreach println

I don't really see a compelling reason to switch. Then again, 
this is only my opinion, so I can't speak for the community.


Re: A suggestion on keyword for_each

2013-08-09 Thread Tyler Jameson Little

On Friday, 9 August 2013 at 15:20:38 UTC, Tobias Pankrath wrote:
On Friday, 9 August 2013 at 14:51:05 UTC, Tyler Jameson Little 
wrote:
Also, I don't particularly like for_reverse, since you can't 
use a traditional for-loop syntax with for_reverse:


   // would be syntax error
   for_reverse (i = 0; i < 5; i++) {}


What do you think would be proper semantics for this?


If the expression is simple enough, it's not out of the question 
for a new programmer to think it will do the same thing as the 
for version, but backwards.


I like foreach and foreach_reverse, because all uses of foreach 
can be foreach_reverse'd (except maybe some ranges), but the same 
does not apply in for. It lacks symmetry, which kind of bothers 
me.


I would prefer a step operator like Python's:

int[] arr;
// these two are equivalent
foreach(x; arr[0 .. $ : -1]) {}
foreach_reverse(x; arr[0 .. $]) {}

And for ranges:

auto arr = genRange!int();
foreach(x; arr.step(-1)) {}

Then I could grab every other one with a step of 2, every third 
with 3, etc. This would not require copying slices or ranges, but 
would be some nice syntax sugar. The index values in the foreach 
would be actual indexes into the slice/range.
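For positive steps, today's Phobos already covers this in 
range-land via std.range.stride (and retro for reverse 
iteration), so the proposal is really about slice syntax sugar 
for the same thing; the bracketed forms in the comments are the 
hypothetical syntax, not valid D:

```d
import std.array : array;
import std.range : retro, stride;

void main()
{
    auto arr = [0, 1, 2, 3, 4, 5];
    // every other element, like a hypothetical arr[0 .. $ : 2]
    assert(arr.stride(2).array == [0, 2, 4]);
    // reverse iteration without foreach_reverse
    assert(arr.retro.array == [5, 4, 3, 2, 1, 0]);
    // every other element, backwards: a hypothetical arr[0 .. $ : -2]
    assert(arr.retro.stride(2).array == [5, 3, 1]);
}
```

Both adaptors are lazy, so no slice is copied.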


If we got this feature, both foreach and foreach_reverse could 
probably be deprecated and merged into for (with no for_reverse). 
This is a pretty substantial change to the language though, so I 
doubt it will make it in.


Re: parseJSON bug

2013-08-08 Thread Tyler Jameson Little

On Thursday, 8 August 2013 at 13:56:15 UTC, Dicebot wrote:

On Thursday, 8 August 2013 at 13:49:22 UTC, bearophile wrote:

In my opinion we should follow the formal JSON grammar.


This. Anyone who wants JavaScript behavior can use their own 
third-party library, but the standard library must behave 
according to published standards and specifications.


Exactly. Here's the official web page complete with nice graphics 
detailing the grammar: http://json.org/. I've read the JSON RFC 
before, but I can't remember what it says about whitespace within 
a basic value, but the graphics here make it very clear that 
whitespace does *not* belong inside a value.
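To make that concrete, the "number" production from json.org 
transliterates to a regex with no whitespace anywhere inside it 
(matchFirst is the current std.regex API name; older Phobos 
spells it match):

```d
import std.regex : matchFirst, regex;

void main()
{
    // JSON "number" production from json.org, as a regex
    auto jsonNumber = regex(`^-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?$`);
    assert(matchFirst("1.24E+1", jsonNumber));    // valid
    assert(!matchFirst("1.24E  +1", jsonNumber)); // interior whitespace: invalid
    assert(!matchFirst("01", jsonNumber));        // leading zero: invalid
}
```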


The real question is, is this worth fixing before std.serialize 
makes it in? There will likely be other bugs once/if that's 
accepted. I tried extending std.json in the past with static 
reflection, but that didn't make it in for this very reason.


Re: std.array string.split() bug

2013-08-08 Thread Tyler Jameson Little
On Wednesday, 7 August 2013 at 19:10:11 UTC, Borislav Kosharov 
wrote:

Something strange happens when I do this:

unittest {
    import std.array, std.string;
    string s = "test";
    //assert(s.toUpper.split("").join("-") == "T-E-S-T");
    //Memory allocation failed
    //[Finished in 26.5s]
    //CPU: 1% - 50% | 2.7GHz dual core
    //RAM: 1.6GB - 2.6GB | 1GB diff
    assert(s.split("") == ["t", "e", "s", "t"]);
    //ditto
}

I just want to achieve what the commented assert's result 
should be. Is there a better way to do that? And if it is 
really a bug where should I report it?


Bugs go here: http://d.puremagic.com/issues/
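As for getting the intended result without the memory blowup: 
splitting on the empty string is the part that explodes, and 
mapping each code point to its own string sidesteps it. A sketch 
(written against current Phobos, not necessarily the release in 
the post):

```d
import std.algorithm : map;
import std.array : join;
import std.conv : to;
import std.string : toUpper;

void main()
{
    string s = "test";
    // one string per code point, then join with dashes
    auto r = s.toUpper.map!(to!string).join("-");
    assert(r == "T-E-S-T");
}
```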


Re: std.json parsing real numbers.

2013-08-08 Thread Tyler Jameson Little

On Thursday, 8 August 2013 at 08:04:49 UTC, khurshid wrote:


I just checked std.json's parsing of real numbers.

import std.json;
import std.stdio: writeln;

int main()
{
auto json = parseJSON("1.24E  +1");
writeln(toJSON(&json));
return 0;
}

and
output:  12.4


Is it a bug or is it normal?


As mentioned in a different thread, it's a bug since it doesn't 
adhere to the JSON standard.


Re: ctRegex! vs regex error

2013-08-08 Thread Tyler Jameson Little

On Wednesday, 7 August 2013 at 22:36:39 UTC, Milvakili wrote:

Hi,
I can compile

void main(){
auto myRegx = regex(`(?!test)`);
}

however can not compile this one

void main(){
auto myRegx =  ctRegex!(`(?!test)`);
}

code sample:http://dpaste.dzfl.pl/d38926f4

and get the following error:

snip...


Go ahead and add it to the issue tracker: 
http://d.puremagic.com/issues/


FWIW, I get the same error, and I get a similar one in LDC:

/usr/include/d/std-ldc/std/regex.d(4350): Error: ['N', 'e', 'g', 
'l', 'o', 'o', 'k', 'a', 'h', 'e', 'a', 'd', 'S', 't', 'a', 'r', 
't'][0LU..17LU]
/usr/include/d/std-ldc/std/regex.d(4308):called from 
here: this.ctGenGroup(ir, result.addr)
/usr/include/d/std-ldc/std/regex.d(4746):called from 
here: this.ctGenBlock(re.ir, 0)
/usr/include/d/std-ldc/std/regex.d(4795):called from 
here: context.ctGenRegEx(re)
/usr/include/d/std-ldc/std/regex.d(6482):called from 
here: ctGenRegExCode(regex("(?!test)", []))
/usr/include/d/std-ldc/std/regex.d(6506): Error: template 
instance std.regex.ctRegexImpl!("(?!test)", []) error 
instantiating

instantiated in test.d(10): ctRegex!("(?!test)")
test.d(10): Error: template instance 
std.regex.ctRegex!("(?!test)") error instantiating


Anything up for formal review?

2013-08-08 Thread Tyler Jameson Little
According to the review queue, there are 5 items that are 
currently ready for review. There was even a thread a while back 
about starting another formal review, where both Jacob Carlborg 
and Brian Schott said they're ready for review: 
http://forum.dlang.org/thread/gjonxudcdiwrlkgww...@forum.dlang.org 
(it mostly digressed into bickering about the review process...).


Is there currently a formal review under way? I'm particularly 
interested in the outcome of the formal review of std.serialize, 
because I'd like to see a decent replacement for std.json (I'd be 
willing to contribute as well).


I haven't seen anything in this mailing list (except the above and 
one by Walter Bright) for a while, and I haven't seen any pull 
requests for any of the items in the review queue.


Re: Are there any crypto libraries floating around?

2013-07-29 Thread Tyler Jameson Little

https://github.com/Etherous/dcrypt


Hmm, last commit 3 years ago? It'll probably take quite a bit of 
work to bring it up to Phobos quality (and probably to get it to 
even compile).


It does look pretty complete though...


Are there any crypto libraries floating around?

2013-07-27 Thread Tyler Jameson Little
I found this thread mentioning some initial work on a crypto 
library:


http://forum.dlang.org/thread/j84us9$2m5k$1...@digitalmars.com?page=1

It looks like std.digest is what came of that though, not 
std.crypto.


I found this on the wish list:

Encryption and hashing

This is more an implementation problem than a design problem.
No one is working on it. Some work has been done here but 
it's unfinished.
One of the ideas is to wrap OpenSSL at first and then 
implement the most
useful crypto primitives in D to avoid library dependency and 
to make them

usable with CTFE.

I'm not sure what "some work has been done here" means, but after 
looking around, I assume this refers to hashing. Does this just 
mean that hashing functions have been implemented, but not crypto?


What I'm looking for is:

* SSH library for an ssh client
* TLS library for HTTPS

Has anyone started working on this? Are there any openssh 
wrappers lying around somewhere? I may have a crack at it myself 
if no one has started on it.


Re: Are there any crypto libraries floating around?

2013-07-27 Thread Tyler Jameson Little

On Saturday, 27 July 2013 at 17:53:52 UTC, Walter Bright wrote:

On 7/27/2013 8:58 AM, Tyler Jameson Little wrote:
Has anyone started working on this? Are there any openssh 
wrappers lying around
somewhere? I may have a crack at it myself if no one has 
started on it.


https://github.com/D-Programming-Deimos/openssl


Awesome. Thanks!


Re: Is this documented behaviour?

2013-07-27 Thread Tyler Jameson Little

On Wednesday, 24 July 2013 at 15:14:16 UTC, John Colvin wrote:

On Tuesday, 23 July 2013 at 16:34:54 UTC, John Colvin wrote:

void foo(ref int a)
{
a = 5;
}

void main()
{
int a = 0;
int* aptr = &a;

foo(*aptr);
assert(a == 5);

a = 0;

int b = *aptr;
foo(b);
assert(b == 5);
assert(a == 0);
}

The fact that adding an explicit temporary changes the 
semantics seems weird to me.


Thanks for the explanations people, I have now fixed a rather 
worrying mistake in my programming knowledge: WHAT IT ACTUALLY 
MEANS TO DEREFERENCE A POINTER!


Seriously, I've written programs in assembly and I still had it 
wrong. It's a wonder I ever wrote any correct code in my life.


To put the final nail in the coffin, this also works in C++:

#include <stdio.h>

void change(int &x) {
    x = 4;
}

int main(int argc, char** argv) {
    int a = 0;
    int* aptr = &a;
    change(*aptr);
    printf("%d\n", a);
}

TBH, I was also a bit surprised because I assumed *aptr as an 
rvalue created a temporary, but as you mentioned, that's not how 
it works in assembly, so it's wrong to think it would work 
differently in C/C++/D.


Thanks for the post!


Re: std.stream replacement

2013-07-04 Thread Tyler Jameson Little
On Saturday, 9 March 2013 at 02:13:36 UTC, Steven Schveighoffer 
wrote:
On Fri, 08 Mar 2013 20:59:33 -0500, Stewart Gordon 
smjg_1...@yahoo.com wrote:



On 07/03/2013 12:07, Steven Schveighoffer wrote:
snip
I don't really understand the need to make ranges into 
streams.

snip

Ask Walter - from what I recall it was his idea to have 
range-based file I/O to replace std.stream.


I hope to convince Walter the error of his ways :)

The problem with this idea, is that there isn't a proven 
design.  All designs I've seen that involve ranges don't look 
attractive, and end up looking like streams with an awkward 
range API tacked on.  I could be wrong, there could be that 
really great range API that nobody has suggested yet.  But from 
what I can tell, the desire to have ranges be streams is based 
on having all these methods that work with ranges, wouldn't it 
be cool if you could do that with streams too.


Thinking about it now, a range-based interface might be good 
for reading files of certain kinds, but isn't suited to 
general file I/O.


I think a range interface works great as a high level 
mechanism.  Like a range for xml parsing, front could be the 
current element, popFront could give you the next, etc.  I 
think with the design I have, it can be done with minimal 
buffering, and without double-buffering.


But I see no need to use a range to feed the range data from a 
file.


-Steve


I agree with this 100%, but I obviously am not the one making the 
decision.


My point in resurrecting this thread is that I'd like to start 
working on a few D libraries that will rely on streams, but I've 
been trying to hold off until this gets done. I'm sure there are 
plenty of others that would like to see streams get finished.


Do you have an ETA for when you'll have something for review? If 
not, do you have the code posted somewhere so others can help?


The projects I'm interested in working on are:

- HTTP library (probably end up pulling out some vibe.d stuff)
- SSH library (client/server)
- rsync library (built on SSH library)

You've probably already thought about this, but it would be 
really nice to be able to either unread bytes or peek at bytes 
without consuming them. This would help with writing an "until" 
function (read until either a newline or N bytes have been read) 
when the exact number of bytes to read isn't known.
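The in-memory version of that "until" behavior is 
straightforward; the hard part for a stream design is making the 
lookahead cheap. A sketch over a plain buffer (the name and 
signature are made up for illustration, not a proposed API):

```d
import std.algorithm : countUntil, min;

// Read up to and including delim, or maxLen bytes, whichever comes
// first; consumes only what it returns, so the rest stays readable.
ubyte[] readUntil(ref ubyte[] buf, ubyte delim, size_t maxLen)
{
    auto idx = buf.countUntil(delim);
    size_t n = idx < 0 ? min(buf.length, maxLen)
                       : min(cast(size_t) idx + 1, maxLen);
    auto result = buf[0 .. n];
    buf = buf[n .. $]; // advance past what was handed out
    return result;
}

void main()
{
    auto data = cast(ubyte[]) "hello\nworld".dup;
    assert(readUntil(data, '\n', 64) == cast(ubyte[]) "hello\n");
    assert(readUntil(data, '\n', 3)  == cast(ubyte[]) "wor");
    assert(readUntil(data, '\n', 64) == cast(ubyte[]) "ld");
}
```

A real stream would back this with a refillable buffer instead 
of a fixed slice, but the consume-only-what-you-return contract 
is the same.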


I'd love to help in testing things out. I'm okay with building 
against alpha-quality code, and I'm sure you'd like to get some 
feedback on the design as well.


Let me know if there's any way that I can help. I'm very 
interested in seeing this get finished sooner rather than later.


Re: Today's github tip - fixing local master

2013-06-20 Thread Tyler Jameson Little

On Tuesday, 18 June 2013 at 19:41:57 UTC, Walter Bright wrote:
I often struggle with understanding how github works. A problem 
I was having often is that I have 3 repositories to deal with:


   1. the main one on github (upstream)
   2. my github fork of the main one (origin)
   3. my local git repository

and (2) and (3) got out of sync with (1), causing all my pull 
requests to go bonkers. What I needed was a fix (2) and (3) so 
their masters are identical to (1)'s master. Various attempts 
at fixing it all failed in one way or another, often with 
mysterious messages, and cost me a lot of time.


yebblies (Daniel Murphy) provided the solution, which is nicely 
generic:


  git checkout master
  git fetch upstream master
  git reset --hard FETCH_HEAD
  git push origin master -f

So there it is if anyone else has this problem.


This saved me yesterday! My coworker decided to do a rebase on 
master, then he made a mistake, push --force'd and in the 
mean-time I accidentally git pull'd (we're a small team, I knew 
he was rebasing, but still). We guard merges into master, so it's 
normally not a problem.


Thanks for this! Probably saved us a quite a few WTFs!


Re: Feature request: Optional, simplified syntax for simple contracts

2013-06-17 Thread Tyler Jameson Little

Or the comma operator:

int x = (5, 3); // x is 3

Arrays:

int[] x = [3, 5];

Struct initializers:

struct t { int x, y; }
auto z = t(3, 5);

Variable declarations:

int x = 5, y = 3;

I'm not sure which would be more idiomatic... I'm leaning 
towards commas, though, to keep with the syntax of the 
initializers.


On Tuesday, 18 June 2013 at 05:28:07 UTC, Manu wrote:

What about the argument list only 3 characters earlier?


On 18 June 2013 15:16, Aleksandar Ruzicic 
aleksan...@ruzicic.info wrote:



On Sunday, 16 June 2013 at 00:19:37 UTC, Manu wrote:

Super awesome idea! How about comma-separated expressions to 
perform multiple asserts?

int func(int i, int j) in(i < 5, j < 10)
{
  return i + j;
}



I find use of comma inside of parentheses of a statement a bit 
unusual.
Correct me if I'm wrong, but I don't think there is a single 
statement in D that separates its parts with a comma. It's 
always a semi-colon.


So I think it should be:

int func(int i, int j) in (i < 5; j < 10)
{
  return i + j;
}


But either comma or a semi-colon used as a separator, this is 
a really

nice syntactic sugar!




Re: More Linux love?

2013-06-16 Thread Tyler Jameson Little

On Sunday, 16 June 2013 at 23:43:15 UTC, bioinfornatics wrote:

Hi,

I am a fedora packager and i put some D into official fedora 
repo

 - derelict  version 3
 - dsqlite   a tiny wrapper
 - dustmite  to debug
 - gl3n  to works with vectors and 3D
 - glfw  to use it  with derelict 3
 - gtkd  to use gtk in D
 - ldc   to build D code
 - tango to use tango and miscellaneous features
 - syntastic to use gvim

You see linux has some love at least into fedora the bleeding 
edge distro :-)


Well, not **the** bleeding edge distro. I'm on Arch, which has 
some very good D support: derelict, GDC and gl3n in the AUR, DMD 
and LDC in the official repos.


Anyway, I used to be on Fedora and I also loved the D support. Most 
Debian-based distros seem to be a bit behind the times for D 
support. Perhaps there aren't a lot of D users on Debian Sid? I 
haven't ever had a good experience with D on Debian and I usually 
end up recompiling from source.


Re: More Linux love?

2013-06-15 Thread Tyler Jameson Little
But the later seems to be the same as it was. Yeah, DMD can 
generate x86_64 nowadays which I remember was a long time 
pending issue some while back and I can find `gdc` in the 
Ubuntu repository, which is huge improvement, but overall the 
impression is the same: D is Windows-centric.


It seems to me that because historically D was Windows-centric, 
because Walter is Windows user, for all this years Windows 
developers had easier time when playing with D, than Linux 
devs. And after all these years, the D community is mostly 
Windows-centric. Has anyone done a poll regarding this? I am 
guessing; I may be wrong.


Each time I feel the urge to play with D in my free time and 
want to test the newest, coolest features and projects written in 
D, I am constantly hitting some Linux-related issues. Library 
incompatibilities, path incompatibilities. I toy with a lot of 
languages and I never hit issues like this with eg. Rust or Go, 
which fall into similar category of programming languages. Both 
of them seem to be developed for Linux/Unix - first, Windows 
later.


Well, there's at least a significant chunk of the community on 
Linux, judging by the LDC and GDC projects. I haven't had any 
major problems on Linux (I use Arch Linux), and DMD gets regular 
testing on Linux: http://d.puremagic.com/test-results/ (it even 
gets tested on FreeBSD =D). LDC's CI (travis-ci) only supports 
Linux, and Windows support is in an alpha state.


A while ago I tried D on Windows and it wasn't nearly as nice as 
running on Linux. I don't use very many libraries (just some C 
bindings) and my projects aren't very complicated, so perhaps I 
haven't gotten to the point you're describing.


So I'd really like to ask all Windows-users D-developers: 
please install Virtual Box, latest Ubuntu guest inside, maybe 
Fedora too and see for yourself is your project is easy to 
install and working each time you release it.


I can agree with this, but there also aren't very many 
high-profile D libraries. Most developers seem to write something 
to scratch their own itch, and kudos if it happens to work for 
you.


I would like to see a stronger library management solution, but 
there currently isn't a standard build tool (except maybe DSSS, 
but it seems abandoned). There's also dub 
(https://github.com/rejectedsoftware/dub), which looks promising 
or orbit (https://github.com/jacob-carlborg/orbit). Maybe the 
community will settle on one and this problem will magically go 
away?


In my opinion in the last 15 years most of the noticeable, long 
lasting programming software improvements came from Linux/Mac 
world (Unix, generally speaking), but I am biased. But the fact 
is: Open Source and Linux is where young, eager to learn and 
risk devs and cool kids are. In great numbers. Embrace them, 
just like Open, Collaborative development model and you'll 
quickly see a lot of new cool projects, developers, bug fixes 
and buzz. :)


I agree, but this also depends on your target market. For 
Windows, I guess you've forgotten .NET?


A lot of the D community came from C++, and AFAICT Windows nearly 
dominates the commercial C++ market. All those C++ developers who 
got tired of C++'s warts came to D. Many other languages (Go, 
Ruby, Python, etc) are developed for users coming from C, Perl 
and Java, which have traditionally been *nix or cross-platform, 
so naturally development would happen on the platform they know 
better.


That being said, D has pretty strong Linux support, and from what 
I've seen in the community, even the Windows users have a pretty 
solid knowledge of Linux; moreso than many other open-source 
programming language projects (many are ignorant of everything 
Windows).


Personally, I think it's refreshing to have such strong Windows 
support, so when I need to make my project work on Windows, I 
know there's solid support in the community. Moving a node.js app 
from Linux to Windows was a bug-riddled experience because many 
of the libs didn't have proper Windows support (paths were just 
the tip of the iceberg).


PS. Kudos to the whole D community, the language is even better 
and more impressive than it used to be.


I'm in a similar boat. I come back to the D community every few 
months and check back, and each time I run into less and less 
problems. There are still a lot of annoying things (CTFE, the 
garbage collector, no package manager), but these seem to be 
under pretty heavy development.


Anyway, with the last couple of releases, I now feel comfortable 
recommending D to my friends. If D had a nice, stupid-simple 
build process (like Go's), then I may even become a fanboy. =D


Re: The non allocating D subset

2013-06-08 Thread Tyler Jameson Little

On Saturday, 8 June 2013 at 07:10:29 UTC, Simen Kjaeraas wrote:
On Sat, 08 Jun 2013 04:09:25 +0200, Tyler Jameson Little 
beatgam...@gmail.com wrote:



What is the -safe option? I don't see it in DMD help.

@safe is specified without @nogc, but calling function is 
@nogc, so I think that #1 should be chosen.


I pulled that from here: http://dlang.org/memory-safe-d.html

Maybe that's out of date?


Would seem so. Safe D is activated with @safe, and the compiler 
switch

-safe gives an error message from the compiler.

If you want your entire module to be @safe, insert @safe: at 
the top of

the module.


Then the documentation should be changed, or the feature 
implemented. I've never had cause to use it, so I never got 
around to checking it.


Which is correct, the documentation or the implementation?


Re: The non allocating D subset

2013-06-07 Thread Tyler Jameson Little
If the nogc marker could be used to overload functions then 
Phobos may include both versions of the code - GC and non GC - 
as some code may run faster under GC. The calling function 
would pick up the right one.


I can't imagine how this would work without over-complicating the 
syntax. Any ideas?


There should also be a way of compiling without a GC and making 
functions that require GC (those without the marker) compile-time 
errors, something like a build-flag like unittest or version. If 
the function can be made to not use GC, but there's a performance 
hit, then an alternate implementation could be provided.


But I still think it's valuable to mark which functions in the 
standard lib don't require GC (similar to why @safe and pure 
exist). This would benefit game designers now, and make 
writing code to run on the bare metal easier. This is a major 
pain point for me with Go, because Go has no way of manually 
managing memory within the Go memory space, so bare-metal 
applications cannot be developed currently in that language. This 
is where D can step in and unseat C/C++ for that application.


Re: The non allocating D subset

2013-06-07 Thread Tyler Jameson Little

On Friday, 7 June 2013 at 14:46:30 UTC, Simen Kjaeraas wrote:
On Fri, 07 Jun 2013 16:39:15 +0200, Tyler Jameson Little 
beatgam...@gmail.com wrote:


If the nogc marker could be used to overload functions then 
Phobos may include both versions of the code - GC and non GC 
- as some code may run faster under GC. The calling function 
would pick up the right one.


I can't imagine how this would work without over-complicating 
the syntax. Any ideas?


I don't understand what you mean. This is how that would work:

void foo() {}   // #1, Not @nogc.
@nogc void foo() {} // #2.

void bar() {
foo(); // Calls #1.
}

@nogc void baz() {
foo(); // calls #2.
}


Ok, so it takes the @nogc flag from the calling function. I was 
thinking it would involve including the attribute somewhere in 
the function call. *facepalm*


In this case, I think this would work well. It seems attributes 
are transitive, so the change to the language would be 
overloading based on attributes. I'm not sure of all of the 
implications of this, but I suppose it wouldn't be terrible.


I'm just not sure what this would do:

@nogc void foo() {} // #1
@safe void foo() {} // #2

@nogc void baz() {
foo();
}

Which gets called when -safe is passed? Is it a compile-time 
error, or does it just choose one? I guess I don't understand the 
specifics of attributes very well, and the docs don't even 
mention anything about transitivity of attributes, so I don't 
know how much existing code this would break.


Re: The non allocating D subset

2013-06-07 Thread Tyler Jameson Little

What is the -safe option? I don't see it in DMD help.

@safe is specified without @nogc, but calling function is 
@nogc, so I think that #1 should be chosen.


I pulled that from here: http://dlang.org/memory-safe-d.html

Maybe that's out of date?


Re: The stately := operator feature proposal

2013-06-06 Thread Tyler Jameson Little

On Friday, 31 May 2013 at 00:57:33 UTC, Minas Mina wrote:

I don't think this is useful.

At least when I see auto in the code I immediately understand 
what's going on, whereas with this proposal I have to double 
check my code to see if it's := or =.


First off, I write a _lot_ of Go code, and I _love_ the := there. 
It makes things nice and simple, and it fits nicely into the rest 
of the Go syntax. However, I don't think it belongs in D because 
it changes the flow of the code.


The problem is where type specifiers are expected to go. In D 
(and most other C-like languages), types go before the 
identifiers:


int x, y, z;

When scanning code, if I see a type identifier, I know it's 
declaring something. I immediately know the scope and all is well.


In Go, types go after the identifiers:

func example(x, y, z int) {}

This is only broken by var|type, which are completely different 
expressions.


For Go, the := makes perfect sense, because when you read Go 
code, you expect the identifier first, then the type. In D 
however, nothing else (correct me if I'm wrong) has this syntax.


I have no problem with the := syntax, I just think it doesn't 
make syntactic sense. It subtly breaks the idioms of the 
language, all for very little gain.


I would be okay with type blocks, or the presented math {} block 
(which could do all sorts of new and exciting things) because 
that would fit more nicely into the language.


If the OP really wants this, he/she can easily write a 
pre-processor for D code that he/she uses on his/her own personal 
projects. A completely untested regex:


rsync src compilable-source
find compilable-source/ -name '*.d' \
    -exec sed -i 's/\(\w\+\)\s*:=/auto \1 =/g' {} +


There, feature done in two lines of shell...


Re: Suggestion - use local imports in Phobos

2013-06-06 Thread Tyler Jameson Little
Due to these characteristics of Phobos, I believe making the 
imports local to the unit tests and templates that use them 
will reduce the number of imports the compiler has to do.


This breaks DRY because some imports are used by multiple 
unittests. As long as the imports are wrapped in 
version(unittest) blocks, I don't see this as a problem.


That's like saying that defining a local `i` variable for using 
in a `for` loop breaks DRY, because you could have defined it 
once at global scope.


Importing the same module over and over does not break DRY, 
just like calling the same function in multiple places does not 
break DRY. Breaking DRY means to write the internal code of 
that function in several places - or implementing the same 
things in several modules.


I completely agree. I write a lot of Go code, and in Go, unused
imports are a compile-time error, which is a pretty nice feature
because you know what the dependencies are for a package. In D,
this is not the case (though I would like it to be a warning at
least, but that's another issue...), but at least Phobos can
declare which imports are needed, and where they are needed.

For example, if a unittest requires std.datetime, an import is
added to the top of the file. Then later, if this requirement is
removed, the import remains because it is not immediately obvious
if some other unittest needs it. Sooner or later, the imports
stack up and it's hard to tell which imports are required.

If unittest dependencies are localized, they can be removed from
the tests that do not need it. I think this would simplify the
task of removing inter-dependencies later if this ever becomes a
priority. This is already what I do in my own D code, and for
this very reason.
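A minimal sketch of the pattern described above (the module contents and imports are purely illustrative):

```d
int twice(int x) { return 2 * x; }

// Each unittest imports only what it needs; deleting a test
// removes its dependency with it, so no stale top-level imports.
unittest
{
    import std.algorithm : map;
    import std.array : array;
    assert([1, 2].map!twice.array == [2, 4]);
}

unittest
{
    import std.datetime : Date;
    assert(Date(2013, 6, 6).year == 2013);
}
```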

Just my 2c.


Re: Suggestion - use local imports in Phobos

2013-06-06 Thread Tyler Jameson Little
Tango contains some duplicated code just to avoid dependencies 
between modules.


FWIW, so does Go's standard library. For small pieces of code, I
think this is reasonable.


Re: The non allocating D subset

2013-06-06 Thread Tyler Jameson Little
This would make D the truely universal language it was 
intended to be.


I'd love to see that! I think that Phobos code that makes 
allocations can be divided (duplicated) to manual and GC 
versions.


I thought I read somewhere about a marker for functions that
don't need the GC (similar to @safe, but more hardcore; @nogc?),
but I don't recall any real consensus about it. I'd really like
that, especially since I'm interested in D mostly for game dev.

Currently, the GC sucks in a lot of ways, but even if D got a
real concurrent, precise GC, I think I'd still want to have
functions I can rely on not allocating in a critical path.

I'd be happy to contribute some fixes to the standard lib if we
got some kind of marker for functions that don't need a GC.
Ideally, none of Phobos would rely on the GC, but it seems an
unnecessary burden, especially since significant portions can be
made to not rely on the GC.


Re: Ideal D GUI Toolkit

2013-05-22 Thread Tyler Jameson Little

On Tuesday, 21 May 2013 at 11:33:19 UTC, Kiith-Sa wrote:

On Tuesday, 21 May 2013 at 11:06:44 UTC, Andrej Mitrovic wrote:

On 5/21/13, Adam Wilson flybo...@gmail.com wrote:
Well, it comes down to how you want to render. My preferred 
solution would be a rendering thread running all the time doing 
nothing but the GPU leg-work

Why a GPU? Aren't most GUIs static? And aren't there issues 
with GPUs

where feature X isn't supported on all GPUs or is buggy on a
particular one (e.g. driver issues)? Or maybe that was true in 
the

past, I was out of the loop for a while. :)


If you only use basic features (everything you need for GUI), 
you're not going to have issues. In any case if you go the GPU 
route it's best to isolate the GPU code behind an interface so 
you can add a software implementation later if absolutely 
necessary.


I think the best idea is to stop arguing and just do something. 
I recommend trying a minimalist project (at most Clutter sized) 
instead of something massive like Qt that's likely never going 
to see the light of day. Implement the basics, create a few 
example apps, and _then_ start a discussion. You might not get 
a perfect library/framework, but at least you'll get something 
that exists instead of an infinite flame war getting nowhere as 
is the tradition in the D world. Getting more than one 
contributor _and_ not stopping work on it is going to be the 
main issue, there've been a few D GUI attempts and they're 
mostly dead due to lost interest.


My (subjective) preferences:

* Human-readable markup, not just through a tool (a tool can be 
built later). YAML and JSON work well here.


* Look at Hybrid API. Clutter and Qt also have nice APIs, but D 
allows some things not possible there.


* Library instead of a framework - one of things I like about 
the Hybrid design




Re: Ideal D GUI Toolkit

2013-05-22 Thread Tyler Jameson Little

Oops, sorry for the empty message.

I think the best idea is to stop arguing and just do something. 
I recommend trying a minimalist project (at most Clutter sized) 
instead of something massive like Qt that's likely never going 
to see the light of day. Implement the basics, create a few 
example apps, and _then_ start a discussion. You might not get 
a perfect library/framework, but at least you'll get something 
that exists instead of an infinite flame war getting nowhere as 
is the tradition in the D world. Getting more than one 
contributor _and_ not stopping work on it is going to be the 
main issue, there've been a few D GUI attempts and they're 
mostly dead due to lost interest.


This was the direction I was thinking of going. Do something 
simple like Clutter (or even just a part of it), get something 
usable, then decide where we want to go from there.


This should keep it reasonably scoped so that it may stand a 
chance of getting done.



My (subjective) preferences:

* Human-readable markup, not just through a tool (a tool can 
be built later). YAML and JSON work well here.


Definitely. This makes source control a lot more effective, 
because I'd have a chance of understanding what's going on in the 
markup.


* Look at Hybrid API. Clutter and Qt also have nice APIs, but 
D allows some things not possible there.


* Library instead of a framework - one of things I like about 
the Hybrid design


Clutter does have a nice API, and I think that's a good place to 
start. I'll have to study it a bit before attempting an 
implementation. Qt is just such a beast though.


Re: Ideal D GUI Toolkit

2013-05-21 Thread Tyler Jameson Little
I can't tell if this is snark or not so I'll assume it isn't. 
:-) I don't know how likely cross-language portability is to be 
achieved by any UI toolkit, way too many things that need more 
advanced language features. If we use D we'd probably end-up 
using a non-portable set of language features anyways...


It'd be nice, but given how constraining C++ is compared to D 
it might not be practical in the long run. Although the 
rendering interface might be able to plug with D. That should 
be simple enough...


A little bit of snark, but there's some truth there. I realize 
that my work will likely come to naught, but I think it's an 
interesting project nonetheless. I'm tired of the industry's 
focus on C++, and it seems that most people have come to accept 
it. C++ devs I know would gladly move to D, if it had proper 
tools, such as a GUI library.


Personally I hate C++; I find it to be a terribly confusing 
language with no real benefit, except the availability of 
libraries, which isn't even a language feature. I do, however, 
think that whatever D uses should be relatively portable. I'm not 
sure how easy it is to import D code into C++, but it seems to be 
possible.


Maybe I'll just have a go and check back once you're done with 
your bickering ;)


Re: Ideal D GUI Toolkit

2013-05-20 Thread Tyler Jameson Little
It's come up before, and I don't think that any sort of 
decision has ever been
made on that, though personally, that strikes me as the sort of 
thing that
doesn't really belong in the standard library. Certainly, if it 
did end up in

there, it would probably have to be very minamalistic.


That's exactly what I want, something to build off of. I'm 
thinking modeling it on Clutter or something like this: 
http://swtch.com/~rsc/thread/cws.pdf. The link is to a simple, 
nested windowing system that serves as the basic architecture of 
the Plan9 GUI. It's super simple and flexible. Everything would 
be asynchronous, and only the most essential components would be 
provided.


Also, I thought that general consensus had been that while it 
would be awesome
to have a GUI toolkit written in D at some point, that's the 
sort of thing
that takes a ton of time and effort, and we have enough other 
stuff that needs
doing that the time and effort of the community was better 
spent on other
things and that wrapping a C++ GUI toolkit was a better move 
for the
forseeable future (with a full D GUI toolkit being something 
that might happen
once D is much larger). But anyone who wants to work on a GUI 
toolkit in D is
welcome to do it. IIRC, there was at least one small one done 
in D1 using
OpenGL. And having at least a minimal one so that very basic 
GUIs could be

written fully in D would certainly be very cool.


That's the feeling I got. If it's designed well, it might be one 
of the major things that draws people to D, and everyone would 
benefit from that.


I'm willing to work on one, but I don't want to duplicate effort 
if the community is already standardizing on something. I ran 
into that earlier when I tried to expand std.json, only to find 
out that a std.serialize was in the works, hence the question.


I can't say I'm an expert, but I've got a little extra time and I 
want to eventually build a game in D, and I need something to 
build off of.


Re: Ideal D GUI Toolkit

2013-05-20 Thread Tyler Jameson Little
So the basic premise of the argument is that if we can't make 
everyone happy we shouldn't do anything at all?


That's the initial feeling I get, but I think it comes more from 
the idea that a large piece of software they didn't write might 
be dumped on them. It's a legitimate concern, so I think a GUI 
library should not be included in Phobos if a group has not 
agreed to maintain it.


Also, mobile, particularly WinRT, but also Android, do not 
enforce a look, in fact Android enshrines the idea of many 
different looks for widgets, my S3 is NOT the vanilla Android 
look. Only iOS enforces a look but it's still overridable. 
And WinRT doesn't even have the concept of OS enforced widgets 
looks, all it has a default style.


Trying to cover all bases seems like a headache that doesn't make 
sense in a standard library. For me, a standard library would 
provide a simple way to get basic tasks done; anything more 
complicated should be built on top if possible, but the core 
shouldn't be too bloated.


I'm thinking something closer to Clutter, not WPF. WPF is fine, 
but as you mentioned earlier, it has something like 40,000 
classes. I'm sure much of it is just OO baggage (stupid little 
20-line classes), but that still translates to a lot of code. 
Browsing Ohloh.net, here's an idea of the scope of other FOSS GUI 
toolkits:


* Qt 5 - ~7.7 million SLOC
* wxWidgets - ~1.3 million SLOC
* Gtk+ - ~769k SLOC
* EFL - ~535k SLOC
* Clutter - ~133k SLOC

As for my opinionated ideals (doesn't affect the overall design 
much):


* no XML (yaml maybe, but XML just isn't user-friendly)
* very limited number of default components (external libraries 
can provide more)
* liberally licensed (Phobos requires Boost, so that's a minimum; 
I prefer BSD)


I also don't like the idea of external processes interfering with 
the UI. I can only see security holes that cannot be plugged 
because of the design decision. This can, of course, be provided 
by a developer, but it should not be provided by default in the 
library code.


Re: Ideal D GUI Toolkit

2013-05-20 Thread Tyler Jameson Little
I'd love to get this up and running but I think we've got a 
blocker right now in D and that is the lack of package import; 
the GUI system is going to be a monster no matter how it's 
sliced and I'd like to avoid the std.datetime effect. Sorry 
Jonathan Davis!


Why do you need package import?  Can't you achieve the 
equivalent by having one module that imports all the others 
publicly leaving the application programmer only one module to 
import?


I also agree. But a package import is in the works apparently, so 
by the time something is ready to show, the feature will be 
there. Just shim for now (with an all.d or whatever) and get 
stuff done. I think we're looking at a 1yr or so investment 
before trying to include in Phobos becomes a consideration.


Once we get package import into D we can start building out 
the basics.

Do you have any experience with concurrent hashmaps by chance?


No. Why do you want concurrency? Aren't associative arrays 
hashmaps?  My only experience with hashing techniques (other 
than as an end user of classes/functions/features using them) 
was implementing git binary patches in Python for use in one of 
my GUIs.


I agree. UIs are asynchronous by nature, not concurrent. If we 
agree on this premise, basic associative arrays can be used, and 
all access to them can be guarded by a simple mutex.
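To make that premise concrete, a minimal sketch of a plain associative array behind one lock (the class and names are invented for illustration; D's synchronized statement accepts any Object as the monitor):

```d
// Illustrative only: a widget-size table guarded by one monitor object,
// enough for an asynchronous (not truly concurrent) UI event loop.
final class WidgetRegistry
{
    private Object lock;
    private int[string] widths;

    this() { lock = new Object; }

    void set(string id, int w)
    {
        synchronized (lock) { widths[id] = w; }
    }

    int get(string id)
    {
        // .get returns the fallback when the key is absent
        synchronized (lock) { return widths.get(id, 0); }
    }
}
```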


I'm completely willing to head up the initial development. I 
probably won't get anything done, and any initial work will be 
Linux-specific (I honestly don't care about Mac OS X or Windows). 
If anything does get done, I'll definitely come back here to get 
help on the design. I'm willing to do some leg-work, but I'm not 
experienced in any way, shape or form with GUI development (most 
of my work is server-type stuff in Go or front-end stuff in 
JS/HTML5/CSS3).


If we're generally in agreement that a UI toolkit is a good 
direction, I'd love to waste the next few months of my life doing 
something that likely won't go anywhere. I personally think it's 
much more exciting to make something in native D instead of 
trying to work around the lack of concern other C++ toolkits like 
Qt have for cross-language portability.


Ideal D GUI Toolkit

2013-05-19 Thread Tyler Jameson Little
I've been looking into trying to fix QtD, but it seems writing a 
binding to a C++ library is a bit complicated. I've read on the 
forums that a native D GUI toolkit is the most desirable 
long-term, so I'd like to start that discussion.


First off, I've heard of the DWT project, which looks promising, 
but it seems like a direct port of Java's SWT instead of a 
reimagining using idiomatic D. I understand the allure here 
(works, little translation for new developers), but since it's 
not yet in Phobos, I can only assume it's still up for discussion.


Personally, I want these features:

* simple and extensible
  * minimal components (something like HTML's feature-set)
  * custom components (embed OpenGL/direct frame buffer)
* native window decorations by default, but can provide custom 
decorations

* markup (like QML) or programmable (like SWT)

Nice-to-haves:

* hardware accelerated (2D OpenGL)
* GUI designer (much easier with QML-esque markup)
* part of Phobos

I'm willing to lend a hand, but I'd like to know in what 
direction the community would like to go. I'd also like to know 
the likelihood of getting a GUI toolkit into Phobos.


Thoughts?


QtD fails to build on Arch Linux

2013-05-18 Thread Tyler Jameson Little
I'm on 64-bit, so I've used the 64-bit patch [1] on bitbucket to 
get the compile started. I get a lot of these errors:


	[  3%] Building CXX object 
CMakeFiles/cpp_core.dir/cpp/qt_core/QAbstractItemModel_shell.cpp.o
	/home/otto/sandbox/qtd/build_dir/build/cpp/qt_core/QAbstractItemModel_shell.cpp: 
In member function ‘virtual QModelIndex 
QAbstractItemModel_QtDShell::buddy(const QModelIndex) const’:
	/home/otto/sandbox/qtd/build_dir/build/cpp/qt_core/QAbstractItemModel_shell.cpp:83:141: 
error: taking address of temporary [-fpermissive]
		 
qtd_QAbstractItemModel_buddy_QModelIndex_const_dispatch(QObjectLink::getLink(this)-dId, 
__d_return_value, qtd_from_QModelIndex(index0));


I'm using gcc 4.8 if that makes a difference.

I've noticed that the original developers have more or less 
abandoned it (a patch for finding 64-bit dmd sits in the issue 
tracker gathering dust), but it seems to be the only Qt binding 
out there. I've noticed on the D forums that the developers have 
possibly lost interest [2].


So, has anyone else had problems building QtD recently? Is there 
any community interest in maintaining it?


If people need it, I'd consider looking into fixing the current 
build status, but I can't commit to maintaining it long-term 
since I don't have any active projects that need it. I may in the 
future, hence the tentative offer to help.


[1] 
https://bitbucket.org/qtd/repo/issue/4/cmake-finding-dmd#comment-4087437
[2] 
http://forum.dlang.org/thread/mailman.461.1349112690.5162.digitalmar...@puremagic.com


Re: QtD fails to build on Arch Linux

2013-05-18 Thread Tyler Jameson Little
This project very interest for me. But current QtD is supports 
only 4.8 Qt version. If anybody wants to revive this project 
and know something about Qt binding specific (this is not 
simple c++ binding, as I know. How bind QT_OBJECT macros?), I 
can to help him. Anyway, if someone tell me about QtD develop 
ideas, I'll be very grateful:)


I'd like to get this working with Qt5, but I don't even know 
where to begin. If I get time, I'll start diving into the code to 
see if I can figure something out.


Concerning that, would you find it advantageous to support both 
Qt4 and Qt5? Also, what about dynamic vs static bindings? [1]


About this build trouble: add -fpermissive to CXXFLAGS and all 
will be builded and work correctly (to the best of my memory)


Thanks, that works, but it saddens me to do this. Since I'm new 
to the project, I don't know if that's a binding issue or if it's 
a Qt one. I've adopted the package on the AUR and I'll be 
updating the PKGBUILD with the flag.


[1] 
http://www.gamedev.net/page/resources/_/technical/game-programming/binding-d-to-c-r3122


Re: QtD fails to build on Arch Linux

2013-05-18 Thread Tyler Jameson Little

Hrm, now I'm getting something else:

/home/otto/aur/qtd/src/qtd/d2/qt/core/QSize.d(62): Error: 
function qt.core.QSize.QSize.scale (int w, int h, AspectRatioMode 
mode) is not callable using argument types (QSize,AspectRatioMode)
/home/otto/aur/qtd/src/qtd/d2/qt/core/QSize.d(62): Error: 
function qt.core.QSize.QSize.scale (int w, int h, AspectRatioMode 
mode) is not callable using argument types (QSize,AspectRatioMode)
/home/otto/aur/qtd/src/qtd/d2/qt/core/QSize.d(62): Error: 
(QSize __ctmp1680 = 0;

 , __ctmp1680).this(w, h) is not an lvalue
/home/otto/aur/qtd/src/qtd/d2/qtd/MOC.d(181): Deprecation: 
variable modified in foreach body requires ref storage class


This doesn't seem as easy to work around. I'll have to dig into 
the code. Running:


* gcc 4.8
* qt  4.8.4

Still looking into it.


Is there interest in a std.http?

2012-11-19 Thread Tyler Jameson Little
I'd like to see an HTTP module in Phobos, but I wanted to gauge 
interest first and see if this has been discussed before.


An example of how I would like to interface with it (for creating 
a server):


interface HTTPHandler {
void serveHTTP(Request, Response);
}

class CustomHandler : HTTPHandler {

void serveHTTP(Request req, Response res) {
}
}

auto handler = new CustomHandler(); // implements http.Handler
auto server = new HttpServer(handler);
server.listen("0.0.0.0", 80);

As long as serveHTTP() is thread-safe, the HTTP server could be 
concurrent or evented (with libev or similar) and the custom 
handler code wouldn't have to change.


I'm willing to put in the lion's share of the work (I've already 
written a bunch of it), but I'd naturally like to get some 
community input so I don't go in a completely wrong direction. 
I'm thinking of emulating Go's http library 
(http://golang.org/pkg/net/http), but of course D style.


I think the following are necessary:

* Access to underlying TcpSocket (for protocol upgrades, like 
WebSocket)

* HTTP body is a stream
* Simple HTTP requests
* Errors can be recovered from (allow user-defined error 
handlers):

* User settable size limits
  * Size of request line (1024 bytes by default)
  * Size of each header (1024 bytes by default)
  * Total size of header block (4096 bytes by default)
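These defaults could be carried in a plain options struct; this is a sketch of the idea, with invented field names, not an existing API:

```d
// Hypothetical user-settable knobs for the proposed server.
struct HttpLimits
{
    size_t requestLine = 1024;  // max size of the request line, bytes
    size_t header      = 1024;  // max size of each header, bytes
    size_t headerBlock = 4096;  // max total size of the header block, bytes
}
```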

So, basically I'm looking for two things:

1. Interest level: willing to write code, willing to test, want 
to use

2. Suggestions for the API/behavior


Re: Is there interest in a std.http?

2012-11-19 Thread Tyler Jameson Little
I've been asked to put my cgi.d in phobos before (which 
includes a http server as well as a cgi, fastcgi, and scgi 
implementation of its common interface), so there's probably 
some interest.


I haven't put mine in just because I'm not particularly 
motivated to go through the red tape.


The file is cgi.d in here:
https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff


Awesome. I assume this hasn't gone through rigorous testing, but 
has it been used in production?


* Errors can be recovered from (allow user-defined error 
handlers):

* User settable size limits


eh, I'm partially there on these. There's some constructor args 
you can play with and some degree of exception catching but 
probably not fully what you have in mind.


I'm thinking of something a little more sophisticated than 
exceptions. Exceptions unwind the stack, so you can't just 
continue where you left off.


How do you feel about function pointer callbacks? The http parser 
in node.js (https://github.com/joyent/http-parser) takes a struct 
of function pointers to handle errors/events. When the parser 
encounters a recoverable error (max header length reached), the 
programmer could opt to ignore it and continue the parse. 
Unrecoverable errors could throw exceptions (like trying to parse 
garbage data).


If an error is recovered, it would just continue as if nothing 
happened.


Of course, implementing this would increase complexity in the 
parser, but I think it's justified. Thoughts?
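A rough sketch of what such hooks could look like in D (all names here are invented; joyent/http-parser itself uses a C struct of function pointers):

```d
// Hypothetical delegate hooks for recoverable parser errors.
struct ParserCallbacks
{
    // Return true to ignore the error and continue parsing.
    bool delegate(string what, size_t limit) onLimitExceeded;
    void delegate(string header, string value) onHeader;
}

struct Parser
{
    ParserCallbacks cb;

    void feedHeader(string name, string value, size_t maxLen)
    {
        if (value.length > maxLen)
        {
            // Recoverable: ask the user whether to continue.
            if (cb.onLimitExceeded is null
                || !cb.onLimitExceeded("header", maxLen))
                throw new Exception("header too long");
            value = value[0 .. maxLen]; // truncate and carry on
        }
        if (cb.onHeader !is null)
            cb.onHeader(name, value);
    }
}
```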


Re: Is there interest in a std.http?

2012-11-19 Thread Tyler Jameson Little

* HTTP body is a stream


No Content-Size, no multiple requests per connection (unless 
you use chunked encoding?).


Not sure what you mean. I meant incoming stream. There would be a 
request object with access to all other headers, it just wouldn't 
be read in until the user actually wanted it.



* User settable size limits
 * Size of request line (1024 bytes by default)
 * Size of each header (1024 bytes by default)
 * Total size of header block (4096 bytes by default)


I think this is pointless. Use an appender and the RFC limits. 
Unless you are serving VERY simple pages, the cost of a few 
allocations for handling the HTTP protocol's overhead will not 
be noticeable compared to the application's. (Slowloris is 
something to keep in mind, though.)


I just made up those limits, but the point is I'd like the user 
to be able to tweak those. The default will work for most people 
though.


My HTTP library, which is used for this forum and several other 
production projects, is here: 
https://github.com/CyberShadow/ae
The main problem with getting it into Phobos is that it's tied 
to a lot of other related code, such as the asynchronous 
sockets module, the unmanaged memory wrapper type, etc.


Cool, I didn't know that this was served by D code. I'll take a 
look at it.



Also, what about vibe.d?


vibe.d does a lot of things, and it probably does those things 
very well. It seems well maintained, and generally a good 
project. I think that a lot of the things it does well could be 
brought into the standard library. As this thread pointed out, 
there are several HTTP parsers floating around out there. For 
some, vibe.d might not be a perfect fit for whatever reason, but 
everyone can benefit from a simple HTTP library.


Maybe it shouldn't be as high-level as Go's http library, but it 
should at least make writing a simple HTTP server trivial.


Would a minor refactor of vibe.d be acceptable? This is pretty 
much what I'm looking for:


https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/http/common.d
https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/http/server.d

Except that I'd remove some parts of it, like JSON parsing.

I think that vibe.d could benefit from moving some of the code 
there into Phobos. I guess it comes down to whether it makes 
sense to make a standard HTTP library.


Reflection: is type an inner class

2012-10-20 Thread Tyler Jameson Little

Say I have something like this:

class A {
class B {
}

B b;
}

Right now, I have a loop that does something like this (taken 
from orange project):


foreach (i, type; typeof(A.tupleof)) {
enum name = A.tupleof[i].stringof[1 + A.stringof.length + 
2 .. $];

}

This gets all members of class A (in this case only b), but I 
can't 'new' the class because it needs the context of the 
instance of A. So, I need to check (preferably at compile time) 
if b's type is an inner class or an outer class. If it's an inner 
class, I'll need to instantiate it with the reference of A.


std.traits doesn't seem to have anything useful. I'd like to know 
the following:


* is b's type instantiatable (is that a word?) without extra 
information?

* is b's type an inner class (needs reference to instantiate)

In my code, I have a reference handy, so I just need to know how 
to instantiate it.


I saw the macro 'compiles' in the traits documentation 
(__traits(compiles, ...)). Do I need to do something with this?


Re: Reflection: is type an inner class

2012-10-20 Thread Tyler Jameson Little

I got it working using compiles:

A a = new A;
foreach (i, type; typeof(A.tupleof)) {
    enum name = A.tupleof[i].stringof[1 + A.stringof.length + 2 .. $];
    static if (__traits(compiles, mixin("A." ~ type.stringof))) {
        mixin("a." ~ name) = a.new type;
    } else {
        mixin("a." ~ name) = new type;
    }
}

It's pretty hacky, but it seems to work. I'd be very interested 
in a cleaner solution, but this works for now.


Re: Reflection: is type an inner class

2012-10-20 Thread Tyler Jameson Little
I hope this isn't a double post, I'm posting from the web ui. I 
got this working using __traits(compiles):


A a = new A;
static if (is(A == class)) {
    alias TypeTuple!(A, BaseClassesTuple!A) Types;
} else {
    alias TypeTuple!A Types;
}

foreach (BT; Types) {
    foreach (i, type; typeof(BT.tupleof)) {
        enum name = BT.tupleof[i].stringof[1 + BT.stringof.length + 2 .. $];

        if (!mixin("ret." ~ name)) {
            static if (__traits(compiles, mixin("BT." ~ type.stringof))) {
                mixin("a." ~ name) = ret.new type;
            } else {
                mixin("a." ~ name) = new type;
            }
        }
    }
}

This is the basic idea. I've omitted a bunch of the irrelevant 
code for brevity.  The following structure gets initialized 
correctly:


class A {
class B {
class C {
int d;
}
C c;
}
B b;
}

Is this the best way to do this, or is there a cleaner, type 
independent way?  I'm doing this for a JSON marshaller I'm 
working on.


Re: Reflection: is type an inner class

2012-10-20 Thread Tyler Jameson Little

On Sunday, 21 October 2012 at 03:40:15 UTC, Andrej Mitrovic wrote:

On 10/21/12, Tyler Jameson Little beatgam...@gmail.com wrote:

Say I have something like this:

 class A {
 class B {
 }

 B b;
 }


I can't find a way to figure out if the inner type is static or 
not.

If it's static you don't need the outer class to instantiate it.
Figuring out if it's nested or not is doable:

class A
{
class B { }
B b;
}

template GetType(T)
{
alias T GetType;
}

template GetParentType(alias T)
{
alias GetType!(__traits(parent, T)) GetParentType;
}

template isInnerClass(T)
{
enum bool isInnerClass = is(GetParentType!T == class);
}

void main()
{
A.B ab;
static assert(isInnerClass!(typeof(ab)));
}

(P.S. to others, why is __traits so impossible to work with? 
typeof
can't be used to extract the type, I had to write a special 
template

just to extract the type of a symbol..)


Hmm, maybe something like this should go into std.traits? This 
seems more readable than my hacky solution (__traits(compiles, 
A.B)).


Re: Code review: JSON unmarshaller

2012-10-17 Thread Tyler Jameson Little
I could make my marshaller/unmarshaller only update objects in 
place. I think this is more useful and would remove the overlap 
between orange and the JSON library. We could then write a JSON 
archiver for orange and include it in std.json as well.


The call to unmarshal would look like:

bool unmarshalJSON(T)(JSONValue val, out T ret);

The following restrictions would apply:

* T must be fully instantiated (all pointers are valid [not null])
* T must not be recursive (results in infinite recursion, and 
hence stack overflow)


And the marshaller:

JSONValue marshalJSON(T)(in T val);

For marshalling, the restrictions are:

* Slices are handled as if they were an array (copy all values)
* Same as unmarshaller, except null pointers will be treated as 
JSON null
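
To make the proposal concrete, here is a sketch of how the two 
proposed entry points might be used together. The signatures are 
the ones given above; the Point struct and the round trip itself 
are illustrative assumptions, not part of the proposal:

```d
import std.json;

struct Point { int x; int y; }

void example()
{
    auto p = Point(3, 4);

    // Marshal to a JSONValue, then update a second instance in place.
    JSONValue j = marshalJSON(p);
    Point q;
    if (unmarshalJSON(j, q))
        assert(q == p);
}
```

Since unmarshalJSON returns a bool and fills an out parameter, the 
caller decides how to react to malformed input instead of the 
library throwing.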


I really like Go's JSON marshaller/unmarshaller, so I'm trying to 
model after that one. It allows updating an object in place, 
which was already a goal.


There should probably be some standard D serialization format. In 
working with a structure trained on data (for machine learning, 
natural language processing, etc), a complete serialization 
solution makes sense. But for simple data passing, JSON makes a 
lot of sense.


What do you think, do you think there's a place in Phobos for a 
simple JSON marshaller/unmarshaller?


I'll have some updated code soon, and I'll post back when that's 
done, in case you'd like to have a look.


Re: Code review: JSON unmarshaller

2012-10-17 Thread Tyler Jameson Little
You have mentioned needing an allMembers that excluded 
functions in one of your other posts. The following thread was 
exactly about that. I can never remember the solution, but I 
found it again: :)



http://www.digitalmars.com/d/archives/digitalmars/D/learn/Getting_only_the_data_members_of_a_type_34086.html


The mentioned solution doesn't account for shared fields from a 
super class:


class A { int a; }
class S : A { int b; }

foreach (i, type; typeof(S.tupleof)) {
enum name = S.tupleof[i].stringof[4..$];
writef("(%s) %s\n", type.stringof, name);
}

This will print:

(int) b

My implementation is ugly, but it works for this case:

(ret.b) b
(ret.a) a

I could use std.traits.BaseClassTuple, but then I'd have to 
filter out common fields, and that sounds like a lot of work, 
especially since there's no practical difference.
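
For reference, here is roughly what that filtering walk could look 
like with the actual Phobos template (it is spelled 
BaseClassesTuple in std.traits); the listFields helper itself is 
hypothetical:

```d
import std.meta : AliasSeq;
import std.stdio : writefln;
import std.traits : BaseClassesTuple;

class A { int a; }
class S : A { int b; }

// Visit the class itself plus every base class, listing each field
// exactly once (Object contributes no fields of its own).
void listFields(T)()
    if (is(T == class))
{
    foreach (B; AliasSeq!(T, BaseClassesTuple!T))
        foreach (i, FT; typeof(B.tupleof))
            writefln("(%s) %s", FT.stringof,
                     __traits(identifier, B.tupleof[i]));
}
```

Using __traits(identifier, ...) also avoids the fragile 
stringof[4..$] slicing.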



 I used asserts and contracts to validate input, so the
 following would throw an AssertError:

 int x = unmarshalJSON!int(`5`);

std.exception.enforce is the right choice in that case. You 
don't want the checks to disappear when asserts are turned off.


 I wasn't sure if this is bad style, since AssertError is in
 core.exception. If this is considered bad style in D, I can
 create a JSONMarshalException and throw that instead.

That makes sense too. There is enforceEx() to throw a specific 
type of exception.


Ali


Good point. I'll probably make a JSONMarshalException, which is 
separate from JSONException in std.json so the library clearly 
indicates which part failed.
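
A sketch of what that could look like; the exception name is the 
one proposed above, and the expect helper is a hypothetical 
convenience on top of enforce:

```d
import std.exception : enforce;

// Distinguishes (un)marshalling failures from std.json's own
// JSONException, so callers can tell which layer failed.
class JSONMarshalException : Exception
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
    }
}

// Unlike assert, enforce keeps the check in release builds:
void expect(bool cond, lazy string msg)
{
    enforce(cond, new JSONMarshalException(msg));
}
```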


Thanks for the link, it was an interesting read! Maybe I'll have 
to dig around in std.traits and maybe add some missing stuff. 
With mixin() (I'd forgotten about it) I was able to get rid of 
all __traits calls except for allMembers.


Re: Code review: JSON unmarshaller

2012-10-17 Thread Tyler Jameson Little

Here's the updated code. It's got a marshaller and unmarshaller:

https://gist.github.com/3894337

It's about 650 lines. If you have time, I'd be very interested in 
getting some feedback (or from anyone else who sees this post of 
course).


The main problem I'm having right now is that classes/structs 
have to be static. I'm not 100% sure why the compiler cannot see 
non-static classes/structs at compile time. Do you happen to know 
why? It seems like a template should work in either case, 
assuming I'm understanding D templates correctly.


I didn't find any clear documentation for static outer classes, 
only static inner classes. It's not the same as static Java 
classes, which cannot be instantiated (if memory serves).


Code review: JSON unmarshaller

2012-10-15 Thread Tyler Jameson Little

https://gist.github.com/3894337

This is my first non-trivial D code, and I'd eventually like to 
get this into Phobos as part of std.json.


I haven't written the marshaller yet, but that shouldn't be too 
hard. I wanted to get some feedback on whether this code is up to 
the quality standards of Phobos.


I used a lot of templates, so I hope I didn't break any cardinal 
sins, especially in terms of readability. I did my best in 
grokking std.traits, but I may have missed some subtleties about 
what the templates are actually testing.


I used asserts and contracts to validate input, so the following 
would throw an AssertError:


int x = unmarshalJSON!int(`5`);

I wasn't sure if this is bad style, since AssertError is in 
core.exception. If this is considered bad style in D, I can 
create a JSONMarshalException and throw that instead.


Re: Code review: JSON unmarshaller

2012-10-15 Thread Tyler Jameson Little
I'm not sure what your goal with this marshaller is but I would 
say it's a lot harder than you think if you want to have a 
complete serialization library. A couple of things making it 
harder to create a fully working serialization library:


I'm basically trying to reproduce other JSON marshallers, like 
Go's, but using compile-time reflection. Go uses runtime 
reflection, which D notably does not support. I like the idea of 
compile-time reflection better anyway. There are a few things 
that would make it easier (like a __traits call like allMembers 
that excludes functions).


I use a lot of JSON, so a JSON marshaller/unmarshaller is going 
to save a lot of time, and make my code a lot cleaner.



* Pointers


I've done this, but haven't fully tested it. Basic pointers work.


* Array slices


I think this is handled.


* Serializing through base class references


Doesn't __traits(allMembers, T) give everything from all super 
classes?



* const/immutable fields


Hmm, not sure how to handle this. These have to be set in the 
constructor, right?



* Any reference type (not really hard but it's more work)


Are you talking about aliases? What other kind of reference types 
are there in structs/classes? I'm assuming this will have more to 
do with marshalling as opposed to unmarshalling.


Have a look at for a basically fully working serialization 
library Orange:


https://github.com/jacob-carlborg/orange


Hmm, looks interesting. This looks like it only supports XML, 
which I don't use, but I'm sure you've already solved a lot of 
the corner cases. Thanks, I'll take a look!


Explicit TCE

2012-10-12 Thread Tyler Jameson Little
I've read a few threads discussing tail call elimination, but 
they all wanted the spec to articulate specific circumstances 
where tail call elimination is required.  Has there been any 
thought to adding syntax to explicitly state tail call 
elimination?


D could use something like Newsqueak's become keyword. If you're 
not familiar with Newsqueak, become is just like a return, except 
it replaces the stack frame with the function that it calls.  
This was briefly mentioned years ago in this forum, but the 
become keyword was ignored:



http://www.digitalmars.com/d/archives/digitalmars/D/Rob_Pike_s_Newsqueak_-_some_good_concepts_53511.html


I think D could do something like this:

int fact(int n, int accum = 1) {
if (n == 0) {
return accum;
}
// become means return, but guarantees TCE
become fact(n - 1, accum * n);
}

DMD should optimize this already, but explicitly stating become 
is a cue to the compiler that the user wants this call to be 
eliminated. Then, more interesting things can be implemented 
more simply, like a state machine:


void stateA() {
become stateB();
}

void stateB() {
become stateC();
}

void stateC() {
return;
}

void main() {
become stateA();
}

This would only take a single stack frame. If there were 
conditionals in there for branching, this could end up with a 
stack overflow because DMD does not support complicated TCE, only 
simple recursive TCE.
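
Until then, the same machine can be hand-lowered to the loop that 
guaranteed TCE would effectively generate (the enum and function 
names here are illustrative, not from any proposal):

```d
enum State { A, B, C, done }

// One stack frame total: each iteration replaces the "current
// state" instead of pushing a new call.
void run()
{
    auto s = State.A;
    while (s != State.done)
    {
        final switch (s)
        {
            case State.A: s = State.B; break;
            case State.B: s = State.C; break;
            case State.C: s = State.done; break;
            case State.done: break;
        }
    }
}
```

The become keyword would let the compiler do this rewrite even 
when the states are separate functions.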


The become keyword would probably have to have these properties:

* statement after become must only be a function call (can't 
do foo() + 3)
* scope() needs to be handled appropriately for functions 
with become


There might be others. I'm not sure how D handles stack sizes, so 
this may be an issue as well if stack sizes are determined at 
runtime. A requirement could be that functions called with become 
need to have a static stack size, since this stack might never be 
collected (in the case of an infinite state machine).


Adding this feature wouldn't cost much, but it would add a ton of 
functional power.


Re: Explicit TCE

2012-10-12 Thread Tyler Jameson Little

I'm a big fan of explicit, guaranteed TCE.

However, the primary problem with this approach is a really 
mundane one: The major compiler back ends (GCC and LLVM) don't 
have any means of guaranteeing TCE...


Ugh... I thought that might be a problem.

I don't know too much about GCC/LLVM, but I saw 'tailcallelim' 
for LLVM:


http://llvm.org/docs/Passes.html#tailcallelim

GCC seems to support it in 4.x:

arxiv.org/pdf/1109.4048

These look promising, so I wouldn't completely rule out the 
possibility of doing it in GCC/LLVM.  Perhaps someone more 
knowledgeable about GCC/LLVM could comment? I would really like 
to see D have this feature (then I can stop daydreaming about 
LISP).


Re: Explicit TCE

2012-10-12 Thread Tyler Jameson Little

No idea what you are talking about.


I'm not sure which part wasn't clear, so I'll try to explain 
myself. Please don't feel offended if I clarify things you 
already understand.


An optimizable tail call must simply be a function call. The 
current stack frame would be replaced with the new function, so 
anything more complex than a simple function call would require 
some stack from the preceding function to stick around in the new 
function, thus requiring the old stack to stick around.


For example, the following is not optimizable: the old stack 
frame (the one holding the 3) needs to be maintained until foo() 
returns, which is not TCE.


return foo() * 3


Since the old stack won't be around anymore, that leaves us with 
in a sticky situation with regard to scope():


http://dlang.org/statement.html#ScopeGuardStatement

If the current stack is going to be replaced with data from 
another function call, the behavior of scope() is undefined. The 
scope that scope() was declared in has now been repurposed, but 
the scope is still notionally there. If scope() is allowed, the 
guards must be executed just before the tail call, otherwise 
they will be overwritten (or they have to stick around until the 
actual stack frame is cleared). Consider:


void a() {
  become b();
}

void b() {
  // when does this get called?
  scope(exit) writeln("exited");
  become a();
}

If we allow scope(), then the line would be printed before the 
call to a(). If we don't, then this is a compile-time error. I 
prefer disallowing it, because if the scope(exit) call frees 
some memory that is passed to a, the programmer may think that 
it will be called after a exits, which may not be the case.


void a(void* arr) {
  // do something with arr
  become b();
}

void b() {
  void* arr = malloc(float.sizeof * 16);
  scope(exit) free(arr);
  become a(arr);
}

I just see this as being a problem for those who don't fully 
understand scoping and TCE.



My mention of overhead was just how complicated it would be to 
implement. The general algorithm is (for each become keyword):


* determine max stack size (consider all branches in all 
recursive contexts)

* allocate stack size for top-level function
* do normal TCE stuff (use existing stack for new call)

The stack size should be known at compile time for cases like the 
one above (a calls b, b calls a, infinitely) to avoid infinitely 
expanding stack. A situation like this is a memory optimization, 
so forcing guaranteed stack size puts an upper-bound on memory 
usage, which is the whole point of TCE. If the stack is allowed 
to grow, there is opportunity for stack overflow.




My use case for this is a simple compiler, but I'm sure this 
could be applied to other use cases as well.  I'd like to produce 
code for some BNF-style grammar where each LHS is a function. 
Thus, my state machine wouldn't be a huge, unnatural switch 
statement that reads in the current state, but a series of code 
branches that 'become' other states, like an actual state machine.


For example:

A := B | C | "hello"
B := "bye" | "see ya"
C := "go away"

void A() {
char next = getNext();
if (next == 'b' || next == 's') {
become B();
}
if (next == 'g') {
become C();
}
if (next == 'h') {
// consume until hello is found, or throw exception
// then put some token on the stack
}
}

void B() {
// consume until 'bye' or 'see ya'
}

void C() {
// consume until 'go away'
}

This would minimize memory use and allow me to write code that 
more closely matches the grammar. There are plenty of other use 
cases, but DSLs would be very easy to implement with TCE.


Re: Explicit TCE

2012-10-12 Thread Tyler Jameson Little

On Friday, 12 October 2012 at 18:02:57 UTC, bearophile wrote:

Tyler Jameson Little:

D could use something like Newsqueak's become keyword. If 
you're not familiar with Newsqueak, become is just like a 
return, except it replaces the stack frame with the function 
that it calls.


Are you talking about CPS?
http://en.wikipedia.org/wiki/Continuation_passing_style


I don't think it would necessitate CPS, but that is a nice side 
effect. I'm thinking more of a recursive function call that may 
or may not return. For example, a process that shovels data 
between two network connections. If the data never stops, the 
function will never return. If there's some kind of a problem, 
then it could return with that error, and be restarted when that 
problem is fixed. All of this could happen with a series of 
function calls that use the same stack.


void handleIncomingData() {
   if (error) {
   // returns directly to manageLongRunningProcess
   return;
   }

   // do something useful

   become longRunningProcess();
}

void longRunningProcess() {
become handleIncomingData();
}

void manageLongRunningProcess() {
longRunningProcess();
// there was a problem, so fix it


// try again
manageLongRunningProcess();
}

Exceptions are not needed, so these can be nothrow functions, and 
this implementation is simpler than some complex while loop, 
while having the same memory footprint.


CPS would make things like imitating javascript's 
setTimeout/setInterval possible. I don't think this is a major 
benefit for D because the parallelism/concurrency support is 
already pretty awesome.


The main benefit is for implementing things like lexical 
analyzers (or tokenizers, whatever), which don't really depend on 
previous states and can emit tokens. This allows for efficient 
representation of recursive problems, that call functions 
circularly (a -> b -> c -> a -> b ...), like a state machine.


I think it just allows an extra level of expressiveness without a 
backwards incompatible change to the language. True, you can 
express this same idea with trampolining, but that isn't as fun:


http://stackoverflow.com/a/489860/538551
http://en.wikipedia.org/wiki/Tail-recursive_function#Through_trampolining
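
For completeness, the trampoline workaround looks roughly like 
this in D today (all names here are illustrative):

```d
import std.stdio : writeln;

// A bounce is "call next, or stop if next is null".
struct Bounce
{
    Bounce delegate() next;
}

// The driver loop: mutually recursive "states" run in constant
// stack space because each one returns instead of calling.
void trampoline(Bounce start)
{
    for (auto b = start; b.next !is null; )
        b = b.next();
}

void demo()
{
    Bounce stateC() { writeln("C"); return Bounce(null); }
    Bounce stateB() { writeln("B"); return Bounce(&stateC); }
    Bounce stateA() { writeln("A"); return Bounce(&stateB); }
    trampoline(Bounce(&stateA));
}
```

It works, but every transition pays for a delegate allocation and 
an indirect call, which is exactly the overhead become would 
remove.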

There are still some problems that I think a LISP language would 
make more sense for, and for those problems, it would be great to 
express them in D with my other code.


DMD should optimize this already, but explicitly stating 
become is a key to the compiler that the user wants this call 
to be eliminated.  Then, more interesting things can be 
implemented more simply, like a state machine:


   void stateA() {
   become stateB();
   }

   void stateB() {
   become stateC();
   }

   void stateC() {
   return;
   }

   void main() {
   become stateA();
   }


Seems nice.


I'm glad you think so =D

I'm not sure how D handles stack sizes, so this may be an 
issue as well if stack sizes are determined at runtime.


D doesn't currently support C99 VLAs, but it supports alloca(), 
so stack frames are sized dynamically. But maybe this is not a 
big problem for CPS.


Well, dynamic stack frames aren't strictly a bad thing for CPS, 
it just removes the memory use guarantee. There's already a huge 
memory gain from using TCE, I just don't know how debugging 
would be done if a function keeps adding to and passing a 
dynamic array.

Re: Explicit TCE

2012-10-12 Thread Tyler Jameson Little

My mention of overhead was just how complicated it would be to
implement. The general algorithm is (for each become keyword):

* determine max stack size (consider all branches in all 
recursive contexts)
* allocate stack size for top-level function
* do normal TCE stuff (use existing stack for new call)


What's wrong with just allocating a new stack _in-place_  of 
the old?
In other words make 'become' synonym for 'reuse the current 
stack frame'.
Effectively you still stay in constant space that is maximum of 
all functions being called.


That would work too. If scope() is disallowed, it doesn't matter 
where the stack comes from. It's only slightly cheaper to reuse 
the current stack (CPU), but making a new one would be lighter on 
memory.


I see nice stuff. My use case is optimizing a virtual machine, 
the one inside std.regex primarily.


Yeah, that is a great example! I've read some bug reports about 
std.regex using a ton of memory, especially with CTFE. Since 
regex is by definition a state machine, this would be a 
particularly elegant fit (granted, backreferences et al break 
that model, but it's still a nice metaphor).


The main problem I see is working with other compilers like 
GCC/LLVM. If this can be done on those compilers, I don't see any 
major hurdle to getting this implemented.


Re: Explicit TCE

2012-10-12 Thread Tyler Jameson Little
Hey, that's dmd (compiler) using a ton of memory,  not 
std.regex :(
It actually flies with only a modest set of ram after CTFE (or 
rather 'if') succeeds that is :)


My bad. Even then, TCE wouldn't hurt.

The main problem I see is working with other compilers like 
GCC/LLVM. If this can be done on those compilers, I don't see 
any major hurdle to getting this implemented.


Perhaps the biggest one would be convincing GCC/LLVM devs to 
accept patches :)


I think getting Walter Bright on board is the best starting 
point. If he likes the idea, I'm sure we can work out way with 
the GCC/LLVM devs. I saw some basic signs (noted earlier) that 
this may be a non-issue, as the functionality may already be 
there.


I'll keep looking and see if I can find a definitive answer for 
those compilers. Would support of one of the compilers be enough, 
or would both be required to get this in the formal language spec?


Re: Explicit TCE

2012-10-12 Thread Tyler Jameson Little

On Friday, 12 October 2012 at 20:23:00 UTC, David Nadlinger wrote:
On Friday, 12 October 2012 at 17:39:53 UTC, Alex Rønne 
Petersen wrote:
However, the primary problem with this approach is a really 
mundane one: The major compiler back ends (GCC and LLVM) don't 
have any means of guaranteeing TCE...


LLVM shouldn't be as big a problem – there is some support 
for guaranteed TCO in order to make implementations of some of 
the functional languages possible.


I know that you can force LLVM to tail-call everything it 
possibly can (which in consequence horribly breaks the ABI), 
but I am not sure right now how fine-grained you can control 
that mechanism.


Also don't forget that some calling conventions don't lend 
themselves particularly well for doing efficient tail calls.


David


I found this:

http://llvm.org/docs/CodeGenerator.html#tail-call-optimization
http://llvm.org/docs/CodeGenerator.html#target-feature-matrix

It seems that llvm won't be a problem. I've never worked with 
LLVM (or any compiler for that matter) at this low of a level, 
but I assume that the front-end produces code that looks like the 
provided code snippet in the first link. If that's the case, then 
we can basically guarantee that LLVM will do what we expect, as 
long as we can guarantee that all callers and callees use 
fastcc. I'm not 100% on the implications of this, but it should 
work.



As for GCC, the situation seems less hopeful. I found this thread 
about GUILE, but it did mention GCC's lack of support for tail 
calls. This was april of last year, so maybe things have 
improved. The thread does mention that the GCC devs would be open 
to suggestions, but it seems like this might be a harder fought 
battle than for LLVM.


http://lists.gnu.org/archive/html/guile-devel/2011-04/msg00055.html


LLVM should be sufficient though, right? GDC can just outright 
reject explicit TCO for now until it supports proper TCO. Maybe 
the GUILE mailing list would be a good place to start, since 
there may be efforts already there.



What steps would need to happen for this to become a reality?  
Here's my list:


1. Get Walter Bright/Andrei Alexandrescu on board
2. Verify that it will work with LLVM
3. Get it working in DMD
4. Get it working in LDC
5. Work with GCC devs

Is there enough interest in this to implement it? I really don't 
know DMD or LLVM at all, so I don't know how big of a project 
this is.


Re: Status on Precise GC

2012-09-09 Thread Tyler Jameson Little

On Sunday, 9 September 2012 at 17:22:01 UTC, dsimcha wrote:
On Sunday, 9 September 2012 at 16:51:15 UTC, Jacob Carlborg 
wrote:

On 2012-09-08 23:35, Tyler Jameson Little wrote:

Awesome, that's good news. I'd love to test it out, but I've
never built the D runtime (or Phobos for that matter) from
source. Are there any instructions or do I just do something
like make && sudo make install and it'll put itself in the
right places? FWIW, I'm running Linux with the standard DMD 2.060
compiler.


Just run:

make -f posix.mak

Or, for Windows:

make -f win32.mak


You also need to build Phobos, which automatically links the 
druntime objects into a single library file, by going into the 
Phobos directory and doing the same thing.


An annoying issue on Windows, though, is that DMD keeps running 
out of memory when all the precise GC templates are 
instantiated.  I've been meaning to rewrite the make file to 
separately compile Phobos on Windows, but I've been preoccupied 
with other things.


Cool, that sounds easy enough. I'm running Linux, so hopefully I 
won't have that problem. I won't need to compile on Windows for 
quite a while, so that's not a big deal.


I probably won't get to it for a few days (because of class 
responsibilities), but I'll try to get to it by the end of the 
week. I'm excited to test it out and see if I can break it!


I'll check back here every so often, so if you hear from that 
GSoC person, I'd love to hear any updates on what may or may not 
be finished. I'd really like to develop something non-trivial in 
D.


Re: Status on Precise GC

2012-09-08 Thread Tyler Jameson Little

Awesome, that's good news. I'd love to test it out, but I've
never built the D runtime (or Phobos for that matter) from
source. Are there any instructions or do I just do something like
make && sudo make install and it'll put itself in the right
places? FWIW, I'm running Linux with the standard DMD 2.060
compiler.

I'm still relatively new to D. A year ago I wrote some simple
programs to get familiar (like an HTTP lib and a start at a
package manager), but nothing low level like GC tuning or the
like. I took a class where we implemented a few simple GCs, but
that doesn't mean I know anything about GC design =D.

Let me know how I can help! I'm currently in school, so time is a
little hard to come by, but I'm willing to report any oddities
that I notice, and perhaps write a few unit tests.

On Saturday, 8 September 2012 at 02:58:44 UTC, dsimcha wrote:
Here's the GSoC project I mentored this summer.  A little 
integration work still needs to be done, and I've been meaning 
to ping the student about the status of this.  If you want, I'd 
welcome some beta testers.


https://github.com/Tuna-Fish/druntime/tree/gc_poolwise_bitmap

On Saturday, 8 September 2012 at 01:55:44 UTC, Tyler Jameson 
Little wrote:

This issue on bugzilla hasn't been updated since July 2011, but
it's assigned to Sean Kelly:
http://d.puremagic.com/issues/show_bug.cgi?id=3463

I've found these threads concerning a precise GC:

http://www.digitalmars.com/d/archives/digitalmars/D/learn/Regarding_the_more_precise_GC_35038.html

http://www.digitalmars.com/d/archives/digitalmars/D/How_can_I_properly_import_functions_from_gcx_in_object.di_171815.html

Is this issue obsolete, or is it being worked on?

Reason being, I'm writing a game in D and I plan to write it in
nearly 100% D (with the exception being OpenGL libraries and 
the
like), but I know I'll run into problems with the GC 
eventually.

If this is an active project that may get finished in the
relative near term (less than a year), then I'd feel 
comfortable

knowing that eventually problems may go away.

I want to eventually make this work with ARM (Raspberry PI 
cubieboard), and the GC is a major blocker here (well, and a
cross-compiler, but I'll work that out when I get there).

I'm using dmd atm if that matters.

Thanks!

Jameson





Status on Precise GC

2012-09-07 Thread Tyler Jameson Little

This issue on bugzilla hasn't been updated since July 2011, but
it's assigned to Sean Kelly:
http://d.puremagic.com/issues/show_bug.cgi?id=3463

I've found these threads concerning a precise GC:

http://www.digitalmars.com/d/archives/digitalmars/D/learn/Regarding_the_more_precise_GC_35038.html

http://www.digitalmars.com/d/archives/digitalmars/D/How_can_I_properly_import_functions_from_gcx_in_object.di_171815.html

Is this issue obsolete, or is it being worked on?

Reason being, I'm writing a game in D and I plan to write it in
nearly 100% D (with the exception being OpenGL libraries and the
like), but I know I'll run into problems with the GC eventually.
If this is an active project that may get finished in the
relative near term (less than a year), then I'd feel comfortable
knowing that eventually problems may go away.

I want to eventually make this work with ARM (Raspberry PI 
cubieboard), and the GC is a major blocker here (well, and a
cross-compiler, but I'll work that out when I get there).

I'm using dmd atm if that matters.

Thanks!

Jameson



Can I do an or in a version block?

2012-03-07 Thread Tyler Jameson Little

I would like to do something like this:

version (linux || BSD) {
// do something...
} else {
version (Windows) {
// do something else
} else {
// do something else
assert(false, "Unsupported operating system");
}
}

The only way I've been able to do this, is by splitting up the 
two versions and repeat code.


Is there a better way to do this? A static if can do this, so is 
there a way that I can use a static if somehow?


Re: Can I do an or in a version block?

2012-03-07 Thread Tyler Jameson Little

Now, you could do

version(x)
version = xOrY;
else version(y)
version = xOrY;

version(xOrY) {}


Huh, clever! I like it!! I hope I don't have to do that very 
often, though...


Of course, if the issue is linux || FreeBSD, you might want to 
just consider using Posix. Unless you're doing something that is 
specific to linux and FreeBSD but not Posix in general (which I 
would expect to be unlikely), Posix will do the trick just fine.


Yeah, that was just an example. Perhaps a better example would be 
comparing versions of something:


version (LIBV1 || LIBV2) {
// include some dirty hacks for old versions
} else {
// do some new fancy stuff for new features
}

I was mostly thinking that there are things that linux and BSD 
share that other BSDs may not. I know that Mac OS X has some 
subtle differences in a few things.


Anyway, I think this answers my question. I can get by with using 
your suggestion for those (hopefully) rare cases where I need 
something more specific.


Thanks!

