Re: Weird…

2014-07-15 Thread JR via Digitalmars-d
On Tuesday, 15 July 2014 at 16:56:27 UTC, Russel Winder via 
Digitalmars-d wrote:

Funny how when people send big smiles, they always mention D?

:-D

:D


D:


Re: Random points from a D n00b CTO

2014-07-15 Thread JR via Digitalmars-d

On Tuesday, 15 July 2014 at 09:24:14 UTC, Vic wrote:

To illustrate point on D complexity:
https://d262ilb51hltx0.cloudfront.net/max/800/1*_gRpHqzB-1zbG17jdxGPaQ.png

It appears that its mission is to be Java, vs a system lang.
hth


Maybe I misunderstand the term, but it seems to me that a systems 
language inherently has to be complex? I'd do a car analogy but 
I'd cringe. :3


The more abstractions you hide low-level code behind, the less direct control the developer has over the hardware. Providing both high-level abstractions and low-level control *allows* for complexity but doesn't necessitate it, making the black magic opt-in. I concede that it makes the language bigger.


I mean, I have a bunch of shell scripts written over the years that do everything from updating /etc/hosts with additions from MVPS[1] to determining the OpenNIC DNS server[2] with the best ping. And I could probably do *quick-and-dirty* rewrites of these in D with little effort, and they'd probably even be less complex, as shell interpreters' limitations make writing non-trivial stuff a pain in the derriere[3]. There's no std.socket.ping though; I'd have to think about that.
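
To illustrate, a minimal sketch of the kind of quick-and-dirty rewrite I have in mind -- hypothetical code, not one of the actual scripts, and both the download URL and the "0.0.0.0" entry format of the MVPS list are assumptions on my part:

#!/usr/bin/env rdmd
// fetch the MVPS hosts list and append any entries not already in /etc/hosts
import std.algorithm : canFind, filter, startsWith;
import std.array : array;
import std.file : append, readText;
import std.net.curl : get;
import std.stdio : writefln;
import std.string : lineSplitter, strip;

void main()
{
    auto existing = readText("/etc/hosts");

    // keep only blocklist entries ("0.0.0.0 some.host") we don't have yet
    auto fresh = get("http://winhelp2002.mvps.org/hosts.txt")
        .lineSplitter
        .filter!(line => line.strip.startsWith("0.0.0.0"))
        .filter!(line => !existing.canFind(line.strip))
        .array;

    foreach (line; fresh)
        append("/etc/hosts", line ~ "\n");

    writefln("added %s new entries", fresh.length);
}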


A new user would get more hidden allocations, but the abstractions allow for that. He would see a simple language with most of everything he wants *in phobos*. An experienced user could probably make such programs very efficient and nigh allocation-free, and the complex low-level parts allow for that too.



[1]: http://winhelp2002.mvps.org/hosts.htm
[2]: http://wiki.opennicproject.org/Tier2
[3]: http://pastebin.com/Uj5rw7TL


Re: std.math performance (SSE vs. real)

2014-06-30 Thread JR via Digitalmars-d

On Monday, 30 June 2014 at 17:01:07 UTC, Walter Bright wrote:
2. My business accounts have no notion of fractional cents, so 
there's no reason to confuse the bookkeeping with them.

.1 BTC


Re: Perlin noise benchmark speed

2014-06-20 Thread JR via Digitalmars-d

On Friday, 20 June 2014 at 15:24:38 UTC, bearophile wrote:
If I add this import in Noise2DContext.getGradients the 
run-time decreases a lot (I am now just two times slower than 
gcc with -Ofast):


import core.stdc.math: floor;

Bye,
bearophile


Was just about to post that if I cheat and replace usage of 
floor(x) with cast(float)cast(int)x, ldc2 is almost down to gcc 
speeds (119.6ms average over 100 full executions vs gcc 102.7ms).


It stood out in the callgraph. Because profiling before 
optimizing.
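
For reference, the cheat in isolation -- a small sketch rather than the actual benchmark code; note that it only matches floor() for non-negative input, hence "cheat":

import core.stdc.math : floor;

float fastFloor(float x)
{
    // truncates toward zero, so it diverges from floor() for negative x
    return cast(float) cast(int) x;
}

unittest
{
    assert(fastFloor(3.7f) == 3.0f);
    assert(floor(3.7f) == 3.0);
    assert(fastFloor(-1.2f) == -1.0f);  // floor(-1.2) would give -2.0
}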


Re: foreach

2014-06-13 Thread JR via Digitalmars-d

On Thursday, 12 June 2014 at 16:59:34 UTC, bearophile wrote:

Nick Treleaven:


there is also this usage:

foreach (i, _; range){...}


I think this is a very uncommon usage. I think I have not used 
it so far.


Would enforcing immutability there be a breaking change?

foreach (/*immutable*/ i, _; range) {
    version(none) i = 0;
}


Re: [OT] DConf socks

2014-06-10 Thread JR via Digitalmars-d

On Tuesday, 10 June 2014 at 20:52:27 UTC, Walter Bright wrote:

On 6/10/2014 1:18 AM, JR wrote:
Missed opportunity to use std.socks.assumeMine and net yourself an extra pair...


The trouble with the socks datatype is the destructor is 
randomly run on only one of each pair.


Sock[2] sockPair();
Creates a pair of connected socks.
The two socks are indistinguishable.


Re: [OT] DConf socks

2014-06-10 Thread JR via Digitalmars-d

On Monday, 9 June 2014 at 20:37:57 UTC, Andrei Alexandrescu wrote:

On 6/9/14, 12:56 PM, Justin Whear wrote:

On Mon, 09 Jun 2014 12:18:08 -0700, Andrei Alexandrescu wrote:


Hello,

Someone at DConf left me a pair of handmade socks to pass to a coworker whom they didn't get to meet. I forgot who! Please email me with the answer. Thanks!

Andrei


This is probably the strangest non-spam newsgroup message I've 
ever seen.


I know. In the meantime I did remember and handed the socks
(they were for a baby) to my coworker, who is very grateful! -- 
Andrei


Missed opportunity to use std.socks.assumeMine and net yourself an extra pair...


Re: D Language Version 3

2014-05-29 Thread JR via Digitalmars-d

On Thursday, 29 May 2014 at 08:50:38 UTC, Chris wrote:
On Wednesday, 28 May 2014 at 22:48:22 UTC, Jonathan M Davis via 
Digitalmars-d wrote:



Is this the first attempt at D Version 3? :-)


Compiles to only ELF/DWARF headers!


Re: Scenario: OpenSSL in D language, pros/cons

2014-05-06 Thread JR via Digitalmars-d

On Monday, 5 May 2014 at 15:01:05 UTC, Andrei Alexandrescu wrote:

On 5/5/14, 2:32 AM, JR wrote:

On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote:
And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion.


TDPL SafeD visions notwithstanding, @safe is very very limiting.

I/O is forbidden so simple Hello Worlds are right out, let alone advanced socket libraries.


Sounds like a library bug. Has it been submitted? -- Andrei


When mentioned in #d it was met with replies of "well *obviously*", so I chalked it up to one of @safe's irreconcilable limitations. Perhaps I'm expecting too much of the subset.


My code certainly does no pointer arithmetic, but adding @safe: 
in select locations quickly shows that string operations like 
indexOf are unsafe, as is everything in concurrency (including 
thisTid), getopt, std.conv.to (to avoid casts), and all socket 
operations. All of those can throw, but I don't see how they can 
corrupt memory.
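
For the curious, the kind of probe I mean -- a hedged sketch with a made-up module, not code from my actual project, reflecting Phobos as it stands today:

@safe:

import std.conv : to;
import std.string : indexOf;

void probe(string arg)
{
    // both of these are reportedly rejected at present, since neither
    // indexOf nor to!int is marked or inferred @safe yet
    auto pos = arg.indexOf('=');
    auto n = arg[pos + 1 .. $].to!int;
}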


Apologies for the negativity. It's not that much of a deal, but your code will have to rely very little upon phobos to be completely @safe. I appreciate these threads gauging community reactions and I hope the mood will be lighter post-dconf, but at present there's still a sour taste left in my mouth after the final-by-default pivot.


(Manu and Thaut, please don't leave~ :<   )


Re: Scenario: OpenSSL in D language, pros/cons

2014-05-05 Thread JR via Digitalmars-d

On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote:
And then comes my next question: except for that malloc-hack, 
would it have been possible to write it in @safe D? I guess 
that if not, module(s) could have been made un-@safe. Not 
saying that a similar separation of concerns was not possible 
in OpenSSL itself, but that D could have made it less 
development-expensive in my opinion.


TDPL SafeD visions notwithstanding, @safe is very very limiting.

I/O is forbidden so simple Hello Worlds are right out, let alone 
advanced socket libraries.


Re: Table lookups - this is pretty definitive

2014-04-01 Thread JR

On Tuesday, 1 April 2014 at 18:35:50 UTC, Walter Bright wrote:

immutable bool[256] tab2;
static this()
{
    for (size_t u = 0; u < 0x100; ++u)
    {
        tab2[u] = isIdentifierChar0(cast(ubyte)u);
    }
}


Table lookups are awesome.

While tab2 there lends itself to being populated in a static this(), it would really lower the bar if we could define static immutable AA literals at compile-time, or at least CTFE-fill them (see the sketch below). Having to rely on static this() for such is unfortunate and hacky.
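
Something like the following is what I'm wishing for -- a sketch only, with isIdentifierChar0 stubbed out, and it only works for plain array tables since AAs still can't be built this way:

bool isIdentifierChar0(ubyte c) pure nothrow
{
    // stub predicate, just for the sketch
    return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_';
}

immutable bool[256] tab2 = () {
    bool[256] t;
    foreach (u; 0 .. 256)
        t[u] = isIdentifierChar0(cast(ubyte) u);
    return t;
}();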


Relevant is 
http://forum.dlang.org/thread/oxnwtojtdymopgvan...@forum.dlang.org 
in which I get a speed increase of >3.5x by doing string-to-enum 
translation/mapping using someEnum[string] AA lookups, instead of 
using someString.to!someEnum.
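
To make the comparison concrete, a hedged sketch of the mapping trick with a made-up enum and table (not the code from that thread) -- and note that the AA still has to be filled in a module constructor, which is exactly the wart above:

import std.conv : to;

enum Fruit { apple, banana, cherry }

Fruit[string] fruitByName;  // filled below, since AA literals can't yet
                            // initialize globals at compile time

static this()
{
    fruitByName = [
        "apple"  : Fruit.apple,
        "banana" : Fruit.banana,
        "cherry" : Fruit.cherry,
    ];
}

void main()
{
    auto slow = "banana".to!Fruit;      // parses the string on every call
    auto fast = fruitByName["banana"];  // a single hash lookup
    assert(slow == fast);
}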


(Given CtAAs, std.conv.to could then generate such for various 
translations where cheap -- assuming we still like AAs?)


Re: Challenge: write a really really small front() for UTF8

2014-03-24 Thread JR

On Monday, 24 March 2014 at 07:53:11 UTC, Chris Williams wrote:
On Sunday, 23 March 2014 at 21:23:18 UTC, Andrei Alexandrescu 
wrote:

Here's a baseline: http://goo.gl/91vIGc. Destroy!

Andrei


http://goo.gl/TaZTNB



 for (int i = 1; i < len; i++) {


size_t, ++i! :>

(optimized away but still, nitnit pickpick)


Re: Need help creating banner ads for Dconf 2014

2014-03-12 Thread JR

http://99designs.com/


Re: More Illuminating Introductory Code Example on dlang.org

2014-02-13 Thread JR

On Wednesday, 12 February 2014 at 20:49:54 UTC, Nordlöw wrote:
On Wednesday the 19th of February I'm giving my first talk on D for my fellow colleagues at my consulting firm HiQ's office in Linköping, Sweden.


If any of you are in the neighbourhood please let me know and I 
will invite you. The lecture will most likely be held in 
Swedish.


Were it in Luleå I would have loved to attend, but Linköping is 
way away. :<


Re: Smart pointers instead of GC?

2014-02-02 Thread JR
On Sunday, 2 February 2014 at 05:30:02 UTC, Andrei Alexandrescu 
wrote:

On 2/1/14, 8:18 PM, Frank Bauer wrote:
On Sunday, 2 February 2014 at 03:38:03 UTC, Andrei 
Alexandrescu wrote:
Whoa, this won't work without an explosion in language 
complexity.


Andrei


Only daydreaming ...


No, it's a nightmare.

Andrei


So, going forward, what would you say is the preferred direction 
to strive toward?


I *seem* to remember reading here that you and Walter were increasingly coming to favor ARC, but I can't find the post. (Memory bitrot on my part is more than likely.)


Re: Smart pointers instead of GC?

2014-02-01 Thread JR

On Saturday, 1 February 2014 at 12:20:33 UTC, develop32 wrote:
If I turn on the GC in the game I'm making, it takes 80ms every frame when memory usage is 800MB (as shown in Task Manager). This was actually surprising; previously it was 6ms. Good thing I made my engine GC-independent.


You wouldn't happen to have a blog post someplace with 
reflections on the steps you took to avoid the GC, by any chance? 
Hint hint! I'm genuinely curious.


Or is it merely a matter of manual allocation?


Re: Smart pointers instead of GC?

2014-02-01 Thread JR

On Saturday, 1 February 2014 at 05:36:44 UTC, Manu wrote:
I write realtime and memory-constrained software (console games), and for me, I think the biggest issue that can never be solved is the non-deterministic nature of the collect cycles, and the unknowable memory footprint of the application. You can't make any guarantees or predictions about the GC, which is fundamentally incompatible with realtime software.

(tried to manually fix the ugly linebreaks here, so apologies if it turns out even worse.)


(Maybe this would be better posted in D.learn; if so I'll 
crosspost.)


In your opinion, of how much value would deadlining be? As in, 
"okay handyman, you may sweep the floor now BUT ONLY FOR 6 
MILLISECONDS; whatever's left after that you'll have to take care 
of next time, your pride as a professional Sweeper be damned"?
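
Something along these lines, purely as a hypothetical sketch -- nothing like this exists in druntime, and every name here is made up:

import core.time : Duration, msecs;

interface IncrementalGC
{
    // sweep for at most `budget`, then hand control back to the caller;
    // returns true if the collection cycle finished within the budget
    bool collectFor(Duration budget);
}

void frame(IncrementalGC gc)
{
    // ... simulate, render ...
    gc.collectFor(6.msecs);  // whatever is left over waits for the next frame
}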


It obviously doesn't address memory footprint, but you would get the illusion of determinism in cases similar to where race-to-idle approaches work. Inarguably, this wouldn't apply if the goal is to render as many frames per second as possible, such as for non-console shooters where tearing is not a concern but latency very much is.


I'm very much a layman in this field, but I'm trying to soak up 
as much knowledge as possible, and most of it from threads like 
these. To my uneducated eyes, an ARC collector does seem like the 
near-ideal solution -- assuming, as always, the code is written 
with the GC in mind. But am I right in gathering that it solves 
two-thirds of the problem? You don't need to scan the managed 
heap, but when memory is actually freed is still 
non-deterministic and may incur pauses, though not necessarily a 
program-wide stop. Aye?


At the same time, Lucarella's dconf slides were very, very attractive. I gather that allocations themselves may become slower with a concurrent collector, but collection times in essence become non-issues. Technically parallelism doesn't equate to free CPU time, but it more or less *is*, assuming there is a core/thread to spare. Wrong?


Lastly, am I right in understanding precise collectors as identical to the stop-the-world collector we currently have, but with a smarter allocation scheme resulting in a smaller managed heap to scan? With the additional bonus of fewer false pointers. If so, this would seem like a good improvement to the current implementation, with the next increment in the same direction being a generational GC.


I would *dearly* love to have concurrency in whatever we end up with, though. For a multi-core personal computer, threads are free lunches, or close enough to it. Concurrentgate and all that jazz.


Re: Current state of "D as a better C" (Windows)?

2014-01-26 Thread JR

On Sunday, 26 January 2014 at 08:24:55 UTC, Mike wrote:
... But as a rude janitor that barges into the party, orders 
the band to stop playing and the dancers to stop moving so he 
can sweep the floor (i.e. GC) ...


Nominated for best stop-the-world GC analogy 2014. Cannot unsee. 
(un-think?)


Re: D - Unsafe and doomed

2014-01-05 Thread JR

On Saturday, 4 January 2014 at 03:45:22 UTC, Maxim Fomin wrote:

Why have you posted this ungrounded Rust advertisement anyway?


To spark discussion?


Re: Natural Word Length

2013-12-27 Thread JR

On Friday, 27 December 2013 at 07:25:46 UTC, Marco Leise wrote:

alias ℕ = size_t;

ℕ myWord = 123;


One part of me thinks that's absolutely beautiful and another 
part cries out in terror. ;>


Re: DIP54 : revamp of Phobos tuple types

2013-12-23 Thread JR
Disclaimer: I am a newbie and I have *almost* understood the 
difference between built-in tuples, Tuple and TypeTuple. Almost. 
I'll have to get back to you on that. I also have some bad 
history with auto-expansion from my work with bash scripts, but 
that's for me and my therapist.


On Monday, 23 December 2013 at 11:08:26 UTC, Andrej Mitrovic 
wrote:

We always seem to forget that all newbies will eventually become experienced current users. Current (experienced) users need a little respect as well; not everything has to be tailored to the next batch of newbies by breaking existing users' code. Documentation and tutorials are the solutions here.


This assumes that said newbies stick with the language instead of 
moving on to something with a better-paved learning curve. 
Hyperbole analogy: I'd love to be able to play the violin, but to 
my hands the threshold is nigh insurmountable, despite textbooks 
showing me how.


Excuse the argument from authority, but I seem to recall Andrei and/or Walter suggesting that D's focus should now be on stability and avoiding breaking changes -- except where such changes make code *right*. To my naïve eyes, it seems like we could be preserving entropy where we're currently not, but then I don't fully grasp to what extent it would break existing code.


(As an aside, I'd love for built-in tuples not to implicitly 
expand either. Maybe this is one of those things I can achieve 
using functionality surrounding said other tuples I don't 
understand yet, as an inverse to an .expand property. void 
foo(alias fun, Args...)(Args args) { fun(args.raw); /* or 
unexpanded or other UFCS call */ } )


Re: static with?

2013-12-15 Thread JR

On Sunday, 15 December 2013 at 02:03:09 UTC, bearophile wrote:
I think I'd like with() to not create a scope, just like the 
"static if". So you could write this code:

[...]


And/or with the same trailing-colon syntax as attributes (saving some indentation, but perhaps less idiomatic?). Granted, it doesn't allow for top-level definitions, and it *really* would highlight the need for a way to end the attribute scope.


enum Foo { A, B }

void main() {
    import std.stdio;
    with (Foo):
    immutable data = [A, B];
    // ~with (Foo) ? pretty pretty please, would help so very much elsewhere

    data.writeln();
}

In my code I usually locally alias the enum to a single letter 
and use it as a shorthand identifier. But it's ugly.


enum DescriptiveEnumNameBecauseReasons { A, B }

void main() {
    import std.stdio;
    alias d = DescriptiveEnumNameBecauseReasons;
    immutable data = [d.A, d.B];
    data.writeln();
}


Re: Option!T

2013-12-11 Thread JR
On Tuesday, 10 December 2013 at 18:15:58 UTC, Andrei Alexandrescu 
wrote:

On 12/10/13 9:40 AM, JR wrote:
On Tuesday, 10 December 2013 at 17:28:26 UTC, Andrei 
Alexandrescu wrote:
We have only(x) (http://dlang.org/phobos/std_range.html#.only) to be a collection of exactly one value, but not a type for "a value of type T or nothing at all".


Is this not what Nullable!T is?


Nullable is not a range.

Andrei


Right, I stand corrected and invoke TMYK.
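
For my own future reference, the distinction in a few lines -- a trivial sketch, nothing more:

import std.algorithm : equal, map;
import std.range : only;
import std.typecons : Nullable;

void main()
{
    auto r = only(42).map!(x => x * 2);  // only() composes with range algorithms
    assert(r.equal([84]));

    auto maybe = Nullable!int(42);       // Nullable just wraps a value (or nothing)
    assert(!maybe.isNull && maybe.get == 42);
}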


Re: Option!T

2013-12-10 Thread JR
On Tuesday, 10 December 2013 at 17:28:26 UTC, Andrei Alexandrescu 
wrote:
We have only(x) (http://dlang.org/phobos/std_range.html#.only) 
to be a collection of exactly one value, but not a type for "a 
value of type T or nothing at all".


Is this not what Nullable!T is?


Re: The "no gc" crowd

2013-10-09 Thread JR
On Wednesday, 9 October 2013 at 02:22:35 UTC, Andrei Alexandrescu 
wrote:
* Get Robert Schadek's precise GC in. Walter and I have become 
101% convinced a precise GC is the one way to go about GC.


An orthogonal question, but is Lucarella's CDGC (still) being 
ported? There's nothing mutually exclusive between a precise and 
a concurrent gc, no?


Re: I'll do a IAmA on reddit 2013/10/02 (tomorrow) at 9:15 AM PST

2013-10-01 Thread JR
On Tuesday, 1 October 2013 at 23:56:01 UTC, Andrei Alexandrescu 
wrote:

Hello,


I'll do a http://www.reddit.com/r/iama tomorrow morning. The 
draft title is "I am a member of Facebook's HHVM team, a C++ 
and D pundit, and a Machine Learning guy. Ask me anything!"


I assume there will be a fair amount of D-related questions. 
I'm hoping to benefit from air support from you all.



Wish me luck!

Andrei


Taking bets on how many posts there'll be until someone drops 
http://imgur.com/W5AMy0P


Re: std.concurrency.receive() and event demultiplexer

2013-08-16 Thread JR
On Friday, 9 August 2013 at 17:59:33 UTC, Ruslan Mullakhmetov 
wrote:
Now we could go further and overcome it with a complication of the logic:

  - timed wait for an event on the socket
  - when a timeout occurs, check receive() for incoming messages, again with a timeout
  - switch back to waiting for events

The drawbacks are obvious:
  - complicated logic
  - artificial switching between two demultiplexers: the event loop and std.concurrency.receive()
  - need to choose a good timeout to meet both responsiveness and CPU load


Alternatively it is possible to move all blocking operations to another child thread, but this does not eliminate the problem of resource freeing. With the socket example this is dangerous: a hanging socket descriptor means (1) not telling the peer socket to shut down the conversation and (2) overflowing the number of open file descriptors.


I ended up going that way with my small toy IRC bot; one thread to *read* from the connected stream and pass on incoming lines, only briefly checking for messages in between (short) stream read timeouts; another thread to *write* to the same stream, indefinitely blocking in std.concurrency.receive() until a string comes along.


Somewhat dumbed-down excerpt with added clarifying comments:

/* --8<8<8<8<8<8<8<8<8<8<--*/

// imports for the excerpt; locateTid, Imperative and the
// MessageAction/MessageActionLocal mixin templates (among other helpers)
// are defined elsewhere in the full source
import core.time : seconds;
import std.concurrency : OwnerTerminated, Tid, receive, receiveTimeout, register, send;
import std.socket : Socket, SocketOption, SocketOptionLevel;
import std.socketstream : SocketStream;
import std.variant : Variant;

__gshared Socket __gsocket;  // *right* in the pride D:
__gshared SocketStream __gstream;


void serverRead() {
    bool halt;
    char[512] buf;
    char[] slice;
    Tid broker = locateTid("broker");

    register("reader");  // thread string identifier

    __gsocket.setOption(SocketOptionLevel.SOCKET,
        SocketOption.RCVTIMEO, 5.seconds);  // arbitrary value

    // some template mixins to reduce duplicate code
    mixin MessageActionLocal!(halt,true,Imperative.Abort) killswitch;
    mixin MessageActionLocal!(halt,true,OwnerTerminated) ownerTerm;

    mixin MessageAction!Variant variant;  // for debugging

    while (!halt) {
        slice = __gstream.readLine(buf);
        if (slice.length)
            broker.send(slice.idup);

        // we just want to check for queued messages,
        // not block waiting for new ones, so timeout immediately
        receiveTimeout(0.seconds,
            &killswitch.trigger,  // these set halt = true
            &ownerTerm.trigger,   // ^
            &variant.doPrint      // this just prints what it received
        );
    }
}


void serverWrite() {
    bool halt;
    long prev;

    register("writer");

    // even so, ugh for the boilerplate that remains
    mixin MessageActionLocal!(halt,true,Imperative.Abort) killswitch;
    mixin MessageActionLocal!(halt,true,OwnerTerminated) ownerTerm;

    mixin MessageAction!Variant variant;  // likewise

    while (!halt) {
        receive(
            (string text) {
                __gstream.sendLine(text);
                // heavily abbreviated; with only this you'll soon
                // get kicked due to spam
            },
            &killswitch.trigger,
            &ownerTerm.trigger,
            &variant.doPrint
        );
    }
}

/* --8<8<8<8<8<8<8<8<8<8<--*/


In particular I'm not happy about the __gshared resources, but 
this works well enough for my purposes (again, IRC bot). I 
initialize said socket and stream in the thread that spawns these 
two, so while used by both, neither will close them when exiting 
scope.


But yes, the reader keeps switching around. Both need to be able 
to catch OwnerTerminated and some other choice imperatives of 
import.


The longest stall will naturally be when it's sent a message 
while blocked reading from the stream, but in this context 5 
seconds is not that big a deal. Still, I wish I knew some other 
way -- to salvage my pride, if nothing else.