I got it from here:
https://code.google.com/p/go/source/detail?r=3bf9ffdcca1f9585f28dcf0e4ca1c75ea29e18be.
Apparently it's a linear feedback shift register, and was used in
Newsqueak.
On Sunday, 10 November 2013 at 09:42:30 UTC, Joseph Rushton
Wakeling wrote:
On 10/11/13 05:31, logicchains wrote:
The former produces better random numbers, but it's possible that it may be
slower.
Ahh, makes sense. Where did you get the particular RNG you used? I don't
recognize it.
I imagine (although I haven't checked) that std.random.Xorshift32
uses the algorithm:
seed ^= seed << 13;
seed ^= seed >> 17;
seed ^= seed << 5;
return seed;
while the levgen benchmarks use the algorithm:
seed += seed;
seed ^= (seed > int.max) ?
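For reference, the first algorithm quoted above is Marsaglia's xorshift32 generator. A minimal Python sketch of that update step (Python rather than D, purely for illustration; in D the uint arithmetic wraps at 32 bits by itself, so the explicit masks below are Python-only):

```python
def xorshift32(seed):
    """One step of Marsaglia's xorshift32 PRNG (shift triple 13, 17, 5).

    In D or C the 32-bit uint wraps automatically; in Python we mask
    with 0xFFFFFFFF to reproduce the same behaviour.
    """
    seed ^= (seed << 13) & 0xFFFFFFFF
    seed ^= seed >> 17
    seed ^= (seed << 5) & 0xFFFFFFFF
    return seed

# With Marsaglia's full-period parameters, any nonzero seed cycles
# through every nonzero 32-bit value before repeating.
state = 1
state = xorshift32(state)  # 270369
```

The second (levgen) generator is cut off above; per the linked Go commit it is a different linear feedback shift register, so the two are not interchangeable.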
Joseph Rushton Wakeling:
How does the speed of that code change if instead of the Random
struct, you use std.random.Xorshift32 ... ?
That change of yours was well studied in the first blog post (the
serial one) and the performance loss of using Xorshift32 was
significant, even with LDC2. I d
On 07/11/13 14:12, bearophile wrote:
Very nice. I have made a more idiomatic version (in D global constants don't
need to be IN_UPPERCASE), I have added a few missing immutable annotations, and
given that the benchmark also counts line numbers, I have made the code a little
more compact (particularly th
logicchains:
Okay, I've updated it to 83. The other entries didn't include
comments, so I didn't bother checking to remove comments from
the linecount.
Thank you :-) I think a few comments help the code look more
natural :-)
Bye,
bearophile
On Friday, 8 November 2013 at 11:47:02 UTC, logicchains wrote:
Ah, right. I'll bear it in mind if I'm ever writing
cross-architectural code in D.
Using size_t as array indices is a c/c++ convention that is also
relevant to D, it's definitely not a D specific thing. Perhaps it
is more common h
Your site counts 90 SLOC for the D entry, that comes from 83
lines of code plus 7 comment lines. I think you shouldn't count
the lines of comments, from all the entries.
If you want to count the comments too, then I'll submit an
83-line D version without comments for your site
On Friday, 8 November 2013 at 09:37:56 UTC, Marco Leise wrote:
The _t indicates that its size depends on the target
architecture.
Erm? I am pretty sure "_t" is just a short form for "type",
common naming notation from C.
On Fri, 08 Nov 2013 09:58:38 +0100, "logicchains" wrote:
> That's interesting. Is there a particular reason for using size_t
> for array indexing rather than int?
It is the natural representation of an array index. It is
unsigned and spans the whole addressable memory area.
The _t indicates that its size depends on the target architecture.
That's interesting. Is there a particular reason for using size_t
for array indexing rather than int?
uint[10] data;
foreach (i, ref x; data)
x = i;
This code works on 32 bit systems, because the index i of an
array is deduced as a size_t, so it fits inside the array of
uints. On a 64 bit system it no longer compiles, because the
64-bit size_t index doesn't implicitly convert down to uint.
Benchmark author here. I left the ldmd2 entry there to represent
the performance of the D implementation from the time of the
benchmark, to highlight that the current D implementation is much
newer than the others, and that there have been no attempts to
optimise the C and C++ versions similarly.
On Thursday, 7 November 2013 at 14:27:59 UTC, Daniel Davidson
wrote:
Regarding what is idiomatic D, isn't `immutable x = rnd.next %
levelSize;` pedantic.
Why not just go with `const x = rnd.next % levelSize;`
I actually prefer usage of `immutable` by default for value types
because it is like
Dmitry Olshansky:
Regarding what is idiomatic D, isn't `immutable x = rnd.next %
levelSize;` pedantic.
Why not just go with `const x = rnd.next % levelSize;`
IMHO yes, it's pedantic.
It's a little pedantic, and it's some characters longer than
"const", but I think it's a good standard to
Joseph Rushton Wakeling:
Slightly baffled to see "ldc2" and "ldmd2" listed as two
separate entries. Your code will surely achieve similar speed
when compiled with ldmd2 and appropriate optimization choices
?
Yes, I think the ldmd2 "entry" should be removed...
Bye,
bearophile
On 07/11/13 12:47, Marco Leise wrote:
I made it idiomatic, D is on place 1 now by a big margin. See the
'ldc2' entry:
Slightly baffled to see "ldc2" and "ldmd2" listed as two separate entries. Your
code will surely achieve similar speed when compiled with ldmd2 and appropriate
optimization choices?
On Thu, 07 Nov 2013 15:27:57 +0100, "Daniel Davidson" wrote:
> Regarding what is idiomatic D, isn't `immutable x = rnd.next %
> levelSize;` pedantic.
> Why not just go with `const x = rnd.next % levelSize;`
Yes it is pedantic and I don't mind if anyone objects. :)
> Any time the type is a fun
Marco Leise:
foreach (immutable xi; r.x .. r.x + r.w + 1)
What the heck?! I didn't know that even compiles. :)
It's an enhancement that I requested, and Kenji implemented some
time ago.
About the UPPERCASE_CONSTANTS: I know we tend to use camelCase
for them, too. It's just a personal preference.
http://dpaste.dzfl.pl/d37ba995
That gives me 83 cloc (http://cloc.sourceforge.net) lines of
code, so if you submit that code to the benchmark site, make sure
the line count (currently 108, even though cloc gives me 101 on
it) gets updated too.
Bye,
bearophile
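The counting rule being argued about (code lines minus blanks and comments) can be sketched quickly. This is a Python toy for illustration, not cloc's actual algorithm; it only recognises blank lines and full-line // comments, not /* */ blocks or comment markers inside string literals:

```python
def sloc(source):
    """Count source lines, skipping blank lines and full-line // comments.

    A rough approximation of what cloc reports; real tools also track
    /* ... */ block comments and comments inside strings.
    """
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            count += 1
    return count

code = """\
// Comments don't count.
import std.stdio;

void main() {
    writeln("hello");  // a trailing comment leaves the line countable
}
"""
print(sloc(code))  # 4
```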
On Saturday, 24 August 2013 at 04:22:09 UTC, Meta wrote:
On Saturday, 24 August 2013 at 02:12:41 UTC, Jesse Phillips
wrote:
It gets second no matter how you read it!
It was also not very idiomatic. It looks like some performance
improvements could be made.
I made it idiomatic, D is on place 1 now by a big margin.
On 26/08/13 14:04, Russel Winder wrote:
OK so good for the first 20s of a lecture on Quicksort and totally
useless for doing anything properly. Two main reasons:
1. It copies data rather than doing it in situ; it should use Mergesort.
2. It passes over the data twice instead of once.
This is a perfec
On Monday, 26 August 2013 at 01:16:21 UTC, Paul Jurczak wrote:
You still have a chance, because I don't quite get it. With the
little I know about Haskell, I find this code very elegant.
What is wrong with it? Performance?
It's a huge blowup in time complexity. They say that Lisp
programmers know the value of everything and the cost of nothing.
Paul Jurczak:
You still have a chance, because I don't quite get it. With the
little I know about Haskell, I find this code very elegant.
What is wrong with it? Performance?
A faithful Quicksort should work in-place, unlike that code.
This is an implementation of a similar functional algorithm
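bearophile's point, that a faithful quicksort partitions in place rather than building new lists, can be sketched as follows (Python for illustration; this is the textbook Lomuto scheme, not code from the thread):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort using Lomuto partitioning.

    Sorts a[lo..hi] inside the array's own storage, with no list
    copies, unlike the filter-twice-and-concatenate formulation
    being criticised in the thread.
    """
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # move the pivot into its final slot
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)

data = [5, 3, 8, 1, 9, 2]
quicksort(data)
print(data)  # [1, 2, 3, 5, 8, 9]
```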
On 26/08/13 01:06, Andrei Alexandrescu wrote:
This is one of the worst PR functional programming has ever gotten, and one of
the worst things FP has done to the larger community. Somebody should do hard
time for this. And yes, for that matter it's a great example in which SLOCs are
not a very good metric.
On 24/08/13 19:01, Ramon wrote:
I think that there is a lot speaking against sloc.
First it's often (ab?)used for "Ha! My language x is better than yours. I can
write a web server in 3 lines, you need 30".
Don't know about a web server, but I remember somewhere online I found this
really cool
Paul Jurczak:
Using long names vs. short ones will substantially inflate zip
file size, but will not affect LOC count.
On the other hand if you use a very good compressor (like the
PPMd of 7Zip or even better a PAQ by the good Matt) the
identifier names that are mostly a concatenation of English words
On Saturday, 24 August 2013 at 04:59:41 UTC, H. S. Teoh wrote:
[..]
A far more reliable measure of code complexity is to look at the
compressed size of the source code (e.g., with zip), which is an
approximation of the Kolmogorov complexity of the text, roughly
equivalent to the amount of information it contains.
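Teoh's compressed-size heuristic, together with bearophile's related remark that long identifier names inflate raw size but largely compress away, is easy to try. A rough Python sketch using zlib instead of zip or PPMd; the numbers are illustrative only:

```python
import zlib

# The same trivial program written with terse and with verbose identifiers.
terse = "x=0\n" + "x=x+1\n" * 100
verbose = ("accumulated_total = 0\n"
           + "accumulated_total = accumulated_total + 1\n" * 100)

def compressed(src):
    """Size of the zlib-compressed source, a crude Kolmogorov proxy."""
    return len(zlib.compress(src.encode()))

# Raw sizes differ by roughly the identifier-length ratio...
print(len(terse), len(verbose))
# ...but the compressed sizes stay close, because the repeated long
# names carry little additional information.
print(compressed(terse), compressed(verbose))
```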
On 8/24/2013 9:17 AM, H. S. Teoh wrote:
What's so difficult about running zip on the code?
It's not so easy to run zip on a snippet in a magazine article, as opposed to
visually just looking at it.
On Saturday, 24 August 2013 at 16:19:07 UTC, H. S. Teoh wrote:
The fault of LOC is precisely that people "fairly intuitively"
understand it. The problem is that no two people's intuitions
ever
match. So any conclusions drawn from LOC must necessarily be
subjective, and really not that much better.
I think that there is a lot speaking against sloc.
First it's often (ab?)used for "Ha! My language x is better than
yours. I can write a web server in 3 lines, you need 30".
And then slocs say a lot of things about a lot of things. Like:
Experience (being new or not used to X, I'll need more lines)
On 24/08/13 06:58, H. S. Teoh wrote:
In none of the above examples did I try to deliberately game with the
metric. But the metric is still pretty inaccurate, and requires
subjective judgment calls.
It's a heuristic, rather than a metric, I'd say. But as a heuristic it may be
useful to compare
On 8/23/2013 10:23 PM, H. S. Teoh wrote:
Like I said, you can still game it. I think some common sense
applies, not a literal interpretation.
You conveniently snipped the rest of my post, which postulates a far
better metric that's no harder to apply in practice. :)
You can't compress by visual inspection.
On Saturday, 24 August 2013 at 02:12:41 UTC, Jesse Phillips wrote:
It gets second no matter how you read it!
It was also not very idiomatic. It looks like some performance
improvements could be made.
On 8/23/2013 7:10 PM, Jesse Phillips wrote:
If we decided that 2 lines was how we do formatting,
In general, I regard a "line of code" as one statement or one declaration.
Comments don't count, nor does cramming 3 statements into one line make it one LOC.
Of course, you can still game that,
On Friday, 23 August 2013 at 16:50:27 UTC, H. S. Teoh wrote:
Seriously, I don't understand what's with this obsession with
line count metrics.
While LOC isn't a very good metric, you're complaining about
things that aren't really there. Yes you can shorten it to 2
lines, but did he? It looks
On Friday, 23 August 2013 at 16:50:27 UTC, H. S. Teoh wrote:
[..]
Frankly, the fact that line counts are used at all has already
decremented the author's credibility for me.
I agree that LOC is a very poor measure, but I think the intent
was to offer some sort of comparison of syntactic complexity.
On Friday, 23 August 2013 at 13:48:39 UTC, bearophile wrote:
The author of the serial language comparison has now created a
simple parallel comparison:
...
Off-topic:
First time hearing of Nimrod, it has a neat GC implementation for
games and similar soft-realtime applications. Being able to
On 8/23/13 9:29 AM, bearophile wrote:
The missing link to the Reddit thread:
http://www.reddit.com/r/programming/comments/1kxt7w/parallel_roguelike_levgen_benchmarks_rust_go_d/
Awesome, upboat!
I am mostly speculating, but in the past few months I subjectively
perceive we've turned a corner
Not really intrinsic to the language (syntactically), but there is
the "soft realtime GC", meaning you can control when and for how
long the gc can do the collecting. Sounds like a lovely feature
for games.
http://nimrod-code.org/gc.html
On 8/23/2013 6:48 AM, bearophile wrote:
The author of the serial language comparison has now created a simple parallel
comparison:
http://togototo.wordpress.com/2013/08/23/benchmarks-round-two-parallel-go-rust-d-scala-and-nimrod/
And note how well D did in the speed tests!
H. S. Teoh:
Seriously, I don't understand what's with this obsession with
line count metrics.
...
Frankly, the fact that line counts are used at all has already
decremented the author's credibility for me.
I agree with you. When you show a table to people, and that
table is ranked according
I like it and see an interesting mix of concepts in Nimrod.
That said, I didn't and still don't see the major breakthrough or
value of {} vs. begin/end vs. Python style. While I agree that
Python enforces some visual style I also see that this question
always comes down to personal philosophy and
On Fri, 23 Aug 2013 09:48:54 -0700, H. S. Teoh wrote:
> Seriously, I don't understand what's with this obsession with line count
> metrics. Here's a 2-line version of the above code:
>
> struct Tile { int X = void; int Y = void; int T = void; }
> struct Room { int X = void; int Y = void; int W =
On Fri, Aug 23, 2013 at 03:48:38PM +0200, bearophile wrote:
> The author of the serial language comparison has now created a
> simple parallel comparison:
>
> http://togototo.wordpress.com/2013/08/23/benchmarks-round-two-parallel-go-rust-d-scala-and-nimrod/
[...]
> Also the author keeps changing t
The missing link to the Reddit thread:
http://www.reddit.com/r/programming/comments/1kxt7w/parallel_roguelike_levgen_benchmarks_rust_go_d/
Bye,
bearophile
The author of the serial language comparison has now created a
simple parallel comparison:
http://togototo.wordpress.com/2013/08/23/benchmarks-round-two-parallel-go-rust-d-scala-and-nimrod/
From the blog post:
but for the D and Rust implementations only the single-threaded
portion was written