On Wednesday, 27 June 2018 at 06:47:46 UTC, Manu wrote:
This is some seriously good news for GDC. Awesome stuff guys!
Agreed!
On Tue., 26 Jun. 2018, 11:45 am Iain Buclaw via Digitalmars-d, <
digitalmars-d@puremagic.com> wrote:
> On 26 June 2018 at 20:07, Manu via Digitalmars-d
> wrote:
> > On Tue, 26 Jun 2018 at 10:43, Iain Buclaw via Digitalmars-d
> > wrote:
> >>
> >> On 26 June 2018 at 19:41, Manu via Digitalmars-d
>
On Tuesday, 26 June 2018 at 02:20:37 UTC, Manu wrote:
On Mon, 25 Jun 2018 at 19:10, Manu wrote:
Some code:
-
struct Entity
{
    enum NumSystems = 4;
    struct SystemData
    {
        uint start, length;
    }
    SystemData[NumSystems] systemData;
    @property uint systemBi
either.
> My point is, it's a really bad thing to present to users. DMD should
> really care about that impression.
I think that it mainly comes down to priorities, and performance is not at
the top of the list for the work being done on dmd. It's desirable to be
sure, but with eve
On Tuesday, 26 June 2018 at 17:38:42 UTC, Manu wrote:
I know, but it's still the reference compiler, and it should at least
do a reasonable job at the kind of D code that it's *recommended* that
users write.
I get your point, but IMO it's all about efficient allocation of
the manpower we hav
On 26 June 2018 at 20:07, Manu via Digitalmars-d
wrote:
> On Tue, 26 Jun 2018 at 10:43, Iain Buclaw via Digitalmars-d
> wrote:
>>
>> On 26 June 2018 at 19:41, Manu via Digitalmars-d
>> wrote:
>> > On Mon, 25 Jun 2018 at 20:50, Nicholas Wilson via Digitalmars-d
>> > wrote:
>> >>
>> >> Then use L
On 26 June 2018 at 20:26, Eugene Wissner via Digitalmars-d
wrote:
> On Tuesday, 26 June 2018 at 18:07:56 UTC, Manu wrote:
>>
>> On Tue, 26 Jun 2018 at 10:43, Iain Buclaw via Digitalmars-d
>> wrote:
>>>
>>>
>>> On 26 June 2018 at 19:41, Manu via Digitalmars-d
>>> wrote:
>>> > On Mon, 25 Jun 2018
On Tuesday, 26 June 2018 at 18:07:56 UTC, Manu wrote:
On Tue, 26 Jun 2018 at 10:43, Iain Buclaw via Digitalmars-d
wrote:
On 26 June 2018 at 19:41, Manu via Digitalmars-d
wrote:
> On Mon, 25 Jun 2018 at 20:50, Nicholas Wilson via
> Digitalmars-d wrote:
>>
>> Then use LDC! ;)
>
> Keep LDC u
On Tue, 26 Jun 2018 at 10:43, Iain Buclaw via Digitalmars-d
wrote:
>
> On 26 June 2018 at 19:41, Manu via Digitalmars-d
> wrote:
> > On Mon, 25 Jun 2018 at 20:50, Nicholas Wilson via Digitalmars-d
> > wrote:
> >>
> >> Then use LDC! ;)
> >
> > Keep LDC up to date with DMD master daily! ;)
>
> Lik
On 26 June 2018 at 19:41, Manu via Digitalmars-d
wrote:
> On Mon, 25 Jun 2018 at 20:50, Nicholas Wilson via Digitalmars-d
> wrote:
>>
>> Then use LDC! ;)
>
> Keep LDC up to date with DMD master daily! ;)
Like what GDC is doing (almost) ;-)
Iain.
On Mon, 25 Jun 2018 at 20:50, Nicholas Wilson via Digitalmars-d
wrote:
>
> Then use LDC! ;)
Keep LDC up to date with DMD master daily! ;)
e to use
> ldc instead of dmd if you really care about the performance of the generated
> binary.
I'm using unreleased 2.081, which isn't in LDC yet. Also, LDC seems to
have more problems with debuginfo than DMD.
Once LDC is on 2.081, I might have to flood their bugtracker with
debu
When I see "DMD" and "performance" in the same sentence, my first
reaction is, "why aren't you using LDC or GDC"?
Seriously, doing performance measurements with DMD is a waste of time,
because its optimizer has been proven time and again to be suboptim
On Tuesday, 26 June 2018 at 02:10:17 UTC, Manu wrote:
Some code:
-
struct Entity
{
    enum NumSystems = 4;
    struct SystemData
    {
        uint start, length;
    }
    SystemData[NumSystems] systemData;
    @property uint systemBits() const { return systemData[].map!(e =>
On Tuesday, 26 June 2018 at 02:20:37 UTC, Manu wrote:
I optimised another major gotcha eating perf, and now this
issue is taking 13% of my entire work time... bummer.
Without disagreeing with you, ldc2 optimizes this fine.
https://run.dlang.io/is/NJct6U
const @property uint onlineapp.Enti
On Tuesday, 26 June 2018 at 02:10:17 UTC, Manu wrote:
[snip]
@property uint systemBits() const { return systemData[].map!(e => e.length).sum; }
[snip]
This property sums 4 ints... that should be insanely fast. It should
also be something like 5-8 lines of asm.
Turns out, that call to sum(
ll-tree.
dmd's inliner is notoriously poor, but I don't know how much effort has
really been put into fixing the problem. I do recall it being argued several
times that it should only be in the backend and that there shouldn't be
one in the frontend, but either way, the typical
On Mon, 25 Jun 2018 at 19:10, Manu wrote:
>
> Some code:
> -
> struct Entity
> {
>     enum NumSystems = 4;
>     struct SystemData
>     {
>         uint start, length;
>     }
>     SystemData[NumSystems] systemData;
>     @property uint systemBits() const { return systemData[].m
Some code:
-
struct Entity
{
    enum NumSystems = 4;
    struct SystemData
    {
        uint start, length;
    }
    SystemData[NumSystems] systemData;
    @property uint systemBits() const { return systemData[].map!(e => e.length).sum; }
}
Entity e;
e.systemBits(); // <- call th
On Saturday, 21 April 2018 at 20:54:32 UTC, Steven Schveighoffer
wrote:
I'm all for a string type and auto-decoding, so we can get rid
of auto-decoding for char arrays.
I've floated the idea of having the String type not be a range in
order to solve this problem once and for all. In order to g
On Saturday, 21 April 2018 at 19:15:58 UTC, Steven Schveighoffer
wrote:
An RCString could have slicing just like C#.
And it doesn't prevent "raw slicing" with char arrays.
FWIW, I support having a string library type and have been
advocating for it for years (I'd love to have my char arrays
b
On 4/21/18 4:31 PM, Jack Stouffer wrote:
On Saturday, 21 April 2018 at 16:08:13 UTC, Steven Schveighoffer wrote:
Since when?
Since Andrei came up with the RCStr concept. Even a non-RC String type
would still solve our auto decoding problem while also allowing us to do
SSO.
Rereading your po
On Saturday, 21 April 2018 at 16:08:13 UTC, Steven Schveighoffer
wrote:
Since when?
-Steve
Since Andrei came up with the RCStr concept. Even a non-RC String
type would still solve our auto decoding problem while also
allowing us to do SSO.
around the framework. For people not
familiar with C#, Span is similar to a D array slice.
https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/
And we’re trying to move towards a string library type and away from
raw slices :)
Since when?
At
not familiar with C#, Span is similar to a D array
slice.
https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/
And we’re trying to move towards a string library type and
away from raw slices :)
Since when?
-Steve
At least 2 1/2 years:
https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/
And we’re trying to move towards a string library type and away from raw
slices :)
Since when?
-Steve
/performance-improvements-in-net-core-2-1/
And we’re trying to move towards a string library type and away
from raw slices :)
.NET Core 2.1 was announced, with emphasis on using Span
instead of classic String class all around the framework. For
people not familiar with C#, Span is similar to a D array
slice.
https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/
Thanks, I will try that.
On Mon, Dec 11, 2017 at 7:34 PM, German Diago via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:
> On Thursday, 7 December 2017 at 14:09:35 UTC, Steven Schveighoffer wrote:
>
>> On 12/7/17 6:46 AM, Daniel Kozak wrote:
>>
>>> Not much helpful, still does not know whic
On Thursday, 7 December 2017 at 14:09:35 UTC, Steven
Schveighoffer wrote:
On 12/7/17 6:46 AM, Daniel Kozak wrote:
Not much helpful; I still do not know which compiler flags
were used, or how I can reproduce this. It would be nice
to have a shell script that compiles and runs it i
On 12/9/17 12:11 PM, unleashy wrote:
On Saturday, 9 December 2017 at 14:00:16 UTC, Steven Schveighoffer wrote:
Yes, it would be nice to have a "If you do this in C, here's how you
do it in D" guide. It could be part of the tour, for sure. Just tag it
intermediate.
What about this? https://dla
On Saturday, 9 December 2017 at 14:00:16 UTC, Steven
Schveighoffer wrote:
Yes, it would be nice to have a "If you do this in C, here's
how you do it in D" guide. It could be part of the tour, for
sure. Just tag it intermediate.
What about this? https://dlang.org/ctod.html
On 12/9/17 5:55 AM, Kagamin wrote:
On Thursday, 7 December 2017 at 21:38:57 UTC, Daniel Kozak wrote:
Yes, using FILE* directly could be the way. But using file.rawRead is
still possible; it is better to use a static array with length one.
This can reflect the absence of middle-level resources li
ation is
right there.
Daniel's point was that Appender is more akin to std::string
(which doesn't have the benefit of having language-defined
array operations). If the blogger used Appender, he would have
had better performance.
-Steve
As a newcomer, I find these very educationa
On Thursday, 7 December 2017 at 21:38:57 UTC, Daniel Kozak wrote:
Yes, using FILE* directly could be the way. But using
file.rawRead is still possible; it is better to use a static
array with length one.
This can reflect the absence of middle-level resources like basic
optimization techniques f
ation is
right there.
Daniel's point was that Appender is more akin to std::string
(which doesn't have the benefit of having language-defined
array operations). If the blogger used Appender, he would have
had better performance.
-Steve
Thanks for the explanation, Steve.
On 12/7/17 6:07 PM, Iain Buclaw wrote:
On 7 December 2017 at 23:39, Daniel Kozak wrote:
The other slowdown is caused by concatenation. Because std::string += is
more similar to std.array.(Ref)Appender
Correct. The semantics of ~= mean that the memory is copied around to
a new allocation ev
kin to std::string (which
doesn't have the benefit of having language-defined array operations). If
the blogger used Appender, he would have had better performance.
-Steve
On 12/07/2017 03:07 PM, Iain Buclaw wrote:
On 7 December 2017 at 23:39, Daniel Kozak wrote:
The other slowdown is caused by concatenation. Because std::string += is
more similar to std.array.(Ref)Appender
Correct. The semantics of ~= mean that the memory is copied around to
a new allocatio
On Thursday, 7 December 2017 at 22:39:44 UTC, Daniel Kozak wrote:
The other slowdown is caused by concatenation. Because
std::string += is more similar to std.array.(Ref)Appender
Wait, I thought appenders performed better than concatenation. Is
that not true, or did I just misunderstand your p
On 7 December 2017 at 23:39, Daniel Kozak wrote:
> The other slowdown is caused by concatenation. Because std::string += is
> more similar to std.array.(Ref)Appender
>
Correct. The semantics of ~= mean that the memory is copied around to
a new allocation every time (unless the array is marked
a
The other slowdown is caused by concatenation. Because std::string += is
more similar to std.array.(Ref)Appender
On 7 Dec 2017 at 11:33 PM, "Daniel Kozak" wrote:
> Yes, it reuses the same pointer. But it's still a little bit slower than
> accessing stack memory
>
> On 7 Dec 2017 at 11:04
Yes, it reuses the same pointer. But it's still a little bit slower than
accessing stack memory
On 7 Dec 2017 at 11:04 PM, "Iain Buclaw via Digitalmars-d" <
digitalmars-d@puremagic.com> wrote:
On 7 December 2017 at 20:56, Steven Schveighoffer via Digitalmars-d
wrote:
> On 12/7/17 1:26 PM,
On 7 December 2017 at 20:56, Steven Schveighoffer via Digitalmars-d
wrote:
> On 12/7/17 1:26 PM, Daniel Kozak wrote:
>>
>> This is not about writing the best D code. It is about similar code
>> performing the same. However, when I looked at the D code, it is not a good port of
>> the C/C++. He made many mistake
Yes, using FILE* directly could be the way. But using file.rawRead is still
possible; it is better to use a static array with length one. The other
problem is with the string. It would make sense to make it out instead of ref,
change it to an empty string, and use RefAppender.
On 7 Dec 2017 at 9:00 PM,
On 12/7/17 1:26 PM, Daniel Kozak wrote:
This is not about writing the best D code. It is about similar code
performing the same. However, when I looked at the D code, it is not a good
port of the C/C++. He made many mistakes which make it slower than the C/C++
counterpart. One example is the read_one_line function:
This is not about writing the best D code. It is about similar code
performing the same. However, when I looked at the D code, it is not a good
port of the C/C++. He made many mistakes which make it slower than the C/C++
counterpart. One example is the read_one_line function:
C++: https://github.com/jpakkane/pkg-config/
So who is going to do the experiment and write the best D code to solve
the problem, write the rebuttal article, and post it?
It is good to get emotion going on the email list, but without external
action D gets no positive marketing.
--
Russel.
===
Dr Rus
On 12/7/17 6:46 AM, Daniel Kozak wrote:
Not much helpful; I still do not know which compiler flags were
used, or how I can reproduce this. It would be nice to have a shell
script that compiles and runs it in the same manner as the original
author did.
https://github.com/jpakkane/pkg-con
http://nibblestew.blogspot.com.es/2017/12/comparing-c-c-and-d-performance-with.html
[...]
I do wonder what the results would look like with clang and ldc
though, particularly since the version of gdc in Ubuntu is
going to be pretty old.
Yes and the GDC version also explains the 7 leaks. These are
qu
t; > Jussi Pakkanen (one of the meson build system creators) has
> > written a post comparing C, C++ and D. Worth a read.
> >
> > http://nibblestew.blogspot.com.es/2017/12/comparing-c-c-and-d-performance-with.html
>
> Honestly, I find the results a bit depressing
t 1:55 AM, Antonio Corbi via Digitalmars-d <
>> digitalmars-d@puremagic.com> wrote:
>>
>>> Hello all,
>>>
>>> Jussi Pakkanen (one of the meson build system creators) has written a
>>> post comparing C, C++ and D. Worth a read.
>>>
>
On Thursday, December 07, 2017 09:55:56 Antonio Corbi via Digitalmars-d
wrote:
> Hello all,
>
> Jussi Pakkanen (one of the meson build system creators) has
> written a post comparing C, C++ and D. Worth a read.
>
> http://nibblestew.blogspot.com.es/2017/12/comparing-c-c
of the meson build system creators) has
written a post comparing C, C++ and D. Worth a read.
http://nibblestew.blogspot.com.es/2017/12/comparing-c-c-and-d-performance-with.html
Antonio.
The code is in the github repo mentioned there. It has several
branches.
The application is built using
t; comparing C, C++ and D. Worth a read.
>
> http://nibblestew.blogspot.com.es/2017/12/comparing-c-c-and-d-performance-with.html
>
> Antonio.
Hello all,
Jussi Pakkanen (one of the meson build system creators) has
written a post comparing C, C++ and D. Worth a read.
http://nibblestew.blogspot.com.es/2017/12/comparing-c-c-and-d-performance-with.html
Antonio.
On 9/12/2017 8:03 AM, kinke wrote:
Okay so I'm (sadly) used to every ~10th forum.dlang.org web request to take
something like 10-15 seconds to get a response (while the other ~9 are
instantaneous). But the last couple of days, the Wiki is hardly usable (editing
last night took > 1 minute for th
Okay, so I'm (sadly) used to every ~10th forum.dlang.org web
request taking something like 10-15 seconds to get a response
(while the other ~9 are instantaneous). But in the last couple of
days, the Wiki has been hardly usable (editing last night took > 1
minute for the page to reload), and Travis CI is
Kostya has updated his benchmarks today and moved from:
gdc 5.2.0 to 6.3.0
LDC 0.15.2 beta1 to 1.4.0-beta1 (LLVM 4.0.1)
https://github.com/kostya/benchmarks/commit/73e0cb0e755f8e45d79fd2083b217d107e1185a9
The results are interesting to see, as there is over a year and a half
of development between vers
On 2017-07-14 11:55, Marek wrote:
So why Ruby or Python frameworks are much faster in this benchmark?
They scale better since Ruby on Rails applications, at least, are run
using multiple processes.
--
/Jacob Carlborg
On Thursday, 6 July 2017 at 07:27:24 UTC, Marek wrote:
https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=plaintext
C++, Java and Go frameworks have very high performance. Vibe.d
is supposed to have similar performance, but in fact vibe.d
performance is very low. Why?
On 14 Jul 2017 at 11:55, Marek wrote:
On Friday, 7 July 2017 at 19:03:52 UTC, Jacob Carlborg wrote:
I think that vibe.d didn't take full advantage of multiple cores, even
when enabling threading support. Ruby, or rather Rails, applications
are usually run using multiple processes, which allows them to s
On Friday, 7 July 2017 at 19:03:52 UTC, Jacob Carlborg wrote:
I think that vibe.d didn't take full advantage of multiple cores,
even when enabling threading support. Ruby, or rather Rails,
applications are usually run using multiple processes, which
allows them to scale on a multi-core CPU. You can do t
This is great news:
http://www.phoronix.com/scan.php?page=news_item&px=glibc-malloc-thread-cache
These tests are pretty flawed, considering that the top places are
taken by event-loop libraries that have no features other
than 'respond to a request'.
On 7 Jul 2017 at 21:27, FoxyBrown wrote:
On Thursday, 6 July 2017 at 10:57:31 UTC, Sönke Ludwig wrote:
On 6 Jul 2017 at 09:27, Marek wrote:
https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=plaintext
C++, Java and Go frameworks have very high performance. V
On Thursday, 6 July 2017 at 10:57:31 UTC, Sönke Ludwig wrote:
On 6 Jul 2017 at 09:27, Marek wrote:
https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=plaintext
C++, Java and Go frameworks have very high performance. Vibe.d
is supposed to have similar performance
On 2017-07-07 20:22, Marek wrote:
What do you mean by 'scalability'? Raw Tornado or Bottle frameworks have
much better results than vibe.d. Python and Ruby have a GIL, so they can't
use threads in their standard implementations. They have much better
results anyway.
I think that vibe.d didn't tak
On Thursday, 6 July 2017 at 10:57:31 UTC, Sönke Ludwig wrote:
This is a scalability issue, which should hopefully be fixed
with 0.8.0. I'll open a PR once that is out. Basically with the
version that was used in the last benchmark round, it didn't
scale at all, and they use a server with many c
On 6 Jul 2017 at 09:27, Marek wrote:
https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=plaintext
C++, Java and Go frameworks have very high performance. Vibe.d is
supposed to have similar performance, but in fact vibe.d performance is
very low. Why?
T
https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=plaintext
C++, Java and Go frameworks have very high performance. Vibe.d is
supposed to have similar performance, but in fact vibe.d
performance is very low. Why?
On Monday, 3 July 2017 at 20:34:00 UTC, Ali Çehreli wrote:
On 07/03/2017 07:53 AM, Vladimir Panteleev wrote:
> That code is pretty old. std.process was changed to use poll
when
> available over a year ago:
>
> https://github.com/dlang/phobos/pull/4114
>
> Update your D installation?
Jonathan is
On 07/03/2017 07:53 AM, Vladimir Panteleev wrote:
> That code is pretty old. std.process was changed to use poll when
> available over a year ago:
>
> https://github.com/dlang/phobos/pull/4114
>
> Update your D installation?
Jonathan is hailing from Weka. :) As evidenced from Johan's recent
mes
On Monday, 3 July 2017 at 14:48:38 UTC, Jonathan Shamir wrote:
if (!(config & Config.inheritFDs))
{
    import core.sys.posix.sys.resource;
    rlimit r;
    getrlimit(RLIMIT_NOFILE, &r);
    foreach (i; 3 .. cast(int) r.rlim_cur) close(i);
sly this becomes a problem if the rlimit is high.
A few suggestions:
1. Make inheritFDs the default, since most people aren't aware of
the performance cost. Also most programs will run just fine if a
few fds are leaked.
1.1. Or, by default, close up to min(rlim_cur, 1024) - this
should be enoug
o that newcomers don't get
the wrong impression about the language being slow. I'd also recommend
investigating reducing GC load, as I described in my previous post, as
another angle for improving the performance of std.csv.
As for whether to validate or not: if you were to ask me,
've talked to std.csv's performance in
the past, probably with the author of the fast command line
tools.
[snip]
I'm the author of the TSV tools. I'd be happy to provide insights
I've gained from this exercise to help improve std.csv. I did
examine std.csv when I f
On Sunday, 4 June 2017 at 15:59:03 UTC, Jesse Phillips wrote:
On Sunday, 4 June 2017 at 06:15:24 UTC, H. S. Teoh wrote:
[...]
Ok, I took you up on that, I'm still skeptical:
LDC2 -O3 -release -enable-cross-module-inlining
std.csv: 12487 msecs
fastcsv (no gc): 1376 msecs
csvslicing: 3039 msec
On Sunday, 4 June 2017 at 06:15:24 UTC, H. S. Teoh wrote:
On Sun, Jun 04, 2017 at 05:41:10AM +, Jesse Phillips via
Digitalmars-d wrote:
On Saturday, 3 June 2017 at 23:18:26 UTC, bachmeier wrote:
> Do you know what happened with fastcsv [0], original thread
> [1].
>
> [0] https://github.com
On Saturday, 3 June 2017 at 04:25:27 UTC, Jesse Phillips wrote:
Even though the issue can be ignored, the overhead of parsing
to identify issues still remains. I haven't attempted to write the
algorithm assuming a proper data structure, so I don't know what
the performance would look l
f you read the data into
memory yourself.
If the file is in the file cache of the kernel, memory mapping
does not need to reload the file as it is already in memory. In
fact, calling mmap() changes only the sharing of the pages in
general. That's where most of the performance win from memor
file cache of the kernel, memory mapping
does not need to reload the file as it is already in memory. In
fact, calling mmap() changes only the sharing of the pages in
general. That's where most of the performance win from memory
mapping comes from.
This stackoverflow [1] discussion
limitations (the file has to fit in memory, no
validation is done, etc.), which are also documented up-front in the
README file. I wrote the code targeting a specific use case mentioned
by the OP of the original thread, so I do not expect or claim you will
see the same kind of performance for other u
On Saturday, 3 June 2017 at 23:18:26 UTC, bachmeier wrote:
Do you know what happened with fastcsv [0], original thread [1].
[0] https://github.com/quickfur/fastcsv
[1]
http://forum.dlang.org/post/mailman.3952.1453600915.22025.digitalmars-d-le...@puremagic.com
I do not. Rereading that in light
On Saturday, 3 June 2017 at 04:25:27 UTC, Jesse Phillips wrote:
Author here:
The discussion[1] and articles[2] around "Faster Command Line
Tools" had me trying out std.csv for the task.
Now I know std.csv isn't fast and it allocates. When I wrote my
CSV parser, I'd also left around a parser
On Saturday, 3 June 2017 at 04:25:27 UTC, Jesse Phillips wrote:
I compared these two: LDC -O3 -release
Quick note:
Keep in mind that LDC does not do cross-module inlining
(non-template functions) by default yet. It's good to check
whether you see big differences with
`-enable-cross-module-i
as up to 5 milliseconds; the thing is, startsWith was called
a whopping 384,806,160 times.
Keep in mind that the file itself has 10,512,769 rows of data
with four columns. Now I've talked to std.csv's performance in
the past, probably with the author of the fast command line
tools. Es
On Tuesday, 11 April 2017 at 18:13:11 UTC, Russel Winder wrote:
I have only the data showing that compiling and linking a GtkD
application against a shared library is a lot quicker than
against a static library.
Sure, but that might be easily fixed, and if you really want to
use shared libraries, you
On Sunday, 8 January 2017 at 13:16:29 UTC, Joseph Rushton
Wakeling wrote:
I'm asking for eyes on the problem because reducing it to a
minimal example appears non-trivial, while the bug itself looks
serious beyond its effect on this PR.
I underestimated myself :-P Minimal example is as follows
On Sunday, 8 January 2017 at 02:51:51 UTC, Andrei Alexandrescu
wrote:
This indicates a compiler bug in dmd itself, so the best
outcome for this library work would be a reduced compiler bug +
a simple library workaround. -- Andrei
Yes, that much is clear -- sorry if I wasn't clear enough myself
On 1/7/17 6:16 PM, Joseph Rushton Wakeling wrote:
The method I developed works fine with LDC, but fails with DMD: the
internal state of the generator winds up as all zeros, except for the
`State.z` parameter which mysteriously ends up at the correct value.
This would suggest that somehow the gen
On Saturday, 26 November 2016 at 20:13:36 UTC, Andrei
Alexandrescu wrote:
On 11/26/16 11:31 AM, Ilya Yaroshenko wrote:
Hey,
32-bit Mt19937 random number Generator is default in Phobos.
It is default in Mir too, except that 64-bit targets use
64-bit Mt19937 instead.
Congrats! Also thanks for
On Sunday, 11 December 2016 at 19:40:21 UTC, safety0ff wrote:
On Sunday, 11 December 2016 at 19:00:23 UTC, Stefan Koch wrote:
Just use this little program to simulate the process.
That's not really useful for understanding and making progress
on the issue.
I had a patch with improved hash
On Tuesday, 20 December 2016 at 18:51:05 UTC, Brad Anderson wrote:
Could the comma expression be contextually removed?
Specifically in return expressions as discussed initially in
this post?
Back in May a change was introduced to issue a deprecation
message for uses of the comma operator outs
On Tuesday, 20 December 2016 at 17:28:49 UTC, Stefan Koch wrote:
On Tuesday, 20 December 2016 at 17:15:53 UTC, Ilya Yaroshenko
wrote:
Are they already CTFEable? I have not seen an anounce, sorry
They have been for years now.
Of course only pointers from a CTFE context are valid at ctfe.
The
On 20.12.2016 14:47, Ilya Yaroshenko wrote:
One good thing for safety and CTFE is allowing multiple return values. In
combination with `auto ref` it is _very_ powerful:

auto ref front()
{
    // Returns 2 values, each value is returned by reference if possible
    return (a.front, b.front);
}
On Tuesday, 20 December 2016 at 15:47:38 UTC, Nordlöw wrote:
On Tuesday, 20 December 2016 at 15:40:57 UTC, Nordlöw wrote:
DIP-32 has been dormant since 2013. I've been waiting for
builtin tuples ever since I started using D.
I wonder if it might be possible to add the tuple syntax
incremental
On Tuesday, 20 December 2016 at 17:28:49 UTC, Stefan Koch wrote:
On Tuesday, 20 December 2016 at 17:15:53 UTC, Ilya Yaroshenko
wrote:
Are they already CTFEable? I have not seen an anounce, sorry
They have been for years now.
Of course only pointers from a CTFE context are valid at ctfe.
The
On Tuesday, 20 December 2016 at 17:15:53 UTC, Ilya Yaroshenko
wrote:
Are they already CTFEable? I have not seen an anounce, sorry
They have been for years now.
Of course only pointers from a CTFE context are valid at ctfe.
The new engine will support them as well, (as it will eventually
supp
On Tuesday, 20 December 2016 at 17:05:03 UTC, Stefan Koch wrote:
On Tuesday, 20 December 2016 at 16:57:34 UTC, Ilya Yaroshenko
wrote:
On Tuesday, 20 December 2016 at 16:34:04 UTC, Walter Bright
wrote:
On 12/20/2016 6:08 AM, Ilya Yaroshenko wrote:
No, tuples store either a value or a pointer. If it