On Fri, 04 Mar 2011 15:53:56 -0500, dsimcha wrote:
But then the official "judgement day" will be April Fool's Day. I don't want
anyone thinking std.parallelism is an April Fool's joke.
IIRC, that day is reserved for the big release of preprocessor
macros for D. We'll have to find
On Friday, March 04, 2011 12:53:56 dsimcha wrote:
> == Quote from Lars T. Kyllingstad (public@kyllingen.NOSPAMnet)'s article
>
> > On Fri, 04 Mar 2011 18:34:39 +0000, dsimcha wrote:
> > > == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s
> > > article
> > >
> > >> On 3/4/11 5:32
On Fri, 04 Mar 2011 20:53:56 +0000, dsimcha wrote:
> == Quote from Lars T. Kyllingstad (public@kyllingen.NOSPAMnet)'s article
>> On Fri, 04 Mar 2011 18:34:39 +0000, dsimcha wrote:
>> > == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s
>> > article
>> >> On 3/4/11 5:32 AM, Lars T.
== Quote from Lars T. Kyllingstad (public@kyllingen.NOSPAMnet)'s article
> On Fri, 04 Mar 2011 18:34:39 +0000, dsimcha wrote:
> > == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s
> > article
> >> On 3/4/11 5:32 AM, Lars T. Kyllingstad wrote:
> >> > On Tue, 01 Mar 2011 16:23:43 +0
On Fri, 04 Mar 2011 18:34:39 +0000, dsimcha wrote:
> == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s
> article
>> On 3/4/11 5:32 AM, Lars T. Kyllingstad wrote:
>> > On Tue, 01 Mar 2011 16:23:43 +0000, dsimcha wrote:
>> >
>> >> Ok, so that's one issue to cross off the list. To
On Fri, 2011-03-04 at 11:27 -0800, Jonathan M Davis wrote:
[ . . . ]
> > Presumably this is a four-state vote:
> >
> > +1 approve
> > 0 cannot decide
> > -1 disapprove
> > -- no opinion
> >
> > Anyone not emailing is deemed to have cast a -- vote, all of which are
> > automatically
On Friday, March 04, 2011 11:12:00 Russel Winder wrote:
> On Fri, 2011-03-04 at 10:10 -0800, Jonathan M Davis wrote:
> [ . . . ]
>
> > We've never really discussed that. Thus far, anyone who posted on the
> > newsgroup could vote. Now, if there were a bunch of votes from unknown
> > folks and that
On Fri, 2011-03-04 at 10:10 -0800, Jonathan M Davis wrote:
[ . . . ]
> We've never really discussed that. Thus far, anyone who posted on the
> newsgroup could vote. Now, if there were a bunch of votes from unknown folks and that
> definitely shifted the vote, then I would fully expect those vo
On 3/4/11 12:34 PM, dsimcha wrote:
This sounds reasonable. Should I be doing anything besides following the thread
and reacting accordingly?
Basically yes. Here's a good set of notes:
http://www.boost.org/community/reviews.html#Review_Manager
Don't forget that the ultimate accept/reject deci
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
> On 3/4/11 5:32 AM, Lars T. Kyllingstad wrote:
> > On Tue, 01 Mar 2011 16:23:43 +0000, dsimcha wrote:
> >
> >> Ok, so that's one issue to cross off the list. To summarize the
> >> discussion so far, most of it's revolved
On Friday, March 04, 2011 09:52:17 Russel Winder wrote:
> On Fri, 2011-03-04 at 09:27 -0600, Andrei Alexandrescu wrote:
> [ . . . ]
>
> > > - We give it one more week for the final review, starting today, 4
> > > March. - If this review does not lead to major API changes, we start
> > > the vote n
On 3/4/11 11:52 AM, Russel Winder wrote:
On Fri, 2011-03-04 at 09:27 -0600, Andrei Alexandrescu wrote:
[ . . . ]
- We give it one more week for the final review, starting today, 4 March.
- If this review does not lead to major API changes, we start the vote
next Friday, 11 March. Vote closes af
On Fri, 2011-03-04 at 09:27 -0600, Andrei Alexandrescu wrote:
[ . . . ]
> > - We give it one more week for the final review, starting today, 4 March.
> > - If this review does not lead to major API changes, we start the vote
> > next Friday, 11 March. Vote closes after one week, 18 March.
> >
> >
On 3/4/11 5:32 AM, Lars T. Kyllingstad wrote:
On Tue, 01 Mar 2011 16:23:43 +0000, dsimcha wrote:
Ok, so that's one issue to cross off the list. To summarize the
discussion so far, most of it's revolved around the issue of
automatically determining how many CPUs are available and therefore how
On Tue, 01 Mar 2011 16:23:43 +0000, dsimcha wrote:
> Ok, so that's one issue to cross off the list. To summarize the
> discussion so far, most of it's revolved around the issue of
> automatically determining how many CPUs are available and therefore how
> many threads the default pool should have
On Tue, 2011-03-01 at 13:06 -0500, jasonw wrote:
> dsimcha Wrote:
>
> > Ok, so that's one issue to cross off the list. To summarize the discussion
> > so far, most of it's revolved around the issue of automatically determining
> > how many CPUs are available and therefore how many threa
On 01.03.2011 20:19, dsimcha wrote:
> == Quote from jasonw (u...@webmails.org)'s article
>> dsimcha Wrote:
>>> Ok, so that's one issue to cross off the list. To summarize the discussion
>>> so far, most of it's revolved around the issue of automatically determining
>>> how many CPUs ar
== Quote from jasonw (u...@webmails.org)'s article
> dsimcha Wrote:
> > Ok, so that's one issue to cross off the list. To summarize the discussion
> > so far, most of it's revolved around the issue of automatically determining
> > how many CPUs are available and therefore how many thread
dsimcha Wrote:
> Ok, so that's one issue to cross off the list. To summarize the discussion so
> far, most of it's revolved around the issue of automatically determining how
> many CPUs are available and therefore how many threads the default pool should
> have.
> Previously, std.parallelism
Ok, so that's one issue to cross off the list. To summarize the discussion so
far, most of it's revolved around the issue of automatically determining how
many CPUs are available and therefore how many threads the default pool should have.
Previously, std.parallelism had been using core.cpuid for
On Mon, 2011-02-28 at 18:54 -0500, dsimcha wrote:
> On 2/28/2011 10:14 AM, Russel Winder wrote:
> >
> >> This code is tested (at least on my hardware) on Windows 7 and Ubuntu
> >> 10.10 in both 32- and 64-bit mode. I did not test on Mac OS because I don't
> >> own any such hardware, though it **should*
On 2/28/2011 10:14 AM, Russel Winder wrote:
This code is tested (at least on my hardware) on Windows 7 and Ubuntu
10.10 in both 32- and 64-bit mode. I did not test on Mac OS because I don't
own any such hardware, though it **should** work because Mac OS is also
POSIX. Someone please confirm.
std
On Monday 28 February 2011 06:39:02 dsimcha wrote:
> On 2/28/2011 7:22 AM, Don wrote:
> > Russel Winder wrote:
> >> I accept your argument about core.cpuid and so will not investigate it
> >> further, other than to say it needs to work on 64-bit processors as well
> >> as 32-bit ones. The campaign
== Quote from Russel Winder (rus...@russel.org.uk)'s article
> std.parallelism.d fails to compile on Mac OS X 32-bit:
Crap. Didn't notice that half of unistd.d is in version(Linux) blocks. Will fix
later, but don't have time now and am waiting on a stackoverflow answer to
figure out where the
On Mon, 2011-02-28 at 09:39 -0500, dsimcha wrote:
[ . . . ]
> Done. This was actually much easier than I thought. I didn't
> document/expose it, though, because I didn't put any thought into
> creating an API for it. I just implemented the bare minimum to make
> std.parallelism work properly.
On 2/28/2011 7:22 AM, Don wrote:
Russel Winder wrote:
I accept your argument about core.cpuid and so will not investigate it
further, other than to say it needs to work on 64-bit processors as well
as 32-bit ones. The campaign now must be to have an OS query capability
to find the number of proc
Russel Winder wrote:
On Mon, 2011-02-28 at 10:41 +0100, Don wrote:
[ . . . ]
As the name says, it is * cores per CPU *. That is _not_ the same as the
total number of cores in the machine.
I guess then the missing extension is to have a function that returns an
array of processor references so
On Mon, 2011-02-28 at 10:41 +0100, Don wrote:
[ . . . ]
> As the name says, it is * cores per CPU *. That is _not_ the same as the
> total number of cores in the machine.
I guess then the missing extension is to have a function that returns an
array of processor references so that the core count
On 28.02.2011 10:41, Don wrote:
Russel Winder wrote:
David,
On Sun, 2011-02-27 at 10:40 -0500, dsimcha wrote:
I realized the obvious kludge and "fixed" this. Now, all benchmarks
take a --nCpu command line argument that allows you to set the number
of cores manually. This is an absolute must
Russel Winder wrote:
David,
On Sun, 2011-02-27 at 10:40 -0500, dsimcha wrote:
I realized the obvious kludge and "fixed" this. Now, all benchmarks
take a --nCpu command line argument that allows you to set the number of
cores manually. This is an absolute must if running in 64-bit mode. If you
don'
I've looked into this more. I realized that I'm only able to reproduce
it when running Linux in a VM on top of Windows. When I reboot and run
my Linux distro in bare metal instead, I get decent (but not linear)
speedups on the matrix benchmark. I'm guessing this is due to things
like locking
David,
On Sun, 2011-02-27 at 09:48 -0500, dsimcha wrote:
[ . . . ]
> Can you please re-run the benchmark to make sure that this isn't just a
> one-time anomaly? I can't seem to make the parallel matrix inversion
> run slower than serial on my hardware, even with ridiculous tuning
> parameters
David,
On Sun, 2011-02-27 at 10:40 -0500, dsimcha wrote:
> I realized the obvious kludge and "fixed" this. Now, all benchmarks
> take a --nCpu command line argument that allows you to set the number of
> cores manually. This is an absolute must if running in 64-bit mode. If you
> don't set this, the
On Sun, 2011-02-27 at 11:36 -0500, dsimcha wrote:
[ . . . ]
> figured out why. I think it's related to my Posix workaround for Bug
> 3753 (http://d.puremagic.com/issues/show_bug.cgi?id=3753). This
> workaround causes GC heap allocations to occur in a loop inside the
[ . . . ]
> list, etc.), bu
David,
On Sun, 2011-02-27 at 09:48 -0500, dsimcha wrote:
[ . . . ]
> Can you please re-run the benchmark to make sure that this isn't just a
> one-time anomaly? I can't seem to make the parallel matrix inversion
> run slower than serial on my hardware, even with ridiculous tuning
> parameters
On 2/27/2011 9:48 AM, dsimcha wrote:
On 2/27/2011 8:03 AM, Russel Winder wrote:
32-bit mode on an 8-core (twin Xeon) Linux box. That core.cpuid bug
really, really sucks.
I see matrix inversion takes longer with 4 cores than with 1!
Actually, I am able to reproduce this, but only on Linux, an
On 2/26/2011 4:13 PM, dsimcha wrote:
One last note: Due to Bug 5612
(http://d.puremagic.com/issues/show_bug.cgi?id=5612), the benchmarks
don't work on 64-bit because core.cpuid won't realize that your CPU is
multicore. There are two ways around this. One is to use 32-bit mode.
The other is to cha
On 2/27/2011 8:03 AM, Russel Winder wrote:
32-bit mode on an 8-core (twin Xeon) Linux box. That core.cpuid bug
really, really sucks.
I see matrix inversion takes longer with 4 cores than with 1!
Can you please re-run the benchmark to make sure that this isn't just a
one-time anomaly? I can't
32-bit mode on an 8-core (twin Xeon) Linux box. That core.cpuid bug
really, really sucks.
I see matrix inversion takes longer with 4 cores than with 1!
|> scons runall
/usr/bin/python /home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py
runall
scons: Reading SConscript files ...
scons:
On an ancient 32-bit dual core Mac Mini:
|> scons runall
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
/home/Checkouts/Mercurial/SCons/bootstrap/src/script/scons.py runall
scons: Reading SConscript files ...
scons: done reading SConscript f
On Sat, 2011-02-26 at 16:13 -0500, dsimcha wrote:
[ . . . ]
> One last note: Due to Bug 5612
> (http://d.puremagic.com/issues/show_bug.cgi?id=5612), the benchmarks
> don't work on 64-bit because core.cpuid won't realize that your CPU is
> multicore. There are two ways around this. One is to u
On 2/26/2011 6:08 PM, Andrej Mitrovic wrote:
The example code is quite simple to digest. The makeAngel name is funny. :p
I wonder how this compares to other languages.
Should the return values "Task!(run,TypeTuple!(F,Args))" and
"Task!(run,TypeTuple!(F,Args))*" be exposed like that? I'd maybe v
The example code is quite simple to digest. The makeAngel name is funny. :p
I wonder how this compares to other languages.
Should the return values "Task!(run,TypeTuple!(F,Args))" and
"Task!(run,TypeTuple!(F,Args))*" be exposed like that? I'd maybe vote
for auto on this one, if possible. Although
On 2/26/11 3:13 PM, dsimcha wrote:
I've taken care of all of the issues Andrei mentioned a while back with
regard to std.parallelism. I've moved the repository to Github
(https://github.com/dsimcha/std.parallelism/wiki), updated/improved the
documentation
(http://cis.jhu.edu/~dsimcha/d/phobos/std
I have no idea why the euclidean benchmark shows a superlinear speedup
without -release, though I'm able to reproduce this on my box. Must
have something to do with std.algorithm's use of asserts or something.
As far as operating systems, I'm glad you tested on XP32. One thing
that can make
Without release, only the euclidean benchmark shows a more dramatic
speed difference:
Serial reduce: 6298 milliseconds.
Parallel reduce with 4 cores: 567 milliseconds.
I forgot to mention I'm on XP32. I could test these on a virtualized
Linux, if that's worth testing.
Some results on an Athlon II X4, 2.8 GHz (quad-core):
https://gist.github.com/845676
I've taken care of all of the issues Andrei mentioned a while back with
regard to std.parallelism. I've moved the repository to Github
(https://github.com/dsimcha/std.parallelism/wiki), updated/improved the
documentation
(http://cis.jhu.edu/~dsimcha/d/phobos/std_parallelism.html), cleaned up