On 08/20/2011 12:36 PM, Alexander Klenin wrote:
Basically, no.
Here is a quite recent "call for action" in this regard:
http://software.intel.com/en-us/blogs/2011/08/09/parallelism-as-a-first-class-citizen-in-c-and-c-the-time-has-come/
I thought C++11 would already be more advanced in that respect.
On 08/19/2011 01:53 PM, David W Noon wrote:
The 2011 C++ standard does not, but GCC and a few other compilers offer
a facility called OpenMP that parallelises loops; it works for C, C++
and FORTRAN, at least within GCC.
I do know about OpenMP and I seem to remember that there is an article
abou
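For illustration, a minimal sketch of the kind of loop OpenMP can parallelise when GCC is invoked with -fopenmp; the scale function and its vector are made-up examples, not code from the thread.

#include <cstddef>
#include <vector>

// Minimal OpenMP sketch: with g++ -fopenmp the iterations of this loop are
// divided among a team of threads; without the flag the pragma is ignored
// and the loop runs serially.
void scale(std::vector<double>& v, double factor)
{
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(v.size()); ++i)
        v[i] *= factor;
}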
On Fri, Aug 19, 2011 at 21:33, Michael Schnell wrote:
> Does C++11 also provide a more "automatic" parallel processing feature than
> just something similar to worker threads (Object Pascal TThread), maybe
> similar to Prism's "parallel loop" ?
Basically, no.
Here is a quite recent "call for action" in this regard:
http://software.intel.com/en-us/blogs/2011/08/09/parallelism-as-a-first-class-citizen-in-c-and-c-the-time-has-come/
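For comparison, a minimal sketch of what C++11 itself does offer, std::async with a std::future; the async_sum_example function and its data are made up, and this is still a per-task facility rather than a parallel loop.

#include <future>
#include <numeric>
#include <vector>

// C++11 std::async sketch: one task runs (potentially) on another thread
// and its result is collected later via a future; this is closer to a
// worker thread than to a "parallel loop".
double async_sum_example()
{
    std::vector<double> v(1000, 1.0);                    // made-up data
    std::future<double> fut = std::async(std::launch::async, [&v] {
        return std::accumulate(v.begin(), v.end(), 0.0); // runs concurrently
    });
    // ... the caller can do other work here ...
    return fut.get();                                    // wait for the result
}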
On 08/19/2011 02:15 PM, David W Noon wrote:
I might do some experiments in C# to see if the thread manager creates
threads with process scope or system scope, as it might be a bit
smarter than Java's "green threads".
I remember that there once was a version of the PThreadLib that used an
interna
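A minimal sketch of requesting system (kernel) scheduling scope for a POSIX thread, the distinction being discussed above; start_system_scope_thread and worker are made-up names, and on Linux/NPTL system scope is the only supported contention scope anyway.

#include <pthread.h>

// Placeholder thread function for the sketch.
static void* worker(void* arg) { return arg; }

// Create a thread with PTHREAD_SCOPE_SYSTEM, i.e. one the kernel schedules
// directly rather than a user-space ("green") thread.
int start_system_scope_thread(pthread_t* tid)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    int rc = pthread_create(tid, &attr, worker, nullptr);
    pthread_attr_destroy(&attr);
    return rc;
}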
On 08/19/2011 02:15 PM, David W Noon wrote:
Threads are not tied to processors.
I do know that in Linux the scheduler tries to keep a thread on the same core
if possible (even when re-scheduling the thread after preemption). This
greatly improves cache performance.
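A minimal, Linux/glibc-specific sketch of pinning a thread to a core explicitly with pthread_setaffinity_np, for cases where one does not want to rely on the scheduler's affinity heuristics; pin_current_thread_to_core is a made-up helper name and the core number is arbitrary.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to the given core so the scheduler cannot migrate
// it, keeping its working set in that core's cache.
// Returns 0 on success, an errno value otherwise (Linux/glibc only).
int pin_current_thread_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}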
This is especially true in
"manag
On 08/19/2011 02:15 PM, David W Noon wrote:
My experience with OpenMP is that it is difficult to write a loop body
large enough that context switching does not overwhelm the benefits of
parallelism.
Hmmm.
If you do a multiplication of a 100*100 Matrix you could spawn 1
threads and this wi
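A minimal sketch of one way to keep the per-thread work large relative to the threading overhead being discussed: give each std::thread a block of rows of a 100*100 matrix product rather than a single element. The names and the thread count of 4 are made up for illustration.

#include <array>
#include <functional>
#include <thread>
#include <vector>

constexpr int N = 100;
using Matrix = std::array<std::array<double, N>, N>;

// Compute rows [first, last) of c = a * b.
void multiply_rows(const Matrix& a, const Matrix& b, Matrix& c, int first, int last)
{
    for (int i = first; i < last; ++i)
        for (int j = 0; j < N; ++j) {
            double sum = 0.0;
            for (int k = 0; k < N; ++k)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
}

// Split the rows across a few threads; each thread gets a contiguous block,
// so the work per thread is far larger than the cost of starting the thread.
void multiply(const Matrix& a, const Matrix& b, Matrix& c, int num_threads = 4)
{
    std::vector<std::thread> workers;
    const int rows_per_thread = N / num_threads;
    for (int t = 0; t < num_threads; ++t) {
        const int first = t * rows_per_thread;
        const int last = (t == num_threads - 1) ? N : first + rows_per_thread;
        workers.emplace_back(multiply_rows, std::cref(a), std::cref(b),
                             std::ref(c), first, last);
    }
    for (std::thread& w : workers)
        w.join();   // wait until every block of rows has been computed
}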
On 08/19/2011 02:15 PM, David W Noon wrote:
That is not my experience. While CIL byte code interprets much faster
than Java byte code, it is still discernibly slower than native object
code.
Hmm. I don't have any personal experience with this, but from what I
read, this idea with CIL is to spli
On 19 Aug 2011, at 14:15, David W Noon wrote:
I might do some experiments in C# to see if the thread manager creates
threads with process scope or system scope, as it might be a bit
smarter than Java's "green threads".
According to wikipedia, Java stopped using the green threads model
after
On Fri, 19 Aug 2011 12:30:43 +0200, Michael Schnell wrote about Re:
[fpc-devel] C++ gets language-internal concurrency support:
>On 08/17/2011 06:49 PM, David W Noon wrote:
>> Perhaps the slower execution speed of CIL (.NET,
>> Mono) byte code masks the context switching overheads
On 08/17/2011 06:03 PM, David W Noon wrote:
I am (reasonably so, at least).
Where is the parallel aspect?
As I said, I just did a quick search in the FAQ.
Does C++11 also provide a more "automatic" parallel processing feature
than just something similar to worker threads (Object Pascal TThread), maybe
similar to Prism's "parallel loop"?
On 08/17/2011 06:49 PM, David W Noon wrote:
Perhaps the slower execution speed of CIL (.NET,
Mono) byte code masks the context switching overheads and makes this
practice look less inefficient.
I doubt that this is the case. AFAIK, CIL code is not necessarily much
slower than native code. (Of
On Wed, 17 Aug 2011 18:14:03 +0200 (CEST), Marco van de Voort wrote
about Re: [fpc-devel] C++ gets language-internal concurrency support:
> In our previous episode, David W Noon said:
>
> > The threads t1 and t2 execute in parallel. Moreover, they will
> > execute in parallel
In our previous episode, David W Noon said:
> The threads t1 and t2 execute in parallel. Moreover, they will execute
> in parallel with any code that occurs between the declarations that
> start the threads and the join() method calls that synchronize them
> with the invoking thread. On a SMP sys
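A minimal sketch of that point: the statement placed between constructing the std::thread and calling join() overlaps with the thread's work; the lambda body is a made-up placeholder.

#include <iostream>
#include <thread>

int main()
{
    std::thread t1{[] {
        // background work runs here, in its own thread
    }};
    std::cout << "main keeps running while t1 works\n"; // overlaps with t1
    t1.join();  // synchronize with the invoking thread
}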
On Wed, 17 Aug 2011 16:24:35 +0200 (CEST), Marco van de Voort wrote
about Re: [fpc-devel] C++ gets language-internal concurrency support:
> In our previous episode, Michael Schnell said:
[snip]
>> int main()
>> {
>> std::thread t1{std::bind(f,some_vec)};
>>
On 08/17/2011 04:24 PM, Marco van de Voort wrote:
I'm no C++ expert, but:
Where is the parallel aspect? It looks more like a shorthand to spawn a
thread to evaluate an expression/closure/function call, and then wait on it
using .join().
Same here. I just did a short search in the FAQ (
http://
In our previous episode, Michael Schnell said:
> Some c++11 code doing parallel execution:
>
> void f(vector<double>&);
>
> struct F {
> vector<double>& v;
> F(vector<double>& vv) :v{vv} { }
> void operator()();
> };
>
> int main()
> {
>
Some C++11 code doing parallel execution:

void f(vector<double>&);

struct F {
    vector<double>& v;
    F(vector<double>& vv) :v{vv} { }
    void operator()();                      // F's call operator
};

int main()
{
    std::thread t1{std::bind(f,some_vec)};  // f(some_vec) executes in a separate thread
    std::thread t2{F(some_vec)};            // F(some_vec)() executes in a separate thread

    t1.join();
    t2.join();                              // wait for t1 and t2 to finish
}
http://www.linuxfordevices.com/c/a/News/C11-standard-approved/
Prism already does have "parallel loops" and "future variables" for that
purpose (but of course they are only usable with a .NET/Mono framework, as
the implementation is done there).
I remember discussions about providing something on that