Hello everyone,
Attached is the patch Albert Cohen was referring to.
Middle-end selection is performed by marking the regions of the source
code that should be compiled for a specific ISA, using pragmas such
as:
#pragma target
The above can be reset by simply doing:
#pragma ta
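As a rough sketch of how such region marking might look in user code (the
pragma spellings below are assumed for illustration; the reset pragma is cut
off above, so these names are not the patch's real interface):

  // Illustrative only: hypothetical pragma spellings for per-region ISA
  // selection, loosely following the "#pragma target" idea above.

  // Region to be compiled for a hypothetical accelerator ISA.
  #pragma target accel                  // assumed spelling
  void scale(float *out, const float *in, unsigned n, float k)
  {
    for (unsigned i = 0; i < n; ++i)
      out[i] = in[i] * k;
  }
  #pragma target reset                  // assumed spelling of the reset pragma

  // Code after the reset is compiled for the default (host) ISA as usual.
  void host_entry(float *out, const float *in, unsigned n)
  {
    scale(out, in, n, 2.0f);
  }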
> The most visible ongoing effort is the conversion from target macros
> to target hooks (which is incomplete). The goal was to allow "hot
> swapping" of backends. This is still the most obvious, most complete,
> and least unappealing (from a technical POV) approach IMHO. But Kaveh
> showed at on
On Wed, 18 Mar 2009, Joern Rennecke wrote:
> It would be easier to implement if C++ with virtual member functions
> were allowed for the target vector. Then, where this is not already
> readily available, we can tweak the optimizers and/or the code so that
> we obtain de-virtualization and in
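To make the suggestion concrete, here is a minimal sketch (not GCC code;
class and member names are invented) of a target vector built from virtual
member functions:

  // Illustrative sketch only: the "target vector" as an abstract base
  // class, with one derived object per backend. Not GCC's gcc_target.
  struct target_vector {
    virtual ~target_vector() {}
    virtual int branch_cost(bool speed) const = 0;
    virtual bool legitimate_constant(long value) const = 0;
  };

  struct x86_target : target_vector {
    int branch_cost(bool speed) const { return speed ? 3 : 1; }
    bool legitimate_constant(long) const { return true; }
  };

  struct accel_target : target_vector {
    int branch_cost(bool speed) const { return speed ? 8 : 2; }
    bool legitimate_constant(long v) const { return v >= -512 && v < 512; }
  };

  // The compiler would hold a pointer that can be repointed at another
  // backend; all users go through the virtual interface.
  static x86_target   the_x86_target;
  static accel_target the_accel_target;
  target_vector *targetm_cxx = &the_x86_target;

  int example_use(bool speed) { return targetm_cxx->branch_cost(speed); }

Where the concrete backend is statically known, such calls can in principle
be de-virtualized, which is the property the quoted message is after.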
Quoting Steven Bosscher:
> The most visible ongoing effort is the conversion from target macros
> to target hooks (which is incomplete). The goal was to allow "hot
> swapping" of backends. This is still the most obvious, most complete,
Yes, I initially thought about this one.
> and least unappealing
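For readers following along, the difference between the two styles can be
sketched as follows (names invented, far coarser than GCC's real targetm):

  // Target *macro*: baked in by the preprocessor, so the backend is
  // fixed when the compiler itself is built.
  #define TARGET_BRANCH_COST 3

  // Target *hook*: each backend fills in a structure of function
  // pointers, which could in principle be swapped at run time.
  struct target_hooks {
    int  (*branch_cost)(bool speed);
    bool (*cannot_modify_jumps)(void);
  };

  static int  x86_branch_cost(bool speed) { return speed ? 3 : 1; }
  static bool x86_cannot_modify_jumps(void) { return false; }

  static const struct target_hooks x86_hooks = {
    x86_branch_cost,
    x86_cannot_modify_jumps,
  };

  // "Hot swapping" of backends amounts to repointing this at another
  // backend's hook vector instead of rebuilding the compiler.
  const struct target_hooks *active_target = &x86_hooks;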
Steven Bosscher wrote:
> On Wed, Mar 18, 2009 at 8:17 PM, Albert Cohen wrote:
>> Antoniu Pop wrote:
>> (...)
>>> The multiple backends compilation is not directly related, so you
>>> should use a separate branch. It makes sense to go in that direction.
>> Indeed.
> Work has been going on for years in this direction, but it
On Wed, Mar 18, 2009 at 8:17 PM, Albert Cohen wrote:
> Antoniu Pop wrote:
> (...)
>>
>> The multiple backends compilation is not directly related, so you
>> should use a separate branch. It makes sense to go in that direction.
>
> Indeed.
Work has been going on for years in this direction, but it
Antoniu Pop wrote on 18/03/2009 18:55:52:
> > I'd like to explore distributing threads across a heterogeneous NUMA
> > architecture. I.e. input/output data would have to be transferred
> > explicitly, and the compiler would have to have more than one backend.
>
> I'm currently working on something
Antoniu Pop wrote:
(...)
> The multiple backends compilation is not directly related, so you
> should use a separate branch. It makes sense to go in that direction.
Indeed.
There has been some work in the area, using different approaches. I've
been involved in one attempt, for the Cell, with Cupe
> I'd like to explore distributing threads across a heterogeneous NUMA
> architecture. I.e. input/output data would have to be transferred
> explicitly, and the compiler would have to have more than one backend.
I'm currently working on something that looks quite similar, in the
"streamization" br
I'd like to explore distributing threads across a heterogeneous NUMA
architecture. I.e. input/output data would have to be transferred
explicitly, and the compiler would have to have more than one backend.
Would such work be appropriate for an existing branch, or would it be
better to work on my own branch
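As a rough, self-contained illustration of what "transferring input/output
data explicitly" could mean here (all names below are made up; the remote
node is simply simulated with host memory):

  #include <cstddef>
  #include <cstring>
  #include <vector>

  // Stand-in for memory that lives on another node of the machine.
  struct node_buffer {
    std::vector<unsigned char> bytes;
    explicit node_buffer(std::size_t n) : bytes(n) {}
  };

  void copy_to(node_buffer &dst, const void *src, std::size_t n)
  { std::memcpy(&dst.bytes[0], src, n); }

  void copy_from(void *dst, const node_buffer &src, std::size_t n)
  { std::memcpy(dst, &src.bytes[0], n); }

  // The kernel would be emitted by a second backend for the remote
  // node's ISA; here it is ordinary host code.
  void scale_kernel(node_buffer &buf, std::size_t n, float k)
  {
    float *p = reinterpret_cast<float *>(&buf.bytes[0]);
    for (std::size_t i = 0; i < n; ++i)
      p[i] *= k;
  }

  void offload_scale(float *data, std::size_t n, float k)
  {
    node_buffer buf(n * sizeof(float));
    copy_to(buf, data, n * sizeof(float));    // explicit input transfer
    scale_kernel(buf, n, k);                  // would run on the remote node
    copy_from(data, buf, n * sizeof(float));  // explicit output transfer
  }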
Tobias Grosser wrote on 10/03/2009 16:54:41:
> Hi Razya
>
> great to hear these Graphite plans. Some short comments.
Thanks :)
>
> On Tue, 2009-03-10 at 16:13 +0200, Razya Ladelsky wrote:
> > [...]
> >
> > The first step, as we see it, will teach Graphite that parallel code
> > needs to be
Hi Razya
great to hear these Graphite plans. Some short comments.
On Tue, 2009-03-10 at 16:13 +0200, Razya Ladelsky wrote:
> [...]
>
> The first step, as we see it, will teach Graphite that parallel code needs
> to be produced.
> This means that Graphite will recognize simple parallel loops (us
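For context, a "simple parallel loop" here is one whose iterations do not
depend on each other; a toy contrast (not from the original mail):

  // Independent iterations: the kind of loop an automatic parallelizer
  // can split across threads.
  void parallel_ok(float *a, const float *b, unsigned n)
  {
    for (unsigned i = 0; i < n; ++i)
      a[i] = b[i] * 2.0f;        // each iteration touches only a[i], b[i]
  }

  // Loop-carried dependence: iteration i needs the result of i-1, so it
  // cannot be naively distributed.
  void parallel_not_ok(float *a, unsigned n)
  {
    for (unsigned i = 1; i < n; ++i)
      a[i] = a[i - 1] + 1.0f;
  }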
Hello,
Described here is the future plan for automatic parallelization in GCC.
The current autopar pass is based on the GOMP infrastructure; it
distributes iterations of loops to several threads (the number is
specified by the user) if they are determined to be independent. The
only depen
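Since the plan builds on the GOMP (OpenMP) infrastructure, the effect of
autopar on such a loop is roughly comparable to annotating it by hand, as
in the sketch below; the real pass works on the intermediate representation
and the generated code will differ in detail.

  void scale_serial(float *a, const float *b, long n)
  {
    for (long i = 0; i < n; ++i)
      a[i] = b[i] * 2.0f;               // iterations are independent
  }

  // Roughly what -ftree-parallelize-loops=<threads> achieves for the
  // loop above: the body is outlined and run on several threads via
  // the GOMP runtime, similar to this hand-written OpenMP form.
  void scale_parallel(float *a, const float *b, long n)
  {
    #pragma omp parallel for
    for (long i = 0; i < n; ++i)
      a[i] = b[i] * 2.0f;
  }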