Hi,

>> On the other hand I could very well believe that if there were
>> sufficiently large spans, this parallelization would help a lot, but then
>> this would be such an exceptional situation that is very far away from
>> actual uses of Mesa.

> That's actually quite interesting. However, I ...
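
To make the span discussion above concrete, here is a minimal OpenMP
sketch of that kind of per-span parallelization. The span_t type and
modulate_span() function are invented for illustration (this is not
Mesa's internal sw_span API), and the "if" clause keeps short spans
serial, which is exactly why a win would only show up for unusually
large spans:

#include <stdint.h>

/* Hypothetical span representation, for illustration only. */
typedef struct {
    int n;               /* number of fragments in the span */
    uint8_t (*rgba)[4];  /* per-fragment RGBA color         */
} span_t;

/* Modulate every fragment in a span by a constant color. The OpenMP
 * "if" clause falls back to serial execution for short spans, where
 * the fork/join overhead would otherwise dominate. */
static void modulate_span(span_t *span, const uint8_t color[4])
{
    #pragma omp parallel for if (span->n > 4096)
    for (int i = 0; i < span->n; i++) {
        for (int c = 0; c < 4; c++)
            span->rgba[i][c] = (span->rgba[i][c] * color[c]) / 255;
    }
}

Build with -fopenmp; without it the pragma is ignored and the loop runs
serially, which makes this kind of change easy to benchmark either way.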
Hi,

I found that thread very interesting, and I experimented a bit
(unfortunately with no success). I would still like to describe what I
did; maybe it helps someone find a better approach.

I used the Irrlicht demo 2 to profile. This demo loads a Quake level in
which you can walk around.
Jerome Glisse wrote:

On Fri, 24 Oct 2008 15:17:06 -0500
Ioannis Papadopoulos <[EMAIL PROTECTED]> wrote:
> If I'm not mistaken, the critical path is basically operations on
> vectors/matrices - I'm not an expert on graphics, but I thought that's
> the reason why the GPUs look a lot like vector processors.

Well today ...
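
For readers less familiar with the graphics side, the per-vertex work on
that critical path boils down to a 4x4 matrix-by-vector product like the
sketch below (column-major, as in OpenGL; an illustration, not Mesa's
actual transform code). The four independent multiply-adds per component
are exactly the data-parallel pattern a vector unit is built for:

/* Transform a homogeneous point by a 4x4 column-major matrix:
 * out[i] = sum over j of m[i + 4*j] * in[j].
 * Illustrative sketch only, not Mesa's transform code. */
static void transform_point(float out[4], const float m[16],
                            const float in[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = m[i]     * in[0] + m[i + 4]  * in[1] +
                 m[i + 8] * in[2] + m[i + 12] * in[3];
}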
Stephane Marchesin wrote:

On Fri, Oct 24, 2008 at 08:29, Ioannis Papadopoulos
<[EMAIL PROTECTED]> wrote:
> I should have written "task scheduling and management". Yes, the threads
> are OS-handled, but everything else has to be handled by a runtime
> system. This means one has to use some efficient way to create and
> schedule ...
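
For what it's worth, the core of such a runtime system is a work queue
plus a thread pool. Below is a minimal pthreads sketch of the queue half
(names invented for illustration, fixed-size ring buffer, no overflow or
shutdown handling, lock/cond initialization omitted):

#include <pthread.h>

typedef struct { void (*fn)(void *); void *arg; } task_t;

typedef struct {
    task_t ring[256];            /* fixed capacity: a sketch, not robust */
    unsigned head, tail;
    pthread_mutex_t lock;        /* assumed already initialized          */
    pthread_cond_t nonempty;
} task_queue_t;

static void task_push(task_queue_t *q, task_t t)
{
    pthread_mutex_lock(&q->lock);
    q->ring[q->tail++ % 256] = t;     /* assumes the queue never overflows */
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

static task_t task_pop(task_queue_t *q)   /* blocks until a task arrives */
{
    pthread_mutex_lock(&q->lock);
    while (q->head == q->tail)
        pthread_cond_wait(&q->nonempty, &q->lock);
    task_t t = q->ring[q->head++ % 256];
    pthread_mutex_unlock(&q->lock);
    return t;
}

Each worker thread then loops on task_pop() and runs t.fn(t.arg);
whether something this simple scales to hundreds of cores is exactly
the open question.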
Ioannis Papadopoulos wrote:

Well, I'm working all the time with pthreads, OpenMP and MPI. I'm also
familiar with Intel's TBB. They all have their pros and cons - and I'm
only talking about shared-memory applications, as a distributed-memory
implementation would have too much latency (people have been using MPI
even for applications ...
On Thu, Oct 23, 2008 at 6:18 PM, Brian Paul
<[EMAIL PROTECTED]> wrote:
> I'm not too familiar with OpenMP. When you talk about OpenMP, are you
> talking about parallelizing across multiple machines and simulating
> shared memory across a network? Or is it just for shared-memory
> multiprocessor ...
I'm not sure of the merits of OpenMP vs. pthreads, but a few comments:
As Ben suggested, probably the best way to take advantage of multiple
processors is to begin by parallelizing rasterization. Fragment
processing (in particular when there's lots of texture sampling or
non-trivial fragment s ...
I'll look into that; it seems like a good starting point and is
coarse-grained enough. Thanks.
I have spent some time looking into the Larrabee architecture, and it
was basically my inspiration: at some point, all these processors will
probably live in the main CPU, which will contain 100s of simple cores a ...
I'm pretty ignorant of the Mesa internals, but my first stab at such a
thing would be to try to parallelize the triangle rasterizer by
splitting the framebuffer into tiles of, say, 64x64 pixels, and have a
queue for each of those tiles. Then you have a pool of rasterization
threads that consume the queues ...
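
A rough pthreads sketch of that tiling scheme, assuming a prior binning
pass has already pushed triangle indices into each tile's queue; the
type names and the rasterization stub are invented for illustration,
not Mesa internals. Because a tile is only ever drained by the thread
that claimed it, no two threads touch the same pixels, so the per-pixel
work needs no locking:

#include <pthread.h>

#define TILE 64
#define MAX_TRIS_PER_TILE 1024

typedef struct {
    int tri[MAX_TRIS_PER_TILE];  /* triangle indices binned to this tile */
    int count;
} tile_queue_t;

typedef struct {
    tile_queue_t *tiles;         /* tiles_x * tiles_y queues */
    int tiles_x, tiles_y;
    int next_tile;               /* next unclaimed tile index */
    pthread_mutex_t lock;
} rast_state_t;

/* Stub: real code would walk this tile's pixels for the triangle. */
static void rasterize_tri_in_tile(int tri, int x0, int y0, int x1, int y1)
{
    (void)tri; (void)x0; (void)y0; (void)x1; (void)y1;
}

/* Worker: repeatedly claim a whole tile, then drain its queue. */
static void *rast_worker(void *arg)
{
    rast_state_t *rs = arg;
    for (;;) {
        pthread_mutex_lock(&rs->lock);
        int t = (rs->next_tile < rs->tiles_x * rs->tiles_y)
                    ? rs->next_tile++ : -1;
        pthread_mutex_unlock(&rs->lock);
        if (t < 0)
            return NULL;         /* every tile has been claimed */

        tile_queue_t *q = &rs->tiles[t];
        int x0 = (t % rs->tiles_x) * TILE;
        int y0 = (t / rs->tiles_x) * TILE;
        for (int i = 0; i < q->count; i++)
            rasterize_tri_in_tile(q->tri[i], x0, y0, x0 + TILE, y0 + TILE);
    }
}

The remaining serial cost in this sketch is the binning pass itself,
which could well become the bottleneck as the core count grows.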
Hi,

I'm interested in parallelizing some parts of Mesa using OpenMP. I don't
know if anyone has tried it, but I think it's worth a shot.

I'm aware of pMesa, but it's not exactly what I have in mind. I'm more
interested in seeing how well Mesa would behave on a manycore chip
(although there i ...