Dear Ben,
On Tue, 9 Sep 2008, Benjamin Kirk wrote:
I'm not very familiar with the hardware part, but the lspci output
looks like I have *two* GigE devices, so they are not shared between
the two processors, right?
Try /sbin/ifconfig to see how many interfaces are actually configured. Most ne
Nevermind...;-)
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Cc: libmesh-users@lists.sourceforge.net
Sent: Tue Sep 09 11:52:23 2008
Subject: Re: [Libmesh-users] Performance of EquationSystems::reinit() with ParallelMesh
Ahh
Ahh yes.
But couldn't DofMap::reinit() copy the send_list before distributing new dofs?
That is the list we want...
(Sorry, I always reply on top using my phone.)
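For concreteness, the idea being floated here -- have DofMap::reinit() stash the send_list before the new dofs are distributed -- might look roughly like the sketch below. The class and member names are illustrative stand-ins, not the actual libMesh DofMap interface:

  // Sketch only: cache the old send_list before new dofs are distributed,
  // so a later projection can still ask for exactly the old remote values
  // it needs.  Names are stand-ins, not real libMesh DofMap members.
  #include <vector>

  class DofMapSketch
  {
  public:
    void reinit_sketch()
    {
      // Remember which remote dof values this processor needed under the
      // *old* numbering -- this is the list the projection wants.
      std::vector<unsigned int> old_list(_send_list);

      // Renumber and distribute the new degrees of freedom; this rebuilds
      // _send_list for the *new* communication pattern.
      distribute_dofs_sketch();

      // Keep the old list around for the vector-projection code, so it can
      // localize only these entries instead of the whole old vector.
      _old_send_list.swap(old_list);
    }

    const std::vector<unsigned int> & old_send_list() const
    { return _old_send_list; }

  private:
    void distribute_dofs_sketch() { /* renumbering omitted in this sketch */ }

    std::vector<unsigned int> _send_list;     // dofs needed from other processors
    std::vector<unsigned int> _old_send_list; // snapshot taken before reinit
  };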
- Original Message -
From: Roy Stogner <[EMAIL PROTECTED]>
To: Kirk, Benjamin (JSC-EG)
Cc: John Peterson <[EMAIL PROTECTED]>
On Tue, 9 Sep 2008, Kirk, Benjamin (JSC-EG) wrote:
> At one point the send_list was used to do exactly that. In fact,
> that is the only reason it exists. I need to look back and see when
> that changed. If the send list is broken I'll factor Roy's stuff
> into the DofMap to fix it.
Whoa, wai
At one point the send_list was used to do exactly that. In fact, that is the
only reason it exists. I need to look back and see when that changed. If the
send list is broken I'll factor Roy's stuff into the DofMap to fix it.
-Ben
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
On Tue, 9 Sep 2008, John Peterson wrote:
Did you send a patch to the list? I'm going back through my email but
not seeing it...
No, just to Tim (and Ben, since he'd mentioned having some time to
benchmark a hopefully similar problem). When the patch didn't show
any clear improvement, I didn'
On Tue, Sep 9, 2008 at 9:46 AM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>
> On Tue, 9 Sep 2008, Benjamin Kirk wrote:
>
>> That is my thinking. The System::project_vector() code does some all-to-all
>> communication,
>
> Would you take a look at that patch of mine? I thought it removed
> half of th
>> Running with 1 CPU/node will hopefully perform better
>> since you are not sharing a gigE connection between processors.
>
> I'm not very familiar with the hardware part, but the lspci output
> looks like I have *two* GigE devices, so they are not shared between
> the two processors, right?
T
On Tuesday 09 September 2008 16:01:04 Tim Kroeger wrote:
> Dear John,
>
> On Tue, 9 Sep 2008, John Peterson wrote:
> > On linux, lspci will tell you something about the hardware connected
> > to the PCI bus. This may list the interconnect device(s).
>
lspci seems not to be installed on that machine, although it is linux.
Dear Ben,
On Tue, 9 Sep 2008, Benjamin Kirk wrote:
> Running with 1 CPU/node will hopefully perform better
> since you are not sharing a gigE connection between processors.
I'm not very familiar with the hardware part, but the lspci output
looks like I have *two* GigE devices, so they are not shared between the two processors, right?
Dear John,
On Tue, 9 Sep 2008, John Peterson wrote:
>> Attached is the output with 20 nodes and 1 CPU per node. Unfortunately, it's
>> even slower than 10 nodes with 2 CPUs each.
>
> Interesting. And you have exclusive access to these nodes via some
> sort of scheduling software? There's no cha
On Tue, Sep 9, 2008 at 9:44 AM, Tim Kroeger
<[EMAIL PROTECTED]> wrote:
> Dear all,
>
> On Tue, 9 Sep 2008, Benjamin Kirk wrote:
>
>>> There are about 120 nodes with 2 CPUs each. Please find attached the
>>> content of /proc/cpuinfo of one of these nodes (should be typical for
>>> all of them). Wh
Sure. I'll take a look at it this afternoon.
On 9/9/08 9:46 AM, "Roy Stogner" <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 9 Sep 2008, Benjamin Kirk wrote:
>
>> That is my thinking. The System::project_vector() code does some all-to-all
>> communication,
>
> Would you take a look at that patch
On Tue, 9 Sep 2008, Benjamin Kirk wrote:
> That is my thinking. The System::project_vector() code does some all-to-all
> communication,
Would you take a look at that patch of mine? I thought it removed
half of the all-to-all communication (the global vector of old degrees
of freedom, but not t
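The pattern such a patch would aim for -- pulling over only the old values named in the send_list rather than an n_global-sized copy of the old solution on every processor -- can be sketched with plain C++ stand-ins as follows (no libMesh or MPI calls; the types and names are invented for illustration):

  // Conceptual sketch: gather only the old dof values a processor actually
  // needs, as listed in its send_list, instead of copying the entire old
  // solution vector everywhere.
  #include <cstddef>
  #include <unordered_map>
  #include <vector>

  // Stand-in for one processor's slice of the old, distributed solution.
  struct LocalSliceSketch
  {
    std::size_t first_dof;        // first globally-numbered dof owned here
    std::vector<double> values;   // values for the owned dof range
  };

  // Build a small map holding just the old values this processor needs:
  // its own slice plus the entries named in old_send_list.  In real code
  // the remote entries would be requested from their owning processors
  // rather than broadcasting the whole vector.
  std::unordered_map<std::size_t, double>
  gather_needed_old_values(const LocalSliceSketch & mine,
                           const std::vector<std::size_t> & old_send_list)
  {
    std::unordered_map<std::size_t, double> needed;

    // Locally owned entries cost no communication.
    for (std::size_t i = 0; i < mine.values.size(); ++i)
      needed[mine.first_dof + i] = mine.values[i];

    // Remote entries: one value per send_list entry, fetched from its owner
    // (communication omitted in this sketch; 0.0 marks "to be received").
    for (std::size_t dof : old_send_list)
      needed.emplace(dof, 0.0);

    return needed;
  }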
Dear Ben,
On Tue, 9 Sep 2008, Benjamin Kirk wrote:
> That is my thinking. The System::project_vector() code does some all-to-all
> communication, and this seems to be scaling quite badly as you get to larger
> processor counts. Running with 1 CPU/node will hopefully perform better
> since you are not sharing a gigE connection between processors.
> Compute node and head node give exactly the same output. So does this
> mean I have a very slow interconnect, and is this the reason for the
> bad scalability?
That is my thinking. The System::project_vector() code does some all-to-all
communication, and this seems to be scaling quite badly as you get to larger processor counts.
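A rough back-of-the-envelope comparison of the two communication patterns, using made-up round numbers rather than figures from Tim's cluster, shows why the full copy hurts as the processor count grows:

  // Standalone illustration: rough data volume of "copy the whole old vector
  // to every processor" versus "send each processor only its send_list
  // entries".  The sizes below are invented for illustration.
  #include <cstddef>
  #include <iostream>

  int main()
  {
    const std::size_t n_procs      = 20;      // e.g. 20 nodes, 1 CPU each
    const std::size_t dofs_per_cpu = 100000;  // locally owned old dofs
    const std::size_t send_list_sz = 3000;    // ghost dofs a cpu really needs

    // Full localize: every processor receives every other processor's chunk,
    // so traffic grows roughly with the square of the processor count.
    const std::size_t full_localize = n_procs * (n_procs - 1) * dofs_per_cpu;

    // send_list-only: each processor receives just its ghost entries,
    // so traffic grows roughly linearly with the processor count.
    const std::size_t send_list_only = n_procs * send_list_sz;

    std::cout << "values moved, full localize  : " << full_localize  << "\n"
              << "values moved, send_list only : " << send_list_only << "\n";
    return 0;
  }

With these invented sizes the full copy moves several hundred times more data, and (assuming the per-processor ghost count stays roughly constant) its volume grows quadratically with the processor count while the send_list traffic grows only linearly.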
Dear Ben,
On Tue, 9 Sep 2008, Benjamin Kirk wrote:
> If you have not done so already it would be instructive to see how using one
> CPU per node performs.
Okay, I have started with 1 node and 1 CPU (i.e. serial) now. It
might not be finished before knocking-off time, though. I will then
(i.e
On Tue, Sep 9, 2008 at 9:18 AM, Tim Kroeger
<[EMAIL PROTECTED]> wrote:
> Dear John,
>
> On Tue, 9 Sep 2008, John Peterson wrote:
>
>> On linux, lspci will tell you something about the hardware connected
>> to the PCI bus. This may list the interconnect device(s).
>
> lspci seems not to be installed on that machine, although it is linux.
Dear John,
On Tue, 9 Sep 2008, John Peterson wrote:
> On linux, lspci will tell you something about the hardware connected
> to the PCI bus. This may list the interconnect device(s).
lspci seems not to be installed on that machine, although it is linux.
>>>
>>> Try /sbin/lspci
> There are about 120 nodes with 2 CPUs each. Please find attached the
> content of /proc/cpuinfo of one of these nodes (should be typical for
> all of them). When I run with n CPUs, I usually mean that I run on
> n/2 nodes using both CPUs each (although there is also the possibility
> to use one
On Tue, Sep 9, 2008 at 9:08 AM, Tim Kroeger
<[EMAIL PROTECTED]> wrote:
> Dear Ben,
>
> On Tue, 9 Sep 2008, Benjamin Kirk wrote:
>
On linux, lspci will tell you something about the hardware connected
to the PCI bus. This may list the interconnect device(s).
>>>
>>> lspci seems not to be installed on that machine, although it is linux.
Dear Ben,
On Tue, 9 Sep 2008, Benjamin Kirk wrote:
On linux, lspci will tell you something about the hardware connected
to the PCI bus. This may list the interconnect device(s).
lspci seems not to be installed on that machine, although it is linux.
Try /sbin/lspci - there is a good chance /sbin is not in your path.
On Tue, 9 Sep 2008, Tim Kroeger wrote:
> Dear John,
>
> On Tue, 9 Sep 2008, John Peterson wrote:
>
>> On linux, lspci will tell you something about the hardware connected
>> to the PCI bus. This may list the interconnect device(s).
>
> lspci seems not to be installed on that machine, although it is linux.
>> On linux, lspci will tell you something about the hardware connected
>> to the PCI bus. This may list the interconnect device(s).
>
> lspci seems not to be installed on that machine, although it is linux.
Try /sbin/lspci - there is a good chance /sbin is not in your path.
-Ben
Dear John,
On Tue, 9 Sep 2008, John Peterson wrote:
> On linux, lspci will tell you something about the hardware connected
> to the PCI bus. This may list the interconnect device(s).
lspci seems not to be installed on that machine, although it is linux.
Best Regards,
Tim
--
Dr. Tim Kroeger
On Tue, Sep 9, 2008 at 3:53 AM, Tim Kroeger
<[EMAIL PROTECTED]> wrote:
>
> Concerning the interconnect, I actually don't know. Is there some easy way
> to find out, i.e. some file like /proc/something? (The machine is located
> far away, so it's not with the admin sitting next door to me.)
On linux, lspci will tell you something about the hardware connected to the PCI bus. This may list the interconnect device(s).