Greetings,
Running Rocks 4.1 on a 30-node system and seeing serious RX packet
loss, drops, and overruns while running heavy MPI I/O over e1000. I have
replaced cabling and switches, updated e1000 drivers, run multiple
kernels, etc. No change seems to affect the issue. I am pursuing a
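For what it's worth, a few diagnostics that often narrow this kind of thing down (a hedged sketch, not a diagnosis; eth0 and the ring size are placeholders, and -G only works where the driver exposes settable rings):

```shell
ethtool -S eth0 | grep -iE 'drop|miss|err'    # per-NIC counters; rx_missed_errors usually means RX ring overrun
ethtool -g eth0                               # current vs. maximum RX/TX ring sizes
ethtool -G eth0 rx 4096                       # enlarge the RX ring, if this e1000 supports it
sysctl -w net.core.netdev_max_backlog=30000   # deepen the softirq input backlog
```

If rx_missed_errors climbs while switch counters stay clean, the host is falling behind the wire, not the network.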
Have any of you managed to get WOL working for an nforce4 mobo
running a recent linux kernel?
On an A8N5X (nforce4, not ultra) motherboard WOL works fine if
the machine shuts down from XP. So the BIOS and hardware are
clearly set up correctly. However on poweroff from linux
(2.6.16.9 using the
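Not a full answer, but one thing worth checking (hedged; eth0 is a placeholder): the Linux driver has historically left WOL disabled unless it's switched on explicitly before poweroff, whereas XP's driver arms it for you:

```shell
ethtool eth0 | grep -i wake   # look for "Supports Wake-on: g" and the current "Wake-on:" setting
ethtool -s eth0 wol g         # arm wake-on-magic-packet before shutting down
```

Running the second command from a shutdown script is the usual workaround.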
Kernel 2.6.9-34.ELsmp under a RH distro, DIY version
Running into a problem where I need to increase the max locked memory
limit, typically done by adding a soft and a hard value to
the /etc/security/limits.conf file. And so I add the lines:
* soft memlock 1024
* hard
So I submit a job through Torque, and I simply execute ulimit -l,
only to find that my limit is set to the original value, 32. Using a
inherited from the process that started your process, probably.
try restarting relevant scheduler daemons - it matters when they
started up relative to
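The inheritance point is easy to demonstrate in a plain shell (a minimal sketch; the value 16 is arbitrary). Limits are copied at fork/exec time, so a job sees whatever pbs_mom had when it started, not what limits.conf says now:

```shell
# Lower the soft memlock limit in this shell, then spawn a child:
# the child inherits the value from its parent, not from limits.conf.
ulimit -S -l 16
bash -c 'ulimit -l'    # prints 16: inherited, same as a Torque job inheriting from pbs_mom
```

Same mechanism: if pbs_mom started before limits.conf was edited, every job it spawns still carries the old 32.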
On Wed, May 17, 2006 at 06:17:43PM -0400, Mark Hahn wrote:
incidentally what leads you to think you should be using memory locking?
The kernel.org people are insisting that kernel modules in the kernel
enforce the limit, so the default has to be raised for everyone using
OpenIB and InfiniPath
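For reference, the limits.conf fragment usually suggested for OpenIB/InfiniPath sites looks like this (a hedged example; many sites simply use unlimited rather than picking a number):

```
# /etc/security/limits.conf
*  soft  memlock  unlimited
*  hard  memlock  unlimited
```

Remember the earlier caveat: daemons that spawn jobs must be restarted after this change, or their children keep the old limits.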
On Wed, May 17, 2006 at 05:03:37PM -0400, Eric Dantan Rzewnicki wrote:
There will be live streams available for tonight's DCLUG meeting. The
meeting is scheduled to start around 19:00. Check the DCLUG site for
meeting topic and speaker info: http://dclug.tux.org
URLs for the live video
My quick and dirty test was to do NFS reads from memory on the server (to
rule out disk contention) to multiple
clients. So I did this:
o On client 1, 'tar cO $DATA | cat > /dev/null' ~4GB of data from the
server (4GB being the amount of memory in the server) to /dev/null.
(Note the '>' redirect: 'cat /dev/null' would ignore the pipe entirely.)
This caches
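A runnable version of that warm-cache step, scaled down (a sketch: a throwaway directory stands in for $DATA, and 'tar cf -' is the portable spelling of writing the archive to stdout; the real test streamed ~4GB over NFS):

```shell
# Stand-in for $DATA so the pipeline is runnable as-is.
DATA=$(mktemp -d)
echo "sample payload" > "$DATA/file1"

# Stream an archive of $DATA through the pipe and discard it, which
# pulls the files through the server's page cache.
tar cf - "$DATA" 2>/dev/null | cat > /dev/null && echo "warm"
```

After this, repeating the read serves from cache, which is what lets the multi-client runs measure the network path rather than disk contention.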