>We have eliminated all but one Windows PC. We have found that
>unless we re-boot it EVERY 48 hours it become slow, hangs, drops the
>network... So IMO rebooting is part of Windows SOP (Standard Operating
>procedure)
>The only time we reboot any of our Linux systems is:
Well, w98 and w95 are for games. NT with sp3 or higher is very good
(though sp5 must be installed before the year 2000).
Yet Linux... I get very little reaction to all my questions about
a decent shared memory implementation in Linux.
Already in 1997 people saw that it's tough to allocate shared memory in
Linux when you need more than 4 or 32 MB shared between processes.
Even old UltraSPARCs running SunOS or another Unix allow a simple call to
shmget with, as its argument, a 256 MB memory block to allocate.
That's still not enough for me: I want to allocate a 450 MB block.
Linux doesn't get beyond 32 MB, though.
To allocate that much shared memory, Robert Hyatt, Tim Mann and many
others, each of whom had only a single piece of the workaround,
figured out for me that one first must echo to the kernel:
echo 450000000 > /proc/sys/kernel/shmmax
Now what I would like is a single call to
shmget that allocates this, instead of everything written here,
not to mention the huge effort it took to figure out how to
do it.
Secondly: I can only allocate it with IPC_PRIVATE:
shm_hash = shmget(IPC_PRIVATE,HASH_TABLE_SIZE,IPC_CREAT|0777);
However, ONLY THIS PROCESS CAN READ THIS MEMORY.
Allocating it with IPC_PRIVATE SUCKS SUCKS SUCKS!
I want to allocate it shared.
So what I do now is:
shm_tree = shmget(ftok(".",'t'),sizeof(TreeInfo),IPC_CREAT|0777);
where TreeInfo is a very small structure, only a couple of
tens of kilobytes; in it I store shm_hash.
The other processes then shmget this shm_tree segment too,
read tree->shm_hash,
and finally attach to shm_hash with shmat.
A lot of effort, for something that should be simple to do
in Linux.
Recently I run into the next problem a lot.
When my parallel program crashes (and that happens a lot, as
parallelism is hard to debug), the 150 MB hash table I test
with remains allocated.
For some weird reason, if it crashes badly, this shared memory becomes
owned by root.
You heard me correctly: it becomes ROOT, although I don't even have a root
password (better not give me one, I might install NT).
Now if I restart my program, Linux CRASHES. Shared memory
can't be swapped, so if I allocate 2 times 150 MB on a 256 MB machine,
it crashes directly. The first 150 MB is the root-owned 150 MB:
as it was allocated with IPC_PRIVATE, a new block gets allocated when I
restart my program (and as I'm not root, I can't clean up that root-owned
block). A direct crash follows.
To me this is weird. If all processes have crashed, how can shared memory
still be allocated? Why does Linux have this problem and NT doesn't?
I'm not deep into OS internals, but what I need is just a block of memory
that several processes can use. If my program crashes, then this block
of memory must be gone; because even if my program does NOT crash,
but only a single process dies, I already have this problem.
This has happened quite a few times now. I'm in the Netherlands; the
machine is in America. Hard to reboot it from here when it's crashed.
>When we upgrade something.
>When we are testing drivers/devices etc.
>Doug
>-------------------------------------------------------------------
>Paralogic, Inc. | PEAK | Voice:+610.861.6960
>115 Research Drive | PARALLEL | Fax:+610.861.8247
>Bethlehem, PA 18017 USA | PERFORMANCE | http://www.plogic.com
>-------------------------------------------------------------------
-
Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/mentre/smp-faq/
To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]