At 04:06 PM 4/21/2005, you wrote:
Please don't mention Dell servers around me.
Oh no, please don't accuse me of pushing a vendor, especially DELL. I'm
just suggesting that the big-box solution to emulate smaller boxes just
isn't worth the time or money. The big boxes end up costing exponentially
...
Current Dell count is 81 servers, but not all are file servers.
The file servers are mostly PowerEdge 2450, 2550, and 2650 machines.
The rest are 1650, 1750, and 1850 machines. We use the internal
SCSI PERC RAID cards with internal drives, not external.
That is a very good plan. Funny, as I emailed ...
Matthew Cocker wrote:
Do you use the Dell PowerVaults and any of the SCSI PERC RAID cards? We
do and they have been a huge problem, not just for us but all over the
university. They seem to work better for Windows, but one of the Windows
shops also lost a server last year to a single failed drive in a RAID5
set which ...
ted creedon wrote:
Here I just move my RAID card and 4 drives.
Linux 2.6 doesn't have support for the older IDE RAID cards. New systems
have software RAID, which allows a drive-only move. It's particularly
easy with SATA drives.
tedc
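(For reference, a minimal sketch of the drive-only move ted describes,
assuming a Linux software RAID (md) set; the device names and the
/vicepa mount point are illustrative, not from the original mail:

    # On the new machine, after physically moving the four drives,
    # reassemble the existing array from the members' md superblocks:
    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # then mount it where the fileserver expects its partition
    mount /dev/md0 /vicepa

No data is rewritten; --assemble only reads the existing superblocks,
which is what makes the move cheap.)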
We have tried SATA disks but they have very low mean times between
failures ...
Matthew Cocker wrote:
Please don't mention Dell servers around me. For the last three years we
have had Dell servers with attached storage. It has been a nightmare
from day one. First we had to have all the SCSI disks (100 of them)
replaced because they were incompatible with the Dell backplanes (disks
were supplied ...
Hi
You haven't told us what kernel version or architecture is involved, or
which OpenAFS versions your servers and vos client are running. That
makes it hard to tell which known-and-fixed bugs you might be running
into.
kernel is 2.4.29 from kernel.org
OpenAFS version is 1.2.13
OS is Debian stable
vos listvol ...
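(For anyone else chasing this: the fileserver's OpenAFS build can also
be confirmed remotely with rxdebug; the server name below is
hypothetical:

    # Ask the fileserver (Rx port 7000) which OpenAFS build it runs
    rxdebug fs1.example.com 7000 -version
)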
On Thursday, April 21, 2005 02:35:01 PM +1200 Matthew Cocker
<[EMAIL PROTECTED]> wrote:
Hi
We have just invested in a Fibre Channel SAN and several FC-attached ESX
servers (brilliant product, just love VMotion and VirtualCenter) and are
playing with virtualised OpenAFS fileservers. All is working very well
except if we put too many volumes on a server, at which point "vos
listvol" ...
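(A quick way to quantify that slowdown, with a hypothetical server name;
-fast asks only for volume IDs and skips most of the per-volume output,
so comparing the two timings hints at where the cost is:

    time vos listvol -server fs1.example.com           # full listing
    time vos listvol -server fs1.example.com -fast     # volume IDs only
)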
... On Behalf Of Rodney M Dyer
Sent: Thursday, April 21, 2005 8:50 AM
To: Derek Atkins; Matthew Cocker
Cc: openafs-info@openafs.org
Subject: Re: [OpenAFS] openafs fileservers in VMware ESX
At 11:51 PM 4/20/2005, Derek Atkins wrote:
I've never seen any reason to virtualize an AFS server. Ever. The key is
IO bandwidth, which isn't increased by virtualization. You really want
separate PHYSICAL servers for AFS servers. Virtualization does not give
you any benefits due to hardware failure, power failure, or any other
failure.
One thing to look at: if you are running ESX 2.5 (or anything fairly
new), change the HardTimerInterval (I think that is it) in advanced
settings to 333 instead of its default value, and boot any Linux 2.6
guests with the "clock=pit" option. It may not help this problem, but it
corrects clock issues ...
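(For a guest booted by GRUB, the option goes on the kernel line; the
kernel version and paths here are illustrative:

    # /boot/grub/menu.lst inside the Linux 2.6 guest
    title  Linux 2.6 (ESX guest)
    root   (hd0,0)
    kernel /vmlinuz-2.6.11 root=/dev/sda1 ro clock=pit
)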
Matthew Cocker wrote:
The question is how much does the overhead of virtualisation (which with
afs is not much) actually matter with an AFS fileserver and the client
side caching.
That should read:
The question is how much does the overhead of virtualisation (which with
esx is not much) actually ...
The data I have collected over time on our hardware server suggests that
our AFS servers, while containing a lot of data and user volumes, ...
Here is some of the strace output. It seems that the gettimeofday
function is having issues. Would this cause vos listvol to slow down? If
this is the case, would I be safe to say it is an OS-level issue, not an
AFS issue? Of course, now I have to move all the volumes onto a Red Hat
server (we use Debian).
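(One way to pin that down, assuming the server process is named
fileserver; both commands below only observe, they don't change
anything:

    # Per-syscall time totals for the running fileserver (Ctrl-C stops)
    strace -c -p $(pidof fileserver)
    # Time each gettimeofday call made by the vos client itself
    strace -T -e trace=gettimeofday vos listvol -server localhost

If gettimeofday dominates the -c summary, that points at the
kernel/clock side rather than at OpenAFS.)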