H800 CLI location?

2010-12-10 Thread David Hubbard
I can't seem to find the Linux CLI tools for managing the
H800 controller anymore; I used to just pull up a server
that came with one and grab them from the RAID/SAS
download category, but there's nothing there currently
for an R900 and Red Hat.

Thanks,

David



Best 10 Gig PCIe for R900?

2010-10-07 Thread David Hubbard
Hi all, I've got an R900 with four six-core CPUs that
will be doing Symantec NetBackup backup and
deduplication duties over 10 Gig via a Cisco 4900M,
so it's going to need to be a fiber card, short range.

Any recommendations on the best NIC for the job, i.e.,
best throughput and lowest resource utilization?  The OS
is RHEL 5 and ideally I'd like to stick with the
built-in drivers, but if necessary I can replace
them; I just hate dealing with third-party drivers,
kernel patches, rebooting with no networking, etc.

Thanks,

David



RE: R300 auto negotiation

2010-07-27 Thread David Hubbard
 -Original Message-
 From: linux-poweredge-boun...@dell.com 
 [mailto:linux-poweredge-boun...@dell.com] On Behalf Of Robin Bowes
 Sent: Thursday, July 22, 2010 7:23 PM
 To: linux-poweredge@dell.com
 Subject: Re: R300 auto negotiation
 
 How are you determining that it is only running at 10-half?
 
 I had an issue some time ago (different NIC, different driver) where I
 thought that the NICs were not running at Gb speed, but it turned out
 that mii-diag didn't report the correct information, but ethtool did.

I was getting horrible transfer rates and then confirmed via
ethtool that it was running 10-half; the switch did match at
10 but thought it was full-duplex.
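
For the record, the check itself is a one-liner; the negotiated
state shows up directly in ethtool's output (interface name eth0
assumed):

  ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'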

 -Original Message-
 From: linux-poweredge-boun...@dell.com 
 [mailto:linux-poweredge-boun...@dell.com] On Behalf Of Shaun Qualheim
 Sent: Thursday, July 22, 2010 7:49 PM
 To: Robin Bowes
 Cc: linux-poweredge@dell.com
 Subject: Re: R300 auto negotiation
 
 Most of the time when this has happened to us, it's been more 
 of a switch issue.  Have you used these switch ports with 
 other PowerEdge servers before?

Yeah, we use all Foundry FES9604 96-port 10/100 switches
at the edge and have 400+ assorted PowerEdge servers
plugged into them, and the only server on any of the
switches that has ever had a negotiation issue is this
one R300.  It replaced a first-gen PE1950 on the same
port, which negotiated fine.  We only have one other R300
out of all those servers, but it runs CentOS 4 x86 and
negotiates fine, whereas the problem machine is CentOS 5
x86_64 and about nine months newer in manufacture date,
so I'm not sure which of those factors made the
difference.

Thanks,

Dave





R300 auto negotiation

2010-07-22 Thread David Hubbard
Has anyone had issues with R300s and the stock CentOS 5.5 x86_64 tg3
driver not auto-negotiating properly?  We use Foundry (aka Brocade)
switches and I have an R300 that insists on negotiating to 10-half;
I had to use ethtool to force it to 100-full.  I have not tried at
gigabit.  I'd prefer not to hard-code the speed, since we already
dislike having to hard-code switch ports for other reasons.
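
The force, for reference, plus the RHEL/CentOS way to persist it
across reboots (interface name eth0 assumed):

  ethtool -s eth0 speed 100 duplex full autoneg off

  # persistent version, in /etc/sysconfig/network-scripts/ifcfg-eth0:
  ETHTOOL_OPTS="speed 100 duplex full autoneg off"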

Thanks,

David



Some MD1200 tuning info

2010-03-31 Thread David Hubbard
Just wanted to share some info I accumulated on
the MD1200 and H800 controller while testing and
configuring a disk-deduplication media server for
a NetBackup installation.  The performance of the
H800 was atrocious while background initialization
was running, so don't put an array into production
while that is still going if you require good
performance.  In fact, if performance is similar
during a rebuild, that may be an issue for some
people too, because it was literally a factor of
eight slower than after initialization finally
completed.
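
If you have OpenManage installed, you can watch for the background
initialization to finish before going live; state and progress show
up per virtual disk with something like:

  omreport storage vdisk controller=0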

I initialized an array of ten 2 TB 7200 rpm SAS
drives, with two hot spares, in an MD1200 connected
to an H800 controller via dual paths on a T710
server, and it took about three days total to
finish initialization.  The array is configured as
RAID 50 across the ten drives with what ended up
being a 128k stripe size.

To test, I used the bonnie++ disk benchmark
because it fairly closely simulates the type of
load NetBackup puts on a server when doing disk-based
backup with deduplication.  The external array is
about 16 TB usable after formatting; it's partitioned
with parted, and I tested on CentOS 5.4 with the
latest kernel, on both XFS and EXT3, with a
combination of 64k and 128k stripe sizes on the
hardware side.  I ended up with 128k as it was faster
in this testing.  I used the bonnie++ defaults, so on
this server with 40 GB of RAM it ended up testing
with an 80 GB data set.
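
The invocation was essentially the defaults, something like this
(the test directory is an example); bonnie++ sizes its data set to
twice RAM by default, hence the 80 GB:

  bonnie++ -d /mnt/dedupe -u root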

The results:

1) XFS with hardware read ahead: 455 MB/sec write,
675 MB/sec read, 97 MB/sec random rewrite, 397 random
seeks/sec.

2) XFS with hardware adaptive read ahead: 218 MB/sec
write, 290 MB/sec read, 40 MB/sec random rewrite, 431
random seeks/sec.

3) EXT3 with hardware read ahead: 510 MB/sec write,
633 MB/sec read, 187 MB/sec random rewrite, 796 random
seeks/sec.

4) EXT3 with hardware adaptive read ahead: 507 MB/sec
write, 632 MB/sec read, 205 MB/sec random rewrite, 887
random seeks/sec.

I was kind of surprised at that; I had expected XFS to
be a lot better.  Perhaps there are mkfs or mount
options I need to play with, but I didn't do anything
special for EXT3 either.  I have not disabled atime in
the mount.
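
For anyone who wants to experiment with XFS alignment, which I did
not test here: with a 128k stripe across a ten-drive RAID 50 (eight
data spindles), the mkfs and a no-atime mount might look like:

  mkfs.xfs -d su=128k,sw=8 /dev/sdb1
  mount -o noatime /dev/sdb1 /mnt/dedupe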

So then I came across this article:

http://thias.marmotte.net/archives/2008/01/05/Dell-PERC5E-and-MD1000-performance-tweaks.html

It describes using the blockdev command to adjust
the read-ahead value.  I tried a few settings and
8192 achieved the best result, which changed my EXT3
adaptive-read-ahead numbers to 516 MB/sec write,
959 MB/sec read (!!), 292 MB/sec random rewrite, and
806 random seeks/sec.  I did try the starting-sector
alignment tricks too, a serious PITA when using parted,
but they didn't make a significant difference.
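
The read-ahead change itself is a one-liner, though note it does
not survive a reboot, so it belongs in rc.local or similar (device
name from my setup):

  blockdev --getra /dev/sdb
  blockdev --setra 8192 /dev/sdb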

It should be noted that while XFS was a lot slower for
my particular configuration, its CPU usage under writing
was about half of EXT3's, so that may be a factor for
some.  I'd also expect less dramatic figures on servers
handling lots of small files; maybe that is where XFS
shines.  A backup de-dupe server deals in a lot of
large files.

Dave



R900 crashing under NFS

2010-03-29 Thread David Hubbard
Got an R900 with four six-core processors
running the latest CentOS 5, all stock updates,
so the kernel is 2.6.18-164.15.1.el5PAE.
It's using the built-in NIC at 100 Mbit with
the bnx2 driver.  Under heavy NFS load from
just one client, the server kernel panics.
We're really just using the server to move
some files off a VMware system, so I did a
fresh CentOS install, ran the updates, set up
the NFS server with the default options and
one exported directory, and went on our way.

Are there any known issues with the bnx2
driver, or the latest CentOS/RHEL kernel,
when serving NFS?
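
For what it's worth, I'm going to try capturing the panic text
over the network with the netconsole module; the addresses, port
numbers, and MAC below are placeholders:

  modprobe netconsole \
    netconsole=6665@192.168.1.5/eth0,6666@192.168.1.10/00:11:22:33:44:55

Then listen on the receiving box with nc -u -l 6666.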

Thanks,
David



RE: External array showing as /dev/sda

2010-03-21 Thread David Hubbard
From: ... On Behalf Of J. Epperson
 
 I'm somehow missing how getting the non-installable smaller 
 GPT VD to be /dev/sda will change that scenario. The other
 responder echoed one of my initial thoughts when he suggested
 turning off the external array.  That should do it.

If I could get the internal RAID controller to be
/dev/sda, the RHEL/CentOS installer would not care
that the external array is too big to boot from
without GPT, and it would let me proceed.  It was only
an issue with the external being /dev/sda, since that
made the installer think there was no way to write an
MBR and boot off of it.

But unplugging the external did lead me in the right
direction.  What I've had to do is this:

1) I had wanted the internal array to be a single
RAID 50 across 8 drives.  Thanks to Dell's choice of
LSI for their current RAID controllers, and LSI's
omission of a feature most others seem to have,
presenting parts of one array as multiple logical
drives, I ended up having to burn the first two
drives on a RAID 1 mirror smaller than 2 TB, leaving
only six drives for the RAID 50.

2) Unplugged the external array and installed CentOS
with a normal non-GPT boot onto the RAID 1 virtual
drive.  It installed to /dev/sda.

3) After install, edited /boot/grub/device.map and
changed it to show:

(hd0) /dev/sdb

Then:

grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
 Checking if /boot/grub/stage1 exists... no
 Checking if /grub/stage1 exists... yes
 Checking if /grub/stage2 exists... yes
 Checking if /grub/e2fs_stage1_5 exists... yes
 Running embed /grub/e2fs_stage1_5 (hd0)...  15 sectors are embedded.
succeeded
 Running install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf... succeeded
Done.


4) Rebooted and reconnected the external array while
the server was restarting.  It comes back up and boots
off the internal array per the BIOS, and grub is happy
because it is now set up for /dev/sdb.

The only downside to this arrangement is that if the
external array were to fail or go down, the server
won't boot, since the internal will go back to being
/dev/sda.  But if the external array is down then
we've got issues anyway. :-)


Thanks,

David



RE: External array showing as /dev/sda

2010-03-21 Thread David Hubbard
From: Stephan van Hienen [mailto:stephan.van.hie...@thevalley.nl] 
 
 Not sure which controller you have, but with the PERC 5/6 you
 can create multiple virtual disks?
 I have one PE2950 server with a PERC 5/i with 6 * 750 GB drives,
 with a 150 GB boot vdisk and a 3 TB vdisk (RAID 5).
 

When doing RAID 50, the H200/H700/H800-series controllers
do not let you do that; the virtual disk size box becomes
a fixed value.  I think they may allow it for some other
RAID types.

David



External array showing as /dev/sda

2010-03-20 Thread David Hubbard
Got a PowerEdge T510 with internal RAID
plus an H800 controller hooked to an MD1200
external array.  Trying to install CentOS;
the RAID controllers are

bus 2 device 0 internal
bus 7 device 0 external 

During setup it's identifying /dev/sda as
the external storage, which I don't want.
Is there anything I can tweak to make it
detect the storage in an order that
results in the internal being /dev/sda?

Thanks,

David



RE: External array showing as /dev/sda

2010-03-20 Thread David Hubbard
From: linux-poweredge-boun...@dell.com 
 
 Probably.  But it may not be worth it.  Why does it matter to 
 you?  Not saying that it doesn't matter, just trying to
 understand why.   Getting it to be /dev/sda during install,
 for instance, wouldn't guarantee that it
 would be that when you booted the installed kernel.

Because I can't figure out how to get the OS installed
otherwise.  As it stands, I would like to use RAID 50
on both the internal and external arrays.  Dell's RAID
controllers do not allow you to create anything other
than one logical drive presenting 100% of the physical
RAID 50 array size to the OS as a drive, so basically
my external /dev/sda shows as 24 TB and my internal
/dev/sdb shows as 4.5 TB.

So, trying to install RHEL 5.4 x86_64: the LVM wizard
cranks up, and since the external array is /dev/sda,
I un-check its box to tell the installer not to look
at that 'drive'.  I leave /dev/sdb checked, which is
my 4.5 TB internal drive.  I proceed, and the
installer then tells me my boot drive is managed by
GPT but the system cannot boot from GPT, and I'm done.
As far as I can tell there is currently no supported
way to install RHEL 5 with the server in UEFI boot
mode, or at least I can't figure it out; I did try
putting it in UEFI mode, but it refused to boot off
an ISO on DVD or a native DVD.  So you can't boot off
a GPT drive and you can't install to an MBR drive, lol.

As best I can tell, this leaves me with one option:
get the internal to show as /dev/sda, waste a bunch
of money by reconfiguring that array as a two-drive
RAID 1 whose sole purpose is to present a 'drive' of
less than 2 TB so RHEL will install on it using MBR
as /dev/sda, make the remaining six disks a RAID 50
that becomes /dev/sdb, and keep the external RAID 50
array as what is now /dev/sdc.  I can't accomplish
any of this without the internal RAID controller
being /dev/sda, though, so that the installer makes
it past the partitioning step.  I'm also quite
unhappy that the two 750 GB drives that should have
been part of my internal RAID 50 will effectively
store about 2 GB of boot and OS files, but I think
I'm stuck.
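
Once the OS is up, putting GPT on the big arrays from the
installed system is the easy part; a sketch with my device
names, untested as written:

  parted /dev/sdc mklabel gpt
  parted /dev/sdc mkpart primary 0% 100%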

David

 There's a seminal paper by Matt Domsch of Dell, about Linux 
 device naming
 at 
 http://www.dell.com/downloads/global/power/ps1q07-20060392-Domsch.pdf
 
 That might give some insight.  It's several years old, but pretty much
 still valid, although UUIDs seem to be displacing labels for 
 identifying
 partitions for mounting.   I still use labels.
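
(To make the labels point concrete: a mount-by-label setup looks
like the below; the device, label, and mount point are examples.)

  e2label /dev/sdb1 /data

  # /etc/fstab entry:
  LABEL=/data   /data   ext3   defaults   1 2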
 
 
 



RE: Third-party drives not permitted on Gen 11 servers

2010-02-09 Thread David Hubbard
I'd be more inclined to buy into the whitepaper
and the idea behind it if it were not for the fact
that Dell servers continue to ship with whatever
random hard drive model and manufacturer Dell can
get at a low price; I don't believe there is any
special evaluation of manufacturers' quality and/or
performance, or else the standards are so low that
every model passes.

I don't know from one week to the next what the
hard drive flavor of the week will be when a new
server arrives.

Additionally, as someone with 500+ servers in
production: we regularly have Dell-branded drives
die, and if the server is out of warranty we throw
the same model drive, bought off the street, into
it.  I have to say I've seen no indication that the
Dell drives are more reliable; if anything they are
less reliable, since Dell buys up whatever a
manufacturer is willing to make a deal on at a
given time.

I'm glad this thread came up, though; I could have
been in a bad spot if it had not.  We have hundreds
of Dell servers and buy third-party drives simply
to have spare parts on hand, so when a drive fails
we can throw a new one in immediately instead of
waiting four hours or until the next day, depending
on the server's support contract.  I guess now I
have to buy Dell spare drives so I don't end up
screwed.

David

 -Original Message-
 From: linux-poweredge-boun...@dell.com 
 [mailto:linux-poweredge-boun...@dell.com] On Behalf Of 
 howard_sho...@dell.com
 Sent: Tuesday, February 09, 2010 5:18 PM
 To: linux-powere...@lists.us.dell.com
 Subject: RE: Third-party drives not permitted on Gen 11 servers
 
 Thank you very much for your comments and feedback regarding 
 exclusive use of Dell drives. It is common practice in 
 enterprise storage solutions to limit drive support to only 
 those drives which have been qualified by the vendor.  In the 
 case of Dell's PERC RAID controllers, we began informing  
 customers when a non-Dell drive was detected with the 
 introduction of PERC5 RAID controllers in early 2006. With 
 the introduction of the PERC H700/H800 controllers, we began 
 enabling only the use of Dell qualified drives.
 
 There are a number of benefits to using Dell-qualified 
 drives, in particular ensuring a positive experience and 
 protecting your data.
 
 While SAS and SATA are industry standards there are 
 differences which occur in implementation.  An analogy is 
 that English is spoken in the UK, US and Australia. While the 
 language is generally the same, there are subtle differences 
 in word usage which can lead to confusion. This exists in 
 storage subsystems as well. As these subsystems become more 
 capable, faster and more complex, these differences in 
 implementation can have greater impact.
 
 Benefits of Dell's Hard Disk and SSD drives are outlined in a 
 white paper on Dell's web site at 
 http://www.dell.com/downloads/global/products/pvaul/en/dell-hard-drives-pov.pdf
 
 -Original Message-
 From: linux-poweredge-bounces-Lists On Behalf Of Philip Tait
 Sent: Friday, February 05, 2010 4:31 PM
 To: linux-poweredge-Lists
 Subject: Third-party drives not permitted on Gen 11 servers
 
 I just received my first Gen11 server, R710, with H700 PERC. I removed
 the supplied drives, and installed 4 Barracuda ES.2s. After doing a
 Clear Configuration in the pre-boot RAID setup utility, I 
 can perform
 no operation with the drives - they are marked as blocked.
 
 Is Dell preventing the use of 3rd-party HDDs now?
 
 Thanks for any enlightenment.
 
 Philip J. Tait
 http://subarutelescope.org
 
