Re: [BULK] RE: R910/Linux CPU Heat Problems?

2010-12-08 Thread Eberhard Moenkeberg
Hi,

On Wed, 8 Dec 2010, Erich Weiler wrote:

 Very useful:

 [r...@server ~]# ipmitool sdr type Fan
 FAN 1 RPM        | 30h | ok  |  7.1 | 1320 RPM
 FAN 2 RPM        | 31h | ok  |  7.1 | 1320 RPM
 FAN 3 RPM        | 32h | ok  |  7.1 | 1440 RPM
 FAN 4 RPM        | 33h | ok  |  7.1 | 1680 RPM
 FAN 5 RPM        | 34h | ok  |  7.1 | 1560 RPM
 FAN 6 RPM        | 35h | ok  |  7.1 | 1680 RPM
 Fan RPM          | 36h | ok  | 10.1 | 3480 RPM
 Fan RPM          | 37h | ok  | 10.2 | 10080 RPM
 Fan RPM          | 38h | ok  | 10.3 | 3120 RPM
 Fan RPM          | 39h | ok  | 10.4 | 2160 RPM
 Fan Redundancy   | 75h | ok  |  7.1 | Fully Redundant

 I wonder why one fan is so fast while the others are slower.  I'm
 beginning to think the BIOS might be the next step, to check Fan speed
 options...

I guess the fast fan is one inside the power supplies, with little effect 
on CPU temperatures.
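
Whether that fast fan really tracks a hot spot can be checked the same way 
the fan data was pulled; a quick sketch (sensor names vary between BMC 
firmware revisions):

  # temperature readings from the same SDR
  ipmitool sdr type Temperature
  # full sensor list including thresholds
  ipmitool sensor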


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)

-- 
Eberhard Moenkeberg
Arbeitsgruppe IT-Infrastruktur
E-Mail: emoe...@gwdg.de  Tel.: +49 (0)551 201-1551
-------------------------------------------------------------------------
Gesellschaft fuer wissenschaftliche Datenverarbeitung mbH Goettingen (GWDG)
Am Fassberg 11, 37077 Goettingen
URL:    http://www.gwdg.de             E-Mail: g...@gwdg.de
Tel.:   +49 (0)551 201-1510            Fax:    +49 (0)551 201-2150
Geschaeftsfuehrer:         Prof. Dr. Oswald Haan und Dr. Paul Suren
Aufsichtsratsvorsitzender: Dipl.-Kfm. Markus Hoppe
Sitz der Gesellschaft:     Goettingen
Registergericht:           Goettingen, Handelsregister-Nr. B 598
-------------------------------------------------------------------------

_______________________________________________
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: [BULK] RE: R910/Linux CPU Heat Problems?

2010-12-08 Thread Eberhard Moenkeberg
Hi,

On Wed, 8 Dec 2010, Jefferson Ogata wrote:
 On 2010-12-08 23:31, Bond Masuda wrote:

 Yeah, looks like the R910 has 4 PSUs; definitely something off with one
 of them. I'd consider taking a physical look at it. Who knows? Maybe one
 PSU is failing and generating a lot of heat?

BTW: each PSU has two fans.

 Or perhaps the high fan speed in one PSU is part of a scaled response to
 the high CPU temp. Maybe at higher CPU temps the other PSU fans will
 spin up to high speed.

If the (single!) PSU fan were triggered by strong heat from a CPU in its 
airflow path, one of the three central fan pairs should have been driven to 
high speed too, and so should the second fan of the same PSU.

So the heat source (if there is one at all; it may just be a failing sensor) 
looks to me like a local defect in one of the two PSUs.
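
A sketch of how to look at the PSU side from the OS, in case it is a sensor 
or PSU fault (the exact sensor type string differs between firmware versions):

  # PSU status sensors
  ipmitool sdr type "Power Supply"
  # any logged PSU or fan events
  ipmitool sel list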


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: Perc 5i PE 1950 limitations

2010-10-18 Thread Eberhard Moenkeberg
Hi,

On Mon, 18 Oct 2010, markw wrote:

 Wondering what the drive capacity limitations are on a PE 1950 with a PERC
 5i controller?  Can it handle the newer 2.5" 600GB SAS drives, or am I
 going to be stuck with a 300GB limit?  Firmware release is 5.2.2-0072.
 I don't see anything in the release notes for the firmware, and it
 appears that Dell does a max of 4x300GB drives in these things.

I have got 600 GB drives for the 1950, but only with special help from Mr. 
Heinz (jens_he...@dell.com).

For formal reasons they fall outside the system warranty, but they work with 
the PERC 5/i.
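
Once such a drive is in a slot, OMSA shows whether the PERC 5/i reports the 
full capacity; a sketch, assuming OMSA is installed and the PERC is 
controller 0:

  omreport storage pdisk controller=0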


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: 2.5 vs 3.5 drive performance?

2010-10-12 Thread Eberhard Moenkeberg
Hi,

On Tue, 12 Oct 2010, Adam Nielsen wrote:

 Not sure if it's still the case, but these drives used to have smaller
 platters to reduce the seek time, so they were pretty much 2.5" drives
 in a 3.5" shell.

 Impossible.

 Well a quick Google suggests otherwise:

 http://www.pcguide.com/ref/hdd/op/mediaSize-c.html (see table at end of
 page)

 And to quote from the end of
 http://www.datarecoverylink.com/understanding_platter_sizes.html:

 "Decreasing the size of the platters decreases the distance in which the
 head actuator must move the heads side-to-side performing random seeks
 thus improving seek time and making random reads/writes more
 efficient...The movement to smaller platters began in earnest when some
 manufacturers trimmed the platters in their 10,000 RPM hard disk
 drives from 3.74" down to 3" while keeping them as standard 3.5" form
 factor drives on the outside for compatibility. Seagate's Cheetah X15
 15,000 RPM drive goes even further, dropping the platter size down to
 2.5", again trading performance for capacity"

 So given equal RPM the 2.5" drive should have a faster full-stroke seek,
 but as has been pointed out, an SSD would be even better in this respect.

 Where do you see an advantage of 2.5" over 3.5"?

 Well if the platters are smaller the heads have less distance to move,
 so seeking from the start of the disk to the end would be quicker...

 The opposite is the case.

 [1]
 http://www.latestpcnews.com/western-digital-launches-new-backplane-compatible-wdvelociraptor-hard-drive/

 Only advertising, sorry.

 Not all advertising is wrong :-)

All of these points ignore the loss of capacity.
I was talking about equal capacity.
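
For anyone who wants numbers from their own drives instead of from data 
sheets, a minimal sketch (/dev/sdX is a placeholder; this reads the raw 
device, so use a disk without data you care about):

  # sequential throughput, read from the start (outer zone) of the disk
  hdparm -t /dev/sdX

  # random 4 KiB reads over the whole device, roughly seek-limited IOPS
  fio --name=randread --filename=/dev/sdX --direct=1 --rw=randread \
      --bs=4k --ioengine=libaio --iodepth=32 --runtime=30 --time_based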


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: 2.5 vs 3.5 drive performance?

2010-10-11 Thread Eberhard Moenkeberg
Hi,

On Tue, 12 Oct 2010, Adam Nielsen wrote:

 With all drives, the used surface starts at the outer edge and does
 not reach the inner edge. So 3.5" drives in any case have the bits
 sliding faster under the heads at a given rpm.

 Unless they're 10kRPM or faster.

No, at any given rpm.

 Not sure if it's still the case, but these drives used to have smaller
 platters to reduce the seek time, so they were pretty much 2.5" drives
 in a 3.5" shell.

Impossible.

 I believe the WD Raptor was (is?) almost a 2.5" drive
 with a heatsink large enough to bring it up to 3.5" [1] - this is going
 back a few years now though, I don't know if it is still true.

 So given equal RPM the 2.5" drive should have a faster full-stroke seek,
 but as has been pointed out, an SSD would be even better in this respect.

Where do you see an advantage of 2.5" over 3.5"?

The opposite is the case.

 [1] 
 http://www.latestpcnews.com/western-digital-launches-new-backplane-compatible-wdvelociraptor-hard-drive/

Only advertising, sorry.

Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: 2.5 vs 3.5 drive performance?

2010-10-09 Thread Eberhard Moenkeberg
Hi,

On Sat, 9 Oct 2010, Seth Mos wrote:

 With all drives, the used surface starts at the outer edge and does
 not reach the inner edge. So 3.5" drives in any case have the bits
 sliding faster under the heads at a given rpm.

 The sequential throughput of the 3.5 inch disk is a lot higher than
 that of the 2.5 inch disks.

 The smaller platters of the 2.5 inch drives do mean that the end-to-end
 sweep of the head is shorter between the two formats. This has random
 seek performance benefits.

 The lesser mass of the 2.5 inch platters means that they normally use
 less power as well.

 And you can stack a lot more of them into a 2U box == more iops.

Assuming equal track and bit density, a 3.5" drive needs fewer tracks 
because its tracks are longer. So the random-seek advantage is also on 
the 3.5" side.

More IOPS per unit of physical server volume is the only benefit of 2.5".
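
A rough back-of-the-envelope check (illustrative platter radii, not vendor 
data): the bit rate under the head scales with the linear speed at the 
outer edge, 2 * pi * r * rpm / 60.

  3.5" platter, outer radius ~46 mm, 15000 rpm:  2 * 3.14 * 0.046 * 250  ~ 72 m/s
  2.5" platter, outer radius ~32 mm, 15000 rpm:  2 * 3.14 * 0.032 * 250  ~ 50 m/s

And at equal track density and equal capacity, the larger platter needs 
fewer tracks because each track is longer, so the actuator has fewer of 
them to cover.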


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



linux.dell.com rsync targets

2010-04-01 Thread Eberhard Moenkeberg
Hi,

would it be possible to offer http://linux.dell.com/files/ via rsync?


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: linux.dell.com rsync targets

2010-04-01 Thread Eberhard Moenkeberg
Hi,

On Thu, 1 Apr 2010, David Pierce wrote:

 Probably not the right place to ask, but I really doubt they would.
 I'd just use curl and a bit of logic in a script:

 1. Get a list of files.
 2. If I don't have a file in that list, get it.
 3. If I have it, does mine have the same size (and maybe timestamp)
 (poor man's checksum)?  If not, get it.

It is good that (or if) you do it that way, but that is a crazy route I won't take.
I will only mirror it if I get access via rsync.
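
For comparison, the two routes look roughly like this (a sketch only; the 
rsync module path is hypothetical, since linux.dell.com does not export 
rsync, and wget's timestamping is the "poor man's checksum" David describes):

  # HTTP-only workaround: walk the tree, re-fetch files whose size/timestamp changed
  wget --mirror --no-parent --no-host-directories http://linux.dell.com/files/

  # what a real rsync export would allow (module path made up)
  rsync -av --delete rsync://linux.dell.com/files/ /srv/mirror/dell/files/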

 On 04/01/2010 02:58 PM, Eberhard Moenkeberg wrote:
 Hi,

 would it be possible to offer http://linux.dell.com/files/ via rsync?


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: Monitoring non-Dell drives with OMSA?

2010-03-31 Thread Eberhard Moenkeberg
Hi,

On Thu, 1 Apr 2010, Robin Bowes wrote:
 On 01/04/10 00:59, Eberhard Moenkeberg wrote:

 If you don't mention that the failed drive is a one-year item from Dell,
 it is covered by your 3-year warranty.

 Er, I wouldn't think so - all Dell items have unique serial numbers.
 They will know which items have 3-year support and which have only 1-year.

 Maybe also if you buy Seagate drives from a second source.

 I doubt that very much too. Dell drives have Dell firmware. They're not
 going to let me send in generic Seagate drives for replacement.

OK, but if a Dell drive fails and you immediately put a second-source 
drive in its place, it is the Dell drive that you return.

And in a second phase, you could put your remaining second-source drives 
into all the hot-spare positions, so it would take a very long time before 
any of them had a chance of being returned directly to Dell.

 But my question is: why didn't you originally choose the 5-year warranty
 from Dell?
 You can get it, even if it is not selectable in the web interface.

 Our original systems have 3-year warranty. I have asked if the drives
 can be delivered with 3-year warranty too.

You can extend the warranty to 5 years for the whole system, and that 
would cover all Dell-delivered drives.

 And/or try to get an offer for drives without trays - I am currently
 struggling to get an empty 3.5" tray for an R710 which can accept a 2.5"
 SATA SSD - Dell can't deliver what I want (at least my sales representative
 is too constrained to really understand and act properly - I know it would
 be possible if I knew the part number), and the proper drive tray with a
 160 GB SATA disk (which I would throw away) is supposed to cost me
 160 Euros plus tax.
 So the trays are costly - ask to go without them.

 I have been quoted the same price for the drive, with or without the
 drive tray.

Crazy, Dell, really.

 You know you can buy drive trays here [1] don't you?

Thanks, but my case is a special one: they don't offer 3.5" SAS trays with 
an internal adapter for 2.5" SATA drives - which I would need in order to 
swap out the SATA drive and put an SSD in its place.

 [1]
 http://www.scsi4me.com/dell-0f238f-3-5-inch-sas-satau-drive-tray-caddy-carrier-for-poweredge-r410-r710-t610-etc.html

Thanks again, also on behalf of all the others frustrated by Dell's 
official sales people.


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: R710 w/ Mixed SSD + SAS Drives

2010-03-24 Thread Eberhard Moenkeberg
Hi,

On Tue, 23 Mar 2010, Peter Grandi wrote:

 As to buying separate drives, you can buy nice Corsair X256 SSDs
 from Dell, as an accessory, and they are known to be very good
 (Indilinx controller) performers (I am buying eight, mostly to go
 into a T710).

Can you please post a URL?


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



RE: External array showing as /dev/sda

2010-03-21 Thread Eberhard Moenkeberg
Hi,

On Sun, 21 Mar 2010, David Hubbard wrote:

 From: Behalf Of J. Epperson

 I'm somehow missing how getting the non-installable smaller
 GPT VD to be /dev/sda will change that scenario. The other
 responder echoed one of my initial thoughts when he suggested
 turning off the external array.  That should do it.

 If I could get the internal raid controller to be
 /dev/sda, then the RHEL/CentOS installer would not
 care about the fact that the external array is
 too big and would require GPT to boot from, and
 the installer would let me proceed.  It was only
 an issue with it being /dev/sda, since that made the
 installer think there was no way to write an MBR
 and boot off of it.

 But unplugging the external array did lead me in the right direction.
 What I've had to do is this:

 1) I had wanted the internal array to be a single RAID 50
 across 8 drives.  Thanks to Dell's choice of LSI
 for their current RAID controllers, and LSI missing
 the feature that most others seem to have of being
 able to present parts of one array as multiple
 logical drives, I ended up having to give up the first
 two drives to make a RAID 1 mirror smaller than 2 TB,
 leaving only six drives for the RAID 50.

 2) Unplugged the external array and installed CentOS
 using a normal non-GPT boot to the RAID 1 virtual
 drive.  It installed to /dev/sda.

 3) After the install, edit /boot/grub/device.map and
 change it to read:

 (hd0) /dev/sdb

 Then:

 grub
 grub> device (hd0) /dev/sda
 device (hd0) /dev/sda
 grub> root (hd0,0)
 root (hd0,0)
  Filesystem type is ext2fs, partition type 0x83
 grub> setup (hd0)
 setup (hd0)
  Checking if "/boot/grub/stage1" exists... no
  Checking if "/grub/stage1" exists... yes
  Checking if "/grub/stage2" exists... yes
  Checking if "/grub/e2fs_stage1_5" exists... yes
  Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
 succeeded
  Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2
  /grub/grub.conf"... succeeded
 Done.


 4) Reboot, connect the external array while the server is
 rebooting; it comes back up and boots off the internal
 array from the BIOS, and grub is happy because now it is
 set up for /dev/sdb.

 The only downside to this setup is that if something were
 to fail and take the external array down, the server
 won't boot, since the internal array will go back to being
 /dev/sda.  But if the external array is down then
 we've got issues anyway. :-)

The real issue is the order of the drivers within the initrd file.

I can't speak for RH, but with SUSE you have /etc/sysconfig/kernel with a 
line like

   INITRD_MODULES="amd74xx megaraid_mbox processor thermal fan jbd ext3 \
   edd aic7xxx qla2300 tg3"

where you can change the order to suit your needs.
After the change, the initrd file needs to be rebuilt.
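
A sketch of the whole change on SUSE (the module names are only an example; 
put the driver of the controller that should become sda first):

  vi /etc/sysconfig/kernel
  #   INITRD_MODULES="megaraid_sas mptsas ..."
  mkinitrd      # rebuilds the initrds for the installed kernels

On RHEL/CentOS the rough equivalent is reordering the scsi_hostadapter 
aliases in /etc/modprobe.conf and rebuilding the initrd with mkinitrd for 
the running kernel.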


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: Dell git trees now available by git://

2010-02-11 Thread Eberhard Moenkeberg
Hi,

On Thu, 11 Feb 2010, Matt Domsch wrote:

 For those wanting to follow along and contribute to Dell's various
 open source projects which have published git trees at
 http://linux.dell.com/git, you can now use http, rsync, and git:// to
 pull from the projects.

 For example, the tree for DKMS is available via any of:

 git://linux.dell.com/dkms.git
 http://linux.dell.com/git/dkms.git
 rsync://linux.dell.com/git/dkms.git

 We've had both http and rsync available for some time, just not well
 advertised.  We added git:// today.

 I hope you enjoy this new way to participate.

  ftp://ftp5.gwdg.de/pub/linux/dell/
  http://ftp5.gwdg.de/pub/linux/dell/
  rsync://ftp5.gwdg.de/pub/linux/dell/

will be a mirror of both repo/ and git/.
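
So cloning through the mirror should work the same way as from 
linux.dell.com, e.g. (the mirror path for git/ is assumed from the layout 
above):

  git clone git://linux.dell.com/dkms.git
  # or, once the mirror has synced:
  git clone http://ftp5.gwdg.de/pub/linux/dell/git/dkms.git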


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: RAID battery-backed cache - necessary? (was: Linux-PowerEdge Digest, Vol 68, Issue 35)

2010-02-10 Thread Eberhard Moenkeberg
Hi,

On Thu, 11 Feb 2010, Adam Nielsen wrote:

 This is perhaps off-topic too, but I have always wondered...

 You might also want to look at getting a hardware RAID card or
 daughterboard like the PERC-6i - these will allow you to set up a
 RAID-10/50/60 that will stripe all data between two drives, giving you
 another twofold speed increase. You probably want to make sure that your
 card has battery backup if you care about your database - otherwise a
 power cut can lose cached data rather painfully, even if you have a UPS.
 If you're moderately paranoid, or your data is important, you should
 disable on-drive write caching, as these never have battery backup - but
 this will cost you some speed. (This is a software issue, though, and
 won't affect your purchased configuration.)

 I am curious as to why this type of battery-backed cache is important.
 The OS would do a large amount of caching (Linux can have a disk cache
 of many gigabytes) which I am sure would be far more effective than the
 small caches on many RAID cards.

 Given that the OS, if configured properly, should provide the best type
 of caching possible, why is it still necessary to have RAID cache and
 on-drive cache?  Surely these would provide no additional benefit?

 Anyway, just something I've often wondered about :-)

Simple: because a sudden total power loss would wipe out the buffer cache.
The battery-backed controller cache keeps the data and writes it back 
after the next power-on (provided that happens within the battery's hold-up time).
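
That is also why the advice above about on-drive write caches matters: the 
drive's own cache has no battery at all. Turning it off is a one-liner 
(a sketch; /dev/sdX is a placeholder, and the right tool depends on SATA 
vs. SAS):

  hdparm -W0 /dev/sdX           # SATA: disable the drive's volatile write cache
  sdparm --clear=WCE /dev/sdX   # SAS: clear the Write Cache Enable bit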


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: Third-party drives not permitted on Gen 11 servers

2010-02-10 Thread Eberhard Moenkeberg
Hi,

On Wed, 10 Feb 2010, Jason Edgecombe wrote:

 Please, email your Dell customer rep and complain about this!

 I did.

 I contacted my Dell customer rep and he forwarded my complain to the
 product support group. He said they may re-evaluate things if lots of
 people complain. (I can hope...)

 We don't have the Dell R710's, and I still complained.

The mass of complaints here could easily be brought to the attention of the 
relevant people at Dell with the help of the Dell employees on this list.

I guess Matt Domsch, at least, has already raised a proper signal against 
his marketing colleagues, and I guess he has the standing to place it where 
it matters.


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



RE: PE 2650 - which memory?

2009-12-09 Thread Eberhard Moenkeberg
Hi,

On Wed, 9 Dec 2009, Howard, Chris wrote:

 You guys are a lifesaver.

 This says I have two 512MB DIMMs in bank1_A and bank1_B
 and the speed is 266 MHz.

As I remember, it is DDR1 registered ECC.
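
The running system can confirm that without opening the case; a sketch:

  dmidecode -t memory | egrep -i 'size|type:|speed'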


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: Hidden DRAC commands

2009-11-16 Thread Eberhard Moenkeberg
Hi,

On Tue, 17 Nov 2009, Adam Nielsen wrote:

 While waiting for the DRAC firmware source to be released (stuff is
 happening, just very slowly) I thought I'd poke around a bit more in the
 DRAC5 firmware and see what I could find.  For those who are interested,
 I discovered a hidden subcommand in racadm.

 The racadm util command allows you to perform certain debug-related
 tasks.  "racadm util help" will list them, and you can get further
 information with "racadm util help <command>".  Note that the bfinfo
 command is misspelled in the help; it is really bdinfo.

 One interesting command is "racadm util misc -opendir /etc" which will
 give you a directory listing of the /etc folder.  It sure beats my
 previous method of "for I in /etc/*; do $I; done" :-)

 I still haven't figured out how to get a proper console though...

Many thanks for this, really many.

Too many vendors do not see the benefits they could gain through open-sourcing 
their software and thereby engaging the brains of their users.


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)



Re: How To: Replace PERC6 In The r710 With An Areca ARC-1680

2009-10-28 Thread Eberhard Moenkeberg
Hi,

On Wed, 28 Oct 2009, Brian A. Seklecki wrote:

 All:

 Here are a few quick notes  photos on ripping the PERC6 out of an r710
 and replacing it with an Areca ARC-1680IX-12 :

 Photos:  http://digitalfreaks.org/~lavalamp/cp/thumbnails.php?album=52


 1) The PERC6 performance is really poor.  It's really slow to
   write at any RAID level.  In RAID5, it averages writes of ~30-50
   MBps whereas Areca cards average 300-400 MBps

   - It's even faster with ZFS on FreeBSD/amd64 RELENG_8 and some
 sysctl tweaks

   - The management interface is horrible, the documentation for
 the proprietary CLI is horrible, and after 10 years, it's yet
 to be integrated into the IPMI BMC

   - It's probably not Dell's fault.  LSI/QLogic makes the
 chip; blame them (But it's Dell that takes it in the hilt,
 repeatedly, with every generation of server, when they
 renew the contract)

 I used to get faster disk I/O in my SBUS QLogic FAS408 in my
 SparcStation 20.

 Anyway, you get what you pay for with Dell;.. but you can get a
 lot more w/ Areca...for what you pay for with Dell!?

 2) The latest r710 and PERC6 use the industry standard SAS SFF-8087
   internal cable connector between the HBA and the backplane.

   That means you can just swap out the HBA, or if you're one of
   Dell's big embedded clients, order the unit w/o PERC6 or have
   Dell ship you whatever you want (3Ware?), probably.

 3) Installation of the Areca ARC-1680IX-12

   We used a PCIe x8 SAS RAID card w/ 512MB cache.  We purchased it
   off of NewEgg.com for approximately the same price as a PERC6 adds
   to an r710.  The LCD monitor and battery put it slightly over.

   The unit is PCIe x8 and there's 512MB of DDR cache onboard plus a
   2GB DDR add-on, with up to 4GB possible (PERC can't compete here).

   The card has its own Intel IOP348 1200MHz CPU, an
   IPv4-enabled firmware (Web, SSH, Telnet, SNMP, SMTP), and
   very decent F/OSS management support.

   You can see photos of the ARC-1680X-12 in Pictures 10, 13, 14.

   The external connectors on the card are:
- RS232 over RJ11/RJ14 (The included cable terminates to DB9M)
- Ethernet management
- External SAS SFF-8088

 3.1) Installation Notes

   In pictures 3 and 4, you can see the Dell SFF-8087 cables from
   the PERC6 terminating into the backplane.  The cables run along
   a raceway on the right side of the case (oriented looking at the
   faceplate).

   In pictures 1 and 2, if you remove the CPU/RAM cover and front
   fan bank, you can see the cables in the raceway.

   Trace them back to the PERC6 and disconnect them from the HBA
   (Dell used a proprietary ribbon connector on the PERC6 side;
   good thinking Dell!  Look at how well proprietary worked for
   IBM, Sun, etc.). This connector can be seen in picture 6 and 12.
 .
   Pull the cables out of the raceway, then disconnect the SFF-8087
   from the SAS backplane.

   As you can see in picture 6, the PERC6 is secured in place in
   a special PCIe port retainer that reminds me of the MCA or EISA cards
   in my PS/2 servers.

   The retainer is a T-16 hex nut head, as seen in picture 8.  Failing
   that, use an acetylene welder or plasma torch.

   Install your Areca card on the top PCI-E 8x/16x port (picture 15)

   Install the SFF-SFF cable, included with the Areca, as seen
   in picture into the r710 backplane raceway. See picture 16
   and 20

   You may need to run multiple SFF cables depending on your
   backplane configuration.

   Note: 90-degree angled cables would be best.  Dell apparently has
 them custom made; I can't find them on the
 Interwebs, so I carefully bent the SFF connector.

   Restore the fan array and CPU/RAM cover.

   Note: Photo 21 the SAS/SFF cable goes above the cover, so
 tuck the cover under the cable (90deg cable
 mitigates this)

   Final cable routing seen in Picture 22.

   Restore case and experience an instantaneous I/O'gasm' as
   your $16k server screams to life.

   Did I mention that the Areca has volume management
   built into it? :}

   Walk directly to the local bar and buy everyone a few
   rounds with the money you saved by having a few fast
   servers instead of a datacenter full of them trying to keep
   up with the Slony backlog.

   Good luck and let me know if you have any questions (or where to
   find some slick SFF-8087 cables with a 90deg angle connector)

   You can see a dmesg(8) for the r710 w/ areca for NetBSD/amd64
   -current from last month at:

 http://www.nycbug.org/?NAV=dmesgd;f_dmesg=;f_bsd=;f_nick=;f_descr=;dmesgid=2016#2016

Splendid. Brian, you are one of the greatest. Really.
Thanks for this highlight.
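
For anyone repeating this, a quick way to confirm that the OS attached the 
Areca driver instead of the PERC one (a sketch; arcmsr is the Areca 
SAS/SATA RAID driver name on Linux and the BSDs):

  dmesg | grep -i arcmsr
  lspci | grep -i areca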


Viele Gruesse
Eberhard Moenkeberg (emoe...@gwdg.de, e...@kki.org)
