Re: [Linux-PowerEdge] PERC RAID init speed slowness

2021-05-18 Thread Blake Hudson
I believe the default PERC setting is to leave the disk cache at the 
drive's default. However, you can manually enable/disable the disk cache on 
a per-VD basis to see whether that has any positive or negative effect (this 
change can be made via perccli or the iDRAC). For data integrity 
reasons, most folks recommend disabling the disk's cache to ensure data 
is committed promptly.
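For reference, a change like that can be made with perccli along these lines (the controller and VD numbers are examples; exact syntax may vary slightly between perccli versions):

```shell
# Show current cache settings for all virtual disks on controller 0
perccli64 /c0/vall show all

# Disable the physical disk cache for VD 0 (options: on | off | default)
perccli64 /c0/v0 set pdcache=off

# Revert to the drive's default behavior
perccli64 /c0/v0 set pdcache=default
```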


--Blake


On 5/18/2021 1:19 PM, Paul Raines wrote:

Thanks, maybe the cache is the reason.  I am still surprised by the
difference if that is it.

The Seagate RAID is progressing at about 0.25% per hour while the
WD RAID is progressing about 2% per hour.   So 8x faster.

-- Paul Raines (http://help.nmr.mgh.harvard.edu)



On Tue, 18 May 2021 1:53pm, Blake Hudson wrote:

The WD drives are a family that comes in either 18TB or 16TB sizes while 
the Seagate model comes in 16TB-10TB sizes; this would indicate that the 
WD drives have higher density or might be based on newer technology. The 
WD drives have a 512MB cache while the Seagate drives have a 256MB cache; 
the increased cache may provide additional performance in a wider 
variety of applications.


As far as Dell is concerned, a 16TB 7.2k SAS drive is a commodity 
part where one model/brand is as good as another, even if the 
replacement part is a model based on technology several years older 
or newer than the original.


https://www.seagate.com/files/www-content/datasheets/pdfs/exos-x16-DS2011-3-2008US-en_US.pdf

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/data-center-drives/ultrastar-dc-hc500-series/product-manual-ultrastar-dc-hc550-sas-oem-spec.pdf



--Blake


On 5/18/2021 11:57 AM, Paul Raines wrote:


 I did firmware updates of everything in the lifecycle controller
 as the first thing I did when I booted before making the RAIDs
 so all firmware levels should be the same.

 The disks are different brands it appears.  Not sure why that
 happened as we ordered these at the same time.

 The first, slower box has ST16000NM010G 16TB drives and
 the second, faster box has WUH721816AL5200 16TB drives.

 A surprising difference to have different brands of disks but I am also
 surprised it makes such a big difference in the init rates if
 that really is the cause.

 I guess I will open a ticket with Dell support.

 -- Paul Raines
 (http://help.nmr.mgh.harvard.edu)





 On Tue, 18 May 2021 11:42am, Blake Hudson wrote:


   One explanation might be that the disks are not identical. Dell will
 sometimes ship different model drives depending on availability, or the
 drives could have come with a different firmware version installed. You
 might check with perccli to see if this is a possibility.

 Also, some manufacturers may re-use the same model number to indicate a
 drive of X capacity for X usage, but the internals for the drive could
 change from year to year (WD is known for this). Something to check if
 the drives report a subtly different model number, different date of
 manufacture/location of manufacture, or have different style labels or
 fonts from one server to another.

 --Blake

 On 5/18/2021 9:20 AM, Paul Raines wrote:


  I recently bought two identical PowerEdge T640 with internal PERC H730P
  Adapter and eighteen 16TB disks.  I created a 17 disk RAID6 with 1 hot
  spare on both.

  On the 1st server I created the RAID Sunday (May 16 16:20:13)
  and on the 2nd server I created it Monday (May 17 11:23:45)

  As of right now the 1st server RAID reports 9% complete while
  the 2nd server RAID reports 50% complete.   Which is crazy since
  the 1st has been running almost half a day longer!

  I checked the BGI rate on both; both are set to the default 30%.

  There is no other activity on either server going on as I am
  waiting for the init to finish.

  I can find no errors or other clues in the logs I know to check.
  All other aspects of the RAIDs are good (all disks report good).

Re: [Linux-PowerEdge] PERC RAID init speed slowness

2021-05-18 Thread Blake Hudson
The WD drives are a family that comes in either 18TB or 16TB sizes while 
the Seagate model comes in 16TB-10TB sizes; this would indicate that the 
WD drives have higher density or might be based on newer technology. The 
WD drives have a 512MB cache while the Seagate drives have a 256MB cache; 
the increased cache may provide additional performance in a wider 
variety of applications.


As far as Dell is concerned, a 16TB 7.2k SAS drive is a commodity part 
where one model/brand is as good as another, even if the replacement 
part is a model based on technology several years older or newer than 
the original.


https://www.seagate.com/files/www-content/datasheets/pdfs/exos-x16-DS2011-3-2008US-en_US.pdf
https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/data-center-drives/ultrastar-dc-hc500-series/product-manual-ultrastar-dc-hc550-sas-oem-spec.pdf

--Blake


On 5/18/2021 11:57 AM, Paul Raines wrote:


I did firmware updates of everything in the lifecycle controller
as the first thing I did when I booted before making the RAIDs
so all firmware levels should be the same.

The disks are different brands it appears.  Not sure why that
happened as we ordered these at the same time.

The first, slower box has ST16000NM010G 16TB drives and
the second, faster box has WUH721816AL5200 16TB drives.

A surprising difference to have different brands of disks but I am also
surprised it makes such a big difference in the init rates if
that really is the cause.

I guess I will open a ticket with Dell support.

-- Paul Raines (http://help.nmr.mgh.harvard.edu)



On Tue, 18 May 2021 11:42am, Blake Hudson wrote:

  One explanation might be that the disks are not identical. Dell will 
sometimes ship different model drives depending on availability, or the 
drives could have come with a different firmware version installed. You 
might check with perccli to see if this is a possibility.


Also, some manufacturers may re-use the same model number to indicate 
a drive of X capacity for X usage, but the internals for the drive 
could change from year to year (WD is known for this). Something to 
check if the drives report a subtly different model number, different 
date of manufacture/location of manufacture, or have different style 
labels or fonts from one server to another.


--Blake

On 5/18/2021 9:20 AM, Paul Raines wrote:


 I recently bought two identical PowerEdge T640 with internal PERC H730P
 Adapter and eighteen 16TB disks.  I created a 17 disk RAID6 with 1 hot
 spare on both.

 On the 1st server I created the RAID Sunday (May 16 16:20:13)
 and on the 2nd server I created it Monday (May 17 11:23:45)

 As of right now the 1st server RAID reports 9% complete while
 the 2nd server RAID reports 50% complete.   Which is crazy since
 the 1st has been running almost half a day longer!

 I checked the BGI rate on both; both are set to the default 30%.

 There is no other activity on either server going on as I am
 waiting for the init to finish.

 I can find no errors or other clues in the logs I know to check.
 All other aspects of the RAIDs are good (all disks report good)
 A comparison of 'perccli64 /c0 show all' between the two shows
 no differences in any settings.

 Simple random-write 'fio' tests on both show slightly slower
 on the first server compared to the second but only about 10-15%

 Anyone have any clues as to why the huge slowness on the
 initialization on the first server?


 ---
 Paul Raines
 http://help.nmr.mgh.harvard.edu


 MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging
 149 (2301) 13th Street Charlestown, MA 02129    USA



 ___
 Linux-PowerEdge mailing list
 Linux-PowerEdge@dell.com
 https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] PERC RAID init speed slowness

2021-05-18 Thread Blake Hudson
One explanation might be that the disks are not identical. Dell will 
sometimes ship different model drives depending on availability, or the 
drives could have come with a different firmware version installed. You 
might check with perccli to see if this is a possibility.
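A quick way to do that check with perccli might look like this (controller 0 assumed; the grep patterns are guesses at the relevant field names, which can differ between versions):

```shell
# One line per physical drive: slot, state, capacity, model
perccli64 /c0 /eall /sall show

# Full per-drive details, filtered down to model, firmware, and serial
perccli64 /c0 /eall /sall show all | egrep -i 'model|firmware|serial'
```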


Also, some manufacturers may re-use the same model number to indicate a 
drive of X capacity for X usage, but the internals for the drive could 
change from year to year (WD is known for this). Something to check if 
the drives report a subtly different model number, different date of 
manufacture/location of manufacture, or have different style labels or 
fonts from one server to another.


--Blake

On 5/18/2021 9:20 AM, Paul Raines wrote:


I recently bought two identical PowerEdge T640 with internal PERC 
H730P Adapter and eighteen 16TB disks.  I created a 17 disk RAID6 with 
1 hot spare on both.


On the 1st server I created the RAID Sunday (May 16 16:20:13)
and on the 2nd server I created it Monday (May 17 11:23:45)

As of right now the 1st server RAID reports 9% complete while
the 2nd server RAID reports 50% complete.   Which is crazy since
the 1st has been running almost half a day longer!

I checked the BGI rate on both; both are set to the default 30%.
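For anyone following along, the BGI rate can be read and (if desired) raised with perccli, roughly as follows (controller 0 assumed; syntax may vary by perccli version):

```shell
# Show the current background initialization rate (percent of controller time)
perccli64 /c0 show bgirate

# Temporarily raise it to speed up the init, e.g. to 60%
perccli64 /c0 set bgirate=60
```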

There is no other activity on either server going on as I am
waiting for the init to finish.

I can find no errors or other clues in the logs I know to check.
All other aspects of the RAIDs are good (all disks report good)
A comparison of 'perccli64 /c0 show all' between the two shows
no differences in any settings.
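One way to do such a comparison, sketched here with placeholder hostnames:

```shell
# Capture the full controller configuration from each server, then diff
ssh server1 'perccli64 /c0 show all' > server1.txt
ssh server2 'perccli64 /c0 show all' > server2.txt
diff -u server1.txt server2.txt
```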

Simple random-write 'fio' tests on both show the first server slightly
slower than the second, but only by about 10-15%.
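The original fio invocation isn't shown; a comparable random-write test might look like this (file path, size, and queue depth are arbitrary choices, not the parameters actually used):

```shell
# 60-second 4k random-write test with direct I/O against the array
fio --name=randwrite --rw=randwrite --bs=4k --size=4G \
    --runtime=60 --time_based --direct=1 \
    --ioengine=libaio --iodepth=32 \
    --filename=/mnt/raid/fio.test
```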

Anyone have any clues as to why the huge slowness on the
initialization on the first server?


---
Paul Raines http://help.nmr.mgh.harvard.edu
MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging
149 (2301) 13th Street Charlestown, MA 02129    USA



___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] Disk temperature

2019-10-11 Thread Blake Hudson



Onno, I'm not sure how Dell allows you to configure the server at order 
time, but Dell often has configuration limitations that are not 
immediately obvious. As a concrete example, if you try to configure a 
server with too much RAM or too many disks, you may receive a warning 
that you have to upgrade the PSU in the server to complete your order.


I could imagine that, as another possible case, Dell may allow you to 
install rear disks but it's possible they limit these disks to 7.2k (or 
disks rated to run at higher temperature, draw less power, etc). If you 
are installing disks yourself, you may not be aware of some of these 
limitations (as they may not be documented clearly or publicly). Every 
disk manufacturer sets their own temperature thresholds so while one 
drive may support 60 C, another may top out at 50 C. This isn't to say 
that either disk will be reliable if kept under those temperatures, just 
that those are the manufacturer's recommended operating temperatures and 
that they vary from model to model. If you're having high failure rates 
on rear disks, and the only obvious difference between front and rear 
disks is the operating temperature, I think that's a strong indicator 
that temperature could be a factor. Going forward you might consider 
using SSDs (which often produce less heat), lower rpm disks (that 
produce less heat), or disks rated for higher temperature extremes to 
see if there is a reliability improvement.
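For what it's worth, SAS drives typically report both their current temperature and the manufacturer's trip threshold via smartctl, so the per-model limits can be checked directly (device node and slot number below are examples):

```shell
# Current and manufacturer-set trip temperature for a drive behind a PERC
smartctl -a /dev/sdb -d megaraid,12 | grep -i temperature
# SAS drives usually report lines such as
#   Current Drive Temperature
#   Drive Trip Temperature
```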


Onno Zweers wrote on 10/11/2019 2:43 AM:

Following up.

I checked two classes of servers:
R730xd - rear disk 58°C
R740xd2 - rear disk 33°C

That's a huge difference. The fan speeds were similar, between 10,300 and 
11,160 rpm. I don't think this accounts for the difference in temperature. 
Perhaps the airflow of the system has been improved in the R740. But there is 
one significant difference: in the R740xd2, the rear disks are SSDs, where the 
R730xd have spinning disks.

Cheers,
Onno


Op 10 okt. 2019, om 20:19 heeft Onno Zweers  het 
volgende geschreven:

Thanks everyone for the very useful answers. I had a quick look:

[root@shark5 ~]# for disk in $(smartctl --scan | egrep -o 'megaraid,[0-9]+') ; do \
    echo -n "$disk - " ; \
    smartctl -a /dev/sdb -d $disk | grep 'Current Drive Temperature' ; \
  done
megaraid,0 - Current Drive Temperature: 31 C
megaraid,1 - Current Drive Temperature: 32 C
megaraid,2 - Current Drive Temperature: 32 C
megaraid,3 - Current Drive Temperature: 31 C
megaraid,4 - Current Drive Temperature: 32 C
megaraid,5 - Current Drive Temperature: 30 C
megaraid,6 - Current Drive Temperature: 32 C
megaraid,7 - Current Drive Temperature: 32 C
megaraid,8 - Current Drive Temperature: 31 C
megaraid,9 - Current Drive Temperature: 32 C
megaraid,10 - Current Drive Temperature: 34 C
megaraid,11 - Current Drive Temperature: 32 C
megaraid,12 - Current Drive Temperature: 56 C
megaraid,13 - Current Drive Temperature: 58 C
megaraid,14 - Current Drive Temperature: 44 C
megaraid,15 - Current Drive Temperature: 45 C
megaraid,16 - Current Drive Temperature: 47 C
megaraid,17 - Current Drive Temperature: 51 C

58 degrees C seems very hot to me, and indeed disks 12 and 13 are in the back 
of the machine. We have lots of these servers and we've noticed that these rear 
disks fail rather often. The 2 disks in the rear have as many failures as the 
12 disks in front. I guess the next step would be to check at which speed the 
fans are blowing.
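Fan speeds can be read from the BMC with ipmitool; the remote form below uses placeholder credentials:

```shell
# Locally, via the host's IPMI interface
ipmitool sdr type Fan

# Remotely, against the iDRAC (hostname/user/password are placeholders)
ipmitool -I lanplus -H idrac.example.com -U root -P calvin sdr type Fan
```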

Cheers,
Onno

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] R710 not iDRAC upgradeable?

2019-10-07 Thread Blake Hudson




You should be able to use any of the Windows .exe files via the method 
you have linked. If that fails, I'd try booting a live version of Linux 
(pick a Dell-supported version for your platform) from USB/CD and 
installing the BIOS update using the Linux .bin file. The Lifecycle 
Controller update check may also work (not sure if Dell has finally 
killed off the 11G platform or not).
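The live-Linux route is typically just running the update package directly; the filename below is an example, not the actual R710 package name:

```shell
# From a Dell-supported live Linux environment, as root
chmod +x ./BIOS_R710_LN_6.6.0.BIN
./BIOS_R710_LN_6.6.0.BIN    # follow the prompts, then reboot
```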


--Blake

Mauricio Tavares wrote on 10/7/2019 3:07 PM:

So I want to apply the most recent firmware/bios patch

https://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverid=0f4yy&oscode=ws8r2&productcode=poweredge-r710

https://www.dell.com/support/article/us/en/04/sln292363/update-dell-poweredge-servers-firmware-remotely-using-the-idrac?lang=en#idrac78
implies I can do that from the iDRAC, but none of the images seem to
be iDRAC-friendly. Does that mean I need to install an OS first?

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] T640 / T440 noise level comparison

2019-03-07 Thread Blake Hudson



We have a few dozen T630/T620 servers and recently started purchasing 
the T640 models. I noticed that the T640 models exhaust noticeably more 
air through the rear than a similarly equipped T630/T620 sitting in the 
same rack (causing an increase in noise). I consider the T640 a quiet 
server, but they are not as quiet as the preceding generation and they 
are certainly not silent (nor would I expect them to be).


I don't have any other Dell 14G servers around for comparison, but I 
would assume that these fan changes were made intentionally and across 
the board for the 14G server line so that the T430 -> T440 fan speed 
increase would be similar to the T630 -> T640 increase. I believe you 
can still purchase the 13G servers (minus some options that used to be 
available), if noise/volume is a priority in your application.


--Blake

Peter Holl wrote on 3/7/2019 7:41 AM:

Hello everyone:

I have a T640 here and can't get it anywhere near to silent. It's 
equipped with two "Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz" CPUs, a 
BOSS System for the OS and nine 4TB NL-SAS disks. System profile is 
set to "OS DBPM" and /proc/cpuinfo shows 800MHz for all cores, when 
nothing CPU intensive is running. However, fans are always running 
with at least 50%.


OK, this machine will end up in a server room, and I already learned 
from the DELL hotline that this should be normal since the chip-set 
temperature causes the fans to run as fast as they do.



So my real question is: has anyone out there experience with the noise 
level of a T440?


I plan to order the T440 with the same two CPUs as above, plus BOSS 
card, only two 2.5" 2TB disks, additionally a QuadPort 1Gb network 
adapter. Maybe a GPU card (low to mid performance).


The DELL shop advertises it as "Powerful, expandable and quiet", and 
I'd like to verify the "quiet".


Other hints are also welcome.


Thanks in advance,
Peter


___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] iDRAC 2.60.60.60 Power Issues

2018-10-03 Thread Blake Hudson





I haven't noticed that specific error, but I have about a dozen 
T630/R330 where the DRAC will spontaneously reset after we upgraded to 
v2.60.60.60. This version is definitely not stable.


I have not bothered rolling back to v2.52.52.52 for fear the rollback 
might cause more issues. The DRAC resets are not impacting us so far, 
so I had hoped the issue would get resolved in a future update and we 
would just upgrade when such an update became available.


--Blake

I-Ming Chen wrote on 10/2/2018 3:42 PM:




Has anyone else experienced power supply issues with iDRAC v2.60.60.60?

I’ve been starting to see the iDRAC spamming log events like:
“The system board PS1 PG Fail voltage is outside of range.”

“Power supply 1 is incorrectly configured.”

In most cases, the server ends up rebooting on its own as a result.

--

I-Ming Chen | 949-955-1380 x14094 (O) | 949-656-1405 (C)

Data Center Engineer | Blizzard Entertainment



___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] Expanding Raid 5 with additional drive

2017-03-23 Thread Blake Hudson
You've expanded your disk so now you need to expand the partition, LVM 
(and its sub parts), and the file system.


1. parted (I recommend gparted live) to expand the partition
2. pvresize to resize the LVM PV
3. vgextend only if you are adding a new PV (pvresize on an existing PV already grows the VG)
4. lvresize to resize the LVM LV
5. fsadm to resize the file system

(Run pvs, vgs, lvs, fdisk -l, and parted --list to collect an inventory 
of current setup prior to making any changes)


I don't know the specifics of your LVM names and partition table, but 
here's an example using sda partition 2, with a VG named VolGroup, and 
LV named lv_root :


parted /dev/sda
print
resizepart 2 100%    # older parted versions used: resize 2 <start> <end>
pvresize /dev/sda2   # the VG picks up the new space automatically
lvresize --size 800G /dev/mapper/VolGroup-lv_root
fsadm resize /dev/VolGroup/lv_root

Good luck,
--Blake


Sid Young wrote on 3/22/2017 5:55 PM:

sorry, that's a "pvresize" typo on my part :(

Sid Young
http://z900collector.wordpress.com/restoration/
http://sidyoung.com/

My Latest Book is available:
http://www.amazon.com/Rebuild-Japanese-Motorcycles-Motorbooks-Workshop/dp/0760347972

On Thu, Mar 23, 2017 at 8:54 AM, Sid Young > wrote:


The file system is what the OS sees, so adding more disk does not
automatically increase the filesystem. If it's a native ext4 linux
partition then you should be able to do a resize2fs; if it's a PV in
an LVM group then you will need to do a pzresize. Do lots of
research and back up all the data first, as any errors and you could
lose the lot.

Sid



Sid Young
http://z900collector.wordpress.com/restoration/

http://sidyoung.com/

My Latest Book is available:

http://www.amazon.com/Rebuild-Japanese-Motorcycles-Motorbooks-Workshop/dp/0760347972



On Thu, Mar 23, 2017 at 7:46 AM, Jeff Boyce
mailto:jbo...@meridianenv.com>> wrote:

Greetings -

 I just added a new hard drive to a PE T610 running RAID 5.  In OMSA
I selected the new drive and added it to the existing virtual disk, then
executed a reconfiguration.  After about 3 hours this successfully
completed showing the new virtual disk as 836.62 GB.

 My system is running CentOS 6 as the host KVM system, with a few
other CentOS 6 and 7 guests.  In the host system I still only see the
previous virtual disk size of about 557 GB.

Specifically:
fdisk -l /dev/sda  =  598.9 GB
Gparted shows /dev/sda  =  557.75 GB
vgdisplay  =  557.26 GB
pvdisplay  =  557.26 GB

 What special incantation do I need to do now to make the space
available to the OS?

 I haven't done this since I last added a disk to my old PE2600
about 6-8 years ago and I can't seem to find my notes, and am apparently
not using the right terms in Google to get me the answer I am looking
for.  Thanks for any assistance.

Jeff

--

Jeff Boyce
Meridian Environmental

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com 
https://lists.us.dell.com/mailman/listinfo/linux-poweredge






___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] noise level of T630?

2017-02-07 Thread Blake Hudson
Having purchased many 2800's, 2900's, T710's, T620's, and T630's, I can 
say that the trend has been quieter and power draw lower with each 
generation. There is an initial spin up at power on, but after a few 
minutes the fans do spin down automatically. I believe that the latest 
servers also allow some degree of software control over the fan speed 
(perhaps I saw this setting in the iDRAC Enterprise?). The latest PERCs 
also allow spin down/power save of drives to reduce power draw/heat 
output and, of course, the CPUs support idling/shutting down of unused 
cores for the same purpose.
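On the fan-control point: there are community-documented raw IPMI commands for manual fan control on several PowerEdge generations. They are unofficial, model-dependent, and unsupported by Dell, so treat the bytes below as an illustration rather than a recommendation:

```shell
# Switch the fans to manual control (unofficial Dell-specific raw command)
ipmitool raw 0x30 0x30 0x01 0x00
# Set all fans to 20% (0x14 hex = 20 decimal)
ipmitool raw 0x30 0x30 0x02 0xff 0x14
# Hand control back to the automatic thermal algorithm
ipmitool raw 0x30 0x30 0x01 0x01
```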

Bond Masuda wrote on 1/20/2017 4:27 PM:
> Hi Fellow Linux/PowerEdge users:
>
> I have a pair of old PE2900 that are starting to show their age and I'm
> considering replacing them. They run CentOS7 to serve as NAS function +
> KVM + few other things.
>
> I'm considering T630, but was wondering how loud the T630, especially
> compared against old PE2900? I had to swap all the fans in the PE2900
> and tweak the BMC firmware to get it to about 38 dBA and that is
> acceptable. Was wondering what other Linux users with T630 have experienced?
>
> Thanks for anything you care share,
>
> Bond
>
> ___
> Linux-PowerEdge mailing list
> Linux-PowerEdge@dell.com
> https://lists.us.dell.com/mailman/listinfo/linux-poweredge

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Re: [Linux-PowerEdge] Mixing speed - 15k and 7, 5k RPMs SAS HDD in the same raid5 array

2016-08-11 Thread Blake Hudson
Yes. You can mix drives of different sizes and speeds as long as A) the 
drives are of the same type (no mixing SATA & SAS or mixing HDDs & SSDs 
in the same array) and B) the new drives are the same size or larger 
than the drive they are replacing.


Please note, installing slower drives may lead to significant 
performance degradation of the array. I wouldn't recommend mixing 
online (SSD/15k) and nearline (7.2k) storage in the same array.


huret deffgok wrote on 8/11/2016 4:06 AM:

Hi list,

On a PE R720, with a PERC H710P Mini and an 8-drive (15k RPM SAS 6Gb - 600GB) 
RAID5 array, can I replace one faulty HDD with a 7.5k RPM SAS 
6Gb - 2TB drive?


Thank you very much if you have a definitive answer,
kfx


___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge




Re: [Linux-PowerEdge] older IDRAC's

2016-04-04 Thread Blake Hudson
You don't mention which version of Firefox, OS, or Java you're using. 
Most problems I've run into are client side, and can usually be ruled 
out there.

To rule out a server issue, I would suggest resetting the iDRAC via ssh 
and updating to the latest iDrac firmware (Dell's site shows this as 2.85).

On the client side, some of the older iDRACs don't work with current 
versions of Java. I keep around a WinXP VM with Java 1.6 and another VM 
with Java 1.7 just to access some of the older DRACs. I believe the 
iDRAC 6 should work fine with XP + Java 1.7. If it matters, on a Win10 
VM I have trouble viewing the iDRAC 6 web interface in IE 10/Edge and 
the console applet does not work with Java 8 (1.8), so the iDRAC 6 is 
basically unsupported on current client platforms. Hopefully Java will 
die soon enough and these remote KVMs will be able to use HTML5 or 
a custom plugin that works with up-to-date browsers and operating systems.

--Blake

Stephen Berg (Contractor) wrote on 4/4/2016 12:39 PM:
> Got a few systems, older R610's not under warranty, that are refusing to
> connect on the virtual console.  Log in to the iDRAC, click "launch" on
> the Virtual Console Preview, get the opening dialog in Firefox, have to
> click "Continue" for a security warning, it verifies the application and
> asks if I want to run it, click Run.
>
> Then I get a small dialog "Connecting to Virtual Console Server", and
> immediately get another small dialog, "Connection failed."
>
> I've cleared my Firefox cache, power cycled the server, just updated the
> iDRAC firmware to 2.80 using DSU and still getting Connection Failed.
> I've googled this problem and not found any solutions so far.  The
> system is running SciLinux 7.2 as are most of my servers that don't
> exhibit this failure.
>

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Re: mailing list issue

2010-11-30 Thread Blake Hudson
 Original Message  
Subject: Re: mailing list issue
From: Matt Domsch 
To: Blake Hudson 
Cc: "linux-poweredge@dell.com" 
Date: Tuesday, November 30, 2010 12:40:39 PM
> On Mon, Nov 29, 2010 at 12:22:15PM -0600, Blake Hudson wrote:
>> Happened to me last week and the week before. I checked my mail server
>> logs and their have been no delivery attempts. My guess is that there is
>> some internal problem on the mailing list server causing the messages to
>> bounce.
> Mailman definitely thinks you're bouncing:
>
> Nov 11 11:31:51 2010 (17817) linux-poweredge: bl...@ispn.net has stale bounce 
> info, resetting
> Nov 12 14:42:10 2010 (17817) linux-poweredge: bl...@ispn.net current bounce 
> score: 2.0
> Nov 15 14:42:24 2010 (17817) linux-poweredge: bl...@ispn.net current bounce 
> score: 3.0
> Nov 16 08:52:19 2010 (17817) linux-poweredge: bl...@ispn.net current bounce 
> score: 4.0
> Nov 17 07:47:53 2010 (17817) linux-poweredge: bl...@ispn.net current bounce 
> score: 5.0
> Nov 17 07:47:53 2010 (17817) linux-poweredge: bl...@ispn.net disabling due to 
> bounce score 5.0 >= 5.0
> Nov 24 09:00:03 2010 (26373) Notifying disabled member bl...@ispn.net for 
> list: linux-poweredge
>
>
> However my mail logs have rotated away for before the 24th, so I'm not
> sure what caused mailman to think it's a bounce.
>

Thanks for looking into it. Perhaps the same issue is occurring for
other users, which may assist you in finding something current.
Something to investigate, if possible, is the length of time messages
will remain in queue in the case of a transient failure (such as a
network fault or DNS failure). I can understand that dedicated mailing
list servers don't want to be busied with messages that won't deliver,
but I would think at least 2 delivery attempts would be prudent. With
the default settings of most mail server software, 2 attempts would take
under 15 minutes. This would still be considered very aggressive, as
most mail servers will typically retry a message for several days before
giving up.
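For comparison, these are the Postfix parameters (with their stock defaults) that govern how long a typical mail server keeps retrying a deferred message; other MTAs have equivalents:

```
# Postfix defaults (main.cf); other MTAs have equivalent settings
maximal_queue_lifetime = 5d      # keep retrying a deferred message for 5 days
minimal_backoff_time   = 300s    # wait at least 5 minutes between attempts
maximal_backoff_time   = 4000s   # cap the interval between retries
```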

--Blake

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: mailing list issue

2010-11-29 Thread Blake Hudson
Happened to me last week and the week before. I checked my mail server
logs and there have been no delivery attempts. My guess is that there is
some internal problem on the mailing list server causing the messages to
bounce.

--Blake

 Original Message  
Subject: Re: mailing list issue
From: Brian O'Mahony 
To: Jefferson Ogata , linux-poweredge@dell.com

Date: Wednesday, November 03, 2010 10:03:11 AM
> This happened me a few months ago, but hasn't happened since.
>
> -Original Message-
> From: linux-poweredge-boun...@dell.com 
> [mailto:linux-poweredge-boun...@dell.com] On Behalf Of Jefferson Ogata
> Sent: Tuesday, November 02, 2010 4:23 AM
> To: linux-poweredge@dell.com
> Subject: OT: mailing list issue
>
> Got a message from the linux-poweredge@dell.com mailman interface last night 
> claiming:
>
> "Your membership in the mailing list Linux-PowerEdge has been disabled due to 
> excessive bounces The last bounce received from you was dated 31-Oct-2010.  
> You will not get any more messages from this list until you re-enable your 
> membership.  You will receive 3 more reminders like this before your 
> membership in the list is deleted."
>
> with a link to re-enable the membership. Checked my mail server logs and 
> there have been no bounces on my end. Something's funky.
>



Re: Performance of MD1220 on Perc H800 slower than MD1120 on Perc/6E

2010-08-31 Thread Blake Hudson

>
> 
> *From: *Richard Ems 
> *Date: *Wed, 18 Aug 2010 03:11:12 -0700
> *To: *Marc Stephenson 
> *Cc: *
> *Subject: *Re: Performance of MD1220 on Perc H800 slower than MD1120
> on Perc/6E
>
> On 08/16/2010 09:59 PM, Marc Stephenson wrote:
> > Their next recommendation was to try installing RHEL 5 which I’m working
> > on now. Has anyone else seen performance problems on their MD1220’s?
>
> Hi Marc,
>
> Any new performance values on RHEL 5 ?
> Why are you using sysbench? Have you tried other tools?
>
> We are getting a MD1200 and a H800 controller the next days, and I am
> very interested in your results. We are going to use also XFS.
>
> Your HDDs are 2.5", right? Aren't this drives slower than the 3.5" ones?
>
> Best regards,
> Richard
>

Just FYI, the only 2.5" 300GB drives I can spec a PowerVault with are
10k RPM, while the only 3.5" 300GB drives (currently) are 15k RPM.
15k > 10k: when it comes to rotating magnetic disk drives, faster
rotation is better.

You can get 15k drives in the MD1220, but they're going to be smaller so
you're going to need more if you want to keep the same capacity.

There's nothing intrinsically slower about a 2.5" drive - in fact, 15k
3.5" drives use platters sized between 2.5" and 3". However, there is
room for more platters in a 3.5" drive, which means fewer compromises
between high capacity and high speed - you can have both in the 3.5"
form factor. 2.5" drives seem to be either high capacity or high speed,
but not both.
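As a back-of-the-envelope sketch of why spindle speed matters (a
simplified model, not a benchmark):

```python
# Average rotational latency: one revolution takes 60000/rpm ms,
# and on average the head waits half a revolution for the target
# sector to come around.
def avg_rotational_latency_ms(rpm: int) -> float:
    return 60_000 / rpm / 2

print(avg_rotational_latency_ms(10_000))  # 3.0 (ms)
print(avg_rotational_latency_ms(15_000))  # 2.0 (ms)
```

That millisecond saved per random access is the main reason 15k drives
outperform 10k drives on seek-heavy workloads.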

--Blake

Re: 1-2TB SATA drives currently shipping with 11G servers + onboard SATA options?

2010-07-08 Thread Blake Hudson
 Original Message  
Subject: Re: 1-2TB SATA drives currently shipping with 11G servers +
onboard SATAoptions?
From: Tim Small 
To: Blake Hudson 
Cc: linux-poweredge@dell.com
Date: Thursday, July 08, 2010 4:54:59 AM
> Blake Hudson wrote:
>   
>> Tim Small wrote:
>>   
>> 
>>> I know they've previously been shipping Seagate ST31000340NS 1TB drives,
>>> but I've no idea which vendor/model 2TB drives they're using?
>>>   
>>> 
>>>   
>> I recently purchased a couple Dell branded 2TB SATA drives. They are
>> Hitachi's - HDS72202A28A.
>>   
>> 
> Interesting - thanks for that - any idea if they were pulls from
> server-class, or desktop-class Dell hardware?  The reason I ask is that
> whilst Google come up with a blank for that part number, the non-Dell
> 2TB Hitachi Deskstar (i.e. desktop-class drive) is the HDS722020ALA330,
> whereas their Ultrastar drives have the part number HUA722020ALA330...
>
>   
These came mounted to sleds for 19xx/29xx servers, so I assume they were
server pulls.



Re: 1-2TB SATA drives currently shipping with 11G servers + onboard SATA options?

2010-07-07 Thread Blake Hudson
Tim Small wrote:
> I know they've previously been shipping Seagate ST31000340NS 1TB drives,
> but I've no idea which vendor/model 2TB drives they're using?
>   
I recently purchased a couple Dell branded 2TB SATA drives. They are
Hitachi's - HDS72202A28A.

Can't comment on how Dell sells them - these were purchased via eBay as
working system pulls.

--Blake



Re: PERC 6/i on a T110 ?

2010-07-06 Thread Blake Hudson
No. The PERC6/i is an integrated controller which is only compatible
with select PE servers that have the specific daughter board
connections. The T110 does not have the proper connector. You can,
however, use a PERC6/E (or other controller) in one of the PCI-e slots.

 Original Message  
Subject: PERC 6/i on a T110 ?
From: Davide Ferrari 
To: linux-poweredge@dell.com
Date: Tuesday, June 29, 2010 10:39:32 AM
> Hi, I've a T110 with the infamous S300 (fake)RAID card, and I have a
> spare PERC 6/i.. will it fit in that server?
> I'm not very used to tower servers :)
>
> This is the controller I have installed in another server
>
> 03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078
> (rev 04)
> Subsystem: Dell PERC 6/i Adapter RAID Controller
>
> Thanks in advance and sorry for this little off topic
>
>   



Re: replacement drives in PE servers

2010-03-06 Thread Blake Hudson

 Original Message  
Subject: Re: replacement drives in PE servers
From: Kevin Davidson 
To: Blake Hudson 
Cc: Matt Garman , "linux-poweredge@dell.com"

Date: Saturday, March 06, 2010 6:47:31 AM
>
> On 6 Mar 2010, at 06:53, Blake Hudson  wrote:
>
>> They usually use a Dell specific identifier, but I don't
>> believe the Dell firmware/id significantly modifies the drive's behavior
>> [...] I'm using a pair of Dell
>> branded Samsung "raid version" SATA drives in my workstation in a RAID 1
>> and they are the most unreliable drives I've ever used in a RAID- one or
>> the other will drop out of the array about once every month or two under
>> heavy load.
>
> Just read back what you've written and you'll see your blunder. You're
> using Dell drives with a non-Dell controller.
>
> :-)
>
> All that remains is for Dell to announce their own connector standards
> (DAS & DATA) to prevent this sort of error.
>
I assume the "my blunder" comment was meant facetiously. If
it matters, the Samsung 160GB SATA drives were purchased in a PE 2900III
and immediately replaced with 1.5TB Seagates. The Samsungs then went
into a new Dell Vostro. No problems with the Seagates I purchased from
Newegg, constant problems with both of the Dell branded Samsungs.

As you can tell, we purchase a few Dells... this could certainly change
based on this single policy that Dell has chosen to force.



Re: RAID battery-backed cache - necessary?

2010-03-05 Thread Blake Hudson


Adam Nielsen wrote:
>
> I am curious as to why this type of battery-backed cache is important. 
> The OS would do a large amount of caching (Linux can have a disk cache 
> of many gigabytes) which I am sure would be far more effective than the 
> small caches on many RAID cards.
>
> Given that the OS, if configured properly, should provide the best type 
> of caching possible, why is it still necessary to have RAID cache and 
> on-drive cache?  Surely these would provide no additional benefit?
>
>   

While the other posts are accurate, one thing wasn't clarified, and that
is that you're confusing read cache and write cache. The "several
gigabytes" are files that Linux has *read* into memory from disk -
allowing quicker access if it needs them again. Most Linux file systems
maintain a small write cache and flush it to disk every few seconds;
this ensures file system consistency in the event of a crash or power
loss.

As the other posters have mentioned, the cache on the RAID cards is
there to improve write performance by making up for (or hiding) the
mechanical limitations of disks. To give you an example, an application
that requires a sync after every operation (like ISC's DHCP server
offering a lease) could only commit a theoretical maximum of 250
operations per second on a 15k RPM drive (15,000 RPM / 60 = 250
revolutions per second); actual results were about half that. By
enabling the write cache on a PERC controller, ISC's DHCP server was
able to give out between 1,000 and 2,000 leases per second (until it
became CPU limited) in my testing.
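The ceiling described above can be sketched as follows (a simplified
model: without a write cache, each synchronous commit waits for the
platter to come around, so throughput tops out near one operation per
revolution):

```python
# Upper bound on fsync-per-operation throughput for a rotating disk
# with no write cache: roughly one committed operation per revolution.
def max_synced_ops_per_sec(rpm: int) -> float:
    return rpm / 60

print(max_synced_ops_per_sec(15_000))  # 250.0
```

A battery-backed controller cache breaks this ceiling by acknowledging
the sync from RAM and flushing to disk in larger, reordered batches.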



Re: replacement drives in PE servers

2010-03-05 Thread Blake Hudson
Dell just uses commodity drives from whatever manufacturer had the most
available. They usually use a Dell specific identifier, but I don't
believe the Dell firmware/id significantly modifies the drive's behavior
or enhances the drive in any way. In fact, I'm using a pair of Dell
branded Samsung "raid version" SATA drives in my workstation in a RAID
1, and they are the most unreliable drives I've ever used in a RAID -
one or the other will drop out of the array about once every month or
two under heavy load.

Ideally, you'd use matched drives in an array, but something that uses
the same interface (don't mix SAS/SATA/SCSI), spindle speed, cache, and
equal or greater size will work fine. In a pinch you can forgo spindle
speed/cache considerations, but I wouldn't run that way long term.
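The matching rules above can be encoded as a quick sanity check (a
hypothetical helper for illustration only; the field names are made up,
and in practice the values would come from the drive's spec sheet or
something like `smartctl -i`):

```python
# Criteria from the advice above: same interface is mandatory; the
# replacement must be at least as large; matching spindle speed and
# cache are strongly preferred, but can be waived in a pinch.
def is_suitable_replacement(original: dict, candidate: dict,
                            in_a_pinch: bool = False) -> bool:
    if candidate["interface"] != original["interface"]:
        return False  # never mix SAS/SATA/SCSI in one array
    if candidate["size_gb"] < original["size_gb"]:
        return False  # a smaller drive can't rebuild the array
    if in_a_pinch:
        return True   # forgo spindle/cache match short term
    return (candidate["rpm"] == original["rpm"]
            and candidate["cache_mb"] == original["cache_mb"])

old = {"interface": "SATA", "size_gb": 750, "rpm": 7200, "cache_mb": 32}
new = {"interface": "SATA", "size_gb": 1000, "rpm": 7200, "cache_mb": 64}
print(is_suitable_replacement(old, new))                   # False
print(is_suitable_replacement(old, new, in_a_pinch=True))  # True
```

The "in a pinch" flag mirrors the caveat above: a mismatched cache or
spindle speed works, but you wouldn't want to run that way long term.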

We've been replacing our Dell drives with off-the-shelf drives for years
as we upgrade our servers. This advice is valid for PERC6 and below as
well as all non-RAID controllers. Dell's newest H series PERCs will only
work with Dell drives - if you get one of these, expect to pay a few
thousand dollars in markup if you need to upgrade or replace drives out
of warranty. Just factor that into the cost of the server and see if you
still want to buy a new Dell.

--Blake

 Original Message  
Subject: replacement drives in PE servers
From: Matt Garman 
To: linux-poweredge@dell.com
Date: Wednesday, March 03, 2010 4:05:44 PM
> This isn't a Linux-specific question, but I'm hoping the Dell Linux
> community has enough experience/insight to offer some suggestions.
>
> We have many PowerEdge servers, mostly 1950, R410 and R610 models.
> Most are running two disks in a RAID-1 (mirrored) setup using the PERC
> 5/i, 6/i or LSI SAS 1068E controllers.
>
> We decided that we'd like to have spare drives for all of our servers,
> on the chance that one dies.  My question is: to order spares of the
> exact same Dell part number is quite expensive.  In the event we have
> to use a spare, is it safe to use any drive that matches size and
> interface?
>
> One example: Dell part number JW551, with description "Hard Drive,
> 750G, ES3, 7.2K, 3.5 SGT2, Galaxy".  These aren't available on the
> website, but we got a quote for a (refurbished) model that was over
> $500.  But, ultimately, it's just a 750 GB SATA drive.  I can buy a
> brand new 750 GB SATA drive for $150 or less.
>
> I was just wondering what folks' thoughts are on this matter.  What
> are the risks involved in using non-exact---but properly
> spec'ed---replacement drives?
>
> Thanks!
> Matt
>



Re: Third-party drives not permitted on Gen 11 servers

2010-02-16 Thread Blake Hudson
 Original Message  
Subject: Re: Third-party drives not permitted on Gen 11 servers
From: Jeff 
To: linux-poweredge@dell.com
Date: Wednesday, February 10, 2010 1:24:38 PM
> On Wed, Feb 10, 2010 at 12:48 PM, Bond Masuda  wrote:
>   
>> however, bottom line is this: Dell is trying to increase profits and
>> they see this "lock-in" as a potential method to achieve that goal. if
>> Dell customers want to see this change, you'll just need to show Dell
>> that it doesn't accomplish that goal. I.e., stop buying Dell, cancel
>> your orders, etc. anything short of this will not change how a business
>> operates. no amount of complaining on this mailing list is going to make
>> this change until dollars are at stake.
>> 
> +1.
>
> We are all preaching to the choir here. This list is not the best
> forum for getting our message across to Dell. I just wrote to my Dell
> Sales rep informing her that future sales are in jeopardy. Maybe if we
> all do that, they might take notice.
>
> Jeff
>
> ___
>   

RAM and HDDs are the most common upgrades we perform on our servers. At
least half of our servers get upgrades of one or both of these. I
typically buy qualified RAM from Crucial and purchase HDDs from a local
or online vendor as a commodity item. This often occurs several years
after initial purchase when the servers are re-purposed. We wrote our
sales rep regarding the topic of this thread and his response was
basically: Yes, we are doing this... "It's called HDD lock strategy
which blocks non-Dell certified HDDs from being used with these
controllers.". Attached was a pdf explaining the stringent quality
control standards for Dell's HDDs. No apology, remorse, alternative
solutions, etc.

Vendor lock-in is not an option I am willing to support. Either we will
purchase RAID controllers that support standard drives with our Dell
servers or we will purchase non-Dell servers.

--Blake

