Re: [Linux-PowerEdge] RPM repo GPG key changed

2018-06-28 Thread Gottloeb, Jeff [US] (ES)
Chandra,

Please provide the justification for not signing all of the RPMs with the new 
key.  There are Dell customers with systems that do not have Internet 
connectivity and therefore need other solutions to manage the DSU and OMSA 
repositories.  Red Hat's disconnected Satellite server is one method designed 
for this purpose but it does not support multiple GPG keys for the same 
repository.
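For reference, a quick way to see which key signed a given RPM (a sketch;
the package name below is only an example):

# Show the key ID that signed a downloaded package:
rpm -qp --qf '%{SIGPGP:pgpsig}\n' srvadmin-omacore-9.1.0-*.rpm
# List the public keys currently imported into the RPM database:
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n'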

Is there a target date when all of the RPMs will be signed with this new key?


Jeff Gottloeb
Northrop Grumman IT Solutions
310 812 4395



___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


[Linux-PowerEdge] Problem with one OMSA RPM file name

2018-01-17 Thread Gottloeb, Jeff [US] (ES)
Sorry all, I forgot to change the Subject line of my last post.


Jeff Gottloeb
Northrop Grumman IT Solutions
310 812 4395


-Original Message-
From: Gottloeb, Jeff [US] (ES) 
Sent: Wednesday, January 17, 2018 9:18 AM
To: linux-powere...@lists.us.dell.com
Subject: RE: [Linux-PowerEdge] H740p JBOD

Dell,

I have discovered that, since the December build of the OMSA repo, the 
srvadmin-cm RPM is missing the vendor/major-release abbreviation in its file 
name.  This results in the RHEL6, RHEL7, SLES11, and SLES12 RPMs having the 
exact same name but different checksums.  The file name should include .el6., 
.el7., .sles11., and .sles12., as it did up to and including the November 
build.  This is the only RPM in the srvadmin group that is not named 
correctly.

This duplication of names prevents these RPMs from being imported into Red 
Hat's Satellite server.  We use the Satellite application to manage and update 
system-level software such as this.  Satellite sees the duplicate names with 
different checksums and refuses to import them.
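The collision is easy to demonstrate by fetching the four builds under
distinct local names (a sketch using the URLs listed below):

base=http://linux.dell.com/repo/hardware/dsu/os_dependent
for os in RHEL6_64 RHEL7_64 SLES11_64 SLES12_64; do
    curl -so "srvadmin-cm.$os.rpm" \
        "$base/$os/srvadmin/srvadmin-cm-9.1.0-17.12.00.x86_64.rpm"
done
sha256sum srvadmin-cm.*.rpm    # one upstream name, four different checksums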

This was reported to your tech support as Dell Service Request# 959465939.  
Their response was that, while this repo is not supported by tech support, 
they will try to get it fixed in a future release, and that I should post the 
issue to this forum.  I need to point out that we are unable to upgrade to 
this version until this is resolved.  This doesn't seem like a fix that would 
require much effort or time.

Thanks.

http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL6_64/srvadmin/
srvadmin-cm-9.1.0-17.12.00.x86_64.rpm   2018-01-12 04:24   56M

http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/
srvadmin-cm-9.1.0-17.12.00.x86_64.rpm   2018-01-12 04:24   56M

http://linux.dell.com/repo/hardware/dsu/os_dependent/SLES11_64/srvadmin/
srvadmin-cm-9.1.0-17.12.00.x86_64.rpm   2018-01-12 04:24   56M

http://linux.dell.com/repo/hardware/dsu/os_dependent/SLES12_64/srvadmin/
srvadmin-cm-9.1.0-17.12.00.x86_64.rpm   2018-01-12 04:24   56M


http://linux.dell.com/repo/hardware/DSU_17.11.01/os_dependent/RHEL6_64/srvadmin/
srvadmin-cm-8.5.0-2372.10488.el6.x86_64.rpm 2017-11-28 01:12   110M

http://linux.dell.com/repo/hardware/DSU_17.11.01/os_dependent/RHEL7_64/srvadmin/
srvadmin-cm-8.5.0-2372.10488.el7.x86_64.rpm 2017-11-28 01:12   110M

http://linux.dell.com/repo/hardware/DSU_17.11.01/os_dependent/SLES11_64/srvadmin/
srvadmin-cm-8.5.0-2372.10488.sles11.x86_64.rpm  2017-11-28 01:12   110M

http://linux.dell.com/repo/hardware/DSU_17.11.01/os_dependent/SLES12_64/srvadmin/
srvadmin-cm-8.5.0-2372.10488.sles12.x86_64.rpm  2017-11-28 01:13   110M




Jeff Gottloeb
Northrop Grumman IT Solutions
310 812 4395

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Re: [Linux-PowerEdge] EXT :Re: dsu does not show/see all NICs?

2017-10-27 Thread Gottloeb, Jeff [US] (ES)
If you have disabled loading the usb-storage kernel module in an 
/etc/modprobe.d/ file, temporarily enable it and re-run dsu.  The inventory 
collector needs this to collect certain bits of info.
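A minimal sketch of that check (assuming the module was only blacklisted,
not removed from disk):

grep -r usb-storage /etc/modprobe.d/   # find how it was disabled
modprobe usb-storage                   # blacklisting only blocks auto-loading
                                       # (an "install usb-storage /bin/true"
                                       # line must be commented out first)
dsu --inventory                        # re-run the inventory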



Sent from my Verizon, Samsung Galaxy smartphone


 Original message 
From: lejeczek 
Date: 10/27/17 6:15 AM (GMT-08:00)
To:
Cc: linux-poweredge-Lists 
Subject: EXT :Re: [Linux-PowerEdge] dsu does not show/see all NICs?

thanks for the info.
I'm not in a hurry to update the firmware; rather, I'd like to
know why dsu does not do its job here.
That is why this was addressed @dell in the first place.
especially:
Getting System Inventory ...
warning: Inventory collector returned with partial failure.
<= HERE

OMSA's omreport sees all NICs, yet dsu fails.


On 27/10/17 12:23, Rene Shuster wrote:
> Try downloading and manually running v10.01.00 (
> http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=7N5GW
> ) and see if it recognizes and updates the card. If not
> fall back to v8.07.26 (
> http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=82J79
> ) and if that doesn't work either use the last released
> version before QLOGIC took over, and that's v7.12.19 (
> http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=35RF5
> ). All three support flavors of 5709, but as they come
> with different DEV_IDs and SUBSYS_IDs some models might
> have been dropped during the firmware development. The
> (Windows-only! Why?) Repository Manager usually shows the
> PCIDs in the properties of each firmware, so that could be
> a way to verify.
>
> On Fri, Oct 27, 2017 at 6:23 AM, lejeczek
> > wrote:
>
> hi everyone
>
> @dell
>
> In my r815s(plural) I have three NICs in total,
> embedded 4-port, and two more, like this:
>
> $ lspci | grep -i net
> 01:00.0 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 01:00.1 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 02:00.0 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 02:00.1 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 06:00.0 Ethernet controller: Broadcom Limited
> NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
> 06:00.1 Ethernet controller: Broadcom Limited
> NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
> 06:00.2 Ethernet controller: Broadcom Limited
> NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
> 06:00.3 Ethernet controller: Broadcom Limited
> NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
> 23:00.0 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 23:00.1 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 24:00.0 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
> 24:00.1 Ethernet controller: Broadcom Limited
> NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
>
> Yet! output of dsu is this:
>
> $ dsu --inventory
> DELL EMC System Update 1.5.0
> Copyright (C) 2014 DELL EMC Proprietary.
> Verifying catalog installation ...
> Installing catalog from repository ...
> Fetching dsucatalog ...
> Reading the catalog ...
> Verifying inventory collector installation ...
> Getting System Inventory ...
> warning: Inventory collector returned with partial
> failure.
>
> 1. OpenManage Server Administrator  ( Version : 8.5.0 )
>
> 2. BIOS  ( Version : 3.2.2 )
>
> 3. Dell LifeCycle Controller v1.7.5, 1.7.5.4, A00  (
> Version : 1.7.5.4 )
>
> 4. Dell 32 Bit Diagnostics, version 5162 Installer
> Revision 14.05.00, 5162A0, 5162.1  ( Version : 5162A0 )
>
> 5. iDRAC6  ( Version : 2.90 )
>
> 6. Power Supply  ( Version : M1.01.04 )
>
> 7. Power Supply  ( Version : M1.01.04 )
>
> 8. NetXtreme BCM5719 Gigabit Ethernet PCIe rev 01
> (p2p1)  ( Version : 7.10.64 )
>
> 9. NetXtreme BCM5719 Gigabit Ethernet PCIe rev 01
> (p2p2)  ( Version : 7.10.64 )
>
> 10. NetXtreme BCM5719 Gigabit Ethernet PCIe rev 01
> (p2p3)  ( Version : 7.10.64 )
>
> 11. NetXtreme BCM5719 Gigabit Ethernet PCIe rev 01
> (p2p4)  ( Version : 7.10.64 )
>
> 12. QLogic QLE2462 Adapter  ( Version : 02.23.06 )
>
> 13. QLogic QLE2462 Adapter  ( Version : 02.23.06 )
>
> Exiting DSU!
>
>
> Is the above all?  Is the above correct?
>
> many thanks, L.
>
> ___
> Linux-PowerEdge mailing list
> Linux-PowerEdge@dell.com 
> https://lists.us.dell.com/mailman/listinfo/linux-poweredge
>
>
>
>
> --
> Tech III * AppControl * Endpoint Protection * Server
> Maintenance
> Buncombe County 

[Linux-PowerEdge] Latest version of iDRAC

2017-10-24 Thread Gottloeb, Jeff [US] (ES)

I found a version of iDRAC with Lifecycle Controller (v2.50.50.50, 
https://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=278FC)
 that is not in the Dell repository at 
http://linux.dell.com/repo/hardware/dsu/os_independent/noarch/.  Will it be 
there soon?


Jeff Gottloeb
Northrop Grumman IT Solutions
310 812 4395

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Re: [Linux-PowerEdge] Expanding Raid 5 with additional drive

2017-03-23 Thread Jeff Boyce
Joe -

 I think you are covering what I need, but since you are describing 
a couple of options I will describe my system in more detail.  My goal 
here is that I want to know precisely what I am doing, and what the 
response of the system should be before I issue a command.

     Your reference to a device rescan caught my attention, and I think it 
is the step I am missing in my knowledge.  OMSA shows I have one Virtual 
Disk (00) with RAID 5, a size of 836.62 GB, and a device name of /dev/sda.

 Yet the host system shows:

[root@earth ~]# fdisk -l /dev/sda

Disk /dev/sda: 598.9 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b2ab0

Device Boot  Start End  Blocks   Id  System
/dev/sda1   *   1  64  512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2  64   72810   584330240   8e  Linux LVM

So the layout of my virtual drive is a boot partition, then everything 
else is a single LVM physical volume.  The above fdisk output matches 
what I see in GParted.  So I believe that ultimately I need to do a 
pvresize in order to have unallocated space in my volume group that I 
can assign to whatever virtual machine partition I need.  Once I have 
the unallocated space in my volume group I know what steps to complete 
after that.

But I believe right now that pvresize won't do anything because fdisk 
does not recognize the additional drive space that I have added to the 
virtual disk.  That is why I believe the rescan is what I need.

So generically,
1.  rescan the scsi bus
2.  pvresize
3.  then I should see the unallocated space in my volume group ?

Does this sound about right?

Go ahead and cc me directly as I forgot to mention that I only get the 
daily digest of this list.
Thanks.

Jeff


On 3/22/2017 7:16 PM, Joe Gooch wrote:
> I'd imagine you have some partitions there :)
>
> Maybe sda1 for boot, sda2 swap, sda3 LVM or some such thing.
>
> Should the drives need to be rescanned this will do it:
> for i in /sys/class/scsi_device/*/device/rescan; do echo "1" > $i; done
>
> Depends on whether dmesg | grep sda is returning the right drive space - 
> given that fdisk is, I'm guessing it already picked up the change from the 
> underlying hardware.
>
>
>
> If you try to extend the partition you'll need to reboot for it to take 
> effect (after making appropriate changes).  If you're extending a filesystem 
> on a partition, that's what you'll want to do.  (Use fdisk, gdisk, parted, 
> etc. to extend the partition - which in the case of fdisk means "WRITE 
> EVERYTHING DOWN PRECISELY", then "DELETE the partition and pray I wrote the 
> info PRECISELY", and then create a replacement partition with the PRECISE 
> start position and an end position further up the drive.  Or you could use 
> something more advanced that can resize and move partitions around more 
> safely, with a GUI, etc.)  After a reboot, assuming everything lined up, you 
> can extend the filesystem.
>
> If all that sounds dangerous, it isn't as dangerous as it sounds, but it's 
> tedious.  Which is why hopefully you used LVM.
>
> With LVM, I'd recommend instead of trying to extend the partition, just 
> create a new one.  Create a new partition with the additional space, make it 
> also a LVM partition type, save.  Since it's a new partition, it should fire 
> up... If not partprobe or kpartx can liven it up.  (Since it's a new 
> partition, no existing filesystem mount will have it locked)  Then you can 
> pvcreate /dev/sda4 or whatever it ended up being, and then vgextend 
> YourVGName /dev/sda4, and then you can lvextend -L +50G 
> YourVGName/YourLVName, and then you can resize2fs or mount -o remount,resize, 
> as appropriate for your filesystem.
>
> If you decide to extend the LVM partition instead, follow the previous 
> instructions, then pvresize /dev/sda3 (for example), then lvextend, then 
> resize2fs.
>
> As an aside this is one of the reasons why wherever possible I pvcreate on 
> whole disks, not partitions.  (I.e. pvcreate /dev/sda)  For physical servers 
> that ends up being a RAID1 system mirror with partitions as normal, and a 
> second VD (R5 or R6 or whatever) that can be used for bulk storage.  For VMs, 
> it's a boot drive VMDK, a swap drive vmdk, and a LVM vmdk.  /dev/sda1 gets 
> the boot volume, mkswap /dev/sdb, pvcreate /dev/sdc.
>
> Then all changes that might need to be made later can be made live.  Rescan 
> the bus, physical drive object increases in size, pvresize and you're good to 
> go.
>
>
>
> --
>   
> Joe
>
>
>
>
>
>
>
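For reference, here is Joe's new-partition route condensed into one sequence
(a sketch; the device, VG, and LV names are his placeholders):

for i in /sys/class/scsi_device/*/device/rescan; do echo "1" > $i; done
fdisk /dev/sda                   # create a new partition, type 8e (Linux LVM)
partprobe /dev/sda               # or kpartx; livens the new partition
pvcreate /dev/sda4
vgextend YourVGName /dev/sda4
lvextend -L +50G YourVGName/YourLVName
resize2fs /dev/YourVGName/YourLVName   # or the resize tool for your filesystem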

[Linux-PowerEdge] Expanding Raid 5 with additional drive

2017-03-22 Thread Jeff Boyce
Greetings -

 I just added a new hard drive to a PE T610 running RAID 5.  In OMSA 
I selected the new drive and added it to the existing virtual disk, then 
executed a reconfiguration.  After about 3 hours this successfully 
completed, showing the new virtual disk as 836.62 GB.

 My system is running CentOS 6 as the host KVM system, with a few 
other CentOS 6 and 7 guests.  In the host system I still only see the 
previous virtual disk size of about 557 GB.

Specifically:
fdisk -l /dev/sda  =  598.9 GB
Gparted shows /dev/sda  =  557.75 GB
vgdisplay  =  557.26 GB
pvdisplay  =  557.26 GB
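As a diagnostic aside, the size the kernel currently believes the disk to be
is visible in sysfs, in 512-byte sectors (a sketch, assuming the virtual disk
is sda):

cat /sys/block/sda/size    # stays at the old value until a SCSI rescan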

     What special incantation do I need to perform now to make the space 
available to the OS?

 I haven't done this since I last added a disk to my old PE2600 
about 6-8 years ago and I can't seem to find my notes, and am apparently 
not using the right terms in Google to get me the answer I am looking 
for.  Thanks for any assistance.

Jeff

-- 

Jeff Boyce
Meridian Environmental

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Re: [Linux-PowerEdge] EXT : ups... repo has gone

2016-03-23 Thread Gottloeb, Jeff (ES & CSO)
Here is Dell's reply on the issue:

"I've spoken with our folks over in systems management that own OpenManage and 
they are saying they are not sure how it was added to DSU, it's technically not 
released yet. My recommendation is to uninstall 8.3 and downgrade to 8.2 just 
to be on the safe side. I'm not sure why it was removed."

Jeff Gottloeb
Northrop Grumman IT Solutions
310 812 4395


-Original Message-
From: linux-poweredge-boun...@dell.com 
[mailto:linux-poweredge-boun...@dell.com] On Behalf Of Gottloeb, Jeff (ES & CSO)
Sent: Wednesday, March 23, 2016 9:34 AM
To: lejeczek; linux-powere...@lists.us.dell.com
Subject: Re: [Linux-PowerEdge] EXT : ups... repo has gone

It looks like all of the v8.3.0 release has been removed and the repository 
only has v8.2.0 now for at least RHEL6 and RHEL7.
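If the 404s persist after a rollback like this, stale local metadata is the
usual cause; clearing the yum cache forces a fresh fetch (a sketch):

yum clean metadata    # drop the cached repomd/package lists
yum makecache         # re-fetch; the repo should now list only 8.2.0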

Jeff Gottloeb
Northrop Grumman IT Solutions
310 812 4395

-Original Message-
From: linux-poweredge-boun...@dell.com 
[mailto:linux-poweredge-boun...@dell.com] On Behalf Of lejeczek
Sent: Wednesday, March 23, 2016 8:15 AM
To: linux-powere...@lists.us.dell.com
Subject: EXT :[Linux-PowerEdge] ups... repo has gone

is it just us?

srvadmin-realssd-8.3.0-1908.90 FAILED
http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/srvadmin-realssd-8.3.0-1908.9058.el7.x86_64.rpm:
 
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
srvadmin-smcommon-8.3.0-1908.9 FAILED
http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/srvadmin-smcommon-8.3.0-1908.9058.el7.x86_64.rpm:
 
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
srvadmin-storage-8.3.0-1908.90 FAILED
http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/srvadmin-storage-8.3.0-1908.9058.el7.x86_64.rpm:
 
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
srvadmin-storage-cli-8.3.0-190 FAILED
http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/srvadmin-storage-cli-8.3.0-1908.9058.el7.x86_64.rpm:
 
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
srvadmin-storelib-8.3.0-1908.9 FAILED
http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/srvadmin-storelib-8.3.0-1908.9058.el7.x86_64.rpm:
 
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
srvadmin-xmlsup-8.3.0-1908.905 FAILED
http://linux.dell.com/repo/hardware/dsu/os_dependent/RHEL7_64/srvadmin/srvadmin-xmlsup-8.3.0-1908.9058.el7.x86_64.rpm:
 
[Errno 14] HTTP Error 404 - Not Found
Trying other mirror.

regards

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Re: [Linux-PowerEdge] EXT : R720 DSU and iDRAC version

2016-03-21 Thread Gottloeb, Jeff (ES & CSO)
Do you have the usb-storage kernel module disabled or uninstalled?  DSU (and 
OMSA) up to v8.3 require this kernel module.  Found this out after a couple of 
weeks of troubleshooting with Dell.
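A quick sketch of what to check:

lsmod | grep usb_storage               # is the module loaded?
modinfo usb-storage                    # is it present on disk at all?
grep -r usb-storage /etc/modprobe.d/   # is it disabled by configuration?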

Sent from my Windows Phone

From: Jean-Daniel TISSOT
Sent: ‎3/‎21/‎2016 5:44 AM
To: linux-poweredge@dell.com
Subject: EXT :[Linux-PowerEdge] R720 DSU and iDRAC version


Hi,

This server is running CentOS release 6.7

dsu --apply-upgrades-only shows me a new version (2.30.30.30) for iDRAC with 
Lifecycle Controller (2.21.21.21 installed).
Booting into the Lifecycle Controller and searching for new firmware doesn't 
find anything to upgrade.
Searching the Dell Support web site for my Service Tag finds only the 
2.21.21.21 version of iDRAC.

Running dsu -u gives me:

|---Dell System Updates---|
[ ] represents 'not selected'
[*] represents 'selected'
[-] represents 'Component already at repository version (can be selected only 
if -e option is used)'
Choose:  q - Quit without update, c to Commit,  - To Select/Deselect, a 
- Select All, n - Select None

[*]1  iDRAC
 Current Version : 2.21.21.21 Upgrade to : 2.30.30.30

Enter your choice : c
Installing iDRAC-with-Lifecycle-Controller_Firmware_JHF76_LN_2.30.30.30_A00...
Collecting inventory...
.
Running validation...

iDRAC

The version of this Update Package is newer than the currently installed 
version.
Software application name: iDRAC
Package version: 2.30.30.30
Installed version: 2.21.21.21


Executing update...
WARNING: DO NOT STOP THIS PROCESS OR INSTALL OTHER DELL PRODUCTS WHILE UPDATE 
IS IN PROGRESS.
THESE ACTIONS MAY CAUSE YOUR SYSTEM TO BECOME UNSTABLE!
.   USB 
Device is not found
...   USB 
Device is not found
..   USB Device 
is not found
.
Device: iDRAC
  Application: iDRAC
  Failed to access Virtual USB Device

iDRAC-with-Lifecycle-Controller_Firmware_JHF76_LN_2.30.30.30_A00 could not be 
installed


Why does the virtual device disconnect several times?  How can I correct this problem?

Many thanks in advance.

Cheers.


--
Best regards, Jean-Daniel TISSOT
Systems and Network Administrator
Tel: +33 3 81 666 440 Fax: +33 3 81 666 568

Laboratoire Chrono-environnement
16, Route de Gray
25030 BESANCON Cédex

Map and directions
___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge


Preventing I/O starvation on MD1000s triggered by a failed disk.

2010-08-23 Thread Jeff Ewing

I had a 1TB SATA disk fail on an NFS server running RHEL5.2. A rebuild onto a 
global hot spare was triggered. One hour later, when the rebuild was 42% 
complete, serviced NFS requests dropped from 20 per second to zero. CPU2 went 
to 100% utilization, in an I/O wait state. Soon after, the internal drives 
became read only and the server needed to be power reset through the DRAC 
(server was not configured to take crash dumps).

This hardware configuration had been in production and stable for many months.

How could this be prevented in future?


Server Configuration

Dell PowerEdge 2950
Two Quad core E5440 CPUs
16 GB RAM
Red Hat Enterprise Linux Version 5.2
Kernel  2.6.18-92.1.6.el5  (x86_64)
PERC Driver :  00.00.03.21
PERC Firmware : 6.2.0-0013

Dell Support (Server/MD1000) Pro Support for IT 

Storage configuration:
-
2 * PERC6E with two MD1000s attached to each 

Controller 1:
MD1000 with SAS 400GB 10K RPM 
MD1000 with SATA 1 TB 7.2K RPM

Controller 2
MD1000 with SATA 750GB 7.2K RPM
MD1000 with SATA 2 TB 7.2K RPM

PERC6E Controller Configurations:
-
Controller Rebuild Rate : 30%
Three RAID 5 Virtual Disks on each MD1000 
  (5 disks / 5 disks /4 disks + 1 Hot Spare)
Read Policy  : No Read Ahead
Write Policy : Write Back
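One knob that directly trades rebuild speed against host I/O is the
controller rebuild rate, which OMSA can change on the fly (a sketch;
controller ID 0 assumed):

omconfig storage controller action=setrebuildrate controller=0 rate=10
omreport storage controller controller=0 | grep -i rebuild   # confirm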




Jeff Ewing

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: Preventing I/O starvation on MD1000s triggered by a failed disk.

2010-08-23 Thread Jeff Ewing
On Mon, Aug 23, 2010 at 08:46:03PM -0700, Bond Masuda wrote:
 Do you know if you encountered a URE? I've been running into URE's on
 
I have now exported the controller log from the controller that failed.  I 
didn't see a URE around the time that my NFS requests stopped (around 11:38; 
the rebuild was actually 19% complete):

08/16/10 11:37:32: EVT#47438-08/16/10 11:37:32: 103=Rebuild progress on PD 
33(e0x32/s14) is 18.98%(3262s)^M
08/16/10 11:40:10: EVT#47439-08/16/10 11:40:10: 103=Rebuild progress on PD 
33(e0x32/s14) is 19.98%(3420s)^M
08/16/10 11:42:48: EVT#47440-08/16/10 11:42:48: 103=Rebuild progress on PD 
33(e0x32/s14) is 20.98%(3578s)^M
08/16/10 11:45:26: EVT#47441-08/16/10 11:45:26: 103=Rebuild progress on PD 
33(e0x32/s14) is 21.98%(3736s)^M
08/16/10 11:48:04: EVT#47442-08/16/10 11:48:04: 103=Rebuild progress on PD 
33(e0x32/s14) is 22.98%(3894s)^M
08/16/10 11:50:42: EVT#47443-08/16/10 11:50:42: 103=Rebuild progress on PD 
33(e0x32/s14) is 23.98%(4052s)^M
08/16/10 11:53:20: EVT#47444-08/16/10 11:53:20: 103=Rebuild progress on PD 
33(e0x32/s14) is 24.98%(4210s)^M


There were errors later, when my colleague tried to set the rebuild rate to 5% 
to bring the server back:

08/16/10 13:22:54: EVT#47479-08/16/10 13:22:54: 103=Rebuild progress on PD 
33(e0x32/s14) is 58.96%(9584s)^M
08/16/10 13:25:31: EVT#47480-08/16/10 13:25:31: 103=Rebuild progress on PD 
33(e0x32/s14) is 59.96%(9741s)^M
08/16/10 13:27:06: NCQ Mode value is not valid or not found, return default^M
08/16/10 13:27:06: EVT#47481-08/16/10 13:27:06:  40=Rebuild rate changed to 5%^M
08/16/10 13:31:12: mfiIsr: idr=0020^M
08/16/10 13:31:12: Driver detected possible FW hang, halting FW.^M
08/16/10 13:31:12: Pending Command Details:^M



 Were there any kernel messages when the I/O stopped? Have you dumped the
 log from the controller? If not, that's where I would start looking


There were a lot of megasas messages in the debug log at the time the NFS 
requests stopped:

Aug 16 11:38:06 nas2 kernel: sd 1:2:0:0: megasas: RESET -356143413 cmd=2a 
retries=0
Aug 16 11:38:06 nas2 kernel: megasas: [ 0]waiting for 4 commands to complete
Aug 16 11:38:07 nas2 kernel: megasas: reset successful 

These were in dmesg also:

mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc4 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc5 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc6 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc7 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc0 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc1 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc2 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc3 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc4 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc5 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc6 not found!
mptctldrivers/message/fusion/mptctl.c::mptctl_ioctl() @596 - ioc7 not found!
sd 1:2:0:0: megasas: RESET -356143413 cmd=2a retries=0
megasas: [ 0]waiting for 4 commands to complete
megasas: reset successful 
sd 1:2:0:0: megasas: RESET -356143737 cmd=2a retries=0
megasas: [ 0]waiting for 5 commands to complete
megasas: reset successful 
sd 1:2:0:0: megasas: RESET -356143749 cmd=2a retries=0
megasas: [ 0]waiting for 5 commands to complete
megasas: reset successful 
sd 1:2:0:0: megasas: RESET -356143771 cmd=2a retries=0
megasas: [ 0]waiting for 4 commands to complete
megasas: reset successful 
sd 1:2:0:0: megasas: RESET -356143781 cmd=2a retries=0
megasas: [ 0]waiting for 5 commands to complete



Thank you.

Jeff Ewing


___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: deleting file takes longer than creating it

2010-05-05 Thread Jeff Hanson
Andrew Reid wrote:
 
 
 Stroller wrote:
  Out of curiosity, is there a reason you didn't do
 `time dd if=/dev/zero of=zero.txt bs=1024 count=1000000 && time rm 
   zero.txt` ?
  
   This machine is serving as a print server, debian package mirror, 
   and a
   mysql database server. I am not sure what kind of info you might 
   need to
   help me. But for starters, I'm running debian lenny with a 2.6.30 
   kernel.
   The file system is ext3.
  
   I'm not saying this is wholly the reason, but ext3 is notoriously slow 
   at deletions.
  
  
 
  Simple reason: insufficient caffeine; I copied and pasted the command line
 I used to test the command and then partially edited it to use the
  OP's file name.
 
 For clarity the correct test is:
 
  dd if=/dev/zero of=zero.txt bs=1024 count=1048576; sync; echo 3 > 
  /proc/sys/vm/drop_caches ; sleep 15 ; time rm zero.txt;
 
 
 On an xfs filessystem this gives:
 
 1048576+0 records in
 1048576+0 records out
 1073741824 bytes (1.1 GB) copied, 11.3907 s, 94.3 MB/s
 
 real0m0.023s
 user0m0.000s
 sys 0m0.000s
 
 
 11 seconds create, 23 ms delete. YMMV

A randomly chosen ext3 filesystem gives:

time dd if=/dev/zero of=zero.txt bs=1024 count=1000000; sync; echo 3 > 
/proc/sys/vm/drop_caches ; sleep 15 ; time rm -f zero.txt
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 19.9531 seconds, 51.3 MB/s

real0m20.603s
user0m0.525s
sys 0m5.958s

real0m1.576s
user0m0.001s
sys 0m0.073s
-- 
---
Jeff Hanson - jhan...@sgi.com - Field Technical Analyst

You can choose a ready guide in some celestial voice.
If you choose not to decide, you still have made a choice.
You can choose from phantom fears and kindness that can kill;
I will choose a path that's clear
I will choose freewill. - Lee/Lifeson/Peart

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: R710 w/ Mixed SSD + SAS Drives

2010-03-22 Thread Jeff Ewing

Which internal controller do you have? 

H700, PERC 6/i, H200 or SAS 6/iR 



On Mon, Mar 22, 2010 at 07:45:39PM -0700, Jefferson Cowart wrote:
 Does anyone have an R710 with a mixture of SAS + SSD drives? I'm trying
 to build one using a pair of SAS drives for the OS/applications and 4
 SSDs for an Oracle database. The R710 documentation
 (http://www.dell.com/downloads/global/products/pedge/en/server-poweredge
 -r710-tech-guidebook.pdf - Pages 37-39) seems to say that this is
 supported, but I can't get the online configuration tool to let me do
 it. My inside sales person is also unable to get it built. Is it
 supported to simply buy it with the SAS drives factory installed and buy
 the SSDs separately to install myself once it gets here?
 
 -- 
 Thank You
 Jefferson Cowart
 Network and Systems Administrator
 Claremont University Consortium
 
 
 
 ___
 Linux-PowerEdge mailing list
 Linux-PowerEdge@dell.com
 https://lists.us.dell.com/mailman/listinfo/linux-poweredge
 Please read the FAQ at http://lists.us.dell.com/faq

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


RE: Third-party drives not permitted on Gen 11 servers

2010-02-10 Thread Jeff Boyce
 that my partners will tell me not to even consider 
buying a Dell server as the replacement.

Jeff Boyce
Forest Ecologist
Seattle
www.meridianenv.com 

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: Third-party drives not permitted on Gen 11 servers

2010-02-10 Thread Jeff
On Wed, Feb 10, 2010 at 12:48 PM, Bond Masuda bond.mas...@jlbond.com wrote:
 however, bottom line is this: Dell is trying to increase profits and
 they see this lock-in as a potential method to achieve that goal. if
 Dell customers want to see this change, you'll just need to show Dell
 that it doesn't accomplish that goal. I.e., stop buying Dell, cancel
 your orders, etc. anything short of this will not change how a business
 operates. no amount of complaining on this mailing list is going to make
 this change until dollars are at stake.

+1.

We are all preaching to the choir here. This list is not the best
forum for getting our message across to Dell. I just wrote to my Dell
Sales rep informing her that future sales are in jeopardy. Maybe if we
all do that, they might take notice.

Jeff

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: PE 2650 - which memory?

2009-12-09 Thread Jeff Hanson
Rogers, Jamie wrote:
 Try  dmidecode |grep Speed
 

And dmidecode -t 17
for more details on the memory.
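Both of Chris's questions can be answered from the command line (a sketch):

dmidecode -t 17 | grep -E 'Locator|Size|Speed'    # per-DIMM size and speed
dmidecode -t processor | grep 'External Clock'    # front side bus speed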

 
 
 From: linux-poweredge-boun...@dell.com on behalf of Howard, Chris
 Sent: Wed 12/9/2009 8:27 AM
 To: linux-poweredge@dell.com
 Subject: PE 2650 - which memory?
 
 
 
 
 I have a PE-2650 which I'm attempting to
 administer remotely.
 
 I need more memory and the web page for the memory vendor says
 there are two types of PE-2650s and I need to know what
 the speed is of the memory that I already have installed.
 
 1) is there a way to know from the Linux command line what
 speed memory I have?
 
 2) is there a way to know from the Linux command line if
 I have a 400 Mhz or 500 Mhz front side bus?
 
 
 
 ___
 Linux-PowerEdge mailing list
 Linux-PowerEdge@dell.com
 https://lists.us.dell.com/mailman/listinfo/linux-poweredge
 Please read the FAQ at http://lists.us.dell.com/faq
 
 
 
 ___
 Linux-PowerEdge mailing list
 Linux-PowerEdge@dell.com
 https://lists.us.dell.com/mailman/listinfo/linux-poweredge
 Please read the FAQ at http://lists.us.dell.com/faq


-- 
---
Jeff Hanson - jhan...@sgi.com - Field Technical Analyst

You can choose a ready guide in some celestial voice.
If you choose not to decide, you still have made a choice.
You can choose from phantom fears and kindness that can kill;
I will choose a path that's clear
I will choose freewill. - Lee/Lifeson/Peart

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq


Re: change bios settings via script

2009-11-03 Thread Jeff Layton
omconfig should be able to do it. You can also look on the support
site for the R410 and look for the Dell Deployment Toolkit (DTK).
The DTK is designed for en masse BIOS changes.
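A sketch of the omconfig route (attribute names vary by OMSA release, so
list them first; "cpuht" below is only a guess):

omreport chassis biossetup                    # dump the current BIOS attributes
omconfig chassis biossetup attribute=cpuht setting=disabled   # name assumed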

BTW - you might want to look at your other BIOS settings. Typically
for HPC, HT is turned off and there are a few other settings (Turbo On,
C-states off, max power, and one other I think). I'm betting the account
team didn't put the HPC SKU on the order. The HPC SKU changes
the BIOS settings for you.

Jeff






From: Daniel De Marco d...@bartol.udel.edu
To: linux-poweredge@dell.com
Sent: Tue, November 3, 2009 8:19:04 PM
Subject: change bios settings via script

Hi,

I just received a shipment of several R410 that I'm going to use in a
compute cluster and I just found out that the hyper-threading is turned
on by default in the bios. Linux sees the two quad cores as 16 logical
processors. I need to disable it in 40 or so machines. Is there any way
of disabling it via a script? ipmitool, racadm, omconfig, anything??

Thanks, Daniel.

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq
___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq

Re: Installing CentOS on R710 with GPT

2009-10-23 Thread Jeff Hanson
Jason Slagle wrote:
 On Fri, 23 Oct 2009, Ryan Langseth wrote:
 
 Bond Masuda wrote:
 
 Yea that is what I would prefer to do but we are using RAID10 on the
 system for better disk performance and the raid card does not seem to
 allow multiple virtual disks on a RAID10 array.

  Looks like I will end up having to image the system, and have to work out
  the booting afterwards.
 
 I believe grub will certainly boot a gpt partition, even if centos won't 
 install there.

Only grub with the patch to support gpt.  Default grub 1.x will not.
Fedora has it, OpenSuSE does too.  RHEL/CentOS and SLES do not.
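An easy way to check what label a disk actually carries before relying on
that (a sketch):

parted -s /dev/sda print | grep 'Partition Table'   # reports gpt or msdos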

 
 Does the BIOS on a T710 support booting a gpt partition?
 
 Jason
 
 
 


-- 
---
Jeff Hanson - jhan...@sgi.com - Field Technical Analyst

You can choose a ready guide in some celestial voice.
If you choose not to decide, you still have made a choice.
You can choose from phantom fears and kindness that can kill;
I will choose a path that's clear
I will choose freewill. - Lee/Lifeson/Peart

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq