racvmcli segfault mounting iso

2009-12-18 Thread Robin Bowes
Hi,

We have a rack of R410s which have no optical drive.

I generally use PXE boot to install CentOS, but we want to try vyatta on
a couple of servers, and I've not been able to get a PXE boot working from the ISO image.

So, I thought I'd mount the ISO from another server using racvmcli.

The server is running CentOS 5.4, x86_64 and I've installed
mgmtst-racadm-6.2.0-677.

I'm using this command:

racvmcli -r 192.168.57.18 -u root -p secret -c vyatta-livecd-vc5.0.2.iso

racvmcli: connecting(1)..
Segmentation fault


As you can see, I get a segfault. Am I doing something wrong?

R.



FW: racvmcli segfault mounting iso

2009-12-18 Thread Gagan_Shrestha
For the R410, the 'vmcli' utility should be used instead of 'racvmcli'.


For details, please refer to the documentation at:
http://support.dell.com/support/edocs/software/smdrac3/idrac/idrac13mono/en/ug/html/racugc1e.htm#wp53555
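
The invocation is very similar to racvmcli; for the image and iDRAC from
your example it should look roughly like this (just a sketch - please check
the guide above for the exact option list):

vmcli -r 192.168.57.18 -u root -p secret -c vyatta-livecd-vc5.0.2.iso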

Thanks,
Gagan

-Original Message-
From: linux-poweredge-boun...@dell.com
[mailto:linux-poweredge-boun...@dell.com] On Behalf Of Robin Bowes
Sent: Friday, December 18, 2009 6:42 PM
To: linux-poweredge-Lists
Subject: racvmcli segfault mounting iso



RE: ***POSSIBLE SOLUTION*** OM 6.2 Broken Storage Section

2009-12-18 Thread Ryan Miller
Turns out that the reason srvadmin-all wouldn't install after just editing
the .repo file to say OMSA_6.1 instead of latest, and the failure of
update_firmware, were related: something strange with dependency resolution
around libsmbios means that for things to work I need to remove that package
before upgrading or downgrading. It seems a specific version of python-smbios
is required, but I'm able to downgrade srvadmin* without yum reporting a
conflict or missing dependency. Not sure what's wrong with the dependency
graph or how to give you more useful information, but at least now I have a
way back onto 6.1, and update_firmware now works with 6.2.

Still not seeing the disk controllers under 6.2 no matter what I do though, for 
SAS or PERC6, on 1950-III or R610.

Hope that's at least moderately helpful, and let me know if there's some way I 
can provide better information.

Ryan

From: jeffrey_l_mend...@dell.com [mailto:jeffrey_l_mend...@dell.com]
Sent: Friday, December 18, 2009 12:12 PM
To: Ryan Miller; linux-powere...@lists.us.dell.com
Subject: RE: ***POSSIBLE SOLUTION*** OM 6.2 Broken Storage Section

Ryan,

Thanks for the report on the firmware update exception. What distro/version are 
you running? Would you mind posting the output with '--verbose'?
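
Something along these lines should capture it (tee just keeps a copy to
attach):

update_firmware --yes --verbose 2>&1 | tee update_firmware-verbose.log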

Thanks,
Jeff


From: linux-poweredge-bounces-Lists On Behalf Of Ryan Miller
Sent: Thursday, December 17, 2009 7:03 PM
To: linux-poweredge-Lists
Subject: ***POSSIBLE SOLUTION*** OM 6.2 Broken Storage Section
I'm also seeing this issue, and no luck with the workaround either.  (Plus I
had to manually unload the ipmi module; stopping the service didn't do it.)
I opened a support case, and the tech suggested using the driver from the
website - v00.00.03.21 instead of the 00.00.04.08 in the 2.6.18-164.6.1.el5
kernel - but as this kernel and hardware play fine with OMSA 6.1 I didn't
really want to go that direction (and doubt it would help anyway).

Also, firmware updates seem to be broken:

[r...@write1y ~]# update_firmware --yes

Running system inventory...

Searching storage directory for available BIOS updates...
Checking BIOS - 1.2.6
Available: dell_dup_componentid_00159 - 1.3.6
Found Update: dell_dup_componentid_00159 - 1.3.6
Checking MBE2147RC Firmware - d701
Available: dell_dup_componentid_20515 - d903
Found Update: dell_dup_componentid_20515 - d903
Checking SAS/SATA Backplane 0:0 Backplane Firmware - 1.07
Available: dell_dup_componentid_11204 - 1.05
Did not find a newer package to install that meets all installation checks.
Checking PERC 6/i Integrated Controller 0 Firmware - 6.2.0-0013
Available: pci_firmware(ven_0x1000_dev_0x0060_subven_0x1028_subdev_0x1f0c) - 6.2.0-0013
Did not find a newer package to install that meets all installation checks.
Checking System BIOS for PowerEdge R610 - 1.2.6
Did not find a newer package to install that meets all installation checks.

Found firmware which needs to be updated.

Running updates...
|   Installing dell_dup_componentid_00159 - 1.3.6
Traceback (most recent call last):
  File "/usr/sbin/update_firmware", line 23, in ?
    ftmain.main(sys.argv[1:])
  File "/usr/share/firmware-tools/ftmain.py", line 109, in main
    result, resultmsgs = base.doCommands()
  File "firmwaretools.peak_util_decorators.rewrap wrapping cli.doCommands at 0x2ABEB180B230", line 3, in doCommands
  File "/usr/lib/python2.4/site-packages/firmwaretools/trace_decorator.py", line 81, in trace
    result = func(*args, **kw)
  File "/usr/share/firmware-tools/cli.py", line 134, in doCommands
    self.opts.mode, self.fullCmdLine, self.args)
  File "firmwaretools.peak_util_decorators.rewrap wrapping update_cmd.doCommand at 0x2ABEB2151C80", line 3, in doCommand
  File "/usr/lib/python2.4/site-packages/firmwaretools/trace_decorator.py", line 81, in trace
    result = func(*args, **kw)
  File "/usr/share/firmware-tools/plugins/update_cmd.py", line 61, in doCommand
    base.updateFirmware(base.opts.show_unknown)
  File "firmwaretools.peak_util_decorators.rewrap wrapping cli.updateFirmware at 0x2ABEB180BAA0", line 3, in updateFirmware
  File "/usr/lib/python2.4/site-packages/firmwaretools/trace_decorator.py", line 81, in trace
    result = func(*args, **kw)
  File "/usr/share/firmware-tools/cli.py", line 214, in updateFirmware
    ret = firmwaretools.pycompat.runLongProcess(pkg.install, waitLoopFunction=statusFunc)
  File "firmwaretools.peak_util_decorators.rewrap wrapping firmwaretools.pycompat.runLongProcess at 0x2ABEB0EEFAA0", line 3, in runLongProcess
  File "/usr/lib/python2.4/site-packages/firmwaretools/trace_decorator.py", line 81, in trace
    result = func(*args, **kw)
  File "/usr/lib/python2.4/site-packages/firmwaretools/pycompat.py", line 177, in runLongProcess
    raise thread.exception
xml.parsers.expat.ExpatError: no element found: line 1, column 0
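
(For what it's worth, that particular expat error is what you get when the
parser is handed an empty document, e.g.:

python -c "import xml.dom.minidom as m; m.parseString('')"

so it looks as though whatever XML the updater tried to read came back
empty.)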

Haven't dug through the python to see what the source of that might be yet, but 

Re: SAS5/E with MD1000 for JBOD

2009-12-18 Thread Preston Hagar
On Thu, Dec 17, 2009 at 4:22 PM, Philip Tait phi...@subaru.naoj.org wrote:
 On 12/17/2009 11:39 AM, Jose-Marcio Martins da Cruz wrote:
 Philip Tait wrote:
 We want to attach an MD1000 to a PE2900 in a non-RAID configuration.

 The Dell sales people are very convinced that we have to have a PERC6/E
 to connect an MD1000, but they are doing further research.

 We have two MD1000s attached to a PE2950, with a PERC5/E. With the
 PERC5/E you can do RAID 0 (stripe). So, if I understood what you're
 after, you should be able to create one RAID 0 volume for each disk,
 or a single RAID 0 across all disks.

 Thanks for the response, but I believe this would not work for our
 application because the disks would require a PERC-equipped computer for
 them to be readable. We want these drives to be readable on any PC with
 a SATA interface.


Honestly (and maybe someone can correct me) I don't think it is
possible.  I have pretty much never found a way to connect drives with
a PERC5/E and MD1000, or even connected directly to a PERC5/i for that
matter, that doesn't add Dell mojo in between.  The best solution we
found was to buy multiple PERC cards, save the configs once we had
everything the way we wanted it (doing RAID 0 on the hard drives to
fake JBOD), and then load that config onto other machines to act as
backups.  Still, if the MD1000 went out, we might be up a creek.
Although I generally love Dell hardware, one drawback I have found
with the MD1000 and PERC cards is that they want their Dell-specific
voodoo in between.  We have even found that just buying drives from a
third-party vendor will seem to work sometimes, but often leads to
flakiness.  Apparently they all have to be matched drives with Dell
firmware on the drives themselves to be fully supported (or at least
that is what we have been told).
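
For what it's worth, the per-disk RAID 0 trick can be scripted through OMSA.
A rough sketch, assuming omconfig/omreport are installed (the controller and
pdisk IDs below are just examples; check omreport for yours):

# list physical disks behind the controller
omreport storage pdisk controller=0
# one single-disk RAID 0 vdisk per drive, to approximate JBOD
omconfig storage controller action=createvdisk controller=0 raid=r0 size=max pdisk=0:0:0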

Anyway, I hope you have better luck than we have.

Preston



Re: SAS5/E with MD1000 for JBOD

2009-12-18 Thread J. Epperson
On Fri, December 18, 2009 15:25, Preston Hagar wrote:

I guess you can't connect one of these with a plain SAS controller and
have the drives presented as plain physical drives?  You could do that
with the old SCSI Powervaults.
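
If that does work, the drives ought to just show up as ordinary sd devices;
a quick sanity check would be something like:

cat /proc/scsi/scsi     # or lsscsi, if installed
smartctl -i /dev/sdb    # should talk straight to the bare drive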




RE: ***POSSIBLE SOLUTION*** OM 6.2 Broken Storage Section

2009-12-18 Thread Ryan Miller
Just to get this out into Google for anybody else that's been having trouble
rolling back to 6.1, I ended up with the following order of operations
(consolidated as a copy-pasteable sketch after the list):

1. Replace the .repo file with the OMSA_6.1-specific version (replace
   "latest" with "OMSA_6.1").

2. yum remove libsmbios srvadmin* sblim-sfcb

3. yum install yum-dellsysid

4. yum clean metadata

5. yum install srvadmin-all
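
Roughly, as shell commands (the repo file path/name below is a guess; adjust
for wherever the Dell OMSA repo file lives on your box):

sed -i 's/latest/OMSA_6.1/' /etc/yum.repos.d/dell-omsa-repository.repo
yum remove libsmbios 'srvadmin-*' sblim-sfcb
yum install yum-dellsysid
yum clean metadata
yum install srvadmin-all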

I believe that step 2 implies a bug in the dependency graph: libsmbios is a
dependency of yum-dellsysid, so the latter is uninstalled as well, but then
the correct version cannot be installed in step 3 because it introduces a
conflict with sblim-sfcb, which implies that sblim-sfcb should have been
listed as a dep of yum-dellsysid and removed with it in the first place.  I
think something similar holds for srvadmin and libsmbios, but I'm less clear
on the details.
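
If anyone wants to poke at the dependency graph themselves, something like
this should show what pulls libsmbios in:

rpm -q --whatrequires libsmbios        # installed packages that require it
repoquery --whatrequires libsmbios     # same question against the repos (needs yum-utils)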

Ryan

___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq