Re: [CentOS] IBM ServeRAID M5014 and CentOS 5.5

2010-06-15 Thread Jens Neu
http://www.redbooks.ibm.com/technotes/tips0738.pdf clearly states Support 
for RHEL 4 as well as 5, so you should have no problem with CentOS 5.5.

regards
Jens Neu




Peter Hinse  
Sent by: centos-boun...@centos.org
06/15/2010 01:46 PM
Please respond to
CentOS mailing list 


To
CentOS mailing list 
cc

Subject
[CentOS] IBM ServeRAID M5014 and CentOS 5.5






Hi all,

we are about to buy some IBM x3550 M2 servers with the onboard ServeRAID
M5014 SAS controller. Can anyone confirm that these controllers work with
CentOS 5.5? (It seems to be a rebranded LSI controller.)

I cannot find any hint in the 2.6.18-194.3.1.el5 sources...

Regards,

 Peter

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos





Re: [CentOS] Running yum shows errors

2010-07-14 Thread Jens Neu
Hi,
Did you try disabling the mirror selection and specifying a single repo?
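Concretely, that suggestion comes down to turning off the fastestmirror plugin (a sketch assuming the stock CentOS 5 file locations):

```ini
# /etc/yum/pluginconf.d/fastestmirror.conf -- disable the plugin
[main]
enabled=0
```

Then, in each section of /etc/yum.repos.d/CentOS-Base.repo, comment out the mirrorlist= line and set an explicit baseurl= pointing at one known-good mirror, and retry the install.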

-Jens



- Original message -
From: Jatin Davey [jasho...@cisco.com]
Sent: 14.07.2010 11:03 ZE5B
To: centos@centos.org
Subject: [CentOS] Running yum shows errors



Hi

I am getting the following errors when I try to use yum to install the
net-snmp packages.

[r...@sc1 yum.repos.d]# yum install net-snmp
Loaded plugins: fastestmirror
Determining fastest mirrors
Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in ?
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 229, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 104, in main
    result, resultmsgs = base.doCommands()
  File "/usr/share/yum-cli/cli.py", line 339, in doCommands
    self._getTs(needTsRemove)
  File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs
    self._getTsInfo(remove_only)
  File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo
    pkgSack = self.pkgSack
  File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 591, in <lambda>
    pkgSack = property(fget=lambda self: self._getSacks(),
  File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 434, in _getSacks
    self.repos.populateSack(which=repos)
  File "/usr/lib/python2.4/site-packages/yum/repos.py", line 223, in populateSack
    self.doSetup()
  File "/usr/lib/python2.4/site-packages/yum/repos.py", line 71, in doSetup
    self.ayum.plugins.run('postreposetup')
  File "/usr/lib/python2.4/site-packages/yum/plugins.py", line 176, in run
    func(conduitcls(self, self.base, conf, **kwargs))
  File "/usr/lib/yum-plugins/fastestmirror.py", line 176, in postreposetup_hook
    if downgrade_ftp and _len_non_ftp(repo.urls) == 1:
  File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 585, in <lambda>
    urls = property(fget=lambda self: self._geturls(),
  File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 582, in _geturls
    self._baseurlSetup()
  File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 538, in _baseurlSetup
    mirrorurls.extend(self._getMirrorList())
  File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1349, in _getMirrorList
    fo = urlgrabber.grabber.urlopen(url, proxies=self.proxy_dict)
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 612, in urlopen
    return default_grabber.urlopen(url, **kwargs)
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 891, in urlopen
    return self._retry(opts, retryfunc, url)
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 852, in _retry
    r = apply(func, (opts,) + args, {})
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 890, in retryfunc
    return URLGrabberFileObject(url, filename=None, opts=opts)
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1008, in __init__
    self._do_open()
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1091, in _do_open
    fo, hdr = self._make_request(req, opener)
  File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1204, in _make_request
    fo = opener.open(req)
  File "/usr/lib/python2.4/urllib2.py", line 358, in open
    response = self._open(req, data)
  File "/usr/lib/python2.4/urllib2.py", line 376, in _open
    '_open', req)
  File "/usr/lib/python2.4/urllib2.py", line 337, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.4/urllib2.py", line 573, in <lambda>
    lambda r, proxy=url, type=type, meth=self.proxy_open: \
  File "/usr/lib/python2.4/urllib2.py", line 580, in proxy_open
    if '@' in host:
TypeError: iterable argument required


Please let me know what needs to be done.

Thanks
Jatin


[CentOS] GPT Partitions >2.2T with Centos 5.5

2010-07-14 Thread Jens Neu
Dear all,

unfortunately I observe drastic drops in read performance when I connect
LUNs >2.2T to our CentOS 5.5 servers. I suspect issues with the GUID
Partition Table, since it happens reproducibly only on the LUNs >2.2T, and
performance goes back to normal when using a LUN <2T with "normal" legacy
MBR partitions.
All machines are CentOS 5.5 (and RHEL 5.5) on IBM LS21/LS41 blades; all
LUNs come via Brocade 4G FC switches from a Sun 7310 Unified Storage; EXT3
filesystems with standard settings.
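Incidentally, the 2.2T boundary is exactly the 32-bit LBA limit of the legacy MBR partition format (2^32 sectors of 512 bytes each); anything larger requires GPT. A quick sanity check of that number:

```shell
# The MBR partition table stores start and length as 32-bit sector
# counts; with 512-byte sectors the largest addressable size is:
max_bytes=$(( (1 << 32) * 512 ))
echo "$max_bytes bytes"   # 2199023255552 bytes, i.e. ~2.2 TB
```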

Any hints?

regards
Jens



Re: [CentOS] GPT Partitions >2.2T with Centos 5.5

2010-07-14 Thread Jens Neu
> I think ext3 is a bit slow on so big LUNS, try xfs or similar?

there is no official support for xfs in RHEL5, so that is not an option.
Besides, I'm talking about a drop from >150 MB/s to ~10 MB/s read
performance when crossing the 2.2T size. That is clearly not in the ext3
(or FS, for that matter) magnitude of performance issues. Also, the
numbers stay the same when dd'ing on the raw device (LUN).
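For reference, a raw sequential-read test looks like the sketch below. Against the real LUN you would read from /dev/mapper/mpathN (a placeholder name); the demo uses a scratch file so it is safe to run anywhere. dd prints the throughput summary on stderr.

```shell
# Create a 64 MiB scratch file, then do a sequential read of it and
# show dd's transfer summary (last line of its stderr output).
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 2>/dev/null
dd if=/tmp/ddtest.img of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/ddtest.img
```

On the real device, adding iflag=direct (where supported) bypasses the page cache so the number reflects the storage path rather than RAM.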

-Jens




Eero Volotinen  
Sent by: centos-boun...@centos.org
07/14/2010 11:35 AM
Please respond to
CentOS mailing list 


To
CentOS mailing list 
cc

Subject
Re: [CentOS] GPT Partitions >2.2T with Centos 5.5







Re: [CentOS] GPT Partitions >2.2T with Centos 5.5

2010-07-15 Thread Jens Neu
> Do you see the slow behaviour (10 MiB/s) for all of the device or only
> for the part that is >2T?
>
> Why use a partition table at all? run dd directly against the device.
> If this is slow then you have a controller side problem.

very true, back to "Go" :(




Peter Kjellstrom  
Sent by: centos-boun...@centos.org
07/14/2010 01:39 PM
Please respond to
CentOS mailing list 


To
centos@centos.org
cc

Subject
Re: [CentOS] GPT Partitions >2.2T with Centos 5.5









[CentOS] Multipathing with Sun 7310

2010-05-27 Thread Jens Neu
Dear list,

we have a relatively new Sun Storage 7310, to which we connect CentOS 5.5
servers (IBM LS21/LS41 blades) via Brocade switches, 4 GBit FC. The blades
boot from SAN via qla2xxx and have no hard disks at all. We want them to
use multipathing from the very beginning, so /boot and / are already seen
by multipathd. The problem is that the Sun 7310 has two storage heads
which run in active/passive mode, BUT multipathd thinks they are
active/active and therefore shows half the available paths as faulty
(multipath -ll below).
While this probably gives me the desired redundancy, it is a relatively
messy situation: it will be unnecessarily hard to detect real path
failures, and the OS keeps complaining about "readsector0 checker reports
path is down", which gives me >40M/24h of /var/log/messages garbage.
Any hints for a reasonable configuration? Unfortunately the Sun 7310 is
rather new, so almost nothing shows up on Google... even less for
RHEL/CentOS :-(

regards from Berlin
Jens

[r...@dev-db1 tmp]# multipath -ll
sdaa: checker msg is "readsector0 checker reports path is down"
sdab: checker msg is "readsector0 checker reports path is down"
sdac: checker msg is "readsector0 checker reports path is down"
sdad: checker msg is "readsector0 checker reports path is down"
sdd: checker msg is "readsector0 checker reports path is down"
sdh: checker msg is "readsector0 checker reports path is down"
sdl: checker msg is "readsector0 checker reports path is down"
sdp: checker msg is "readsector0 checker reports path is down"
sdq: checker msg is "readsector0 checker reports path is down"
sdr: checker msg is "readsector0 checker reports path is down"
sds: checker msg is "readsector0 checker reports path is down"
sdt: checker msg is "readsector0 checker reports path is down"
sdu: checker msg is "readsector0 checker reports path is down"
sdv: checker msg is "readsector0 checker reports path is down"
sdx: checker msg is "readsector0 checker reports path is down"
sdz: checker msg is "readsector0 checker reports path is down"
mpath0 (3600144f0fdf58b5c4bc738070001) dm-0 SUN,Sun Storage 7310
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 0:0:1:0 sda  8:0[active][ready] 
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:0:0 sde  8:64   [active][ready] 
\_ round-robin 0 [prio=1][enabled]
 \_ 0:0:2:0 sdi  8:128  [active][ready] 
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:1:0 sdm  8:192  [active][ready] 
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:3:0 sdq  65:0   [failed][faulty]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:2:0 sdr  65:16  [failed][faulty]
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:4:0 sdx  65:112 [failed][faulty]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:3:0 sdz  65:144 [failed][faulty]


Re: [CentOS] Multipathing with Sun 7310

2010-05-27 Thread Jens Neu
Hi Alexander,

thanks for replying, here's my current multipath.conf:

defaults {
user_friendly_names yes
}

blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^(hd|xvd|vd)[a-z]*"
wwid "*"
}

blacklist_exceptions {
wwid "3600144f0fdf58b5c4bc738070001"

devices{
device {
 vendor "SUN"
 product "Sun Storage 7310"
}
}
}

devices {
 device {
 vendor "SUN"
 product "Sun Storage 7310"
 path_grouping_policy failover
 getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
 prio_callout "/sbin/mpath_prio_rdac /dev/%n"
 features "0"
 failback immediate
 rr_weight uniform
 no_path_retry queue
 rr_min_io 1000
 }
}

I added "path_grouping_policy failover" because of your message. I also
noticed you have path_grouping_policy specified twice; is this on purpose?
Also, when I activate 'hardware_handler "1 rdac"', the box does not boot
any more, with some rdac driver error message that I can't catch since it
scrolls by too fast...

with the above multipath.conf it gets even stranger:

[r...@dev-db1 ~]# multipath -ll
sdd: checker msg is "readsector0 checker reports path is down"
sdh: checker msg is "readsector0 checker reports path is down"
sdi: checker msg is "readsector0 checker reports path is down"
sdj: checker msg is "readsector0 checker reports path is down"
C9 Inquiry of device  failed.
C9 Inquiry of device  failed.
mpath0 (3600144f0fdf58b5c4bc738070001) dm-0 SUN,Sun Storage 7310
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:1:0 sda 8:0   [active][ready] 
 \_ 1:0:0:0 sde 8:64  [active][ready]

however, the system "lives happily", despite the error messages.

Jens Neu
Health Services Network Administration

Phone: +49 (0) 30 68905-2412
Mail: jens@biotronik.de



"Alexander Dalloz"  
Sent by: centos-boun...@centos.org
05/27/2010 04:47 PM
Please respond to
CentOS mailing list 


To
"CentOS mailing list" 
cc

Subject
Re: [CentOS] Multipathing with Sun 7310






> Dear list,
>
> we have a relatively new Sun Storage 7310, where we connect CentOS 5.5
> Servers (IBM LS21/LS41 Blades) via Brocade Switches, 4GBit FC. The 
Blades
> boot from SAN via qla2xxx, and have no harddisks at all. We want them to
> use multipathing from the very beginning, so /boot and / are already 
seen
> by multipathd. Problem is, that the Sun 7310 has two storage heads which
> run in active/passive mode. BUT the multipathd thinks, they are
> active/active and therefor shows half the available paths as faulty
> (multipath -ll below)
> While this probably gives me the redundancy that is desired, it is a
> relatively messy situation, since it will be unnecessary hard to detect
> real path failures and the OS is complaining about "readsector0 checker
> reports path is down" which gives me >40M/24h /var/log/messages garbage.
> Any hints for a reasonable configuration? Unfortunately the Sun 7310 is
> rather new, so almost nothing shows up on google... even less for
> RHEL/CentOS :-(
>
> regards from Berlin
> Jens
>
> [r...@dev-db1 tmp]# multipath -ll
> sdaa: checker msg is "readsector0 checker reports path is down"
> sdab: checker msg is "readsector0 checker reports path is down"
> sdac: checker msg is "readsector0 checker reports path is down"
> sdad: checker msg is "readsector0 checker reports path is down"
> sdd: checker msg is "readsector0 checker reports path is down"
> sdh: checker msg is "readsector0 checker reports path is down"
> sdl: checker msg is "readsector0 checker reports path is down"
> sdp: checker msg is "readsector0 checker reports path is down"
> sdq: checker msg is "readsector0 checker reports path is down"
> sdr: checker msg is "readsector0 checker reports path is down"
> sds: checker msg is "readsector0 checker reports path is down"
> sdt: checker msg is "readsector0 checker reports path is down"
> sdu: checker msg is "readsector0 checker reports path is down"
> sdv: checker msg is "readsector0 checker reports path is down"
> sdx: checker msg is "readsector0 checker reports path is down"
> sdz: checker msg is "readsector0 checker reports path is down"
> mpath0 (3600144f0fdf58b5c4bc738070001) dm-0 SUN,Sun Storage 7310
> [size=50G][features=0][hwhandler=0][rw]

Re: [CentOS] Multipathing with Sun 7310

2010-05-31 Thread Jens Neu
Alexander,

thank you very much, you're the man!

best regards from Berlin
Jens Neu

Health Services Network Administration




"Alexander Dalloz"  
Sent by: centos-boun...@centos.org
05/28/2010 04:29 PM
Please respond to
CentOS mailing list 


To
"CentOS mailing list" 
cc

Subject
Re: [CentOS] Multipathing with Sun 7310







Ah, while coming to an end with this mail and considering checking
upstream (http://christophe.varoqui.free.fr/) whether they know about your
specific storage device, I found the info you need to configure!

wikis.sun.com/download/attachments/186238602/2010_Q1_ADMIN.pdf

Pages 107-108 give you all the required / Sun-recommended settings:

device {
        vendor                  "SUN"
        product                 "Sun Storage 7310"
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        hardware_handler        "0"
        path_grouping_policy    group_by_prio
        failback                immediate
        no_path_retry           queue
        rr_min_io               100
        path_checker            tur
        rr_weight               uniform
}

I guess ALUA is enabled by default on the SUN storage 7310, but check it.

> Jens Neu
> Health Services Network Administration

Regards

Alexander






[CentOS] SUN Storage 7310 with RHEL 5.5 / Centos 5.5 and multipathed root, again.

2010-06-03 Thread Jens Neu
Dear all,

After some trouble finding the right multipath.conf, I'm now running
CentOS 5.5 against the Sun 7310 more or less successfully. Unfortunately,
one issue remains:

The Sun 7310 has two storage heads, H1 and H2, which are in an
active/passive configuration. If I define a LUN on H1, everything works
great: I edit multipath.conf, re-create the initrd, and I find 8 paths,
active/ready and active/ghost:

mpath0 (3600...0001) dm-0 SUN,Sun Storage 7310
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 0:0:1:0 sdd 8:48  [active][ready]
 \_ 1:0:0:0 sdg 8:96  [active][ready]
 \_ 1:0:1:0 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=4][enabled]
 \_ 1:0:2:0 sdm 8:192 [active][ghost]
 \_ 0:0:2:0 sdn 8:208 [active][ghost]
 \_ 1:0:3:0 sds 65:32 [active][ghost]
 \_ 0:0:3:0 sdt 65:48 [active][ghost]

The thing is: as soon as I set up a machine with the LUN(s) on the second
head, all works well until I want to boot with the freshly created
initrd/multipath.conf. The system hangs with a kernel panic, "Creating
multipath devices... no devices found" and "mount: could not find
filesystem '/dev/root'".
Workaround: when I remove the paths to the passive storage head (the
"ghost paths") in the SAN fabric, the system boots fine, and I can
re-enable the paths once the kernel is up. The paths become
[active][ghost] as desired some time after they show up in the FC zoning.

This smells like broken mkinitrd to me.
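For completeness, this is the rebuild step referred to above, as I understand it on RHEL/CentOS 5; the initrd path is an assumption, and the command is only echoed here (a dry run) since it must run as root on the affected host:

```shell
# Rebuild the initrd for the running kernel so the boot-time multipath
# setup matches the current /etc/multipath.conf. Remove "echo" to
# actually run it (as root, on the affected machine).
KVER=$(uname -r)
echo mkinitrd -f "/boot/initrd-$KVER.img" "$KVER"
```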

System is very up-to-date, Centos 5.5 Version below:

[r...@dev-db3 ~]# yum list device-mapper-multipath
Installed Packages
device-mapper-multipath.x86_64    0.4.7-34.el5_5.1    installed
[r...@dev-db3 ~]# uname -r
2.6.18-194.3.1.el5

Any suggestions?

best regards from Berlin,

Jens Neu
Health Services Network Administration


Re: [CentOS] Package Distribution Server?

2010-06-04 Thread Jens Neu
If you are setting up a large number of workstations (my pain threshold
would probably be around 20), or if you have hard requirements that the
workstations be really equal in patch level, you should maybe consider a
Spacewalk server:

https://fedorahosted.org/spacewalk/
http://wiki.centos.org/HowTos/PackageManagement/Spacewalk
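Without Spacewalk, a crude approximation is to push the same yum transaction to every workstation over ssh. The host names below are hypothetical, and the commands are only printed (a dry run) rather than executed:

```shell
# Print the command that would be run against each workstation;
# drop the "echo" to actually execute over ssh (key-based root login
# or sudo assumed).
for host in ws01 ws02 ws03; do
    echo ssh "root@$host" "yum -y update"
done
```

This gives no reporting or rollback, which is exactly what Spacewalk adds on top.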

regards
Jens Neu

Health Services Network Administration





Zhihao Lou  
Sent by: centos-boun...@centos.org
06/04/2010 05:28 AM
Please respond to
CentOS mailing list 


To
centos@centos.org
cc

Subject
[CentOS] Package Distribution Server?






Dear List,

I'm trying to set up a lab with multiple workstations running CentOS 5.
Does anybody know how to keep the packages in sync among the
workstations? Ideally I want any change made on any machine to be applied
to all the other machines. Alternatively, "pushing" the changes (adding
and/or removing packages) from one central server to all the other
machines would also be fine.

Thanks

Zhihao Lou

P.S.: Sorry for the confusing subject line. I really don't know the
accurate name for the feature I described here.