Yeah, I'll put it in with IBM next week. What you describe sounds like exactly what happened here, except I didn't have enough knowledge to do what you did to fix it. :)
With me, it could have been some kind of limit; there were 118 3390-3 drives attached, and a dozen or two 3390-9s, all provided courtesy of my local Shark ESS800.

-Paul

> -----Original Message-----
> From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf Of Marcy Cortes
> Sent: Friday, October 30, 2009 10:47 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: z/Linux, LVM, and minor disasters.
>
> Paul, I thought it was all my fault, but after reading about your issues, perhaps not.
>
> I had a dozen servers that needed a disk added into LVM (SLES 10, kernel 2.6.16.60-0.42.5).
> I used a scripting tool to send the appropriate dasdfmt, fdasd, pvcreate, and vgextend commands.
> Of those 12 identical servers, 2 had issues: vgscan reported that a particular UUID could not be found.
> I dared not reboot.
> By issuing pvdisplay, pvscan, and lvdisplay, I figured out that all of the disks were there except the new one.
> The new one had a completely different UUID from the one vgscan was complaining about.
> So I ended up issuing pvcreate --uuid against the new disk and then (I think) a vgcfgrestore, and all was well.
>
> I figured my vgextend command might somehow have picked up the wrong disk, but I couldn't find any other disk with that UUID on it. I chalked it up to gremlins.
>
> I'm sure I didn't blow any limit, since this was only the 6th disk added to the LVM.
>
> Since you can duplicate the problem, I think reporting it to your support provider might be called for.
>
> Marcy
>
> "This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation."
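[Editor's note: a minimal sketch of the add-a-disk sequence and the recovery step Marcy describes. The DASD device number, device node, VG name, and UUID below are illustrative placeholders, not values from the thread; these commands are destructive and require root on a System z Linux guest.]

```shell
# Bring the new DASD online and prepare it for LVM.
# 0.0.0201 / /dev/dasdb / vgdata are example names only.
chccwdev -e 0.0.0201              # set the DASD online
dasdfmt -b 4096 -y /dev/dasdb     # low-level format (DESTRUCTIVE)
fdasd -a /dev/dasdb               # auto-create a single partition
pvcreate /dev/dasdb1              # write an LVM PV label and UUID
vgextend vgdata /dev/dasdb1       # add the PV to the volume group

# Recovery along the lines Marcy describes, if vgscan then complains
# about a missing UUID: re-stamp the new PV with the UUID the VG
# metadata expects, then restore the metadata from backup.
# (Newer LVM2 requires --restorefile with --uuid; older releases did not.)
# pvcreate --uuid <MissingUUID> --restorefile /etc/lvm/backup/vgdata /dev/dasdb1
# vgcfgrestore vgdata
```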
> -----Original Message-----
> From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf Of Paul Raulerson
> Sent: Friday, October 30, 2009 3:16 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: [IBMVM] z/Linux, LVM, and minor disasters.
>
> Hi Mark - I would have agreed 100% with you up until a few days ago. I am still not sure it was LVM's fault.
> There were a LOT of DASD volumes attached to this instance, and I think I may have blown some limit somewhere.
> Part of the reason there were multiple LVM volume groups is that I had a lot of 3390-3s around, and LVM, at least at one time, seemed to have trouble creating volumes much larger than 100 GB out of 3390-3s. I fell into a historical habit, I suppose.
>
> It worries me, and the fact that I can duplicate it worries me enough to not want to use LVM for a while.
>
> I was kinda hoping someone else had run into this, but perhaps it is more likely that I am just doing something wrong.
>
> -Paul
>
> On Oct 30, 2009, at 1:27 AM, Mark Post wrote:
>
> >>>> On 10/30/2009 at 1:23 AM, Paul Raulerson <p...@raulersons.com> wrote:
> > -snip-
> >> Has anyone else run into this?
> >
> > Not without some error messages and the like to go on.
> >
> > LVM doesn't care about device names, DASD address ordering, etc. All it cares about is whether it can find the UUIDs it expects for all of its PVs. Those are written to the disk volumes when pvcreate and vgextend are done.
> >
> > If it can't find all of those UUIDs, it will throw a fit. Usually that means a PV was added to the VG, and that volume was not available at the next reboot. Most often that's the result of not re-running mkinitrd and zipl. (YaST will do that for you automatically if you use it; otherwise you have to remember to do it. I'm not sure whether Red Hat has a similar mechanism to keep things in sync.)
> >
> >
> > Mark Post
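[Editor's note: a sketch of the post-vgextend housekeeping Mark's explanation implies on SLES for System z. Names are illustrative; run as root after any change to the set of PVs a boot-time VG depends on.]

```shell
# Verify that every PV the volume group expects is present, and that the
# on-disk UUIDs match what the VG metadata recorded.
pvs -o pv_name,pv_uuid,vg_name

# Rebuild the initial ramdisk and rewrite the s390 boot record, so the
# newly added PV is brought online and activated at the next IPL.
# (YaST runs these for you; done by hand, they are easy to forget.)
mkinitrd
zipl
```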