This is good info, thanks for the update. I'll just wait for the
release and put up with this little issue for now.


-C

On Thu, May 15, 2008 at 4:13 PM, Collins, Kevin [Beeline]
<[EMAIL PROTECTED]> wrote:
> Well, here is the final outcome:
>
> This is a bug in RHEL5 and RHEL4. It will be fixed in RHEL5u2, which is
> likely to be released in the next two weeks or so, and in RHEL4u7.
>
> As a workaround, you can kill and restart clvmd directly on all nodes
> (as opposed to using "service clvmd") while no LVM changes are being
> made. *This may cause problems - no serious testing was done.*
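>
> On each node, that amounts to something like the following (a rough
> sketch only, per the caveat above):
>
> root# killall clvmd   # stop the daemon directly, bypassing the init script
> root# clvmd           # start it again; it daemonizes on its own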
>
> Alternatively, you could open a ticket and request an "unsupported"
> patch, which is what I currently have:
> (device-mapper-1.02.24-1.el5.i386.rpm
> lvm2-cluster-2.02.32-4.el5.i386.rpm lvm2-2.02.32-1.el5.i386.rpm)
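>
> (Applying those should just be a matter of upgrading all three RPMs in
> one transaction, e.g. "rpm -Uvh device-mapper-1.02.24-1.el5.i386.rpm
> lvm2-cluster-2.02.32-4.el5.i386.rpm lvm2-2.02.32-1.el5.i386.rpm", then
> restarting clvmd on each node - but treat that as a sketch, since the
> patch itself is unsupported.)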
>
> Here are the bug IDs I was given:
>
> RHEL5:
> https://bugzilla.redhat.com/show_bug.cgi?id=437446
>
> RHEL4:
> https://bugzilla.redhat.com/show_bug.cgi?id=435341
>
> Hope this helps... I am also cross-posting to the RHEL4 list.
>
> Thanks,
>
> Kevin
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Corey Kovacs
> Sent: Wednesday, May 14, 2008 9:45 PM
> To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
> Subject: Re: [rhelv5-list] Clustered LVM
>
> I am having this exact problem with an up-to-date, five-node RHEL5.1
> cluster using an EVA8100 for storage. The sequence is as follows (with
> the rough commands sketched after the list):
>
> - create an additional vdisk on the EVA and present it to the cluster nodes
> - rescan the scsi bus on all nodes to pick up the new device.
> - add the new device to my multipath config and activate it.
> - run pvcreate on the new multipath device
> - extend the VG using the new device
> - extend an existing volume with the added VG extents (+100%FREE) <BOOM>
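>
> Roughly, those steps map to the following commands (the device, VG and
> LV names here are placeholders, not my actual ones):
>
> root# echo "- - -" > /sys/class/scsi_host/host0/scan  # repeat per HBA
> root# multipath -r                  # reload maps after updating the config
> root# pvcreate /dev/mapper/mpath5
> root# vgextend sanvg /dev/mapper/mpath5
> root# lvextend -l +100%FREE /dev/sanvg/lvdata   # <-- this is where it breaks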
>
> Everything works as expected until I try to extend a volume. I get the
> same error until I restart clvmd on all the nodes.
>
> Once the VG is extended, gfs2_grow works fine and things proceed
> normally.
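>
> (For completeness, the grow step is just "gfs2_grow <mount point>" run
> against the mounted filesystem, e.g. "root# gfs2_grow /mnt/gfs2" - the
> mount point here is illustrative.)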
>
> Also, I am _not_ using any non-clustered LVM on any of these machines.
>
> Standing by to see the results of your testing... I hope it all works
> out, as the idea of having to restart clvmd on a production cluster to
> extend a volume is a bit unnerving. :)
>
> On Mon, May 12, 2008 at 10:41 PM, Collins, Kevin [Beeline]
> <[EMAIL PROTECTED]> wrote:
>> FYI, I am still working the ticket with Red Hat. It looks like a
>> known bug. I am currently testing updated versions of lvm2,
>> lvm2-cluster and device-mapper, which so far seem promising.
>>
>> Thanks,
>>
>> Kevin
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Collins, Kevin
>> [Beeline]
>> Sent: Tuesday, May 06, 2008 8:17 AM
>> To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
>> Subject: RE: [rhelv5-list] Clustered LVM
>>
>> I don't know - I was hoping Nuno would respond and say what version he
>> was running so we could find out :)
>>
>> Kevin
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Zavodsky, Daniel
>> (GE Money)
>> Sent: Monday, May 05, 2008 11:31 PM
>> To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
>> Subject: RE: [rhelv5-list] Clustered LVM
>>
>> Hello,
>>        I am using the latest stable one. I cannot really use beta
>> channels on production machines. Is it fixed there?
>>
>> Regards,
>>        Daniel
>>
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Collins, Kevin
>> [Beeline]
>> Sent: Monday, May 05, 2008 7:02 PM
>> To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
>> Subject: RE: [rhelv5-list] Clustered LVM
>>
>> Which version of lvm2-cluster are you using? Are you subscribed to the
>> Beta channel for Cluster Storage?
>>
>> I am running the "standard" version, but there is a later one in the
>> Beta channel:
>>
>> root# rpm -q lvm2-cluster
>> lvm2-cluster-2.02.26-1.el5
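>>
>> (If you are subscribed to the Beta channel, "yum list available
>> lvm2-cluster" should show whether a newer build is offered there -
>> assuming your yum setup can see that channel.)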
>>
>> Thanks,
>>
>> Kevin
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Nuno Fernandes
>> Sent: Saturday, May 03, 2008 2:30 PM
>> To: [email protected]
>> Subject: Re: [rhelv5-list] Clustered LVM
>>
>> On Monday 28 April 2008 07:57:39 Zavodsky, Daniel (GE Money) wrote:
>>> Hello,
>>>     I have been experiencing exactly the same issues... and I found
>>> the trick with restarting clvmd myself too. :-) It looks like the
>>> problem happens when you are using both non-clustered and clustered
>>> LVM.
>> I'm also using both clustered and non-clustered LVM on the same
>> machine and I don't have that problem... Everything is working fine.
>>
>> Best regards,
>> Nuno Fernandes
>>
>>>     I don't think you are doing anything wrong.
>>>
>>> Regards,
>>>     Daniel
>>>
>>> From: [EMAIL PROTECTED]
>>> [mailto:[EMAIL PROTECTED] On Behalf Of Collins, Kevin
>>> [Beeline]
>>> Sent: Friday, April 25, 2008 9:08 PM
>>> To: [email protected]
>>> Subject: [rhelv5-list] Clustered LVM
>>>
>>>
>>>
>>> Hi,
>>>
>>>         I am working through the process of building my first GFS
>>> installation and cluster. So far, things are going well. I have
>>> shared SAN disk that is visible to both nodes, I've got the basic
>>> cluster up and I am now starting to create my LVM structure.
>>>
>>> I spent a lot of time trying to determine why I was seeing errors
>>> similar to the following when trying to create LVs in a new VG:
>>>
>>> root# lvcreate -L 50M /dev/vgtest
>>>   Rounding up size to full physical extent 52.00 MB
>>>   Error locking on node cpafisxb: Volume group for uuid not found:
>>> IC070PzNGG68uEesi33dH902E4GeGEvwcAccZY4AdnJRkPHUL7EYJzL0Xxkg3eqV
>>>
>>>   Error locking on node cpafisxa: Volume group for uuid not found:
>>> IC070PzNGG68uEesi33dH902E4GeGEvwcAccZY4AdnJRkPHUL7EYJzL0Xxkg3eqV
>>>
>>>   Failed to activate new LV.
>>>
>>> I noticed that I did not have a /dev/vgtest on the node where I
>>> created the VG. I discovered (by googling the error message above)
>>> that another person had seen a similar problem and resolved it by
>>> restarting the clvmd service. I did the same thing and the problem
>>> was resolved - I don't see that error when creating LVs in that VG,
>>> and I now have a /dev/vgtest and associated files (only on the node
>>> I created it on).
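>>>
>>> (By "restarting the clvmd service" I mean the init script, i.e.
>>> "service clvmd restart", run on the node that showed the error.)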
>>>
>>> Now, I created a new VG, tried to create a new LV within it, and saw
>>> the same error! I see in the clvmd man page that there is a "-R"
>>> option, which sounds like what should be done instead of restarting
>>> clvmd. I tried it, but no luck - same error and no /dev/ directory
>>> created. Restart clvmd, instant fix.
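>>>
>>> (For the record, that refresh attempt was just "root# clvmd -R",
>>> which per the man page asks every running clvmd in the cluster to
>>> reload its device cache.)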
>>>
>>> So - what am I doing wrong here? I can't imagine I am supposed to
>>> restart clvmd every time I create a new VG, am I?
>>>
>>> Additionally, I can see the clustered VGs and LVs on my other node,
>>> but there are no files in /dev for the clustered VGs - is this
>>> normal? I have also restarted clvmd there with no effect.
>>>
>>> Thanks,
>>>
>>> Kevin
>>

_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list
