Re: [ovirt-users] Regression in Gluster volume code?

2015-12-17 Thread noc
On 16-12-2015 17:06, Sahina Bose wrote:
>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1292173
Thanks. Will have a look at it and amend if needed.

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Regression in Gluster volume code?

2015-12-16 Thread Sahina Bose



On 12/16/2015 09:20 PM, Joop wrote:

On 15-12-2015 12:31, Sahina Bose wrote:


On 12/15/2015 03:26 PM, Joop wrote:

On 14-12-2015 12:00, Joop wrote:

I have reinstalled my test environment and have come across an old error,
see BZ 988299, "Bad volume specification {u'index': 0, ...".

At the end of that BZ there is mention of a problem with '_' in the name
of the volume and a patch is referenced, but the code has since been
changed quite a bit and I can't tell if that still applies. It looks like
it doesn't, because I have a gluster volume with the name gv_ovirt_data01
and it looks like it gets translated to gv__ovirt__data01, and then I
can't start any VMs :-(
The weird thing is that I CAN import VMs from the export domain to this
gluster domain.


I have just done the following on 2 servers which also hold the volumes
with '_' in their names:

mkdir -p /gluster/br-ovirt-data02

ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02 --fstype xfs /gluster/br-ovirt-data02

echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02 /gluster/br-ovirt-data02 xfs defaults 1 2 >>/etc/fstab

semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02

restorecon -Rv /gluster/br-ovirt-data02

mkdir /gluster/br-ovirt-data02/gl-ovirt-data02

chown -R 36:36 /gluster/

Added a replicated volume on top of the above, started it, added a
Storage Domain using that volume, moved a disk to it, and started the
VM, works! :-)

Should I open a BZ or does someone know of an existing one?

Could you open one?


I tried, but it looks like the email from BZ isn't arriving in my mailbox :-(
I had to renew my password and haven't gotten the link yet. Creating a
new account with a different email domain didn't work either, so I'm
going to summarize what I did today.

It looks like something goes wrong in vdsm/storage/glusterVolume.py.
The volname in getVmVolumeInfo ends up containing double underscores;
then svdsmProxy.glusterVolumeInfo is called, which in the end has
supervdsmd run a CLI script, and that returns an empty XML document
because there is no volume with double underscores in its name. Running
the command that is logged in supervdsm.log confirms this too; reducing
the volname to single underscores returns a correct XML object.
My guess is that rpath = sdCache.produce(self.sdUUID).getRemotePath()
should probably return the real name that was used to connect to the
storage. In my case:
Real path entered during setup: st01:gv_ovirt_data01
What's used: st01:gv__ovirt__data01
Just doing a 's/__/_/' is a bit shortsighted, but it would work for me
since I don't use '/' when entering the storage connection above. (My
understanding is that if you want the NFS export of gluster you use the
'/', and if you want the glusterfs protocol you don't. There is a line
of code in vdsm which replaces one underscore with two AND replaces a
'/' with an underscore; going back is of course impossible if you don't
store the original.)

I hope one of the devs is willing to create the BZ with this info, and
hopefully someone has a solution to this problem.


https://bugzilla.redhat.com/show_bug.cgi?id=1292173



Regards,

Joop



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Regression in Gluster volume code?

2015-12-16 Thread Joop
On 15-12-2015 12:31, Sahina Bose wrote:
>
>
> On 12/15/2015 03:26 PM, Joop wrote:
>> On 14-12-2015 12:00, Joop wrote:
>>> I have reinstalled my test environment and have come across an old error,
>>> see BZ 988299, "Bad volume specification {u'index': 0, ...".
>>>
>>> At the end of that BZ there is mention of a problem with '_' in the name
>>> of the volume and a patch is referenced, but the code has since been
>>> changed quite a bit and I can't tell if that still applies. It looks like
>>> it doesn't, because I have a gluster volume with the name gv_ovirt_data01
>>> and it looks like it gets translated to gv__ovirt__data01, and then I
>>> can't start any VMs :-(
>>> The weird thing is that I CAN import VMs from the export domain to this
>>> gluster domain.
>>>
>> I have just done the following on 2 servers which also hold the volumes
>> with '_' in their names:
>>
>> mkdir -p /gluster/br-ovirt-data02
>>
>> ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02 --fstype xfs /gluster/br-ovirt-data02
>>
>> echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02 /gluster/br-ovirt-data02 xfs defaults 1 2 >>/etc/fstab
>>
>> semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02
>>
>> restorecon -Rv /gluster/br-ovirt-data02
>>
>> mkdir /gluster/br-ovirt-data02/gl-ovirt-data02
>>
>> chown -R 36:36 /gluster/
>>
>> Added a replicated volume on top of the above, started it, added a
>> Storage Domain using that volume, moved a disk to it, and started the
>> VM, works! :-)
>>
>> Should I open a BZ or does someone know of an existing one?
>
> Could you open one?
>
I tried, but it looks like the email from BZ isn't arriving in my mailbox :-(
I had to renew my password and haven't gotten the link yet. Creating a
new account with a different email domain didn't work either, so I'm
going to summarize what I did today.

It looks like something goes wrong in vdsm/storage/glusterVolume.py.
The volname in getVmVolumeInfo ends up containing double underscores;
then svdsmProxy.glusterVolumeInfo is called, which in the end has
supervdsmd run a CLI script, and that returns an empty XML document
because there is no volume with double underscores in its name. Running
the command that is logged in supervdsm.log confirms this too; reducing
the volname to single underscores returns a correct XML object.
My guess is that rpath = sdCache.produce(self.sdUUID).getRemotePath()
should probably return the real name that was used to connect to the
storage. In my case:
Real path entered during setup: st01:gv_ovirt_data01
What's used: st01:gv__ovirt__data01
Just doing a 's/__/_/' is a bit shortsighted, but it would work for me
since I don't use '/' when entering the storage connection above. (My
understanding is that if you want the NFS export of gluster you use the
'/', and if you want the glusterfs protocol you don't. There is a line
of code in vdsm which replaces one underscore with two AND replaces a
'/' with an underscore; going back is of course impossible if you don't
store the original.)
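
To make the mangling concrete, here is a rough shell mimic of the
behaviour described above (illustration only; the real code in vdsm is
Python, and the sed expressions below just reproduce the described
'_' -> '__' and '/' -> '_' replacements):

mangle() { printf '%s\n' "$1" | sed -e 's/_/__/g' -e 's|/|_|g'; }

mangle 'st01:gv_ovirt_data01'   # -> st01:gv__ovirt__data01 (the name that ends up being used)

# The transformation is not uniquely reversible without the original:
mangle 'gv_/data'               # -> gv___data
mangle 'gv/_data'               # -> gv___data (same result from a different original path)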

I hope one of the devs is willing to create the BZ with this info, and
hopefully someone has a solution to this problem.

Regards,

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Regression in Gluster volume code?

2015-12-15 Thread Joop
On 15-12-2015 12:31, Sahina Bose wrote:
>
>
> On 12/15/2015 03:26 PM, Joop wrote:
>> On 14-12-2015 12:00, Joop wrote:
>>> I have reinstalled my test environment and have come across an old error,
>>> see BZ 988299, "Bad volume specification {u'index': 0, ...".
>>>
>>> At the end of that BZ there is mention of a problem with '_' in the name
>>> of the volume and a patch is referenced, but the code has since been
>>> changed quite a bit and I can't tell if that still applies. It looks like
>>> it doesn't, because I have a gluster volume with the name gv_ovirt_data01
>>> and it looks like it gets translated to gv__ovirt__data01, and then I
>>> can't start any VMs :-(
>>> The weird thing is that I CAN import VMs from the export domain to this
>>> gluster domain.
>>>
>> I have just done the following on 2 servers which also hold the volumes
>> with '_' in their names:
>>
>> mkdir -p /gluster/br-ovirt-data02
>>
>> ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02 --fstype xfs /gluster/br-ovirt-data02
>>
>> echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02 /gluster/br-ovirt-data02 xfs defaults 1 2 >>/etc/fstab
>>
>> semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02
>>
>> restorecon -Rv /gluster/br-ovirt-data02
>>
>> mkdir /gluster/br-ovirt-data02/gl-ovirt-data02
>>
>> chown -R 36:36 /gluster/
>>
>> Added a replicated volume on top of the above, started it, added a
>> Storage Domain using that volume, moved a disk to it, and started the
>> VM, works! :-)
>>
>> Should I open a BZ or does someone know of an existing one?
>
> Could you open one?
I will, thanks.

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Regression in Gluster volume code?

2015-12-15 Thread Sahina Bose



On 12/15/2015 03:26 PM, Joop wrote:

On 14-12-2015 12:00, Joop wrote:

I have reinstalled my test environment and have come across an old error,
see BZ 988299, "Bad volume specification {u'index': 0, ...".

At the end of that BZ there is mention of a problem with '_' in the name
of the volume and a patch is referenced, but the code has since been
changed quite a bit and I can't tell if that still applies. It looks like
it doesn't, because I have a gluster volume with the name gv_ovirt_data01
and it looks like it gets translated to gv__ovirt__data01, and then I
can't start any VMs :-(
The weird thing is that I CAN import VMs from the export domain to this
gluster domain.


I have just done the following on 2 servers which also hold the volumes
with '_' in their names:

mkdir -p /gluster/br-ovirt-data02

ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02 --fstype xfs /gluster/br-ovirt-data02

echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02 /gluster/br-ovirt-data02 xfs defaults 1 2 >>/etc/fstab

semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02

restorecon -Rv /gluster/br-ovirt-data02

mkdir /gluster/br-ovirt-data02/gl-ovirt-data02

chown -R 36:36 /gluster/

Added a replicated volume on top of the above, started it, added a
Storage Domain using that volume, moved a disk to it, and started the
VM, works! :-)

Should I open a BZ or does someone know of an existing one?


Could you open one?



Regards,

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] Regression in Gluster volume code?

2015-12-15 Thread Joop
On 14-12-2015 12:00, Joop wrote:
> I have reinstalled my test environment and have come across an old error,
> see BZ 988299, "Bad volume specification {u'index': 0, ...".
>
> At the end of that BZ there is mention of a problem with '_' in the name
> of the volume and a patch is referenced, but the code has since been
> changed quite a bit and I can't tell if that still applies. It looks like
> it doesn't, because I have a gluster volume with the name gv_ovirt_data01
> and it looks like it gets translated to gv__ovirt__data01, and then I
> can't start any VMs :-(
> The weird thing is that I CAN import VMs from the export domain to this
> gluster domain.
>
I have just done the following on 2 servers which also hold the volumes
with '_' in their names:

mkdir -p /gluster/br-ovirt-data02

ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02 --fstype xfs /gluster/br-ovirt-data02

echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02 /gluster/br-ovirt-data02 xfs defaults 1 2 >>/etc/fstab

semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02

restorecon -Rv /gluster/br-ovirt-data02

mkdir /gluster/br-ovirt-data02/gl-ovirt-data02

chown -R 36:36 /gluster/

Added a replicated volume on top of the above, started it, added a
Storage Domain using that volume, moved a disk to it, and started the
VM, works! :-)
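
(For reference, creating the replicated volume by hand would look
roughly like this; st01/st02 and the dash-only volume name are
placeholders, and the storage.owner-* options are the ones commonly set
on volumes used as oVirt storage domains:)

gluster volume create gv-ovirt-data02 replica 2 \
    st01:/gluster/br-ovirt-data02/gl-ovirt-data02 \
    st02:/gluster/br-ovirt-data02/gl-ovirt-data02
gluster volume set gv-ovirt-data02 storage.owner-uid 36
gluster volume set gv-ovirt-data02 storage.owner-gid 36
gluster volume start gv-ovirt-data02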

Should I open a BZ or does someone know of an existing one?

Regards,

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Regression in Gluster volume code?

2015-12-14 Thread Joop
I have reinstalled my test environment and have come across an old error,
see BZ 988299, "Bad volume specification {u'index': 0, ...".

At the end of that BZ there is mention of a problem with '_' in the name
of the volume and a patch is referenced, but the code has since been
changed quite a bit and I can't tell if that still applies. It looks like
it doesn't, because I have a gluster volume with the name gv_ovirt_data01
and it looks like it gets translated to gv__ovirt__data01, and then I
can't start any VMs :-(
The weird thing is that I CAN import VMs from the export domain to this
gluster domain.
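
A quick way to see the mismatch on one of the gluster servers (just a
sketch; the exact error wording may vary by gluster version):

gluster volume info gv_ovirt_data01     # the volume as it was actually created
gluster volume info gv__ovirt__data01   # the doubled name; gluster reports no such volume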

Regards,

Joop


JsonRpc (StompReactor)::DEBUG::2015-12-14 
11:55:26,487::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling 
message 
JsonRpcServer::DEBUG::2015-12-14 
11:55:26,488::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting 
for request
Thread-1285::DEBUG::2015-12-14 
11:55:26,491::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-12-14 
11:55:29,499::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling 
message 
JsonRpcServer::DEBUG::2015-12-14 
11:55:29,501::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting 
for request
Thread-1286::DEBUG::2015-12-14 
11:55:29,504::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-12-14 
11:55:32,512::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling 
message 
JsonRpcServer::DEBUG::2015-12-14 
11:55:32,514::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting 
for request
Thread-1287::DEBUG::2015-12-14 
11:55:32,517::task::592::Storage.TaskManager.Task::(_updateState) 
Task=`79726805-eec0-4719-b758-51749f067295`::moving from state init -> state 
preparing
Thread-1287::INFO::2015-12-14 11:55:32,517::logUtils::48::dispatcher::(wrapper) 
Run and protect: repoStats(options=None)
Thread-1287::INFO::2015-12-14 11:55:32,518::logUtils::51::dispatcher::(wrapper) 
Run and protect: repoStats, Return response: 
{u'f7453ce7-3aca-4ee3-98c3-827ce3e001d6': {'code': 0, 'version': 3, 'acquired': 
True, 'delay': '0.000659959', 'lastCheck': '6.5', 'valid': True}, 
u'4b083e6a-d588-4735-bf50-833033c24e6b': {'code': 0, 'version': 0, 'acquired': 
True, 'delay': '0.000797126', 'lastCheck': '6.5', 'valid': True}, 
u'6d00b190-a3a8-4e75-b718-4ba680d7a228': {'code': 0, 'version': 0, 'acquired': 
True, 'delay': '0.000403222', 'lastCheck': '6.7', 'valid': True}}
Thread-1287::DEBUG::2015-12-14 
11:55:32,518::task::1188::Storage.TaskManager.Task::(prepare) 
Task=`79726805-eec0-4719-b758-51749f067295`::finished: 
{u'f7453ce7-3aca-4ee3-98c3-827ce3e001d6': {'code': 0, 'version': 3, 'acquired': 
True, 'delay': '0.000659959', 'lastCheck': '6.5', 'valid': True}, 
u'4b083e6a-d588-4735-bf50-833033c24e6b': {'code': 0, 'version': 0, 'acquired': 
True, 'delay': '0.000797126', 'lastCheck': '6.5', 'valid': True}, 
u'6d00b190-a3a8-4e75-b718-4ba680d7a228': {'code': 0, 'version': 0, 'acquired': 
True, 'delay': '0.000403222', 'lastCheck': '6.7', 'valid': True}}
Thread-1287::DEBUG::2015-12-14 
11:55:32,518::task::592::Storage.TaskManager.Task::(_updateState) 
Task=`79726805-eec0-4719-b758-51749f067295`::moving from state preparing -> 
state finished
Thread-1287::DEBUG::2015-12-14 
11:55:32,518::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-1287::DEBUG::2015-12-14 
11:55:32,518::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-1287::DEBUG::2015-12-14 
11:55:32,519::task::990::Storage.TaskManager.Task::(_decref) 
Task=`79726805-eec0-4719-b758-51749f067295`::ref 0 aborting False
Thread-1287::DEBUG::2015-12-14 
11:55:32,521::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-12-14 
11:55:32,537::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling 
message 
JsonRpcServer::DEBUG::2015-12-14 
11:55:32,539::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting 
for request
Thread-1288::DEBUG::2015-12-14 
11:55:32,541::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-12-14 
11:55:35,596::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling 
message 
JsonRpcServer::DEBUG::2015-12-14 
11:55:35,598::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting 
for request
Thread-1289::DEBUG::2015-12-14 
11:55:35,600::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
ioprocess communication (15961)::DEBUG::2015-12-14 
11:55:35,830::__init__::411::IOProcess::(_processLogs) Queuing request 
(slotsLeft=20)
ioprocess communication (15961)::DEBUG::2015-12-14 
11:55:35,831::__init__::411::IOProcess::(_processLogs) (531) Start request for 
method 'statvfs' (waitTime=54)
ioprocess communication (15961)::DEBUG::2015-12-14 
11:55:35,831::__init__::411::IOProcess::(_processLogs) (531) Finished request 
for method 'statvfs'